Okay, so we have Markus Reineke from Bochum giving us a talk on a subject very close to my heart, which is quiver moduli. Thank you very much. Thanks for the invitation, and so the title is, as stated, quiver moduli, and since this is the basic notions seminar I will not give precise statements of theorems but just try to tell you the idea of what quiver moduli are from the very beginning, starting from linear algebra. It will be more important to work out examples and to show you what quiver moduli feel like, what they might be good for and what they might do for you, than to show you the precise theorems. At the end I will hint at some connections to various things, and so, okay, let's start. So, basic notion: what is a moduli space at all? A moduli space, and that's not a definition, it's just a slogan, but it might help, is a geometric object, some kind of space, encoding the continuous parameters of a classification problem. That's the general slogan of what a moduli space is, and usually constructing such a moduli space is quite difficult. The name moduli comes from work of Riemann, where he claimed that in trying to parameterize Riemann surfaces you need three times genus minus three continuous parameters, so-called moduli; but to actually construct a moduli space for Riemann surfaces is difficult. Another situation is vector bundles on a curve. These moduli spaces were constructed in the 1960s, very important objects, but difficult to construct. We will just choose another class of problems: we will look at classification problems from linear algebra, and we will see that we know many of them, and many of the solutions, already; and with these linear algebra problems it is totally easy to construct the moduli spaces. So this serves as a toy example, or just a huge playground, for moduli spaces and for moduli theory without being too technical.
If you are not interested in linear algebra classification problems at all then please stay seated anyway; there will be links to other things later, but you have to go through the linear algebra first. So today: classification problems from linear algebra. We will see that in a first-year linear algebra course we actually get to know many quiver moduli spaces, but under a different name. So I want to try to make a table; I don't know if it works on the blackboard, but in general the table of examples should be the following. Here I will state the classification problem, here I will discuss the parameters appearing in the classification, the parameters or invariants, and these I will divide into the discrete and the continuous parameters. Then I want to give you sometimes a normal form for the solution of the problem, we will try to define moduli spaces somehow from scratch, and at the end I will tell you what the quiver is. That's what the table is supposed to look like, and I hope it works; that should leave me enough space. Okay, so let's do our first classification problem in linear algebra. Classify rectangular matrices up to row and column operations; you are allowed to perform all the usual row operations and all the usual column operations. So classify m by n matrices, say over the complex numbers, up to row and column operations. And then it is known from linear algebra what kind of normal form you can achieve: an r by r identity matrix in the corner, and the rest of the matrix is zero, totally easy. And there is exactly one discrete invariant, namely the rank r of the matrix. So all matrices of the same rank are equivalent; matrices are equivalent under row and column operations if and only if their ranks coincide, and of course their sizes. Do we need a moduli space in this case?
No, we don't, because a moduli space is something encoding the continuous parameters of the classification problem, and here we don't have any. Okay, so let's make things more difficult by classifying such matrices only up to row operations. Same problem, but only row operations allowed. And of course we know the solution, because the solution is at the heart of solving linear systems of equations: it's the reduced echelon form of a matrix. So the normal form looks like a reduced echelon form, where you have these ones, zeros above the ones, zeros below, and then all sorts of free parameters in between. What are the invariants or parameters of this classification problem? We have a discrete invariant, the echelon form, the shape of the stairs. But then we get all sorts of continuous parameters, namely all these free entries. Okay, so that's the solution in terms of invariants or parameters and in terms of a normal form, but what could be a moduli space, a space encoding all these parameters? You have lots of continuous parameters, and the number of continuous parameters changes whenever the echelon shape changes, but nevertheless one can form a space out of all of them, and the idea is the following. Let me make one assumption: assume that the rank of the matrix is the maximum possible, m, the number of rows. Then you can look at the row space of the matrix, the span of all the rows, which is an m-dimensional subspace of n-dimensional space, and this subspace doesn't change under row operations. This subspace we consider as a point in the Grassmannian Gr(m, n), the Grassmannian of m-dimensional subspaces of an n-dimensional space. So this Grassmannian qualifies as a moduli space: it feels like a space encoding the continuous parameters, and we will make precise later that this is indeed the moduli space we aim to define. All right, second problem solved. Now let's go to the third problem, which is one square matrix up to conjugation.
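Before moving on, the row-space invariance behind this Grassmannian can be checked by machine. Here is a small sketch in Python (my own illustration, not from the talk): the reduced echelon form, computed over exact fractions, does not change when the matrix is multiplied on the left by an invertible matrix, so it is a canonical representative of the point of Gr(m, n).

```python
from fractions import Fraction

def rref(M):
    """Reduce M (a list of rows of Fractions) to reduced row echelon form."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        # find a pivot in column c at or below row r
        piv = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]          # normalize the pivot to 1
        for i in range(rows):
            if i != r and M[i][c] != 0:             # clear the rest of the column
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

F = Fraction
A = [[F(1), F(2), F(3)], [F(2), F(5), F(7)]]        # a full-rank 2 x 3 matrix
E = [[F(1), F(1)], [F(3), F(4)]]                    # an invertible row operation, det = 1
EA = [[sum(E[i][k] * A[k][j] for k in range(2)) for j in range(3)] for i in range(2)]
# same row space, hence the same normal form
assert rref(A) == rref(EA)
```

The free entries of the echelon form (here the last column) are exactly the continuous parameters; the echelon shape is the discrete one.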
So the next problem is one n by n matrix up to conjugation. And again we all know a normal form from linear algebra, the Jordan canonical form. Again you have discrete and continuous parameters. The discrete parameters are the sizes of the Jordan blocks, and the continuous parameters for conjugation are the eigenvalues of the matrix, counted with their multiplicities, so that an n by n matrix always has n eigenvalues. So now we can easily come up with a moduli space. It should encode the continuous parameters, which are the eigenvalues. The eigenvalues are of course not quite uniquely determined, only up to order. So the moduli space is C^n, that is, n-tuples of eigenvalues, but considered up to reordering, so we factor out the action of the symmetric group. We have to make sense of that, and we will see that in the world of algebraic geometry this quotient is actually isomorphic to complex n-space again; we will see how this works. Okay, so that was our third classification problem. Let me do some more. Oh yes, this one still fits into the table. Now look at not just one matrix but two square matrices A and B up to conjugation, but not conjugation by different matrices: by the same matrix, simultaneous conjugation. This is unsolved. A normal form is completely unknown; the moduli spaces are unknown. Simultaneous conjugation means a pair of matrices (A, B) is equivalent to any conjugate (g A g^{-1}, g B g^{-1}), where g is an invertible matrix in GL_n. You are conjugating both by the same matrix, and now imagine you try to solve this by hand. Okay, you bring the first matrix into Jordan canonical form, but then of course the symmetry group breaks down, and you are only allowed to transform the second matrix by those g which do not destroy the Jordan canonical form. This leads to many, many case-by-case considerations, and it's totally hopeless.
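Going back a step, the claim that C^n up to reordering is again isomorphic to C^n can be made concrete: the isomorphism sends an eigenvalue tuple to the elementary symmetric functions, that is, to the coefficients of the characteristic polynomial. A quick sketch (illustration only, not from the talk):

```python
from itertools import combinations, permutations
from math import prod

def char_poly_coeffs(eigs):
    """e_k(eigs) for k = 1..n: the coefficients, up to sign, of the
    characteristic polynomial. Symmetric in the eigenvalues, hence a
    well-defined point of C^n / S_n."""
    n = len(eigs)
    return [sum(prod(c) for c in combinations(eigs, k)) for k in range(1, n + 1)]

eigs = (2, -1, 3)
pt = char_poly_coeffs(eigs)
# every reordering of the eigenvalues maps to the same point of C^3
assert all(char_poly_coeffs(p) == pt for p in permutations(eigs))
```

So the quotient map C^n -> C^n / S_n is realized as an honest polynomial map to affine n-space, which is the algebro-geometric statement alluded to above.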
n equals one is okay, n equals two is okay; I will show you the partial results which are known. But in general, for general n, the normal form is unknown and the moduli spaces are unknown. At the end of the talk I will show you one general result about these moduli spaces, and concerning continuous parameters let me just say for the moment: there are many, many. Okay, let me make this more precise. I call pairs of matrices equivalent if one results from the other by a simultaneous conjugation, and then I want to study the set of equivalence classes. That's the classification problem. We will make a space out of it a bit later, but at the moment it's just a set-theoretic thing. We have pairs of matrices, we call two such pairs equivalent if they are related like this, and we want to study the set of equivalence classes. Yeah, this is really an unsolved problem of linear algebra, which might seem surprising, but it is. So let me show you some partial results. For n equals one, conjugation of one by one matrices does nothing, so the moduli space is simply C^2. n equals two is also known. In general there are many, many invariants, but for n equals two, for pairs of two by two matrices, you need exactly five invariants, namely the trace of the matrix A, the determinant of A, the trace of B and the determinant of B. That's quite canonical: if you know the trace and the determinant of a two by two matrix you know its characteristic polynomial, so you know the eigenvalues. But then there is one more mixed invariant, which is the trace of A times B, and it's a non-trivial but nice exercise in linear algebra to check that these five invariants are independent, they can assume any values, and they almost separate the equivalence classes, up to a little bit which is inseparable by algebraic means.
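The invariance of these five quantities under simultaneous conjugation is easy to check directly; here is a small sketch in exact rational arithmetic (my own illustration, not part of the talk):

```python
from fractions import Fraction as F

def mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def tr(X):
    return X[0][0] + X[1][1]

def det(X):
    return X[0][0] * X[1][1] - X[0][1] * X[1][0]

def inv(X):  # inverse of an invertible 2x2 matrix
    d = det(X)
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

def five_invariants(A, B):
    """tr A, det A, tr B, det B and the mixed invariant tr(AB)."""
    return (tr(A), det(A), tr(B), det(B), tr(mul(A, B)))

A = [[F(1), F(2)], [F(0), F(3)]]
B = [[F(2), F(1)], [F(1), F(1)]]
g = [[F(3), F(1)], [F(5), F(2)]]            # invertible, det = 1
conj = lambda X: mul(mul(g, X), inv(g))     # simultaneous conjugation by g
assert five_invariants(A, B) == five_invariants(conj(A), conj(B))
```

Independence and the "almost separation" are of course the non-trivial parts of the exercise; the invariance itself is just the cyclicity of the trace.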
For n equals three, the best moduli space one can come up with is, let me check, ten: a ten-dimensional space which is generically a 2-to-1 cover of an affine ten-space. But starting from n equals four, nobody knows. There is a general result about which invariants you need, and actually there are many invariants you can come up with, and you need many of them to classify the equivalence classes. Here is a general recipe for producing invariants: take a certain power of A, multiply by a certain power of B, multiply by a certain power of A, multiply by a certain power of B, keep going on, and at the end take the trace. If you conjugate A and B by the same invertible matrix, then in the middle all the g g^{-1} pairs eat each other up, and you know that the trace is invariant under conjugation. And there is an extremely beautiful result of Procesi saying that if you take all of these invariants, they separate the equivalence classes; more precisely, they almost separate the equivalence classes. This is a result of Claudio Procesi from 1976, and the proof is classical invariant theory: it uses all the invariant theory of the 19th century and brings it into the 20th and 21st century. It's extremely instructive, this proof. Which one? Yes? Yes. So the general formula for the dimension of the moduli space is just n squared plus one. Okay, but whether there is a direct relation to this moduli space, I'd have to think about it. Okay, so one more. No, let's keep it. Okay, let me do one more. I don't have a further line here, so let me write it here. One more example, and then I will come to the general picture behind all that. Let's study lines in the plane, lines through the origin in the plane. Here is the moduli problem: four lines through the origin in C^2, or if you want, four points in P^1. For four points in P^1 we think of projective geometry, and there is a continuous parameter, which is the cross ratio.
The cross ratio is a continuous parameter, and so the moduli space you come up with is a CP^1, carrying this one continuous parameter. That's classical projective geometry. What about five? If you have five lines in C^2 then you can play the following game: leave out one of the lines and take the cross ratio of the remaining four. You get, all in all, five different cross ratios, one for each choice of four of the five lines. So you get five cross ratios, but these five invariants are not at all independent; there are many relations between them, and if you write down all these relations you will find that, in the end, inside the five-dimensional space of cross ratios you get a surface which is a del Pezzo surface, dP5, meaning the second Betti number is five. So that's one, and actually the first non-toric, del Pezzo surface which appears naturally in this game. If you want, you can continue to six lines, seven lines and so on, but these spaces no longer have any names. So that ends the examples, and now what we want is a unified language for all of these problems, and that is the language of quiver representations. We want to find a common language for such problems, and actually for many, many more problems. There are of course interesting classification problems in linear algebra which I haven't stated here because they don't fit into the quiver world; that's my very personal choice. For example, classifying tensors, non-linear things, higher tensors up to some symmetry, is not part of the quiver language which I will introduce. But anyway, it's already something: a rich family of interesting examples. So now I have to go through some quiver notation, and I have to omit part of it, unfortunately. What should I omit? Let me omit the moduli spaces.
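The cross ratio and its projective invariance, which drive the four and five lines examples, can also be verified numerically. A sketch in exact arithmetic (one common convention for the cross ratio; conventions differ, and the encoding is mine, not from the talk):

```python
from fractions import Fraction as F

def cross_ratio(z1, z2, z3, z4):
    """Cross ratio of four distinct points of P^1 (finite, pairwise distinct)."""
    return (z1 - z3) * (z2 - z4) / ((z1 - z4) * (z2 - z3))

def mobius(a, b, c, d):
    """Action of an invertible 2x2 matrix on P^1, restricted to finite points."""
    return lambda z: (a * z + b) / (c * z + d)

pts = [F(0), F(1), F(2), F(5)]
g = mobius(F(2), F(1), F(1), F(3))   # determinant 2*3 - 1*1 = 5, so invertible
# the cross ratio is unchanged by the projective transformation
assert cross_ratio(*pts) == cross_ratio(*[g(z) for z in pts])
```

This is exactly the statement that the cross ratio descends to a function on the moduli space of four points of P^1 up to PGL_2.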
So let's keep in mind what appeared: Grassmannians, affine spaces, something we don't know, P^1 and dP5. Okay, and the normal form we can maybe also omit. So now I'll come up with this unified language, and this unified language of quivers actually goes back to Gabriel around 1970, and we will later see a theorem of Gabriel which is important for this theory. So what is a quiver? A quiver Q is just an oriented graph. You might ask why we rename oriented graphs; well, that was just Gabriel's choice, because he loved to rename things. A graph is something consisting of vertices and edges, and a quiver is a graph where you put arrows in, that's it. So a quiver is just an oriented graph, and our standard notation will be: Q_0 is the set of vertices and Q_1 is the set of arrows. A vertex I will usually denote by i, j, k and so on, and an arrow I will always write with its source and target, alpha from i to j. It's an oriented graph, so you have arrows instead of edges. Now what is a representation? A representation of Q means the following: to any vertex attach a vector space, so for every vertex i choose a complex vector space V_i, and for every arrow alpha from i to j choose a C-linear map f_alpha from V_i to V_j. So vertices are sent to vector spaces, arrows to linear maps. There is some more fancy notation for this: take the oriented graph and build a category out of it; then a representation is just a functor from this category to vector spaces. That's the language preferred by many people, but this is the linear algebra formulation. Okay. All the examples we had so far were made up of vector spaces and linear maps, so that somehow fits. Now, the important thing in all these classification problems was the notion of equivalence, and that was the subtle thing, because, as you remember from the first examples, the objects were the same, just rectangular matrices.
But we had different notions of equivalence. So how do we encode the equivalence? Let's take two such representations, V and W, and I want to define when they are equivalent, or isomorphic. This V, what does it consist of? V is made up of all the vector spaces V_i and all the linear maps f_alpha: one vector space for every vertex, one linear map for every arrow. And W is likewise made up of vector spaces W_i and linear maps g_alpha. Now I define what it means for them to be equivalent or isomorphic as follows: it should be possible to get W from V by base change. So there should exist a system of isomorphisms phi_i from V_i to W_i such that the following holds. Whenever you have all these data, you can form natural commutative diagrams: V_i, V_j and f_alpha form one row, part of the structural data of the representation V; W_i, W_j and g_alpha form another row, part of the structure of the representation W. The maps phi_i go from the top row to the bottom row, and they should intertwine the two structures: this natural diagram should commute for all arrows. Let me write it down: f_alpha followed by phi_j equals phi_i followed by g_alpha, for all arrows alpha. Let's assume that everything is finite: both the set of vertices and the set of arrows are finite. Sometimes, for some purposes, it is good to work with locally finite situations, where you have infinitely many vertices but only finitely many arrows between any two vertices; but for a general introduction, finite is okay. So that's the notion of equivalence. And what it means is the following. This was the intrinsic language, but now imagine you choose bases everywhere. If you choose bases in all four spaces, then f_alpha is just a matrix of a certain size, and g_alpha is a matrix of the same size.
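In code, this data and the commuting-square condition look as follows; a minimal sketch with a hypothetical encoding (arrows as pairs of vertices, a representation as a dict of matrices), not any standard library:

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(X):  # inverse of an invertible 2x2 matrix of Fractions
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

# Hypothetical encoding: arrows of the quiver as pairs (i, j), a representation
# as a dict sending each arrow to a d_j x d_i matrix.
arrows = [(0, 1)]                                  # the two-vertex quiver i --> j
V = {(0, 1): [[F(1), F(2)], [F(3), F(4)]]}         # representation V: the map f_alpha
phi = {0: [[F(1), F(1)], [F(0), F(1)]],            # base change at vertex 0
       1: [[F(2), F(0)], [F(1), F(1)]]}            # base change at vertex 1
# The equivalent representation W is then forced: g_alpha = phi_j . f_alpha . phi_i^{-1}
W = {(i, j): matmul(matmul(phi[j], f), inv2(phi[i])) for (i, j), f in V.items()}

# the square commutes: phi_j . f_alpha == g_alpha . phi_i for every arrow
for (i, j), f in V.items():
    assert matmul(phi[j], f) == matmul(W[(i, j)], phi[i])
```

Solving the commuting-square equation for g_alpha, as done here, is exactly the base-change formula that reappears below as a group action.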
And the phi_i encode the column operations and the phi_j encode the row operations: you are just performing a base change. And so this really gives you all the examples we considered before. Now let me write down the quivers. Almost nothing from this table survives except the formulation of the problems. So now the exercises; I'll give you the solutions so you can do the exercises easily. I will now write down the quiver situations for the classification problems I wrote down. Matrices up to row and column operations: that's the following situation. The quiver has just two vertices and one arrow. At these two vertices, this indicates that I choose an n-dimensional vector space here and an m-dimensional vector space there. Then a linear map from C^n to C^m corresponds precisely to an m by n matrix, and being allowed to do row and column operations corresponds precisely to the possible phi_i and phi_j which I have here. So this quiver situation encodes this problem. Let me skip the second for a second and come to the third. One matrix up to conjugation is also easy to model: you take the quiver which has one vertex and an arrow from this vertex to itself; this is not forbidden. Okay, so let's see why this corresponds to the problem of classifying a square matrix up to conjugation. The n indicates that you have an n-dimensional vector space C^n here, and a linear map from C^n to itself, so just a linear operator. Now the equivalence relation is much tighter than in the first example, because you only have one phi_i, one invertible matrix: one base change for the single vertex. So you have just one matrix responsible for the notion of equivalence, which then boils down to conjugation. Now, for pairs of matrices and simultaneous conjugation, you might already guess that you just take the quiver with a single vertex and two loops.
Okay, let me also give you the quiver for the situation of four lines in C^2: you have a quiver with five vertices and four arrows from the four outer vertices to one central one, and you put ones at the outer vertices and a two at the center. Again, let me give you a hint for the exercise: what is a representation of this quiver? We have one-dimensional spaces at the outer vertices and a two-dimensional space at the center, and the arrows are then represented by maps from C to C^2. A linear map from C to C^2 is just one vector in C^2, and this vector we represent by the line through the origin it spans. The notion of equivalence at the outer vertices means you are allowed to act by an automorphism of the one-dimensional space, so you can rescale each of the four vectors; and the two at the center means that we consider these four lines in C^2 up to the symmetries of C^2, up to the general linear group. This is the four lines problem, which is solved by the cross ratio. For five lines you just add another outer vertex, and this gives the del Pezzo surface. And now the most tricky part of this exercise is the second problem, so let me give it a start. This is the difficult exercise: we only want to act by row operations, and for this we somehow have to artificially break the symmetry in the quiver. It turns out that this is achieved by the following situation: take a quiver with two vertices and n parallel arrows from the left vertex to the right one, with a one on the left and an m on the right. The arrows are represented by linear maps from a one-dimensional to an m-dimensional space, so each is again just a vector in C^m, and we have an n-tuple of vectors in C^m. These n-tuples we put into the columns of a matrix. Now, on the left there is just a scaling action, by an automorphism of a one-dimensional vector space, which is harmless; and on the right you have GL_m, acting by row operations. So this indeed models the second case; that's the tricky part. Okay, sorry, yes: so this is an m by n matrix.
And the arrows are represented by vectors in m-space; we put them into the columns of the matrix, and the group acts on the rows. Sorry, I said columns: the vectors form the columns, and the group acts on the rows. No, we are only allowed to rescale by one common factor; if you want to rescale every column individually, then you take this other quiver, yeah. So here it's just rescaling by a common C^* factor, which is irrelevant, and there you are transforming by GL_m, which gives the row operations. Okay, so basically we have succeeded in translating all these linear algebra problems into the common language of quiver representations, into this unified language. And now we still want to define these moduli spaces, these spaces encoding all the equivalence classes. That's what I'll show you next. So now I will produce a geometry out of this quiver context. What I actually have in mind is algebraic geometry, but we don't need any notions of algebraic geometry for this talk; we can just live with, say, complex manifolds. So, moduli spaces: construction. In any of these classification problems we have obvious discrete parameters, namely the sizes of the matrices, which of course don't change in any of the problems. What do the sizes of the matrices correspond to? They correspond to the dimensions of the vector spaces which make up the problem. So, to get rid of the discrete invariants and really come up with a moduli space encoding the continuous invariants, we should first fix the dimensions. This is what we do first: fix a dimension vector d, consisting of a dimension d_i for every vertex, so a tuple of non-negative integers indexed by Q_0. Now the vector spaces are fixed: we have vector spaces V_i of dimension d_i, and modulo choice of bases we can assume them to be just C^{d_i}. Okay. So now the only remaining data for a quiver representation are all these linear maps.
Let's put all these linear maps, all possible choices of such tuples of linear maps, into one big space, namely the following space, R_d(Q): we take a direct sum over all arrows, and for each arrow alpha from i to j we take the set of C-linear maps from a d_i-dimensional space to a d_j-dimensional space, or if you want, just the set of d_j by d_i matrices over the complex numbers. That's the space of all possibilities: every point in this space is a quiver representation, so a point here corresponds to a representation of Q with these given dimensions. Great. But now we want to encode the notion of equivalence, equivalence being the existence of these phi_i, and this we can encode in terms of the action of a group. The group is one of the nicest groups one can think of, G_d: just a product of general linear groups, the product over all vertices i of GL(d_i) over the complex numbers. And now I claim that this naturally acts on the space, namely by base change in each of these vector spaces. The formula for the action is the following. An element of the group is a tuple (g_i) of automorphisms; it acts on a tuple (f_alpha), indexed by the arrows, of linear maps, and the formula is g_j f_alpha g_i^{-1} if alpha is an arrow from i to j. Okay, that's the formula for the group action. Double check with reality: in the first case we have one linear map, and we act by row and column operations, and you clearly see it here: if we represent the linear map f_alpha by a matrix, then g_j acts by multiplication from the left, which performs the row operations, and g_i^{-1} acts by multiplication from the right, which performs the column operations. Or if we take the one-loop case, then there is only one g, and we get the conjugation action, g times the matrix times g^{-1}. And you can work it out in all the other cases. So this is a very nice action of an actually reductive complex algebraic group, and R_d(Q) is just an affine space.
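The action formula fits in a few lines of code; a sketch with the same hypothetical encoding as before (arrows as pairs of vertices, a point of R_d(Q) as a dict of matrices). For the one-loop quiver the formula specializes to conjugation, so for example the trace is preserved:

```python
from fractions import Fraction as F

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(X):  # inverse of an invertible 2x2 matrix of Fractions
    d = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return [[X[1][1] / d, -X[0][1] / d], [-X[1][0] / d, X[0][0] / d]]

def act(g, rep):
    """The G_d-action on R_d(Q): (g . f)_alpha = g_j f_alpha g_i^{-1}
    for each arrow alpha from i to j."""
    return {(i, j): matmul(matmul(g[j], f), inv2(g[i])) for (i, j), f in rep.items()}

# one-loop quiver: a single vertex 0 with an arrow (0, 0); the action is conjugation
A = [[F(1), F(2)], [F(3), F(5)]]
g = {0: [[F(2), F(1)], [F(1), F(1)]]}
B = act(g, {(0, 0): A})[(0, 0)]
# conjugation preserves the trace (and indeed every trace word)
assert B[0][0] + B[1][1] == A[0][0] + A[1][1]
```

For the two-vertex quiver the same `act` reproduces simultaneous row and column operations, with g at the target acting from the left and g inverse at the source from the right.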
So this is just an affine space, and this is a very nice reductive complex algebraic group, and the action is whatever you want it to be: continuous, differentiable, algebraic, sufficiently nice. Okay. So now we have encoded this setting of quiver representations, together with the notion of equivalence, in a group action on a space. And now the observation, and if you think about it long enough you see that it is really tautological: the orbits of this group action, the G_d-orbits in R_d(Q), correspond naturally to the equivalence classes of quiver representations. Let's have another look at the diagram defining the equivalence relation; there we saw the equation phi_j composed with f_alpha equals g_alpha composed with phi_i. Let's solve this equation for g_alpha: everything is invertible, so g_alpha is nothing else than phi_j f_alpha phi_i inverse. And now I'm a bit confused because the indices don't seem to match. They do. Relief. Okay. So, just rephrasing: this notion of equivalence means that quiver representations are equivalent if and only if there exist these phi_i such that g is produced from f by this base change, and that's exactly the group action we wrote down. So the orbits tautologically correspond to the equivalence classes. Now we want a space encoding the parameters of the classification problem; by this observation, we want a space of orbits. Well, nothing easier than that: what we want is an orbit space, R_d(Q) divided by the group G_d. This makes perfect sense in set-theoretic topology: you just take the set of equivalence classes and endow it with the quotient topology coming from the natural topology on the affine space. The only problem is that this topological space is completely bad: it is non-Hausdorff in general, the points are not even closed, so none of the nice properties of a topological space survive. So we have to do better.
In general it is a big problem how to define orbit spaces, but luckily for us, in this quiver situation, the situation of a reductive group acting on an affine space, everything was done, even algebraically, by Mumford with geometric invariant theory. So this leads to Mumford's geometric invariant theory, which was worked out in this quiver setting by Alastair King in 1994. The solution is the following. What makes the orbit space bad is that there are bad points, points which have many, many symmetries, which are left fixed by a large part of the group, and we simply get rid of all these points. That's the easy solution. So we define a notion of stability on the space of quiver representations, and then we only take the quotient of the stable points. We will now see what stability amounts to, and then I will tell you what Mumford's theory gives us immediately in this setting. I hesitated a bit to really give the precise definition of stability, because we will not really see it in action, but to give you a flavor of these things I have to state it. So let me define stability. First of all, you have to make a choice, and that's one of the bad points of the whole theory, that you have to make a choice to define stability. Is it a drawback or is it a virtue? I never really know. It's a very subtle choice anyway. Choose an integer-valued linear functional theta on the vertices of the quiver. That's the first step, and this choice is very, very subtle: if you make the wrong choice, all your moduli spaces will be empty. Then define the slope of a quiver representation, usually called mu, as follows. The slope mu of a representation V is defined only in terms of the discrete data, only in terms of the dimensions of the vector spaces.
Namely, you take theta applied to the tuple of dimensions of the vector spaces: put the dimensions of all the vector spaces which are involved into one vector and apply this linear form; and you divide by, say, the total dimension, the total dimension being the sum of the dimensions of all the vector spaces involved. And this we should only do for representations which are non-zero: at least one vector space should be non-zero, otherwise you have a problem defining this fraction. Okay, now the central definition: V is called theta-stable, stable with respect to this choice, if the following holds: whenever you have a proper non-zero sub-representation U, and I will tell you in a second what this is, the slope decreases, that is, the slope of U is strictly less than the slope of V. And what does sub-representation mean? It means that in every vector space V_i you have a subspace U_i, but compatible with the structural data of the representation: if you apply f_alpha to U_i, you should end up in U_j, for all arrows of the quiver. That is what is called a sub-representation. Now this condition looks really technical, and actually, if you work it out for any of the problems I explained, it usually gives you a very explicit numerical condition. Oh yes, okay, I should do an example. So let's work out the following example, which we have seen before: many, many arrows pointing to a central vertex, the number of arrows is m, and the dimension vector is one at each outer vertex and d at the center. What is a quiver representation? It's just an m-tuple of vectors in d-dimensional space, where the symmetry is now the GL_d-symmetry at the center, and we are allowed to rescale each vector individually, so it is only the point in projective space which counts.
And so, for such a tuple of vectors v_1 to v_m, the notion of stability looks as follows, working out what the numerical condition means: whenever you take a subset of this set of vectors, indexed by some index set I, then, if it is a proper subset, the dimension of its span should be strictly larger than d over m times the cardinality of I. That looks strange, and actually it is strange on purpose, but let's try to understand it qualitatively, not in the general setting but in this example. What it means qualitatively is that the relative position of these vectors to each other is not too special: if you take a certain subset, then the subspace they span is pretty large compared to the number of vectors you took to span it. So that's a genericity assumption: the vectors should be somewhat in general position. In contrast, if all the vectors line up, so that they belong to a one-dimensional space, then the span you get is always one-dimensional and the numerical condition is definitely violated. So the vectors should be somewhat in general position; that serves, hopefully, as an illustration of stability. And now we come back to our problem of constructing an orbit space. That was the naive approach, and now the non-naive approach. The theorem, and I would say this goes back to Mumford, is that if you take inside our big affine space only the stable locus, which is large, a Zariski-open set, then you can form the quotient by the group, and this exists: again, you take the set of equivalence classes and endow it with the quotient topology, but now this is a complex manifold, a connected complex algebraic manifold, of the following dimension: 1 minus the sum over all vertices i of d_i squared, plus the sum over all arrows from i to j of d_i times d_j. And this dimension formula you can almost guess.
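For this example the stability condition is easy to test by machine. A sketch with exact rank computation over the rationals (the encoding is mine, not from the talk):

```python
from fractions import Fraction as F
from itertools import combinations

def rank(rows):
    """Rank of a matrix given as a list of rows of Fractions (Gaussian elimination)."""
    M = [row[:] for row in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, len(M)):
            f = M[i][c] / M[r][c]
            M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def is_stable(vectors, d):
    """Stability for the m-subspace quiver with dimension vector (1, ..., 1; d):
    every nonempty proper subset I must span a subspace of dimension
    strictly larger than (d / m) * |I|."""
    m = len(vectors)
    return all(rank(list(I)) * m > d * k          # dim span > (d/m) |I|, cleared of fractions
               for k in range(1, m)
               for I in combinations(vectors, k))

# four pairwise distinct lines in C^2: in general position, hence stable
gen = [[F(1), F(0)], [F(0), F(1)], [F(1), F(1)], [F(1), F(2)]]
assert is_stable(gen, 2)
# two of the four lines coincide: the subset condition fails, unstable
deg = [[F(1), F(0)], [F(1), F(0)], [F(0), F(1)], [F(1), F(1)]]
assert not is_stable(deg, 2)
```

For d = 2 and m = 4 the condition unwinds to exactly the qualitative picture above: every vector is non-zero and no two of the four lines coincide.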
If you want to take this quotient, you take the dimension of the space minus the dimension of the group. This is not completely right, because there is a small C* sitting diagonally inside this group which acts trivially: if you just conjugate by the same scalar matrix everywhere, it does nothing. Hence the plus 1: this is the dimension of the space, this is the dimension of the group, and the 1 gets rid of this scalar subgroup acting trivially. First of all, I should be precise that this dimension formula only holds if the moduli space is non-empty. Well, you first have to choose this function theta so that hopefully you get a non-empty space. But even among the choices of theta where you get something non-empty, the space can change. So actually you really have a space of stability conditions, divided by walls, all sorts of hyperplanes, at which the stability changes. Inside the chambers between these walls you can perturb theta and the space doesn't change, but if you move theta through a wall, then the space changes by some complicated birational transformations. And this chamber system is not well understood in general, only in very, very special cases. So that's one of the big mysteries, and that's one of the reasons why this choice of stability is really subtle. In a particular classification problem you want to solve, you first have to make up your mind what the right choice of theta is, and you have to analyze what the stability means, which is usually a quite lengthy analysis leading you to some explicit numerical criteria. Okay, so these are the spaces. I will not need it for the purpose of this talk, but I will call this space M^theta_d(Q); to be honest, it should say theta-stable.
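As a sanity check (again not from the talk; the helper names are mine), the dimension formula is one line of code. For the star-shaped example above, m arrows into a central vertex: with m = 3, d = 2, three points on P^1 have no moduli, while m = 5, d = 2 leaves the expected two parameters:

```python
def moduli_dim(arrows, d):
    """Dimension of a non-empty quiver moduli space:
    1 - sum_i d_i^2 + sum over arrows i->j of d_i * d_j."""
    return 1 - sum(di * di for di in d) + sum(d[i] * d[j] for i, j in arrows)

def star(m, d):
    # m one-dimensional source vertices, each with one arrow into
    # a central sink vertex (index m) of dimension d
    return [(i, m) for i in range(m)], [1] * m + [d]

print(moduli_dim(*star(3, 2)))  # 0: three points on P^1 are rigid up to PGL_2
print(moduli_dim(*star(5, 2)))  # 2: five points on P^1 have two moduli
```

The formula is exactly dim(space) - dim(group) + 1, matching the heuristic just given.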
And let me remark, there usually exists a singular compactification, which is called the moduli space of semistables, but that's not so easy to define, because you are not really parameterizing equivalence classes: you have to combine several of these equivalence classes into one point. I don't need it for the purposes of this talk, but it exists, as in geometric invariant theory in general. No, it's not divisors. The properly semistable points might be in higher codimension, and you are producing really, really bad singularities. It's still normal, but you do not have good control over the codimension in which you add the boundary. So that's also not really understood. So I have six more minutes, that's great. In many talks on moduli theory, I could stop here, because usually people are happy when they can formulate a moduli problem in such a way that they get a space, and then they say, well, okay, that's the moduli space, that solves the problem, full stop. But from my point of view, I would say now the story just begins. Now we have these moduli spaces, so let's do something with them. And that's the point where the story gets a quite dramatic twist, actually, which was also surprising for me in the last 10 years or so. First of all, I definitely have to mention one theorem, and this theorem says in which situations you don't need moduli spaces, because this is also quite surprising and absolutely not clear from the definitions. I will combine several theorems: a theorem of Gabriel from 1970, a theorem of Nazarova and Roiter from 1975, a theorem of Kac from 1982, and a theorem of Schofield from 1985, which tell you what these moduli spaces look like in the cases where you can compute them. And that's really surprising. If Q, and I write absolute value, which means forget about the arrows, forget about the direction of the arrows, just take the unoriented graph.
If this unoriented graph is a Dynkin diagram or an extended Dynkin diagram, well, these are the usual diagrams A_n, D_n, E6, E7, E8 and their affine versions A_n tilde, D_n tilde, E6, E7, E8 tilde. If this is the graph, then all the moduli spaces are known: they are either empty, or a single point, or a punctured P^1, that is, P^1 minus finitely many points. No matter what the direction of the arrows is, and no matter what the dimension vector is. You can tell more precisely for which dimension vectors you get which kind of moduli space, but all the moduli spaces are known. So that means if your quiver is of Dynkin or extended Dynkin type, you don't need the theory of moduli spaces at all, because they are all known. That's not what a moduli space is good for: it should encode all the complicated continuous classification parameters, and in these cases there are no complicated parameters. It's empty, zero-dimensional, one-dimensional, and that's it. In all other cases, it is as wild as possible, which you can see from the following: the dimension of the moduli space grows quadratically with the dimension vector, like the sum of the d_i squared. That's only a rule of thumb, the precise statement is a bit more complicated, but roughly, if your dimension vector gets larger and larger, the dimension of the moduli space grows quadratically. And this is absolutely dramatic, because the moduli space encodes continuous parameters, and this means there are on the order of sum of d_i squared independent continuous parameters in your classification problem, which is totally wild. So these are the cases where the moduli spaces are known, and these are the cases in which they are totally unknown. So what can we do? What I did in the last years is to study global quantitative properties of these moduli spaces.
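This trichotomy can even be tested mechanically. A classical fact from graph spectra (not stated in the talk, but standard) is that a connected graph is of ADE Dynkin type exactly when the spectral radius of its adjacency matrix is below 2, and of extended Dynkin type exactly when it equals 2. A rough power-iteration sketch (the function names are mine):

```python
import math

def adjacency(n, edges):
    # Symmetric adjacency matrix; parallel edges add up
    A = [[0.0] * n for _ in range(n)]
    for i, j in edges:
        A[i][j] += 1.0
        A[j][i] += 1.0
    return A

def spectral_radius(A, iters=2000):
    # Power iteration; for a nonnegative symmetric matrix the norm
    # of A v converges to the spectral radius
    n = len(A)
    v, lam = [1.0] * n, 0.0
    for _ in range(iters):
        w = [sum(A[i][k] * v[k] for k in range(n)) for i in range(n)]
        lam = math.sqrt(sum(x * x for x in w))
        if lam == 0:
            return 0.0
        v = [x / lam for x in w]
    return lam

def graph_type(n, edges, tol=1e-6):
    rho = spectral_radius(adjacency(n, edges))
    if rho < 2 - tol:
        return "Dynkin"
    if rho < 2 + tol:
        return "extended Dynkin"
    return "wild"

print(graph_type(4, [(0, 1), (1, 2), (2, 3)]))          # A_4 path: Dynkin
print(graph_type(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # 4-cycle: extended Dynkin
print(graph_type(2, [(0, 1), (0, 1), (0, 1)]))          # 3 parallel edges: wild
```

The last example is the underlying graph of the quiver with three arrows between two vertices, which indeed falls into the wild cases the talk describes next.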
Quantitative properties means: compute certain numbers out of these spaces, compute topological invariants. So what is a good topological invariant? Well, let's take Betti numbers, the dimensions of the singular cohomology groups with rational coefficients. For the Betti numbers I have an explicit formula, which I don't want to write down, in a certain numerically nice case: an explicit formula in the case that the GCD of all the entries of the dimension vector is one. That looks like a very strange numerical condition, and actually it is, but it fits the strange numerical nature of the stability. So in the case that the dimension vector is not a multiple of anything else, and the stability is sufficiently generic, outside finitely many hyperplanes, then I have an explicit formula for all the Betti numbers. In general, one should not look at the Betti numbers but at something which I will definitely not define today, the so-called Donaldson-Thomas invariants. These are a generalization of the Betti numbers: in the case where the GCD of the dimension vector is one, these Donaldson-Thomas invariants coincide with the Betti numbers. And I worked on trying to compute these Donaldson-Thomas invariants, and I found many surprising connections which I didn't expect. I mean, after all, what did we try to do? We tried to solve a linear algebra problem, right? And now we will see that this somehow drifts away into a totally different direction. For example, this is related to BPS state counts in string theory models. This is also related, in very special cases, to Gromov-Witten invariants counting rational curves in projective surfaces. This is related, and this is a very recent result, to certain Ooguri-Vafa invariants of knots. And this is also related to the so-called intersection cohomology of the compactified moduli space. This is the only point where I need this compactified moduli space; there exists a singular compactification.
For singular spaces, ordinary cohomology behaves badly; you replace it by intersection cohomology, which is a better theory. And these intersection Betti numbers are also related to these Donaldson-Thomas invariants. So you get all sorts of surprising links to things which you hadn't expected from a linear algebra problem. But apparently it's all in the linear algebra. Thank you very much. Do you have any questions? What about maps between moduli spaces? Maps between the moduli spaces. In general, there are few interesting maps between these moduli spaces. One type of map I can explain, which turned out recently to be very interesting, is the following. You can try to construct maps when you change the Q, the D, or the theta. Changing the Q sometimes gives you maps, but boring ones; changing the D, you don't get any interesting maps; but changing the theta can be interesting. So, recall this brief sketch I made of the stability chamber system. If you have a theta which lies on one of these walls, where you have properly semistable points, then you can deform it to a theta tilde in a chamber. So here, on the wall, you have properly semistable points, where you really need this compactification, and there, in the chamber, you don't. In this situation, for such a deformation, you get a canonical map from the moduli space for the deformed stability to the compactified moduli space for the original stability. And quite often this is an interesting partial desingularization. Sometimes it's even a small desingularization in the sense of intersection cohomology theory, and this makes it very interesting. So that's, for example, what you get when you change the theta; when you change Q or D, usually you don't get any interesting maps. Do you want the microphone? You can shout. You can shout? Okay. Yeah, well, I can tell you the idea of the proof. I just stole the proof from moduli spaces of vector bundles.
There you have the formula of Harder and Narasimhan, or Atiyah and Bott, using the so-called Harder-Narasimhan stratification, and I just adapted this proof to the quiver situation. You get a recursive formula, and I resolved this recursion. This also exists in the theory of moduli of vector bundles: there is a resolution of the Harder-Narasimhan recursion by Zagier, and one by Laumon and Rapoport. So it's a resolution of the Harder-Narasimhan recursion, and that gives an explicit formula, which is much easier in the quiver case than in the vector bundle case. Yeah, okay. I can show you one minute, 30 seconds. So this is a very recent result, but somehow pretty funny. You take the moduli space for the following situation; actually, it's the situation which we had here, points in projective space. You take a D here, and here the number of arrows has to be 2D-1: 2D-1 arrows pointing to a central vertex, at which you have dimension D. Then you take the moduli space for the most symmetric stability, and from this moduli space you just take the Euler characteristic, the easiest invariant you can think of. And this is actually the same as the number of irreducible rational curves of degree D in CP^2 fulfilling certain incidence conditions, as usual. Let me just draw the incidence conditions in a picture: you pick 2D-1 points in general position, and you pick one line with a fixed point on it. Then you are counting rational curves of degree D which pass through the 2D-1 points and which are tangent to order D to this line at this point. So that's a reasonable incidence condition, where you can expect to find a finite number of solutions, so a Gromov-Witten invariant. And this Gromov-Witten invariant was computed recursively, with tropical methods, by Fomin and Mikhalkin in 2009. My proof consists in verifying that these numbers fulfill the same recursion. So that's one of these coincidences; I mean, there's no a priori reason.
This is just the linear algebra problem of 2D-1 vectors in D-space up to symmetry. But that's the kind of result which pops out. Oh, that's definitely interesting, but usually very difficult. Okay, so you can model this by a more complicated quiver, but with relations. That would be one idea. Let's take the easiest case, let's take this quiver, but instead of vector spaces you put K[T]-modules here, modules over the polynomial ring in one variable. This means you can encode the action of the variable T in a loop, but then you have a commutativity relation. So this is a loop alpha_1, this is a loop alpha_2, this is an arrow beta, and then you are just looking at those representations where the commutativity relation is fulfilled, because this map should be a map of K[T]-modules: it should respect the loops, and that's this intertwiner relation. In this way you can boil it down to a usual quiver problem, where you just work with finite-dimensional vector spaces if you want. Well, special linear transformations means that you fix a certain volume form. So you have to take a volume form as an additional datum, but that's a higher-rank tensor, and that can't be modeled in the quiver language, unfortunately. Whenever it comes to other classical groups like SL or SO or Sp, you need an additional datum which is a higher-rank tensor, and that's not part of the quiver language. People try to model such situations using a generalized quiver language, but you don't get terribly far with that. That's the problem. So this quiver language really seems to be adapted only to these purely linear situations. Higher-rank tensors can be formulated, but none of the techniques actually works, which is a pity, yes. One more question. The usual bilinear form? Well, because it's bilinear, you have a rank-2 tensor, and it can't really be formulated in this language. In this special case, there are some ways to boil it down to a quiver situation, but not totally.
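To make the K[T]-module idea above concrete (my own toy sketch, not from the talk): a finite-dimensional K[T]-module is just a vector space with an endomorphism alpha giving the action of T, and the intertwiner relation for the arrow beta is a plain matrix identity:

```python
def matmul(A, B):
    # Naive matrix product, enough for this toy check
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def is_module_map(alpha1, alpha2, beta):
    """Check the commutativity relation beta . alpha1 = alpha2 . beta,
    i.e. that beta respects the two loops encoding the T-action."""
    return matmul(beta, alpha1) == matmul(alpha2, beta)

# T acting by the same nilpotent Jordan block on both sides:
a = [[0, 1], [0, 0]]
print(is_module_map(a, a, [[1, 0], [0, 1]]))   # True: identity intertwines
print(is_module_map(a, a, [[0, 1], [1, 0]]))   # False: coordinate swap does not
```

Representations of the quiver with two loops and one arrow, subject to this one relation, are exactly maps of K[T]-modules, which is the reduction described above.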
So it's another class of totally interesting invariant-theoretic problems, but they don't fit into the quiver language. It's a very specialized language for a special class of problems, and of course I was being suggestive with my examples. But for those problems you get pretty far, because you get many explicit formulas. Excellent. Thank you. Thank you.