OK, so let's first summarize what we did yesterday, also to fix notation again. We had this motivic ring, which will be our coefficient ring for all the computations we will perform today. I wrote this as R_mot: the Grothendieck ring of complex algebraic varieties — the free abelian group on all varieties up to isomorphism, modulo the cut-and-paste relations — then localized at the motive of the affine line, which we want to be invertible, and of which we want a square root. And we want to have naturally denominators (1 − L^i)^{-1} for i ≥ 1. This is the ring where the virtual motives of varieties live. OK, that was the one thing I explained yesterday. The second thing was: if Q is a quiver, we can attach to it a category of finite dimensional representations, which has very nice homological properties. One homological property was even something like a Serre duality, which I only briefly mentioned. But the most important thing was that we have an explicitly computable homological Euler form on the category of representations. The category is hereditary, so of global dimension 1, and the homological Euler form is something we know explicitly. Maybe I will redefine it today. And then finally, we considered the stacks of isomorphism classes. The moduli stack of isomorphism classes is nothing else than a certain affine space with an action of a nice algebraic group; we form the quotient stack and take the disjoint union over all dimension vectors. So this is the quotient stack of an affine space by a product of general linear groups. And using this, I then defined the motivic generating function, which I will define in a minute. But that's the summary of what we did yesterday. And this is always an irritating moment for me in a lecture series, that I summarize what I did in the previous talk and it's just like five minutes. 
And then I always think, OK, I could have done this in five minutes, but it took me an hour — so what went wrong? Well, it was the explanations, trying to give you a feeling for what all this is. So I hope the whole hour yesterday wasn't useless for you. OK, anyway, now we come to a formal definition of the motivic generating series in the motivic quantum space. It's a generating series — some of it feels like a zeta function — but very important is also the coefficient ring in which this generating series actually lives. So I will define the following. This will be the motivic quantum space, and it is essentially a formal power series ring, with as many variables as you have vertices in the quiver, the coefficient ring being our motivic ring R_mot, and everything slightly twisted by the quiver structure. As we'll see now, it is a complete R_mot-algebra — complete because we are looking at formal power series — with basis the monomials t^d, where d is a dimension vector. The dimension vector is the one discrete invariant of a quiver representation, encoding the dimensions of the vector spaces involved. And the multiplication is defined as follows — this is something like a quantum version of a formal power series ring: t^d times t^e is (−L^{1/2}) raised to the anti-symmetrized Euler form of d and e, times t^{d+e}. That's our base ring for all the computations which will follow. So it's formal power series in as many variables as we have vertices in the quiver, and instead of writing down a product of t_i to the d_i over all vertices, we formally work with such monomials t^d, which is just shorthand. — It's a minus? — Yes, it's a minus. — It's the anti-symmetrization of the Euler form? — It's the anti-symmetrization, yes. Exactly, the anti-symmetrized Euler form. And this we take as multiplication. 
— So it's like... — Excuse me? — And it's associative? — It's an associative, unital algebra. The unit is just t^0. And it's complete with respect to the augmentation ideal, the ideal generated by the t^d for d nonzero; everything is complete with respect to this ideal, so we have the formal power series which we'll use. — There's one question. — Yes, please. — Can you motivate the introduction of this twist? — Well, I can try. The most tricky point is why you anti-symmetrize the Euler form. This anti-symmetrized Euler form is like the honest Euler form of some 3-Calabi-Yau situation. So you are somehow imitating a 3-Calabi-Yau situation with a quiver, which is hereditary, of global dimension 1. That's philosophically the explanation. But the practical feature is just that the formulas are as smooth as possible with this twist. — Is it like a quantum torus? — Yes, this is why I call it the motivic quantum space; for a quantum torus I would like to have inverses of the t_i, so it should be over some Laurent polynomial ring. Hence: motivic quantum space. Very pragmatically, the motivation is that the formulas are very smooth when you work in this ring. For many years I personally worked with a different ring, twisted just by the Euler form, not by the anti-symmetrized Euler form, and I always had problems with formulas that were not really nice. This way the formulas are easier to remember, although you always have this ugly little minus square root of the Lefschetz motive inside — this you cannot get rid of. OK, so now let me finally introduce this motivic generating function, and then explain why it somehow knows the quiver. So A_Q is defined as a sum over all dimension vectors. 
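To make the twisted multiplication concrete, here is a minimal computational sketch of the quantum-space product. Everything here is my own illustration: the lecture leaves the sign and order conventions of the twist implicit, so I fix one choice, t^d · t^e = ν^{⟨d,e⟩ − ⟨e,d⟩} t^{d+e}, with ν standing for −L^{1/2} and coefficients stored as Laurent polynomials in ν.

```python
from collections import defaultdict

# Sketch of the motivic quantum space product, for a quiver given by the
# matrix E of its Euler form, <d, e> = sum_ij d_i E[i][j] e_j.  A coefficient
# is a Laurent polynomial in nu = -L^{1/2}, stored as {nu-exponent: integer};
# a series is {dimension vector (tuple): coefficient}.  Names and the sign
# convention of the twist are mine, chosen for illustration.

def euler_form(E, d, e):
    """Homological Euler form <d, e> with respect to the matrix E."""
    return sum(d[i] * E[i][j] * e[j]
               for i in range(len(d)) for j in range(len(e)))

def multiply(f, g, E, max_total=10):
    """Twisted product: t^d * t^e = nu^{<d,e> - <e,d>} t^{d+e},
    truncated modulo dimension vectors of total dimension > max_total."""
    h = defaultdict(lambda: defaultdict(int))
    for d, cf in f.items():
        for e, cg in g.items():
            s = tuple(x + y for x, y in zip(d, e))
            if sum(s) > max_total:   # working in the completion
                continue
            twist = euler_form(E, d, e) - euler_form(E, e, d)
            for p, a in cf.items():
                for q, b in cg.items():
                    h[s][p + q + twist] += a * b
    return {d: dict(c) for d, c in h.items() if any(c.values())}
```

For instance, for the Kronecker quiver (two vertices, two arrows, E = [[1, −2], [0, 1]]), the monomials t^{(1,0)} and t^{(0,1)} no longer commute: their products differ by a power of ν, exactly as the anti-symmetrized Euler form dictates.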
We take the virtual motive of the space of representations divided by the virtual motive of the base-change group, times the formal variable t^d. So this is the motivic quantum space, and the series is the motivic generating series. Instead of an A, you could also write a Z; it really feels like a zeta function or a partition function, whatever you prefer — it is really of this kind, as we will now see, because that's the series we will try to factor. — Excuse me, what was the virtual motive? — The virtual motive, yes: you normalize the motive by an appropriate half power of the Lefschetz motive, for X irreducible, so that, for example, Poincaré duality manifests itself as invariance under exchanging L and L^{-1}. This was the example we saw for projective space. That was the twist which is necessary, and we will actually use this twist now and make the motivic generating series more explicit. Yes, let's do this. That's maybe the perfect point to formulate a little lemma — and you know the proof of this lemma if you have done the exercise I gave you yesterday, which was to compute the motive of the group GL_n. So let's compute this thing here. — So this motivic ring does not depend on the quiver, but the function does, right? — No. We will see after this calculation that the function depends on the symmetrized Euler form and the ring depends on the anti-symmetrized Euler form. So altogether, the datum of this series in this ring recovers the quiver; we will see this in a second. The multiplication in the ring really depends on the anti-symmetrized Euler form, and the function only depends on the symmetrized Euler form, as we will now see when I formulate this lemma. OK, so the lemma is just an explicit calculation of A_Q. 
This R_d(Q) — let me recall the definition from yesterday, why not: R_d(Q) is just the direct sum, over all arrows from i to j in the quiver, of the spaces of linear maps from a d_i-dimensional to a d_j-dimensional vector space. So this is an affine space of some dimension, and its motive is just a power of the Lefschetz motive. The group G_d, which acts on this, is a product of general linear groups over the vertices, so its motive is a product of motives of general linear groups, which we calculated yesterday in the exercise. If you plug the result of yesterday's exercise into this, you see that we can rewrite A_Q as a sum over all d in N^{Q_0}, where the numerator is (−L^{1/2}) to the power minus the Euler form of (d, d), times t^d, and the denominator is a product of Pochhammer symbols in which you plug in the inverse of the Lefschetz motive: for each vertex i, the product (1 − L^{-1})(1 − L^{-2}) ⋯ (1 − L^{-d_i}). So this is something like an L-hypergeometric series, as in q-hypergeometric series, except that our quantum parameter is now the Lefschetz motive. OK, and this lemma is really a direct calculation with the motives of this affine space and this group. From this we can now draw a remark: the datum of the series and the ring knows the quiver. The motivic quantum space knows the anti-symmetrized Euler form, because it's in the definition of the multiplication. The motivic generating series only involves the quadratic form defined by the Euler form — but from the quadratic form you can recover the symmetrized bilinear form by the usual polarization trick. 
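Yesterday's exercise — the motive of GL_n and its Pochhammer form entering the lemma — can be checked numerically by evaluating L at prime powers, where the motive simply counts points over a finite field. The function names are mine, for illustration only.

```python
from fractions import Fraction
from math import prod

# The motive of GL_n from yesterday's exercise, checked numerically by
# evaluating L at prime powers q (where [GL_n] counts GL_n(F_q), i.e.
# ordered bases of F_q^n).

def motive_GL(n, L):
    """[GL_n] = prod_{k=0}^{n-1} (L^n - L^k)."""
    return prod(L**n - L**k for k in range(n))

def motive_GL_factored(n, L):
    """The same motive rewritten as L^{n^2} prod_{i=1}^n (1 - L^{-i}):
    the Pochhammer form that appears in the denominators of the lemma."""
    L = Fraction(L)
    return L**(n * n) * prod(1 - L**(-i) for i in range(1, n + 1))

# e.g. n = 2, L = 2: (4 - 1)(4 - 2) = 6 = 2^4 * (1 - 1/2) * (1 - 1/4)
```

This is exactly the bookkeeping behind the lemma: the power of L pulled out of [G_d] combines with the motive of the affine space R_d into (−L^{1/2})^{−⟨d,d⟩}, leaving only the Pochhammer denominators.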
So A_Q knows the quadratic form, and after polarizing it knows the symmetrization. So both together know the whole Euler form of the quiver, because we can decompose it in the standard way into a symmetric and an anti-symmetric part. And from the Euler form we can directly recover the quiver, because the Euler form just encodes the vertices and the arrows. So this answers the question from yesterday of how much of the quiver is seen by all this: A_Q itself only sees the symmetrization, but the important thing is to have it in this ring, and then we really recover the whole Euler form. OK, so time for first examples — examples where you really don't see all this quiver machinery. Yes, Fabian? — When you say that it knows the quadratic form: when you take a coefficient of this power series, you want the exponent of the L part, right? But the coefficient ring is probably not a unique factorization domain or something, so how can you extract this? — Well, take the coefficient of t^d and develop it into a formal power series in L^{-1}; the lowest-degree coefficient is this one, and from its exponent you can read off the Euler form, hence the quadratic form. I mean, "knows" really means you can just read it off from the definition: how much of the quiver is contained in the definition of the quantum space and the motivic generating series. OK, now we come to two classical examples. There's a question? Yes, please. — Is there some q-difference equation satisfied by A_Q? — Yes, yes, there is. I once used it in another version, with different twists. But yes, you can characterize it by some nice q-difference equation. And we will see other q-analysis business popping up in a minute. 
For example, taking the logarithmic q-derivative plays a role here and leads to Hilbert schemes; we'll see this in a few minutes. So that's the right keyword: this brings you into q- or L-analysis, and you can think about all possible tools from q- or L-analysis which you might want to apply to this series to study it. OK, so: example. But I think I need a larger blackboard for this. In this example I will do the case of the quiver which is just a vertex, and of a vertex with a single loop, because these correspond to classical things. Namely, let us recall the so-called q-binomial theorem — a theorem which exists in two versions, luckily for us. Let me first write down the classical version. You take the following hypergeometric series: the sum over all d of t^d divided by the Pochhammer symbol (1 − q)(1 − q^2) ⋯ (1 − q^d). And this summation factors into a nice product: the product over i from 0 to infinity of 1/(1 − q^i t). This is usually proved using partition combinatorics — on the left-hand side you see a generating function for partitions by weight — and that is one way to prove it. Another nice identity, which also goes under the name of q-binomial theorem, is: if you put the quadratic term q^{d(d−1)/2} into the numerator, so the sum of q^{d(d−1)/2} t^d over the same Pochhammer, then this equals the product over all i from 0 to infinity of (1 + q^i t). And now if you think, well, this might generalize — maybe I can put some other power of q in the numerator — then number theory people will tell you no, that's a completely different story. If you put not q^{d(d−1)/2} but q^{d^2} in the numerator, then you are in the realm of the Rogers–Ramanujan identities and all sorts of really difficult number theory. 
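Both infinite identities are limits of Gauss's finite q-binomial theorem, ∏_{i=0}^{M−1}(1 + q^i t) = Σ_{d=0}^{M} q^{d(d−1)/2} [M choose d]_q t^d, which can be verified exactly by polynomial arithmetic — a small sketch of my own (letting M → ∞ recovers the second identity on the board; the first follows by similar manipulations):

```python
from collections import defaultdict

# Gauss's finite q-binomial theorem, checked exactly.  A polynomial in q is
# a dict {q-exponent: integer coefficient}.

def gauss_binomial(m, d):
    """Gaussian binomial [m choose d]_q via the Pascal-type recursion
    [m,d] = [m-1,d-1] + q^d [m-1,d]."""
    if d < 0 or d > m:
        return {}
    if d == 0 or d == m:
        return {0: 1}
    out = defaultdict(int, gauss_binomial(m - 1, d - 1))
    for e, c in gauss_binomial(m - 1, d).items():
        out[e + d] += c
    return {e: c for e, c in out.items() if c}

def product_side(m):
    """Expand prod_{i=0}^{m-1} (1 + q^i t) as {t-exponent: q-polynomial}."""
    series = {0: {0: 1}}
    for i in range(m):
        new = defaultdict(lambda: defaultdict(int))
        for d, poly in series.items():
            for e, c in poly.items():
                new[d][e] += c              # the "1" term of (1 + q^i t)
                new[d + 1][e + i] += c      # the "q^i t" term
        series = {d: {e: c for e, c in p.items() if c} for d, p in new.items()}
    return series

# check the identity for small m
for m in range(1, 7):
    lhs = product_side(m)
    for d in range(m + 1):
        shift = d * (d - 1) // 2
        rhs = {e + shift: c for e, c in gauss_binomial(m, d).items()}
        assert lhs.get(d, {}) == rhs
```

The shift d(d−1)/2 in the comparison is exactly the quadratic numerator of the second identity, which is why that identity — and not an arbitrary power of q — admits a product factorization.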
So there are precisely these two values of the numerator for which you have such a nice factorization. OK, from this it follows quite easily that we can factor the generating series — I have to read this off, because I have to be extremely careful about signs in this whole theory. For the quiver which is just a single vertex, corresponding to the second identity, A_Q is the product over all i from 0 to infinity of (1 − L^{−i−1/2} t); and for the quiver with just one loop, corresponding to the first identity, A_Q equals the product over i from 0 to infinity of 1/(1 − L^{−i} t). OK, so a good piece of advice for getting into this motivic invariants business is to do this calculation — everything is here. For example, let's look at the one-loop quiver: its Euler form is identically 0, because you get a contribution d^2 for the vertex minus d^2 for the loop. So the numerator term vanishes, and you have just this quotient; then you rearrange, pulling enough powers of L out, and you arrive here. It's really recommended to do the calculation. It's completely elementary, but you will see that you have to be extremely careful about all the signs. And this is a typical phenomenon in this theory of motivic invariants: all the signs and these ugly little half powers really are important, so you should do this calculation. We will need this later. There's one more case of a connected quiver for which you can write down such a product factorization of the motivic generating function in elementary fashion. The third case is this quiver. And if you know about quiver representations: these three — the trivial quiver, the one loop, and this one — are the only three symmetric connected quivers which are not wild. 
Because this is an Ã_0, this is an Ã_1, and this is an A_1. All other symmetric connected quivers are wild, and it's really a tame–wild phenomenon that you can write down the factorization. So there exists an explicit factorization for this third case too, which I don't want to recall now; I will maybe recall its categorification in the last talk. And OK, these are the only elementary examples of these motivic generating functions. — What does wild mean? — What does wild mean? Oh, OK, good point. — This is minus? — Surprisingly, it's minus. Although in the classical identities you have the distinction between the minus and the plus, here it is minus both times, due to this funny little twist by minus the square root of the Lefschetz motive. Wildness — OK, let me just check the time. OK: a category C — abelian, finite length, a very nice category, like modules over some algebra — is called wild (to be precise, strongly wild, but let me not explain this distinction) if you can embed the category of representations of a free algebra into it: if there exists a faithful embedding of the category of representations of the free algebra in two variables into C. It means the classification of isomorphism classes in the category is at least as difficult as classifying representations of the free algebra. And it is known, for example, that if A is any finite dimensional algebra, you can embed its representation category into the representations of the free algebra. So having solved the classification problem for the free algebra, you would have solved the classification problem for arbitrary algebras. So this is some kind of recursive thing, and that's what wild is supposed to mean. Geometrically, you could say objects appear in arbitrarily high dimensional moduli — equivalently, moduli are of arbitrarily high dimension; moduli of objects, not of all representations at once. 
For example, if you look at smooth projective curves and the category of coherent sheaves, then you have two cases which are not wild, namely P^1 and elliptic curves. For P^1, the classification of coherent sheaves is discrete: everything is a direct sum of torsion sheaves and line bundles O(n). For elliptic curves you have Atiyah's classification, and the moduli are harmless — the moduli of bundles are basically only the elliptic curve itself. But starting in genus 2, you have moduli spaces of coherent sheaves of arbitrarily high dimension. That's called wild. And, for example, such an explicit embedding of representations of the free algebra into coherent sheaves on a smooth projective curve was really used by Seshadri in studying the birational geometry of the moduli spaces. So this implicitly appeared in vector bundle theory. — Markus, we have a question in the chat: for a tame category, is the dimension of the moduli of representations always bounded by 1? — Well, you have to be precise about which moduli you consider. Usually you consider stable objects, and if you restrict to stables, then for a tame algebra you only get one-dimensional moduli. Yes, indeed. That is the famous tame–wild dichotomy in finite dimensional algebra theory: either your algebra is wild — moduli of arbitrarily high dimension, and you can embed the free algebra — or you are tame, which means the moduli spaces are only one-dimensional, maybe with many, many different irreducible components, but only one-dimensional. — Where can we find this equivalence? — Sorry? — Where can we find this equivalence? — Ha, OK, I will try to find a good reference. There are many technicalities around this; there really is a distinction between being wild and being strongly wild, and what I call wild here is really strongly wild. But OK, I'll look for some nice references. 
— I also have maybe a slightly frivolous question. I look at these two functions, and I think: bosonic partition function and fermionic partition function. Why should I think of the point as being bosonic and the loop as being fermionic? — OK, well, it's an accident, actually; there's nothing behind this, I think. Namely, what you can do is exchange L and L^{-1} in the generating function, and this changes the Euler form — to what? To the diagonal form delta minus the Euler form. Now usually, if you take the Euler form of a quiver and instead consider delta minus the Euler form, you no longer have the Euler form of a quiver, because negative numbers of arrows don't exist. The only exception is that this interchanges the trivial quiver and the one-loop quiver. So it's an accident. I was always hoping for some boson–fermion principle. We will see this boson–fermion thing again when we categorify all this in the cohomological Hall algebras, because then we really get free bosons and free fermions as the cohomological Hall algebras. But it's just a coincidence in this very special case. No more questions? OK, let me check the time. Yes, OK, wonderful. So let's actually start working with this generating function A_Q. It's always very interesting for me to see that usually you — meaning the audience of a lecture series — are more interested in the qualitative aspects, like this wildness thing, whereas I'm really into the quantitative aspects: I really want to do some hardcore calculations with this series, which I like and the audience usually doesn't like so much. So that's always a conflict, and we have to navigate through it. But I'm always happy if you ask lots of questions and I can explain different things; that's actually fine, and if I then have to skip some of the dirty calculations, that's fine too. OK, let me give you something halfway between qualitative and quantitative. 
OK, now this is all about factorization, part one. Factorization just means I want to do multiplicative things with A_Q — for example, factor it into a product. We have seen in these two examples that we can factor it into an infinite product, and we will later see how to do this in general, but first I want to motivate something else we can do. This is a formal power series whose constant term is 1: the constant term is the coefficient of t^0, and for t^0 you just get 1. Any formal power series with constant term 1 is invertible. So A_Q is not only an element of the motivic quantum space, it's even an invertible element, and we can look at its inverse. So here's a theorem — and later, when we have seen how to define DT invariants, this theorem is exactly, on a very formal level, the DT/PT correspondence, although I can't call it DT/PT at the moment. It says the following. Let's take this motivic series, not in the variable t, but with the formal variable t twisted a little bit: I twist t by (−L^{1/2})^n, where n is a vector — I will explain this notation in a second. And let me multiply it by its inverse with a slightly different twist, (−L^{1/2})^{−n} t. I don't want to write this as a fraction, because I'm still working in a non-commutative ring — the motivic quantum space is non-commutative because of the twist by the anti-symmetrized Euler form — so I shouldn't write it as a fraction. If you compute this product, you get a motivic generating series for Hilbert schemes: the sum over d of the virtual motive of the Hilbert scheme of the quiver, times t^d. That's the formula. Now I have to explain all the terms, and actually I will explain the idea of the proof, which is essentially really simple. 
The rest is just dirty calculations in this motivic ring. If you know Tom Bridgeland's papers on motivic Hall algebras, this is what he calls the quotient identity in the motivic Hall algebra. So let me first explain the terms. What does it mean to plug in not t, but (−L^{1/2})^n t? Well, try to guess the definition — your first guess is the correct one. Assume f(t) is a formal power series with coefficients a_d, the sum over d of a_d t^d, where d is always a vector of length Q_0. Then f((−L^{1/2})^n t) is defined as the sum over d of the coefficient a_d times (−L^{1/2})^{n · d} times t^d. Here −L^{1/2} is our usual variable; that's always the standard variable here. So this twist in the variable t means: you take the coefficient of t^d and twist it by (−L^{1/2})^{n · d}, where n and d are both vectors of length Q_0 and n · d is just the naive scalar product, the sum of the products of the entries. OK, so that's a way to twist a formal power series in many variables, and that's what we do twice in defining the left-hand side, using the fact that A_Q is actually invertible in this ring. And then we get a generating series for Hilbert schemes — and I have to explain what Hilbert schemes are. Again, think about coherent sheaves on a curve, and think about how you define the moduli spaces of vector bundles on curves. This you usually do by GIT: first you find a large enough Hilbert scheme which parameterizes all your semi-stable bundles of fixed rank and degree. 
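To have the statement in one place, the substitution rule and the theorem can be written out as follows — this is a transcription of the spoken formulas, with the order of the two factors and all sign conventions as described verbally, so treat them as one consistent choice among several:

```latex
% The twist of the formal variable, for f(t) = \sum_d a_d t^d:
\[
  f\bigl((-L^{1/2})^{n}\, t\bigr)
  \;:=\;
  \sum_{d} a_d \,(-L^{1/2})^{\,n\cdot d}\, t^d ,
\]
% and the theorem (the formal "DT/PT correspondence"):
\[
  A_Q\bigl((-L^{1/2})^{n}\, t\bigr)\cdot
  A_Q\bigl((-L^{1/2})^{-n}\, t\bigr)^{-1}
  \;=\;
  \sum_{d}\,\bigl[\mathrm{Hilb}_{d,n}(Q)\bigr]_{\mathrm{vir}}\; t^d .
\]
```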
And then you mod out by some further structure group, SL, and do geometric invariant theory. This kind of Hilbert scheme is precisely what we need here. So, to define the Hilbert scheme: if we have such a dimension vector n, we have an associated projective representation of our quiver Q. More precisely, the quiver Q has one projective indecomposable representation for each vertex, usually called P_i, and you take the direct sum of the P_i to the n_i; that's what you call P_n. We will not really see these projectives, and we don't really need to, but there is a projective representation, and it will play the role of O(1) in the vector bundle case. In the vector bundle case, to realize all semi-stable bundles of fixed rank and slope, you present them all as quotients of some large power of O(1) or O(n) or whatever. And that is what we do here. I define Hilb_{d,n}(Q) as a set as follows: it is the set of all presentations of a representation of dimension vector d as a quotient of the fixed projective representation P_n, up to a natural notion of equivalence, which you can already guess — two such presentations are equivalent if there is a diagram intertwining them. That's what you would call a Hilbert scheme in the coherent sheaf setup, and that's what we call a Hilbert scheme here in the quiver setup. And this exists as an algebraic variety, using GIT: you can realize it as a set of stable points in some large vector space, and then you mod out by the structure group. And now we have this nice identity. — This is the full Quot scheme, isn't it? — Yes, yes, the full Quot scheme; we're not taking care of any further numerical invariants like Hilbert polynomials. To call it the Quot scheme would be the better analogy, yes, that's right. 
We're performing a very simple algebraic operation here, and in terms of q-analysis you would call it a logarithmic L-derivative — that would be the slogan. In q-analysis, the logarithmic q-derivative involves f(qt) divided by f(t): you take a twisted version of your formal series and divide by the series itself. This is the analogue of that. OK, so that's our first identity using this ring. And what is important about it is that the left-hand side is something very formal — A_Q is defined in terms of quotient stacks, which don't have a direct geometric meaning — but the right-hand side is totally explicit: it's really about motives of actual varieties. In particular — just to finish the thought — on the left-hand side you are not a priori allowed to specialize L to 1. We talked about motivic measures yesterday: one motivic measure was the virtual Hodge polynomial, which is always fine, but taking the Euler characteristic requires specializing L to 1, because the Euler characteristic of the affine line is 1. And a priori you are not allowed to do this if you localize, as we do in our motivic ring, at terms like 1 − L^i. So we are usually not allowed to take Euler characteristics, and you can really see this from the explicit form of the motivic generating series: better not specialize L to 1 here, because of those denominators. But taking this logarithmic L-derivative somehow mysteriously cancels all the denominators, and what you're left with is these Hilbert schemes — honest motives of varieties, of which you can take the Euler characteristic. And these are very interesting numbers. So the left-hand side is completely formal and the right-hand side is something concrete, and that's the surprising thing: you do very formal things in this motivic ring, but you get out actual geometry. 
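The q-analysis slogan can be made precise as follows — one common normalization of the q-derivative (conventions vary between texts):

```latex
% The q-derivative and its logarithmic version:
\[
  (D_q f)(t) \;=\; \frac{f(qt)-f(t)}{(q-1)\,t},
  \qquad\text{so that}\qquad
  1 + (q-1)\,t\,(D_q \log f)(t) \;=\; \frac{f(qt)}{f(t)} .
\]
% The identity above is the L-analogue: a twisted copy of A_Q multiplied by
% the inverse of another twisted copy, with q replaced by powers of -L^{1/2}.
```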
And that we will now see in factorization part two, in another case. But first, Fabian's question. — What does it mean to take the Lefschetz motive to the power of the dimension vector n? That's not a standard notation, is it? — Yeah, OK, this is the notation I defined here. This is really a formal definition: the power only makes sense in this formal substitution into the variable t, which is itself an abuse — there is no single variable t; there are variables t_i for the vertices of the quiver. But the substitution is defined as twisting the coefficients of the formal series by minus the square root of the Lefschetz motive raised to n · d, where this is the naive scalar product of the two dimension vectors. So it is really just a formal definition, made to arrive at smooth formulas and to make the analogy to q-analysis a little clearer. OK, so that was the first instance of this factorization business; let's do a second one. This formula will reappear in a second. Usually, when you want to define moduli spaces, you have to choose a notion of stability. In factorization part one we were quite lucky that we didn't have to choose a stability — it's somehow implicit in defining the Hilbert scheme. But now, in factorization part two, we will involve a stability. The usual modern formulation of stability is via a central charge function; I will use old-fashioned slope stability, because this may be closer to the coherent sheaf intuition which some of you might have in mind. So let me assume we are given a slope function mu from nonzero dimension vectors to the reals. Axiomatically, it should be something that looks like the slope function for vector bundles, which is of the form degree divided by rank: mu(d) = theta(d)/kappa(d), where theta and kappa play the roles of degree and rank. So they should be linear functions on the Grothendieck group. 
They should be linear on dimension vectors: theta and kappa are real-valued linear functionals on the dimension vectors of the quiver. And of course you need one more condition: the denominator should never be 0 for a nonzero dimension vector, so we require that kappa maps N^{Q_0} minus 0 into the positive reals. So: whatever is formally necessary to define something which behaves like a slope function — that's our abstraction of a slope function. And if you take theta and kappa as the real and imaginary parts of a complex-valued function — theta plus i times kappa — then you have what is usually called a central charge function; that is the modern formulation of stability. But I will keep the old-fashioned slope function. OK, just as an aside. Now that we are given a slope function, we have a notion of semi-stability of quiver representations. As usual, a representation V of Q is called semi-stable (respectively stable) if the slope of any nonzero proper sub-representation is less than or equal to (respectively strictly less than) the slope of the representation itself. This is exactly the notion of semi-stability you have seen for vector bundles or coherent sheaves in whatever setup, just abstracted by not choosing the slope function a priori. And of course, if you make the wrong choice of theta and kappa, this notion might be trivial: it might happen that every representation is semi-stable, or that no representation at all is semi-stable. All the subtlety is in choosing these linear functions for a given quiver — that's a completely different story, which we'll not touch upon. We just want to prove a result which does not use any special feature of the slope function. 
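The abstract slope function above is simple enough to write down directly — a minimal sketch, with names and the example values of theta and kappa my own:

```python
from fractions import Fraction

# Abstract slope function: theta and kappa are linear functionals on
# dimension vectors, kappa positive on nonzero vectors, and
# mu(d) = theta(d) / kappa(d).  The central charge would be theta + i*kappa.

def make_slope(theta, kappa):
    def mu(d):
        assert any(d), "slope is only defined for nonzero dimension vectors"
        num = sum(Fraction(t) * x for t, x in zip(theta, d))
        den = sum(Fraction(k) * x for k, x in zip(kappa, d))
        assert den > 0, "kappa must be positive on nonzero dimension vectors"
        return num / den
    return mu

# Example: theta like a degree, kappa = total dimension playing the rank.
mu = make_slope(theta=[1, 0], kappa=[1, 1])
# As a slope should be, mu is constant on rays: mu(k*d) = mu(d).
```

Note that linearity of theta and kappa is exactly what makes mu constant on rays and makes the seesaw inequalities for sub- and quotient representations work.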
It should just be true for whatever slope function you have, even if it produces a trivial notion of semi-stability. All right? OK. So finally, inside R_d, the space of all representations, you have a semi-stable locus R_d^{sst}: the set of all representations which are semi-stable. And because semi-stability is an open condition, this is a Zariski-open subset: you are just trying to avoid sub-representations whose dimension vectors violate the slope inequality, and avoiding certain sub-representations is an open condition. OK? The final ingredient: now that we have this relative version of the representation space, we can define a relative version of the motivic generating function. So if x is any fixed real slope, we can define the x-semi-stable motivic generating function — no, this deserves a larger space, sorry — which is A_q, but now x-semi-stable, of t. And you can already guess what it is: namely, you take the virtual motive not of the whole representation space but just of its semi-stable part, and you divide by the same structure group. OK? But now we have to bring the concrete slope into the business: we do the summation only over those dimension vectors whose slope is x, that is, d ∈ μ⁻¹(x), together with d = 0. All right? And then the result is what is now called a wall-crossing formula, and it is somehow the prototype of all wall-crossing formulas you can encounter in this theory. And I will tell you what it formally amounts to, namely almost nothing. [Question about the summation over d ∈ μ⁻¹(x).] Yes, OK. So I should write this in a different way — equivalent, but different — because this was not nicely written.
So I would like to take the sum only over those dimension vectors whose slope is x, because I want to do everything locally in the slope. But then I have a problem: the slope is only defined for non-zero dimension vectors, and I definitely want my motivic generating series to have constant term 1, because I want to do multiplicative things. So let me just formally write 1 plus the sum over all non-zero dimension vectors of this fixed slope. That is maybe a more honest way of writing it down. OK, and then the result is the wall-crossing formula, and it is a big advantage of this motivic ring that it has an extremely smooth formulation: A_q is the product over all reals x of these local series. OK. And? Ordered product. Ordered. It is an ordered product, ordered by decreasing slope. And while you have to think a few minutes about well-definedness of ordered products over the reals, obviously, the important thing is that all the factors on the right-hand side are power series starting with 1, and everything is graded by the dimension vector. And then, if you think about it for a minute, you can easily see that such an ordered product over the reals, in descending order, is actually well-defined. Yes? [Question: doesn't the slope only take rational values?] No, because I allow θ and κ to be real-valued, a priori. Yeah? Do you gain anything by doing that? Yes, because later on I would like to make a genericity assumption, and sometimes, to make things as generic as possible, I want these to be real-valued — so that, for example, the fiber of μ over a fixed slope is just one ray. And if I just make these reals independent over the rationals, then that is definitely fine. So I gain a little bit. OK. Time is almost over, but let me just show you the proof of this. (That's not my terminology — well, OK.)
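For reference, one way to typeset the definition and the wall-crossing formula just stated (a reconstruction of mine; the exact twist normalization hidden in the virtual motives, and the superscript notation, are convention choices):

```latex
A_q^{(x)}(t) \;=\; 1 \;+\; \sum_{\substack{d \neq 0 \\ \mu(d) = x}}
  \frac{\bigl[R_d^{\mathrm{sst}}\bigr]_{\mathrm{vir}}}{\bigl[G_d\bigr]_{\mathrm{vir}}}\, t^d,
\qquad
A_q(t) \;=\; \prod_{x \in \mathbb{R}} A_q^{(x)}(t),
```

where the product is an ordered product, taken in decreasing order of the slope x.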
Yeah, before explaining the proof, let me show you what the wall-crossing aspect is. The wall-crossing aspect is not this identity itself. On a fixed category of representations of the quiver, you can look at different notions of stability: you could have two different slope functions giving you really different stables or semi-stables. But this formula tells you that the ordered product is always the same: the ordered product of these series with respect to μ is the same as the ordered product with respect to μ'. So in stability space — the space of all possible stabilities — you have many walls, marking the transitions where changing the stability function really changes the set of stable or semi-stable objects. And it means that crossing a wall in stability space might drastically change the class of semi-stable representations, but certain things are unchanged, namely these ordered products, because they all evaluate to the same motivic generating function. That is where the name wall-crossing comes from. And, formally — two minutes — the proof is equivalent to the existence of the Harder-Narasimhan filtration: every object V admits a unique so-called Harder-Narasimhan filtration. And this is incredibly strong. I mean, if you look at things like the Jordan-Hölder filtration, it is not unique: the subquotients of the Jordan-Hölder filtration are unique up to isomorphism and permutation, but here it is the filtration itself which is unique. And if you define it correctly — if you index the terms of the filtration by the reals in a tricky way — then it is even functorial: you get a functorial filtration. So: there exists a unique HN filtration 0 = V_0 ⊂ V_1 ⊂ ... ⊂ V_s = V, defined by two properties. First property: all subquotients V_i/V_{i-1} are semi-stable, for your fixed slope function, of some slope.
And second property: the slopes of the subquotients are strictly decreasing. This already appears in the work of Harder and Narasimhan on calculating the number of points of moduli spaces of vector bundles on curves. And it is really an incredibly rigid structure on the category, because it is a filtration which you can attach functorially to a representation. OK. And this basically proves the motivic identity: namely, these decreasing slopes are what you see in the decreasing order of the product. Everything you have to do is stratify your representation space: R_d is a union of strata of fixed Harder-Narasimhan type, where the Harder-Narasimhan type just records the dimension vectors of the subquotients. This is a finite stratification by locally closed subsets, so you get a corresponding identity in the Grothendieck ring of varieties: the motive of R_d is just the sum of the motives of the strata. And the strata are — up to an associated-bundle construction, where you induce from a certain parabolic subgroup up to the group G_d — vector bundles over products of semi-stable loci for smaller dimension vectors. I don't want to be too technical about the notation here, but that is the principle: you stratify this affine space into locally closed pieces, and each piece, up to inducing from a parabolic subgroup up to the full group, is a vector bundle over a product of semi-stable loci. And this is a very simple geometric fact, a consequence of the existence of the Harder-Narasimhan filtration. Writing down what this means in the Grothendieck ring of varieties immediately gives you the product identity. So that is basically the whole proof.
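Schematically, the stratification and the resulting identity in the Grothendieck ring can be written as follows (my notation, reconstructed from the prose: P_{d_•} denotes the parabolic subgroup and r(d_•) the rank of the vector bundle, both of which are left unspecified here, as in the lecture):

```latex
R_d \;=\; \bigsqcup_{\substack{d_\bullet = (d_1, \dots, d_s),\ \sum_i d_i = d \\ \mu(d_1) > \cdots > \mu(d_s)}}
  R_d^{\mathrm{HN} = d_\bullet},
\qquad
[R_d] \;=\; \sum_{d_\bullet} \frac{[G_d]}{[P_{d_\bullet}]}\;
  \mathbb{L}^{\,r(d_\bullet)} \prod_{i=1}^{s} \bigl[R_{d_i}^{\mathrm{sst}}\bigr].
```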
And the most important feature is buried in there: the fact that this stratum is a vector bundle is more or less equivalent to the category you are considering being hereditary, that is, to the global dimension of the category of representations being one: the global dimension of rep_C(Q) equals one. So that is the limitation of this. (So hereditary means global dimension...? Yes, right, global dimension at most one, exactly.) OK, so that is basically enough for today. Next time we will start with this identity, explore these local contributions, relate them to moduli spaces of semi-stable representations, and then finally define the DT invariants. OK, thank you very much. Are there any questions? Yes. [Question: is this related to the usual Hilbert scheme of points in the plane?] Yeah, yeah. So that is related not to a quiver but to a quiver with relations. You can consider this Hilbert scheme, which I introduced for the quiver with one vertex and two loops, where the dimension is d and the extra datum m is just one. This parameterizes left ideals of finite codimension — of codimension d, exactly, thank you — in the free algebra in two variables. And now, inside this, you can consider certain points. Formally, a point is represented as an equivalence class of two linear operators, A and B, together with a cyclic vector v. And then you can consider the closed subset defined by the equation AB = BA, commuting linear operators. And that shows that inside here, as a closed subset, you find the usual Hilbert scheme of points in the plane — which actually has a much simpler structure than this one. [Question: why is this the same as your definition?] Ah, OK.
The reason why this is the same as my definition is that the free algebra is one of those algebras with the property that any projective representation is already free. So as the projective representation we just take the free algebra itself, and we look at a surjection from the free algebra to V. Take the image of 1: that is then a cyclic vector, and X and Y are mapped to two linear operators A and B such that v is a cyclic vector for them. And that is the connection to this other way of writing the Hilbert scheme. There are some questions online. Yeah. [Question: could you give a bit more detail on how the Harder-Narasimhan type is defined?] Type, yes. OK, yeah. For time reasons I tried to avoid this — thanks for the question, and now I can give you the definition. So I will just elaborate on this and give the precise notation. The union you take is over all decompositions d_• of your dimension vector d into non-zero dimension vectors d_1, ..., d_s for which the slopes are strictly decreasing, because that is the one condition which appears here. And then we define R_d^{HN = d_•} — d_• is this decomposition — as the set of all representations V such that the dimension vector of the i-th subquotient of the Harder-Narasimhan filtration is precisely d_i, for i from 1 to s. OK, so from the Harder-Narasimhan filtration I just record the dimension vectors of the subquotients; they necessarily have to fulfill the decreasing-slope condition. And this is, in fact, a locally closed subset: the existence of such an HN filtration is a locally closed condition, which is not difficult to see. And then you identify this with an associated fiber bundle.
Namely, inside the group G_d you find a parabolic subgroup corresponding to this decomposition d_• — like the standard parabolic you find in a GL for any decomposition of your dimension. And then you have a certain vector bundle of known rank — the rank is easily computable, but if I write it down, I will make a sign mistake — over the product, for i from 1 to s, of the semi-stable loci in the representation varieties of the d_i. And the basic idea is that to a representation V in this stratum you attach the tuple of its Harder-Narasimhan subquotients. That is what you want to do, but you cannot do it literally on this level; you can do it literally, but only tautologically, on the stack level. You cannot attach to the representation its subquotients, because the subquotients do not have a preferred linear basis, and that is why you have to perform this change of groups. But that is essentially what is going on. And from this you just compute the motive: the motive of R_d is the sum over all these strata of the motive of G_d divided by the motive of the parabolic, times the motive of this product of semi-stable loci, times a power of the Lefschetz motive for the vector bundle. And if you just write this down and simplify everything, then you get to the product formula. Another question: is there a connection between the variation of the stability x and the monodromy of A_q^x? The monodromy of A_q^x sounds great. I have no idea what this could mean, but it sounds great — it would be wonderful if it were true. So if the one who asked this question could give me some details of what this monodromy could be... It is an anonymous participant. OK, sorry. So if this anonymous participant can hear me now: if you could just send me an email, anonymously if you want, about what this monodromy could be, I would be happy to think about it, because it sounds like a great suggestion. Yes? There is one more.
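To make the notion of HN type concrete, here is a small brute-force enumeration (my own illustrative sketch, not from the lecture) of all HN types of a dimension vector d, i.e. all tuples (d_1, ..., d_s) of non-zero dimension vectors summing to d with strictly decreasing slopes, for integral θ and κ:

```python
from fractions import Fraction
from itertools import product

def slope(d, theta, kappa):
    """mu(d) = theta(d) / kappa(d) for integral linear functionals."""
    return Fraction(sum(t * x for t, x in zip(theta, d)),
                    sum(k * x for k, x in zip(kappa, d)))

def hn_types(d, theta, kappa):
    """All tuples (d_1, ..., d_s) of nonzero dimension vectors with
    d_1 + ... + d_s = d and mu(d_1) > mu(d_2) > ... > mu(d_s)."""
    def tails(rest, bound):
        # decompositions of `rest` with all slopes strictly below `bound`
        if all(x == 0 for x in rest):
            yield ()
            return
        for e in product(*(range(x + 1) for x in rest)):
            if all(x == 0 for x in e):
                continue  # parts must be nonzero
            m = slope(e, theta, kappa)
            if bound is not None and not m < bound:
                continue  # slopes must strictly decrease
            remainder = tuple(x - y for x, y in zip(rest, e))
            for tail in tails(remainder, m):
                yield (e,) + tail
    return list(tails(d, None))
```

For example, for a quiver with two vertices, θ = (1, 0), κ = (1, 1), the dimension vector (1, 1) has exactly two HN types: ((1, 1),) itself (the semi-stable stratum) and ((1, 0), (0, 1)), since the slope 1 of (1, 0) exceeds the slope 0 of (0, 1).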
[Question: could you explain how to obtain the answer for A_q for the one-point quiver without arrows directly from the definition — isn't R_d trivial then?] Yes, it is. Yes, it is. OK, yeah, let's do this calculation. I was hoping for something like this for the question-and-answer session this afternoon, but if we have time, I can do this calculation now. It would also be a good idea to do one or two of these tricky calculations, using all these signs and half-powers of L, in the Q&A session this afternoon. [Question: are you going to prove the first factorization?] If you are interested, I can try to squeeze it in. Yeah? Fine. OK, I can do it. [Question: a philosophical question — can you do this for the stack of coherent sheaves on a smooth projective curve instead of the quiver? Can one still define the generating series?] Ah. Hm. No, there are tons of convergence issues. I mean, just in defining the moduli space you have all these problems of boundedness; the moduli stack is not of finite type. Exactly, because it is not of finite type, all the boundedness problems which you have will appear here, and things which you write down naively are usually not convergent, because things are not of finite type. So one has to be very careful and first approximate by restricting to bounded values of the slope, and then study the convergence as the slope goes to plus or minus infinity. But one has to be very careful. Any other questions? [Question: the second formula geometrically means that you have a stratification — does the first one also have some geometric meaning?] Yeah, yeah, OK. That is also more or less the earlier question. I don't know if I should do it now. Do you think I can squeeze this in? I will do it in the Q&A session, yeah? I will give the proof of this and the calculation for the trivial quiver and maybe some other calculation. We'll do this. Yes. Perfect.
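As a complement to the question about the one-vertex quiver without arrows: since R_d is a single point, the coefficient of t^d in A_q is 1/[GL_d]. Here is a small sketch of mine evaluating this at a numeric specimen value of the Lefschetz motive L (I omit the (-L^{1/2})-twist normalization, which depends on conventions):

```python
from fractions import Fraction

def gl_motive(d, L):
    """Motive of GL_d, prod_{i=0}^{d-1} (L^d - L^i), at a numeric value of L."""
    out = 1
    for i in range(d):
        out *= L**d - L**i
    return out

def point_quiver_coeff(d, L):
    """Coefficient of t^d in A_q for the one-vertex quiver with no arrows:
    R_d is a single point, so [R_d]/[G_d] = 1/[GL_d].  (Powers of -L^{1/2}
    from the twist are omitted here.)"""
    return Fraction(1, gl_motive(d, L)) if d > 0 else Fraction(1)

# d = 1: the stack is [pt / GL_1], so the coefficient is 1/(L - 1).
```

For example, at L = 4 the coefficient for d = 1 is 1/3, and for d = 2 it is 1/((16 - 1)(16 - 4)) = 1/180.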
Well, let's stop here. Thank you very much.