I wanted to correct one typo from the last lecture. It wasn't really relevant for the last lecture, but it will be relevant for this one. For the stable pairs space, there's the notion of descendants, and there were two different symbols I introduced. There's the tau, which is given by this correspondence: you pull up cohomology, multiply with the (k+2)-nd Chern character, and push down. The reason for that is you want this tau to have the same grading as the one in Gromov-Witten theory. That part was correct, but then in the notes I just cut and pasted over to the Chern character notation, which I said is better, and I concentrated on the fact that I changed the sheaf to the complex — but of course I'm supposed to keep the k the same. In the notes yesterday I had this k going to k+2, and it makes no sense to have the Chern character index k here and k+2 there. The correct thing is k and k. I don't know if anyone noticed, but anyway, the notes are corrected now. As I said, it didn't matter for the last lecture, because that lecture was about formal properties, about rationality and things like that, and the specific meaning of that index didn't really play a role. But it will matter now. So now I go to the last lecture, the fifth lecture. Here I have to tell you about the Virasoro constraints for stable pairs, and in some sense that's the whole goal of this series of lectures. This is relatively new material: a lot of what I've been speaking about in these lectures has been pretty old, but the actual work on the Virasoro constraints for stable pairs is relatively new, though it's been a long project. I want to explain what they are, how to get there, and what the formulas are. And there are some surprises.
So we start with X a nonsingular projective threefold with only (p,p)-cohomology. Already on the Gromov-Witten side I had this restriction, to help us avoid the sign rules and the inclusion of the Hodge grading, so I keep it here. The main example, and the place where the theorems actually are, is X a toric threefold. Now I want to write down the Virasoro constraints, and by now you should have some idea what these things are going to look like, at least their shape. The Virasoro constraints will take the form of universal relations among the descendant series. As we discussed last time, when I put these descendant insertions in, I get a descendant series, which is a q-series: a sum over q^n, where n is the holomorphic Euler characteristic of the sheaf F. In Gromov-Witten theory, each of these brackets was a number, but on the stable pairs side each of these is a q-series. In fact, by the conjectures or results, depending on where you're operating, each of these is not just a q-series but actually a rational function in q. So the Virasoro constraints here are going to take the form of certain relations between rational functions in q. And I think it's fair to say that the algebraic form the Virasoro constraints take on the sheaf side, the stable pairs side, is simpler than in Gromov-Witten theory, but they still require some terminology to explain. Before I write down the formulas, and how to prove them in the cases we can, I'd like to say some general things first. The constraints are conjectural in almost all cases.
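To fix notation for what follows, the descendant series I keep referring to can be written schematically like this — my transcription of the slide, with the bracket and the moduli space P_n(X, β) of stable pairs as in the earlier lectures:

```latex
\Big\langle \prod_{j} \mathrm{ch}_{k_j}(\gamma_j) \Big\rangle^{X}_{\beta}
\;=\; \sum_{n \in \mathbb{Z}} q^{\,n}
\int_{[P_n(X,\beta)]^{\mathrm{vir}}} \prod_{j} \mathrm{ch}_{k_j}(\gamma_j)\,,
```

where n is the holomorphic Euler characteristic of the sheaf F, and each such series is conjecturally — or, in the proven cases, actually — a rational function in q.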
So that's where we are in the subject. The main general theorem is that the constraints hold whenever X is toric — though unfortunately not for every single constraint, only for what are called the stationary constraints. I'll explain what stationary constraints are during the lecture; that's the case where it's proven. This part was proven in a paper with Miguel Moreira, Alexei Oblomkov, Andrei Okounkov, and myself in 2020; you can find it on my webpage or on the arXiv. If you're interested in the case where X has a more interesting Hodge decomposition, there was a subsequent paper by Moreira where he explains how to write down a proposal for the Virasoro constraints under the assumption that X is simply connected, so you get some Hodge decomposition there. The last topic I'll talk about today is more general. The Virasoro constraints here are about descendant integrals against the virtual class on the moduli space of stable pairs. If you find yourself a little distant from that world, there's a way to connect it to even more concrete algebraic geometry: since this is a theory about threefolds, you can dimensionally reduce it to a theory about surfaces, and when you do, the Virasoro constraints constrain tautological integrals on the Hilbert scheme of points of surfaces. This was part of the topic of Miguel's paper, where he actually proves the result for all simply connected surfaces. I'll explain that a little at the end. So if you're interested in Hilbert schemes of points of surfaces, this gives some new results about tautological integrals there, coming as just a corner of stable pairs theory.
Okay, so before I tell you how to write the Virasoro constraints: the formulas depend on some algebraic constructions. The first question is where we're going to work. We're going to work in an algebra of descendants of X, and this is a very simple algebra to think about — silently, we've already been working in it. The generators are these symbols with the Chern characters, and inside the parentheses is any cohomology class of X. More or less you take the free polynomial algebra on those symbols. Of course there are some obvious relations: scaling by a rational number inside the parentheses is the same as scaling outside, and adding inside the parentheses is the same as adding outside. So as I said, this is the algebra of descendants that in some sense we've been silently working in anyway; this is just making it explicit. In order to define the Virasoro constraints we need three algebraic constructions related to this algebra. The first is some derivations. For every k greater than or equal to minus one, we define a derivation — a Q(q)-linear derivation, so it annihilates q — from the algebra to itself. All I have to do to define it is give you the action on the generators, and that's given by a very simple formula: if I feed this R_k one of these descendants, what I get is some combinatorial factor, and the Chern index on the descendant is promoted by k. The combinatorial factor depends on k, of course, and also on the degree grading of the cohomology class — and here it's always a complex degree, since with (p,p)-cohomology everything is even and that's the complex grading.
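As a toy illustration of how these derivations act, here is a small Python sketch. The encoding of generators and the specific combinatorial factor are my assumptions — the factor below, a product of k+1 degree-shifted terms, is modeled on the paper's formula and is consistent with the remark that for k = −1 it is an empty product, i.e. one. Only the structural behavior — bump the Chern index by k, extend by the Leibniz rule — is what the lecture actually specifies.

```python
from math import prod

def r_factor(k, i, d):
    # Hypothetical combinatorial factor: depends on k, on the Chern index i,
    # and on the complex degree d of the class gamma.  For k = -1 the product
    # is empty, so the factor is 1, matching the remark in the lecture.
    return prod(i + d - 3 + n for n in range(k + 1))

def R_k_on_generator(k, gen):
    """Action of the derivation R_k on one generator ch_i(gamma).

    A generator is encoded (hypothetically) as (i, label, d): the Chern
    index i, a name for gamma, and the complex degree d of gamma.
    """
    i, label, d = gen
    if i + k < 0:
        return (0, None)  # convention: no negative Chern characters
    return (r_factor(k, i, d), (i + k, label, d))

def R_k_on_monomial(k, mono):
    """Extend R_k to a product of generators by the Leibniz rule."""
    terms = []
    for pos, gen in enumerate(mono):
        coeff, new_gen = R_k_on_generator(k, gen)
        if coeff != 0:
            terms.append((coeff, mono[:pos] + (new_gen,) + mono[pos + 1:]))
    return terms

# R_1 bumps ch_2(pt) (complex degree 3) to ch_3(pt), here with factor 2*3 = 6:
print(R_k_on_generator(1, (2, "pt", 3)))
```

The point of the sketch is only the shape of the operation: one generator in, one generator out with a shifted index and a degree-dependent scalar, then Leibniz over products.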
If you want to consider a more general Hodge decomposition, then you have to pick a Hodge grading here; that's discussed in Miguel's paper. And if you're nervous about the minus one case: for k equal to minus one there's no product, so the factor is one, and then the derivation doesn't promote the Chern grading but demotes it, and the convention is that you can never have negative Chern characters. So that's the definition of these derivations. They're pretty simple, actually pretty easy to think about: they just act as derivations, and you just have to remember to put in the right combinatorial factor. That's the first construction — that wasn't so bad. Then there's a certain kind of diagonal splitting; this is more notation. The generators look like this, as we've discussed: a ch with a gamma. But for notation we want to introduce a different symbol where I put two ch's together with a gamma, and I just have to tell you what I mean by it — it's some kind of coproduct with the diagonal. So I have to give the definition. What you do is take a Künneth decomposition of the diagonal, which looks like this: there are some left factors and some right factors, and you just put them into the Chern characters — the first one takes the left, the second takes the right. These are now generators, so this makes sense as an element of the descendant algebra. That's the second construction, so we've already gotten two thirds of the way. Then there's some more notation. This notation is a little more complicated, but again there's usually not much going on. I want to define this crazy thing here. We've already seen what this ch_a ch_b of gamma is: it splits as a sum over the Künneth decomposition. But now I want to weight this sum by various factors. So when I write this expression, what I mean is: I sum over the Künneth decomposition, with left and right as before.
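In symbols, the diagonal splitting just described reads as follows — my reconstruction from the verbal description, with γ_i^L, γ_i^R denoting the Künneth factors:

```latex
\mathrm{ch}_a\,\mathrm{ch}_b(\gamma)
\;:=\; \sum_{i} \mathrm{ch}_a\!\big(\gamma_i^{L}\big)\,\mathrm{ch}_b\!\big(\gamma_i^{R}\big),
\qquad\text{where}\qquad
\Delta_*\gamma \;=\; \sum_{i} \gamma_i^{L} \otimes \gamma_i^{R}
```

is a Künneth decomposition of γ pushed forward along the diagonal. The weighted version then inserts degree-dependent factorial coefficients and a sign into each summand.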
But each term gets a combinatorial factor for the left and for the right, depending on the degree of that term, and finally there's a sign. So this is just shorthand for that, okay? You can look at the slides — it's not necessary to absorb all the notation at once. In the formula, that's lowercase d for complex degree, and factorials with negative arguments are defined to vanish. Then the operator we want, with all this notation, is just multiplication by a certain element T_k — well, it's not so simple maybe, but it's an element, and the operator is multiplication by the element. Let me tell you what this element T_k is. I just explained what this symbol means: I take the Künneth decomposition of the diagonal multiplied by c_1, split it here, weight the summands by these factors, and sum. So this has now been completely defined. It's a single, particular element of the algebra, depending on k. And then for the sums there are some rules: I want the a's and b's greater than or equal to zero. And of course this is the interesting part — what I'm substituting here are the first and second Chern classes of the tangent bundle. Okay, that was kind of fast, so a quick review. There's a derivation, and the derivation is very simple: this R_k just bumps the Chern index by k, with some factor. And then, more complicated, is this element: for every k we define a particular element of the descendant algebra, and this notation defines it. First of all, I have to sum over a plus b equals k plus two, and over a plus b equals k; and then inside the sum are these symbols I've defined.
So inside the sum is a second sum, over these symbols: the Künneth decomposition of the diagonal times the class that I substitute into it, and then I weight each of the Künneth terms by some sign and some combinatorial factors. As I said, you can look at this if you're curious — it's just a formula for a particular element, and once you get used to the notation it's not really that scary. And that's it, actually; that's all I have to tell you. Now, the Virasoro constraint operator L_k: it has this element — whenever I write an element, that means the operator of multiplication by the element. It has the derivation. And then there's a composition: multiplication by this very particular element — that's the point class, I should say — composed with the derivation R_{-1}, and then a sign factor. That's the whole operator. And I would say this is simpler than the operator I defined in lecture two for the Virasoro constraints in Gromov-Witten theory. It's true that this element T_k has a certain complexity, but I'd say the derivation is as simple as possible, and in the last term we're just multiplying by a simple descendant of a point and composing with another derivation. So if you want to know where the complexity of this operator lives, perhaps it's in T_k — that's the most complicated term. But anyway, it's not so bad.
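Assembling the three ingredients, the operator has the following schematic shape — this is how I'd transcribe the slide, with p the point class; the exact placement of the sign and the coefficients inside T_k should be checked against the paper:

```latex
\mathrm{L}_k \;=\; \mathrm{T}_k \;+\; \mathrm{R}_k \;+\; (-1)^k\,\mathrm{ch}_{k+1}(\mathsf{p})\,\mathrm{R}_{-1}\,,
\qquad k \ge -1\,,
```

where T_k acts by multiplication by the weighted Künneth element just described, R_k is the derivation, and the last term is multiplication by the descendant ch_{k+1} of the point composed with R_{−1}.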
The Virasoro conjecture in this form goes back many, many years. Roughly speaking, the origin was — a long time ago, when we were in Princeton — that once Andrei and I understood that there's this descendant correspondence, and that at the same time the Virasoro constraints held in Gromov-Witten theory, it just must be that there are Virasoro constraints on the sheaf descendant side. In those days it was the ideal sheaves, and we didn't have enough knowledge to actually do the transformation then. But still, once you know such constraints must exist, you can just try to guess them, and we had guessed them in many cases. That's the beginning; the formulation here is from 2020. It says: X has only (p,p)-cohomology, you get to pick any curve class you want, you get to pick any element D of this algebra; then you apply the operator to this element, put it in the bracket, and this is zero. This is the descendant series, and it's just always zero. So it's some kind of miraculous conspiracy among stable pairs descendant series for threefolds: they conspire in this way to always give you zero. And this is easy to play with — well, certain parts of it are easy. For example, X equals P^3. I can pick whatever D I want; of course, if I pick a D with the wrong dimension, then the whole thing is identically zero, so you don't want to pick a stupid D — you want a D for which the dimension constraint is satisfied. So in this example, for X equals P^3, I pick this particular D, and I also get to pick which Virasoro operator I want — I pick L_1 — and then I apply L_1 to D.
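Stated compactly, in the bracket notation for the descendant series, the conjecture is:

```latex
\Big\langle\, \mathrm{L}_k(D) \,\Big\rangle^{X}_{\beta} \;=\; 0
\qquad\text{for all } k \ge -1,\ \ \beta \in H_2(X,\mathbb{Z}),\ \ D \in \mathbb{D}^{X},
```

for every nonsingular projective threefold X with only (p,p)-cohomology — an identity of rational functions in q.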
And that's just mechanical, because I've given you the specific rules: some derivations, some multiplications. If you follow those rules, you'll find this operator applied to D, and then you can just compute. The left-hand side is what you get — the left-hand side is this L_k(D) inside brackets. I've written it as three terms because there are three terms here. What the conjecture says is that if I take the descendant series for these three, with these combinatorial factors, it should sum to zero. This is about lines in P^3, so it's not so hard to check by hand. And it turns out that for these line invariants, each of these series is not just a rational function but in fact a polynomial, with three terms in each. It's a very simple geometry, lines in P^3, and each of these numbers can be checked, because each one is an integral over some stable pairs space. You check these numbers, apply precisely these combinatorial weights, and you get zero. It works perfectly in this case. Oh, we have a question: is it known to be false if X has dimension four? I mean, for dimension two, I told you I'm going to make some comments, and a version of this is exactly true in dimension two. Dimension one I haven't thought about, but for dimension four it's not clear what it even means, because there's a virtual class here. Maybe the question you're asking, if I can interpret it, is whether this whole thing is just zero cohomologically — and that is not going to be true, although I'm not prepared to give you an example here.
And on the Gromov-Witten side it's usually not the case that these vanish without the virtual class. In dimension four we don't have a virtual class in general, so it's not clear what it means. I don't know if that's an answer. But in dimension two it's definitely true and interesting, and I will mention that at the end. Okay, so that's the conjectural part and the formulation. As I said, it's really kind of mechanical and not at all hard to wrap your head around. You take your X — for the formulation I'm picking here, it should have only (p,p)-cohomology — then there are these operators on the descendant algebra, and as I said, they're pretty easy to understand. Once you have that loaded into your brain, the conjecture starts giving you relations among the different descendant series, and it gives you lots of them, and they're nontrivial. We have another question about this example: how is this sum zero? I guess somebody actually tried to compute it and didn't get zero. Well, I don't know — you take here minus three plus one plus two, that's zero; six minus ten plus four, that's zero; minus three plus one plus two is zero. Looks pretty zero to me. — I think the last exponent 2 should be a 3. — Sorry, that's a good point. Now it's zero. Maybe that brings down the whole conjecture! Sorry — in this line case, you get one number for each of the three exponents. Thanks, I'll fix that on the slides. Okay. And I will say — another question you might ask is what happens when X doesn't have only (p,p)-cohomology. When it does, in the toric cases, one can do a bunch of examples like this.
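Just to record the arithmetic of that exchange: with the exponent typo on the slide fixed, the three columns of numbers read out loud do each sum to zero. A trivial check — the tuples below are exactly the numbers quoted in the discussion:

```python
# Column sums quoted aloud for the lines-in-P^3 example:
#   -3 + 1 + 2 = 0,   6 - 10 + 4 = 0,   -3 + 1 + 2 = 0
columns = [(-3, 1, 2), (6, -10, 4), (-3, 1, 2)]
print([sum(col) for col in columns])  # -> [0, 0, 0]
```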
Of course, as the degrees go up it becomes harder to compute these. Alexei Oblomkov has some software, but of course the software doesn't go to infinity — the computer breaks eventually. But for the toric cases you can check a lot, so in terms of confidence we're pretty confident there, because of the many checks — and besides, there's a theorem, which I'll explain in a second. In the cases with interesting Hodge decomposition, the examples are much harder to check because there aren't so many good tools. Miguel Moreira has done some basic checks for the cubic threefold, which has an interesting Hodge decomposition in the middle, and his formula passed those checks. But also on the Gromov-Witten side, the cases where the Virasoro constraints are proven for varieties with interesting Hodge decomposition are basically just curves — there are some trivial cases, but the interesting proven case is curves — because it turns out that on both sides it's pretty hard to compute for such varieties. There are basically two reasons. First, such a variety does not have a good torus action — certainly not a torus action with isolated fixed points — so localization is one tool you can't use immediately, or at least not off the shelf. Second, another tool one likes to use is degeneration, breaking the variety. Typically, if the variety has interesting Hodge structure, you can often break it, but there will be some vanishing cycles and you'll lose some of that cohomology; it just runs away from you in the degeneration. The degeneration will tell you about some things, but not about the cycles you've lost. Okay, this was an aside.
So what's the theorem? The theorem in our paper last year says that if X is toric — and of course I'm always assuming nonsingular and projective — then these Virasoro constraints hold for all k, for all toric X, for all curve classes; the only restriction is that the D you plug in must be stationary. What that means is defined here: it's the subalgebra generated by the descendants with no restriction on the Chern index, but with one restriction on the cohomology class — its degree has to be greater than zero. What does that mean? It just means the class can't be the identity class. So this says: as long as the D you plug in doesn't involve descendants of the identity class, the constraint is proven. Of course, we think it's true also for descendants of the identity class, and you can check cases by hand; I'm not saying there's anything wrong with them, it's just that our proof doesn't capture them. But this is a lot of cases — basically almost all of them. You just can't put in descendants of the identity class, which is a shame, and I'll explain why that one runs away from us. Okay, are there any questions about this theorem? In particular, the theorem does cover this example — independent of the check we just did, this case is covered by the theorem. — Shouldn't the identity be the simplest case, in a sense? — Yeah, it's a reasonable philosophical position that the identity is the simplest case, but it's equally defensible that the identity is the most complicated case, and I'll make the second argument. If you look at this threefold, it's some big space — it's like Russia — and the point can go anywhere it wants. Now, if I impose a cycle, then the point has to live in a smaller place, so its movement is constrained.
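In symbols, the stationary subalgebra can be written like this — my notation, following the verbal definition:

```latex
\mathbb{D}^{X}_{\mathrm{stat}}
\;=\; \mathbb{Q}\big[\, \mathrm{ch}_i(\gamma) \;:\; i \ge 0,\ \deg_{\mathbb{C}}\gamma > 0 \,\big]
\;\subset\; \mathbb{D}^{X}\,,
```

so the only excluded generators are the descendants of the identity class.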
If I put in the identity condition, the point can move anywhere it wants. From this point of view I've given it the most freedom, and from the point of view of controlling it, the least control. I don't know if you accept that argument, but that's actually the relevant sense here. This notion of stationary comes from the study of P^1 — it's an old word from how Andrei and I used to describe the study of P^1. P^1 has only two classes: the identity class and the point class. If you remember, in the earlier lectures I gave you a specific rule for dealing with the descendants of the point class, relating them to Hurwitz theory. And it is the case that, if you're faced with a calculation of the Gromov-Witten invariants of P^1, you can have descendants of the identity and descendants of the point, and the point ones are the easier ones to deal with. We know how to deal with everything now, but there is a staging of complexity: the descendants of the point are the easier ones and the descendants of the identity the harder ones. And it's precisely because with descendants of the identity the point gets to move wherever it wants, whereas with a descendant of a point the marked point has to lie at that point, and that controls it a lot — in particular, if you take a degeneration, you can tell it where to go, et cetera. Okay, I think I've said enough, but it's certainly true that while one often feels the identity is the easiest cohomology class, here it happens to be the hardest. Is that enough? So, how the proof of this theorem works: it's three steps. We start with X a toric threefold, and the first step, as I already told you, is that the Gromov-Witten Virasoro constraints hold.
This is because the quantum cohomology of every toric threefold is semisimple — that's a result of Iritani, I think — and for a semisimple cohomological field theory you can use Teleman's classification, and then Teleman proves that the Gromov-Witten Virasoro constraints hold. Okay, so that's the starting point. Then the issue is: if we believe the theories are equivalent, we have to somehow transport these Gromov-Witten Virasoro constraints to the stable pairs side. How is that going to be done? You need a Gromov-Witten/stable pairs descendant correspondence. This is a promotion of the Gromov-Witten/DT correspondence that I explained in the previous lectures — you need a full promotion of that, and it has taken a really long time to develop, maybe 15 years or something like that. First you have to imagine its form, and then you have to prove it exists. In some papers with Aaron Pixton starting around 2012, we conjectured a particular form, and we proved that form holds in the toric case, but we didn't compute the coefficients of the correspondence. Then, returning to the actual problem of computing the coefficients, some progress was made in 2019 with Alexei and Andrei. The idea was that by then we knew the correspondence exists, we knew its form, and we knew this abstract correspondence is true for toric varieties, so we just needed to compute the coefficients. And to compute the coefficients, if you're clever enough, you can learn a lot from this one old example: P^1 with the total space of two line bundles over it — actually trivial line bundles, so P^1 × C^2, a pretty simple thing.
You have to compute this example completely, in some sense, on the Gromov-Witten side and also on the stable pairs side. We know how to do those computations very well, but you need to keep track of some equivariant aspects, and it turns out that to do them well you have to take opposite weights. Taking opposite weights looks like a drastic thing, but roughly speaking, you do the descendant calculation with opposite weights on the Gromov-Witten side and on the stable pairs side. What happens on the Gromov-Witten side is that the entire theory I discussed — the Gromov-Witten theory of P^1 that I had solved with Andrei years before — all of those formulas go in. On the stable pairs side it's actually a little bit easier. So anyway, you have these three things: the general form of the transformation, which is proven but with unknown coefficients; the calculation of this example on the Gromov-Witten side; and the calculation on the stable pairs side. This gives you mathematical constraints on what the correspondence is, and it turns out that from these we can calculate the correspondence — except we lose control of the descendants of the identity. The descendants of the identity somehow sneak out, and in some sense they sneak out because of this t, minus t equivariant specialization. That's a summary; if you want, you can read the discussion in the papers, but roughly speaking that's the development. First you have to prove that a correspondence exists, with undetermined coefficients but nevertheless existing. Second you have to compute it, and the computation uses all sorts of tricks — some years of tricks — but the outcome is that the descendants of the identity escape those tricks. So that's the second part.
Then there's something one has to do which is algebraically huge, in some sense. There's this crazy correspondence — I'm going to show you the formulas for it. You have to take the Virasoro constraints, which have their own complexity, and transform them by this correspondence. Then you get something you know is true, and you hope it lands on the conjectured form — we had conjectured this form long before we'd done these calculations, just from data. So you have to go through this path and hope you end up exactly here. That is what happens, but the calculations are kind of long, and I'll show you the complexity involved. As I was trying to say, these Virasoro constraints on the sheaf side are algebraically simpler — the formulas are kind of nice and simple — while the Gromov-Witten side is complicated and the correspondence is very complicated. Somehow the correspondence undoes some of the complexity of the way the Virasoro constraints are born in Gromov-Witten theory, and that's why the derivation is rather complicated. I want to show you some of the formulas. But before I do, I want to make another point, which is that for this whole subject, this proof is slightly backwards. Another point I should have made earlier is that many people who work on these things would view the geometry of stable pairs as simpler than Gromov-Witten theory. In fact, that's one of the nice facts about this correspondence: there are various things in stable pairs theory that one likes to study and can prove, and then you can move them back to Gromov-Witten theory. Many results use this idea.
So one could hope that the descendant behavior of stable pairs is simpler than in Gromov-Witten theory, and the form of the Virasoro constraints actually confirms that belief. Given that, it's a little strange that the proof has to go through Gromov-Witten theory. I would like to run the whole argument in the other direction: to have some geometric reason for the descendants to satisfy the Virasoro constraints on the stable pairs side, and use that to prove the Gromov-Witten Virasoro constraints. But I don't know how to do that at the moment — all the arrows are pointing down; I'd like to turn them around, but I'm not able to. So that's the main challenge here: prove the Virasoro constraints for stable pairs directly, using the geometry of the moduli space of stable pairs. And there's some relief in being able to think about this problem — especially if you haven't gone through the past 20 years, there's some relief in putting the conjecture in this form, because you can just start here. You don't have to listen to the first four lectures; you just start with this constraint operator and think about stable pairs. But I don't know how to do it. — A couple of questions, Rahul. One of them is: is there some integrable hierarchy around? — So, you know, it is the case that if you look at, say, the Hilbert scheme of points, there are all sorts of algebraic structures there, and I don't know how to lift them. I'm not sure exactly what you're asking; maybe you're asking whether these operators satisfy the Virasoro bracket. They're close to doing that — that's a separate topic. These constraints are very close to satisfying the Virasoro bracket; you have to correct them slightly, because the correspondences don't quite respect it. I don't know if that's your question.
Question: the Hilbert scheme of points has all sorts of algebraic structure on its full cohomology, and one could hope that these Virasoro constraints for stable pairs are a shadow of some parallel algebraic structure on the cohomology of the moduli of stable pairs. There are some difficulties going down that path, although it seems like an interesting line of thought. One difficulty is that the moduli spaces of stable pairs, as actual spaces, are kind of terrible; it's only the virtual class that's good, and I don't know how to promote that virtual class to a whole cohomology theory. This is the direction of refined invariants, and there are ways to do that when X is Calabi-Yau, but that's precisely not the case we're studying here. Other than that, I don't know, but it's a good question, I think. Question: is there any hope of instead transferring the argument proving the Virasoro constraints, using semi-simplicity and Givental's formalism, directly to the stable pairs side? Sorry, you're speaking too softly; I heard some words, but not all of them. The question is in the Q&A, but I can't read the Q&A unless I stop my screen share; my Zoom setup is not optimal. Can you transfer the argument proving the Virasoro constraints using semi-simplicity and Givental's formalism? Yes, I get it. We don't know how to do that; it's actually a good question. There's a way in which Gromov-Witten theory has extra structure, which is the genus. Semi-simplicity is about genus zero, and then genus zero controls higher genus through the classification. That's a structure that's present in Gromov-Witten theory.
In other words, in Gromov-Witten theory you start with the simplest case, genus zero, and then move up: genus one, genus two. Stable pairs, and sheaf theories in general, are not like that. I don't know how to formulate a classification theorem there. Gromov-Witten theory sits over the moduli space of curves, and the moduli space of curves is independent of the target; you can think about things like this, it's profitable, and it's used to prove things. But take a target X and a moduli space of sheaves on X, in our case stable pairs, a sheaf with a section. What structure do you want? First of all, what's the simplest piece? There's no sense in which there is a simplest one; it's not like genus zero, genus one. They all come at you at once. There is the Euler characteristic index, but you don't really know where that starts; it starts somewhere negative. You could make a case that the lowest one is the simplest, but the lowest one depends on beta, so it's not uniform the way genus zero is for curves. That's the first thing. The second thing is that the moduli space of maps maps to the moduli space of curves, and that's a crucial part of the classification idea used there. It is interesting to think about that kind of idea for stable pairs. A stable pair is a sheaf and a section, so what structure could I forget? I could forget the section. So stable pairs should, in some sense, map to sheaves. That's a very profitable line, though it immediately becomes more delicate, because then: what is that space of sheaves?
Well, maybe take an art and stack of sheaths or maybe you have to impose some stability conditions. But this is a useful line to think about stable pairs mapping to just modular space of sheaths. And that's used, for example, in this to prove that Q goes to one over Q in variances in the CloudBio case. That's the map that's available but that map has a very different flavor than the map from MG, from modular space of stable maps to modular space of curves. For example, it still depends on X. So I don't know how to do what that question is asking, which I interpret this somehow, finding all the parallel structures that are used on the modulate curve side to find those structures on the sheath side and then use the same argument. I don't know how to do that. Okay, that was a long answer. But anyway, I regard this for this lecture series, the main challenge is to prove the various our constraints for stable pairs directly using the geometry of stable pairs. And I tried to present this as a appealing path to think about, because as I said, if you're entering the subject for the first time, you can work on this problem without studying all of that history of Grimoire-Vitton theory. Sub-challenge is to control the sentence of one. I think that if, and then the advantage for this is that if one can find some kind of geometric argument where it's true, then maybe that geometric argument won't use anything. Well, maybe we'll be able to prove the various our constraints in all cases, not just a semi-simple one. That's the practical hope. Okay, so then I want to say a little bit about this correspondence. And this is going to hurt a little bit. So, in fact, it hurts so much, I didn't want to write all the formulas myself because they're already written and it actually took a long time to get these things correct. So this Gromo-Vitton-descendant correspondence in the form we use is a rule. It's just the explicit rule by explicit formulas. 
And that rule is going to tell you how, so can you see this, or should I make it bigger? Is that better? Yes. Okay. So this is the stable pairs side and this is the Gromov-Witten side. Sorry, I don't know what happened there; I got lost. In the original Gromov-Witten/DT correspondence, what we had, roughly speaking, was: in the Calabi-Yau case you take no descendants on the stable pairs side, and that's the same as the series with no descendants on the Gromov-Witten side. All you have to do is relate the q here and the u there, and that's given by this change of variables. That's the old rule; you can't make a rule simpler than that. In our current language, 1 goes to 1. But now we have to find a rule, this correspondence, that says: if we start with an arbitrary descendant on the stable pairs side (these are our Chern character descendants, the same notation except with a tilde, and the tilde is just the old one shifted by a little bit; the formulas come out a little better that way, nothing to worry about), then I have to use my rule to turn it into some descendant polynomial on the Gromov-Witten side. And then, when I apply the brackets, I should get the same series after this change of variables, with some auxiliary q and u coefficients. That's the challenge: find some kind of universal rule that transforms descendants on the stable pairs side into descendants on the Gromov-Witten side. As I said, there has been a lot of thought about this, and MNOP2 contained the first ideas about the existence and structure of such a rule.
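For reference, the "old rule" mentioned here can be written out; this is the standard convention from the MNOP papers, stated as a reminder rather than read off the slide:

```latex
% Calabi-Yau case, no descendant insertions: the stable pairs series in q
% matches the (reduced, connected) Gromov-Witten series in u
% after a single change of variables.
Z_{\mathrm{PT}}(X;q)_{\beta} \;=\; Z'_{\mathrm{GW}}(X;u)_{\beta}
\qquad \text{under} \qquad -q \;=\; e^{iu}.
```

With descendant insertions, the task described in the lecture is to replace this one-line dictionary by a universal rule on the insertions themselves, compatible with the same variable change.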
In my papers with Pixton, we explain a particular universal form this rule takes. We conjecture that the form holds for all threefolds, and we prove it in the toric case. The last step is to actually calculate the coefficients of that form. That's the goal, and that's what's needed for the transfer. Because once you have this rule, you're really in good shape: it tells you exactly how, if you want to compute stable pairs descendants and you know how to compute Gromov-Witten invariants, to solve your problem. Once you have the rule, you can take knowledge about Gromov-Witten descendants and move it to stable pairs. And the knowledge we want to move in this direction is the Virasoro constraints of Gromov-Witten theory. Okay, so what is the nature of this rule? As I said, the formulas look complicated, but after thinking about them for a long time, they could be worse, let's put it that way. These are the descendants in Gromov-Witten theory, and the first idea is that you should switch to a different basis. This is an old idea; I think these first appeared in papers of Ezra Getzler, so this is sometimes called Getzler's renormalization. The specific change is given by what I circled in red here: if you want these taus, these formulas explain how to write them in terms of these a-descendants. It's not such a big deal, and the formulas aren't too bad: some summations with combinatorial factors. In some sense the reason for this, and why Ezra introduced them, is that the Gromov-Witten theory of curves is best written in terms of these a-operators.
And as I explained in the overview, at some point we're going to interact with the Gromov-Witten theory of curves to nail down the coefficients of the descendant correspondence. So that's the first idea. The second idea: I have to find the rule that takes a stationary descendant on the pairs side and moves it to the Gromov-Witten side, and that rule is this \(\mathfrak{C}^{\bullet}\); I don't know how to pronounce that. The way to define it is, first you sum over all ways of interacting. In MNOP2 we discussed this rule in the language of chemistry: the Chern characters are little particles, and they interact with each other. So here you sum over all interactions, and the open dot is the connected interaction, given by three explicit formulas: one Chern character can self-interact, or two can interact, or three can. Why only these three? Because the descendants are stationary: the cohomology class cannot be in H^0; it has to be in H^2, H^4, or H^6. And one of the nice features is that these interactions are supported only where the cohomology classes pair nontrivially. This is another answer to Maria's question: if I want to put in the identity class, I have to consider infinitely many chains of interactions, because the identity can always keep interacting. So from this point of view it's dramatically easier to avoid the identity: once the cohomology insertions are stationary, at most three of them can interact, so I only have to solve the correspondence for these three cases. And here are the explicit formulas.
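The "at most three can interact" claim comes from a simple degree count; the following is my paraphrase of the bound, not a formula from the slides:

```latex
% Each stationary insertion carries a class \gamma_i \in H^{\geq 2}(X).
% An interaction of k insertions multiplies the classes, so the degree of
% the output is at least the sum of the input degrees:
\deg(\gamma_1 \cdots \gamma_k) \;\geq\; \sum_{i=1}^{k} \deg \gamma_i \;\geq\; 2k.
% On a threefold, H^{*}(X) vanishes above degree 6, so a nonzero
% interaction needs 2k \leq 6, i.e. k \leq 3.
```

An identity insertion has degree 0, so it escapes this bound: arbitrarily long chains can stay nonzero, which is why the stationary restriction is such a simplification.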
As I said, it takes a lot of work and an incredible amount of attention to get these formulas correct. I would say it took years, maybe a decade or something like that; we weren't working on it all the time, but roughly speaking that's a fair statement. So this tells you how to take these Chern characters and move them to the Getzler notation for the descendants, and the previous slide tells you how to move from the Getzler notation to the taus. The outcome is a theorem: in the toric case, and conjecturally in general, that is the formula for the correspondence. If you're coming to this for the first time, you think: that's crazy, these formulas are terrible, they're not going to be useful for anything. And I think that's a fair first view of them. But in fact, when you really get into the subject, they're not so bad, and they are useful. The proof that they're useful is the last step of this argument; in some sense these formulas prove themselves useful, because you can complete that last step with them. You can transfer the Virasoro constraints from Gromov-Witten theory, which have their own complexity, to the Virasoro constraints I wrote for stable pairs. This transfer is done in the paper from last year, and if you look at that paper, there are a lot of long calculations: you take the long formulas for the Virasoro constraints of Gromov-Witten theory and crash them against these formulas for the transfer. The inescapable feeling, doing that, is that the correspondence is undoing some complexity in the original Gromov-Witten Virasoro constraints.
But that's a pretty serious algebraic calculation, which also succeeded after some years. So that's the end of the proof of the theorem, and it's almost the last thing I want to say, but I do want to say one more thing, about the Hilbert scheme of points; that's in Miguel's paper. Question: how do the Virasoro constraints pass through cutting and gluing, that is, degeneration? Oh, that's an excellent question, and the sad answer is that we don't really know very well. I think the fundamental way to ask the question, say in Gromov-Witten theory, is this: a smooth variety has a usual Gromov-Witten theory, and there are Virasoro constraints for that usual theory. But over the years a log Gromov-Witten theory has also been developed. The simplest case is a variety relative to a divisor: there's a relative Gromov-Witten theory, now called log Gromov-Witten theory, and in log Gromov-Witten theory you can take more complicated targets. I think the fundamental question is: is there a way to state the Virasoro constraints in log Gromov-Witten theory? The answer is that we don't know, but there are two pieces of evidence. The first is in the proof that Andrei Okounkov and I did for the curve case, dimension one. A crucial step there is to formulate the Gromov-Witten invariants for log curves, not just curves but curves with a log structure at one point. We formulated that, and it's part of the proof, one of the things we use. So the answer is yes in dimension one, and it's crucial. In higher dimension, we don't know how to do it, although there has been some effort. More recently, there have been ideas about using negative tangency conditions.
So various people have been working on this, Honglu Fan and Longting Wu among them, using negative tangency conditions to try to write Virasoro constraints, with some success, I think, in genus zero. But the answer is that we don't know how to do this; I don't know how to do this, though there has been some effort. It would be great if someone wrote down Virasoro constraints for an arbitrary log target. I think that would help in the proofs; it helped in dimension one. In some sense, that's one of the reasons there are not so many cases proven outside the toric ones. And the parallel statements on the stable pairs side: I haven't really discussed relative stable pairs theory, stable pairs relative to a divisor. This is actually one of my favorite parts of the whole subject, a threefold relative to a divisor. There's some kind of miracle where the Gromov-Witten/DT correspondence can be formulated in this log context. The classical example is a threefold relative to a smooth divisor, and that's been studied a lot. One of the really interesting things there is that the boundary conditions on the Gromov-Witten side, if you're familiar with relative Gromov-Witten theory, transfer to incidence conditions in the Hilbert scheme of points of the divisor surface; that's what receives the boundary conditions in stable pairs theory, and it goes precisely to the Nakajima basis there. More recently, in the past year, there's a paper by Davesh Maulik and Dhruv Ranganathan which defines log stable pairs and ideal sheaves theories for an arbitrary log target (or maybe with some conditions, but more or less arbitrary), and there too you can lift the Gromov-Witten/DT correspondence. So under these degenerations there should be no essential problem with the Gromov-Witten/DT correspondence; that survives everything. The difficulty is the Virasoro constraints.
We don't know how to formulate them there. Okay, so that was another long answer to the question; I hope it conveyed some information. The last topic I want to talk about is the Hilbert scheme of points of a surface. One reason is that something very concrete and nice comes out here. The other is that it brings the stable pairs subject down to earth, in the sense that people who have not thought about stable pairs might consider them some abstract space you'd never bump into; but in fact, sitting in one corner of the theory is the very familiar Hilbert scheme of points of a surface. How that works is not surprising. Suppose my threefold X is of the form S cross P^1, where S is a simply connected nonsingular projective surface; so if I'm interested in the surface, I make this threefold by crossing with P^1. Moreover, I look at the curve class which is n times the vertical class, n times the class of the P^1 fiber. So the curves look like n vertical fibers, or they could be all clustered together. Then the first geometric fact is that the stable pairs space for this threefold, for exactly this curve class, is, as a scheme, just the Hilbert scheme of n points of S. I leave that to you to check; there's not really much to check. And something else which is nice: the virtual class of this stable pairs space is just the usual fundamental class of the Hilbert scheme, which is well known to be smooth of dimension 2n. So not only is the space the Hilbert scheme, but the virtual class is not some wild class on it; it's the usual fundamental class. This means these tautological integrals over the Hilbert scheme are included as a certain corner of stable pairs theory.
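In symbols, the identification just described reads as follows (with \(P_n\) denoting the stable pairs space indexed by holomorphic Euler characteristic \(n\)):

```latex
X \;=\; S \times \mathbf{P}^1, \qquad
\beta \;=\; n\,[\mathbf{P}^1] \;\in\; H_2(X,\mathbb{Z}),
\\[4pt]
P_{n}\bigl(S \times \mathbf{P}^1,\; n[\mathbf{P}^1]\bigr)
\;\cong\; \mathrm{Hilb}^{n}(S)
\quad \text{as schemes},
\\[4pt]
\bigl[P_{n}\bigr]^{\mathrm{vir}} \;=\; \bigl[\mathrm{Hilb}^{n}(S)\bigr],
\qquad
\dim_{\mathbb{C}} \mathrm{Hilb}^{n}(S) \;=\; 2n.
```

So every tautological integral over \(\mathrm{Hilb}^{n}(S)\) is literally a stable pairs descendant invariant in this corner of the theory.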
So then it makes sense that if we can constrain the descendants by these Virasoro constraints, they should also say something about the Hilbert scheme of n points. And this is exactly the path Miguel takes in that paper, the Virasoro conjecture for the stable pairs descendant theory of simply connected threefolds. He proposes constraints for simply connected threefolds, which then involve the Hodge grading, and he specializes them to surfaces in the way I've suggested. You get these constraints for surfaces; but moreover, he then proves them for surfaces. The reason he can prove it for surfaces with this nontrivial Hodge grading is that we know how to prove it for toric surfaces by our toric results, and then universal properties of the Hilbert scheme show that this actually proves it for all surfaces. So this is also evidence that the Virasoro constraints for threefolds are right; the ones he wrote with the Hodge decomposition get some kind of confirmation. Anyway, I wanted to write something about this. So what is a descendant on the Hilbert scheme? You can now just forget about stable pairs. The Hilbert scheme is the space of length-n subschemes of S, and on the product of the Hilbert scheme with S the universal object is the universal subscheme. What's a descendant in this language? You take a cohomology class on S, pull it back to the product, and multiply by a Chern character of the universal subscheme; the universal subscheme is not smooth, but it has a finite resolution, so its Chern characters are well defined. And then, as always, we shift by the trivial part; it doesn't matter much, but it's a little nicer.
Then you push it down, and that gives you a tautological class on the Hilbert scheme, some Chern class of the universal subscheme. It's a slightly nontrivial thing, because the universal subscheme is singular. So that's the descendant, acting in the cohomology of the Hilbert scheme. And what is the theorem? The theorem says: take a simply connected surface and put in some monomial of descendants (in the previous formulation I allowed anything in the descendant algebra, but take a monomial); there's an operator that acts on it, and once the operator has acted, every term integrates to zero, and every term is a descendant integral. So the only question is: what is this operator? You get it by going up to S cross P^1 with this curve class and taking the operator we know there; since the virtual class behaves the way you want, this gives you the operator downstairs. So what is the formula for this operator? There are the T and R parts, very similar to the ones we had for threefolds; I won't rewrite them now, you can look in Miguel's paper. And then there's one more, which takes a slightly different form, so I thought I'd write that last one, S_k. To define S_k, you take a derivation on the descendant algebra with generators for the surface. We have a slightly different thing here: a descendant operator carrying a cohomology class, and the derivation bumps the index and also multiplies the cohomology class. With this S operator, I just wanted to show you one place where the Hodge decomposition matters. It's this derivation, with the cohomology class being bumped and the Chern character here, and the sum runs over all terms of the Künneth decomposition of the diagonal, but only those terms where the left-hand factor has zero on the left side in the Hodge decomposition, Hodge type (0, *).
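The descendant construction on the Hilbert scheme can be summarized as follows; the notation (\(\mathcal{Z}\) for the universal subscheme, \(p\), \(q\) for the projections) is mine, and I write the class before the shift by the trivial part mentioned above:

```latex
% Universal subscheme over the Hilbert scheme of n points of S:
\mathcal{Z} \;\subset\; \mathrm{Hilb}^{n}(S) \times S,
\qquad
p: \mathrm{Hilb}^{n}(S) \times S \to \mathrm{Hilb}^{n}(S),
\qquad
q: \mathrm{Hilb}^{n}(S) \times S \to S.
\\[4pt]
% Pull back \gamma, multiply by a Chern character of the structure sheaf
% of \mathcal{Z} (well defined via a finite resolution), push down:
\mathrm{ch}_{k}(\gamma) \;:=\;
p_{*}\Bigl( \mathrm{ch}_{k}\bigl(\mathcal{O}_{\mathcal{Z}}\bigr)
\cdot q^{*}\gamma \Bigr)
\;\in\; H^{*}\bigl(\mathrm{Hilb}^{n}(S)\bigr),
\qquad \gamma \in H^{*}(S).
```

The theorem then says that after applying the Virasoro operator to a monomial in these classes, every term integrates to zero over \(\mathrm{Hilb}^{n}(S)\).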
So if S is simply connected, there are not many choices: it could be (0,0) or it could be (0,2). But this is roughly how these things look. They look very similar to the threefold ones, except there are some dimension reductions, some threes turning into twos when you go through this, and Miguel also puts in the Hodge weights, which run through all of these. Somehow, to define these operators, you need to know something about the Hodge decomposition. Okay, but this is extremely explicit. Moreover, it's a theorem, and in practice what it does is: you give the surface, you take these descendant operators, and they give you some nontrivial relations after integration, and they're true. Okay, I think I'm going to stop here. It's too bad that I couldn't come to Paris to meet people, especially students I haven't met before. If you've been to all these lectures, I invite you, if you're in Zurich, to knock on my office door; after a short exam on the course material, we can get a coffee or something like that. It's been raining here all week, and here's the view of Zurich on a kind of stormy day at sunset. And that's it. That's the end. Thank you very much, Rahul. Any questions? Question: you mentioned that you knew about the Virasoro operators on the PT side before the correspondence. Could you explain how you could guess them beforehand? How we guessed them beforehand; yes, I know exactly how. This was a really long time ago, maybe 2005, something like that. Andrei had written the small box-counting program for DT theory, and you see the answer is very simple; there's only one complicated thing in it. Oh, I'm sorry, I've stopped sharing. It doesn't matter.
So we guessed for P^3: we just wrote down the terms we thought could occur, put in undetermined coefficients, and then started generating data. If there's going to be a relation, you can solve for it, because the system is massively overdetermined; if there isn't, you're just going to be wasting your time. But it turned out we found L_1 very quickly, and once you find one, you can start taking brackets. I didn't really explain this, so I should say it now: on the Gromov-Witten side, L_0 and L_{-1} are closed under brackets, so once you find those two, you don't get any more; you make no profit. And even L_1 is not enough, because when you start taking brackets you just get more of L_0 and L_{-1}; there's a little sl_2 there. But if you find L_2, then you win, because you can start taking brackets with L_2: first you find L_1, and then brackets of L_1 and L_2 generate all the relations. So to guess these relations, you really only have to guess one or two of them. That's how it happened, basically: just data and undetermined coefficients, with a lot of hope. With the method of undetermined coefficients, of course, you have to be correct about which terms you allow. But that's how we did it, at the computer. Without the computer, it would have been impossible. Any other questions? Question: in this variant, if you take the theory not on S cross P^1 but on some nontrivial P^1-fibration over S, does it tell you something about Hilb(S)? Sorry, I'm having a difficult time understanding; you're somehow too far from the microphone. So I understood: look at some nontrivial fibrations over S. Yeah, I don't think you learn too much from that.
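The bracket argument in this answer is just the standard Virasoro relations; as a quick check of why L_1 and L_2 generate everything (standard Lie algebra facts, not taken from the slides):

```latex
% Virasoro bracket (no central term appears in the range n >= -1):
[L_m, L_n] \;=\; (m-n)\, L_{m+n}.
\\[4pt]
% L_{-1}, L_0, L_1 close among themselves: a copy of sl_2, no new operators:
[L_1, L_{-1}] = 2L_0, \qquad [L_0, L_1] = -L_1, \qquad [L_0, L_{-1}] = L_{-1}.
\\[4pt]
% But once L_2 is known, brackets produce all higher operators:
[L_1, L_2] = -L_3, \qquad [L_1, L_n] = (1-n)\, L_{n+1} \quad (n \geq 2).
```

So finding L_1 alone only buys the sl_2, while finding L_2 as well determines the whole tower of constraints; this is why one or two good guesses suffice.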
I mean, whether that gives you some new information about S; oh yeah, that's a good question. You should ask Miguel; I think he's thought about those things. In fact, I think I asked him that once, and my memory is that the answer is you don't learn anything else about S. But if Miguel's here, maybe he could say something; or maybe he's not in Bures. Anyway, my memory is that Miguel has thought about that and the answer is no. He answered in the chat; yeah, okay, good. Question: have you thought about some more geometric formulation of the L_k operators on the PT side? No, that's a good question; meaning some geometric construction that gives this without me having to define all the terms by hand, right? Yeah, it would be good if someone could just tell me what this class is. Go for it. I don't know how to do it, but it's a good direction. Maybe I have some ideas from the ways these were formulated, but... Comment: in the Hilbert scheme case, the algebra of derivations of the cup product was studied in this paper of Gillard, at least in the case of K3 surfaces, so maybe there is some hope to connect them. Yeah, it's a good direction. I think this is the question of trying to prove the Virasoro constraints directly in the geometry of sheaves, which, as I said, is a direction I'd be very happy for someone to pursue. Sometimes I feel that knowing all of Gromov-Witten theory makes it harder to work on that problem; it might be better not to know anything about curves and just try to think about some structure on the sheaf side. Another question: does there exist some blow-up formula for PT counts? You know, the way one can think about blow-up formulas is in terms of degeneration.
To find a blow-up formula, you need some kind of universal formula about projective space blown up, and I don't know a closed formula for that. But using the degeneration ideas, you can make some progress, in the sense that if you're interested in a particular invariant, you can often get it; I just don't know a general formula. Any other questions? Question: for PT, or for the Hilbert scheme, do the Virasoro constraints uniquely characterize the descendant series? I don't think so; I would say the answer is no, at least for stable pairs. You could ask the same in Gromov-Witten theory, and I think the answer there is also no in general; in the semi-simple case it's yes. So maybe there's a bigger chance in the semi-simple case. If you take a toric threefold, then the Virasoro constraints on the Gromov-Witten side do characterize the descendants, but you have to use one additional piece of geometry, the topological recursion relations. This is explained in the paper of Gottman; there are different ways to think about it, but the most elementary one uses these topological recursion relations. On the stable pairs side, I guess that means the Virasoro operators alone won't uniquely determine the theory; you'd have to think about some replacement, something to play the role of the topological recursion relations. It's a good question, and my guess is one could think of some geometry that will play that role, but I don't know what. For the Hilbert scheme of points, I haven't thought about it at all; again, something you should ask Miguel. Any other questions? All right, thank you very much. Thank you again, thank you.