David has a dual affiliation with Alberta and Minnesota at present, so he fully qualifies as an Alberta representative. His title is "Cohomological field theories from gauged linear sigma models." Okay. Thank you for that introduction, and thank you to the organizers for inviting me to speak. I'm sorry that I wasn't able to make it in person. If you have a small robot body, maybe I can come on the hike with you all as well. That sounds exciting. Right. So I'm going to talk about cohomological field theories for GLSMs. This is based on joint work with Bumsig Kim. Sadly, Bumsig passed away last year, so this talk is dedicated to his memory. All right, so let me tell you about what we did. I'm going to talk about what GLSMs are, and then what the input data is for a cohomological field theory for GLSMs. I'll talk about some of the results, and some of the history of results in this direction, and if time permits, I'll talk about the construction. I should clarify: what's the norm for the length of these talks, is it about 50 minutes or one hour? More like 50 minutes. Okay, so I will most likely talk about those first three things. All right, so GLSMs. The data for a GLSM is a quintuple. We have a complex vector space V and a group acting on it, a subgroup Γ of GL(V) which is linearly reductive, together with a character χ of that group, and another character ϑ. That's the input data. From that you define the usual gauge group G, which is the kernel of the character χ, and θ, which is the restriction of this extra character ϑ to G. I'm going to explain roughly how to think about this in a moment. And you need to require that the stable locus with respect to this new character ϑ is the same as the semistable locus with respect to θ, which is the same as the stable locus. This condition is what's called a good lift.
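Since the blackboard isn't visible in this transcript, here is a hedged summary of the input data just described; the notation is my own paraphrase of standard GLSM conventions and may differ from the speaker's board:

```latex
% GLSM input data (paraphrase; notation is an assumption, not the speaker's verbatim)
\textbf{Data:}\quad (V,\ \Gamma,\ \chi,\ \vartheta,\ W), \qquad
V \cong \mathbb{C}^{n},\quad \Gamma \le GL(V)\ \text{linearly reductive},\quad
\chi,\vartheta \in \operatorname{Hom}(\Gamma,\mathbb{C}^{*}).
\\[6pt]
\textbf{Derived data:}\quad G := \ker(\chi), \qquad \theta := \vartheta\big|_{G}.
\\[6pt]
\textbf{Good lift:}\quad V^{s}(\vartheta) \;=\; V^{ss}(\theta) \;=\; V^{s}(\theta),
\quad\text{so}\quad \mathcal{X} := \big[\,V^{ss}(\theta)\,/\,G\,\big]
\ \text{is a smooth DM stack}.
\\[6pt]
\textbf{Properness:}\quad \operatorname{Crit}(W) \subset \mathcal{X}
\ \text{is proper, although}\ \mathcal{X}\ \text{itself usually is not}.
```

The superpotential W and the properness condition are stated a moment later in the talk; they are included here so the quintuple is complete in one place.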
And this is like requiring that your quotient stack is nice, a Deligne–Mumford stack; this condition here is what's called a good lift. And then we also require, and here I should include the superpotential W, a polynomial function on V which I forgot to list in the quintuple, that the critical locus of W is proper. So that was the precise definition. Okay. But roughly, you can just think of this as an affine GIT quotient with a superpotential, such that the requirements are really saying that this GIT quotient is a smooth DM stack, not necessarily proper, usually not proper, but the critical locus is proper. And this data, a GLSM, specializes, as probably many people are aware, to complete intersections, or for W equal to zero just to the following spaces, for example: projective space, or more generally toric varieties, or Grassmannians, or more generally quiver varieties, etc. Okay. So, what's the goal? The goal is to produce an enumerative theory for GLSMs. That's the goal of this talk, and I'm going to tell you what that means, what goes into it, and what the history of it is. So, you want an enumerative theory for GLSMs which specializes to Gromov–Witten theory of complete intersections in GIT quotients like the ones I just mentioned, or which specializes to Fan–Jarvis–Ruan–Witten theory when G is finite, in which case you just have a finite quotient. Any questions? Right, so a basic example, probably lots of people are comfortable with this. Take ℂ* acting on ℂ^{N+2}: you have the projective variables, and you add another variable, called a P-field sometimes. You take the usual weights as on projective space, and then weight −d on this extra variable. You can choose two different characters: θ₊ is the identity character and θ₋ is the inversion character.
So, you choose this data and fix a homogeneous polynomial F of degree d, and then you set W to be p times F. Okay, so in the positive chamber, with the positive choice of stability θ₊, this is actually going to give the Gromov–Witten theory of this hypersurface in projective space. And with the negative choice θ₋, this inverted character, this gives the Fan–Jarvis–Ruan–Witten theory of, well, of the corresponding finite quotient. So let me talk about what it is we mean to construct for these GLSMs. What's the data that goes into a cohomological field theory? Well, you have the state space, which is just a graded ℂ-vector space. It has a supercommutative pairing and a unit element; the unit is just a distinguished element, but it will satisfy certain properties. And then the main data is a collection of maps: for each g, each r, and each d, you get a map from the r-th tensor power of this graded vector space to the cohomology of the moduli space of genus-g, r-marked curves. These are the correlators. This data, which was the data of Gromov–Witten theory, was axiomatized by Kontsevich and Manin. They showed that in Gromov–Witten theory this collection of data, which I'll run through in the next example, satisfies a whole group of axioms which I'm not going to go through. These axioms essentially encode certain operations on moduli spaces of curves. So, for example, if I want to glue two curves together, there's a map from the product of two moduli spaces of curves to another moduli space which takes those two curves and glues them. And these axioms say that this data is natural with respect to all of those natural maps between the moduli spaces. Okay, so take for example the Gromov–Witten theory of a smooth variety Z. We denote the moduli space of maps from a curve to Z by this.
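For concreteness, the hypersurface example just discussed can be recorded as follows. This is a sketch: the weight conventions and the identification of the two phases are my reconstruction of the standard picture, not read off the board:

```latex
% Hypersurface GLSM (sketch; conventions are an assumption)
V = \operatorname{Spec}\mathbb{C}[x_0,\dots,x_N,\,p] \cong \mathbb{C}^{N+2},
\qquad
\mathbb{C}^{*} \curvearrowright V\ \text{with weights}\ (1,\dots,1,\,-d),
\qquad
W = p\cdot F(x_0,\dots,x_N),
\\[6pt]
\theta_{+}:\quad \mathcal{X}_{+} = \operatorname{Tot}\!\big(\mathcal{O}_{\mathbb{P}^{N}}(-d)\big),
\qquad
\operatorname{Crit}(W) \cong \{F = 0\} \subset \mathbb{P}^{N}
\ \rightsquigarrow\ \text{GW theory of the hypersurface};
\\[6pt]
\theta_{-}:\quad \mathcal{X}_{-} = \big[\,\mathbb{C}^{N+1}\,/\,\mu_{d}\,\big]
\ \rightsquigarrow\ \text{FJRW theory of the pair}\ (F,\ \mu_{d}).
```

In the positive phase the semistable locus excludes the origin of the x-variables; in the negative phase it excludes p = 0, leaving the finite quotient where FJRW theory lives.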
All right, and then we want to build Gromov–Witten theory, so I need to give you a state space. It's just the cohomology, say the de Rham cohomology, of Z; I should say Z here is smooth and compact. And then the pairing is just integration. Now, using the Künneth formula, the r-th tensor power is just the cohomology of Z^r. And you can pull back: you have an evaluation map from this moduli space to Z^r, based on taking this curve, which has r markings, and evaluating the map at each marking, which gives a point of Z. So this evaluation map from the moduli space to Z^r just takes the values of the map at the r marked points of the curve. Okay, so using that map, we can pull back cohomology classes to the cohomology of this moduli space of maps. And then you cap: what's well known is that you cap with something called the virtual cycle, which lands you in the homology of this moduli space. Then you can use the forgetful map, which forgets the map to Z and just remembers the curve. Pushing forward along that forgetful map, you land in the homology of the moduli of curves. So now this is just the usual moduli of curves, you don't have a map to Z anymore, and this is a nice smooth proper Deligne–Mumford stack, so you have Poincaré duality, and you land in the cohomology of the moduli of genus-g, r-marked curves. This composite map is called the correlator. Any questions? Okay, so I'm going to explain how this works for GLSMs. Well, I need to give you what the data is. The first thing I need to tell you is the state space: what is this graded vector space in general for GLSMs? It is the hypercohomology of, essentially, the exterior algebra on the cotangent bundle, but on the inertia stack of your GIT quotient. And it's not just hypercohomology of that, because we have a superpotential: we have to add a term to the differential of this complex.
So this is what's called the twisted Hodge cohomology of the inertia stack. Let me take a second to explain what these things are. First of all, you have the inertia stack. This is defined intrinsically as the fiber product of the two diagonal maps of the stack, but for a nice smooth Deligne–Mumford stack it has this explicit form: it just has a bunch of components. So, this notation probably looks odd; it means conjugacy classes. G is acting on itself by conjugation, so G mod G denotes conjugacy classes. For each conjugacy class, you take the fixed locus of that element in the semistable points, and you quotient by the centralizer, and you get a space. So this inertia stack is concretely the disjoint union of these pieces. Now, whatever that is, I'm plugging it into the twisted Hodge complex. So the state space actually has a bunch of summands indexed by the conjugacy classes of G, because I'm taking the cohomology of the complex for each of these components of this space. And on each of those summands I'm taking this twisted de Rham, sorry, twisted Hodge complex. You have the usual exterior powers of the cotangent bundle; you know, if you took the p-th cohomology of the q-th wedge power, you'd get the H^{p,q} of your manifold. But now, since we have a superpotential, we add this differential. This differential is just wedging with dW; dW is an element of this algebra, so this is basically multiplication by dW. Okay. And now what do you do? Well, this is a complex of sheaves on this space, so I want to take its hypercohomology, so I need to resolve it by a suitable complex and then take cohomology. Probably the most concrete way to think about this is via the Dolbeault resolution: if you take the Dolbeault resolution, then, concretely, you can think of this as the cohomology of that resolution, where you just have ∂.
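The state space just described is, schematically (my own transcription of the formulas, with notation that may differ from the board):

```latex
% Inertia stack and twisted Hodge state space (paraphrase)
I\mathcal{X} \;=\; \coprod_{[g]\,\in\, G/\!\sim}\
\Big[\, \big(V^{ss}\big)^{g} \,/\, C_{G}(g) \,\Big],
\\[6pt]
\mathcal{H} \;=\; \bigoplus_{[g]}\
\mathbb{H}^{\bullet}\!\Big( I\mathcal{X}_{[g]},\
\big(\,\Omega^{\bullet}_{I\mathcal{X}_{[g]}},\ \ dW\wedge(-)\,\big) \Big),
```

that is, one summand per conjugacy class, each computed from the exterior powers of the cotangent bundle with differential "wedge with dW". For W = 0 this recovers the usual Hodge cohomology H^{p,q} of each component.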
So, since we used ∂̄ in the resolution, we lose ∂̄, but we still have ∂. Okay, anyway, so this is just some graded vector space you can define. And it satisfies the usual Künneth formula: you just take the r-th power here, and then you take the Thom–Sebastiani sum of the potentials. Any questions? So, next, I need to give you a pairing, and for the pairing I need to set things up a bit. It is going to be just integration, but properly set up. So if I have a hypercohomology class twisted by dW and another hypercohomology class twisted by dV, then their wedge product is actually a hypercohomology class twisted by the Thom–Sebastiani sum, sorry, by the sum of the two functions. You can see this, for example, by wedging Dolbeault classes, and they just land there. There's also an intrinsic way to define this wedge product without a model for the hypercohomology. So, the next thing you need is this inversion map, which I'll call ι. Because this inertia stack is indexed by the conjugacy classes of G, given an element x in a component indexed by the conjugacy class of g, we just send it to some root of unity times x, but in the component indexed by g inverse. Okay, so what is this extra datum? It's just a root of unity of order 2d, where d is the homogeneous degree of W; in other words, it's like a square root of a d-th root of unity. So what's the advantage of that? Well, it induces an isomorphism from the hypercohomology of this inertia stack twisted by dW to the same thing but twisted by −dW. Okay. So with that set up, if we wedge α with the pullback of β, the twists add up to zero.
So the pairing is defined as the integral of α wedge ι*β. Given α and β both twisted by dW, ι*β is now twisted by −dW. Okay. So when I wedge them together, the twist becomes zero, and the product lies in hypercohomology with zero twisting, which is just usual cohomology, and I can integrate it. Okay, so now what's the unit? For this, I need to define a couple of things. The group Γ has a map to what I'm going to call ℂ*_R; this is the R-charge action, and Γ is an extension of ℂ*_R by G. Okay. And now, W was a homogeneous polynomial of degree d with respect to this R-charge action. The weights of this action, call them c_i, I want to assume are positive. Sorry, not positive but non-negative. And I also want to assume that taking the fixed locus of the R-charge action commutes with quotienting by G; that is, quotienting by G and taking the R-charge fixed locus commute. So, let me comment on that. The GLSM invariants don't actually depend on the choice of R-charge action, that is, on the choice of this lift, but in order to define the unit, you want your choice to have this compatibility. Okay. So given that, we can define the unit as, essentially, the Chern character of a matrix factorization supported on the fixed locus of the R-charge action. I'm going to write it like this, where 𝒳 is just standing in for my GIT quotient; so it's this fixed locus, sitting inside that space. Given that, I define a matrix factorization of W. But W, by the non-negativity condition, restricts to zero on this fixed locus. Sorry, I should also require that not everything is fixed, so I want to require this. So, by this condition W restricts to zero on the fixed locus, and these two conditions ensure that this is a matrix factorization of W, and that it lives on this space.
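The inversion map and pairing described above can be summarized as follows; again, the notation is a paraphrase of the standard conventions rather than the speaker's board:

```latex
% Inversion map and pairing (paraphrase; notation is an assumption)
\iota:\ I\mathcal{X}_{[g]} \longrightarrow I\mathcal{X}_{[g^{-1}]},
\qquad x \longmapsto \zeta\cdot x,
\qquad \zeta^{2d} = 1,\ \ d = \deg W,
\\[6pt]
\iota^{*}:\ \mathbb{H}^{\bullet}\big(I\mathcal{X},\,(\Omega^{\bullet},\ dW\wedge)\big)
\ \xrightarrow{\ \sim\ }\
\mathbb{H}^{\bullet}\big(I\mathcal{X},\,(\Omega^{\bullet},\ -dW\wedge)\big),
\\[6pt]
\langle \alpha,\ \beta \rangle \;:=\; \int_{I\mathcal{X}} \alpha \wedge \iota^{*}\beta,
```

where the integrand is an ordinary cohomology class because the twists dW and −dW cancel by Thom–Sebastiani, so the integral makes sense.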
Okay, and then there's something called the virtual Riemann–Roch theorem. The Chern character alone would be like a quantum K-theory class, but for virtual Riemann–Roch you want Todd corrections, which you put here: you multiply by the Todd class of the J-fixed locus of this thing, relative to the R-charge fixed locus, where J is the intersection of these two groups. So this is the unit, and what is this thing? It lies inside our state space. This Chern character, in this generality, was defined by Kim and Polishchuk. So this is a matrix factorization, and its Chern character is actually a twisted Hodge class. Any questions? No, I don't think so. Right. So we've got the state space, we've got the pairing, and we've got the unit. The last thing to define is the correlators, and this is really the whole body of the paper and would take a very long time, but I'll give you the idea. So, there's something called LG quasimaps. If I give you GLSM data, I have an analog of stable maps; basically, you can think of these as maps into my GLSM, so maps from curves into my GLSM data. This moduli space I'll just call M_{g,r,d}; now M_{g,r,d} just means this huge moduli space of LG quasimaps, curves into the GLSM data. If I have time I'll say a little bit more about what those are, but right now I just want to give you the idea. So, I'm going to embed this moduli space, which is horribly smooth, sorry, not horribly smooth, horribly singular, right, I don't know if horribly smooth is a thing. So, I want to embed this moduli space into something smooth. I'll call that U, and it will also depend on the genus, the number of markings, and the degree. Okay. All right, so now this is something smooth, and I want to create this embedding. So remember that I had two maps.
In the Gromov–Witten setting, and also in this setting, I have these two maps. I have an evaluation map, which now actually lands in the inertia stack. All right. And I also have a forgetful map, which just lands in the moduli space of curves. So if I'm going to embed this into a smooth space, I want to do it in such a way that the embedding extends the evaluation and forgetful maps. And now the virtual cycle is actually constructed on the smooth space, but it's supported on this moduli space. The virtual cycle I'll denote by this notation, M_{g,r,d} virtual; maybe the g, r, d could go in here, anyway, it's just notation. And what is that? Maybe if I have time I'll tell you a little more about how we construct it, but for now, you need to know that there's something called the virtual cycle, which is an element of the hypercohomology supported on this moduli space. And here, you take the twisted de Rham cohomology on this nice smooth space U_{g,r,d}, and here you put the twist by the exterior derivative of the pullback via the evaluation map. This is just a complicated expression for W, basically: it's d of the pullback, via the evaluation map, of the sum of W over the r factors. Now, this moduli space here is proper, by a theorem of Fan, Jarvis, and Ruan. And so this is actually a cohomology class with compact support. All right. So what's the advantage of that? Well, because I have compact support here, I can take the pushforward with compact support. All right, so I have this pushforward because of this. I also have capping with my virtual class. And notice, my virtual class, there should be a minus sign here, it's important: my virtual class is twisted by this potential, but with a minus sign. Okay. So if I take a hypercohomology class here, and I cap with this, I cancel this twisting.
So clearly there's a zero twist here, and this is now the usual hypercohomology of the Hodge complex, so these are the usual H^{p,q}'s. When we push forward, this is the usual H^{p,q}, and here's just the pullback. Okay. So we use our Künneth formula, we pull back along this evaluation map, we land in this smooth space, we cap with this class, which gives us a compactly supported class and cancels the twisting, and then we push forward. And what's the result? This whole composition is our correlator. That's how we define our correlators. Any questions? All right, so I think I have just five more minutes. That's about right. Okay. So I'll just tell you a little bit of the history of the results and some of the things we've proved here. These enumerative invariants for GLSMs were first constructed by Fan, Jarvis, and Ruan when G is finite. Then Polishchuk and Vaintrob constructed a purely algebraic version where the virtual class comes from a matrix factorization, and that's the construction we generalized here: we construct a matrix factorization whose Chern character is the virtual cycle. That was also the G-finite case. And then Kiem and Li also did the G-finite case using what's called cosection localization. So all of these were FJRW theory, Fan–Jarvis–Ruan–Witten theory, the G-finite case. Then Fan, Jarvis, and Ruan followed up and proved this for what's called narrow sectors, but for general GLSMs. Narrow sectors means not taking the primitive cohomology: the state space only uses the components which are pullbacks from the ambient GIT quotient. So, for example, this works for projective spaces and, you know, Grassmannians, toric varieties, these kinds of things, but not for the complete intersection part, not the cohomology that's not coming from the pullback from that toric variety or Grassmannian or whatever it is.
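Before the history, here is the assembled shape of the correlator construction just sketched, in my own schematic notation (the precise twists and supports are as described above, not as written here):

```latex
% GLSM correlator, schematically (paraphrase; notation is an assumption)
\Omega_{g,r,d}(\alpha_1,\dots,\alpha_r)
\;=\;
p_{*}\Big(
  \operatorname{ev}^{*}\!\big(\alpha_1 \otimes \cdots \otimes \alpha_r\big)
  \ \cap\ \big[\,M_{g,r,d}\,\big]^{\mathrm{vir}}
\Big)
\ \in\ H^{\bullet}\big(\overline{M}_{g,r}\big),
```

where ev is the extended evaluation map from the smooth ambient space U_{g,r,d} to the r-fold product of the inertia stack, p is the extended forgetful map, and the virtual cycle is supported on the proper LG quasimap space and twisted by minus the pulled-back potential, so that the cap product cancels the twisting and the compactly supported pushforward lands in ordinary cohomology.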
So then, together with Ciocan-Fontanine, Guéré, Kim, and Shoemaker, we constructed these invariants for convex hybrid models, in all sectors. And finally, with Bumsig Kim, we completed the general case. So let me state some of the results about this, and then I'll be done, I'll finish there. So in the theorem with Ionuț Ciocan-Fontanine, Jérémy Guéré, Bumsig Kim, and Mark Shoemaker, the main statement was that the enumerative invariants for convex hybrid models specialize to FJRW theory. So maybe that's not the main theorem, but it's part of it; this part is actually basically by construction, since this construction just generalizes the construction of Polishchuk and Vaintrob. But then there's an "and": they also specialize to Gromov–Witten theory. This was far less obvious; it was actually real work. But there's a caveat: it uses the cosection-localized virtual cycle. That is, strictly speaking, what we proved, and it wasn't known at that time that the cosection-localized virtual cycle agreed with the Behrend–Fantechi cycle. So Bumsig Kim and Jeongseok Oh went on and improved this: the caveat can actually be removed, because the cosection-localized virtual cycle agrees with the Behrend–Fantechi virtual cycle, that is, with the usual definition of Gromov–Witten theory, up to sign. Okay. So then we generalized this construction. And actually, we also proved that these general invariants form a cohomological field theory: we verified all of the axioms, which I named but didn't detail for you, for these invariants which we constructed. All right, I think that's a good place to stop. So, any questions from the audience? I know it's late. Is there progress, I mean, I see there is a lot of progress in the theory and that you can relate these things to each other, but can you calculate them? Yeah, sorry, it's a little hard to hear you, but you're asking me if I can calculate them? That's what I'm doing, yeah.
But if you ask Melissa Liu whether she can calculate them, I think the answer may be yes in some cases. She's calculated these in the abelian case using the Morrison–Plesser construction. So some of these moduli spaces are completely toric, and then everything can be computed by localization or something like that. Sorry, can you use torus localization to do this, or how? So there's no, yeah, there's no ℂ* localization theorem at the moment for these, but I think that's maybe something Melissa is working on. I can tell you that she can calculate these in certain examples, and has shown that, you know, they analytically continue when you vary this stability, for example. Okay, thanks. Thanks a lot. Any other questions? So, does your construction give any insight into what should happen on the other side of the mirror correspondence? That's a great question. In terms of, yeah, what should be the B-model theory, I actually don't know the answer to that; maybe that's for the next ten years. Yeah, like Saito–Givental theory, or whatever it is, in this generality. Yeah, just given that you have what really seems to be the right generalization for GLSMs, it would be very natural to look for it. Yeah, certainly, I mean, I don't even think we know how to mirror general GLSMs. Maybe I'm lying, but if G is an arbitrary, you know, non-abelian thing, maybe in the toric setting you know, but for arbitrary G even the mirror is open. And then, yeah, what's the B-side theory for arbitrary GLSMs, maybe the audience knows better than I do. Okay, well, let's thank David again. And David, I guess I'll see you tomorrow. Yeah, I'll see you tomorrow. Okay. Thank you.