So let me just remind you where we left off last time. We were considering symplectic dual pairs: two symplectic resolutions, and symplectic duality was some list of relationships between these two symplectic resolutions. The one I discussed the most was the relationship on homology, so let me recall that. We had torus actions on these two symplectic resolutions, and we considered the attracting sets for these torus actions. And then we made a statement about homology. In fact, let me say that I said the right thing last time: recall that H of some variety Z here means the top Borel-Moore homology. And last time I said, and it was correct, that if we take the attracting set Y plus in Y, then its top Borel-Moore homology actually coincides with its total Borel-Moore homology. So just for this Y plus we can think either top Borel-Moore homology or total Borel-Moore homology. And this total homology decomposes by the decomposition theorem: after applying the decomposition theorem and the hyperbolic stalk functor, we decompose it according to the strata downstairs, the symplectic leaves downstairs. We get the top homology of the attracting set in each stratum closure, tensor the top homology of the fiber over a point in the stratum. Recall here that F alpha is pi inverse of some point, little x alpha, where little x alpha lies in some stratum, capital X alpha, and the resolution map is called pi. And under symplectic duality, we have a bijection between strata in the dual varieties, an order-reversing bijection. And what's also reversed is the roles of the fibers and the attracting sets. So we have some shrieks here; shriek is just my notation for symplectic dual. And then we have equalities going like this.
And these equalities are not just equalities of vector spaces; they're actually given by bijections between the irreducible components on the two sides. Okay, I guess I can move this guy back a little. Okay, so that's just a quick recollection. By the way, I listed a number of structures matching on both sides of the symplectic duality, and I want to point out that there are some more structures I didn't talk about. Let me point out two more, continuing the numbering. So this will be four. This fourth one I may come to later if I have time, at the end of the lectures: it's called the Hikita conjecture. So maybe if we have time, we'll come to that. And number five is a matching of stable envelopes, particularly elliptic stable envelopes. This will actually be the topic of Richard Rimanyi's lecture next week, so you should tune in for that; I think it's toward the end of next week. And I should say: we have these different structures that match, and you might ask what the relationship between the matchings is — do certain ones imply other ones? Mostly I would say no, we don't know that the matching of certain structures implies the others, but usually there are compatibilities between them, not a perfect implication. I think the situation is very analogous to usual 2D mirror symmetry, where, as you know, you have different structures — matching of Hodge diamonds, homological mirror symmetry, and many other things I'm not an expert on — and I believe it's not that one implies all the rest, but that there are interrelations between these different matchings. So it's a similar story here.
And I would expect that there are probably more such structures that people just haven't thought about, or maybe they have and I just don't know about it. Okay, so today what I'd like to do is talk about a particular example of dual pairs, which turns out to be, in some sense, the main example — at least for me it's the main example. It concerns quiver varieties and affine Grassmannian slices. And after doing that, if we have time today, if not tomorrow, we'll get into the Braverman-Finkelberg-Nakajima construction. Okay, so to set the stage, I'm going to start by recalling a little bit more about quiver varieties. Let me fix the Lie algebra: g is a semisimple Lie algebra, and I'm going to require it to be simply laced, in other words of ADE type. And let me associate some data to this semisimple Lie algebra: I take two dominant weights, and from these dominant weights I'll extract two vectors of numbers. We write the first dominant weight, lambda, as a linear combination of the fundamental weights, with coefficients w_i, and we write the difference lambda minus mu as a linear combination of the simple roots, with coefficients v_i. And I demand that my lambda and mu are not arbitrary but chosen so that these v_i are integers; so the w_i and v_i are not just integers but natural numbers. In both these sums, i ranges over the set capital I of vertices of the Dynkin diagram. The fact that the v_i are natural numbers is equivalent to lambda being greater than or equal to mu in the usual partial order on dominant weights. Then one more piece of data I'm going to fix: I write lambda as an ordered sum of fundamental weights lambda_1 through lambda_N. So each of these lambda_i is fundamental; the omegas that appear there are fundamental weights.
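Since the v_i are obtained by inverting the Cartan matrix applied to lambda minus mu (written in fundamental-weight coordinates), this condition is easy to check in examples. Here is a small sanity check in Python — not from the lecture, and the helper names `cartan_A` and `solve` are my own — for the type A_3 choice lambda = 2*omega_2, mu = 0:

```python
from fractions import Fraction

def cartan_A(n):
    """Cartan matrix of type A_n (simply laced)."""
    return [[2 if i == j else -1 if abs(i - j) == 1 else 0
             for j in range(n)] for i in range(n)]

def solve(C, b):
    """Solve C v = b exactly over Q by Gauss-Jordan elimination."""
    n = len(C)
    M = [[Fraction(C[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        M[col] = [x / M[col][col] for x in M[col]]
        for r in range(n):
            if r != col and M[r][col] != 0:
                M[r] = [a - M[r][col] * p for a, p in zip(M[r], M[col])]
    return [M[i][n] for i in range(n)]

# lambda - mu = 2*omega_2 in type A_3 has fundamental-weight
# coordinates (0, 2, 0); its simple-root coordinates v solve C v = b.
v = solve(cartan_A(3), [0, 2, 0])
print(v)  # [1, 2, 1] -- all natural numbers, so lambda >= mu
```

A choice like lambda = omega_1 + omega_2 + omega_3 would give non-integral v, which is exactly the situation the integrality demand rules out.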
So there will be w_1 copies of omega_1 among this list, w_2 copies of omega_2, and so on, but I'm fixing an ordering on them, so there's a tiny bit more choice fixed here. And the last thing I'm going to require is that all these lambda_i are not just fundamental but actually minuscule. In type A this is no requirement at all, since every fundamental weight is minuscule, but in type D there are only three minuscule fundamental weights, so it's a very strong requirement. And in type E8 there are no minuscule fundamental weights, so it means I can't deal with E8. Now, this assumption is not strictly necessary, but we'll see later why I made it, both on the quiver variety side and on the other side. And if you're familiar with quiver varieties, you might ask why I bothered saying that mu was dominant; well, we'll also see the answer to that question. So associated to this data, we consider representations: we have these fundamental representations and I tensor them together. So this is the tensor product of the fundamental representations of our Lie algebra, and I'll write it as V lambda underline: lambda underline denotes the list, V lambda underline denotes the tensor product. And then I'm really interested in the mu weight space of this tensor product, okay? What about this weight space? Well, we can decompose the tensor product into irreps, the way we decompose any representation: we get multiplicity spaces tensored with the irreps, summed over dominant weights nu. And then in particular we can look at the mu weight space in there. So this is the decomposition of a weight space of a tensor product into multiplicity spaces tensor weight spaces of irreps. So let's go to quiver varieties. What do we do?
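For g = sl_2 this decomposition is completely explicit, and a short script can verify the bookkeeping: the weight-m space of (C^2)^{tensor n} has dimension binom(n, k) where m = n - 2k, and the multiplicity of the irrep of highest weight n - 2k is binom(n, k) - binom(n, k-1). This is my own illustration, not part of the lecture:

```python
from math import comb

def sl2_weight_dim(n, m):
    """dim of the weight-m space of (C^2)^{tensor n}: count tensor
    words with k lowered factors, where m = n - 2k."""
    if (n - m) % 2:
        return 0
    k = (n - m) // 2
    return comb(n, k) if 0 <= k <= n else 0

def sl2_mult(n, j):
    """Multiplicity of the irrep V(j) of highest weight j in
    (C^2)^{tensor n}, by the standard difference of binomials."""
    if (n - j) % 2 or j < 0 or j > n:
        return 0
    k = (n - j) // 2
    return comb(n, k) - (comb(n, k - 1) if k >= 1 else 0)

n = 5
# Each sl2 irrep has one-dimensional weight spaces, so the weight
# space decomposes as a sum of multiplicity spaces:
assert sl2_weight_dim(n, n - 2) == sl2_mult(n, n) + sl2_mult(n, n - 2)
print(sl2_mult(n, n - 2))  # 4, i.e. n - 1: dim Hom(V(n-2), (C^2)^{tensor n})
```

This n - 1 is exactly the multiplicity space that will reappear in the running example below.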
We choose an orientation of the Dynkin diagram, which won't really matter later in the day, but for now we choose one. That oriented Dynkin diagram is going to be called the quiver; it's just a directed graph. And then we have dimension vectors coming from our v's and w's. Let's draw an example, say for sl_4: the Dynkin diagram is of type A3, so I have three vertices, my arrows go to the right, and then I have three framings. Last time somebody pointed out I wasn't very consistent about which direction the arrows to the framings go; maybe this time I'll be more consistent and send them to the framings. So the w's determine the framings and the v's determine the gauge vertices. Last time I explained that from this choice we get a group G, which is the product of the GL(v_i), and a representation of G, which I'll call N: the direct sum of Hom(C^{v_i}, C^{v_j}) over edges (i,j) of the quiver, and Hom(C^{v_i}, C^{w_i}). Then we form the Nakajima quiver variety, and I'll use this new notation: I'll call it M lambda mu. By definition, we take the cotangent bundle of the vector space N and take the symplectic (Hamiltonian) reduction by the action of G as a projective GIT quotient, and, like I did last time, I put two parameters: the first parameter refers to the level of the moment map we take and the second to the GIT parameter for the GIT quotient. Chi here is a character of our group G, and we just take it to be the product of the determinants. So that's the definition of the Nakajima quiver variety — well, of the smooth one, and this is the definition of the affine one, M bar lambda mu: the same thing, but at the zero GIT parameter. And now we can state theorems of Nakajima. One is that M lambda mu is smooth and symplectic, that M lambda mu is a resolution of M bar lambda mu, and that T acts on M lambda mu with finitely many fixed points. Oops, I didn't describe T.
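Since M lambda mu is a symplectic reduction of T*N by G, its expected complex dimension is 2(dim N - dim G). Here is a small sketch — my own helper, with the caveat that the formula assumes the quotient really is smooth of expected dimension, as it is for generic stability parameters:

```python
def quiver_variety_dim(edges, v, w):
    """Expected complex dimension of the Nakajima quiver variety:
    2*(dim N - dim G), where N is the sum of Hom spaces along the
    quiver edges plus the framings, and G is the product of GL(v_i)."""
    dim_N = (sum(v[i] * v[j] for (i, j) in edges)
             + sum(vi * wi for vi, wi in zip(v, w)))
    dim_G = sum(vi * vi for vi in v)
    return 2 * (dim_N - dim_G)

# One vertex, no edges, v = 1, w = n: this is T*P^{n-1}, dim 2(n-1).
n = 4
print(quiver_variety_dim([], [1], [n]))  # 6 = dim T*P^3

# The A3 example drawn above with v = (1,2,1), w = (0,2,0),
# arrows 0 -> 1 -> 2:
print(quiver_variety_dim([(0, 1), (1, 2)], [1, 2, 1], [0, 2, 0]))  # 4
```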
Back up one second. I have a torus acting on my quiver variety. Where does it come from? Well, I have these framing vertices, the squares. (By the way, Allen Knutson told me why they're called framings and drawn as squares: because it looks like a picture frame.) The general linear group of the vector space at those framing vertices acts, and in particular we consider the torus which is the product of the diagonal tori inside those groups. That's our choice, and it acts with finitely many fixed points. By the way, right here we use that mu is dominant, and right here we use that the lambda_i are minuscule; those assumptions are used in this theorem. Two: the symplectic leaves of this affine quiver variety, the singular guy, are given by the regular loci of the smaller singular guys. Joel? Yeah? We have a question. Oh, great. "Is it possible to realize tensor products of Verma modules similarly?" Yes, similarly — not exactly in the same way, but similarly. It's a good question, and it will actually come up a little later, when we talk about affine Grassmannian slices and Coulomb branches. There are maybe two approaches you could use. One is to take these framings to infinity; of course, then the representation gets bigger and bigger and becomes more like a Verma module. Another approach is to get rid of the framings altogether, but then realize the representation in a slightly different way. If you get rid of the framings and do something slightly different, people sometimes call that the Lusztig nilpotent variety, and that can be used to realize a Verma module. But it doesn't quite fit in the same framework I'm talking about, so I won't talk about it.
Okay, so the symplectic leaves of these singular affine quiver varieties are given by the regular loci of the smaller affine quiver varieties M bar nu mu — that's a nu there, by the way — where nu ranges over dominant weights trapped between mu and lambda. Part three: there's an action of our Lie algebra g on our friend from before — we take the smooth quiver variety, take its attracting set, take its Borel-Moore homology, and there's an action of g on that. So here we have the fundamental decomposition I talked about at the beginning of today: homology of attracting sets of strata downstairs, tensor homology of fibers. Here F nu denotes the fiber over a point in the nu stratum inside the quiver variety M bar lambda mu. And this matches the tensor product decomposition I mentioned just a few minutes ago: the homology of the attracting set in the total space is isomorphic to the mu weight space in the tensor product, and this matches the decomposition. Let me make sure I'm careful about which way around it matches — yes, this way around: the homology of the attracting sets becomes the Hom spaces into the tensor product, the multiplicity spaces, and the homology of the fibers becomes the weight spaces of the irreps. Equality here, equality here, equality here, and so on. We have another question in the Q&A. Go ahead. "Sorry, what's the map?" Well, there's always a map from a projective GIT quotient to the corresponding affine GIT quotient. The projective one is Proj of some graded ring and the affine one is Spec of some ring, and — I have to think for a second why that gives the map — the affine quotient is Spec of the degree zero part A_0 of the graded ring whose Proj we're taking. So there's always a map like this, from the Proj of a graded ring to the Spec of its A_0. I didn't define the projective GIT quotient; I can if you like, but it would take us a lot of time.
Okay, let's look at an example. This will be my running example in this section, and it's a very simple example. Take g to be sl_2, lambda to be n copies of the first fundamental weight — I mean, there's only one fundamental weight for sl_2 — and mu to be lambda minus alpha, the simple root. So the representation in question: the fundamental representation is C^2, so we're interested in C^2 tensor n, and in the n minus 2 weight space of C^2 tensor n. Now C^2 tensor n decomposes into many irreps, but only two of those irreps have an n minus 2 weight space, so there are two summands in this decomposition: the irrep V(n) with highest weight n, whose n minus 2 weight space occurs once, and the irrep V(n-2). Let me write it in my Hom notation: Hom(V(n), C^2 tensor n) tensor the n minus 2 weight space of V(n), plus Hom(V(n-2), C^2 tensor n) tensor its n minus 2 weight space. And everything in sight is one dimensional except for this factor here, Hom(V(n-2), C^2 tensor n), which is n minus 1 dimensional. So what's our quiver in this example? We just have one vertex, with v equal to 1, because the coefficient of alpha is one, and w equal to n. So our quiver variety is the cotangent bundle of P^{n-1}, resolving the variety of n by n matrices of rank less than or equal to one and square zero. And we saw this before: there are two strata. And we saw before that the decomposition of the homology of the attracting set looks like — well, the only really interesting part is here, where you have all these orbital varieties: you have the top homology of this space of upper triangular matrices, which I'll call X plus. So X plus is the upper triangular, square-zero, rank less than or equal to one matrices.
And that's responsible for this n minus 1: the attracting set X plus has n minus 1 irreducible components. Here we have the fiber F_1, I guess I called it last time, over the open stratum. And here I have X_0, the attracting set of the point stratum, which is just the point, and the fiber F_0 over that point, which is just P^{n-1}, with one-dimensional top homology. So there we see the same decomposition. By the way, I don't know if I really said this before, but any cotangent bundle of a partial flag variety in type A can be realized as a Nakajima quiver variety, as can any resolution of a Slodowy slice in type A, as well as any resolution of an intersection of a Slodowy slice with a nilpotent orbit closure. So in type A, lots of things — I mean, anything you can think of — can be realized as a Nakajima quiver variety. Well, not really anything you can think of, because in a few minutes we'll think of some other things that you can't. Okay, great. So that's the quiver varieties. Now I switch to these affine Grassmannian slices. So let's take G to be the Langlands dual group of g, which looks a little weird, but it's not such a problem, because g is actually isomorphic to its Langlands dual Lie algebra: since it's of ADE type, you don't really notice the Langlands duality, and we can kind of ignore it. But I mention it for thinking about more general situations. And then we're going to be interested in the affine Grassmannian of G, which means I take G over the Laurent series and quotient by G over the power series. The affine Grassmannian will play the essential role in the remaining talks in this series, actually in two different roles: this is going to be the first way, and later we'll see a different appearance of the affine Grassmannian. Because I took the Langlands dual — the reason I mention it — this lambda I'm now going to think of as a coweight of G: a map from C star into the maximal torus of this group G.
And therefore we can define a point t^lambda in the affine Grassmannian of G coming from this lambda. For example, for SL_n this lambda would be some integers adding up to zero, and t^lambda would be the diagonal matrix with entries t^{lambda_1} up to t^{lambda_n}, viewed as a point of the affine Grassmannian. And we're going to be interested in orbits in the affine Grassmannian — two kinds of orbits. The first is called Gr^lambda: I take the power series group G[[t]] and its orbit through t^lambda. This is called a spherical Schubert cell, and its closure a spherical Schubert variety; it's an analog of the Schubert cells in a finite dimensional flag variety. One way to think about the affine Grassmannian is as a kind of G mod P, and these are the orbits of P on G mod P. These guys are finite dimensional: the whole affine Grassmannian is infinite dimensional, but these orbits are finite dimensional. In fact, the dimension is given by the pairing of lambda with 2 rho, but that's not very important for our purposes — or maybe it is important for our purposes, I suppose. The second thing I'm going to be interested in is called W_mu; these are transverse to the first orbits. To define them, I take a group transverse to my first group, which I denote G_1[t^{-1}], and take its orbit. We have to define this group: I take G of the polynomials in t inverse; I have an evaluation map to G sending t inverse to zero, and I take the kernel of that map — that's G_1[t^{-1}]. So you can think of matrices whose coefficients are polynomials in t inverse and which, modulo t inverse, are the identity matrix. I take the orbit of that group through t^mu and I get W_mu. So these are transverse orbits, and in particular they're infinite dimensional. The Gr^lambda are like Schubert cells and the W_mu are like the opposite Schubert cells.
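For diagonal points of the affine Grassmannian of SL_n everything is concrete: t^lambda is a diagonal matrix of powers of t, and the relative position of two such points is read off from the exponents. A toy sketch — my own function names, and it only models diagonal representatives, not general lattices:

```python
def t_lambda(coweight):
    """The point t^lambda in the affine Grassmannian of SL_n,
    represented just by its diagonal exponents (lambda_1, ..., lambda_n)."""
    assert sum(coweight) == 0, "an SL_n coweight must sum to zero"
    return tuple(coweight)

def relative_position(lam, mu):
    """The 'distance' from [t^lam] to [t^mu]: the class of
    t^{-lam} t^{mu} = t^{mu - lam}, made dominant by sorting the
    exponent differences in decreasing order."""
    return tuple(sorted((m - l for l, m in zip(lam, mu)), reverse=True))

lam = t_lambda((2, -1, -1))  # det = t^2 * t^-1 * t^-1 = 1, so in SL_3
print(relative_position((0, 0, 0), lam))   # (2, -1, -1)
print(relative_position((1, 0, -1), lam))  # (1, 0, -1)
```

So [t^mu] lies in Gr^lambda exactly when this sorted difference equals lambda.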
And then the main object of study will be denoted W^lambda_mu, the intersection of Gr^lambda with W_mu; and maybe even more important will be W^lambda_mu bar, the intersection of Gr^lambda bar with W_mu. This guy here is an affine variety, finite dimensional; in fact, the dimension of this W^lambda_mu bar is the pairing of 2 rho with lambda minus mu, where rho is half the sum of the positive roots — or maybe positive coroots. Okay, so one more construction. This W^lambda_mu bar is going to be our affine Poisson variety; it's Poisson. And now we're going to construct its symplectic resolution. To do that, we need one more construction: we form Gr^{lambda underline} — so I'm using the list now. This is just notation; the notation suggests that it's a kind of product of these Gr^{lambda_i}, but not exactly — a sort of twisted product. Okay, so that's just notation; here's the definition. It's the variety of sequences of points in the affine Grassmannian with a condition. Here the g_i denote elements of G of Laurent series, and g_i in brackets denotes the corresponding point in the affine Grassmannian. The condition is that [g_{i-1}^{-1} g_i] lies in Gr^{lambda_i} for all i. Okay, I admit this is probably a little confusing if you haven't seen it before. The way I like to think about it, one way I at least like to explain it, is that we're considering sequences of points in the affine Grassmannian like this: here are my g_1, g_2, g_3 and so on, and they have distances between them prescribed by these lambdas — this is distance lambda_1, distance lambda_2. So it's a variety of polylines in the affine Grassmannian: sequences of points with prescribed distances between them. The condition can be thought of as saying the distance between g_{i-1} and g_i is lambda_i.
And this distance is just some formal notion, but it actually has some genuine metric content. For example, if we take G to be PGL_2, then the affine Grassmannian of G is an infinite tree — or more precisely, the vertices of an infinite tree with P^1 branching. So it looks something like this: a tree, an infinite tree, and at every vertex there are many, many edges, in fact a whole P^1 of them coming out, and then it continues. The affine Grassmannian is just the points of this tree, and this distance is just the distance in the tree, measured along edges. Okay. So that's the definition of this Gr^{lambda underline}. And then finally we define W tilde^{lambda underline}_mu. First one piece of notation: this space Gr^{lambda underline} comes with a map, which I'll call m_{lambda underline}, to the affine Grassmannian, taking a sequence to its last point — so just remembering the last point of the polyline. And then I define W tilde^{lambda underline}_mu to be m_{lambda underline} inverse of W_mu; I just want that last point to lie in this slice W_mu. Okay. So here's a theorem about these spaces; I guess this theorem is basically due to myself with Ben Webster and Alex Weekes. The first point is that W tilde^{lambda underline}_mu is a symplectic resolution of W^lambda_mu bar; so in particular, it is a resolution and it has a symplectic structure, and W^lambda_mu bar has a Poisson structure. And then there's a torus — oops, okay — the torus here will just be the torus of the group G, acting by left multiplication. So T acts on this W tilde^{lambda underline}_mu with finitely many fixed points. Oh, and I promised to tell you: this first fact uses that the lambda_i are minuscule. And here we see a manifestation of something I mentioned last time: on the quiver variety side, the lambda_i being minuscule was needed to ensure finitely many fixed points.
And on the dual side, it's needed to ensure that we have a resolution. So that's the interplay between the two sides: one structure, existence of a resolution, matches the other structure, finitely many fixed points for the torus action. Okay, continuing in this vein. Another result is that the symplectic leaves of this W^lambda_mu bar are the W^nu_mu — or rather the regular loci of the W^nu_mu bar — for nu a dominant weight trapped between mu and lambda. And here we see a promised feature of symplectic duality, the bijection between the leaves: in both cases the leaves are indexed by those dominant weights trapped between mu and lambda. And then part four — this part is not really our theorem, it's just part of the geometric Satake correspondence of Mirkovic and Vilonen — says there's an isomorphism between the top homology (also the full homology) of the attracting set of this W tilde^{lambda underline}_mu and this mu weight space in the tensor product, compatible with the decompositions: so just an isomorphism of vector spaces, but compatible with these decompositions. Recall from the representation theory we have the decomposition into isotypic pieces, multiplicity spaces tensor weight spaces. And geometrically, sitting over the multiplicity space we have the homology of the fiber — that is, of m_{lambda underline} inverse of a point in the nu stratum — and over the weight space the homology of the attracting set in the leaf, and we have equalities. This last guy is sort of famous: its irreducible components are called the Mirkovic-Vilonen cycles. Joel? Yeah. A question? "Is this situation the symplectic dual of the quiver variety picture you presented first?" Yeah, that's the whole point — that's exactly what I'm about to say. But before I say that, I'll do a quick example. Take lambda and mu as above: lambda is n times the first fundamental weight and mu is that fundamental weight times n, minus alpha.
Then W^lambda_mu bar is just C^2 modulo Z/n, and W tilde is its resolution. And okay, I already discussed previously how this decomposition of homologies works in this case, so I won't bother saying it again. But you might ask, how do you see this so easily? Well, you can actually write down this isomorphism explicitly in matrices. Another fact, true in general: the dimension formula I gave predicts — we have lambda minus mu equal to alpha, so the pairing with 2 rho is just 2 — that the dimension of this slice is two. So we definitely get something two-dimensional here (not quiver varieties now — affine Grassmannian slices). And in fact you always get something like C^2 mod Z/n: all two-dimensional affine Grassmannian slices are of the form C^2 mod Z/n. Okay, so now I come to what Francesco asked. The claim is that these guys are symplectic dual. And what does it mean to say they're symplectic dual? I'm just saying I had some list of things which are generally supposed to match, and I should check them off one at a time. Well, I did explain quite a few of them, but let me actually go back to almost the first one I said, which was the matching of the Lie algebra of the torus with H^2. So what torus do we have? Recall that I have this (C star)^{sum of the w_i} acting on the quiver variety — let me pull up the Nakajima quiver variety; there we go. If you're careful, you'll notice that this action may not be effective: some subtorus of (C star)^{sum of w_i} actually acts trivially. But ignore that for a second, because it will actually match on the other side. So that was our torus, and its Lie algebra is C^{sum of the w_i}. Now, that's supposed to match H^2 on the other side.
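The dimension formula is easy to automate. In simply-laced types the pairing of rho with each simple coroot is 1, so identifying roots and coroots, if lambda minus mu = sum_i v_i alpha_i then the pairing with 2 rho is just 2 times the sum of the v_i. A one-line sketch (my own helper name, valid under the simply-laced assumption made at the start):

```python
def slice_dim(v):
    """dim W^lambda_mu bar = <2 rho, lambda - mu>.  In simply-laced
    types <rho, alpha_i^vee> = 1 for every simple root, so writing
    lambda - mu = sum_i v_i alpha_i the pairing is 2 * sum(v)."""
    return 2 * sum(v)

print(slice_dim([1]))        # 2: the running sl2 example, C^2/(Z/n)
print(slice_dim([1, 2, 1]))  # 8: the A3 example from earlier
```

Note the contrast with the dual quiver variety: T*P^{n-1} has dimension 2(n-1), while its dual slice is always two-dimensional — dual pairs need not have equal dimensions.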
Well, this W tilde^{lambda underline}_mu is by definition a subvariety of the affine Grassmannian to the N, just because it is by definition N points in the affine Grassmannian. So I get a map backwards, from H^2 of the affine Grassmannian to the N, which is just H^2 of the affine Grassmannian direct-summed with itself N times — I mean, if I took the full cohomology of the product I would get a tensor product, but in degree 2, by Kunneth, I get many H^0's tensored with one H^2, so I get H^2 direct-summed with itself N times. So I map backwards like this. And this N is actually equal to the sum of the w_i's — that's almost by definition. Remember what N was: it was the length of this list of fundamental weights adding up to lambda, and w_i counts how many times each fundamental weight occurs; let me just write that down and you'll see it's obvious. So of course N is the sum of the w_i's. In this way we see that the Lie algebra of this torus is C^N and this H^2 here is also C^N. I hesitate to write that they're isomorphic, because the point is not an abstract isomorphism — but they actually have exactly the same kernel, so they match. Oh, I didn't really say this, but H^2 of the affine Grassmannian is one dimensional, and its Picard group is just Z: it has a canonical line bundle. Okay, so that's the matching of the Lie algebra of the torus with this H^2. And let's look in the opposite direction, because it's also quite instructive. In the opposite direction, let's start with the quiver variety again. How do we get line bundles on the quiver variety, or cohomology of the quiver variety? There's the Kirwan map: because it's a quotient, we get these tautological line bundles coming along.
So we end up seeing that H^2 of the quiver variety is just C^I, where I is the set of vertices of the Dynkin diagram. These line bundles correspond to determinants of the tautological vector bundles; anybody who has worked with quiver varieties will find this familiar, and if you haven't, it's part of the more general phenomenon of the Kirwan map. So that's the H^2 of the quiver variety. On the other hand, if we take our affine Grassmannian slice, it lives in the affine Grassmannian of this group G, and so the torus of the group G, as I mentioned before, acts here on the resolution. This group G has Dynkin diagram with vertices I, so its torus is just (C star)^I. So we get H^2 of the quiver variety isomorphic to C^I, which is the Lie algebra of this (C star)^I. And that's the matching there. And what's beautiful about this is: if you look at some examples, you might say, okay, sometimes maybe this tautological line bundle is trivial, because of the nature of the quiver variety — maybe it doesn't really use that vertex, and so the tautological line bundle is trivial. And then if you go look on the dual side, you'll see that the corresponding component of the torus, that C star there, also acts trivially. So everything always matches. Okay, so that's the matching of the tori. And then of course the homological matching is what I mentioned. Sorry to scroll a lot, but we saw these two isomorphisms: this Mirkovic-Vilonen isomorphism here, where we see the representation-theoretic decomposition realized geometrically using the Mirkovic-Vilonen result, and then, going further back up, the same thing but with the roles flipped: here I have attracting set with fiber, and down below I had fiber with attracting set. So the roles are flipped, and that's exactly what we expect from symplectic duality.
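Schematically, the two matchings just described can be recorded side by side — this is my own summary of the lecture's statements, in the lecture's notation, not a formula from the talk:

```latex
% Torus <-> H^2 matchings between the dual pair
% (quiver variety side on the left, affine Grassmannian slice side on the right)
\operatorname{Lie}(T) = \mathbb{C}^{N},\quad N = \sum_i w_i
\quad\longleftrightarrow\quad
H^2(\mathrm{Gr})^{\oplus N} = \mathbb{C}^{N}
\twoheadrightarrow H^2\bigl(\widetilde{W}^{\underline{\lambda}}_{\mu}\bigr),
\\[4pt]
H^2\bigl(M^{\lambda}_{\mu}\bigr) = \mathbb{C}^{I}
\ \text{(Kirwan map)}
\quad\longleftrightarrow\quad
\operatorname{Lie}(T^{!}) = \operatorname{Lie}\bigl((\mathbb{C}^{*})^{I}\bigr)
= \mathbb{C}^{I},
```

where in each row the two copies of the same vector space carry matching kernels: the subtorus acting trivially on one side corresponds to the tautological classes that die on the other.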
And I already mentioned that we have this order-reversing bijection between leaves. In fact, you can go a step further and produce a bijection between the irreducible components of, say, one of these fibers on the quiver variety side and the irreducible components of the corresponding attracting set in the affine Grassmannian slice — the ones I mentioned are called Mirkovic-Vilonen cycles. And one way I know to produce such a bijection is using the theory of Mirkovic-Vilonen polytopes; I wrote many papers about this topic, but anyway, I'm not going to bother explaining it. But there are exact bijections. I see a question: "Where do the Poisson structures and symplectic structures on affine Grassmannian slices come from?" Thanks, that's a great question. Of course, I've not really been discussing these Poisson structures much in general in this talk, and I apologize. The answer to the question is that we have a Manin triple — let me just write this down: the Poisson structures on affine Grassmannian slices come from a Manin triple. That's just an aside; this is explained in our first K-W-W-Y paper. Okay, great. So now I have a few minutes, so I'll maybe give a little preview of this Braverman-Finkelberg-Nakajima construction, and then we'll really get into it next time. So what's the idea? Well, we have this notion of symplectic dual pairs; we've seen many examples now. And you might ask: if you have one symplectic resolution, is there some systematic way of constructing the dual symplectic resolution? Well, in general there's still no really good answer for that, but here's a sort of partial answer. Let's look at our list of examples of symplectic resolutions: we had hypertoric varieties, we had quiver varieties,
We had something like T^*(G/P)'s, and we had these affine Grassmannian slices. So these classes of examples are of the following form: you take the cotangent bundle of some representation of G and then take a Hamiltonian reduction by G, for some group G, right? They're always of this form. And whenever our symplectic resolution is of this form, that's when we can use the BFN construction. So it's this class of examples. So to use slightly more physical language, well, not yet, so let's start with the following data: G a reductive group. So usually this group G will be a product of some GL_n's, or a torus, which is just a product of C^*'s. So G will be a reductive group and N will be a representation. And for the physicists, they would say that this G and N define one of these theories, a 3d gauge theory, one of these N = 4 supersymmetric theories. And I mentioned before that from any of these N = 4 supersymmetric field theories, the physicists associate two spaces, one called the Higgs branch and one called the Coulomb branch. And this Higgs branch is just, for mathematicians, this Hamiltonian reduction. So just take the cotangent bundle of N and take the Hamiltonian reduction by G. So that's the Higgs branch. And the Coulomb branch, I guess, was more mysterious both to physicists and mathematicians until, well, work of some physicists, and I don't know the physics literature very well, and then the work of Braverman-Finkelberg-Nakajima. So let me just give a rough description and then we'll give a more precise description next time. We have maybe five minutes. Okay. So for this rough description, let's introduce the following weird scheme. It's sometimes called the raviolo curve, sometimes called a bubble; I'll call it B, the raviolo curve, maybe more precisely.
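The Higgs branch just described can be written out as follows (standard notation, which I'm supplying here):

```latex
% G acts on the representation N, hence Hamiltonian-ly on T^*N \cong N \oplus N^*.
\mu : T^*N \longrightarrow \mathfrak{g}^*
\quad \text{(moment map)},
\qquad
\mathcal{M}_H \;=\; T^*N /\!\!/\!\!/ G \;:=\; \mu^{-1}(0) /\!\!/ G
\quad \text{(GIT quotient)} .
```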
So you take two copies of the formal disc and glue them together along the punctured disc. So it's a non-separated curve. The usual way I like to think about this thing: you probably know that if you want to construct P^1, you should take two copies of A^1 and glue them along C^*. That's the usual description of how you build P^1. And usually when you do that, you don't glue them directly, but you glue them using the inverse map. So somewhere in this gluing there's a t goes to t^{-1}. If you forget to do t goes to t^{-1}, well, you can still glue them, but then you get a non-separated curve, something like this, right? You end up with A^1 with doubled origin. So it's a kind of bad version of P^1. So this bubble curve is like the bad version of P^1, and then you just look in a formal neighborhood of this doubled origin. So that's the bubble curve. So it's pretty close to P^1. If you don't like it too much, you can just think about P^1. In fact, in Nakajima's first paper about Coulomb branches, he explained basically that the natural thing to use would be P^1, but for some purpose we'll see soon, it actually only works if you use this bubble curve. But you can think of it as just P^1, okay? So what are we going to do with this funny bubble curve? We're going to consider the following moduli stack of maps from this non-separated curve into the stack N/G. So N here is a representation of G, we have the group G, and we consider the stack quotient of N by G and consider maps. So I emphasize that you have to use the stack quotient here, whereas in constructing the Higgs branch, we've always used the GIT quotient. And in doing so, we'd thrown away the unstable locus; but here we need to use the full locus, full everything. So we take this full stack quotient here.
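The comparison with P^1 can be sketched like this (notation mine):

```latex
% Usual P^1: glue two affine lines along C^* via the inverse map.
\mathbb{P}^1 \;=\; \mathbb{A}^1 \textstyle\bigsqcup_{\,t \mapsto t^{-1}} \mathbb{A}^1 ,
\qquad
% Raviolo: glue two formal discs along the punctured disc via the identity,
% giving a non-separated "doubled origin".
B \;=\; D \sqcup_{D^\times} D,
\quad D = \operatorname{Spec} \mathbb{C}[[t]],
\quad D^\times = \operatorname{Spec} \mathbb{C}(\!(t)\!) .
```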
We take this space of maps, then we take the homology of this mapping space. And this homology carries an algebra structure, a convolution algebra structure. So where does this algebra structure come from? We'll see: I'm going to redefine this thing in different language next time, and then I'll explain the algebra structure in a different way. But intuitively, it comes from some kind of gluing of these bubbles. So it comes from considering a disc with tripled origin, and considering maps from such a disc with tripled origin into the stack; that's what's used to define this convolution algebra structure. And that is somehow related to why we don't just use P^1, because here we have this possibility of the tripled origin. And this algebra structure, it's similar to this: if you've studied the Steinberg variety inside the square of the cotangent bundle of the flag variety, you know that its homology has a convolution algebra structure, like in the book of Chriss-Ginzburg. What's a little different in this case is that the convolution algebra structure is commutative. So this is a commutative algebra. And since it's commutative, we can then take Spec and we get a scheme, and that's what we'll do. So that's the definition of the Coulomb branch, at least the singular one. So this is going to be the dual of this Hamiltonian reduction. So these guys are going to be symplectic dual: this one is the Higgs branch, and this one is going to be the Coulomb branch, M_C. Okay, so this is how to produce the singular guy, and then you can also produce the smooth one. Okay, so next time we'll define all this stuff a bit more precisely, or, well, this is pretty precise, but I'll define it in a way that makes it easier to work with. And we'll see some examples, hopefully.
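Schematically, the rough description above reads as follows (my shorthand; the precise BFN definition replaces this mapping stack by a certain space of triples and uses equivariant Borel-Moore homology):

```latex
\mathcal{A} \;=\; H_*\bigl(\operatorname{Maps}(B,\ N/G)\bigr)
\quad \text{(commutative, via the convolution product)},
\qquad
\mathcal{M}_C \;=\; \operatorname{Spec} \mathcal{A} .
```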
Okay, we'll stop there. Any questions? You have a question, and you have two questions now in the Q&A. Okay. You mentioned that stable envelopes play a role in symplectic duality; what is known about stable envelopes for slices in the affine Grassmannian? So this is the subject, I guess, of Ivan Danilenko's thesis, which hasn't yet been published. So it has been studied by Ivan Danilenko, but, I mean, sorry, not just not published, it hasn't even appeared on the arXiv. So the study does sort of exist, but not in public yet. And another question from an anonymous person: can we define W^λ_μ for non-integral λ and μ? No. No, I don't know how to define it if they're not integral. One thing we'll see soon, though, is how to define it when μ is not dominant. So in the definition, at some point in the beginning today, I said we assume μ is dominant. And if you're paying close attention, you would see that actually on the quiver variety side, much of what I said goes through even if μ is not dominant. It's still smooth, it's a resolution, but not exactly of this singular guy, not of M_0, but it's a resolution of something. So there's still a smooth quiver variety even if μ is not dominant. And on the affine Grassmannian side, well, actually it looks like much of what I said also goes through if μ is not dominant, but we'll see soon how to give a really good definition of W^λ_μ when μ is not dominant. But for non-integral λ, μ, I don't know. What is G of the Laurent polynomials? Okay, let's back up then. Oh, Andrei answered that question. So you beat me to it, I guess, right, Andrei? Yes. So don't worry. He answered it? Yeah, Andrei, it was an interesting question. Oh, great, great. I took the liberty. Any other questions? Thank you. So I have a question, maybe. So before, you said that for quiver varieties with finite Dynkin diagrams, it's understood what they correspond to, in particular to slices.
So if I have any Nakajima quiver variety, this corresponds to some slice, and, I mean, what would be the symplectic dual? If you have a quiver variety outside of finite type, is that the question? Yes. Yeah, so, well, we'll get to that later. Then we're in the realm of generalized affine Grassmannian slices, and those are defined in this Coulomb branch way. So we'll get to that later in the talk. But maybe I should say, though, that in affine type A, which might be the case that you're most interested in, the symplectic dual of a Nakajima quiver variety is again a Nakajima quiver variety, well, sorry, at least when μ is dominant. The reason for that, and it also happens in finite type A, is that there's this funny rank-level duality, which happens in finite and affine type A, and which ends up manifesting itself as an isomorphism between affine Grassmannian slices and quiver varieties, something called the Mirkovic-Vybornov isomorphism. Okay, but maybe I'll make sure to address that question a little more precisely a little later. Any other questions? Let's see. No. So I guess we can thank Joe again.