All right, welcome everyone to the seminar. Today we're happy to have Élie Casbi from Northeastern telling us about representations of affine quantum groups and the equivariant homology of affine Grassmannians. Please take it away, Élie.

Thank you. Thank you so much. I would like to thank the organizers for this very kind opportunity to present my work at the seminar. I'm very pleased. As has been said, we'll talk about representations of affine quantum groups and equivariant homology of affine Grassmannians. It's joint work with Jian-Rong Li at the University of Vienna. Part of it will be results that were published about a year ago, and part of it is ongoing joint work that is a sequel to those results.

The plan is as follows. I will start with some very gentle reminders on the algebraic side: some motivation and some algebraic setting, basically the algebraic structure of coordinate rings and their bases. Then we'll move into more geometric representation theory: I will explain how we can construct a basis of coordinate rings, called the Mirković-Vilonen basis, using the geometric Satake correspondence. Then we'll see a different, more categorical approach; this is where quantum affine algebras, or affine quantum groups, will be involved. Finally I will present results and perspectives.

So for this talk, I will have a Lie algebra g which is finite dimensional and simple, and in fact it will always be of type ADE, so always a simply laced Lie algebra g. The so-called coordinate ring is the ring of regular functions on the Lie group N of the maximal nilpotent subalgebra n of g. So we consider C[N], where N is really just an affine space, so this is a nice polynomial ring. But there is much more to say about this algebra than the fact that it's a polynomial algebra.
I will not dwell on it too much here, but this algebra admits interesting quantizations; this is related to the theory of quantum groups. The study of these quantizations led many people, beginning with Lusztig and Kashiwara, to study nice bases of this ring, meaning bases as a vector space. This began in the 90s: there were Lusztig's dual canonical basis and Kashiwara's upper global basis, which were proved to be the same. Then there is also Lusztig's dual semicanonical basis, and a bit later the Mirković-Vilonen basis, which we will study in more detail in a few minutes. And there are many others, like the theta bases coming from the theory of cluster algebras, et cetera. So there are many questions, like: are these bases the same or not? Usually they are known not to be the same.

My running example will be SL3, so type A2. Then my Lie group N is the group of upper unitriangular matrices with complex coefficients, and my algebra C[N] really consists of regular functions on this affine space. So we can consider these four nice examples of regular functions on this Lie group: we can take the minor Δ_{I,J}, where I is a set of rows and J a set of columns. Of course there can be much more complicated functions, but these are four very nice examples of regular functions on this affine space. And in this example it's a complete triviality to check that if you compute the function x·x', you get the same as y + z. So we have this nice identity in C[N]. In this very gentle example, and this is some kind of accident, all of the interesting bases I mentioned are the same, and they split into two families of monomials.
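The slide with the four minors is not reproduced in the transcript, but the identity can be checked with a standard choice of flag minors on the unitriangular group in SL3. The labels below (x, y, z, x' assigned to particular minors of a matrix with entries a, b, c) are my assumption about what the slide shows; this is only a sketch of the computation.

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
# A generic point of N: an upper unitriangular 3x3 matrix.
M = sp.Matrix([[1, a, b],
               [0, 1, c],
               [0, 0, 1]])

def minor(mat, rows, cols):
    """Minor on the given row and column index sets (0-based)."""
    return mat[rows, cols].det()

# Hypothetical labels for the talk's four flag minors (an assumption,
# since the slide itself is not in the transcript).
x       = minor(M, [0], [1])        # Delta_{1},{2}   = a
y       = minor(M, [0], [2])        # Delta_{1},{3}   = b
x_prime = minor(M, [1], [2])        # Delta_{2},{3}   = c
z       = minor(M, [0, 1], [1, 2])  # Delta_{12},{23} = a*c - b

# The identity from the talk: x * x' = y + z.
assert sp.expand(x * x_prime - (y + z)) == 0
```

With these labels the identity is just a·c = b + (ac - b), which is the "complete triviality" mentioned above.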
So you have the monomials in x, y, z on the one hand and the monomials in x', y, z on the other hand, and these two packs of monomials together form a vector-space basis of C[N].

Now I will talk a little bit about cluster algebras, but really not too much, just strictly what I need for motivation; it will not be crucial for the talk, so if you don't want to bother with it too much, that's perfectly fine. This coordinate ring C[N] has a nice structure called a cluster structure. What does that mean? You start with N nice regular functions x_1, ..., x_N. For us, N will be the number of positive roots of g, typically, but the construction is of course more general. Starting from these N well-chosen initial functions, you cook up many, many more regular functions inductively. For this we need a quiver with N vertices, without loops or 2-cycles. The data of these variables and the quiver is called a seed. For example, I'm going to have my x, my y, and my z: in SL3 there are three positive roots, so I have three functions x, y, z with this quiver, and the quiver tells me how I should produce new functions. How does that work? For each k in 1, ..., N you have a procedure called mutation in the direction k, where you leave all the variables x_i unchanged except the k-th variable x_k, which you replace with this new element. This is where the quiver Q is involved: Q tells you how you're supposed to cook up these monomials. And you replace Q by a new quiver Q'. So you have a new quiver Q' and a new bunch of variables x_1, ..., x_{k-1}, x'_k, x_{k+1}, ..., x_N, and then you can carry on this way. So for example, if I apply this procedure here, then y and z are unchanged, and I mutate here.
Then I get x', which is going to be, there is only one incoming arrow, from z, sorry, and one outgoing arrow, to y. This is why I get y plus z, and I divide by x: so x' = (y + z)/x. Okay, so this is consistent with the identity we saw earlier; basically that identity is the baby example of a cluster mutation in this ring.

The general definition of the cluster algebra is that it's the algebra generated by the cluster variables, the cluster variables being all the possible new variables that you can produce using this procedure. You start with your initial data; for each k in 1, ..., N you can mutate in the direction k, and for each of the N new seeds you can again mutate in all possible directions. Of course you can get infinitely many variables; not all the time, sometimes you get finitely many.

So there is a theorem due to Berenstein-Fomin-Zelevinsky and Geiss-Leclerc-Schröer: the coordinate ring C[N] has the structure of a cluster algebra. The rank of this cluster algebra, meaning the number N of variables in each seed, is the number of positive roots of g. Usually this algebra has infinitely many cluster variables and infinitely many seeds. However, there is still a certain finite collection of seeds that is well understood, called standard seeds, and these are indexed by the reduced expressions of w_0. The cluster variables of these standard seeds are called flag minors. Typically, in type A this procedure will produce regular functions of the form on my first slide, so minors of that form. If you keep going and going with this mutation procedure, you can get uglier functions, but the point is that the first functions you will find applying this mutation procedure are functions of this form.
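The mutation just described can be sketched in a few lines: replace x_k by (product over incoming arrows plus product over outgoing arrows) divided by x_k. This is only the variable part of the exchange relation for a quiver without frozen coefficients; the accompanying quiver mutation Q → Q' is omitted, and the one-in, one-out quiver on the x-vertex is my reading of the slide, so an assumption.

```python
import sympy as sp

def mutate(variables, arrows, k):
    """One cluster mutation at vertex k (seed-variable part only).

    variables : dict mapping vertex name -> sympy expression
    arrows    : iterable of (i, j) pairs, meaning an arrow i -> j
    Exchange relation: x_k' = (prod over arrows into k
                               + prod over arrows out of k) / x_k.
    The quiver mutation producing Q' is not implemented in this sketch.
    """
    p_in, p_out = sp.Integer(1), sp.Integer(1)
    for (i, j) in arrows:
        if j == k:
            p_in *= variables[i]   # arrow into k
        if i == k:
            p_out *= variables[j]  # arrow out of k
    new = dict(variables)
    new[k] = sp.cancel((p_in + p_out) / variables[k])
    return new

# The A2 example from the talk: one arrow into the x-vertex (from z)
# and one arrow out of it (to y) -- hypothetical vertex names.
x, y, z = sp.symbols('x y z')
seed = {'vx': x, 'vy': y, 'vz': z}
arrows = [('vz', 'vx'), ('vx', 'vy')]

new_seed = mutate(seed, arrows, 'vx')
assert new_seed['vx'] == (y + z) / x  # the baby mutation x' = (y+z)/x
```

So the identity x·x' = y + z from the first slide is recovered as the k = x exchange relation.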
So, functions given by taking the minor on a bunch of rows and a bunch of columns. That's why they're called flag minors, because in types other than A they can be constructed in a somewhat similar way.

For my running example, in type A2, there are two reduced expressions of w_0: s1 s2 s1 and s2 s1 s2. Each of these reduced expressions gives you one standard seed. This reduced expression tells you that the quiver should be this and the variables should be those, and the same for here. And the mutation is the one I already depicted: this is x, this is z, this is y, this is x'. If you multiply the blue guys, you get the sum of the red guys. And the mutation between these two seeds corresponds to a braid move on the reduced expression. This is just a comment.

Is that the reason you have a closed loop on the right and you don't have a closed loop on the left?

Yeah, that's a good point. Honestly, in this example these variables are called frozen, so it doesn't really matter if you drop this arrow. But if you look at more general examples, in the general mutation procedure it's very important: it's part of the algorithm that tells you how to cook up Q' from Q that you close the loop here. If you start from this guy, you should really get this guy. On this specific example it's not going to matter, but in the more general procedure you really have to take it into account. Other questions, or should I move to equivariant homology of affine Grassmannians?

Okay. So I'm going to talk about the geometric Satake correspondence, which allows us to construct one of these beautiful, important bases of C[N]. And we'll do some equivariant homology, which allows us to record the geometric data in some nice invariants. Okay, so a brief recollection on geometric Satake.
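The braid move mentioned above is the relation s1 s2 s1 = s2 s1 s2 in the symmetric group S3; both sides are reduced expressions of the longest element w_0. A quick check with permutations written as tuples of images (a small illustration, not anything from the slides):

```python
def compose(p, q):
    """Composition p after q, for permutations given as tuples of images (0-based)."""
    return tuple(p[q[i]] for i in range(len(q)))

# Simple transpositions of S3: s1 swaps entries 1,2 and s2 swaps entries 2,3.
s1 = (1, 0, 2)
s2 = (0, 2, 1)

w0_a = compose(s1, compose(s2, s1))  # s1 s2 s1
w0_b = compose(s2, compose(s1, s2))  # s2 s1 s2

# Both reduced expressions give the longest element w0 = (3 2 1).
assert w0_a == w0_b == (2, 1, 0)
```

In type A2 these are the only two reduced expressions of w_0, matching the two standard seeds in the talk.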
It's a vast topic, and I'm far from being an expert on it, so I put only the strictly necessary material here, which is already a lot. We still have our Lie algebra g, which for me is simply laced, so you don't really need to bother with the Langlands dual here; I just put it for completeness, you can just forget it. We have a Borel subgroup inside G and a maximal torus inside the Borel. And we look at the affine Grassmannian of G, which is the K-points modulo the O-points, where O is the formal power series ring and K is the fraction field of O. So this is the affine Grassmannian of G, and there is a natural torus action on it, since T acts on G. There are infinitely many fixed points, and they are indexed by the weight lattice of G. Okay. This is a very complicated, so-called ind-scheme, with infinitely many fixed points, but we are going to focus on certain nice finite-dimensional varieties called MV cycles.

Do you take the simply connected version of the group G, or does it not matter?

I think in my situation it's not going to matter, but in general I pretty much think you should take the simply connected version, yeah. Okay.

So, very roughly speaking, MV cycles are going to be some kind of Richardson varieties. I don't want to make that too precise a statement, but waving hands, they are like some kind of Richardson varieties. How does it work? Well, Gr_G has two different stratifications: one stratification indexed by dominant weights and another stratification indexed by all weights, whatever they are. And what you do is intersect one stratum with one opposite stratum. So you consider the variety Gr^λ-bar intersected with Gr_μ, so Gr with upper λ intersected with Gr with lower μ. This is going to be a nice variety.
And the geometric Satake correspondence, in its weaker form, tells us that the homology of this variety is isomorphic, as a vector space, to a weight space of a highest-weight irreducible finite-dimensional representation of G. So λ is a dominant weight, and for each λ we have L(λ), the unique (up to isomorphism) irreducible finite-dimensional representation of G of highest weight λ. Okay. Now for each μ we can look at the μ-weight subspace of L(λ), especially when μ ≤ λ. Geometric Satake tells us that this μ-weight subspace is isomorphic to the homology of this nice variety. In particular we have a nice basis here, given by the homology classes of the irreducible components of this variety. So thanks to this isomorphism we can cook up a basis; this is called the MV basis of the right-hand side. Okay, so this is what's written here: the irreducible components of this variety are called MV cycles, for Mirković-Vilonen, and their classes give us a basis of L(λ)_μ. Fixing λ and putting all of this together for all possible μ ≤ λ, you get a basis of L(λ), called the MV basis of L(λ). You can do that for every dominant λ. Okay. In simple examples these varieties are really concrete: in type A, if λ is a fundamental weight, these guys are going to be Schubert varieties. Yeah. And then there is a classical fact that all of these L(λ) can be embedded into this ring. In fact there is even more to it: the embeddings of the L(λ) for various λ usually overlap.
But still, if you put them all together, you can actually get the whole ring. I will not get too much into the details here because it would be super technical and lengthy. But basically, if you have a basis of L(λ) for each λ, then, being careful not to count some elements twice, you can cook up a basis of the ring C[N], which is called the MV basis of C[N]. Its elements are indexed by certain MV cycles. Usually this basis is very hard to compute, and there are still many open questions about it, like: does it contain the cluster variables, and this kind of thing. So this is the construction. Are there questions on this construction, or should I move to equivariant homology?

Okay, so what I'm going to talk about now is due to work of Pierre Baumann, Joel Kamnitzer, and Allen Knutson. Basically, what they did is relate algebraic properties of the MV basis of C[N] to certain geometric invariants of the MV cycles themselves. They consider equivariant Euler classes, also sometimes called equivariant multiplicities, of MV cycles. Concretely, you take an MV cycle Z in this affine Grassmannian; it's a nice closed irreducible variety inside the affine Grassmannian. Now there are only finitely many fixed points for the torus action on Z, and we denote by ε_P(Z) the inverse equivariant Euler class with respect to the action of T at the point P. Concretely, this is going to be a nice rational function in n variables α_1, ..., α_n, where n is the number of simple roots of G.
So basically, these invariants record the local action of T around the point P: the weights, the eigenvalues if you want, of the action of T on the tangent space at P. If you look at the class of Z in this equivariant homology space, you can decompose it in the basis of fixed points, and these guys appear as structure constants. Yeah, and one classical fact, I think due to Brion, and probably also Michèle Vergne and many other people: if Z is smooth at P, then this inverse Euler class is just the inverse of the product of the tangent weights of the action of T at P, okay. If Z is not smooth, you're probably going to have a complicated numerator here, but if Z is smooth you get 1 in the numerator and a product of weights in the denominator. Very concretely, for us these weights will always be positive roots, which you can view as nice linear polynomials in the simple roots, like α1 + α2 + α3, typically. These are the kinds of weights that will show up for us, okay.

Baumann, Kamnitzer, and Knutson constructed a certain algebra morphism D-bar. I put the definition here, which we really don't need, so you don't need to bother reading all of this. Basically, there is a natural way to build, from each regular function on N, a nice rational function. And the magic of this is that in addition to being linear, it's actually multiplicative: it's an algebra morphism. It turns out this is not deep to prove; it's not super obvious, but it's not a deep theorem either. But it's a good thing: this map is actually an algebra morphism.
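The smooth-point formula above is easy to make concrete. As a small sketch (the fixed point and its tangent weights below are hypothetical, chosen to match the type A2 positive roots mentioned in the talk):

```python
import sympy as sp

a1, a2 = sp.symbols('alpha1 alpha2')

def inverse_euler_class_smooth(tangent_weights):
    """Smooth-point case of the classical fact quoted in the talk:
    at a smooth T-fixed point, the inverse equivariant Euler class is
    1 over the product of the tangent weights of the T-action."""
    prod = sp.Integer(1)
    for w in tangent_weights:
        prod *= w
    return 1 / prod

# Hypothetical smooth fixed point whose tangent space has T-weights
# alpha1 and alpha1 + alpha2 (positive roots of type A2).
eps = inverse_euler_class_smooth([a1, a1 + a2])
# eps is the rational function 1 / (alpha1 * (alpha1 + alpha2)):
assert sp.simplify(eps * a1 * (a1 + a2)) == 1
```

At a singular point the numerator would no longer be 1, as the talk notes, and no such simple product formula applies.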
And now comes the deep theorem: if you take an MV cycle Z and look at b_Z, the corresponding element of the MV basis of C[N], then this is a regular function on N, and if you plug it into this machinery, you get some rational function. What you get is exactly the equivariant multiplicity of Z at a well-chosen fixed point. Okay, so this Z has finitely many fixed points, and for each Z there is one fixed point that is, in some sense, the good fixed point. You look at the inverse Euler class at this particular fixed point, and this rational function is the same as D-bar evaluated at b_Z. So it's really saying that this geometric data on Z can be seen at the level of the MV basis. The MV basis is constructed via geometric Satake, which tells you how to go from Z to b_Z; then you plug b_Z into this, and yeah, you get the same thing.

So, at the end of my PhD thesis, I noted that this morphism seemed to behave well with respect to the cluster structure I mentioned earlier on this algebra. I was able to properly prove, basically in types A_n and D_4, but not in full generality, that if x is a flag minor, one of this nice collection of cluster variables in C[N], then D-bar(x) is of the form 1 over a product of positive roots. Moreover, the polynomials appearing in these denominators are related to each other by certain very distinguished algebraic identities. So the motivation for our work with Jian-Rong Li was to prove this conjecture in full generality, at least in all simply laced types, and to give an interpretation of these remarkable identities. And there is another question that turns out to be very much related to these first two questions.
The other question is this: the morphism here has a very strong connection to geometric Satake via this result, and through its very definition these numbers can be interpreted in a nice way via different categorifications, like KLR algebras or representations of preprojective algebras. So you can give a natural interpretation of this, but that is really built into the definition of D-bar. However, there is another very interesting categorification of C[N], due to Hernandez and Leclerc, which involves representations of affine quantum groups. And we were curious to know whether, with this categorification, we can have an interpretation of this morphism.

And I think this is a very good place to have a break.

All right. If there are questions, I'd be happy to answer them. Yeah, so let's take a five-minute break.