Okay, welcome everybody. So today we are excited to have Arun Ram from the University of Melbourne speaking about generalized divided differences and Monk rules. Please go ahead, Arun.

Great, thank you, Anders. Thanks to all of you for inviting me and having me speak. It's really a pleasure to get to tell these stories. I was telling Leonardo, it's just such a beautiful theory; I feel lucky to work in this subject. So I wanted to talk about this paper with Tom Halverson that we wrote last year, about Monk rules for Macdonald polynomials. My focus recently hasn't really been Macdonald polynomials, but of course the Schubert analogies are just everywhere. So that's how I want to start: with the analogies to Schubert calculus.

Let me begin with divided differences. So of course S_n for me is the symmetric group, and it acts on polynomials; I'll use Laurent polynomials, because that's where the Macdonald polynomials live. Now, I'm going to write many operators, and I want to write them in a consistent form. The operator s_i acts on a polynomial f by

(s_i f)(x_1, ..., x_n) = f(x_1, ..., x_{i-1}, x_{i+1}, x_i, x_{i+2}, ..., x_n),

so I leave the first i - 1 variables fixed, I switch x_i and x_{i+1}, and then I leave the remainder of the variables fixed. All right, so that's the standard symmetric group action. Of course, at this point everybody's video went black. Okay, super.

So then the divided differences. Let

∂_i = (1 + s_i) ∘ 1/(x_i - x_{i+1}).

That's my standard divided difference. And then we immediately start to compute relations: ∂_i^2 = 0, and these satisfy the braid relations, ∂_i ∂_{i+1} ∂_i = ∂_{i+1} ∂_i ∂_{i+1}. And then of course the next one is the Demazure operator

D_i = (1 + s_i) ∘ 1/(1 - x_i^{-1} x_{i+1}),

which I view as an operator on Laurent polynomials, and D_i^2 = D_i.
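The relations just stated for ∂_i and D_i can be checked directly on sample polynomials. Here is a minimal sympy sketch in three variables; the function names s, dd, dem are mine, not the talk's:

```python
# Check del_i^2 = 0, the braid relation, and D_i^2 = D_i in three variables.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]

def s(i, f):
    """Simple transposition s_i: swap x_i and x_{i+1}."""
    return f.subs({X[i-1]: X[i], X[i]: X[i-1]}, simultaneous=True)

def dd(i, f):
    """Divided difference: (1 + s_i) composed with 1/(x_i - x_{i+1})."""
    g = f / (X[i-1] - X[i])
    return sp.cancel(g + s(i, g))

def dem(i, f):
    """Demazure operator: (1 + s_i) composed with 1/(1 - x_i^{-1} x_{i+1})."""
    g = f / (1 - X[i] / X[i-1])
    return sp.cancel(g + s(i, g))

f = x1**3 * x2
nilpotent = sp.cancel(dd(1, dd(1, f)))                               # -> 0
braid = sp.cancel(dd(1, dd(2, dd(1, f))) - dd(2, dd(1, dd(2, f))))   # -> 0
idempotent = sp.cancel(dem(1, dem(1, f)) - dem(1, f))                # -> 0
```

For example, dd(1, x1**2) returns x1 + x2, the familiar first divided difference of x1^2.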
Sorry, can I ask somebody, either Anders or Leonardo or somebody, to go back on camera? It somehow feels very funny to be talking to a black screen. Okay, thanks.

All right, so the D_i satisfy, again, the same braid relations. But the square relation has changed: the operator has gone from being nilpotent to being idempotent.

Okay, so now the fun starts, because I'm going to let t^{1/2} be some complex number; I view that as just an extra parameter. And then we let C_{s_i} be this operator,

(t^{-1/2} - t^{1/2} x_i^{-1} x_{i+1}) / (1 - x_i^{-1} x_{i+1}),

so you can see that same kernel is appearing in these factors. This whole expression is called the c-function; actually Laura Colmenarejo and I wrote a whole paper about how the c-functions are an integral part of the game here. So when I take this as my push-pull operator, then its square is almost idempotent, but with a constant:

C_{s_i}^2 = (t^{1/2} + t^{-1/2}) C_{s_i}.

And if you look at the braid relation, C_{s_i} C_{s_{i+1}} C_{s_i} versus C_{s_{i+1}} C_{s_i} C_{s_{i+1}}: it's just not satisfied. You have to fix it by adding correction terms.

Yeah, your C_{s_i} seems to be a constant. I don't see it; it has no s_i in it.

Oh, I'm sorry. Thank you, of course. So I can blame the time difference, that I'm still asleep. But thank you, Ellen, that's great. I wanted it to be the same kind of operator where you take some Todd class and then you symmetrize it, and I'm doing the same thing here, taking some kind of a class and symmetrizing. So it should be

C_{s_i} = (1 + s_i) ∘ (t^{-1/2} - t^{1/2} x_i^{-1} x_{i+1}) / (1 - x_i^{-1} x_{i+1}).

That's my C_{s_i}. So yes, thank you.

Okay, so to get the braid relation correct, I have to add here a C_{s_{i+1}} to the left-hand side and a C_{s_i} to the right-hand side:

C_{s_i} C_{s_{i+1}} C_{s_i} + C_{s_{i+1}} = C_{s_{i+1}} C_{s_i} C_{s_{i+1}} + C_{s_i}.

Then it works again, and you just have to build that into your relations.
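The square relation and the corrected braid relation for C_{s_i} can also be machine-checked. A sympy sketch of my own (not from the talk), writing u for t^{1/2} so everything stays a rational function:

```python
# Verify C^2 = (t^{1/2} + t^{-1/2}) C and the corrected braid relation
# C1 C2 C1 + C2 = C2 C1 C2 + C1 in three variables, with u = t^{1/2}.
import sympy as sp

x1, x2, x3, u = sp.symbols('x1 x2 x3 u')
X = [x1, x2, x3]

def s(i, f):
    return f.subs({X[i-1]: X[i], X[i]: X[i-1]}, simultaneous=True)

def C(i, f):
    """C_{s_i} = (1 + s_i) o (t^{-1/2} - t^{1/2} x_i^{-1} x_{i+1})/(1 - x_i^{-1} x_{i+1})."""
    g = f * (1/u - u * X[i] / X[i-1]) / (1 - X[i] / X[i-1])
    return sp.cancel(g + s(i, g))

f = x1**2 * x3
square = sp.cancel(C(1, C(1, f)) - (u + 1/u) * C(1, f))              # -> 0
corrected_braid = sp.cancel(
    C(1, C(2, C(1, f))) + C(2, f) - C(2, C(1, C(2, f))) - C(1, f))   # -> 0
```

Applying C to the constant 1 already shows the constant in the square relation: C(1, 1) simplifies to u + 1/u.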
So if I let T_i = C_{s_i} - t^{-1/2}, then what happens is that it satisfies my usual Hecke relation,

T_i^2 = (t^{1/2} - t^{-1/2}) T_i + 1,

and the usual braid relation, T_{i+1} T_i T_{i+1} = T_i T_{i+1} T_i. And that's my Hecke algebra; so this is the standard Hecke algebra. Okay, great. So far so good.

Okay, so now I want to move to DAHA. DAHA is the double affine Hecke algebra, and I want its action on polynomials. For that, I need another parameter q, which is also a nonzero complex number; you can take it to be that, or you can take it to be a free parameter, that's also fine. And then I have an operator y_i, which acts on a Laurent polynomial by

(y_i f)(x_1, ..., x_n) = f(x_1, ..., x_{i-1}, q^{-1} x_i, x_{i+1}, ..., x_n).

So you take the old polynomial, you leave the first i - 1 coordinates the same, you scale the i-th coordinate by q^{-1}, and then you leave the rest of the coordinates the same. And using the y_i we can build

t_π = s_1 ⋯ s_{n-1} y_n,

so that's just symmetric group operators times y_n. That's a new operator that we're introducing, and we use it to define the capital Y's. These are some kind of Murphy-like elements: if you know the Murphy elements of the symmetric group, these are the Murphy elements of the DAHA. The first Murphy element is

Y_1 = t_π T_{n-1} ⋯ T_1,

and for i between 2 and n you get the others by taking

Y_i = T_{i-1}^{-1} Y_{i-1} T_{i-1}^{-1}.

All right, so the amazing fact is that Y_i Y_j = Y_j Y_i, just like Murphy elements satisfy. You can just view them as operators on polynomials and try to check, do the exercise, that the Y's commute.

Okay, so now we can really get to the generalized divided differences that we use to build Macdonald polynomials. These are called τ_i^∨ (tau-i-check), or at least that's what I call them.
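The Hecke relation and the commutativity of the Y's can be checked in the smallest case, n = 2. A sketch under the conventions above; the composition order and function names are my reading of the talk:

```python
# n = 2 polynomial representation: check the Hecke relation for T_1 and
# that Y_1 Y_2 = Y_2 Y_1, with u = t^{1/2}.
import sympy as sp

x1, x2, u, q = sp.symbols('x1 x2 u q')

def s1(f):
    return f.subs({x1: x2, x2: x1}, simultaneous=True)

def C1(f):
    g = f * (1/u - u * x2 / x1) / (1 - x2 / x1)
    return sp.cancel(g + s1(g))

def T1(f):
    return sp.cancel(C1(f) - f / u)          # T_i = C_{s_i} - t^{-1/2}

def T1inv(f):
    return sp.cancel(T1(f) - (u - 1/u) * f)  # inverse, from the Hecke relation

def y2(f):
    return f.subs(x2, x2 / q)                # scale the 2nd coordinate by q^{-1}

def tpi(f):
    return s1(y2(f))                         # t_pi = s_1 y_2

def Y1(f):
    return tpi(T1(f))                        # Y_1 = t_pi T_1

def Y2(f):
    return T1inv(Y1(T1inv(f)))               # Y_2 = T_1^{-1} Y_1 T_1^{-1}

f = x1**2 / x2                               # a sample Laurent polynomial
hecke = sp.cancel(T1(T1(f)) - (u - 1/u) * T1(f) - f)   # -> 0
commute = sp.cancel(Y1(Y2(f)) - Y2(Y1(f)))             # -> 0
```

In this tiny case Y_1 Y_2 reduces to the global shift f(x_1, x_2) → f(q^{-1} x_1, q^{-1} x_2), which makes the commutativity visible by hand as well.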
And I suppose I should apologize for all the checks which litter this thing, but if you're trying to keep Langlands duality straight in your head, you have to have them. So τ_i^∨ is going to be

τ_i^∨ = C_{s_i} - (t^{1/2} - t^{-1/2} Y_i^{-1} Y_{i+1}) / (1 - Y_i^{-1} Y_{i+1}).

On the bottom of your previous page, you've got t_π equal to something that doesn't mention π. Is that right?

t_π is just a notation for a new generator, yeah.

Okay, so π is not a permutation?

No. I mean, secretly it is an affine permutation, but since I haven't brought in the affine Weyl group, you don't see that right now. It's just an operator for me, sitting on the polynomials.

So this τ_i^∨ is sort of trying to be C_{s_i}, but it's got another correction in these new operators, the Y's. And it's really an amazing thing, in my opinion. It's called the Cherednik intertwiner, because that's what he used it for.

Okay, so I need one more set of operators, but these are very easy: capital X_j is just going to be multiplication by x_j. So if I act on a polynomial,

(X_j f)(x_1, ..., x_n) = x_j f(x_1, ..., x_n).

But it's important to start thinking about these X's as operators on the polynomial ring. And then I'm going to need one more intertwiner, which is called τ_π^∨, and that's going to be

τ_π^∨ = X_1 T_1 ⋯ T_{n-1}.

So maybe let's copy this one; these are my key operators here, so that we've got them. All right, so that's the whole DAHA. I've got the DAHA action secretly: without telling you what DAHA is, I've just defined all the operators on the polynomial ring. And now we can start to think about the Schubert and Macdonald analogies. So let me go back to the Schubert case first.
So, Schubert. The standard way to do it is to talk about Schubert polynomials, these curly 𝔖_w, and they're indexed by permutations w in S_∞: permutations that can be as wide as they like but are secretly finite, so they permute the integers one through whatever. And these form a basis of the polynomial ring, using an infinite number of variables. The way we want to define them is that for each permutation, if w(i) > w(i+1), then I want the original divided difference acting on 𝔖_w to give

∂_i 𝔖_w = 𝔖_{w s_i}.

And then you need some normalization, the 𝔖 of the top class: so w_0 for me is the longest element in the symmetric group S_n, and

𝔖_{w_0} = x_1^{n-1} x_2^{n-2} ⋯ x_{n-1}.

So that's my standard definition of Schubert polynomials, and the point is I want to do the Macdonald story in analogy to that.

So, Macdonald polynomials. These are going to be denoted E_μ; these are what I call electronic Macdonald polynomials, or nonsymmetric Macdonald polynomials. They're indexed by n-tuples of integers μ = (μ_1, ..., μ_n), and they form a basis of the ring of Laurent polynomials in n variables now, so I'm working in C[x_1^{±1}, ..., x_n^{±1}]. So that's my first analogy: the Schubert polynomials form a basis of the polynomial ring with infinitely many variables; these guys form a basis of the polynomial ring in n variables, but Laurent polynomials. And then I want to construct them. If the i-th component of my n-tuple is bigger than the (i+1)-st component, μ_i > μ_{i+1}, then τ_i^∨, my fancy generalized divided difference, when I act on E_μ, should give me E_{s_i μ}.
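The Schubert recursion can be run explicitly for n = 3, starting from the top class x1^2 x2 and peeling off divided differences. A sketch of the construction (permutations in one-line notation; the code is mine, not the talk's):

```python
# Generate the S_3 Schubert polynomials from the top class x1^2 * x2
# by repeatedly applying divided differences del_i.
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
X = [x1, x2, x3]

def s(i, f):
    return f.subs({X[i-1]: X[i], X[i]: X[i-1]}, simultaneous=True)

def dd(i, f):
    g = f / (X[i-1] - X[i])
    return sp.cancel(g + s(i, g))

S_321 = x1**2 * x2      # top class, w_0 = (3,2,1)
S_231 = dd(1, S_321)    # -> x1*x2
S_312 = dd(2, S_321)    # -> x1**2
S_132 = dd(1, S_312)    # -> x1 + x2
S_213 = dd(2, S_231)    # -> x1
S_id  = dd(1, S_213)    # -> 1
```

Each application of ∂_i strips one inversion off the permutation, ending at the identity with 𝔖_id = 1.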
So, the same way as before; and the way that I do it, I normalize this by a power of t^{1/2}, because I try to follow Cherednik's notations. And then, second, we again have to fix some normalization of the bottom class, or the top class, depending on your point of view: for the zero composition we set E_{(0,...,0)} = 1. And then there are the extra operators; so maybe I can grab this from the Schubert story. The extra operator for the Macdonald case is this τ_π^∨, and when it acts on E_μ it's going to give me E_{πμ}, where π acts on a sequence by cycling it around:

πμ = (μ_n + 1, μ_1, ..., μ_{n-1}).

Is that readable? I hope so. So what's happened is that μ_n has moved to the beginning, but when it moved to the beginning it got augmented by one; it got an extra box, if I'm thinking in terms of partitions and boxes, and all the other guys just moved over. So it's a cycle, but with an augmentation to it. And that's how τ_π^∨ acts on the Macdonald polynomials. There is an extra factor, if I'm being careful with the contents, which is a power of t with exponent

(1/2)(n - 1) - #{ i : μ_i > μ_{i+1} }.

So there are a lot of these little normalizations in Macdonald polynomials, and they're annoying, but they're not complicated; they're all combinatorial, and this is one of them. So τ_π^∨ cycles around. And for me, if I'm constructing Macdonald polynomials, this is a very important operator, because I have to start with (0, ..., 0), which is no boxes, and somehow I have to add boxes to get big partitions or big compositions. And so it's this τ_π^∨ operator that allows me to do that.

And then the last important step in Macdonald polynomials is the action of the Y_i's: the E_μ's, the electronic Macdonald polynomials, are eigenvectors for the Y_i's. Those Y_i's are called the Cherednik–Dunkl operators.
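The box-adding cycle is easy to see in code. A tiny sketch (the function name is mine) showing how repeated applications of the cycling map build up partitions from the zero composition:

```python
# The cycle pi on compositions: mu -> (mu_n + 1, mu_1, ..., mu_{n-1}).
def pi_mu(mu):
    return (mu[-1] + 1,) + tuple(mu[:-1])

mu = (0, 0, 0)
steps = [mu]
for _ in range(4):
    mu = pi_mu(mu)
    steps.append(mu)
# steps: (0,0,0), (1,0,0), (1,1,0), (1,1,1), (2,1,1)
```

One box is added per application, so arbitrarily large compositions are reached from (0, ..., 0) by interleaving these cycling steps with the τ_i^∨ swaps.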
And the eigenvalue is

Y_i E_μ = q^{-μ_i} t^{-(v_μ(i) - 1) + (1/2)(n - 1)} E_μ,

where v_μ is a permutation, the minimal-length one such that v_μ^{-1} μ is weakly increasing. So I want to take μ and sort it, and the permutation that takes your composition and sorts it into weakly increasing order is the one that's appearing in these weights. That always happens with the t-weights of Macdonald polynomials: the q's somehow keep track of the parts of the composition, and the t's are always keeping track of this minimal-length sorting permutation. But this is the amazing property: you've got a whole bunch of commuting operators that are diagonalized on the basis of electronic Macdonald polynomials.

All right, so that's the definition of all the Macdonald polynomials. How are we going for time? That's 25. Shall we break there? Yes, that works. So I guess that means we start again at five. Yeah.
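The minimal-length sorting permutation v_μ comes from a stable sort. A short sketch, in one convention among several (v[j] records the position that μ_j lands in after sorting; the tie-breaking by original position is exactly what makes the permutation minimal length):

```python
# v_mu: the minimal-length permutation sorting mu into weakly increasing order.
def v_mu(mu):
    order = sorted(range(len(mu)), key=lambda j: (mu[j], j))  # stable argsort
    v = [0] * len(mu)
    for rank, j in enumerate(order):
        v[j] = rank + 1       # mu_j lands in position rank+1 after sorting
    return v

# Example: mu = (2, 0, 1) sorts to (0, 1, 2), and v_mu = [3, 1, 2].
```

Equal parts keep their original relative order, which is why v_μ of a weakly increasing composition is the identity.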