Thanks for the invitation. This is something I've been working on for the last two or three years, some of the time at least, sometimes all of the time. It has quantum cohomology in its name, but it is really mostly a combinatorics and algebra project: it is related to quantum cohomology in a way you will see, but it is completely elementary, in the sense that all you need is some symmetric function theory. To some extent this is a weakness, to some extent it is a strength. If you actually understand quantum cohomology, unlike me, I believe there are going to be some interesting questions in here for you, which I so far have not come close to answering. The slides are online, there is a preprint online, and there is a survey that I wrote for FPSAC back when there were not this many results.

So this story starts with Schubert calculus, which from a modern perspective is about the following two cohomology rings: the cohomology ring of the Grassmannian and that of the flag variety, both over the complex numbers. We're only going to be looking at the first one in this talk, and we're going to be studying rings that in a way generalize this cohomology ring. Classically, I think since Borel's time, it is well known what this cohomology ring is as a ring: it is the ring of symmetric polynomials in k variables over the integers, modulo the ideal generated by the k consecutive complete homogeneous symmetric polynomials h_{n-k+1}, h_{n-k+2}, ..., h_n.

Now, somewhere in the 80s and 90s (I think Maxim can say a lot more about that), this cohomology ring was deformed. The simplest version of this deformation is called the small quantum cohomology ring. For the Grassmannian, it has been shown to be very similar to the classical cohomology ring, except that you now divide not by h_n but by a deformed version of h_n. Here q is an indeterminate that you adjoin to the base ring; so instead of working over the integers, you're working over Z[q]. It has since been shown that this quantum cohomology ring still has a lot of the properties of the classical one. In particular, it has a basis as a Z[q]-module formed by the Schur polynomials; not all of them, only the s_λ where the partition λ fits into a k × (n−k) rectangle. The structure constants are the so-called Gromov–Witten invariants (again the simplest ones, the three-point ones). By now, people have written about this in a way that even I can understand. So a lot of this has been studied, and there are some pretty interesting properties.

Now, the goal of this talk is to deform the cohomology ring not just with a single parameter q but with k parameters, thus generalizing QH. I don't know what this new ring means geometrically, but I have some indications that it does have a geometric meaning, or at least some kind of meaning. Maybe it's the K-theory of something; maybe it's the representation ring of something very weird.

So let me start from scratch and quickly go over the standard notions around symmetric polynomials. I assume much of this is known, but some things are worth mentioning nevertheless. Then I'm going to introduce this deformed cohomology ring purely algebraically. We start with a commutative ring, boldface k; N is the set of non-negative integers; and k is a fixed non-negative integer (the boldface k and the italic k are of course different things). P is going to be the polynomial ring k[x_1, ..., x_k] in k variables over the base ring.
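Before the formal setup continues, here is a minimal SageMath sketch of the classical presentation mentioned above (my own illustration, not from the talk; the helper h_poly is mine). It builds the ideal generated by h_{n-k+1}, ..., h_n inside the full polynomial ring for k = 2, n = 4, and checks that the quotient is finite-dimensional of the rank that will come up again shortly.

```python
# Minimal SageMath sketch (not from the talk): the ideal (h_{n-k+1},...,h_n)
# in k variables, whose quotient underlies H^*(Gr(k,n)).
from itertools import combinations_with_replacement

k, n = 2, 4
R = PolynomialRing(QQ, k, 'x')
x = R.gens()

def h_poly(m):
    # complete homogeneous symmetric polynomial of degree m in the k variables
    return sum(prod(c) for c in combinations_with_replacement(x, m))

J = R.ideal([h_poly(m) for m in range(n - k + 1, n + 1)])
print(J.vector_space_dimension())   # 12 = 2! * binomial(4,2): the quotient of
                                    # the full polynomial ring is free of rank
                                    # k! * C(n,k); the symmetric quotient has
                                    # rank C(n,k) = 6
```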
So, standard notation: if α is a tuple and i is a number, then α_i is the i-th entry of the tuple α; same for infinite sequences. If α is a tuple, then x^α means the monomial x_1^{α_1} x_2^{α_2} and so on, and |α| means the degree of this monomial, that is, the sum of the entries of α.

So what is a symmetric polynomial? It's a polynomial that is invariant under permuting the variables. I call the set of symmetric polynomials S; it is a subring of P. Now, a classical theorem (I think it is in Artin's Galois theory book; I'm not fully sure where it first appears) says that the polynomials form a free module over the symmetric polynomials, with a basis given by the so-called sub-staircase monomials: the monomials of the form x_1 to a power smaller than 1, times x_2 to a power smaller than 2, and so on, up to x_k to a power smaller than k. So k! monomials in total; this is another way of writing it down. The only place where I have found a proof written up in an easily understandable way is the LLPT notes (Laksov, Lascoux, Pragacz, Thorup), but it has been in the air for many years. If k is 3, this basis is just six monomials.

So now, what about the ring of symmetric polynomials itself? That ring has several bases, and most of them are indexed by certain sets of integer partitions. A partition is a weakly decreasing sequence of non-negative integers that has only finitely many nonzero entries. So things like this, or this, or this; this one does not qualify because it is not decreasing, and this one does not qualify because it does not have only finitely many nonzero entries. I will lazily say "k-partition" for a partition that has at most k nonzero entries or, in simpler terms, a weakly decreasing k-tuple of non-negative integers. Why are these the same thing? Because if I have a k-partition λ, I can pad it with zeros and get an actual partition; and conversely, if a partition has at most k nonzero entries, I can remove the trailing zeros until only a k-tuple is left. Any k-partition has a Young diagram, and the important thing for this audience, I guess, is that I am writing it in English notation; you probably know what it is. The length of row i is λ_i, and the number of rows is at most k because it is a k-partition.

Now, here are the usual symmetric functions, or rather symmetric polynomials; I'm not going to use infinitely many variables until the very end. For each integer m, the elementary symmetric polynomial e_m is the sum of all square-free monomials of degree m. That is one way to put it; another way is to say it is the sum of the products of the m-element subsets of the x's. So e_0 is 1, and e_m is 0 for negative m in particular. Now, what is e_ν, where ν is a whole tuple of integers? It is just the product of the e_{ν_i}, for i ranging over the indices; in particular, if any of the ν_i is negative, then this is 0. Gauss proved that the symmetric polynomials, as a k-algebra, are freely generated as a commutative k-algebra by the elementary symmetric polynomials. Equivalently, you can write this as a statement about the k-module S: that module has a basis, namely the family of all e_λ where λ ranges over the partitions whose entries are all at most k.
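Coming back to Artin's theorem for a moment: here is a tiny sketch (my own; substaircase_exponents is a made-up name) that enumerates the exponent vectors of the sub-staircase monomials and confirms the count k!.

```python
# List the exponent tuples (a_1, ..., a_k) with a_i < i; the monomials x^a
# are the sub-staircase basis of P over S from Artin's theorem.
from itertools import product as cartesian

def substaircase_exponents(k):
    # a_i ranges over 0..i-1, so there are 1 * 2 * ... * k = k! tuples
    return list(cartesian(*[range(i) for i in range(1, k + 1)]))

exps = substaircase_exponents(3)
print(len(exps))   # 6 = 3!
print(exps)        # [(0,0,0), (0,0,1), (0,0,2), (0,1,0), (0,1,1), (0,1,2)]
```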
Note that e_m is 0 when m is bigger than k, because you won't have anything in the sum; it will be an empty sum. Now here is an analogue: instead of the e_m's, the h_m's, the complete homogeneous symmetric functions, or rather polynomials. These are the sums of all monomials of degree m. So again, h_0 is 1 and h_m is 0 for negative m, and again I extend this notation to tuples by just multiplying over the entries. And again, the k-algebra S is freely generated by those complete homogeneous symmetric polynomials, if you take h_1 through h_k. Now, of course, this time it is not true that all the bigger h's are 0; so if you want free generators, you need to stop at h_k, even though they keep going on. Equivalently, the h_λ where λ is a partition whose entries are at most k form a basis of S. Now, let me mention this also: the h_λ where λ is a k-partition also form a basis of S, and these two bases are actually different. So in finitely many variables you have two h-bases; of course, the λ's here are the transposes of the λ's there, or conjugates, as you may call them.

And now, I am going to briefly introduce the Schur polynomials. For each k-partition λ, one way to define s_λ is as a fraction: on the top, you have a modified Vandermonde determinant, and on the bottom, you have the original Vandermonde determinant. And yes, this is going to be a polynomial. Another way to define it is by the so-called Jacobi–Trudi formula, which is another determinant, basically; it is well known that this equality holds. This is a symmetric polynomial with no negative coefficients. And a third way to define it is via semistandard Young tableaux. That is not something I am going to use much in this talk, so I am just mentioning it as something that exists; if you have seen it, it is maybe the simplest way to define the Schur polynomials. And again, if you let λ range over the k-partitions, the s_λ form a basis of the k-module S.

One thing that is neat about the Schur polynomials: not only are they themselves polynomials with no negative coefficients, but also, if you multiply two of them, you can expand the result in Schur polynomials again with no negative integer coefficients. These coefficients c^ν_{λμ} are known as the Littlewood–Richardson coefficients. There are actually many theorems of this shape, because these coefficients count many different things.

Now, to preempt something I will need later on, let me slightly extend the Schur polynomials. We have defined s_λ for any k-partition λ; now I am just going to blindly extend this definition to λ in Z^k. So I allow non-partitions, that is, k-tuples with negative entries and with increases, and I define s_λ in the exact same way. Now, it turns out that I don't actually gain anything new this way, because if α is a k-tuple, then this newly defined s_α is either zero or one of the old s_λ's for a partition λ, except possibly with a minus sign. And there is a rule for actually finding this λ. Basically, you raise the entries of α by the numbers k−1, k−2, ..., k−k = 0. If the resulting tuple has a negative entry, you get zero; if it has two equal entries, you get zero. Otherwise, you sort it in decreasing order, watching the sign of the permutation that you apply, and then you un-raise the entries again, and you get your λ.
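Here is a short Python sketch of that straightening rule (my own code; the function name and the inversion-count trick for the sign are mine). Given α in Z^k, it returns the sign and the partition λ with s_α = ± s_λ, or (0, None) when s_α = 0.

```python
def straighten(alpha):
    # Add the staircase (k-1, k-2, ..., 0) to alpha.
    k = len(alpha)
    raised = [alpha[i] + (k - 1 - i) for i in range(k)]
    # A negative or repeated entry means s_alpha = 0.
    if any(r < 0 for r in raised) or len(set(raised)) < k:
        return (0, None)
    # Sign of the sorting permutation = (-1)^(number of inversions).
    sign = 1
    for i in range(k):
        for j in range(i + 1, k):
            if raised[i] < raised[j]:
                sign = -sign
    # Sort decreasingly and subtract the staircase again.
    srt = sorted(raised, reverse=True)
    lam = [srt[i] - (k - 1 - i) for i in range(k)]
    return (sign, lam)

print(straighten([1, 3, 0]))   # (-1, [2, 2, 0]): s_{(1,3,0)} = -s_{(2,2,0)}
```

The sign comes out of swapping rows of the bialternant matrix, which matches the remark that follows.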
To be honest, this is probably not worth stating as a result, because it is just what you get if you correspondingly permute the rows of the bialternant matrix. Note that the alternant formula still holds in most cases: it holds if all the λ_i + k − i are non-negative. The reason I am mentioning this is not that it is an exciting result; it is just something that will come up naturally later on. I will need those s_α's.

Now, let's define this new quotient. First of all, we pick an integer n greater than or equal to k, and we pick k polynomials a_1, ..., a_k, not necessarily symmetric at this point, such that deg a_i < n − k + i for each i. In particular, they can be constants, and in many applications they are constants. Let J be the ideal of the polynomial ring generated by these h's, except that from each h I subtract the corresponding a: the generators are the differences h_{n−k+i} − a_i. Note that this counts as a deformation, because the degree of each a_i is smaller than the degree of the corresponding h, so I am subtracting lower-order terms from each generator.

Here is the first result that I actually had to work for, one that is not in the literature: the quotient ring P/J, as a k-module, is again free, with the same kind of sub-staircase basis, except you have to adapt it a bit. The basis is formed by the monomials x_1 to a power smaller than n−k+1, times x_2 to a power smaller than n−k+2, and so on, up to x_k to a power smaller than n. (An overline always means a projection, since we are in a quotient ring.) This basis has (n−k+1)(n−k+2)⋯n elements. So this ring is zero-dimensional, as the geometers would call it, or rather it is a free module of this rank.

So from now on, I will assume that the a_i are symmetric polynomials, not just arbitrary polynomials. Then I can use the same differences to generate an ideal of S: previously it was an ideal of P, but now that the generators lie in S, I can also generate an ideal I of the symmetric polynomials from them, and I can take that quotient. What is that quotient? Well, I can characterize it as follows. We define ω to be the partition with k entries equal to n − k, so its diagram is a k × (n−k) rectangle, and I define P_{k,n} to be the set of all k-partitions that fit inside ω. (This is the classical ⊆ notation for partitions fitting inside each other; restated in pedestrian terms, the partition has at most k entries and its first entry is at most n − k.) Then the k-module S/I, the symmetric polynomials modulo this ideal, is free with basis given by the Schur polynomials that fit into this rectangle.

Next, I am going to assume something even more restrictive: that the a_i are actually constants, not just symmetric polynomials. Actually, some of what I am going to do does not require this assumption, but I think it is okay to make it here, particularly because the classical cases are particular cases of it. The classical cohomology ring you get when the a_i are all zero and k is the integers: then S/I becomes the classical cohomology ring, and the s̄_λ are the Schubert classes. Quantum cohomology also fits this definition, except that you have to put the indeterminate q into the base ring, so k must be Z[q]; and this time, not all of the a_i are zero, one of them is actually q up to sign. And then you get the quantum cohomology ring.
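Here is how one can experiment with the deformed quotient in SageMath (my own sketch, not the talk's code). I keep the a_i as extra polynomial indeterminates, which is fine for ideal arithmetic even though the talk treats them as base-ring constants; for k = 2, n = 4, we reduce the square of s_{(2,1)} modulo J and get a normal form whose coefficients are polynomials in a_1, a_2.

```python
# SageMath sketch: the deformed ideal J = (h_3 - a1, h_4 - a2) for k=2, n=4.
from itertools import combinations_with_replacement

R = PolynomialRing(QQ, ['a1', 'a2', 'x1', 'x2'])
a1, a2, x1, x2 = R.gens()

def h_poly(m):
    # complete homogeneous symmetric polynomial of degree m in x1, x2
    return sum(prod(c) for c in combinations_with_replacement([x1, x2], m))

J = R.ideal([h_poly(3) - a1, h_poly(4) - a2])
s21 = x1 * x2 * (x1 + x2)     # s_{(2,1)} in two variables
print(J.reduce(s21**2))       # normal form of the product modulo J
```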
So the theorem above allows us to forget about the geometry, if you want: we can just define the quotient, and the theorem tells us that the quotient looks the way it should look. It has its nice basis; it does not, for example, collapse to zero or anything like that. So a lot of papers on the subject have, in a way, been using geometry to derive combinatorial consequences; with these theorems, that is not necessary, at least if all you want are the consequences.

So there is more to be said. Let's look at this basis one more time. Since it is a basis, we can take the dual basis: for each partition μ that fits into the rectangle, we can take the linear form on S/I that sends every element to its s̄_μ-coordinate with respect to this basis. Moreover, for every k-partition ν, we define the complement of ν. Remember, ν fits into the rectangle; if you remove ν from the rectangle and turn the result around by 180 degrees, you get a new partition, and that is the complement ν^∨ of ν. (This is how you can define it formally.) And finally, for any three k-partitions α, β, γ, you can look at the coefficient of s̄_{γ^∨} in the product s̄_α s̄_β, in the quotient ring of course, not in the actual ring of symmetric polynomials. These numbers generalize the Littlewood–Richardson coefficients and also the three-point Gromov–Witten invariants. And the theorem is that they have the same S_3-symmetry as the Gromov–Witten invariants: I can basically permute α, β, and γ as I want without changing the coefficient. Even more symmetrically, I can rewrite it as the coefficient of the rectangle ω in the product of all three Schur polynomials. And there is a way to restate that as well.

So what about some other bases? And actually, I must say I have seen surprisingly little about bases of the cohomology ring, even for classical cohomology. Everybody says that the Schubert classes form a basis, but then what else? I haven't seen it written down. Well, here are a couple more. The h̄_λ also form a basis, and the transition matrix is unitriangular with respect to a reasonable order, but I don't know how to describe that order; it is not as if I had some analogue of the Kostka numbers handy. There is actually a formula for expanding a single h̄_m in terms of the Schubert classes, except that if this m is too big, then the Schur polynomials themselves will not fit into the rectangle, and then you will have to expand them further, and this is going to get messy. But it is interesting that when you expand an h, you get Schur polynomials of hook shape.

Okay, here is something else. Remember the Pieri rule for symmetric polynomials. It tells you how to multiply a Schur polynomial by a complete homogeneous symmetric polynomial, by an h_m. The result is a sum of Schur polynomials s_ν, where ν ranges over all k-partitions such that ν over the old partition λ is a horizontal m-strip. So what is a horizontal m-strip? Basically, it is all possible ways to add m boxes to your Young diagram in such a way that no two boxes are in the same column and you still get a partition in the end. Another way to describe it: the entries of the two partitions interleave each other, ν_1 ≥ λ_1 ≥ ν_2 ≥ λ_2 ≥ and so on. So this Pieri rule works in the ring of symmetric polynomials, and, by passing to the quotient, it also works in classical cohomology.
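Here is a small Python sketch of the classical Pieri rule in exactly this interleaving form (my own; pieri_terms is a made-up name). It lists the k-partitions ν with |ν| = |λ| + m and ν_1 ≥ λ_1 ≥ ν_2 ≥ λ_2 ≥ ..., which index the Schur terms of h_m · s_λ in k variables.

```python
from itertools import product as cartesian

def pieri_terms(lam, m, k):
    # Pad lam with zeros to length k.
    lam = list(lam) + [0] * (k - len(lam))
    # nu_1 can grow by at most m; each later nu_i is squeezed
    # between lam_i and lam_{i-1}, which forces a horizontal strip.
    ranges = [range(lam[0], lam[0] + m + 1)] + \
             [range(lam[i], lam[i - 1] + 1) for i in range(1, k)]
    return [nu for nu in cartesian(*ranges) if sum(nu) == sum(lam) + m]

print(pieri_terms([2, 1], 2, k=2))
# [(3, 2), (4, 1)]: in two variables, h_2 * s_{(2,1)} = s_{(3,2)} + s_{(4,1)}
```

To work inside P_{k,n}, one would additionally cap ν_1 at n − k; the deformed rule discussed next is about exactly the terms that violate this cap.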
Quantum cohomology has a more complicated Pieri rule. Now, what about this generalized quotient ring? Well, it turns out that it does have a Pieri rule too. This part is familiar; this is just the normal Pieri rule. However, the normal Pieri rule will often give you some partitions that no longer fit into the box, into the rectangle, and then you have to reduce modulo the ideal, and you get some kind of decay products, so to speak; error terms, maybe. And here is a way to actually write these error terms down explicitly. All of these ν's fit into the box, and these coefficients are Littlewood–Richardson coefficients. I wonder if you have ever seen a Pieri rule that involves Littlewood–Richardson coefficients; I hadn't before, so this is kind of new. It also involves the a_i's, but it only involves them linearly, which is also a bit weird. And yes, it generalizes the Pieri rule of quantum cohomology due to Bertram, Ciocan-Fontanine, and Fulton. However, notice that the Littlewood–Richardson coefficients here can be bigger than one, so this is not a multiplicity-free rule. And this is an example, for fairly reasonable n and k.

Now you might wonder: how about multiplying by an e instead of an h? In classical symmetric function theory, there is a complete symmetry between the two. In cohomology, there is still a symmetry; it is a little bit trickier because you have to swap k with n − k, but there is still a symmetry. Even in quantum cohomology, the symmetry is still there; Postnikov, I think, has shown it. Well, you can forget about that symmetry now, because at least it is not obvious whether it exists. The thing is, in this Pieri rule for the h's, you have linear terms: only one a_1 here, only a_2 here, only a_3 here. But here you get squares, for example. And I have not been able to make any sense of this formula; I don't even have a conjecture for what the general rule would be, if there is a Pieri rule for the e's. And here is another example, for those who might think it should at least be easy to multiply by e_k. That is kind of the extreme case, where you are just adding a column. Well, you are adding a column, but then you are reducing modulo the ideal, and reducing can give any kind of mess. So even this case is not easy.

Now, I have been talking about reducing. In quantum cohomology, there is a pretty nice trick for reducing any Schur polynomial so that it fits into the box: in a way, you basically just keep removing certain kinds of hooks from it until it fits into the box, and what you get is a Schur polynomial multiplied by some sign and some power of q. Now, is there such a thing for the generalized quotient? So here is an example where we are reducing (4,4,3). Now, (4,4,3) is not a far cry from the rectangle; the rectangle here is (3,3,3), so (4,4,3) has just two extra cells. If you get such a messy result from that, it feels kind of hopeless; it feels like "it gets messy" is all you can say.
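Before showing how that mess gets controlled, a quick aside on the earlier remark that the deformed Pieri rule is not multiplicity-free: the Littlewood–Richardson coefficients appearing in it are honest multiplicities, and Sage can compute them directly. The standard example c^{(3,2,1)}_{(2,1),(2,1)} = 2 (my own check, not a number from the talk):

```python
# Quick SageMath check: an LR coefficient bigger than 1.
Sym = SymmetricFunctions(QQ)
s = Sym.schur()
print((s[2, 1] * s[2, 1]).coefficient([3, 2, 1]))   # 2
```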
Well, it turns out that this mess can actually be controlled. So here is a reduction rule, or rather, I call it a straightening rule: it is one step of the reduction, and if you want to fully reduce something, you have to apply it many times. Given a k-partition μ that does not fit into the box, so its first entry is bigger than n − k, I take all k-tuples whose first entry is μ_1 − n, that is, my old first entry lowered by n, and whose other entries are each either increased by 1 or kept fixed. So 2^{k−1} many terms, which are usually not partitions: they usually have a very negative first entry, and even the other entries may wiggle instead of being decreasing, so they are not partitions at all. Then my Schur polynomial, in this quotient, equals this sum. Oh, and I don't sum over the entire set; I sum only over a certain slice of it, of a given size (together with this term here, it would be the whole set). So this is weird. And this is the reason why I introduced Schur polynomials for non-partitions: as I said, these λ's are usually not partitions, but you can straighten them out. And again, you see only linear terms in the a's. However, in the general case these terms will still not fit into the rectangle, and then you have to apply the rule many times, and that is how you get all the higher-degree terms. So I call it a rim hook algorithm, but frankly, I don't know what the rim hooks here are: I understand this only in terms of the tuples, not in terms of Young diagrams. I suspect that the old method, I think by King, with the slinkies can say something about it; unfortunately, I don't quite understand that method, so maybe somebody here can help me with that.

Okay. Now for the holy grail; and the holy grail is still well in its place. Do we have non-negativity? That is, are the structure constants non-negative? This has to be interpreted correctly, because we have all these a_i's now, so the structure constants will be polynomials in the a_i's. And if you look here, you will see, for example, a lot of negative signs. But since they are polynomials in the a_i's, you may think that you could put some signs in front of the a_i's, and then it will be non-negative. Well, apparently that is true. So I relabel the a_i's: I reindex them as b_i's with signs in front of them. I pick three partitions λ, μ, ν that fit into the rectangle. The claim is that the coefficient of one s̄ in the product of two other s̄'s, viewed as a polynomial in the b_i's, has a predictable sign. I am not going to say it is positive, because the sign is there; you would also have to change the signs of all the s̄'s according to their degrees. This has been checked using Sage for all n up to eight. I am not fully sure I should believe it, because honestly I would like to see a few more n's, but it gets harder and harder to check. And if it is true, it would generalize the positivity of the Gromov–Witten invariants; seeing that that positivity has only recently been proved combinatorially, this would probably not be very easy to prove combinatorially either. And this is of course one of the reasons why I think this ring could have a geometric meaning.

So, as I already mentioned, there is another basis: you can take the monomial symmetric polynomials and take their projections, and that is still a basis. I don't know what the structure constants are; I'm just saying it is a basis, which is not hard to prove using triangularity again. And here is something that is not a basis: the power sums do not form a basis. And that is not because of positive-characteristic issues; even in characteristic zero this is not a basis, and even if all the a_i are zero, this is not a basis.
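That failure is easy to see in SageMath. Here is a sketch (mine, not the talk's code) for k = 2, n = 4 in the classical case a_i = 0, using the standard fact (not stated in the talk) that s_λ projects to 0 in classical cohomology unless λ fits into the k × (n−k) box: the matrix of box-coefficients of the power sums p_μ, for μ inside the box, is singular.

```python
# SageMath sketch: the projected power sums are not a basis for k=2, n=4.
Sym = SymmetricFunctions(QQ)
s, p = Sym.schur(), Sym.powersum()
k, d = 2, 2   # the box is k x d with d = n - k
box = [mu for m in range(k * d + 1)
          for mu in Partitions(m, max_length=k, max_part=d)]
# Row mu: the coefficients of p_mu on the Schur functions inside the box.
M = matrix(QQ, [[s(p.basis()[mu]).coefficient(lam) for lam in box]
                for mu in box])
print(M.det())   # 0: the images of the p_mu are linearly dependent
```

Indeed p_{(2,1)} = s_{(3)} − s_{(1,1,1)}, and both of those Schur functions die in the 2 × 2 box, so the projection of p_{(2,1)} is already zero here.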
I'm kind of curious as to what is going on there, what kind of subring we get, but I don't know. I haven't actually looked at the forgotten symmetric functions; there is a bunch of other bases that could be checked.

So what are reasonable questions? One is whether S/I has a geometric meaning, and if not, why do we have all these nice properties? Because for an ideal like this, there is no guarantee that it would behave nicely in any way: in theory, it would have some weird big Gröbner basis, and reduction would give you big messy expressions with no patterns. The next question is partly resolved; it goes back to something Christian Krattenthaler asked in one of his papers: what happens if you replace the h's in the ideal by power sum symmetric polynomials? A high school student working in the MIT PRIMES project last year proved that the basis theorem still holds, so the Schur basis I mentioned is still a basis of that quotient. I don't know any further properties so far; I don't think there is an S_3-symmetry, and I haven't looked at Pieri rules. Well, the other catch-all question is which other properties of quantum cohomology generalize. I have tried Postnikov's curious duality and the hidden cyclic symmetry, and I have tried the duality between k and n − k; they don't seem to generalize. Maybe somebody with more intuition for the geometry can say how I should tweak them to make them generalize, but if you just follow the algebra, it does not seem to work out. Is there an analogous construction for the flag variety? Now, I have to admit that the quantum cohomology of the flag variety already has a lot of deformation parameters; I don't quite see how to deform it further, so I am not actively expecting something to happen here. Is there an equivariant analogue? I started working on this and haven't gotten very far. And what about quotients of quasisymmetric polynomials? This is a wild card: sometimes you get something interesting there, sometimes you just don't.

There is also an S_k-module structure on the quotient. This time I am talking about the quotient of all polynomials, not of the symmetric polynomials, because the symmetric polynomials carry a trivial S_k-action. But if I quotient all polynomials, what is the S_k-module structure? Well, it turns out (and I have been too lazy to write this up for over a year now) that this is the S_k-module you would basically expect: a power of the regular representation. If you have seen diagonal harmonics, or not even diagonal, just regular harmonics, you would not be surprised by this.

And finally... how much time do I have left? Until 20 past? Okay, 10 or 13 minutes, depending; then I can actually talk about the proofs. So let me quickly talk about a question that just feels natural; I don't know how natural it really is. Instead of looking at symmetric polynomials in k variables, I can look at symmetric functions in infinitely many variables. This is like the grown-up version, except it is easier. And I am going to use the same notations for them, except I will put them in boldface, so that I don't confuse them with the polynomials. Now, the symmetric polynomials can be viewed as the symmetric functions modulo all the higher e's, the e_m with m > k, which would be zero in the symmetric polynomials. So S/I, being a quotient of a quotient, is just the quotient of Λ by all those things together: you get it from Λ by quotienting out both the higher e's and the differences h_{n−k+i} − a_i.
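A one-line SageMath check (mine) of the fact just used, that the higher e's die in k variables while the h's keep going:

```python
Sym = SymmetricFunctions(QQ)
e, h = Sym.elementary(), Sym.homogeneous()
print(e[4].expand(3))        # 0: no square-free degree-4 monomial in 3 variables
print(h[4].expand(3) != 0)   # True: h_4 survives in 3 variables
```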
Now, this looks a bit asymmetric; in fact, it is asymmetric in two ways. You have infinitely many e's here and only finitely many h's; but also, you are deforming the h's and not deforming the e's. Now, I cannot do anything about the first asymmetry; I have tried. It feels like you should be able to mix h's and e's here arbitrarily, but it is not clear how. If you think about it, you don't want to mix them randomly; that would give you very random rings. But the second asymmetry you can just try to fix by also deforming the e's. So here is what comes out. I am switching to boldface letters here, because these will be elements of Λ; they will be symmetric functions. Let a_1, ..., a_k and b_1, b_2, b_3, and so on, be symmetric functions with appropriate degree conditions. Then this quotient is a free k-module with the same basis that we know and love already from cohomology: the same Schubert classes, the Schur functions that fit into this rectangle. So now you have infinitely many parameters. I do not know much more about this quotient. There is probably some really low-hanging fruit here, like other bases. Honestly, I would not expect much to hold for it, because this one really does not have any hint of a geometric meaning; but it just feels like a nice thing to have seen.

So let me say a few words about the proofs (well, except for the S_k-action). Main ideas: first of all, the quotient of all polynomials, not the symmetric ones, can be done with Gröbner bases. In hindsight, this has already been done, or at least sketched, by Conca, Krattenthaler, and Watanabe. And about Gröbner bases, for everybody who is a bit scared of them: people usually talk about Gröbner bases over a field. The existence theorem does require a field; but if you just want to use a given Gröbner basis, if you already know that some set is a Gröbner basis, then you only need a commutative ring. It is like with vector spaces: for a vector space to have a basis, you need a field, but if you have a free module with a basis over a ring, you can use that basis; you don't need the ring to be a field anymore. So, in the same way, this boils down to actually finding an explicit Gröbner basis for the ideal J. Using that and Jacobi–Trudi, you can show that the symmetric-polynomial quotient also has the right basis. And the rest, more or less, is about computing with symmetric functions.

So I will probably not go through the Gröbner bases here; it would take a long time, and these are all just-in-case slides. But here is the Gröbner basis of the ideal J. Here x_{i..k} is shorthand for the variables x_i, x_{i+1}, ..., x_k. So the Gröbner basis does not use the complete homogeneous polynomials in all of the variables; it uses them only in certain tail segments x_i, ..., x_k of the variables. If not for the deformation parameters, this would already be a Gröbner basis. Well, now you have to deform it, and you deform it in this kind of strange way, with the a_j's as coefficients and with e's, elementary symmetric polynomials, in the other variables. This is with respect to the degree-lexicographic order on the monomials. And so you have a very nice Gröbner basis with really nice leading terms, namely x_i^{n−k+i}; then Macaulay's basis theorem tells you what the basis of the quotient ring is.
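For small cases one can watch all of this happen in SageMath. Below is a sketch (my own) with the a_i specialized to arbitrary constants, so the degree condition deg a_i < n − k + i holds trivially: if the claimed Gröbner basis statement is right, the reduced Gröbner basis in the degree-lexicographic order has leading monomials x_i^{n−k+i}, and the resulting Macaulay normal basis is exactly the adapted sub-staircase.

```python
# SageMath sketch: Groebner basis of J for k=2, n=4 with constant a_i.
from itertools import combinations_with_replacement

k, n = 2, 4
R = PolynomialRing(QQ, k, 'x', order='deglex')
x = R.gens()
a = [1, -2]   # arbitrary constants

def h_poly(m):
    return sum(prod(c) for c in combinations_with_replacement(x, m))

J = R.ideal([h_poly(n - k + i) - a[i - 1] for i in range(1, k + 1)])
print([g.lm() for g in J.groebner_basis()])   # expect x0^3 and x1^4
print(J.normal_basis())   # the 12 monomials x0^i * x1^j with i < 3, j < 4
```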
Now, how do you jump from all polynomials down to symmetric polynomials? Okay, so using Jacobi–Trudi, you can reduce each Schur polynomial that does not fit into the rectangle to Schur polynomials that do fit into the rectangle; it is a bit of a straightening rule. So the Schur polynomials that fit into the rectangle span the quotient. Now, we need to prove that this family is linearly independent, and as usual, this is the harder part; it is like PBW. However, we already know from Artin that the sub-staircase monomials span the polynomials as a module over the symmetric polynomials. Combining these two spanning statements gives that the products (a sub-staircase monomial times a Schur polynomial fitting into the rectangle) span the quotient of all polynomials. But we already have a basis for this quotient; that is what we did on the previous slide with the Gröbner basis. So, what if you have a k-module with a basis and a spanning family of the same finite size? It is well known, or an easy exercise, that the spanning family must then also be a basis. So we conclude that this family is a basis of the big quotient, and therefore its subfamily is a basis of the small quotient. So it is a bit of a ping-pong argument. If you are into symmetric functions, it feels natural to just work with symmetric functions, or symmetric polynomials, all the time; but I could not prove this staying inside the symmetric polynomials. If you actually try to find a Gröbner basis of I inside S, I don't think there is a good answer; I think the Gröbner bases become very ugly very soon. So you climb up into the ring of all polynomials, you look at the quotient there, and then you argue using this fact that a spanning set of the same size as a basis must also be a basis. I think this is a trick that is a good takeaway if you don't know it; I feel like it can be useful in many other situations. So basically, if you don't find a nice Gröbner basis, maybe it helps to go into the bigger ring.

And the rest of the proofs, as I said, is about a lot of computing with symmetric functions. I am just going to briefly mention this identity, commonly ascribed to Joseph Bernstein, though it apparently first appeared in the book by Zelevinsky. It basically says that if you have a Schur function and you want to insert a new entry at the front (preferably one at least as big as the first entry, of course, so that you still have a partition afterwards), you can achieve this by a certain operator which is made out of skewing and multiplying: skewing by e's and multiplying by h's. This operator is now known as the Bernstein operator; I think Garsia has called it the "Schur row adder", for an obvious reason. So this is, in a way, the driving force, but there is also a lot of computation with Pieri rules and Jacobi–Trudi matrices.
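Here is a small SageMath sketch of that operator (mine; what it implements is the standard formula B_m f = Σ_{i≥0} (−1)^i h_{m+i} · (e_i^⊥ f), with the skewing e_i^⊥ realized by skew_by):

```python
Sym = SymmetricFunctions(QQ)
s, h, e = Sym.schur(), Sym.homogeneous(), Sym.elementary()

def bernstein(m, f):
    # B_m f = sum over i >= 0 of (-1)^i * h_{m+i} * (f skewed by e_i).
    f = s(f)
    total = s.zero()
    for i in range(f.degree() + 1):   # skewing by e_i kills f once i > deg f
        hpart = s.one() if m + i == 0 else s(h[m + i])
        epart = f if i == 0 else f.skew_by(e[i])
        total += (-1) ** i * hpart * epart
    return total

print(bernstein(3, s[2, 1]) == s[3, 2, 1])   # True: a row of length 3 is prepended
```

Iterating bernstein(λ_1, bernstein(λ_2, ...)) on 1 builds s_λ row by row, which is the sense in which it is a creation operator.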
Okay, so much for this, and I hope to actually have two papers out soon enough about it, not just a big preprint with a lot of messy proofs. So thanks are due to Sasha Postnikov, with whom the whole project started back at MIT; to Gérard Duchamp, thank you for inviting me again; and to a lot of people who have contributed ideas. Thank you all.

Thank you, dear Darij. Now time for questions, remarks, or comments, maybe? I do have a basic question. Maybe I missed that point at the beginning, but Darij, please, can you explain to us again the connection between the Littlewood–Richardson coefficients and the Gromov–Witten invariants? That was at the very beginning, I believe. Because if this is happening for the Littlewood–Richardson coefficients, you may have a similar rule for another invariant, for instance the Kronecker coefficients, by changing the setup. So I think this connection may be fruitful for doing other things; this is what I want to know.

So, let's see. This number is the coefficient of a Schur polynomial in the product of two Schur polynomials, except that you are working in the quotient ring. But if you are working in classical cohomology and the a-parameters are zero, then the quotient ring is basically just a little piece of the ring of symmetric functions; in that case there are no error terms, and nothing is contaminated by the a's. So, if you choose α, β, and γ appropriately, you can get any Littlewood–Richardson coefficient as such a coefficient.

But this is only happening for the deformed Schur polynomials, right? I mean, the ones with the bars, not the classical ones, as you say?

So if you take the a's to be zero, then the bars don't really change that much; basically, in sufficiently low degrees, everything behaves as before.

Okay. Okay, thank you. Are there other questions?

Yes: do you see another way of deforming, so that the duality between k and n − k is respected?

So, this is tricky. In theory, I could well imagine that if you properly somehow... no, okay, this is probably nonsense. But the thing is, if I have k variables, I have k parameters; if I have n − k variables, I have n − k parameters. So it is not quite clear. I probably should have mentioned that all of this looks kind of similar to what are called factorization algebras and splitting algebras in enumerative geometry, or even in constructive Galois theory: algebras where you just pick some polynomial and declare it to be a product of two polynomials, and the indeterminate coefficients of those two polynomials become your new indeterminates. But I don't know whether this is in any way isomorphic to that, or a particular case, or a generalization; they just feel alike.

Yeah, actually, my comment is: sorry, I missed half of your talk, but still, as I wrote you in the chat, there are in fact infinitely-many-parameter deformations of this algebra with this basis, and they remain Frobenius algebras; this is what big quantum cohomology gives. And k parameters are actually natural: we have the Chern classes of the tautological k-dimensional bundle, and n − k would correspond to the Chern classes of the tautological (n−k)-dimensional bundle on the Grassmannian. So there definitely are natural k-parameter families of deformations of quantum cohomology coming from the big ring.

Thank you. Do you think they could give the same algebra as my generalization?

Why not? I would be surprised to see two different deformations of the same algebra with the same number of parameters. Of course, it is a question of how to interpret it combinatorially.

Darij, I have a quick question. So the deformations that you consider all come from tweaking the Jacobi–Trudi identity, basically perturbing the h's? Yes. Is there a way of doing deformations where you exploit, say, the Giambelli identity instead? So, for instance, instead of perturbing the h's, you perturb hook shapes. Or, put differently: can your constructions be realized through a perturbation of Giambelli?

Good question. And I'm thinking about this more in the case of Schur functions, the infinite case, because there you take all of the... yeah, no, I don't even know this.
Do you know this in classical cohomology? If you don't deform anything, if you just quotient by Giambelli, by hooks instead of by h's?

Oh, I think I remember something by Maza and Weibel, maybe, but I think it is not quite that. Is it hooks? Something similar, with Schur functions. I need to read that one again.

Maybe a small question. I probably missed some part of your talk, but when you consider this deformation, you work with commutative Gröbner bases. Yes. In spite of this deformation, do you see any place for noncommutative Gröbner bases there? Maybe you can work directly in the deformed situation and construct some noncommutative Gröbner bases to get the bases which you need to work further. Of course, it is probably difficult to plan for right away, but it is something that could give some simplification, some other way to do it. I don't know.

I'm curious as to how noncommutativity would simplify things here, but I have to say right away that I don't know what noncommutative Schur functions would be. I mean, there are several versions of them, but none of them feels particularly canonical.

Yeah, of course, it should be chosen correctly, appropriately. So it is difficult to say; it is just a suggestion to try. Because if you work straight away in the quotient, you can still do the same as when you work with commutative monomials, and maybe it could be easier; not necessarily, but it could be. There is one version with plactic classes inside the free algebra.

Oh, okay. A version of Schur polynomials... sounds like free ribbons. I don't remember the first authors, but I know that Schützenberger and Lascoux worked a lot on this.

Other comments or remarks? Then we can thank you, Darij.