Yes, okay. So welcome, everybody, to today's Schubert seminar. I'll start with some announcements. First of all, we are contemplating having some early career talks by early career Schubert calculators, similar to last fall. Just like last year, nominations are very welcome, with the usual rules: everybody can nominate anybody, including themselves and their students and so on. Should we get more nominations than we can accommodate, then we will give preference to people on the job market. But last year we didn't get more than we could accommodate, so please just keep the nominations coming. So then, today we are very happy to have Matthew Samuel, a Rutgers PhD, speak about Molev–Sagan type formulas for double Schubert polynomials. So please take it away, Samuel. Matthew, sorry.

All right. I'd like to thank the organizers for inviting me, first; I'm very happy to be here. My slides, I'm going to apologize in advance, are very detailed; they're intended to be looked at afterwards, so you'll be able to reproduce everything from scratch from the slides. Hang on a second. I'm going to start with an introduction to Schubert polynomials and related topics, so we start at the very beginning. We're going to start by defining the polynomial ring in infinitely many variables: the x variables are indexed by the positive integers, and since we're not going to write the indices all the time when we write the name of the ring, we're just going to call this ring Z[x]. S_∞ is the group of permutations of the positive integers that fix all but finitely many elements. We can consider this as the union of the S_n over all n by saying that S_n is the subgroup of S_∞ such that w is in S_n if w(i) = i for all i > n. The adjacent transpositions are denoted s_i; these exchange i and i+1. Other transpositions, not necessarily adjacent, are denoted t_{ij}; these exchange i and j.
The length of a permutation w in S_∞ is denoted ℓ(w); it's the number of inversions, which is the same as the length of a reduced expression. We're considering the action of S_∞ on Z[x] that just permutes the indices: if we apply the permutation w to the variable x_i, we get x_{w(i)}. To give a very explicit example, take p = x_1^2 + x_2^2 + x_2 x_3. Then if w = 1432, applying it gives x_1^2 + x_4^2 + x_4 x_3.

Next I'm going to define the divided difference operators. Again, let s_i be the adjacent transposition for some positive integer i. We define a divided difference operator ∂_i, which sends polynomials to polynomials, by setting ∂_i(p) = (p - s_i p)/(x_i - x_{i+1}): we take p, subtract s_i p, and divide by the difference of the variables. Just to give an example, applying ∂_1 to x_1^3 gives (x_1^3 - x_2^3)/(x_1 - x_2), which is x_1^2 + x_1 x_2 + x_2^2. Divided difference operators satisfy certain important relations: in particular, they square to zero, ∂_i and ∂_j commute if i and j differ by more than one, and we have the braid relation ∂_i ∂_{i+1} ∂_i = ∂_{i+1} ∂_i ∂_{i+1}. Because of these relations, we can define composite divided difference operators, products of these, indexed by S_∞. For u in S_∞, we write u as a minimal product of adjacent transpositions, that is, a reduced expression; we take the divided difference operators associated to the same indices and take the product in the same order to get ∂_u. Because of the relations, this does not depend on the choice of expression. So we have defined divided difference operators ∂_u for any u in S_∞.
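Since the slides are meant to let you reproduce everything, here is one concrete way to implement ∂_i. This is a sketch of my own (the helper name `dd` and the dictionary encoding are not from the talk): a polynomial in x_1, ..., x_4 is a dict mapping exponent tuples to integer coefficients, and ∂_i is applied monomial by monomial using the identity (x_i^a x_{i+1}^b - x_i^b x_{i+1}^a)/(x_i - x_{i+1}) = ±(sum of x_i^k x_{i+1}^{a+b-1-k} for min(a,b) <= k < max(a,b)).

```python
N = 4  # track the variables x_1..x_4

def dd(i, p):
    """Divided difference: dd(i, p) = (p - s_i p) / (x_i - x_{i+1}).

    p is a polynomial encoded as {exponent tuple: integer coefficient}."""
    out = {}
    for exp, c in p.items():
        a, b = exp[i - 1], exp[i]
        if a == b:
            continue  # monomial symmetric in x_i, x_{i+1}: contributes 0
        sign = 1 if a > b else -1
        for k in range(min(a, b), max(a, b)):
            e = list(exp)
            e[i - 1], e[i] = k, a + b - 1 - k
            e = tuple(e)
            out[e] = out.get(e, 0) + sign * c
            if out[e] == 0:
                del out[e]
    return out

# The example from the talk: dd(1, x_1^3) = x_1^2 + x_1 x_2 + x_2^2.
assert dd(1, {(3, 0, 0, 0): 1}) == {(2, 0, 0, 0): 1, (1, 1, 0, 0): 1, (0, 2, 0, 0): 1}

# The relations: each dd squares to zero, and the braid relation holds.
q = {(2, 1, 1, 0): 3, (0, 2, 0, 1): -1}  # an arbitrary test polynomial
assert dd(1, dd(1, q)) == {}
assert dd(1, dd(2, dd(1, q))) == dd(2, dd(1, dd(2, q)))
```

Because ∂_i kills anything symmetric in x_i and x_{i+1}, monomials with equal adjacent exponents can simply be skipped; the quotient in the definition is always exact, so no rational arithmetic is needed.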
Divided difference operators have a product rule, which is also called the Leibniz formula: if you apply ∂_w to a product pq, it expands as a sum over all u in S_∞ of ∂_u applied to p times a new operator ∂_{u,w} applied to q. This ∂_{u,w} is a skew divided difference operator, which depends on two permutations, and I believe it was originally studied by Macdonald. My dissertation was entitled "The Leibniz Formula for Divided Difference Operators," so you're going to be hearing a lot about the Leibniz formula, but the dissertation was not restricted solely to type A.

So now I'm going to define Schubert polynomials explicitly. First, we define a permutation w_0(n), which is the longest element of S_{n+1}, or the reversal permutation, and we associate to it a Schubert polynomial S_{w_0(n)}(x), the product as i goes from 1 to n of x_i^{n+1-i}. (That little squiggle on the slide is indeed an S; it's just in the Fraktur font.) I gave an example for n = 3: S_{4321}(x) = x_1^3 x_2^2 x_3. In general this is a staircase monomial with decreasing powers. To define a general Schubert polynomial, we fix n and consider permutations u in S_{n+1}, and the formula defining the Schubert polynomial S_u(x) is ∂_{u^{-1} w_0(n)} applied to S_{w_0(n)}(x). Now this may, on the face of it, appear to depend on n, but it does not: we obtain the same Schubert polynomial if we start with a larger symmetric group. So Schubert polynomials can indeed be considered to be indexed by the group S_∞. I'll mention that they were introduced by Lascoux and Schützenberger in 1982. Some properties of Schubert polynomials: they form a Z-basis of the polynomial ring; S_w(x) is a homogeneous polynomial of degree ℓ(w) with non-negative integer coefficients; and we have a formula that allows us to express other polynomials in terms of Schubert polynomials.
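The definition just given turns into a very short recursion: walk from u up to the longest element along ascents, then apply one divided difference per step back down. A self-contained sketch (the helper names `dd` and `schubert` are mine, not from the talk):

```python
N = 4  # track the variables x_1..x_4

def dd(i, p):
    """Divided difference (p - s_i p)/(x_i - x_{i+1}), monomial by monomial."""
    out = {}
    for exp, c in p.items():
        a, b = exp[i - 1], exp[i]
        if a == b:
            continue
        sign = 1 if a > b else -1
        for k in range(min(a, b), max(a, b)):
            e = list(exp)
            e[i - 1], e[i] = k, a + b - 1 - k
            e = tuple(e)
            out[e] = out.get(e, 0) + sign * c
            if out[e] == 0:
                del out[e]
    return out

def schubert(w):
    """Schubert polynomial of a permutation in one-line notation,
    using S_{w s_i} = dd(i, S_w) whenever w(i) > w(i+1)."""
    w = list(w)
    n = len(w)
    if w == list(range(n, 0, -1)):  # longest element: staircase monomial
        return {tuple(max(n - 1 - i, 0) for i in range(N)): 1}
    i = next(j for j in range(n - 1) if w[j] < w[j + 1])  # an ascent
    w[i], w[i + 1] = w[i + 1], w[i]                       # go up one step
    return dd(i + 1, schubert(w))

# S_{4321} is the staircase x_1^3 x_2^2 x_3; S_{213} = x_1; S_{132} = x_1 + x_2.
assert schubert((4, 3, 2, 1)) == {(3, 2, 1, 0): 1}
assert schubert((2, 1, 3)) == {(1, 0, 0, 0): 1}
assert schubert((1, 3, 2)) == {(1, 0, 0, 0): 1, (0, 1, 0, 0): 1}
```

Stability is visible here too: computing 1324 inside S_4 returns the same dict as 132 inside S_3.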
Applying ∂_u to S_v(x) and setting x to zero gives us 1 if u = v and 0 otherwise. Since the Schubert polynomials form a basis, we can define their structure constants: if we take the product S_u(x) S_v(x), we can re-express it in the basis of Schubert polynomials with coefficients c_{u,v}^w. These structure constants are known to be non-negative, and they count points of intersection of Schubert varieties. However, at this time no combinatorial positive formula is known. There are many partial results, and I'm going to mention some of them, but I think many here would agree that at this point it looks like finding a complete solution is a hopelessly difficult problem.

I'll mention the importance of Schubert polynomials: they're the unique polynomials that represent Schubert classes in the cohomology ring of every complete flag variety. In general, the cohomology ring of the complete flag variety is isomorphic to the ring of coinvariants of the symmetric group; it's the quotient of the polynomial ring in n variables by the ideal generated by the homogeneous symmetric polynomials of positive degree. Of course we can represent any element of the quotient by infinitely many polynomials, but the Schubert polynomials represent the Schubert classes, and the same polynomial represents the same Schubert class even as we add more variables.

So now let's get into some results: positive formulas for multiplying Schubert polynomials. An early one, though it was not phrased in terms of Schubert polynomials since they weren't even defined yet, is the Chevalley–Monk formula. It's a formula for multiplying an arbitrary Schubert polynomial S_u(x) by a degree one Schubert polynomial S_{s_k}(x), and it expresses the result as a sum of S_{u t_{ij}}(x), where the transposition t_{ij} ranges over all i ≤ k and j > k such that ℓ(u t_{ij}) is exactly ℓ(u) + 1.
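Here is a machine check of one Chevalley–Monk instance, as a self-contained sketch (helper names mine). For u = 132 and k = 1, the admissible transpositions are t_{12} and t_{13}, since 132·t_{12} = 312 and 132·t_{13} = 231 both have length 2 = ℓ(u) + 1; so the rule predicts S_132 · S_{s_1} = S_312 + S_231.

```python
N = 4  # track the variables x_1..x_4

def dd(i, p):
    """Divided difference (p - s_i p)/(x_i - x_{i+1}), monomial by monomial."""
    out = {}
    for exp, c in p.items():
        a, b = exp[i - 1], exp[i]
        if a == b:
            continue
        sign = 1 if a > b else -1
        for k in range(min(a, b), max(a, b)):
            e = list(exp)
            e[i - 1], e[i] = k, a + b - 1 - k
            e = tuple(e)
            out[e] = out.get(e, 0) + sign * c
            if out[e] == 0:
                del out[e]
    return out

def schubert(w):
    """Schubert polynomial via the descent recursion S_{w s_i} = dd(i, S_w)."""
    w = list(w)
    n = len(w)
    if w == list(range(n, 0, -1)):
        return {tuple(max(n - 1 - i, 0) for i in range(N)): 1}
    i = next(j for j in range(n - 1) if w[j] < w[j + 1])
    w[i], w[i + 1] = w[i + 1], w[i]
    return dd(i + 1, schubert(w))

def add(p, q):
    out = dict(p)
    for e, c in q.items():
        out[e] = out.get(e, 0) + c
        if out[e] == 0:
            del out[e]
    return out

def mul(p, q):
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            out[e] = out.get(e, 0) + c1 * c2
            if out[e] == 0:
                del out[e]
    return out

# Monk: S_132 * S_{s_1} = S_312 + S_231, i.e. (x_1 + x_2) * x_1 = x_1^2 + x_1 x_2.
lhs = mul(schubert((1, 3, 2)), schubert((2, 1, 3)))
assert lhs == add(schubert((3, 1, 2)), schubert((2, 3, 1)))
```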
I give a quick example there. Next I mention Schur polynomials, which were studied much earlier than Schubert polynomials but are a special case of Schubert polynomials. The Schubert polynomial S_u(x) is a Schur polynomial in k variables if u has exactly one descent, and it is at position k; this polynomial is symmetric in the variables x_1 through x_k. So Schur polynomials in any given number of variables are Schubert polynomials, and if two Schur polynomials have the same number of variables we have a formula for multiplying them positively: the Littlewood–Richardson rule. More generally, Kohnert in 1997 found a formula for multiplying an arbitrary Schubert polynomial by a Schur polynomial, provided that the Schur polynomial has the same number of variables as the Schubert polynomial or more. Though it is actively being worked on, it is currently still an open problem to find a formula for multiplying an arbitrary Schubert polynomial by an arbitrary Schur polynomial.

I'll mention Sottile's Pieri formula: in 1996 Sottile published a paper with a Pieri formula, really two formulas, one for multiplying an arbitrary Schubert polynomial by an elementary symmetric polynomial and one for multiplying an arbitrary Schubert polynomial by a complete homogeneous symmetric polynomial. Sottile's proof was geometric, and in the same paper he provided a rule for multiplying an arbitrary Schubert polynomial by a Schur polynomial corresponding to a partition of hook shape. I'll also mention puzzles. The first puzzle rule, for the Schur polynomial case, is actually more general than that; I'll get into that later. It was proved by Knutson and Tao in 2003.
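As a sanity check on that specialization (a self-contained sketch, helper names mine): the permutation 1423 has exactly one descent, at position 2; the associated partition works out to (2,0) by my computation, so S_1423 should be the Schur polynomial h_2(x_1, x_2) = x_1^2 + x_1 x_2 + x_2^2, symmetric in x_1 and x_2.

```python
N = 4  # track the variables x_1..x_4

def dd(i, p):
    """Divided difference (p - s_i p)/(x_i - x_{i+1}), monomial by monomial."""
    out = {}
    for exp, c in p.items():
        a, b = exp[i - 1], exp[i]
        if a == b:
            continue
        sign = 1 if a > b else -1
        for k in range(min(a, b), max(a, b)):
            e = list(exp)
            e[i - 1], e[i] = k, a + b - 1 - k
            e = tuple(e)
            out[e] = out.get(e, 0) + sign * c
            if out[e] == 0:
                del out[e]
    return out

def schubert(w):
    """Schubert polynomial via the descent recursion S_{w s_i} = dd(i, S_w)."""
    w = list(w)
    n = len(w)
    if w == list(range(n, 0, -1)):
        return {tuple(max(n - 1 - i, 0) for i in range(N)): 1}
    i = next(j for j in range(n - 1) if w[j] < w[j + 1])
    w[i], w[i + 1] = w[i + 1], w[i]
    return dd(i + 1, schubert(w))

s = schubert((1, 4, 2, 3))   # one descent, at position 2
assert s == {(2, 0, 0, 0): 1, (1, 1, 0, 0): 1, (0, 2, 0, 0): 1}  # h_2(x_1, x_2)
assert dd(1, s) == {}        # killed by dd(1), i.e. symmetric in x_1, x_2
```

Similarly 2314, with its single descent at position 2, should give the Schur polynomial e_2(x_1, x_2) = x_1 x_2.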
Knutson conjectured a puzzle rule for all Schubert structure constants in 1999. It turned out that it didn't hold in all cases, but Buch, Kresch, Purbhoo, and Tamvakis proved that Knutson's puzzle rule holds for Schubert polynomials with at most two descents, at fixed positions p and q. Earlier, Buch conjectured a puzzle rule for the three-step flag variety, which was proved by Knutson and Zinn-Justin in 2020, and I have an example of a three-step puzzle there. Now another puzzle rule is for the case of permutations with separated descents. The paper I took this puzzle from was from 2023, but there was an announcement of the result in 2019: Knutson and Zinn-Justin put forth a positive formula for the coefficients of S_u(x) S_v(x) where there is an integer p such that u(i) < u(i+1) for all i < p and v(i) < v(i+1) for all i > p. This is known as u and v having separated descents, and it is highly relevant to what I'm going to discuss later; it will come up again. Actually, I looked at past talks in this seminar, and a formula for this was presented by Huang at the seminar in 2021.

So there's a connection between the Leibniz formula and the coefficients. It's easy to see that if we apply ∂_w to the product S_u(x) S_v(x) and set x to zero, we get c_{u,v}^w. Using the Leibniz formula we can expand this in terms of the skew divided difference operators, and we get that applying the skew divided difference operator ∂_{u,w} to the Schubert polynomial S_v(x) and setting x to zero gives us c_{u,v}^w. This was proved by Kirillov in a 2007 paper about skew divided difference operators. So now we're going to get a little bit closer to where we want to be: we're going to define double Schubert polynomials by introducing a second infinite set of variables.
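The identity c_{u,v}^w = ∂_w(S_u S_v) with x set to zero can be checked mechanically. A self-contained sketch (helper names mine): `partial_w` applies one divided difference per descent, which amounts to running through a reduced word of w, and then we read off the constant term.

```python
N = 4  # track the variables x_1..x_4

def dd(i, p):
    """Divided difference (p - s_i p)/(x_i - x_{i+1}), monomial by monomial."""
    out = {}
    for exp, c in p.items():
        a, b = exp[i - 1], exp[i]
        if a == b:
            continue
        sign = 1 if a > b else -1
        for k in range(min(a, b), max(a, b)):
            e = list(exp)
            e[i - 1], e[i] = k, a + b - 1 - k
            e = tuple(e)
            out[e] = out.get(e, 0) + sign * c
            if out[e] == 0:
                del out[e]
    return out

def schubert(w):
    """Schubert polynomial via the descent recursion S_{w s_i} = dd(i, S_w)."""
    w = list(w)
    n = len(w)
    if w == list(range(n, 0, -1)):
        return {tuple(max(n - 1 - i, 0) for i in range(N)): 1}
    i = next(j for j in range(n - 1) if w[j] < w[j + 1])
    w[i], w[i + 1] = w[i + 1], w[i]
    return dd(i + 1, schubert(w))

def mul(p, q):
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            out[e] = out.get(e, 0) + c1 * c2
            if out[e] == 0:
                del out[e]
    return out

def partial_w(w, p):
    """Apply the composite operator, using: if w(i) > w(i+1) then
    partial_w = partial_{w s_i} composed after dd(i)."""
    w = list(w)
    while True:
        i = next((j for j in range(len(w) - 1) if w[j] > w[j + 1]), None)
        if i is None:
            return p
        w[i], w[i + 1] = w[i + 1], w[i]
        p = dd(i + 1, p)

def c(u, v, w):
    """Structure constant: partial_w(S_u S_v) with x set to 0 (constant term)."""
    return partial_w(w, mul(schubert(u), schubert(v))).get((0,) * N, 0)

# S_{s_1}^2 = x_1^2 = S_312, so c_{s_1,s_1}^{312} = 1 and c_{s_1,s_1}^{231} = 0.
assert c((2, 1, 3), (2, 1, 3), (3, 1, 2)) == 1
assert c((2, 1, 3), (2, 1, 3), (2, 3, 1)) == 0
```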
These are y variables indexed by the positive integers, giving the ring Z[x,y], and almost identically to the ordinary Schubert polynomial case we define the top double Schubert polynomial of S_{n+1}, S_{w_0(n)}(x,y), as the product of x_i - y_j as i + j ranges from 2 to n + 1. You may notice that if you set the y variables to zero, you get the ordinary Schubert polynomial corresponding to the same permutation, and that's true in general. Then for a permutation u we define S_u(x,y) by the exact same formula as for the ordinary Schubert polynomial: we apply ∂_{u^{-1} w_0(n)} to S_{w_0(n)}(x,y). Now there's a standard paper that people cite for double Schubert polynomials, by Lascoux and Schützenberger, but I looked at that paper and it doesn't seem to have double Schubert polynomials in it. So at least to me the origin of double Schubert polynomials is unclear; maybe someone else has a better idea. The earliest I found was Macdonald's 1990 book of notes on Schubert polynomials.

Some properties of double Schubert polynomials: they form a basis for Z[x,y] as a module over Z[y], and S_w(x,y) is homogeneous of degree ℓ(w) with non-negative integer coefficients when expressed as a polynomial in the linear terms x_i - y_j. Now, if we set the x variables equal to the y variables, the double Schubert polynomial vanishes: S_u(y,y) is zero unless u is the identity, and the double Schubert polynomial corresponding to the identity is just 1. We have formulas very similar to the ones for ordinary Schubert polynomials: if we apply ∂_u to the double Schubert polynomial S_v(x,y) and set x to y, we get 1 if u = v and 0 otherwise. Double Schubert polynomials have their own structure constants, since they form a basis as a module over Z[y]; these structure constants c_{u,v}^w(y) are polynomials in y.
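Double Schubert polynomials admit the same recursion, with the divided differences acting only on the x variables. A self-contained sketch (helper names mine): exponent tuples now carry an x part and a y part, and we verify the vanishing and specialization properties on small examples. (The top polynomial here is written for the longest element of S_n, so the factors are x_i - y_j with i + j <= n, the same product reindexed.)

```python
N = 3        # track x_1..x_3 and y_1..y_3
W = 2 * N    # exponent tuples: x part, then y part

def dd(i, p):
    """Divided difference acting on the x slots only."""
    out = {}
    for exp, c in p.items():
        a, b = exp[i - 1], exp[i]
        if a == b:
            continue
        sign = 1 if a > b else -1
        for k in range(min(a, b), max(a, b)):
            e = list(exp)
            e[i - 1], e[i] = k, a + b - 1 - k
            e = tuple(e)
            out[e] = out.get(e, 0) + sign * c
            if out[e] == 0:
                del out[e]
    return out

def mul(p, q):
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            out[e] = out.get(e, 0) + c1 * c2
            if out[e] == 0:
                del out[e]
    return out

def lin(i, j):
    """The linear factor x_i - y_j."""
    ex = [0] * W; ex[i - 1] = 1
    ey = [0] * W; ey[N + j - 1] = 1
    return {tuple(ex): 1, tuple(ey): -1}

def double_schubert(w):
    """S_w(x, y) via divided differences applied to the top polynomial."""
    w = list(w)
    n = len(w)
    if w == list(range(n, 0, -1)):  # top: product of (x_i - y_j), i + j <= n
        p = {(0,) * W: 1}
        for i in range(1, n):
            for j in range(1, n + 1 - i):
                p = mul(p, lin(i, j))
        return p
    i = next(j for j in range(n - 1) if w[j] < w[j + 1])
    w[i], w[i + 1] = w[i + 1], w[i]
    return dd(i + 1, double_schubert(w))

def set_x_to_y(p):
    """Substitute x_i -> y_i."""
    out = {}
    for e, c in p.items():
        ne = list(e)
        for i in range(N):
            ne[N + i] += ne[i]
            ne[i] = 0
        ne = tuple(ne)
        out[ne] = out.get(ne, 0) + c
        if out[ne] == 0:
            del out[ne]
    return out

# S_{213}(x,y) = x_1 - y_1, and S_u(y,y) = 0 for u not the identity.
assert double_schubert((2, 1, 3)) == lin(1, 1)
assert set_x_to_y(double_schubert((2, 3, 1))) == {}

# Setting y = 0 (keeping only y-free monomials) recovers S_{231}(x) = x_1 x_2.
y_free = {e: c for e, c in double_schubert((2, 3, 1)).items()
          if all(e[N + i] == 0 for i in range(N))}
assert y_free == {(1, 1, 0, 0, 0, 0): 1}
```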
They're homogeneous of degree ℓ(u) + ℓ(v) - ℓ(w), and they're defined so that if we expand the product S_u(x,y) S_v(x,y), the coefficient of S_w(x,y) is c_{u,v}^w(y). These coefficients also have a known positivity property: they have non-negative integer coefficients when expressed as polynomials in the negative simple roots y_{i+1} - y_i. This was proved by Graham in 2001. If we set y to zero, the coefficients of positive degree vanish and we're left with the ordinary Schubert polynomial structure constants, so this is a generalization of Schubert polynomial multiplication. Double Schubert polynomials multiply like Schubert classes in the torus-equivariant cohomology of all complete flag varieties: they do for torus-equivariant cohomology what ordinary Schubert polynomials do for ordinary cohomology.

I'll also mention that they're dual to the divided difference operators over the polynomial ring. This dual ring is the nilHecke ring, defined by Kostant and Kumar in 1986. There is a coproduct Δ on the nilHecke ring such that if we apply Δ to ∂_w, we get the sum over all u, v in S_∞ of the structure constant c_{u,v}^w(x) times ∂_u ⊗ ∂_v. The tensor product is that of left modules over the polynomial ring, even though the divided difference operators don't commute with the polynomials. This is multiplicative; it is a coproduct. In 2002 Robinson found the Pieri formula for multiplying a double Schubert polynomial by a factorial elementary symmetric polynomial. The formula is Graham-positive and expresses the result in terms of restrictions to fixed points in equivariant cohomology, and Robinson's proof was primarily combinatorial. There are also puzzle rules for equivariant cohomology.
In fact, the first puzzle rule, in 2003, was a rule for the torus-equivariant cohomology of the Grassmannian, by Knutson and Tao, which allows us to multiply factorial Schur polynomials and obtain coefficients that are Graham-positive. In 2015 Buch published a puzzle rule for the two-step flag variety, which correspondingly allows us to multiply double Schubert polynomials that have at most two descents.

So back to the Leibniz formula. If we apply ∂_w to S_u(x,y) S_v(x,y) and set x to y, we get c_{u,v}^w(y), and we can similarly expand this using the Leibniz formula to get that applying the skew divided difference operator ∂_{u,w} to S_v(x,y) and setting x to y gives us the coefficient c_{u,v}^w(y). What this illustrates is actually a property of the skew divided difference operators. In general, for an arbitrary polynomial p(x,y), we can express it in the basis of double Schubert polynomials using the divided difference operators: the coefficient of S_w(x,y) is ∂_w applied to p(x,y), setting x to y. What the skew divided difference operators let us do is compute the coefficients of the product of an arbitrary polynomial with a double Schubert polynomial: if we expand the product p(x,y) S_u(x,y) in terms of the basis, the coefficient of S_w(x,y) is ∂_{u,w} applied to p(x,y), setting x to y. And I can't go without mentioning another strong connection between the Leibniz formula and the coefficients, which actually holds for all G/B: if you apply ∂_w to a product pq, you can expand it as a sum over all u and v in S_∞ of the torus-equivariant cohomology structure constant c_{u,v}^w(x) times ∂_u applied to p times ∂_v applied to q. That was the main theorem of my dissertation. So now we move on to the Molev–Sagan case.
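This coefficient extraction is easy to run. A self-contained sketch (helper names mine) computes c_{u,v}^w(y) = ∂_w(S_u(x,y) S_v(x,y)) with x set to y, and checks the smallest case, where c_{s_1,s_1}^{s_1}(y) comes out as y_2 - y_1, visibly Graham-positive.

```python
N = 3        # track x_1..x_3 and y_1..y_3
W = 2 * N    # exponent tuples: x part, then y part

def dd(i, p):
    """Divided difference acting on the x slots only."""
    out = {}
    for exp, c in p.items():
        a, b = exp[i - 1], exp[i]
        if a == b:
            continue
        sign = 1 if a > b else -1
        for k in range(min(a, b), max(a, b)):
            e = list(exp)
            e[i - 1], e[i] = k, a + b - 1 - k
            e = tuple(e)
            out[e] = out.get(e, 0) + sign * c
            if out[e] == 0:
                del out[e]
    return out

def mul(p, q):
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            out[e] = out.get(e, 0) + c1 * c2
            if out[e] == 0:
                del out[e]
    return out

def lin(i, j):
    """The linear factor x_i - y_j."""
    ex = [0] * W; ex[i - 1] = 1
    ey = [0] * W; ey[N + j - 1] = 1
    return {tuple(ex): 1, tuple(ey): -1}

def double_schubert(w):
    w = list(w)
    n = len(w)
    if w == list(range(n, 0, -1)):  # top: product of (x_i - y_j), i + j <= n
        p = {(0,) * W: 1}
        for i in range(1, n):
            for j in range(1, n + 1 - i):
                p = mul(p, lin(i, j))
        return p
    i = next(j for j in range(n - 1) if w[j] < w[j + 1])
    w[i], w[i + 1] = w[i + 1], w[i]
    return dd(i + 1, double_schubert(w))

def partial_w(w, p):
    w = list(w)
    while True:
        i = next((j for j in range(len(w) - 1) if w[j] > w[j + 1]), None)
        if i is None:
            return p
        w[i], w[i + 1] = w[i + 1], w[i]
        p = dd(i + 1, p)

def set_x_to_y(p):
    out = {}
    for e, c in p.items():
        ne = list(e)
        for i in range(N):
            ne[N + i] += ne[i]
            ne[i] = 0
        ne = tuple(ne)
        out[ne] = out.get(ne, 0) + c
        if out[ne] == 0:
            del out[ne]
    return out

def equiv_c(u, v, w):
    """c_{u,v}^w(y) = partial_w(S_u(x,y) S_v(x,y)) with x set to y."""
    return set_x_to_y(partial_w(w, mul(double_schubert(u), double_schubert(v))))

y1 = (0, 0, 0, 1, 0, 0)
y2 = (0, 0, 0, 0, 1, 0)
assert equiv_c((2, 1, 3), (2, 1, 3), (2, 1, 3)) == {y2: 1, y1: -1}  # y_2 - y_1
assert equiv_c((2, 1, 3), (2, 1, 3), (3, 1, 2)) == {(0,) * W: 1}
```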
We're going to add z variables to obtain the polynomial ring Z[x,y,z]; double Schubert polynomials form a basis of this ring as a module over Z[y,z]. We may define structure constants c_{u,v}^w(y,z) by saying that in the product S_u(x,y) S_v(x,z), the coefficient of S_w(x,y) is c_{u,v}^w(y,z). These are the coefficients we'll be interested in. The place where this was originally considered was Molev and Sagan's 1999 paper, for multiplying factorial Schur polynomials in different sets of coefficient variables. Whenever an ordinary Schubert polynomial is a Schur polynomial, the double Schubert polynomial is a factorial Schur polynomial; these depend on a partition and the number of x variables, and that's what determines the permutation. Molev and Sagan found a complete Littlewood–Richardson rule for multiplying two factorial Schur polynomials in the same number of x variables, one in x and y and the other in x and z. Their formula was positive, with coefficients positive as polynomials in the y_i - z_j. However, it was not positive, after the substitution z = y, in terms of the y_{i+1} - y_i. That positivity result was not even known at the time the paper was published, but later, in 2009, Molev published a paper with a formula for factorial Schur polynomials that is positive in this sense.

So I have a positivity conjecture. I conjecture that, like the coefficients for factorial Schur polynomials, the coefficient c_{u,v}^w(y,z) has non-negative coefficients in terms of the y_i - z_j for all u, v, and w. Now, relatedly, Kirillov conjectured in 2007 that c_{u,v}^w(y, 0) has non-negative coefficients as a polynomial in y. What he actually conjectured was that applying a skew divided difference operator to a Schubert polynomial yields a polynomial with non-negative integer coefficients.
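These coefficients can be computed directly from the definition; here is a self-contained sketch (helper names mine) with three blocks of variables, x, y, z. In the smallest case u = v = w = s_1, the product is (x_1 - y_1)(x_1 - z_1), and the coefficient of S_{s_1}(x,y) comes out as y_2 - z_1, a single difference of the predicted form.

```python
N = 3        # x_1..x_3, y_1..y_3, z_1..z_3
W = 3 * N    # exponent tuples: x part, y part, z part

def dd(i, p):
    """Divided difference acting on the x slots only."""
    out = {}
    for exp, c in p.items():
        a, b = exp[i - 1], exp[i]
        if a == b:
            continue
        sign = 1 if a > b else -1
        for k in range(min(a, b), max(a, b)):
            e = list(exp)
            e[i - 1], e[i] = k, a + b - 1 - k
            e = tuple(e)
            out[e] = out.get(e, 0) + sign * c
            if out[e] == 0:
                del out[e]
    return out

def mul(p, q):
    out = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            e = tuple(a + b for a, b in zip(e1, e2))
            out[e] = out.get(e, 0) + c1 * c2
            if out[e] == 0:
                del out[e]
    return out

def lin(i, j, off):
    """The factor x_i - t_j, where t is the variable block at slot offset off."""
    ex = [0] * W; ex[i - 1] = 1
    et = [0] * W; et[off + j - 1] = 1
    return {tuple(ex): 1, tuple(et): -1}

def double_schubert(w, off):
    """S_w(x, .) with coefficient variables in slots off..off+N-1."""
    w = list(w)
    n = len(w)
    if w == list(range(n, 0, -1)):
        p = {(0,) * W: 1}
        for i in range(1, n):
            for j in range(1, n + 1 - i):
                p = mul(p, lin(i, j, off))
        return p
    i = next(j for j in range(n - 1) if w[j] < w[j + 1])
    w[i], w[i + 1] = w[i + 1], w[i]
    return dd(i + 1, double_schubert(w, off))

def partial_w(w, p):
    w = list(w)
    while True:
        i = next((j for j in range(len(w) - 1) if w[j] > w[j + 1]), None)
        if i is None:
            return p
        w[i], w[i + 1] = w[i + 1], w[i]
        p = dd(i + 1, p)

def set_x_to_y(p):
    out = {}
    for e, c in p.items():
        ne = list(e)
        for i in range(N):
            ne[N + i] += ne[i]
            ne[i] = 0
        ne = tuple(ne)
        out[ne] = out.get(ne, 0) + c
        if out[ne] == 0:
            del out[ne]
    return out

def msc(u, v, w):
    """c_{u,v}^w(y,z) = partial_w(S_u(x,y) S_v(x,z)) with x set to y."""
    return set_x_to_y(partial_w(w, mul(double_schubert(u, N),
                                       double_schubert(v, 2 * N))))

y2 = (0, 0, 0, 0, 1, 0, 0, 0, 0)
z1 = (0, 0, 0, 0, 0, 0, 1, 0, 0)
assert msc((2, 1, 3), (2, 1, 3), (2, 1, 3)) == {y2: 1, z1: -1}  # y_2 - z_1
# coefficient of S_{312}(x,y) is 1; of S_{231}(x,y) it is 0 (empty dict)
assert msc((2, 1, 3), (2, 1, 3), (3, 1, 2)) == {(0,) * W: 1}
assert msc((2, 1, 3), (2, 1, 3), (2, 3, 1)) == {}
```

Setting the z slots equal to the y slots in this output recovers y_2 - y_1, consistent with the z = y specialization discussed above.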
Now, we don't lose as much information as it would seem when we set z to zero, because there's a Cauchy formula expressing the coefficients c_{u,v}^w(y,z) in terms of the z = 0 coefficients and the ordinary Schubert polynomials in -z. So I've been doing some computer testing of these positivity conjectures. I verified Kirillov's conjecture up to n = 6; that took about 66 hours to run on my laptop with 11 threads. And I verified that c_{u,v}^w(y,z) has non-negative coefficients in terms of the y_i - z_j up to n = 5, which, after vast improvements, my program can now do in about five minutes. However, I've been trying to test the conjecture for n = 6 on a machine with 64 cores and 128 gigabytes of RAM. Since January, we've been continuously running a program that verifies positivity using linear programming, going one degree at a time. We've verified the conjecture up to coefficients of degree six, and degree seven is currently running. Coefficients can have degree up to 15, but we know the degree 14 and 15 coefficients are positive, so we have to test up to degree 13.

So for the remainder of the talk, we'll be trying to find positive formulas in terms of the y_i - z_j for the coefficient c_{u,v}^w(y,z), which is ∂_{u,w} applied to S_v(x,z), setting x to y. Now, importantly, S_v(x,z) has no y variables. Usually, when we substitute y after applying a skew divided difference operator, some cancellation occurs, but that doesn't happen here. So we're really computing c_{u,v}^w(x,z), which is ∂_{u,w} applied to S_v(x,z). And we can take a break here.