Welcome, everybody. Before I introduce our speaker, David, just repeating an announcement: like last year, we plan to have one or two seminars devoted to early career talks, and we are still taking nominations. We haven't gotten a whole lot yet, so please let them come. But we should invite the early career speakers soon, otherwise they won't have time to prepare, so please nominate. We'll wait a couple more days to see what comes in before we send out invitations. Other than that, today we are very happy to have David Anderson speaking about new formulas for Schubert polynomials via bumpless pipe dreams. Please go ahead, David. Thanks, Anders, and Leonardo and Rebecca and Richard, if he's here, for organizing and for inviting me. So I'm going to tell what I hope is a fairly simple story, mostly combinatorial, about formulas for Schubert polynomials, which, this being the Schubert seminar, I hope everyone's interested in. And the kind of objects that are guiding us in these formulas are the bumpless pipe dreams, which have gotten a lot of attention, and for good reason, over the last five years or so. Anything that I could possibly lay claim to as original in this talk is very much joint work with Bill Fulton. And I just want to say, one of the things that I learned from working with him over the last 10 or 11 years is that even when you feel like you've proved a result, you shouldn't feel like you're done until it's in the simplest possible form. It's really been kind of remarkable how dedicated he is to presenting things as simply as possible, and I want to thank him for helping me learn that lesson, which I'm still learning. Okay, so I'd like to talk about bumpless pipe dreams and Schubert polynomials. I'm not going to go into all the background of the subject, assuming that, like I said, you all already care. But let's start off by just saying something about what a Schubert polynomial is.
They were introduced in the early 80s by Lascoux and Schützenberger. There is a little bit of confusion, I think, in the literature: the single ones were introduced in 1982 or so as representatives for the cohomology classes of Schubert varieties in the flag variety. Curiously enough, the double ones came along a couple of years later, and if you look at their paper, there's no mention of cohomology at all; it's really just about interpolation. It's a little unclear when the connection to equivariant cohomology and degeneracy locus geometry came about, but that was certainly in the 90s, and it's certainly due to a number of the people in this audience. I won't elaborate on that too much. We're going to take it as given that we want to compute things like this and things related to it, and we're going to be interested in finding formulas for them. Okay. So the classical ones you compute like this, and everyone's seen this story, I think. You have a formula for the Schubert polynomial of w0, the longest permutation in Sn, and it's just a simple product. Then you apply divided difference operators to go down in Bruhat order, and this inductively tells you how to compute every Schubert polynomial. Okay. So we'll see examples. I'm going to break with tradition and not do the obligatory S3 example, but maybe you could try it as an exercise. This exercise is about to disappear, by the way. So, exercise: this is a useless exercise probably if you don't know what the divided difference operator is, but remind yourself if you can, and compute the Schubert polynomials for S3. Okay. There's another thing you could have done, and in fact what this is an example of: you could have started with any dominant permutation. I'm going to use w(lambda) to indicate the dominant permutation that corresponds to a partition lambda. And the Schubert polynomial for that dominant permutation is, again, just a product.
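As a side note, not part of the talk: the divided-difference recursion just described is easy to sketch in a few lines of Python. This is my own illustration, with a hypothetical representation of polynomials as dicts from exponent tuples to integer coefficients; the division by (x_i - x_{i+1}) is exact and is done monomial by monomial using the identity (x_i^a x_{i+1}^b - x_i^b x_{i+1}^a)/(x_i - x_{i+1}) = ± Σ_k x_i^k x_{i+1}^{a+b-1-k}.

```python
def divided_difference(f, i):
    """Apply d_i = (1 - s_i)/(x_i - x_{i+1}) to f, a polynomial stored
    as {exponent-tuple: coefficient}, acting on variables x_i, x_{i+1}."""
    out = {}
    for exp, c in f.items():
        a, b = exp[i - 1], exp[i]
        if a == b:
            continue  # symmetric in x_i, x_{i+1}: killed by the operator
        sign = 1 if a > b else -1
        lo, hi = min(a, b), max(a, b)
        # (x_i^a x_{i+1}^b - x_i^b x_{i+1}^a)/(x_i - x_{i+1})
        #   = sign * sum_{k=lo}^{hi-1} x_i^k x_{i+1}^{lo+hi-1-k}
        for k in range(lo, hi):
            e = list(exp)
            e[i - 1], e[i] = k, lo + hi - 1 - k
            e = tuple(e)
            out[e] = out.get(e, 0) + sign * c
            if out[e] == 0:
                del out[e]
    return out

def schubert(w):
    """Single Schubert polynomial S_w, computed top-down from
    S_{w0} = x1^{n-1} x2^{n-2} ... xn^0 by divided differences."""
    w = list(w)
    n = len(w)
    if w == list(range(n, 0, -1)):  # w = w0, the longest permutation
        return {tuple(n - i - 1 for i in range(n)): 1}
    # an ascent w(i) < w(i+1) means l(w s_i) = l(w) + 1, and S_w = d_i S_{w s_i}
    i = next(j + 1 for j in range(n - 1) if w[j] < w[j + 1])
    longer = w[:]
    longer[i - 1], longer[i] = longer[i], longer[i - 1]
    return divided_difference(schubert(longer), i)
```

For example, `schubert((1, 3, 2))` comes out as x1 + x2, so the disappearing S3 exercise can at least be machine-checked.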
And what do I mean by a dominant permutation? Well, let me do the simplest one I can think of off the top of my head. That's something like this, maybe: if I draw the Rothe diagram, so this is w = 231, and here's the Rothe diagram for that. The Rothe diagram of a dominant permutation should just be a partition, upper-left justified, and that partition is the partition lambda I'm talking about. So for this example, we would see that the Schubert polynomial is just, in this case, (x1 + y1)(x2 + y1), because those are the two boxes in that partition. Okay. So dominant permutations are very easy to write down Schubert polynomials for. And since we have a formula for those, you can apply divided difference operators and get a formula for any other Schubert polynomial that lies below your given dominant one. So that's one way of computing them, and it is very much not efficient in general. Certainly, if you start with the longest element and you want to compute a small-length element, it's going to take you a lot of work and a lot of divisions. So we're looking for better formulas. Okay. On the way to those formulas: at the end of the talk, I'm going to come back to ordinary double Schubert polynomials, but I'm going to take a diversion through a recent story around what's known as back stable Schubert polynomials. Okay. So let's talk a little bit about stability. It's a fact that the Schubert polynomial for a permutation that starts off like one, two, three, four, and then does something interesting, w, I'll call it w, so this would be k = 4, so if it starts off as just the identity for so many entries and then does something interesting, that polynomial is going to be supersymmetric in both sets of variables.
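To make the Rothe diagram example concrete, here is a small Python sketch (my own illustration; the function names are hypothetical). It computes the diagram, tests whether it is an upper-left justified partition shape, and lists the (x_i + y_j) factors that the talk reads off for a dominant permutation.

```python
def rothe_diagram(w):
    """Rothe diagram of w: boxes (i, j), 1-indexed, with w(i) > j
    and w^{-1}(j) > i."""
    n = len(w)
    return sorted((i, j) for i in range(1, n + 1) for j in range(1, n + 1)
                  if w[i - 1] > j and w.index(j) + 1 > i)

def is_dominant(w):
    """w is dominant iff its Rothe diagram is a partition shape:
    left-justified rows of weakly decreasing length."""
    d = set(rothe_diagram(w))
    rows = [sum(1 for (i, j) in d if i == r) for r in range(1, len(w) + 1)]
    left_justified = all((i, j) in d
                         for i in range(1, len(w) + 1)
                         for j in range(1, rows[i - 1] + 1))
    return left_justified and all(a >= b for a, b in zip(rows, rows[1:]))

def dominant_factors(w):
    """For dominant w, the double Schubert polynomial is the product
    of (x_i + y_j) over the boxes (i, j) of the diagram."""
    return [f"(x{i} + y{j})" for (i, j) in rothe_diagram(w)]
```

For w = 231 this gives the diagram {(1,1), (2,1)}, i.e. lambda = (1,1), and the factors (x1 + y1)(x2 + y1) from the example.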
So what that means is that it's symmetric in the x's, separately symmetric in the y's, and there's a further condition: if you set x1 equal to minus y1, then both of them drop out, and the formula becomes independent of x1 and y1. Okay, so that's what supersymmetric means. And because of this, there are kind of two ways to arrive at what ends up being the same idea. I'll sketch those two ways and give a very, very simple example of this supersymmetry property, maybe too simple, but you'll see what I mean. Two paths to the same idea, using this symmetry and stability. So the first path is to relabel the variables and then take the limit as k goes to infinity. Taking this running example, very simply, we're going to take this polynomial. This happens to be a dominant one, just one box; it's the simplest non-trivial Schubert polynomial, x1 + y1. And then we're going to bump that inversion off to the right by prepending k elements in order, and we get a Schubert polynomial like this. So for example, this would really be like one, two, three, four, six, five; that would be k = 4. I shouldn't write it that way, because that looks like cycle notation, which this is not. If you do that, then this is the Schubert polynomial; this is just a calculation. It's the sum of the first k + 1 x's plus the sum of the first k + 1 y's. And this is definitely symmetric; in fact, it's symmetric in all the x's and all the y's. But let me focus on the first k for purposes of illustration here. So it's symmetric in the first k x's, it's symmetric in the first k y's, and if I set x1 equal to minus y1, then both of those just cancel out of the formula, and the formula becomes independent of that variable. So that's the supersymmetry property. There it is.
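The supersymmetry property lends itself to a quick numerical spot-check (my own sketch, not from the talk, and of course not a proof): evaluate a candidate polynomial at random points, permute the x's and the y's separately, and compare two evaluations with x1 = -y1 for different values of y1.

```python
import random

def is_supersymmetric(f, n, trials=20):
    """Spot-check that f(xs, ys) is symmetric in the x's, symmetric in
    the y's, and independent of x1 and y1 after setting x1 = -y1."""
    rng = random.Random(0)
    for _ in range(trials):
        xs = [rng.randint(-9, 9) for _ in range(n)]
        ys = [rng.randint(-9, 9) for _ in range(n)]
        xs2, ys2 = xs[:], ys[:]
        rng.shuffle(xs2)
        rng.shuffle(ys2)
        if f(xs2, ys) != f(xs, ys) or f(xs, ys2) != f(xs, ys):
            return False  # not symmetric in the x's or y's separately
        # substitute x1 = -y1 with two different y1 values: the result
        # should not depend on which value was used
        if f([-5] + xs[1:], [5] + ys[1:]) != f([7] + xs[1:], [-7] + ys[1:]):
            return False
    return True
```

The running example, the sum of the x's plus the sum of the y's, passes all three checks, while something like x1 alone does not.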
And then what we're going to do, and I'll give credit for this idea in just a moment, is take that supersymmetry, collect these things together, and then relabel the indices so that these go back to one. That means I'm going to subtract k from everything, and now I'm going to get negative variables. So we relabel those indices and also take a limit as k goes to infinity. Here I've relabeled the indices, but now I'm also taking k to infinity, so this is going to be a series going off to negative infinity, and then I've got my x1 still hanging around; likewise, this is a series going off to negative infinity with my y1 hanging around. This construction produces what are called back stable Schubert polynomials. My understanding is that this kind of idea was something that Allen Knutson was thinking about in the early 2000s, and that it was related to constructions one wants to make when realizing graph Schubert varieties and their cohomology classes via Stanley symmetric functions. So this part here is always going to end up being a symmetric function, because of this symmetry property. Okay. And then this idea was picked up in the teens by Lam, Lee, and Shimozono, and their 2018 paper kind of launched this whole study of back stable Schubert polynomials, as well as the bumpless pipe dreams, which I'll get to in a moment. Okay. So that's one path to these interesting objects. The other path is really very much the same in some ways, but it has a sort of different feel to it. Doing the same thing as before, we've got this Schubert polynomial, and I've got this symmetric part. What we're going to do this time is collect these and just write down the supersymmetric function that represents them. So let's do that. And what this one is going to be called is c1.
So c1 in this case is going to be the sum of the x's plus the y's, and you should think of it as being something like a mixture of elementary and complete homogeneous symmetric functions; in degree one, those are the same. And the pleasant thing about this, which is again no surprise to the people who invented it from the other perspective, is that it's manifestly a polynomial. We've called these things enriched Schubert polynomials. Here the c in general is a Chern series, so I'm going to think of it as just being a list of variables, where the subscript is the degree of the variable. Okay. And the two notions are really, in some sense, equivalent via a specialization of the c series: you specialize it to this infinite series and take the homogeneous expansions. So for example, c1 will be the degree-one part of this product, which is the sum of all the x's and all the y's, just like it was before. So if you actually do this evaluation, then these so-called enriched polynomials are really the same as the back stable ones, and this evaluation is injective, so there's no loss of information. My understanding is that this kind of perspective is something that Anders Buch was thinking about, again in the early 2000s, roughly. I put dots here because Anders and his co-authors from around that time worked out formulas which take general Schubert polynomials and collect them by grouping their symmetric parts like this. And I also meant to say that this is not an independent path from the previous one, because Lam, Lee, and Shimozono are definitely before our work; we're just, in some sense, reinterpreting it. But yeah, so this is a second path to these objects that are called back stable or enriched Schubert polynomials. So, a couple of remarks about this.
One is that, curiously, sort of after the fact, you could have arrived at these polynomials by thinking hard about the Billey–Haiman and Ikeda–Mihalcea–Naruse constructions of Schubert polynomials for the other classical types, which had this feature that the Stanley symmetric functions were encoded in them and there was a limiting process involved. So these are kind of a type A analog of those other types' polynomials. That's one feature of the story. Okay, and Allen agrees that he was thinking about that, too. Okay, so this is a strict generalization, or extension, of the Lascoux–Schützenberger story. If you specialize, so I'm going to write c = 1, because I'm specializing all of the positive-degree c variables to zero, then you end up with the Lascoux–Schützenberger polynomial. And I'm using this convention, this negative y, so everything is positive. If you go the other way and specialize the x and y variables both to zero, you get the Stanley symmetric function in the c variables. And here we're thinking of the c variables maybe as the complete homogeneous symmetric functions, identifying the symmetric function ring with just a polynomial ring in the c variables. About that identification: so far I've been a little agnostic about how that goes, and I'll continue to be. But I do want to say that these polynomials are manifestly polynomials when written this way. And they involve three sets of variables: the c's are these positive-degree variables, and the x's and y's are all of degree, let's say, cohomological degree one, maybe complex degree one. If you wanted to work in actual singular cohomology, they should have degree two, but let's not worry about that. But there are Z-many of them, of both the x's and the y's; we're going to allow negative indices, although we won't see them in too many examples. So these are polynomials, and here I'm identifying the polynomial ring in the c's with the symmetric function ring.
Like I said, you're free to think of the c's either as elementary or as complete homogeneous symmetric functions. A question arises in this story, though, which is how to compute them. And the problem, or a problem anyway, is that the notion of dominant that we started with is not stable. When I was describing earlier how to compute the Lascoux–Schützenberger Schubert polynomials, the way one first meets them, usually via divided differences, you need to start the induction with a base case that you understand. And dominant isn't stable. There's no notion of dominant, because, if you think about it this way, dominant is 132-avoiding; that's another equivalent characterization of dominant. But if you have any kind of non-trivial inversion in a permutation, you know, here's 21, as soon as I start including, let me go ahead and write this as a permutation of the integers. Well, this would be zero, two, one, negative one, et cetera; I guess I can move this off. And that's a 132 pattern right there. So as soon as you include anything back stable, you're not dominant, and there's not going to be a simple product for these things. So one needs a different kind of start to the induction, and we're going to come up with replacements for the dominant Schubert polynomials. This is sort of the first place where I can say something that is an innovation of ours. So we're going to introduce a new ring, which I'm calling A, and it is isomorphic to the same ring where the Schubert polynomials live, the symmetric function ring with extra coefficients in x and y. And in this ring, we're going to find some polynomials that will play the role of dominant; these will be dominant polynomials. Okay, so what are the variables in this ring?
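The 132-avoidance characterization, and the observation that back-stabilizing destroys dominance, can be checked by brute force (my own sketch, not from the talk):

```python
def contains_132(w):
    """True if w contains a 1-3-2 pattern: positions i < j < k
    with w(i) < w(k) < w(j)."""
    n = len(w)
    return any(w[i] < w[k] < w[j]
               for i in range(n)
               for j in range(i + 1, n)
               for k in range(j + 1, n))
```

So 231 is dominant (no 132 pattern), but the window (0, 2, 1) that appears once you extend 21 back stably by identity values already contains one.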
The variables are going to be a whole bunch of Chern series, indexed by pairs of integers, positive or negative. Without the superscript, these are just the same kinds of things as before. Each one of these pieces has degree equal to its subscript: degree one, degree two, and so on. But there are many of them. And what you should think is that each one of these is going to specialize to some series that looks like this: you're going to let the x's go up to index p, and you're going to let the y's go up to index q. And thinking that way tells you what the relations have to be. So the relations among these will be like, well, if I increase the p index, then I should be able to throw in another denominator, 1/(1 - x_p), and likewise for the q. These relations then cut your ring back down to size, and you end up with something isomorphic to this. So, so far, not much has happened, really, but we'll see why it's useful to write these things down. Dave? Yes, Allen? What does 1/(1 - x_i) mean? I mean, I'm not used to having these powers get arbitrarily large. It's like a series; we're only ever going to extract coefficients of t. Maybe you would be happier if I did this, and then for any particular t, you'll extract something. Okay, then I'll extract the coefficient of t^k when I want c_k. Okay. Yep. Is that good? Thanks. Okay. Yeah. So omitting the power of t is just one of these kinds of Chern class algebra shortcuts. But yeah, thanks. To clarify, we're always going to deal only with homogeneous elements here, and that helps. Okay. So what we had before really was: c was just c^(0,0). And there's nothing particularly special about (0,0); it's just a nice place to start.
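The relation just described, that increasing p multiplies the series by 1/(1 - x_p t), can be illustrated with truncated power series in t, with the variables given numeric values (my own sketch; `c_series` and `series_mul` are hypothetical names, not anything from the talk):

```python
def series_mul(a, b, order):
    """Product of two power series in t (coefficient lists), truncated
    at t^order."""
    out = [0] * (order + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= order:
                out[i + j] += ai * bj
    return out

def c_series(base, xs, ys, order):
    """C^(p,q)(t) = C^(0,0)(t) * prod_i 1/(1 - x_i t) * prod_j 1/(1 - y_j t),
    truncated at t^order; c_k is the coefficient of t^k."""
    s = list(base) + [0] * (order + 1 - len(base))
    for z in list(xs) + list(ys):
        geometric = [z ** k for k in range(order + 1)]  # 1/(1 - z t) as a series
        s = series_mul(s, geometric, order)
    return s
```

For instance, starting from the trivial base series 1 and adjoining x1 and then x2 gives the same answer as adjoining both at once, which is exactly the kind of relation that cuts the ring back down to size.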
So I could have fixed any other (p, q), but if we fix one of them, then we end up getting an isomorphism with the ring we had before, because the relations tell you how to write everything else in terms of polynomials. So hopefully that's sort of clear. Okay. So then in this ring, given a partition, we can define a polynomial as just a determinant in these kind of modified c variables. And here's kind of a side remark: the dominant permutations are a subset of the vexillary ones. I already sort of told you that we shouldn't be thinking about dominant ones, but these polynomials are meant to be replacements for the dominant ones. And if I were to set things up so that I could find a permutation corresponding to this lambda that was dominant, the determinant for that would look like, sorry, vexillary Schubert polynomials have determinant expressions, this is what I wanted to say, and this would be that determinant. So our idea here is basically to take that determinant, translate it into this c language, and then just declare that this polynomial is that one. Okay, let me say that again. If I get rid of the c's, so I make all the higher c's zero, then this polynomial I'll define here is precisely the ordinary Lascoux–Schützenberger double Schubert polynomial for a dominant permutation. It's sort of a fun little exercise to figure out that when you take this determinant, make all the higher c's equal to zero, and use the relations to pop out some factors, you end up with something that factors into a product of (x_i + y_j)'s with the (i, j) in the shape of lambda. So these are the right kind of dominant versions for back stable Schubert polynomials; they do specialize in the right way. Right.
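The determinant being referred to is of Jacobi–Trudi type. As a generic sketch (my own, not the speaker's exact definition, which uses the shifted c^(p,q) variables), here is det( c_{lambda_i + j - i} ) with c_k specialized to the complete homogeneous polynomial h_k of a finite set of numeric values, in which case it computes the Schur polynomial s_lambda:

```python
def h(k, zs):
    """Complete homogeneous symmetric polynomial h_k(zs), via
    h_k(z1..zn) = h_k(z1..z_{n-1}) + zn * h_{k-1}(z1..zn)."""
    if k < 0:
        return 0
    if k == 0:
        return 1
    if not zs:
        return 0
    return h(k, zs[:-1]) + zs[-1] * h(k - 1, zs)

def det(m):
    """Determinant by Laplace expansion along the first row
    (fine for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def jacobi_trudi(lam, zs):
    """det( h_{lam_i + j - i} ), i, j = 1..len(lam): the Schur
    polynomial s_lam evaluated at zs."""
    n = len(lam)
    return det([[h(lam[i] + j - i, zs) for j in range(n)]
                for i in range(n)])
```

For lambda = (1,1) and values (2, 3) this gives h1^2 - h2 = 6 = e2(2, 3); setting the higher c's to zero in the talk's version similarly collapses the determinant to a product.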
So these form a basis for all polynomials as you let lambda range over all partitions, again as a module over polynomials in x and y. That has the right kind of cardinality, I guess, is the right way to say it, because remember, Lambda is the symmetric functions, and that has a basis over polynomials indexed by partitions. So these guys are our basis, and that means that I can write any element of that ring, in particular the general back stable or enriched Schubert polynomial, as a sum of these guys with coefficients. And what we'd like to do is get an expression like this, a finite expression, where we know what these coefficients are. And the first theorem is an expression of that form. I'll say what these objects are, probably after the break, but what we have is a bumpless pipe dream, that's BPD, a bumpless pipe dream formula for these coefficients. So we can express any enriched Schubert polynomial, excuse me, as a finite sum over partitions, where the coefficients are again finite sums over bumpless pipe dreams, in kind of the usual way, and I'll explain what the usual way is in just a moment. So these are bumpless pipe dreams. They were, as far as I can tell, introduced by Lam, Lee, and Shimozono. Their connection to Schubert polynomials was explained concisely and clearly by Anna Weigandt. Their combinatorics and role in Schubert calculus have been investigated by a lot of people, but I'll mention Daoji Huang. And also there's a wonderful paper by Fan, Guo, and Sun, who introduced this notion of dominant transition, which I'll say something briefly about after the break. So this seems like a good place to stop for five minutes or so. All right, David, thanks very much for the first half and any