OK, so welcome everyone to the Schubert seminar. Welcome to the new edition for the 2023-2024 academic year. We're kicking off the new year with Thomas Lam, telling us about a decade of positroid varieties. Please go ahead, Thomas. Yeah, thank you. Thank you for inviting me to speak. Yeah, I was wondering what to talk about, and then I realized that I've been writing papers about positroid varieties for 10 years now. And that was very scary for me. So I decided to talk about positroid varieties. So let me start off by telling you what positroid varieties are; I'm going to compare them to Schubert varieties, since this is the Schubert seminar. Just to review some notation, Gr(k, n) will be the Grassmannian of k-planes in n-space. It's got a stratification by Schubert cells, which are labeled by partitions. And just to remind you what a Schubert cell is, you take all matrices that look like this, which means that there are zeros everywhere down here, they have pivot ones in specified locations, zeros in certain locations, and some stars, which are arbitrary. You look at matrices of this form, you take the image of that set inside the Grassmannian, and that's the Schubert cell. And this particular Schubert cell is isomorphic to C to the number of stars, which is seven. The Schubert varieties are the closures of Schubert cells. OK, hopefully that's all familiar to the seminar. So now I want to spend some time defining positroid varieties, which I studied initially with Allen Knutson and David Speyer. The setting is we have a k-by-n matrix, with k less than n. I look at the columns of this matrix and label them v1, v2, up to vn. I'm going to extend the labeling to integer indices by labeling the columns periodically, so v(i+n) = vi. And then I'm going to define a function f from the integers to the integers.
And it's defined by this definition, so it'll depend on my k-by-n matrix. The definition is: for each integer i, I find the smallest integer j, at least i, such that the column vi is in the span of v(i+1) through vj. So if I want to know what f(1) is, I take v1 and I ask how many of the next vectors I need to take before v1 is in their span. And if I want to know what f(2) is, I take v2 and ask how many of the next vectors I need before v2 is in their span. So let's do an example of this, a really small example. This case is k = 2, n = 6, and I want to calculate f. So here I'll tabulate i and f(i). What's f(1)? Take this vector. Is it in the span of the next vector? No, the next vector spans something zero-dimensional. But v1 is in the span of v2 and v3, because v1 is a multiple of v3. So we put a 3 here. This 3 says that v1 is in the span of v2 and v3. Next, we look at v2 and ask when it is in the span of the next few vectors. But v2 is the zero vector, so it's in the span of no vectors. That's a degenerate case, and we define f(2) = 2: you don't need to take any vectors, and it's already in their span. What about v3? Is v3 in the span of v4? No, it's not a multiple of v4. Is it in the span of v4 and v5? No: v4 and v5 are parallel, so it's also not in the span of these two. But v3 is in the span of these three. So f(3) = 6. What about v4? v4 is parallel to v5, so f(4) = 5. Now, what's f(5)? v5 is not parallel to v6, but v6 and v1 form a basis, so v5 is in the span of v6 and v1. So we put a 7 there, because v1 is v7. And then what's f(6)? f(6) turns out to be 10. OK. And then everything is periodic, so this is all we need to capture the information. Let me see if there are any questions. I think I was supposed to make the start of the talk especially accessible. OK, so this defines a function from the integers to the integers.
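The computation just described can be sketched in code. The matrix entries below are my own choice, picked to match the qualitative description in the example (v1 a multiple of v3, v2 the zero vector, v4 parallel to v5, and {v6, v1} a basis); the talk itself doesn't specify the entries.

```python
import numpy as np

def in_span(v, M):
    """True iff vector v lies in the column span of matrix M."""
    if M.shape[1] == 0:
        return not np.any(v)  # the empty span is {0}
    return np.linalg.matrix_rank(np.column_stack([M, v])) == np.linalg.matrix_rank(M)

def juggling_permutation(V):
    """f(i) = smallest j >= i with v_i in span(v_{i+1}, ..., v_j),
    columns indexed periodically (v_{t+n} = v_t).  Returns [f(1), ..., f(n)]."""
    k, n = V.shape
    f = []
    for i in range(1, n + 1):
        vi = V[:, i - 1]
        for j in range(i, i + n + 1):  # j = i corresponds to the empty span
            M = V[:, [(t - 1) % n for t in range(i + 1, j + 1)]]
            if in_span(vi, M):
                f.append(j)
                break
    return f

# My choice of a matrix realizing the example (k = 2, n = 6)
V = np.array([[1, 0, 2, 0, 0, 1],
              [0, 0, 0, 1, 3, 1]], dtype=float)
print(juggling_permutation(V))  # [3, 2, 6, 5, 7, 10]
```

This reproduces the table from the talk: f(1) = 3, f(2) = 2, f(3) = 6, f(4) = 5, f(5) = 7, f(6) = 10.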
And the lemma is about this function, which depends on the choice of matrix V. This function turns out not just to be a function from the integers to the integers; it's a bijection, and that's not that hard to prove just from the definition. It's also periodic, in the sense that f(i + n) = f(i) + n, just because the v's are indexed periodically. Then there's a condition called bandedness, which is built into the definition: f(i) is always greater than or equal to i, by definition, and it's always less than or equal to i + n, because the worst case is that you have to come back to yourself before you find something whose span you're in. So v1 might not be in the span of v2, v3, or any of those things, but v1 is parallel to itself, so when you come back to yourself, you're definitely in the span. This condition is called banded. And finally, the hardest condition to prove is that, assuming this matrix is full rank, so it has rank k, the average increase in f is k. So f(i) - i is the increase in f, and this says that the average of these increases is k. One way to think about it, the way we like to think about it sometimes, is in terms of a juggling pattern. You can picture this as i, and you can picture f(i) as this information: 3 goes to 6, 4 goes to 5, 5 goes to 7, which is here, and then 6 goes to 10. There's also something which goes here. In this picture, when f(2) = 2, we don't usually draw anything there. So we call this a juggling pattern, and you can think of this f as a juggling pattern. The x-axis is time. At time 1, you throw a ball and it lands at time 3. At time 3, you throw a ball and it lands at time 6. And you get to throw at most one ball at each time. The total number of balls that you're using, that's k. So if you draw a vertical line, something like this, then you see that this is time 4.2 or something like that.
There are two balls in the air at that time; the two balls will land at time 5 and time 6. So that's one way to think of this equation: it says that k, which is the rank of the matrix, is the number of balls that you are juggling. It's a linear algebra exercise to prove that for any matrix V you get these properties. As I said, I think these ones are easy, this one is fun, and this one is a little bit harder and also fun. This one, that it's a bijection, is pretty straightforward. OK. So having made that definition, let me write down the definition of a positroid variety. If you give me f inside Bound(k, n), which in words is the set of (k, n)-banded affine permutations, so bijections from Z to Z satisfying those conditions from before, then you just look at all the matrices which have that f, and you think of those matrices as points in the Grassmannian. This cuts out a subset of the Grassmannian, and that's called the open positroid variety labeled by this banded affine permutation f. You should check that this definition of f_V isn't changed if you do row operations on the matrix V; acting with GL_k over here will not change f_V. So this function actually descends to the Grassmannian: it's not just a function on matrices, it's a function on the Grassmannian. The closed positroid variety is the closure of this. This guy is a closed subvariety of the Grassmannian, and this guy is a locally closed subvariety. So the first theorem says that there's a decent analogy between the properties of these varieties and the properties of Schubert varieties. First of all, it's actually not at all obvious that these varieties Pi_f, let's just talk about the closed one first, are irreducible. That's not clear from the definition, but they are. And the singularities are of roughly the same difficulty as those of Schubert varieties: they're normal and Cohen-Macaulay.
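As a quick sanity check (my own, not from the talk), the conditions of the lemma can be verified directly on the f computed in the running example with k = 2, n = 6:

```python
# f(1..6) from the running example; extend by f(i + n) = f(i) + n
f = {1: 3, 2: 2, 3: 6, 4: 5, 5: 7, 6: 10}
n, k = 6, 2

# banded: i <= f(i) <= i + n for every i
assert all(i <= f[i] <= i + n for i in f)

# bijection: the window values are distinct mod n, so the periodic
# extension f(i + n) = f(i) + n is a bijection from Z to Z
assert len({f[i] % n for i in f}) == n

# average increase is k: the increases f(i) - i sum to k * n over one period
assert sum(f[i] - i for i in f) == k * n
print("all lemma conditions hold")
```

The last assertion is the "number of balls" statement: the increases 2, 0, 3, 1, 2, 4 sum to 12 = 2 * 6.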
But it's not important if you don't know what those adjectives mean. Another thing I didn't write here, but let me add it, that is also similar to Schubert varieties, is that the ideals are linearly generated: the ideal of a positroid variety is generated by a bunch of Plücker coordinates. Allen is making a comment: it's not obvious that these things are non-empty. That is true as well. Yes, it's also not obvious that they're non-empty. Thank you. Right, so the point is, back here I said that doing some linear algebra you can determine that any V satisfies these conditions. What Allen is saying is that there's a harder, I think quite a bit harder, statement, which is that anything satisfying these conditions comes from some matrix. Yes, and non-emptiness is one level even harder than that. Second, the codimension of this variety is a statistic on the banded affine permutation: if you treat it as an affine permutation, then the codimension is the length of the affine permutation. This is analogous to how things work in Schubert calculus as well. And the third property that I wrote down here is that it's a stratification in the technical sense: the closure of one of these open strata is a union of a bunch of other open strata. And there are other descriptions. The description I gave is pretty close to the description I usually use to say quickly what positroid varieties are: positroid varieties are intersections of cyclically rotated Schubert varieties. That's pretty close to this f description I gave. There's also a description that's not obvious. If you know what a Richardson variety is, then you take a Richardson variety in the flag variety; it has a projection to a Grassmannian, and if you project Richardson varieties, you get positroid varieties.
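To illustrate the codimension statement, here is a sketch (my own) of the length computation for the running example. The length of an affine permutation is its number of inversions; for a banded f, with i <= f(i) <= i + n, only pairs with i < j <= i + n can be inversions, so the count is finite.

```python
def affine_length(window):
    """Number of inversions of the affine permutation with window
    [f(1), ..., f(n)], extended by f(i + n) = f(i) + n.  For a banded f
    (i <= f(i) <= i + n), any j > i + n has f(j) >= j > i + n >= f(i),
    so only j <= i + n can contribute an inversion."""
    n = len(window)

    def f(j):
        q, r = divmod(j - 1, n)
        return window[r] + q * n

    return sum(1 for i in range(1, n + 1)
                 for j in range(i + 1, i + n + 1)
                 if f(i) > f(j))

# The running example f = [3, 2, 6, 5, 7, 10]: its length is 4, so by the
# theorem its positroid variety has codimension 4 in Gr(2, 6).
print(affine_length([3, 2, 6, 5, 7, 10]))  # 4
```

Since Gr(2, 6) has dimension 2 * 4 = 8, this positroid variety would have dimension 4.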
And there's another way to get them by choosing a Frobenius splitting and then looking at compatibly Frobenius split subvarieties, which also results in the same stratification. Oh, there is also a question in the chat: what does it mean for g to be bigger than f in part three? Maybe Allen already answered. Yes, so I didn't want to spell this out. This is the Bruhat order on the affine symmetric group, analogous to the Bruhat order you would use if you were comparing Schubert varieties, where you would use the Bruhat order on a parabolic quotient of the symmetric group. So it's analogous, and it doesn't take that long to define, but we're not going to need to know exactly what that partial order is. And as I said, since this is the Schubert seminar, I'm in the mindset at the beginning of this talk that I want to compare positroid varieties to Schubert varieties. If we were talking about singularities of Schubert varieties, one of the famous results, due to Lakshmibai and Sandhya, answers the question of when Schubert varieties are smooth: there's a pattern avoidance criterion for it. In recent work, Billey and Weaver have found a criterion for smoothness of positroid varieties, so that's also known. OK. But in the Schubert seminar, one of the things that we tend to talk a lot about is Schubert classes. One of the reasons why Schubert varieties are so important is that their cohomology classes form a basis for the cohomology ring of the Grassmannian. The cohomology ring of the Grassmannian is symmetric functions modulo some ideal, and I wrote this ideal in a way where the ideal is generated by Schur functions whose partition is not contained inside a particular rectangle. There are many descriptions of this ideal. And the basic theorem connecting to Schubert calculus is that the class of a Schubert variety maps to this distinguished class, the Schur function, and these form a basis for the cohomology.
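To make the "partitions inside a rectangle" bookkeeping concrete, here is a small check (my own illustration) that the partitions fitting in a k x (n - k) rectangle, which label the Schubert classes of Gr(k, n), number C(n, k), the rank of the cohomology:

```python
from itertools import combinations
from math import comb

def partitions_in_box(k, m):
    """Partitions with at most k parts, each part at most m, i.e. the
    partitions fitting in a k x m rectangle.  Uses the standard bijection
    between k-subsets of {0, ..., k + m - 1} and such partitions."""
    result = []
    for S in combinations(range(k + m), k):
        # subset positions -> partition parts: part = position - index
        lam = tuple(sorted((s - t for t, s in enumerate(S)), reverse=True))
        result.append(lam)
    return result

k, n = 2, 6
P = partitions_in_box(k, n - k)
print(len(P), comb(n, k))  # 15 15: the Schubert basis of H*(Gr(2, 6)) has rank C(6, 2)
```

So for Gr(2, 6) there are 15 Schubert classes, from the empty partition up to the full 2 x 4 rectangle (4, 4).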
So with this comparison to Schubert varieties in mind, back when we started looking at positroid varieties with Allen and David, we proved that the classes of positroid varieties in the cohomology of the Grassmannian are represented by symmetric functions called affine Stanley symmetric functions. I'm not going to define these affine Stanley symmetric functions. So first of all, what do we need to know about them? They're labeled by affine permutations, and each one is a symmetric function, completely combinatorially defined. You can write down the monomial expansion: there's a formula for the affine Stanley function as a sum over certain reduced factorizations. And now I can mention the first open problem. I realized that, as I said, this was 10 years ago, and we actually still do not know, or maybe, I mean, now and then there are reports that this has been solved, and I don't know the current status, but I think we still do not know how to expand positroid classes in terms of Schubert classes. Since the Schubert classes form a basis and these are some other elements in the ring, you can expand them in terms of the basis. And computing the expansion coefficients turns out, in a not completely obvious way, to be equivalent to a certain Schubert calculus problem. The fact that it's equivalent to this Schubert problem is basically this other description of positroid varieties as projections of Richardson varieties. And it contains the solved problem of the structure constants of the quantum cohomology of the Grassmannian. So certainly the expansion coefficients, expanding positroid classes in terms of Schubert classes, is an interesting problem, and maybe we are not that far from solving it. OK, so everything I've said so far is trying to put positroid varieties in parallel with Schubert varieties, and that's certainly the point of view we had in the beginning.
But of course, it's clear from this result that they're not quite as natural as Schubert varieties. Schubert varieties are more natural because their classes give a basis, and for a lot of reasons this basis is the correct one to be working with, while positroid classes give a bunch of other symmetric functions. So in recent years, I've felt that a better analogy than comparing the positroid stratification of the Grassmannian to the Schubert stratification is to compare the positroid stratification of the Grassmannian with the torus orbit closure stratification of a projective toric variety. This is my notation for a toric variety. So what do we need to know about projective toric varieties? A projective toric variety is labeled by a lattice polytope; this guy here is a lattice polytope. And it comes with a stratification: the strata of the toric variety are in bijection with the faces of this polytope, and everything is glued together, with closures imitating the face poset of the polytope. So if the face is bigger, then the corresponding stratum is bigger, and so on. This stratification is obtained by taking the dense torus of the toric variety, letting it act, and looking at the torus orbit closures. And here's a list of reasons why I think this is a good analogy, so that you should think of the positroid stratification of the Grassmannian as some analog of the torus orbit closure stratification of a toric variety. I already mentioned Frobenius splitting, and I don't want to go in that direction too much, but if you know about that, then there's an analogy there. I'm going to spend some time talking about the fact that the positroid divisors form a canonical divisor in the Grassmannian, and the toric divisors of a toric variety also form a canonical divisor in the toric variety.
And there's also this notion of positive geometry, which is a little bit stronger than what I just said. The other reasons: mirror symmetry, where the positroid stratification seems to be the right analogy, and also relations to cluster varieties, where this also seems to be the right analogy. Recall that a cluster variety is made by gluing a lot of tori together, and of course a toric variety is also made by gluing a lot of tori together. Anti-canonical? Sorry, yes. Anti-canonical. Anti-canonical, thank you. So I may have said canonical earlier when I meant anti-canonical divisor. The boundary, the positroid divisors in the Grassmannian, forms an anti-canonical divisor in the Grassmannian. Oh, so as I wrote this slide, I realized that I don't know how to solve this problem down here, which I've actually never thought about until I wrote this slide. Back here, 10 years ago, when I was working on this with Allen and David, we wanted to look at all these symmetric functions, these affine Stanley symmetric functions, which represent positroid classes, and express them in terms of Schubert classes. But if the correct analogy is with a toric variety, then maybe instead of presenting the cohomology of the Grassmannian with the Schubert basis and having these extra elements floating around, I should have the cohomology of the Grassmannian be generated by positroid classes and write down a bunch of relations that positroid classes satisfy. That would be a little bit analogous to how we often think about the cohomology ring, or Chow ring, of a toric variety: it's generated by the divisors, and then you write down a bunch of relations using some linear algebra data on the corresponding rays.
So anyway, I haven't actually thought about this, but I'm curious if there is a way to present this ring knowing about positroid classes instead of knowing about Schubert classes. Okay, so why do I now think this analogy is better? One of the most potent ways to think about it is this notion of positive geometry. A toric variety has a dense torus, and the dense torus has a positive part. Let's write down what all these things are. By definition of being a toric variety, it contains a dense torus. And the dense torus contains a positive part: you take the real points, and then you choose one of the connected components and call it positive. So you get some kind of positive torus sitting inside. The positive part, or non-negative part, of the toric variety is the closure of this positive torus inside the toric variety. And you can find in Fulton's book, Introduction to Toric Varieties, a proof that this thing is actually diffeomorphic to a polytope, in fact diffeomorphic to the polytope that you started with. So sitting inside the toric variety, which is a complex algebraic variety, is something that actually looks like a polytope, or a topological space that looks like a polytope. And it turns out that something similar is going on in the Grassmannian. If you take the Grassmannian, you can define the totally non-negative part of it, which was defined by Lusztig and Postnikov: it's the locus where all the Plücker coordinates are non-negative. This defines a subspace of the Grassmannian; it's a semi-algebraic subspace. So let me advertise these two theorems. One is that this is not a polytope, but as a stratified space it's pretty much as close to a polytope as you can get: it's a regular CW complex homeomorphic to a closed ball.
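The defining condition, all Plücker coordinates non-negative, can be sketched in code. This is my own illustration: the matrix and the tolerance are my choices, and strictly speaking total non-negativity asks for some matrix representative of the row span whose maximal minors are all non-negative; the function below only tests the representative it is given.

```python
import numpy as np
from itertools import combinations

def plucker_coordinates(V):
    """All maximal minors of a k x n matrix V, indexed by k-subsets of the
    columns: the Pluecker coordinates of its row span in Gr(k, n)."""
    k, n = V.shape
    return {S: np.linalg.det(V[:, list(S)]) for S in combinations(range(n), k)}

def is_totally_nonnegative(V, tol=1e-9):
    """Checks whether this particular representative has all Pluecker
    coordinates >= 0 (up to a numerical tolerance)."""
    return all(p >= -tol for p in plucker_coordinates(V).values())

# Hypothetical example: a 2 x 4 matrix, all six 2 x 2 minors >= 0
V = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 1.0]])
print(is_totally_nonnegative(V))  # True
```

So the row span of this V would be a point of the totally non-negative Grassmannian Gr(2, 4)>=0.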
A regular CW complex means that it's a stratified space, the strata are the cells, and the closure of each of the strata is itself a closed ball. So every piece is a closed ball, and if you draw a picture of it, it basically looks like a picture of a polytope. A stronger statement is that it's a positive geometry. The statement that it's a positive geometry is stronger than what I said before, that the boundary divisor is an anti-canonical divisor. So let me write down what this definition says. There exists a top form, called the canonical form, omega, of this thing, with only simple poles along the boundary divisors, and when you take residues, you get a recursion. Okay, so this is a little bit sketchy. The first thing to say is that in this CW stratification of the totally non-negative Grassmannian, the strata are given by intersecting the totally non-negative Grassmannian with the positroid stratification. I didn't say that before, but I should have. And over here, in the toric variety case, the non-negative part is diffeomorphic to a polytope, and the face stratification of the polytope is obtained by intersecting with the torus orbit closures. So the two stratifications provide these topological spaces with their natural face stratifications. And the statement of being a positive geometry says that there is a top form, meaning a differential form of degree equal to the dimension; in this case, this top form would have degree k(n - k), the dimension of the Grassmannian. The feature of it is that it has poles at exactly the same places as the complex stratification has strata: its polar behavior is identical to that of this stratified space.
So wherever you've got a facet of your regular CW complex, the form has a pole; if you take the residue along there, you get another differential form, and that one will have poles along the facets of that stratum, and so on. So this is a little bit stronger. From the algebraic geometry perspective, it seems pretty close to saying that the boundary divisor is an anti-canonical divisor, so that there is some rational section of the canonical bundle with simple poles along the boundary. But this is a little bit stronger, because it asks for an actual form, and eventually you get down to a point and you want the residues all to be one. So that's part of the definition of a positive geometry.
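As the simplest illustration of this residue recursion (my own example, following the standard positive geometry literature rather than anything specific in the talk): the closed interval [0, 1] inside P^1 is a positive geometry, with canonical form

```latex
% The interval [0,1] \subset \mathbb{P}^1 as a positive geometry:
\Omega_{[0,1]}(x) \;=\; \frac{dx}{x} \;-\; \frac{dx}{x-1} \;=\; \frac{dx}{x(1-x)} \,,
% with simple poles exactly at the two boundary points x = 0 and x = 1, and residues
\operatorname{Res}_{x=0} \Omega_{[0,1]} = 1 \,, \qquad
\operatorname{Res}_{x=1} \Omega_{[0,1]} = -1 \,,
% each equal, up to a sign fixed by orientation, to the canonical form of the
% corresponding boundary point, which is the 0-form \pm 1.
```

Taking a residue at a boundary facet lands on the canonical form of that facet; iterating this down to the zero-dimensional strata, where the residues are plus or minus one, is the recursion in the definition.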