All right, welcome everyone. Welcome to the 2022 edition of the Schubert seminar. Our first speaker for 2022 is Allen Knutson from Cornell, and he's going to tell us about the commuting variety and its degenerations. So take it away, Allen. Thanks for having me. So, I'm starting off with a non-Schuberty thing, which is the space of pairs of commuting matrices. If I tell you I'm thinking about pairs of matrices, and I'm imposing these n² many quadratic equations — that each matrix entry of XY, which is a homogeneous quadratic in the entries of X and Y, equals the corresponding matrix entry of YX — is the ideal generated by those n² many equations radical? If it's not, it means there are somehow secret equations that are also true on the set of pairs of commuting matrices — that hold for any pair X and Y of complex matrices that commute — but that you can't directly derive from those quadratic equations. That might sound crazy, but let's start with a simpler example. What if I told you about matrices whose square is zero? You'd say: well, matrices whose square is zero are nilpotent; if they're nilpotent, all the eigenvalues are zero; if the eigenvalues are zero, then the trace is zero. And in the process of saying such things you've derived the linear equation trace(M) = 0. So you've derived it from these homogeneous quadratic equations, and that's not possible without taking a square root somewhere. You really notice that if you think about the one-by-one case, where I tell you I've got a number whose square is zero: can you derive from that that the number is zero? If it's a complex number, sure. But if you're trying to do algebraic geometry, where you might be running into nilpotents and such, then you're not supposed to be able to derive that directly. So nobody knows whether there are secret equations on the space of pairs of commuting matrices.
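To make the M² = 0 example concrete, here is a small sympy sketch (mine, not from the talk) for 2×2 matrices: the trace vanishes on the zero set of the four entries of M², but neither the trace nor its square lies in the ideal those entries generate — only the cube does.

```python
# Sketch: for a generic 2x2 matrix M = [[a, b], [c, d]] with M^2 = 0,
# check which powers of tr(M) = a + d lie in the ideal generated by
# the four entries of M^2.
from sympy import symbols, groebner

a, b, c, d = symbols('a b c d')
M2_entries = [a**2 + b*c,   # (M^2)_{11}
              a*b + b*d,    # (M^2)_{12}
              a*c + c*d,    # (M^2)_{21}
              b*c + d**2]   # (M^2)_{22}
G = groebner(M2_entries, a, b, c, d, order='grevlex')

trace = a + d
# remainder on division by the Groebner basis is zero iff the element is in the ideal
rems = [G.reduce(trace**k)[1] for k in (1, 2, 3)]
print(rems[0] != 0, rems[1] != 0, rems[2] == 0)  # tr and tr^2 are not in I; tr^3 is
```

So the "secret" linear equation tr(M) = 0 really is invisible to the ideal itself, exactly as described.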
You can check it for small sizes, up to four by four, easily enough: you can ask Macaulay2 whether I == radical I, and find out that it's fine there — there aren't secret equations — but that doesn't tell you for all n by n. This question was brought to my attention like 15 years ago, a little more, and I said, well, I have some tricks that might get a hold of this. The usual trick people think about is to say: we have a GL_n action on this space, we could conjugate X into Jordan canonical form, and then Y commutes with that — does that help us? Well, it's pretty gross figuring out what commutes with something in Jordan canonical form. But even if you did that, you'd still somehow be thinking about the set of pairs, not thinking about the ideals. Anyway, so I said, let's degenerate this guy: let's take these quadratic equations and degenerate them, maybe not all the way to monomial equations, but to something simpler. So I introduce this diagonal matrix with powers of t down the diagonal, and I use the linear change of coordinates where I take my matrix X and right-multiply by the diagonal matrix, and take Y and left-multiply by the inverse. Then instead of the equation XY = YX, I get this equation here on X' and Y': it's not that X'Y' equals Y'X' on the nose, it's that it equals Y'X' conjugated by this diagonal matrix. So for any nonzero value of t, I've got a scheme that's isomorphic to the original commuting scheme, isomorphic by this stupid linear change of variables. And I had at this point — as I say, this was 15–20 years ago — gotten used to taking the limit as t goes to zero and seeing what happens to the equations, and hopefully seeing what happens to the scheme. Now, you can't just say that the limit of the equations will define the limit scheme.
That's a Gröbner basis sort of statement you'd be making. It's like: if I have two planes intersecting in a line, and I take the limit of those planes and they fall on top of each other, then the limiting planes don't intersect in the limiting line — they intersect in a plane. So I'm not making any claim right now that when I take the limit of these n² many quadratic equations as t goes to zero, I'm necessarily getting the actual limit of the commuting scheme; I might be getting something accidentally bigger, the way that limiting plane isn't the right thing but merely contains the limiting line. So, nonetheless, let's take the limit of those equations. There are n² many equations going on here, and n of them are really easy: if you compare the diagonal of this guy to the diagonal of that guy, they're equal on the nose — conjugating by the diagonal matrix doesn't do anything to the diagonal. That's what I have over here. If you think about what conjugating by this diagonal matrix of powers of t does in general, it introduces positive powers of t in one triangle of Y'X' and negative powers in the other triangle; the actual power, t^(i−j), depends on which diagonal you're in. Down the main diagonal it's t⁰, like I was saying, but in general you get positive powers in one triangle and negative powers in the other. Where you've got positive powers and you let t go to zero, that's easy: that entry vanishes, and instead of saying that a matrix entry of this equals a matrix entry of that, you get that a matrix entry of this is zero. Nothing tricky going on there. But where there's a negative power of t, you might worry your equation is blowing up as t goes to zero. That's fine: you multiply through by the right power of t, so that instead of a negative power of t on one side you've got a positive power of t on the other side, and you take the limit.
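The t^(i−j) bookkeeping can be checked symbolically; here is a small sympy sketch (with my indexing convention for the diagonal matrix, which may differ from the slides):

```python
# Sketch: conjugating an n x n matrix M by D = diag(t, t^2, ..., t^n)
# multiplies the (i, j) entry by t^(i-j): positive powers of t in one
# triangle, negative in the other, t^0 on the main diagonal.
import sympy as sp

n = 3
t = sp.symbols('t', positive=True)
D = sp.diag(*[t**(i + 1) for i in range(n)])
M = sp.Matrix(n, n, lambda i, j: sp.Symbol(f'm{i}{j}'))

conj = D * M * D.inv()
for i in range(n):
    for j in range(n):
        assert sp.simplify(conj[i, j] - t**(i - j) * M[i, j]) == 0
print("entry (i,j) of D M D^{-1} is t^(i-j) * M[i,j]")
```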
So I stared at these n² many new quadratic equations for a while, and noticed that these last ones are almost implied by the previous ones. (I'm going to stop saying prime all the time; we'll just call them X and Y.) So XY is lower triangular and YX is upper triangular. Well, those two matrices have the same characteristic polynomial: that's obvious when X is invertible, because then they're conjugate to each other, and then by continuity it's always true. So the eigenvalues of XY and of YX are the same. And the eigenvalues are on the diagonals, because these are triangular matrices. So I know the diagonal of XY is some permutation of the diagonal of YX; all that last group of equations does is be specific about which permutation it is. So then I thought, okay, what if we leave those out? This is the thing I introduced in my paper "Some schemes related to the commuting variety", which I called the lower-upper scheme. In the paper I switched pretty quickly to the upper-upper scheme, which I thought was more convenient, but these days I actually prefer the lower-upper scheme — again, not an interesting difference, just a linear change of variables. So this is pairs of matrices where I only impose the first two groups of equations; I say nothing about how the diagonals relate. This doesn't have the GL_n action we had before — remember, I said you can conjugate X and Y simultaneously by an element of GL_n and they'll still commute, or not. Here you can't do that, but there is an action of this group, which is worse than GL_n because it's solvable, but better than GL_n because it's bigger — it actually has twice the dimension of torus in it that GL_n has. The action is very simple: I multiply X on the left and right by my lower and upper triangular matrices, and Y on the right and left by their inverses.
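A quick numerical sanity check (mine, not from the talk) of the characteristic-polynomial fact used above — it holds even when X is singular, by the continuity argument:

```python
# Sketch: XY and YX always have the same characteristic polynomial,
# even when X is not invertible.
import numpy as np

rng = np.random.default_rng(0)
n = 4
X = rng.standard_normal((n, n))
X[:, 0] = 0.0                    # make X visibly singular
Y = rng.standard_normal((n, n))

# np.poly of a square array returns the characteristic polynomial coefficients
assert np.allclose(np.poly(X @ Y), np.poly(Y @ X))
print("char poly of XY == char poly of YX")
```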
If you multiply through, you'll get something like B₋ XY B₋⁻¹, and you see that this keeps XY lower triangular, like it's supposed to be. That's why I have this group action. And you can blame this group action on the original GL_n: as we degenerate the scheme, we're also degenerating the group. Actually, that's not quite fair, because this group is better than just the degenerate GL_n — you get more group invariance once you're finished degenerating. So anyway, what's particularly nice about this better action, in some sense, is that I can now take X and, rather than just putting it in Jordan canonical form, reduce it to a partial permutation matrix. If X were invertible, this would be the Bruhat decomposition; so I'm doing something like the Bruhat decomposition on matrices instead of on invertible matrices. If X were invertible I'd reduce it to a permutation matrix — of course that would require k = n, and I have examples where k isn't n, so I wanted to say it in this generality. Anyway, that lets me reduce X to finitely many possibilities, which is definitely better than the Jordan canonical form story, where I have continuously many different Jordan forms. And then for each of those finitely many you can ask what the Ys are that are lower-upper with them. I use that to index the components of the lower-upper scheme, and I show, for example, that when k = n they are exactly indexed by those permutations I spoke of before. So — let me say it this way. Take this scheme; sorry, leave these equations out, so it's now the lower-upper scheme, this scheme here. And ask, not that the diagonal of XY is some permutation of the diagonal of YX, but just that the diagonal of XY has no repeats. (For k less than n, I guess that could happen.) If XY has no repeats, then that defines an open set.
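Here is a sketch of one standard recipe for the partial permutation matrix attached to X, via ranks of northwest submatrices; the conventions (which triangular group acts on which side) may differ from the talk's, and this is my illustration rather than the construction in the paper:

```python
# Sketch: partial permutation matrix attached to X via northwest ranks,
#   P[i,j] = r(i,j) - r(i-1,j) - r(i,j-1) + r(i-1,j-1),
# where r(i,j) is the rank of the northwest i x j corner of X.
# For invertible X this recovers the permutation of the Bruhat decomposition.
import sympy as sp

def partial_permutation(X):
    m, n = X.shape
    r = lambda i, j: X[:i, :j].rank() if i and j else 0
    return sp.Matrix(m, n, lambda i, j:
                     r(i + 1, j + 1) - r(i, j + 1) - r(i + 1, j) + r(i, j))

X = sp.Matrix([[0, 1, 0],
               [1, 1, 0],
               [0, 0, 0]])        # a singular example
P = partial_permutation(X)
print(P)   # a 0/1 matrix with at most one 1 in each row and column
```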
And I'm telling you that that open set is dense. The open set is disconnected: it has one component for every way of matching up the k diagonal entries of XY with some of the n diagonal entries of YX, with all the ones you don't match required to be zero. And those are how you index the components of the lower-upper scheme. The other thing I proved about this, besides thinking about its components, is that in the square case it's a complete intersection. So it's defined not by n² equations anymore, only n² − n. But those are dimensionally independent, let's say: they cut the dimension down from the 2n² where we started (or 2nk when k isn't n) to the expected dimension. Back to k = n: they cut it down from dimension 2n² to n² + n. And I want to point out that for the commuting scheme, we started with 2n² dimensions and imposed n² equations, so naively we'd get something n²-dimensional, but instead we get something (n² + n)-dimensional: X could be diagonal, Y could be another diagonal — that's 2n right there — and hitting them with an element of GL_n modulo the diagonals adds another n² − n, for a total of n² + n. So that guy is not a complete intersection. But once you leave out those diagonal equations, then you do get a complete intersection, and complete intersections are really easy to think about from lots of points of view. One thing I get from them in particular is that the degree of a complete intersection, by Bézout's theorem, is just the product of the degrees of the defining equations. I've got n² − n many defining equations, each of them quadratic, so the degree is 2^(n²−n). And that's the sum of the degrees of the components. So I was in this weird position at this point, in this paper from a long time ago, where I didn't know how to compute these degrees individually.
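The dimension and degree bookkeeping in this paragraph, as a tiny script (my summary of the numbers just stated, square case only):

```python
# Sketch of the numerology for the lower-upper scheme, square (k = n) case:
# ambient space of pairs (X, Y) has dimension 2n^2; the scheme imposes
# n^2 - n quadratic equations (the diagonal equations are left out), is a
# complete intersection of dimension n^2 + n, and by Bezout has degree
# 2^(n^2 - n).
def lower_upper_numerology(n):
    ambient = 2 * n * n
    equations = n * n - n             # n^2 equations minus the n diagonal ones
    dimension = ambient - equations   # = n^2 + n, the expected dimension
    degree = 2 ** equations           # product of the degrees of the quadrics
    return dimension, degree

for n in (1, 2, 3, 4):
    print(n, lower_upper_numerology(n))
```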
I knew that each of them is some positive natural number, and I knew how to compute their sum. There's one case that's really easy: what if X is supported in the northwest triangle and Y in the southeast triangle? These are linear equations — I'm telling you that certain entries of X and Y vanish — and that gives you kind of the dumbest component of this guy. Linear equations mean that the degree of that component is one. And it corresponds to the permutation where the diagonal of XY is the reverse of the diagonal of YX. The other case is the one that's maximally far away from the easy case — as it always works out, right? So I had this situation where I had an easy case with a one, I had the hard case that I wanted, and I had all these other guys, and when I added them all together I got 2^(n²−n). Sorry — now for something completely different. Let's consider this Markov chain, based on a number n. The Markov chain has states that are complete matchings of the numbers 1 through 2n; that's what these three pictures are. I've got teeny little 1, 2, 3, 4 around the boundary of this disk, and in this case 1 is matched to 2, versus matched to 3, versus matched to 4. It's easy to count these complete matchings: there are (2n − 1)!! of them. Those are going to be the states of this Markov chain invented by de Gier and Nienhuis, who are mathematical physicists. They say: let's have these transitions where we spin this wheel of fortune — it goes around and around and around, and maybe it stops here, between the 1 and the 2. Having spun the wheel of fortune, we then flip this unfair coin. It's not a heads-and-tails coin; it's an e-and-f coin.
What's very important for me is that it's unfair: e comes up two thirds of the time, f comes up one third of the time. What I do after flipping this coin is: if it comes up e, then I create a tiny little arch connecting positions i and i + 1. So in this case, if I connect 1 and 2 — well, they're already connected, nothing happens; you'll see there's a little self-loop here labeled e₁. If I were here and I connected 1 and 2, I would get over to there, because whoever 1 and 2 were connected to, I just connect those two to each other. So when e comes up, I connect i and i + 1, and whoever they were connected to get connected to each other. If instead it's f, I flip who they're connected to: in this case, instead of 2 being connected to 1 and 3 connected to 4, I would connect 2 to 4 and 3 to 1. So if my wheel came to here and I got the f, it would jump me over to there; that's what this f₂ is. So that's the process, the Markov chain. And it should be pretty intuitive that this guy in the middle is the least likely possibility, even for general n: the matching where everybody is matched directly across is the least likely possibility. I'm not going to prove that right now, but just to try to be convincing: two thirds of the time you spin the wheel, you're going to be creating one of these little arches connecting an i and i + 1, so points are going to typically be connected to things very nearby them. If you then insist that they're supposed to be connected maximally far away, you've got a very long drunkard's walk through the f's to fix that. So de Gier and Nienhuis, and separately Zinn-Justin and Di Francesco, conjectured several things about this Markov chain. One is that this state is least likely.
And that when you take the ratio of anybody else's probability to this least likely probability, you get an integer. In this tiny case, n = 2, the integers are 3 and 1 and 3 — the numbers down here. That's the stationary distribution of this Markov chain: if you run this thing a billion times, you'll be in here one seventh of the time and in each of those three sevenths of the time. So they conjectured that this ratio of probabilities will always be a natural number. That's totally not true if you use a fair coin, by the way — you get totally gross numbers. It's very special to use the two-thirds/one-third coin. Eventually one blames it on the Yang–Baxter equation; I'm not going to talk about how that works. There is another case you could think about, where e comes up all the time. What happens is that very shortly you never visit the cases where there's a crossing — you only visit non-crossing chord diagrams — and you end up with only Catalan-many states that you're actually wandering through. I'm not going to talk about that one today either. So they computed these numbers — they're easy to compute by computer for reasonable sizes, and I think they went from n = 1 up to 8, computing all these integers. And then they said: let's focus attention not on this case, because that's too dumb, that's the least likely case; let's focus attention on this case where the left guys connect directly to the right guys. So here I get 3. If for each size n you connect the left guys directly across to the right and ask for the relative probability of that one, you get the sequence of numbers 1, 3, 31, 1145, and so on. So they plugged that into the OEIS, and the OEIS said: those are the degrees of the first four commuting varieties, and the next four numbers you mention, I don't recognize them.
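As a sanity check on the n = 2 numbers (3, 1, 3, so stationary probabilities 3/7, 1/7, 3/7), here is a small sympy sketch of the chain as I understood the rules above; the precise e/f conventions are my reconstruction of the pictures, so details may differ from the slides:

```python
# Sketch (reconstructed rules): the de Gier--Nienhuis chain for n = 2.
# States: the 3 perfect matchings of {0,1,2,3}. Pick a position i uniformly
# (cyclically), then apply e_i with probability 2/3 (arch joining i, i+1,
# old partners joined to each other) or f_i with probability 1/3 (swap the
# partners of i and i+1).
import sympy as sp

def partner(m, x):
    return next(b if a == x else a for a, b in m if x in (a, b))

def e(m, i, j):   # connect i--j; connect their old partners to each other
    a, b = partner(m, i), partner(m, j)
    if a == j:
        return m
    return frozenset({frozenset({i, j}), frozenset({a, b})} |
                     {p for p in m if not (p & {i, j})})

def f(m, i, j):   # swap: i gets j's old partner and vice versa
    a, b = partner(m, i), partner(m, j)
    if a == j:
        return m
    return frozenset({frozenset({i, b}), frozenset({j, a})} |
                     {p for p in m if not (p & {i, j})})

N = 4
states = [frozenset({frozenset({0, 1}), frozenset({2, 3})}),
          frozenset({frozenset({0, 3}), frozenset({1, 2})}),
          frozenset({frozenset({0, 2}), frozenset({1, 3})})]  # the crossing one
pe, pf = sp.Rational(2, 3), sp.Rational(1, 3)
T = sp.zeros(3, 3)
for s, m in enumerate(states):
    for i in range(N):
        j = (i + 1) % N
        T[s, states.index(e(m, i, j))] += pe / N
        T[s, states.index(f(m, i, j))] += pf / N

pi = (T.T - sp.eye(3)).nullspace()[0]
pi = pi / sum(pi)
print(pi.T)   # stationary distribution: (3/7, 3/7, 1/7)
```

The crossing matching indeed gets probability 1/7, and the ratios to it are the integers 3, 3, 1.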
So they put in a sequence of length eight, and the OEIS recognized the first four. The OEIS said the degree of this fourth guy, 1145, was computed by Nolan Wallach in 1993, by lashing ten SPARCstations together. So they contacted Nolan Wallach, and he said: I have no idea how these things are connected, but I did just see this talk by Allen Knutson on the commuting variety. So then they went to my paper, and based on finding these numbers there — remember, I was computing the degrees of these E_π's for small n — they came up with a more general picture. This is item number two here: if you have a chord diagram that doesn't just connect your 2n guys to each other in the old way, but connects the left n guys to the right n guys in some permutation — so there are n! of these bipartite chord diagrams where the left side connects to the right side — then the probabilities they were getting match the degrees of my lower-upper varieties. So then they contacted me and said: yeah, we found these numbers in your paper, so what's going on with that? It took a couple of years for me and one of those four guys, Paul Zinn-Justin, to prove these conjectures, and also to generalize them in a way I'm not going to talk about today, to handle general chord diagrams. For that you need something bigger than the lower-upper scheme — you need something that has a component not just for every permutation, like the lower-upper scheme has, but for every perfect matching — and I invented such a thing and showed that the degrees of its components compute these things in general. So — this could be a moment for questions in the chat, if people are ready to ask questions now. Good, no questions. Right. So, here's just a bunch of stuff about degrees. This slide came from a talk I gave in a colloquium, so I wanted to be colloquial about degrees.
I'll mention several ways to think about degrees that probably most people here know, but just in case. People usually talk about the degree of a projective variety, but I'm going to think about its affine cone: X will be the affine cone over some projective variety, defined by some homogeneous polynomials. You can take that guy, intersect it with a complementary-dimension plane, and count points. Or you could intersect it with the unit sphere and measure the volume of that with respect to some Hermitian metric on your vector space — what if you use a different metric? That's okay, because you're normalizing. There's also taking the leading term of the Hilbert polynomial. Then the first of two cohomology ways: you projectivize, and that defines an element in the ordinary cohomology of projective space. The cohomology groups of projective space are one-dimensional, so it's defining some number times the generator of that cohomology group, and that number is the degree. And the way I'm actually going to want to deal with it most often is in equivariant cohomology. I said this guy was invariant under scaling — it's a cone, defined by homogeneous equations — so I'll think of it as defining an element in the equivariant cohomology of the vector space. The equivariant cohomology of a vector space is just a polynomial ring in one variable, and the class comes in a certain degree of that polynomial ring. So I look at the coefficient — which, unfortunately, is also called the degree. The degree of this polynomial here is n − k, and its coefficient is the degree of X. I've got various degeneration moves that I'm going to want to use to compute these degrees. Eventually I'll be computing not just degrees but equivariant cohomology classes, but that was a less colloquial thing to talk about, so the slides mostly focus on the degrees.
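The first definition — slice with a complementary-dimension plane and count points — played out on a toy example of mine: the affine cone xy = z² in C³ (the cone over a conic, degree 2) meets a generic affine line in exactly 2 points.

```python
# Sketch: degree by slicing. The cone xy = z^2 has dimension 2 in C^3,
# so a complementary-dimension slice is an affine line; a generic one
# should meet the cone in deg = 2 points.
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
cone = x*y - z**2

# a fixed but generic-enough affine line, parametrized by t
line = {x: 1 + 2*t, y: 3 + 5*t, z: 7 + 11*t}

poly = sp.Poly(cone.subs(line), t)
sols = sp.roots(poly)              # roots of the restriction, with multiplicity
assert sum(sols.values()) == poly.degree() == 2
print("the line meets the cone in", sum(sols.values()), "points")
```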
So I'm going to split the ambient vector space into a hyperplane and a line — that's, I hope, easy to picture: I've got a hyperplane and a line. And X is neither the hyperplane nor the line; it's some curvy thing sitting in that ambient space, which might intersect the hyperplane somewhere. There are two limits I can perform on X that will get me other schemes to think about, and they'll have the same degree as X itself does. One is that I scale the line down; that's what's going on here with taking X and scaling. The z here only acts on the L coordinate, as written: it leaves the hyperplane alone, shrinks the L coordinate, and I take the limit. This is something like a slow-motion projection of X into that hyperplane. Any single point of X will indeed fall into the hyperplane when you hit it with z — all you're doing is projecting it and losing the coordinate you had on L. That's what happens to the points individually, but it's not what happens to the limit of the scheme X. Dumbest case: what if X were all of V? If you take big V and scale it, nothing changes — for every value of z you just get big V — so I take the limit and it's big V. You can definitely get plenty of stuff that isn't in the hyperplane, because you're sort of pulling stuff in from infinity, and there's always more out there to be pulled in. So that's one kind of degeneration. If you think about what that degeneration is doing on the level of algebra, it's lexicographic: it takes the generators of the ideal of X and looks for the highest power of L in them. You might think, that's weird — z is going to zero and I'm looking at the highest power — and that's because acting on the space is dual to acting on its coordinate ring. The other limit is that you take z to infinity; this one's actually simpler. I'm scaling away from H.
Okay, so for example, imagine I have a circle in the plane, and I'm going to scale the x-axis out. What will happen is my circle becomes an ellipse, and in the limit it becomes two horizontal lines. How do you think about that? You say: take the circle, intersect it with the hyperplane, and you get two points. Those aren't going anywhere; take them and cross them with the line. That's what you get in the limit. That's not exactly true if your X lives in the hyperplane to begin with, or has components that live in the hyperplane to begin with, but generically that's how you should think of what happens when you stretch out. On the algebra side, it's looking for the lowest power of this variable L. Now, these two operations are actually related to one another: they're projective dual. So what's projective duality about? We start with a variety X inside V, like I was doing, and we define this guy CX, the conormal variety to X. It's the set of pairs of a point in V and a point in V*, because that's where it lives, where the point v is a smooth point of X. (You could worry about what's happening at the singularities; we're just going to punt on that by avoiding them and taking a closure later.) So I take smooth points of X, and I consider the hyperplanes in V that are tangent at those smooth points — that contain the tangent space at the point v. Speaking dually, I'm thinking about vectors in V* that are perpendicular to the tangent space. This guy lives inside V × V*, and if you think of V × V* as the cotangent bundle of V, then the cotangent bundle is naturally symplectic and this guy is Lagrangian inside it. In the 19th century I don't think they were thinking about that so much, but they did observe that you can start with X inside V...
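Going back to the circle example at the start of this discussion, the stretch limit can be checked symbolically; a sketch of mine, matching the picture just described:

```python
# Sketch: stretch the x-axis of the circle x^2 + y^2 = 1 by 1/z and let
# z -> 0. Substituting x -> z*x gives the equation of the stretched ellipse,
# and the limit keeps the lowest power of x: the two lines y = +/- 1.
import sympy as sp

x, y, z = sp.symbols('x y z')
circle = x**2 + y**2 - 1

# a point (a, b) on the circle moves to (a/z, b), so the moved variety
# is cut out by the substituted equation
ellipse = sp.expand(circle.subs(x, z * x))   # z**2*x**2 + y**2 - 1
limit_eq = ellipse.subs(z, 0)
print(sp.factor(limit_eq))   # (y - 1)*(y + 1): two horizontal lines
```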
...take this guy CX inside V × V*, and then project to V*, and you'll get something we'll call the projective dual cone. And it's dual in the sense that if you repeat this process, you get back to where you started. It's absolutely terrible trying to relate things about X to things about its dual: there's no relating the dimensions of the two, and if X contains Y, that doesn't tell you how their duals sit inside one another — it's a very tricky business. But if we've split V into H × L, we thereby split V* into H* × L*, and scaling L down is like scaling L* up. So these two limit notions are projective dual in that sense. If you think about moving X — don't take the limit yet, just move X using z — then you'll correspondingly be moving the dual using z inverse, and so the lex limit of one relates to the revlex limit of the other. Pretty soon I'm going to be using both, and mixing the two of these together. Okay. So, what I want to compute is the degrees of the E_π. Remember, back in the de Gier–Nienhuis story, the degrees of the E_π were relevant for computing the probabilities that you're in one of these states. I'm not going to go for the degrees of the E_π directly; I'm going to sneak up on them by first projecting E_π to just the X variables, or just the Y variables. And there we meet some very familiar spaces: matrix Schubert varieties, and their duals. So — I had forgotten about a small break; this is a great time. Thank you very much. Any questions before the break? Sounds like there are no questions, so maybe we'll take a five-minute break. It's 4:34 right now, so maybe until 4:40. Okay. And I will say this claim made at the bottom is correct. You can find these slides online, so you'll be able to jump around independent of my jumping around.