I'll take a five minute break, just let us know. But in any case, welcome everyone to our Schubert seminar. Today we have Julianna Tymoczko from Smith College, who's going to talk about which Hessenberg varieties are GKM. All right, thank you. So I took you at your word about creating a talk that was accessible to graduate students, even a wide range of graduate students, and that's how we're going to start. Here's the basic plan. First, I'm going to talk about what Hessenberg varieties are, what the GKM approach is, and really why Hessenberg varieties. Then we're going to take a break, and come back and talk both about torus actions on Hessenberg varieties and the question of which Hessenberg varieties are GKM, which at that point it will be natural to think of really as a partial list of which nilpotent Hessenberg varieties are GKM. So, let's see, I'm going to start. I didn't know Bill would be here today, so I feel like a postdoc again. I'm going to start by situating Hessenberg varieties, and situating them very concretely. They are subvarieties of the flag variety, and for me, for today, everything is in GL_n; in fact, everything I draw will be in C^3. So a flag is a line containing the origin, contained in a plane, contained in a three-dimensional space. I can think of it as a choice of a vector spanning the line, to which I add another linearly independent vector to span the plane, to which I add another linearly independent vector to span the three-dimensional space. This is fully accurate but somewhat inconvenient, because it leads to a lot of redundancy in the choice of spanning vectors at each stage. So, motivated by the desire to pick a unique representative, I would basically do Gaussian elimination.
And so on the line, depending on your conventions you could pick a different way, but I personally would choose to rescale so that the lowest non-zero entry is one; in this case, just dividing that vector by two. Once I've done that, I again have a lot of redundancy in my choice of the plane: I really need to Gaussian eliminate. Not only can I rescale my second vector, I also want to eliminate all the appearances of the first vector in the second vector. So I subtract off a multiple of the first vector, so that the entry sitting in the position of the first vector's lowest non-zero entry becomes zero. And then, if I have not done anything wrong, after subtracting I have the vector (0, 2, 2), which I rescale to get (0, 1, 1). I keep doing this process all the way through, coming up with a unique-ish set of representatives based on whatever conventions I decided to pick at the outset, in terms of which entry I'm going to normalize and so on. Great. I can make this more concise simply by stacking all of these vectors into one matrix, and summarize by saying: choosing good representatives for the flags partitions the flags into Schubert cells. For our purposes, we will think of a Schubert cell as a permutation matrix with free entries above and to the left of the pivots and zeros everywhere else. If I wanted to think more intrinsically about what's happening, I am picking coset representatives for double cosets of the general linear group, where I take double cosets with respect to the Borel subgroup of upper triangular matrices. The permutation matrix that shows up in the middle records the pivots from the Gaussian elimination, and it also indexes the double coset. That's where Hessenberg varieties live: inside the flag variety.
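The column-reduction procedure just described can be sketched in code. This is a minimal illustration, not from the talk, and the function name is hypothetical; it assumes the columns of the input matrix are linearly independent, normalizes each column's lowest non-zero entry to 1, and clears that row in the later columns, so the pivot positions trace out the permutation matrix indexing the Schubert cell.

```python
import numpy as np

def canonical_flag(M, tol=1e-12):
    """Column-reduce a full-rank matrix so that each column's lowest
    nonzero entry is 1 and that entry's row is zeroed out in all later
    columns; returns the reduced matrix and the pivot rows, which form
    the permutation matrix indexing the Schubert cell."""
    M = M.astype(float)
    n = M.shape[1]
    pivots = []
    for j in range(n):
        # lowest (largest row index) nonzero entry of column j
        rows = [i for i in range(M.shape[0]) if abs(M[i, j]) > tol]
        p = max(rows)
        M[:, j] /= M[p, j]            # normalize the pivot to 1
        for k in range(j + 1, n):     # clear the pivot row to the right
            M[:, k] -= M[p, k] * M[:, j]
        pivots.append(p)
    return M, pivots
```

For instance, a generic invertible 3-by-3 matrix reduces to a form with ones in a permutation pattern and free entries elsewhere.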
To think about what Hessenberg varieties are, I'm actually going to start with something that is a Hessenberg variety but is not all Hessenberg varieties. I want to start by picking an n-by-n matrix X; the one in blue is the one we're going to start with. And we're going to ask: when does X preserve a flag, in the sense that X sends the i-th part of the flag into the i-th part of the flag for all i? The way we're going to approach this, for this entire talk, is to test the condition on individual Schubert cells. In other words, we take X, the matrix on the far left there, and multiply it against a Schubert cell, the one we were identifying before. I did the matrix multiplication ahead of time; there it is. What it means in this sense for X to send the i-th part of the flag into the i-th part of the flag is the following. This first vector, (0, 1, a), should be a multiple of the first vector here; that's X sending V_1 into V_1. Then, for the second: once I know the image of the first vector is contained in the span of the first vector, in order for the span of the first two vectors to be sent into the span of the first two vectors, I just need the condition that the image of the second vector is in the span of these first two vectors here. So you can think of this as a sequence of vector equations: I've got a particular vector I want to ensure is in the span of one vector, another vector that I test to see if it's in the span of two vectors, and so on, and so forth. In this case, we can furthermore look at this first column, that orange vector that I marked out.
All right, so the first column on the right-hand side there, (0, 1, a), is in fact not a multiple of (1, a, b), because the pivots are just in the wrong place. So here, it is not true that X V_i is contained in V_i. In fact, it's a perfectly fine exercise to go through this process, and I sort of feel like if I had graduate students here, I would ask them to find a Schubert cell, some part of which satisfies the condition that X V_i is contained in V_i. It's a small calculation. So, this particular variety that we're looking at is called the Springer fiber. Some people would call it a Grothendieck–Springer fiber if you use, say, an arbitrary matrix X. And there's a quite literal sense in which the line of the Springer fiber, satisfying the condition that X V_1 is contained in V_1, is giving you an eigenvector of X. The Springer fiber itself is on some level giving you an eigenflag of X. It's a fairly restrictive condition, especially with the choice of X that we started with there, but there's a lot of different kinds of both combinatorics and interesting geometry of various sorts. If I have extra time, you could stop me and ask more about that, but for now I'd like to move on to saying what Hessenberg varieties are. To start with, I'm just going to give you exactly the same slide that we had before. I'm still going to pick this n-by-n matrix X, but instead of asking whether X preserves the flag, sending V_i into V_i, I am going to broaden this condition: I'm going to say that X sends V_i into V_{h(i)}, where h is going to be some function on the indices. I still want the condition to respect the flag structure, so I'm going to ask that h(i) be both greater than or equal to h(i-1) and greater than or equal to i.
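The flag-preservation test X V_i ⊆ V_i can be checked numerically by a rank computation: X V_i lies inside V_i exactly when stacking the images onto the first i columns does not raise the rank above i. A minimal sketch, not from the talk, with a hypothetical function name:

```python
import numpy as np

def preserves_flag(X, M, tol=1e-9):
    """Test whether X sends each V_i (the span of the first i columns
    of M) into V_i, i.e. whether the flag lies in the Springer fiber
    of X.  X V_i <= V_i iff appending the images X V_i to the first i
    columns leaves the rank at i."""
    n = M.shape[1]
    for i in range(1, n + 1):
        Vi = M[:, :i]
        if np.linalg.matrix_rank(np.hstack([Vi, X @ Vi]), tol=tol) > i:
            return False
    return True
```

As a sanity check, an upper triangular X preserves the standard coordinate flag, while a matrix sending e_1 out of the line fails at the first step.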
That first condition is a non-decreasing condition. Since my flags already have the property that V_i is contained in V_{i+1}, I really do want my function h to have a condition like that. The second condition, that h(i) be greater than or equal to i, we'll revisit in a moment or two, and we'll have to revisit it a little later on as well. It's a condition you really could get rid of, and some interesting things happen when you do; we're just not going to today. So let's do this with a specific function: we're going to take h(i) = i + 1, and h(3) = 3. And we're going to test this specific set of containment conditions on this specific Schubert cell. So here, we're asking: does X send the i-th part of the flag into the (i+1)-st part of the flag? Let's go through what that's saying for each of the vectors. On the right-hand side, we have X times the different spanning vectors for the flag. What we want is for the first column to be in the span of the first two columns of the flag itself, and then for the second column to be in the span of the first three columns. I'll just observe right now that those three columns span all of C^3, and in particular I can actually see the vector (0, 0, 1) right in there, so that second condition is vacuously satisfied. So what about the first condition: is the vector (0, 1, a) contained in the span of those two orange vectors? I'm going to pause for a second, and just wait for any comments here, including on the very specific question of whether the vector (0, 1, a) is in the span of those two column vectors circled in orange. For me, I'm going to use a lightweight linear-algebra check of whether a vector like this is in the span of these two.
First, note that any multiple of the vector (1, a, b) is going to produce a non-zero entry in the bottom row, which would contradict the zero in the bottom row on the left-hand side, unless I take a zero multiple of that first column (1, a, b). And then I would say: (0, 1, a) is in the span of (0, 1, c) if and only if a = c. So on the one hand, I've strictly relaxed the conditions from the Springer fiber: I've expanded the number of flags that satisfy the condition, as well as the number of Schubert cells that are represented in the Hessenberg variety. On the other hand, I've also visibly got some conditions on the entries that can in general get quite complicated. So this is our first answer to what a Hessenberg variety is. They generalize Springer fibers, and they have the property that Schubert cells intersect Hessenberg varieties in affine pieces. I've got a little asterisk there because there's some sensitivity to how you've chosen your basis relative to the linear operator X; I'm not claiming this in full generality, but it is possible to choose your basis nicely enough that the intersection of the Hessenberg variety with each Schubert cell is affine, homeomorphic to C^d for some d, and furthermore these affine pieces form a paving by affines. Topologically speaking, it's like a CW complex in the sense that the closures of these cells, these affine pieces, give you the cohomology or homology. But the closures are more sensitive, and in fact they're quite mysterious in most Springer fibers and in almost all Hessenberg varieties. Springer fibers were first developed as an example of geometric representation theory: their cohomology carries a natural action of the symmetric group S_n. It's quite beautiful, and there are multiple different constructions; in fact, they're not all equivalent.
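The relaxed condition X V_i ⊆ V_{h(i)} generalizes the Springer test from before in the obvious way. Here is a minimal sketch, under the same assumptions as earlier (function names hypothetical, h given as a 1-indexed list of values, flag given by the columns of a matrix):

```python
import numpy as np

def is_hessenberg_function(h):
    """h = [h(1), ..., h(n)] must be non-decreasing with h(i) >= i."""
    n = len(h)
    return all(h[i] >= i + 1 for i in range(n)) and \
           all(h[i] <= h[i + 1] for i in range(n - 1))

def in_hessenberg_variety(X, M, h, tol=1e-9):
    """Does the flag spanned by the columns of M satisfy
    X V_i <= V_{h(i)} for all i?  Checked by rank: the images of the
    first i columns must not raise the rank of the first h(i) columns."""
    for i in range(1, len(h) + 1):
        Vh = M[:, :h[i - 1]]
        img = X @ M[:, :i]
        if np.linalg.matrix_rank(np.hstack([Vh, img]), tol=tol) > h[i - 1]:
            return False
    return True
```

For example, the nilpotent shift operator sends the standard flag into itself shifted by one step, so the standard flag lies in the Hessenberg variety for h = (2, 3, 3) but not in the Springer fiber, h = (1, 2, 3).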
Sometimes Springer's representation is used to refer to a particular association of Springer fibers with irreducible S_n representations, and sometimes to the dual, depending on exactly how you've constructed it. Consequently, there's a lot of interesting combinatorics around the cells of Springer fibers and standard Young tableaux and inversions, and certain limitations on inversions. When you add Hessenberg conditions, you gain the ability to systematically limit what kinds of inversions you look at, including narrowing to particular classes of inversions like descents, for instance. And we'll say a little more once I've said a little about why we use GKM theory. I think of GKM theory as a computationally feasible way to construct cohomology combinatorially. A priori, I'm going to start with more data, but at the end of the day I will have a module basis that is also a cohomology basis, and I will have a fairly routine mechanism to go from the equivariant cohomology to the ordinary cohomology. It only works if you start with varieties that are relatively well-behaved. This is usually the world in which I live, and part of the point of this talk is to try to figure out which of the Hessenberg varieties are also in this world. The basic setup is that we have some torus acting on a variety, acting well. I'm actually going to say very little about what "well" means, except to observe that it holds for the spaces we're going to be looking at. Largely, if it acts well, then we can get a graph from the one-dimensional and zero-dimensional T-orbits, and moreover, we're going to get labels on the edges of this graph, namely the weights on these one-dimensional T-orbits.
One of the details that I am not spelling out about "well" is that the torus acts nicely enough that the one-dimensional and zero-dimensional orbits form a nice combinatorial graph. Again, we're not going to worry about anything that's not a perfectly good finite graph, without crazy loops or multi-edges or anything like that; I'm going to elide all those conditions and refer you to a different talk if you want to hear more about that. So then the GKM result, due to Goresky, Kottwitz, and MacPherson, is that if you've got this nice setup, then the equivariant cohomology can be described as the ways of labeling the vertices of this graph with polynomials, one polynomial for each vertex, satisfying the condition that the difference of the polynomials at any two adjacent vertices is a multiple of the label on that edge. So, back to flags for a second. Here, and really for the rest of the talk, when I think of my torus, I think of diagonal matrices. So we can just inspect what the weights and the zero- and one-dimensional orbits are in the flag variety. To do this, I'm actually just going to multiply matrices. I think that I'm multiplying this correctly, and I'm going to count on my audience members to correct me if I'm not. So I just multiplied, and the issue, as I see it, is that this matrix on the right is not in my preferred form. On the other hand, I can do some Gaussian elimination. I'm just rescaling columns, which is to say multiplying on the right by something, to normalize those bottom non-zero entries. If I do that, I have to divide the first column by t_3, the second column by t_2, and the third column by t_1, and now I have a matrix that is back in the cell in which I started. So these weights are coming from the way the torus acts within each cell.
I'm usually going to linearize these as t_i − t_j, especially when I'm thinking about GKM theory. These are the same variables; just look at context to figure out whether I'm looking at one of the linearized weights or thinking about the actual group action. In terms of identifying the zero-dimensional orbits: in this case, you can visibly see that when I'm sitting inside a Schubert cell, if I apply any non-trivial torus element whatsoever, then I'm going to be moved around somewhere inside that Schubert cell. Which is to say the zero-dimensional orbits have to have a, b, and c all equal to zero, otherwise I am being moved; so they are just the permutation matrices. As for the one-dimensional orbits: if I have more than one of those non-permutation entries non-zero, then again I will be moved around in different parts of the Schubert cell. So the one-dimensional orbits have only one non-zero entry other than the permutation. Consequently, we can see the one-dimensional orbits and the zero-dimensional orbits, and on each one-dimensional orbit we can actually just concretely multiply and identify the weight. So, what is the graph? The graph is going to have vertices that correspond to the zero-dimensional orbits. For the edges, the one-dimensional orbits: we can see in this example up here that if I take any one-dimensional orbit, I can use my torus to shrink the variable a, say, and make it as close to zero as I like, so I can see there's one permutation flag in the closure of that one-dimensional orbit. Conversely, if I use my torus to push the entry a out toward infinity, you can see it has the effect of making the line look very much like the vector (0, 1, 0).
And the plane remains the same span of two permutation vectors that it was to begin with. So that's a hand-wavy way of saying that these edges are going to connect two permutation flags that differ by exchanging two of the columns. And then finally, the label: the label is going to be t_i − t_j, where i and j are the two columns that were exchanged. So we can just draw this. I'm going to write things out using simple transpositions to concisely describe my permutation matrices, conflating the matrix with the permutation itself. So, here are my permutations, and I'm going to draw these red edges. The red edges are labeled by t_1 − t_2; if you go across a red edge, left multiplication by s_1 will take you from one endpoint to the other endpoint. Blue edges indicate t_2 − t_3; left multiplication by s_2 takes you across those blue edges. And I'll create some green edges. The green are t_1 − t_3, so left multiplication by the transposition that exchanges 1 and 3, the permutation s_1 s_2 s_1, will take you across those green edges. So, great. Let me just take this graph for a second and move it down here. All right. So, if we are using this graph to create the equivariant cohomology, I am looking for all possible ways of solving a system of modular equations in polynomials, and I'll label some of these vertices in a way that satisfies the orange condition: whenever I go across an edge, the difference of the polynomials on either side should be a multiple of t_i − t_j. So t_1 − t_2 across the red ones, t_2 − t_3 across the blue, t_1 − t_3 across the green. So let's see, I'll take this one. I'm going to put zero on these two vertices, and I'll put t_1 − t_2 over here. Across that bottom red edge, I know I need a multiple of t_1 − t_2 on the top of it, and t_1 − t_2 will work.
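The graph just described can be built mechanically. A minimal sketch, not from the talk, with a hypothetical function name: vertices are permutations in one-line notation, and w is joined to (i j)·w (left multiplication by a transposition, which swaps the values i and j) by an edge labeled with the weight t_i − t_j.

```python
from itertools import permutations, combinations

def flag_gkm_graph(n):
    """GKM graph of the full flag variety for GL_n: vertices are
    permutations of (1..n) in one-line notation; w and (i j)w are
    joined by an edge labeled (i, j), standing for t_i - t_j."""
    verts = list(permutations(range(1, n + 1)))
    edges = {}
    for w in verts:
        for i, j in combinations(range(1, n + 1), 2):
            swap = {i: j, j: i}                    # the transposition (i j)
            v = tuple(swap.get(x, x) for x in w)   # left multiplication
            edges[frozenset({w, v})] = (i, j)
    return verts, edges

verts, edges = flag_gkm_graph(3)
```

For n = 3 this reproduces the picture in the talk: six vertices, nine edges (three red, three blue, three green), with each vertex lying on one edge of each color.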
So, if I keep going up: this second vertex on the left is also above a zero across a red edge, so it needs to be some multiple of t_1 − t_2, and in fact t_1 − t_2 works; it also satisfies the condition on the leftmost green edge. Moving up on the right-hand side, the vertex s_2 s_1 requires us to put some multiple of t_1 − t_3, in order to satisfy the condition coming from the zero right below it across a green edge. And now if you look at this blue edge right here: the difference between the endpoints is t_2 − t_3, which is in fact a multiple of that blue edge's label. So similarly, I can put t_1 − t_3 up at the top, and all of my GKM conditions are satisfied. This is in fact a GKM class, and in fact it corresponds to a Schubert class. There are cool things connected to this graph, one being that I actually did not have any choice at all: once I decided to put t_1 − t_2 at s_1 and zeros below it, and required that I put a homogeneous degree-one polynomial everywhere, this is the only way to solve the system. I could say more, but I will not. So, just from GKM theory, once you have an adequate torus action, you get this beautiful combinatorial construction of the whole equivariant cohomology. So, back to the question: why Hessenberg varieties? There's a well-known result that if X is regular semisimple, so diagonal with distinct values along the diagonal, then the Hessenberg varieties are GKM, and both the equivariant cohomology and the ordinary cohomology have beautiful S_n actions, inherited really from sitting inside the flag variety. This representation shows up in combinatorics in multiple ways, in an active and evolving theory. So, I'm going to show you what the graphs are for the regular semisimple case.
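The divisibility bookkeeping in this walk-through can be verified symbolically. A minimal sketch, not from the talk: the vertex-to-polynomial assignment below is my reading of the class being built on the slide (zeros on two vertices, t_1 − t_2 on s_1 and the vertex above it, t_1 − t_3 on s_2 s_1 and the top), and the check is exactly the GKM condition, that across each edge {w, (i j)w} the difference of labels is divisible by t_i − t_j.

```python
import sympy as sp
from itertools import combinations

t1, t2, t3 = sp.symbols('t1 t2 t3')
t = {1: t1, 2: t2, 3: t3}

# A candidate class on S_3, vertices in one-line notation
# (assignment reconstructed from the talk's description):
f = {
    (1, 2, 3): sp.Integer(0),   # e
    (1, 3, 2): sp.Integer(0),   # s2
    (2, 1, 3): t1 - t2,         # s1
    (2, 3, 1): t1 - t2,         # s1 s2
    (3, 1, 2): t1 - t3,         # s2 s1
    (3, 2, 1): t1 - t3,         # w0 = s1 s2 s1
}

def is_gkm_class(f):
    """GKM condition: across the edge {w, (i j)w}, the difference of
    the vertex labels must be divisible by t_i - t_j."""
    for w in f:
        for i, j in combinations((1, 2, 3), 2):
            swap = {i: j, j: i}
            v = tuple(swap.get(x, x) for x in w)
            _, r = sp.div(sp.expand(f[w] - f[v]), t[i] - t[j], t1, t2, t3)
            if r != 0:
                return False
    return True
```

Changing any single value generically breaks the condition, which reflects the rigidity mentioned in the talk: with degree-one labels and the bottom values fixed, the rest of the class is forced.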
The graphs are, like, dual to what you're going to see me construct for nilpotent Hessenberg varieties. So, without saying more about why, I'm just going to show you what the graphs are in the regular semisimple case. So, here's my flag variety graph, and I'm going to paste a couple more copies in here, smaller and smaller copies. My Hessenberg variety comes with a function; let me give you almost every function for n = 3. All right, so the thing on the far left, the full flag variety, which is a Hessenberg variety, corresponds to the function h(i) = 3, which is determined by h(1) = 3. All of these regular semisimple Hessenberg varieties have the same fixed points: every permutation works. But if I make the Hessenberg function as small as possible, I erase all of those edges everywhere in the graph. If I allow myself a little bit of wiggle room around each i, I'm again erasing edges; this time, I erase almost all the edges. Now, if I take the function h(1) = 2, h(2) = 3, h(3) = 3, I'm just erasing the middle edges. And then finally, on the left, I have all of the edges that I started with. So if you think of it as moving from the full flag variety to the right: each time you take a step to the right, you erase precisely one green, one red, and one blue edge. It's determined exactly by the conditions of the Hessenberg function, and it's determined by which reflections you are right-multiplying by: right multiplication determines which edges you remove to get your regular semisimple Hessenberg graphs. Now, it's not that I'm going to say anything about this really, but just very quickly... I take it back.
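The edge-erasing pattern can be sketched concretely. This is not from the talk; it assumes the standard description of the moment graph of a regular semisimple Hessenberg variety (going back to De Mari–Procesi–Shayman and Tymoczko): all permutations are fixed points, and w is joined to w·(i j), right multiplication by a transposition, exactly when i < j ≤ h(i).

```python
from itertools import permutations, combinations

def rss_hessenberg_graph(n, h):
    """Moment graph of a regular semisimple Hessenberg variety for the
    Hessenberg function h = [h(1), ..., h(n)]: vertices are all
    permutations; w and w.(i j) are joined whenever i < j <= h(i)."""
    verts = list(permutations(range(1, n + 1)))
    pairs = [(i, j) for i, j in combinations(range(1, n + 1), 2)
             if j <= h[i - 1]]
    edges = set()
    for w in verts:
        for i, j in pairs:
            v = list(w)
            v[i - 1], v[j - 1] = v[j - 1], v[i - 1]   # right mult. by (i j)
            edges.add(frozenset({w, tuple(v)}))
    return verts, edges
```

For n = 3, shrinking h steps the edge count down from the full 9, matching the sequence of pictures: 9, then 6, then 3, then 0 edges.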
I will not say anything about the permutation action, except to observe that since all of the vertices of the graph are permutations, and since we have what at least visibly is a very symmetric condition on the edges, you can actually permute the equivariant classes in certain ways and get other equivariant classes. So it's super combinatorial, really just based on the graph. And, huh, great, that's time for the break. Oh, wait. Let me actually just say this. This is the question that we'll leave for the break: which other Hessenberg varieties are GKM? Great.