Alright, so welcome everyone to the Schubert seminar. Today we're happy to have Colleen Robichaux from UCLA, telling us about CM regularity and Kazhdan-Lusztig varieties. Just go ahead.

Great. Is my sound okay? Yes. So today I'm going to talk about some joint work with Jenna Rajchgot and Anna Weigandt, in which we obtain combinatorial formulas for the Castelnuovo-Mumford regularity of particular kinds of Kazhdan-Lusztig varieties using Grothendieck polynomials and their combinatorics.

To start, we'll begin with the complete flag variety. Those are all of the complete flags from 0 to C^n, that is, the flags where, in your chain of subspaces, the vector space dimension increases by one at every step. As many of you know, we can identify the complete flag variety with a quotient of GL_n, the invertible n by n complex matrices, where the opposite Borel subgroup that we quotient by consists of the lower triangular matrices in GL_n. By taking certain orbits of Borel subgroups, we get our Schubert cells, where this ordinary B is the Borel subgroup of upper triangular matrices. And by taking the Zariski closures of our Schubert cells, we get our Schubert varieties. One particularly nice property of Schubert varieties is that, when we take the closures of the Schubert cells to get Schubert varieties, these varieties decompose particularly nicely in terms of the Schubert cells, and this decomposition is governed by Bruhat order on permutations.

When we're studying these varieties, one thing we might want to look at is their singularities. That's what Woo and Yong were doing in 2006, where they were classifying singularities of Schubert varieties in terms of certain pattern avoidance conditions. One thing you can do is look at the fixed points under the action of the torus on your Schubert variety, where the torus is the set of diagonal matrices inside your Borel subgroup. We know that any point on our Schubert variety is in the Borel orbit of one of these fixed points. So when we're studying local questions about Schubert varieties, it makes sense just as well to study our Schubert varieties in local neighborhoods of these fixed points. And that's precisely what they did. Using a theorem of Kazhdan and Lusztig, we know that this local neighborhood of the Schubert variety near one of these fixed points is actually isomorphic to the Schubert variety intersected with an opposite Schubert cell, where the opposite Schubert cell is built by taking opposite Borel orbits instead of Borel orbits. Using this fact, the neighborhoods of Schubert varieties we were initially interested in are isomorphic to these intersections of Schubert varieties with opposite Schubert cells, up to an extra factor of affine space hanging off the end. And so what Woo and Yong did is say, well, let's just study this intersection and call it the Kazhdan-Lusztig variety. Using the Bruhat decomposition of these Schubert varieties, the Kazhdan-Lusztig variety is always indexed by permutations v and w, where v is greater than or equal to w in Bruhat order.
And these varieties have very nice defining ideals. In their paper, Woo and Yong give an explicit description of the generators of this ideal. Just so we all understand these varieties well, I'm going to go over an example that shows how we can compute these generators for our defining ideal.

In my example I start with permutations v and w. For these varieties, the permutation w dictates what size of minors I take, and the permutation v tells us what the matrix that we're taking these minors of will look like. Here, I'll start with w to get the minor size information. First, I draw the Rothe diagram for w. This is a diagram that will appear throughout this talk, so I just want to review how we construct it. For our permutation w in one-line notation, I start with an n by n grid, so here a 4 by 4 grid, and I place a bullet in row i, column w_i. So 1 gets sent to 4 by w, so row 1, column 4 gets a bullet; 2 gets sent to 1, so row 2, column 1 gets a bullet; and so on. Then immediately south and east of each bullet, I strike out each of those integer points in the grid. All of the integer points in the grid left untouched by this process get marked with boxes, and this collection of boxes is precisely the Rothe diagram of the permutation.

Next we want to compute the rank matrix for w, which we can do just by looking at the permutation matrix for w, but we can also read it off from the Rothe diagram. To compute the entry in a particular position of this matrix, we look at the corresponding position in the n by n grid, so that would be this position here, and we count how many bullets lie northwest of that position in our diagram. Here, looking northwest, we see one bullet, so I store a one there in my rank matrix. Whereas if I go here, I can see three bullets, which tells me to store a three in this entry of my rank matrix.

All right, so we have the information that tells us what size minors to take. Now let's build the matrix that we take these minors of. For that, all we're basically doing is drawing the Rothe diagram for v, just with different symbols. Instead of placing bullets, we place ones in row i, column v_i; instead of the rays south and east, we place zeros; and every point that would have received a Rothe diagram box now gets an indeterminate z_ij. OK, so that is how we get this matrix here. These varieties are defined by taking northwest minors, which means that you select a position in your matrix, and that tells you to take the minors of the submatrix northwest of the corner you chose. So choosing this corner says that I'll be taking minors of this red submatrix here. The size of the minors I take of this submatrix corresponds to the entry of the rank matrix in the same position as the corner I selected. Here, this corner corresponds to a one in my rank matrix, so that tells me to take the (1 plus 1)-size minors, so the 2 by 2 minors, of this submatrix, which gives, for example, these generators here.
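To make this construction concrete, here is a minimal sketch in Python, not from the talk, that computes the Rothe diagram and rank matrix of a permutation given in one-line notation. The talk's 4 by 4 example is not fully spelled out above, so the permutation below, and the helper names, are hypothetical stand-ins.

```python
# Sketch (not from the talk): Rothe diagram and rank matrix of a permutation w
# given in one-line notation as a list of values 1..n. The permutation below is
# a hypothetical stand-in consistent with the bullets described in the talk.

def rothe_diagram(w):
    """Boxes (i, j), 1-indexed: no bullet weakly north in column j and none weakly west in row i."""
    n = len(w)
    return {(i, j)
            for i in range(1, n + 1)
            for j in range(1, n + 1)
            if w[i - 1] > j              # bullet in row i sits strictly east of column j
            and w.index(j) + 1 > i}      # bullet in column j sits strictly south of row i

def rank_matrix(w):
    """Entry (i, j) counts the bullets lying weakly northwest of position (i, j)."""
    n = len(w)
    return [[sum(1 for k in range(i) if w[k] <= j) for j in range(1, n + 1)]
            for i in range(1, n + 1)]

w = [4, 1, 3, 2]                         # hypothetical example
print(sorted(rothe_diagram(w)))          # -> [(1, 1), (1, 2), (1, 3), (3, 2)]
print(rank_matrix(w))                    # 4x4 matrix of northwest bullet counts
```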
I could do this for a different corner here, which gives us this blue submatrix; looking at the corresponding entry in the rank matrix, that's a zero, so that tells me to take the size-1 minors of this blue matrix, that is, its entries, which gives us these generators. You might worry that you'd have to take all possible southeast corners and all of those minors, but due to a theorem of Fulton, these minors are enough to generate our defining ideal. Those corners are precisely the essential set of the Rothe diagram of w, as Fulton noted in his paper introducing matrix Schubert varieties. The essential set consists of the positions in your Rothe diagram with no box immediately south and no box immediately east, which is exactly the two corners here that we chose.

If you're familiar with matrix Schubert varieties, you can see that matrix Schubert varieties are special cases of these Kazhdan-Lusztig varieties: they're defined in almost precisely the same way, except that the matrix you take minors of doesn't depend on a permutation v; you always take minors of a matrix full of indeterminates z_ij. So matrix Schubert varieties are special cases of Kazhdan-Lusztig varieties. And even more special cases of matrix Schubert varieties are the classical determinantal varieties, for example taking the k-minors of some rectangular matrix; those are also special cases of matrix Schubert varieties, and hence of Kazhdan-Lusztig varieties.

One other thing to note: from how we're taking these minors and from the matrix we're taking them of, it's possible to have inhomogeneous generators for your defining ideal. Although in this case you can see, using some of the other generators, that this defining ideal is homogeneous, there certainly are many, many Kazhdan-Lusztig varieties whose defining ideals are inhomogeneous. Today, though, we're only going to talk about cases in which the Kazhdan-Lusztig ideal is homogeneous, and thus all of the gradings we use today will just be the standard grading, so we don't have to get too bogged down in the details of each grading.

Now, some things we can look at for the coordinate ring that we're interested in studying. We'd like, of course, to study the coordinate ring of the Kazhdan-Lusztig variety, and we can compute the minimal free resolution of this coordinate ring. From this data, one thing we can compute is the K-polynomial: the polynomial whose coefficients are alternating sums of these multigraded Betti numbers, which record the multiplicities of these free S-modules here. You might be more familiar with the K-polynomial popping up as the numerator of the Hilbert series, for example. Another invariant we can compute from these multigraded Betti numbers is the Castelnuovo-Mumford regularity, defined by taking the maximal difference between the indices of the non-vanishing multigraded Betti numbers. Roughly, this regularity gives us an idea of how complicated our minimal free resolution is; you can think of it as telling you the dimensions of your Betti table. More practically, it gives a lower bound for when the Hilbert polynomial equals the Hilbert function.
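As a rough illustration of Fulton's essential set just described, here is a short sketch continuing the hypothetical example from the earlier snippet (it assumes the rothe_diagram and rank_matrix helpers above are in scope): it keeps only the diagram boxes with no diagram box immediately south or east, and reads off the prescribed minor size, rank entry plus one, at each of them.

```python
# Sketch (not from the talk): Fulton's essential set and the minor sizes it
# prescribes, reusing rothe_diagram / rank_matrix from the previous sketch.

def essential_set(w):
    """Boxes of the Rothe diagram with no diagram box immediately south or east."""
    boxes = rothe_diagram(w)
    return {(i, j) for (i, j) in boxes
            if (i + 1, j) not in boxes and (i, j + 1) not in boxes}

w = [4, 1, 3, 2]                               # same hypothetical stand-in
r = rank_matrix(w)
for (i, j) in sorted(essential_set(w)):
    # at each essential position, take all (r[i][j] + 1)-size minors of the
    # northwest submatrix of the matrix built from v, as described in the talk
    print((i, j), "minor size:", r[i - 1][j - 1] + 1)
```

In symbols, the two invariants just mentioned are the following standard definitions, matching the description above, with S the polynomial ring and beta_{i,a}(M) the multigraded Betti numbers of the minimal free resolution of a module M:

\[
\mathcal{K}(M; \mathbf{t}) \;=\; \sum_{i,\,\mathbf{a}} (-1)^i \,\beta_{i,\mathbf{a}}(M)\, \mathbf{t}^{\mathbf{a}},
\qquad
\operatorname{reg}(M) \;=\; \max\{\, j - i \;:\; \beta_{i,j}(M) \neq 0 \,\}.
\]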
So if you really want to know when those are equal, you might want to compute this regularity to get that information. However, you often have to compute your entire Betti table to extract it, and computing these Betti numbers can be computationally expensive; we don't have nice formulas for what those Betti numbers should be. So we would of course like combinatorial formulas that compute these regularities without going through the Betti numbers. Luckily, in the case in which our coordinate ring is Cohen-Macaulay, we know that the regularity of the coordinate ring can be computed by taking the degree of the K-polynomial and subtracting off the codimension of your ideal. And luckily these Kazhdan-Lusztig varieties are Cohen-Macaulay, so we know they satisfy this proposition, and we can use this trick to our benefit to get some regularity formulas.

Before we delve deeper into the general, at least homogeneous, Kazhdan-Lusztig setting, let's first just talk about matrix Schubert varieties. I said before that these are special cases of our Kazhdan-Lusztig varieties, so of course we can apply the proposition from the previous page to understand more about the regularities of these matrix Schubert varieties. Applying it, the regularity of a matrix Schubert variety should be the degree of its K-polynomial minus the codimension of the ideal. In Fulton's work introducing matrix Schubert varieties, he notes that the codimension of these ideals is the Coxeter length of your permutation, that is, the number of inversions, which is easy to compute. So that's great news. We also know, by work of Knutson and Miller, that the Grothendieck polynomial is the K-polynomial of the matrix Schubert variety. So if we can compute the degrees of these polynomials, then we have another formula for the regularity of these matrix Schubert varieties by just subtracting the Coxeter length from the degree of this polynomial. However, it's not clear that there should be nice formulas for these degrees; in general, these polynomials are inhomogeneous. But luckily we do have some nice combinatorial formulas that allow us to compute these Grothendieck polynomials as weight generating series, so let's look at those a bit.

First we'll talk about a somewhat simpler polynomial, the Schubert polynomial. As noted in Knutson and Miller's paper, the Schubert polynomials are the multidegrees of matrix Schubert varieties. This means that the Schubert polynomial lives inside the Grothendieck polynomial, making up all of its lowest degree terms, right? There are, as I'm sure you know, many combinatorial formulas for Schubert polynomials, and the one I'll talk about today is in terms of reduced pipe dreams. So I'm just going to, let's see, my pen isn't working. We're just going to talk about this formula and explain how we can compute these Schubert polynomials combinatorially through an example, all right? And for that, we start again with the Rothe diagram of the permutation, which in this case is very small, just one, three, two. And what we do is we take our Rothe diagram and put a plus in each box.
And then I just erase all of the extra junk that's not a plus and left-align everything to get this starting game board here. From this starting board, we can do certain local moves. The move is: say we have a rectangle of pluses, two columns wide, and a lone plus down here, where these dots here, here, and here represent empty boxes. The move we can do is to let that plus all the way at the bottom jump up diagonally; this is called a ladder move, okay? And we can apply this move whenever that two-wide rectangle of pluses is any k rows tall, for any integer k, so it could even be empty. So here we can apply this move by looking at this region, where the stack of k rows of pluses has height zero, and I just move the plus diagonally like this to get this other diagram. We can write out all of the possible ways to apply these moves, and in fact these are the only two diagrams for this permutation. From each diagram you extract a monomial: the monomial you associate is the product of x_i raised to the number of pluses in row i of that picture. So in this picture here, there are no pluses in row one and one plus in row two, so that contributes this x_2 here. And in this picture we have one plus in row one and no pluses in row two, so that contributes this x_1, right? So this is one way to compute your Schubert polynomial.

It's clear from this description that the degree of the Schubert polynomial is the number of pluses in any of these diagrams, because the number of pluses is preserved by this local operation. And the number of pluses is precisely the number of boxes in our Rothe diagram, which equals the Coxeter length. So we know for a fact that the degree of the Schubert polynomial is just the Coxeter length; this is common knowledge. So the degree of the lowest degree terms of our Grothendieck polynomial is just the Coxeter length. Now, what about the highest degree terms? Well, we can compute the Grothendieck polynomials in a very similar way, using the same game board to start with. We start again with our Rothe diagram, replace every box with a plus, left-align all of the pluses, and that's our starting board. And then we again, oops, we again have a certain operation we can do, where we can move our pluses diagonally upwards just like we did in the Schubert setting. But for these Grothendieck polynomials we have another option: not only can we move our plus diagonally upwards, we can also leave one behind. So just to make this clear, our plus can move diagonally upwards, or it can move and leave a copy. And when we think about using that here, we can look at this two by two subsquare and do either the first option of move or the second, and that exhausts all of the possibilities here. We assign the monomials in exactly the same way, where we just have some alternation in sign based on the degree of the term, but that's very predictable and we won't have any unnecessary cancellations. We won't have any cancellations, okay? And using this formula, we know that the degree of the Grothendieck polynomial is the largest number of pluses that can possibly appear in one of these diagrams by applying these moves in any such way.
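Since the ladder-move pictures are hard to convey without the slides, here is a small brute-force sketch, not the talk's algorithm, based on the standard reading-word characterization of pipe dreams: a cross at position (i, j) contributes the simple transposition s_{i+j-1}, crosses are read along rows top to bottom and right to left within each row, subsets of size l(w) whose word multiplies to w give the Schubert polynomial, and subsets whose Demazure (0-Hecke) product is w give the Grothendieck polynomial with sign (-1)^(|P| - l(w)). All function names are mine, and this is only sensible for small permutations.

```python
from itertools import combinations

def compose(p, q):
    """(p o q)(i) = p(q(i)) on one-line permutations (tuples of values 1..n)."""
    return tuple(p[q[i] - 1] for i in range(len(p)))

def length(p):
    """Coxeter length = number of inversions."""
    return sum(1 for i in range(len(p)) for j in range(i + 1, len(p)) if p[i] > p[j])

def s(a, n):
    """Simple transposition swapping a and a+1."""
    p = list(range(1, n + 1))
    p[a - 1], p[a] = p[a], p[a - 1]
    return tuple(p)

def word_of(crosses):
    """Reading word: rows top to bottom, right to left within each row; (i, j) -> i + j - 1."""
    return [i + j - 1 for (i, j) in sorted(crosses, key=lambda c: (c[0], -c[1]))]

def demazure_product(word, n):
    """Fold the word with the Demazure product: multiply only when the length goes up."""
    v = tuple(range(1, n + 1))
    for a in reversed(word):
        sv = compose(s(a, n), v)
        v = sv if length(sv) > length(v) else v
    return v

def grothendieck_monomials(w):
    """(exponent vector, sign) pairs of the Grothendieck polynomial of w, by brute force."""
    n = len(w)
    cells = [(i, j) for i in range(1, n) for j in range(1, n) if i + j <= n]
    out = []
    for k in range(len(cells) + 1):
        for P in combinations(cells, k):
            if demazure_product(word_of(P), n) == tuple(w):
                expo = tuple(sum(1 for (i, j) in P if i == r) for r in range(1, n))
                out.append((expo, (-1) ** (len(P) - length(tuple(w)))))
    return out

w = (1, 3, 2)                          # the talk's small example
terms = grothendieck_monomials(w)
print(terms)                           # x1 + x2 - x1*x2, as exponent vectors with signs
# the size-l(w) subsets (sign +1) are the reduced pipe dreams: Schubert polynomial x1 + x2
print(max(sum(e) for e, _ in terms))   # degree of the Grothendieck polynomial: 2
```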
And tying that back to our regularity formula, this tells us that the regularity of these matrix Schubert varieties is the degree of the Grothendieck polynomial minus that Coxeter length, which was the same as the degree of the Schubert polynomial. So in a sense, this regularity is also telling us the difference between the highest and lowest degree terms of our Grothendieck polynomial. Summing everything up, we see that using that proposition, the regularity is the degree of this Grothendieck polynomial minus the Coxeter length. You might be tempted to just say, let's just use this pipe dream formula: any time I want to compute the regularity, I'll just write down the largest possible pipe dream I can find, subtract off the Coxeter length, and be done with it. But the issue is that as your diagrams get larger and larger, it can be very difficult to know whether you've made the right choices of moves, or the right sort of greedy algorithm, to get the largest possible number of pluses. And so whenever we were thinking about this question, we weren't quite satisfied with just using this rule and wanted a more precise description of the regularity and these degrees. So after the break, I will tell you about our formula for the case of vexillary permutations. We'll stop there for our break.

Very nice.
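As a recap of the formula discussed above, in symbols, writing X_w for the matrix Schubert variety of w and using Grothendieck-polynomial notation not used in the talk itself:

\[
\operatorname{reg}\, \mathbb{C}[X_w] \;=\; \deg \mathfrak{G}_w(x_1, \dots, x_n) \;-\; \ell(w).
\]

For the small example w = 1 3 2 from before, the largest diagram in the Grothendieck game has two pluses and the Coxeter length is one, so the regularity is 2 - 1 = 1.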