Welcome everyone to the Schubert seminar today. We're happy to have Professor Alex Yong from the University of Illinois at Urbana-Champaign telling us about Newell-Littlewood numbers. Please, take it away.

Thank you Leo, thank you Anders, for the invitation. It's a true honor to be here and I'm very happy to speak to my fellow Schubert calculators. I'm going to speak about some joint projects I had with Shiliang Gao, Gidon Orelowitz, and Nicolas Ressayre. This is a Schubert seminar, and Schubert will be kind of low-key in this particular talk, but it is there.

In order to set up a little bit of a historical preamble, let me talk about central dogmas. The central dogma in science that I'm aware of is the one in molecular biology, due originally to Francis Crick and later restated by Watson. It's often stated as: DNA makes RNA makes proteins. This is the sort of thing Watson had stapled to his wall when he was thinking about the great scientific contributions that he and Crick made — possibly their biggest one.

I want to talk about an analogy in combinatorial representation theory. This is something I found in an old paper of Berenstein and Zelevinsky, who credit Bernstein and Gelfand. The trio here would be: the Kostant partition function — the number of ways to write a vector as a nonnegative integer combination of positive roots — makes the Kostka coefficients — the multiplicity of a particular monomial in a Schur polynomial — which in turn make the Littlewood-Richardson coefficients — how you multiply two Schur polynomials and expand in the basis of Schur polynomials.

Now, I'm slightly lying about what Berenstein and Zelevinsky said. In reality, they weren't talking about Kostka coefficients, they were talking about weight multiplicities; and they weren't talking about Littlewood-Richardson coefficients, they were talking about tensor product multiplicities. So let's extend to this blue situation, where we have tensor product multiplicities, for which LR is just the GL_n case. I think it's also part of the central dogma of combinatorial representation theory that if you know something in type A, you should really do it in general type; that's the general process by which one thinks about these things. The purpose of this particular talk is to suggest the injection of another level, in red: the NL level, the Newell-Littlewood numbers. It's my job later on, in the second half of this talk, to explain what NL means. But I'm going to start by focusing on the LR case, which is to say GL_n, and that's where I begin.

Okay, I'm going to assume as little as possible. So, a reminder: the general linear group GL_n is the group of invertible n × n matrices with complex entries. I'm interested in linear representations of a group, GL_n in particular; that's just a homomorphism from GL_n to the invertible transformations of some finite-dimensional vector space. An equivalent way to say this is that you can think of your vector space as having a GL_n-module structure, given by the action through your homomorphism, which is to say it's a module over the group ring of GL_n.

All right, let me give an example that, at least I think, tells you most of the theory in some sense. Let's start with n = 2: my vector space is going to be the degree-two polynomials in two variables, and my action is going to be by change of coordinates, in the displayed way.
So what's this change of coordinates going to do? Under a linear change of coordinates, a degree-two polynomial gets converted to another degree-two polynomial. In fact it's a change of basis, and this change of basis is given by this three-by-three matrix. So the homomorphism description of the representation takes a two-by-two matrix with entries a, b, c, d and maps it to that particular three-by-three matrix. And it would be an exercise to check that if the two-by-two matrix has nonzero determinant, then the three-by-three matrix has nonzero determinant.

Another feature of this particular example that I want to point out is that the entries of my three-by-three matrix are all polynomial in a, b, c, d. One could also talk about the possibility of the entries being rational functions, but it's known in the theory that if you understand the polynomial case, the rational case follows easily. So we'll just talk about the polynomial case for the sake of this conversation. You could go even further, but we'll stick with polynomial.

All right, recall that a representation is irreducible if it has no nontrivial GL_n-submodules. The basic questions in combinatorial representation theory, from where I sit, are: you'd like to know what the irreducible representations are, and you'd like to understand their dimensions. In this particular setting, Issai Schur, in his 1901 PhD thesis, in fact determined the irreducible polynomial representations of the general linear group, and he showed that they are in bijection with partitions with at most n rows. In case you've never seen partitions: partitions are weakly decreasing sequences of positive integers, and one usually draws the associated Young diagram; but judging from the audience, you've probably all seen that.

One other point I want to make is about the character of a representation. By definition, the character means I take my homomorphism, apply it to a generic diagonal matrix diag(x_1, ..., x_n), and compute the trace. Of course, if you're familiar with characters of finite groups, you might be expecting to substitute an arbitrary element here rather than a diagonal matrix. But the point is that trace is invariant under conjugation, so for a diagonalizable matrix it suffices to calculate at the diagonalization; and in addition, almost every matrix is diagonalizable, so by continuity you really see everything by computing this special case. If my representation is polynomial, as I have here, then of course the character is going to be a polynomial. But the definition doesn't know about the way I labeled the indices, so it will moreover be a symmetric polynomial.

Here is the example from the previous page: in the matrix with entries a, b, c, d I substituted a = x_1, d = x_2 (and b = c = 0), and I end up with this particular matrix, whose trace is a polynomial that's symmetric in x_1, x_2. As you may be aware, there are a lot of interesting symmetric polynomials, such as the Schur polynomials, which can be defined in terms of things like semistandard tableaux. But for the purpose of this conversation, you can take the definition of the Schur polynomial for a partition λ to be just the character of Schur's irreducible representation associated to λ. In our running example, λ is (2,0).
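To make that concrete, here is the trace computation for the running example written out in full. This is my own rendering of the slide's calculation, using the monomial basis v², vw, w² of degree-two polynomials:

```latex
% Character of GL_2 acting on degree-two polynomials in two variables:
% substitute the generic diagonal matrix diag(x_1, x_2) into the
% representation and take the trace (basis: v^2, vw, w^2).
\[
\rho\!\begin{pmatrix} x_1 & 0 \\ 0 & x_2 \end{pmatrix}
=
\begin{pmatrix} x_1^2 & 0 & 0 \\ 0 & x_1 x_2 & 0 \\ 0 & 0 & x_2^2 \end{pmatrix},
\qquad
\operatorname{tr} = x_1^2 + x_1 x_2 + x_2^2 = s_{(2,0)}(x_1, x_2).
\]
```

The symmetry in x_1, x_2 is visible, as promised, and the result is the Schur polynomial for the partition (2,0).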
Okay, the other thing I brought up in the preamble is the tensor product. No doubt you know the tensor product, but there's one little point that I want to emphasize, so let me make sure we go through the definition. I have two vector spaces V and W, and I want to think about their tensor product as vector spaces. The definition posits the existence of some vector space V ⊗ W with the following property: if you have a bilinear map from the Cartesian product V × W to another vector space, then there exists a unique linear map from the tensor product to that vector space through which it factors. If such a tensor product exists, then it's unique, by the usual universal-property argument. And moreover, constructing the tensor product is not a big deal either: you basically create symbols v ⊗ w and force the map from V × W to V ⊗ W to be bilinear by throwing in infinitely many relations.

In our particular instance we're interested in V and W being GL_n-modules — so they are modules, but I want to take their tensor product merely as vector spaces. When you do that, the vector space tensor product will also carry an action, the diagonal one, where a group element g acts by g · (v ⊗ w) = (g · v) ⊗ (g · w). And since GL_n is reductive — which means that any representation decomposes into irreducibles — when I take the tensor product of two of Schur's irreducibles, I can write it as a direct sum of irreducibles with some multiplicities, denoted c^ν_{λ,μ}.

How do you compute c^ν_{λ,μ}? Well, that's where characters come in. The characters are polynomials, and you can multiply polynomials — the Schur polynomials — and expand again in the basis of Schur polynomials. As it turns out, product of characters corresponds exactly to tensor product, and sum of characters corresponds to direct sum. Therefore answering this question just amounts to a computation with polynomials. And so the next question would be: is there some combinatorial way to compute c^ν_{λ,μ}?
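As a tiny illustration of the character method just described — this is my own added example, not from the slides — take the defining representation C² of GL_2, whose character is s_{(1)} = x_1 + x_2, and square it:

```latex
\[
s_{(1)} \cdot s_{(1)} = (x_1 + x_2)^2
= \underbrace{(x_1^2 + x_1 x_2 + x_2^2)}_{s_{(2,0)}}
\;+\; \underbrace{x_1 x_2}_{s_{(1,1)}},
\]
% i.e. C^2 \otimes C^2 = Sym^2(C^2) \oplus \wedge^2(C^2), so the
% multiplicities are c_{(1),(1)}^{(2,0)} = c_{(1),(1)}^{(1,1)} = 1.
```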
But before I get into that, let me just say, for those of you who are new to this particular topic, that the way I constructed things may seem a little bit artificial. The way I made it sound is: well, you have these representations, you have this natural operation of tensor product, so why don't we just take tensor products and decompose, as a thing to do. But when you work with representations and you're trying to build them up, you really would in fact be taking tensor products in order to actually construct the damn things. And so this question is not just an artificial question; for the purposes of the subject, it's really the endpoint of the whole setup.

All right, let me speak about the Littlewood-Richardson coefficients, the LR coefficients. These c^ν_{λ,μ}, the tensor product multiplicities, are known as the Littlewood-Richardson coefficients. What I'm about to do is quickly tell you how to compute them combinatorially. For those of you who already know it, you can just take a break. For those of you who don't, and in case I go too quickly on this particular slide, the main point I want to get across is that you could learn this rule in ten minutes if you had a quiet moment to yourself — and I will be using it a little bit later. So, just logically speaking, I want to tell you that there exists a rule and that you can compute with it; later on, you'll be able to deduce the things I say from this particular page. Just for pedagogical reasons, I wanted to say that.

All right. The Littlewood-Richardson rule is a counting rule: it says that c^ν_{λ,μ} counts the number of semistandard Young tableaux T of skew shape ν/λ, with μ_i many i's, that are ballot. An example: take λ = (3,1), μ = (2,1), ν = (4,2,1). The question I want to solve is how many times the middle guy, ν, appears in the tensor product of the outer guys.

Okay, so how does the rule work? You take the big guy, ν, and you take away λ on the inside — so you forbid some parking spots. And what μ tells you is how many ones and twos you can throw into the empty spots: because there are two boxes in the first row of μ, you can put in two ones, and because there's one box in its second row, you can put in one two. There are three possible fillings that you can put down under that weight constraint. But what's this ballot condition? The ballot condition says that when you read in traditional Chinese — which is down columns, and right to left — you always see at least as many ones as twos as you're reading. So T_1 and T_2 here are good, but the third one, B, is bad, because when you start reading traditional Chinese you see a two before you see any one. And finally, there's the semistandard condition, which is about weakly increasing rows and strictly increasing columns, and which is actually an irrelevant condition in this particular example. So the upshot is that in this example the multiplicity of ν in the tensor product of λ and μ is two, because of T_1 and T_2.

Okay. And if the rule seems a little bit complicated — my point is not to emphasize that it's complicated; it's that it is so simple, given that it performs what would seem to be a difficult operation like tensor decomposition.
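Since I claimed you could learn the rule in ten minutes, here is a brute-force implementation — a minimal sketch of my own, not from the talk (all names are mine). It enumerates the fillings of ν/λ with content μ and keeps the ones that are semistandard and ballot, reading down columns and right to left as described above; it reports 2 for the running example.

```python
from itertools import permutations

def lr_coefficient(lam, mu, nu):
    """Littlewood-Richardson rule by brute force: count ballot
    semistandard fillings of the skew shape nu/lam with content mu.
    Assumes the shape lam fits inside nu."""
    lam = list(lam) + [0] * (len(nu) - len(lam))
    # Boxes of the skew shape nu/lam ("forbid the parking spots in lam").
    boxes = [(r, c) for r in range(len(nu)) for c in range(lam[r], nu[r])]
    # mu dictates the multiset of entries: mu_i many i's.
    entries = [i + 1 for i, m in enumerate(mu) for _ in range(m)]
    if len(boxes) != len(entries):
        return 0
    count = 0
    for filling in set(permutations(entries)):
        T = dict(zip(boxes, filling))
        if semistandard(T) and ballot(T, boxes, len(mu)):
            count += 1
    return count

def semistandard(T):
    # Rows weakly increase left to right; columns strictly increase down.
    return all(T[(r, c)] <= T[(r, c + 1)] for (r, c) in T if (r, c + 1) in T) \
       and all(T[(r, c)] < T[(r + 1, c)] for (r, c) in T if (r + 1, c) in T)

def ballot(T, boxes, max_entry):
    # Read "traditional Chinese": down each column, columns right to left;
    # at every moment we must have seen #1's >= #2's >= #3's >= ...
    word = (T[b] for b in sorted(boxes, key=lambda rc: (-rc[1], rc[0])))
    seen = [0] * (max_entry + 1)
    for x in word:
        seen[x] += 1
        if x > 1 and seen[x] > seen[x - 1]:
            return False
    return True

print(lr_coefficient((3, 1), (2, 1), (4, 2, 1)))  # 2, via T_1 and T_2
```

As a sanity check, doubling the triple — lr_coefficient((6, 2), (4, 2), (8, 4, 2)) — also returns a positive number, an instance of the semigroup property I'll get to in a second.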
All right, what's the main question that pervades this particular talk? The question, naively stated, is: characterize when c^ν_{λ,μ} is positive. At first sight the question is rather stupid, because one answer could just be: well, you just told me an algorithm, and if it comes up with the empty set the answer is zero, and otherwise it's positive. Well, first of all, doing that is a rather inefficient way to compute the answer — but my point in this talk is not even to talk about matters of complexity; that's irrelevant to this particular conversation. What I want to show you is that this question is indeed an interesting question, and moreover that it has interesting answers.

How would you analyze this question? Well, the first thing you can find out about the Littlewood-Richardson coefficients, just as a set, is that they form a semigroup. That is: call a triple (λ, μ, ν) that produces a positive coefficient an LR triple. Then when you sum two LR triples entrywise, you get an LR triple. So the vectors (λ, μ, ν), concatenated, form a semigroup. And this you can derive from the previous slide — just from knowing that rule and thinking hard.

Okay, so with that said, let us define the LR semigroup to be simply the set of LR triples; it's a semigroup by the previous statement. Now the next one's a little bit tricky. I need to define, provisionally, the saturated LR semigroup, LR-sat. It sounds almost exactly the same, except you have this extra existential quantifier. What's that? Here λ, μ, ν are going to be rational partitions, meaning weakly decreasing sequences of nonnegative rational numbers. And (λ, μ, ν) is in the saturated LR semigroup if you can rescale it by a positive rational so that it lands in the LR semigroup. It's an odd definition, but it's important to this talk.

Okay, a fantastic theorem of Knutson and Tao says the following. It says that the LR coefficients are saturated. What it means is that the LR semigroup consists precisely of the lattice points of the saturated LR semigroup. And moreover, they both generate the same rational polyhedral cone. Rational polyhedral cone means that you have some matrix with rational entries — with 3n columns — such that the convex hull of the points of the saturated semigroup is exactly the solution set of the corresponding linear inequalities.

Having stated the Knutson-Tao saturation theorem, I think it's fair to ask why I needed to bring up the second definition at all, because in fact the two are equal. It's not just to make the historical point that Knutson and Tao made this contribution. It's that, in great generality, these two things may not be equal. So it's, let's say, a good feature of type A that they're equal; they can be different at the far end of the central dogma, and we'll see later where that happens.
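For the record, here is the saturation statement written out with indices, as I understand the boxed claim:

```latex
% Knutson-Tao saturation theorem: for partitions lambda, mu, nu,
\[
c_{N\lambda,\,N\mu}^{\,N\nu} > 0 \ \text{ for some integer } N \ge 1
\quad\Longleftrightarrow\quad
c_{\lambda,\,\mu}^{\,\nu} > 0 .
\]
% The forward direction is the hard one; the converse is just the
% semigroup property, since (N*lambda, N*mu, N*nu) is the N-fold
% entrywise sum of (lambda, mu, nu) with itself.
```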
Okay. I promised you that the question of determining when the LR coefficient is positive has interesting answers, and this slide is about that. Remember, a complex matrix is Hermitian if it's equal to its conjugate transpose, and the spectral theorem says that all Hermitian matrices are diagonalizable by conjugation by a unitary matrix and have real eigenvalues. The eigenvalue problem, which goes back to the nineteenth century, asks the following question. Suppose I have three Hermitian matrices A, B, and C, and I impose the condition A + B = C. How does that constrain the possible eigenvalue sequences λ, μ, ν that can appear?

The idea is that I'm going to think of these as real partitions — instead of rational partitions or integer partitions, real partitions. And here I'm actually imposing that the values are nonnegative. That's not actually a constraint of the theory — you can consider the general case — it's just that I want to compare with partitions, which makes my presentation a little easier. So the general question is: A, B, and C are Hermitian matrices with A + B = C; what are the possible eigenvalues we get? And let's define the eigencone to be precisely those real partitions (λ, μ, ν) for which there exist such matrices A, B, and C having λ, μ, ν as their respective eigenvalues.

The eigenvalue problem was first solved by Klyachko, and one of his theorems is the following. His theorem says that LR-sat — rather, the rational polyhedral cone that it generates — and the eigencone are in fact the same thing. And it's a spectacular theorem, because if you haven't seen these sorts of things before: what would tensor product multiplicities have to do with eigenvalues of Hermitian matrices under some weird addition constraint? This is a theorem that I think is just spectacular, and it's so spectacular that I want to share it with you through an example. I want to actually see that it really works.

So let λ, μ, ν be the three partitions (4,1), (3,1), and (6,3). You can check, using the Littlewood-Richardson rule, that the first tensor the second has the third as a constituent. Therefore (λ, μ, ν) is in the LR semigroup, which means it's in LR-sat — and in fact this containment is an equality at lattice points, due to Knutson-Tao, but we don't need that for the moment. What Klyachko is saying, therefore, is that there must exist two-by-two Hermitian matrices with eigenvalues (4,1), (3,1), and (6,3) such that A + B = C.

Okay, so let's try to solve for A, B, and C. First of all, we can conjugate by a unitary to assume that C is diagonal, of this form. What I have on the left side here is an arbitrary two-by-two Hermitian matrix A, and then B is determined — and is Hermitian — given the fact that A + B = C. So what are the entries a, b, and c? You can use the facts that the trace of a matrix is the sum of its eigenvalues and the determinant of a matrix is the product of its eigenvalues. When you do that — and I did — you work out two equations that you can solve and back-substitute, and you get a complete parametric solution in terms of an argument θ, this way. And when you conjugate by whatever unitary you want, you get all possible solutions for A, B, and C. This is just to say that in the two-by-two case you have complete control of the problem: given the eigenvalue data, you can just figure out what the actual solutions are.
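Here is a quick numerical check of that parametric solution — my own sketch in Python, with the back-substitution carried out as described (trace = sum of eigenvalues, determinant = product of eigenvalues), which gives a = 11/3, c = 4/3, and |b|² = 8/9, with a free phase θ:

```python
import numpy as np

# Parametric solution to A + B = C with C = diag(6, 3),
# eigenvalues of A = (4, 1) and of B = (3, 1).  Solving
# tr A = 5, det A = 4, det(C - A) = 3 yields a = 11/3 and
# |b|^2 = 8/9; theta is the free phase.
theta = 0.7                               # any angle works
b = (np.sqrt(8) / 3) * np.exp(1j * theta)

A = np.array([[11/3, b], [np.conj(b), 4/3]])  # Hermitian by construction
C = np.diag([6.0, 3.0])
B = C - A                                     # forced, and Hermitian

for name, M, expected in [("A", A, [1, 4]), ("B", B, [1, 3]), ("C", C, [3, 6])]:
    eigs = np.linalg.eigvalsh(M)              # ascending order
    print(name, eigs)
    assert np.allclose(eigs, expected)
```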
The question Klyachko's theorem is about is: how do you know which λ, μ, ν give you a system of equations with a solution? And I must confess that, having thought about it a little bit, it wasn't so obvious to me that I would get his answer — first of all, that the answer would be linear inequalities at all — even in this small case.

So what is his general solution? Let me make a definition. For subsets I, J, K of {1, ..., n} of the same size d, the Klyachko inequalities are those of the form

  ∑_{k ∈ K} ν_k ≤ ∑_{i ∈ I} λ_i + ∑_{j ∈ J} μ_j,

where you sum the entries of the eigenvalue sequences ν, λ, μ over the subsets K, I, and J. And what are the I, J, K? They're the subsets of {1, ..., n} of the same size for which a certain smaller Littlewood-Richardson coefficient, c^{τ(K)}_{τ(I),τ(J)}, is positive. That smaller Littlewood-Richardson coefficient is defined in terms of these taus, and the taus are just: when you take a subset, you draw a lattice path, as you teach your undergraduate combinatorics students, and that gives you a partition. So his characterization is like this: it's about knowing when something is in LR-sat by knowing when something else is an LR triple. Okay, so it's not quite recursive. And his theorem, of course, is that LR-sat — and therefore also the eigencone of Hermitian matrices — is exactly described by his inequalities. Precisely described.

Now, before Klyachko's theorem, there was a conjecture by Horn in the 1950s, and Horn's conjecture was also exactly trying to describe the eigencone. His inequalities were exactly the same — the same ones I have boxed here — except that Horn's inequalities do not talk about Littlewood-Richardson coefficients. He substitutes for that condition by saying: I take the same taus, τ(I), τ(J), τ(K), but I ask whether those are the eigenvalues in the d-by-d version of the Hermitian matrix problem.

So what Klyachko's result, combined with the earlier theorem of Knutson and Tao, says is that Horn's inequalities are true. Why? Because Klyachko says that the eigencone equals the cone of LR-sat; Knutson and Tao say that the lattice points of LR-sat are exactly LR; and Klyachko's inequalities are in terms of ordinary LR positivity. So by a trivial induction, once you know these two theorems, you've deduced Horn's inequalities.

Okay, I think this is a good time to pause. This is the end of the conversation about GL_n, and when we come back, I'll talk about the NL situation.