So, I think symmetric polynomials are so basic that many people just invent them for themselves before actually reading about them. For example, I think they first appear when you study quadratic equations in school. Say x1 and x2 are the roots of x^2 + a x + b. You can of course write down the roots explicitly, but it is much more convenient to work without them. For example, to compute x1^2 + x2^2 you could substitute the root formulas, and then a lot of cancellation will happen, but it is much cleverer to write x1^2 + x2^2 = (x1 + x2)^2 - 2 x1 x2. Then you know by Vieta's theorem that x1 + x2 = -a and x1 x2 = b, so you plug in and get that this equals a^2 - 2b. Okay. And you notice that you can do a similar thing whenever the expression is symmetric in x1 and x2. So that is pretty basic, but it gets more interesting when you try to do the same thing with three variables. For example, say x1, x2, x3 are the roots of x^3 + a x^2 + b x + c, and I want to compute x1^2 + x2^2 + x3^2 in terms of the coefficients. I write down the complete square and then I have to subtract the cross terms: x1^2 + x2^2 + x3^2 = (x1 + x2 + x3)^2 - 2(x1 x2 + x2 x3 + x1 x3), and again I get a^2 - 2b. So the formula is the same. Wow. This is the first observation that leads to the notion of symmetric polynomials. The main principle of symmetric polynomials is that identities between symmetric polynomials should not depend on the number of variables. I hope it is clear from this illustration.
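This opening computation is easy to spot-check numerically. The following sketch (the function name is mine, not the lecturer's) builds the coefficients a and b from chosen roots via Vieta and verifies that a^2 - 2b equals the sum of squared roots in both the quadratic and the cubic case.

```python
# Sanity check of x1^2 + ... + xn^2 = a^2 - 2b, where -a is the sum of the roots
# and b is the sum of their pairwise products (Vieta's theorem).
from itertools import combinations

def sum_of_squares_from_coeffs(roots):
    a = -sum(roots)                                   # coefficient of x^(n-1)
    b = sum(x * y for x, y in combinations(roots, 2)) # coefficient of x^(n-2)
    return a**2 - 2 * b

assert sum_of_squares_from_coeffs([3, 5]) == 3**2 + 5**2          # quadratic case
assert sum_of_squares_from_coeffs([2, -1, 4]) == 4 + 1 + 16       # cubic: same formula
```

The point of the example is exactly the lecture's principle: the same expression a^2 - 2b works regardless of the number of roots.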
Now I want to define symmetric polynomials; it will be a little more boring now because we will write some definitions. A polynomial p in variables x1, ..., xn is called symmetric if it stays the same under any permutation of the indices: p(x_sigma(1), ..., x_sigma(n)) = p(x1, ..., xn) for any bijection sigma from the set {1, ..., n} to itself. So we have this definition. Here we fix the number of variables; later we will make clear what we mean by an identity not depending on the number of variables, but first let's work with a fixed number of variables. The first thing you discover is this: write the polynomial (t - x1)(t - x2)...(t - xn), expand, and collect the powers of t. We get t^n - e1(x1, ..., xn) t^(n-1) + e2(x1, ..., xn) t^(n-2) - ..., and Vieta's theorem says that these coefficients have quite simple forms. The first is just the sum, e1 = x1 + ... + xn; the second is the sum of all pairwise products, e2 = the sum over all i < j of xi xj; and in general ek is the summation over all choices of k indices i1 < i2 < ... < ik of the product x_i1 x_i2 ... x_ik. Notice that the last one, en, is simply the product of all the x's, and we just say that e_(n+1) and so on are 0. These are called the elementary symmetric polynomials, and the first theorem I want to prove is: Theorem 1. Every symmetric polynomial in n variables can be expressed in terms of the elementary symmetric polynomials in a unique way. More formally speaking, there is a bijection between symmetric polynomials and polynomials in the variables e1, e2, ..., en, but I prefer this more intuitive way of describing things.
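The elementary symmetric polynomials as just defined are easy to compute directly from the definition; a minimal sketch (the function name is an assumption of mine):

```python
# e_k(x1, ..., xn): sum over all k-element index subsets of the product of those x's,
# with the conventions e_0 = 1 and e_k = 0 for k > n.
from itertools import combinations
from math import prod

def elementary(k, xs):
    if k == 0:
        return 1
    if k > len(xs):
        return 0
    return sum(prod(c) for c in combinations(xs, k))

xs = [1, 2, 3]
assert elementary(1, xs) == 6    # x1 + x2 + x3
assert elementary(2, xs) == 11   # x1*x2 + x1*x3 + x2*x3
assert elementary(3, xs) == 6    # x1*x2*x3
assert elementary(4, xs) == 0    # e_{n+1} vanishes by convention
```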
Now the proof, and here I want to put some hashtags, like when you use Twitter: you say something and then you put a hashtag, #GrobnerBasis, #mathIsCool, something like that. It does not mean that I am going to explain what a Grobner basis is, but if you want to learn how to use a similar technique in other settings, you should Google this phrase. So the main idea of the proof is to take something called the main term. Whenever we have a symmetric polynomial p, it is naturally a sum of certain monomials: a sum of some coefficients times x1^i1 x2^i2 ... xn^in. The idea of the main term is to take the biggest possible (in a suitable sense) monomial from this sum, kill it, and repeat. So what is the main term? We list all tuples (i1, i2, ..., in) that appear in this summation and order them: first we list those of highest total degree (the total degree is just the sum of the i's), and among those we sort first by i1, choosing the largest, then by i2, and so on. The first term in the list is called the main term. Let's look at how it looks: it is x1 to some power i1 times x2 to some power i2 and so on, and i2 must be smaller than or equal to i1, right?
Because if I had i2 bigger: the polynomial is symmetric, so along with the tuple (i1, i2, ...) I also have the tuple (i2, i1, ...), and I would have chosen that one instead. So i2 <= i1, and so on: in the end the exponents of the main term form a non-strictly decreasing sequence. Now I want to subtract some product of elementary symmetric polynomials from my polynomial so that the main term goes down, becomes a smaller main term; then by induction I get that my polynomial is an expression in the elementary symmetric polynomials. How do I do this? I draw a picture: a row of i1 boxes, below it a row of i2 boxes, and so on; I get some partition. The trick is to count the boxes not in the horizontal but in the vertical direction, by columns. I get some numbers j1, j2, ..., jk, the column heights, and I take the product of elementary polynomials e_j1 e_j2 ... e_jk. For example, suppose the main term is x1^3 x2^2. The first column of the diagram has two boxes, so it gives me e2; the next column gives me e2 again; and the last single box gives me e1. If I start writing these things down: e1 will give me x1 plus something, one e2 will give me x1 x2 plus something, and the other e2 will give me x1 x2 plus something. So in this way I cooked up a product of elementary symmetric polynomials which begins with x1^3 x2^2, and I can always do it like this. So I subtract this product from my original polynomial and continue until I get 0. Do you want an example? Okay, why not, this should be basic after all. Let us do, say, x1^2 + x2^2.
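The column-counting trick is exactly taking the conjugate (transposed) partition; a small sketch (the function name is mine), matching the x1^3 x2^2 example above:

```python
# Conjugate of a partition: column heights of its diagram.  For the main term
# x1^3 x2^2, i.e. the partition (3, 2), the columns have heights 2, 2, 1,
# which tells us to take the product e2 * e2 * e1.
def conjugate(lam):
    # lam: a non-increasing tuple of positive integers
    return tuple(sum(1 for part in lam if part > col) for col in range(lam[0]))

assert conjugate((3, 2)) == (2, 2, 1)           # -> e2 * e2 * e1
assert conjugate(conjugate((3, 2))) == (3, 2)   # conjugating twice gives it back
```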
So what does my rule say? First, the main term is x1^2, whose diagram is a single row with two boxes; both columns have height one, so I have to take e1^2 = (x1 + x2)^2 and subtract it. This way I have e1^2 plus what is left, which is -2 x1 x2. That was step one: I represented my polynomial as some expression in elementary polynomials plus something else. On step two I again take the main term, which is x1 x2; this gives the picture with one column of two boxes, so I am supposed to take e2, and e2 is just x1 x2 itself. When I subtract, I don't have anything left, so x1^2 + x2^2 = e1^2 - 2 e2. Okay. So now I should make the principle that I stated in the beginning more precise; the hashtag here will be: limits in algebra, not in topology. The idea is that you have rings of symmetric polynomials in n variables for different n, and these should be somehow related. So let's say Sym_n is the ring of symmetric polynomials in n variables; I want to study them for different n, so let's take, for example, Sym_n and Sym_(n+1). The first thing I try is to build some maps between them. If I have a symmetric polynomial in n variables, maybe it is also a symmetric polynomial in n+1 variables? But that is not true: for example, x1^2 is a symmetric polynomial in one variable, but it is not a symmetric polynomial in two variables. Or I could take some symmetrization, but that is not cool either: it does not commute with products and so on. So instead I will construct a map in the opposite direction, and what I do is this: if I have a polynomial in n+1 variables, I can set the last variable equal to 0, and then it is symmetric in the rest, a symmetric polynomial in n variables. And this map is clearly an algebra homomorphism.
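Going back for a second: the result of the two-step reduction just performed, x1^2 + x2^2 = e1^2 - 2 e2, can be spot-checked numerically (a sketch; the helper name is mine):

```python
# Check the worked example of the algorithm: x1^2 + x2^2 = e1^2 - 2*e2.
from itertools import combinations
from math import prod

def e(k, xs):
    # elementary symmetric polynomial; e_0 = 1, e_k = 0 for k > len(xs)
    if k == 0:
        return 1
    return sum(prod(c) for c in combinations(xs, k)) if k <= len(xs) else 0

for x1, x2 in [(2, 5), (-3, 7), (0, 4)]:
    assert x1**2 + x2**2 == e(1, [x1, x2])**2 - 2 * e(2, [x1, x2])
```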
I mean, if I have a sum here, I get a sum there, and if there is a product here, it goes to a product, so this is an algebra homomorphism. And what I want to say is that we are interested in identities between symmetric polynomials which hold for all n, and this means: if I have some identity, I can consider it in n+1 variables, and if I set the last variable equal to 0, it should give me the same identity but for n variables. That is sort of the idea behind these limits in algebra. You want to have something like infinitely many variables, but it is not really infinity, because every identity will be finite, some finite combination of letters. And under the map that I constructed: remember, the theorem says that Sym_(n+1) is generated by the elementary polynomials, and so is Sym_n, and the map just sends e_(n+1) to 0. So e_(n+1) goes to 0; if you have some formula, you just kill everything with e_(n+1), and that is how it works. So, cool, now we have these elementary polynomials, but we cannot do much with them alone, because there are no relations between them. That is why we need some other polynomials, and the motivation is like this: take, for example, the sum of all monomials of some fixed degree in two variables; it is clearly symmetric, so I should be able to express it in terms of e1, e2, and so on. That is why I introduce the symmetric polynomials that generalize this.
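The restriction map just described is easy to check on the generators: setting the last variable to 0 fixes each e_k with k <= n and kills e_(n+1). A numeric sketch (the helper name is mine):

```python
# The restriction Sym_{n+1} -> Sym_n sets the last variable to 0.
from itertools import combinations
from math import prod

def e(k, xs):
    if k == 0:
        return 1
    return sum(prod(c) for c in combinations(xs, k)) if k <= len(xs) else 0

xs = [2, 3, 5]
for k in range(1, 4):
    assert e(k, xs + [0]) == e(k, xs)   # e_k is stable under appending a zero variable
assert e(4, xs + [0]) == 0              # e_{n+1} restricts to 0
```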
So h_k is by definition the sum of all monomials in x1, ..., xn of total degree k. These are called the complete homogeneous symmetric polynomials. (Note that h_1 = e_1: each is just the sum of the variables.) So the first question is how to relate the h's and the e's. Ah yes, and then there is another family: I can take something simple like the sum of the k-th powers of my variables, p_k = x1^k + ... + xn^k; this is called the power sum. So there should be relations between these three families, and to find these relations it is very convenient to use generating functions.

The first generating function is easy; it is for the elementary polynomials. Remember the definition: I obtained my elementary symmetric polynomials by opening the brackets in (t - x1)...(t - xn). It is more convenient to work with a slightly different object: I divide everything by t^n and replace t by 1/t. It is quite easy to see that you simply get (1 - t x1)(1 - t x2)...(1 - t xn) = 1 - e1 t + e2 t^2 - e3 t^3 + ... The good thing about this expression is that it somehow works for any number of variables: I can pretend that the formula holds for the product over all i of (1 - t xi). You may ask: what if this product does not converge? But in algebra we do not care whether the product converges, because each term can be computed purely on the level of symbols. If I want the coefficient in front of t^2, I know that I should take all possible products xi xj where i < j. So when I do this generating-function business, I do not care whether the series converges; it is just a formal series.

Okay, and then I want to do something else: I want to find the relation with the h's, the complete homogeneous symmetric polynomials. How do you build a complete homogeneous symmetric polynomial? By the definition, h_k collects all monomials of total degree k; so if I take all possible monomials and sort them by total degree, I get the sum of all the h_k's. This combinatorics is written very beautifully by the generating series: the product over all i of 1/(1 - t xi) equals 1 + h_1 t + h_2 t^2 + ... (I omit the x's; h_1 and so on are of course polynomials in the x's.) Why is this true? Because one factor in this product expands as 1/(1 - t x1) = 1 + t x1 + t^2 x1^2 + ..., and you see that it contains all monomials in x1; when I go to x2, I get all monomials in x2, and so on. When I open all the brackets in this infinite product, I get exactly all monomials, and in front of each monomial the power of t is the same as its total degree. So the formula holds, and this simple combinatorial principle gives the proof of a non-trivial identity: the product of this expression and the previous one is 1. That is, (1 + h_1 t + h_2 t^2 + ...)(1 - e_1 t + e_2 t^2 - ...) = 1. Let us find the term in front of t^k: it is h_k - h_(k-1) e_1 + h_(k-2) e_2 - ... + (-1)^k e_k, and this is 0, because on the right-hand side there is no t^k term when k > 0. And you see, this is very nice: you can compute h_k by induction if you know the e's, or, the other way round, you can express e_k in terms of the h's.

Now there is another principle, identity number two; here I want to obtain the power sum symmetric polynomials. I have this expression, a power series, and I can take the logarithm of it. So what will I get? We know that 1 + h_1 t + h_2 t^2 + ... equals this infinite product (or, if you have a finite number of variables, a finite product; remember, we do not care how many variables we have). Let's take the logarithm of both sides. The logarithm of a product is the sum of logarithms, so I have the sum over i of log(1/(1 - t xi)), and there is a power series expansion for that: the sum over i of the sum over j from 1 to infinity of (t xi)^j / j. Now I exchange the summation, and I have the sum over j of t^j / j times what is left, which is exactly the power sum, the sum of the j-th powers of the variables. It is more natural and more customary to write this with an exponential instead of a logarithm: 1 + h_1 t + h_2 t^2 + ... = exp(p_1 t + p_2 t^2 / 2 + p_3 t^3 / 3 + ...).

Okay, cool. Sometimes it is nice to have an explicit expression, and from here we can obtain a not-so-complicated expression for the h's in terms of the p's: you just have to write the exponential of the sum as a product of exponentials and then write the power series for each exponential; there will be some products of factorials and powers of some integer numbers, but let us not go into this. Here is the notation: I denote by lambda a non-increasing sequence of positive integers lambda_1 >= lambda_2 >= ..., I write p_lambda for the product p_(lambda_1) p_(lambda_2) ..., and I denote that integer (the product of factorials and powers) by m_lambda. Then h_k is the sum of p_lambda / m_lambda, where the summation is over all lambda whose size |lambda| = lambda_1 + lambda_2 + ... equals k. That is a convenient way to have it written, and these numbers m_lambda will be important later, if I get to that point.

Right, so now we have h, e, and p, three different families of symmetric polynomials. Cool, now what do I want to do? There is this thing we call an involution: an operation on symmetric polynomials which, when you apply it twice, gives back what you started with. Let's call it sigma. It is defined like this: whenever I see a complete symmetric polynomial, I replace it by (-1)^n times the elementary one, sigma(h_n) = (-1)^n e_n, and whenever I see an elementary one, I take (-1)^n times the complete one. Okay, now let's apply this operation to our different identities. Ah, but sorry, I am not allowed to simply decree both rules, because the h's are actually certain combinations of the e's. So first let us postulate only the rule for the h's, and then look at the generating-function identity. Suppose I know the h polynomials and I want to find the e polynomials in terms of the h's: I write the series 1 + h_1 t + h_2 t^2 + ... and take the inverse of it. Now I replace each h_i by (-1)^i e_i: I get the series 1 - e_1 t + e_2 t^2 - ..., and when I take the inverse of that, I get 1 + h_1 t + h_2 t^2 + ... back. So if my involution sends the h's in this way, then it automatically sends the elementary ones to (-1)^n times the complete ones, and from here we see that applying it twice is the identity. And what does it do to the power sums? Now we should look at the exponential formula. What happens when we replace the h_i's by (-1)^i e_i's? On the left-hand side I get one divided by what I had before, so I obtain the correct result if I put a minus in front of everything in the exponential. This gives the proof that applying the involution to p_n yields -p_n. And this is very cool, because whenever you have some complicated identity between symmetric polynomials, you apply this involution and you get a new identity.

Next, what people do after they have these families h, e, and p: they introduce a scalar product on symmetric polynomials. To define this scalar product, we first should introduce yet another family, the monomial symmetric polynomials. They are defined like this: for every lambda as before, a non-increasing sequence, I take the monomial x1^(lambda_1) ... xm^(lambda_m), then I take all distinct permutations of it and add them up; in this way I obtain another family of symmetric polynomials. I needed them because I want to define the scalar product, and the scalar product by definition is this: writing h_lambda for the product h_(lambda_1) h_(lambda_2) ..., the scalar product of h_lambda and m_mu, where mu is another sequence, is 1 if lambda equals mu and 0 otherwise. A very simple definition. So I have this scalar product, but why is it good? I mean, why should I care about this scalar product? One thing I can do first is compute scalar products in the other bases; and note that the h_lambda and the m_lambda are indeed both bases. Without knowing that they are bases, the definition would be horrible to use: for any two symmetric polynomials I would have to express the first one in terms of the h's and the second one in terms of the m's and then use the definition to compute the scalar product. For the proofs of different identities involving the scalar product I need the idea of, I think it is called, reproducing kernels.

So here is the principle. Whenever we have a scalar product, say on some space V of polynomials, some finite- or infinite-dimensional space of polynomials in x1, ..., xn, then for every f and g in the space the scalar product of f and g is defined; it is just bilinear, and suppose it is non-degenerate. What does non-degeneracy give us? If I have a basis f_1, f_2, ..., then it makes sense to find the dual basis: there exists a basis g_1, g_2, ... such that the scalar product of g_i and f_j, for any i and j, is 0 if i is not j and 1 otherwise. From this point of view, my definition says: let the scalar product be such that the complete and the monomial polynomials form bases dual to each other. Now, what is the reproducing kernel principle? It says: write the following thing. I take f_i in the x variables, and now I use a different set of variables y1, ..., yn and put g_i in this different set of variables, and I sum over all dual pairs: K(x, y) = the sum over i of f_i(x) g_i(y), which lives in two sets of variables. The principle says that this K does not depend on the choice of basis. This is in fact some kind of triviality; I mean, it looks complicated, but it is not. To K there is an associated map: for any psi, I take K(x, y), expand it as a sum of monomials in y, and pair each one with psi using the scalar product; you get a polynomial in x. So it is some kind of contraction operation. Now, if K is a reproducing kernel as above, then this map from psi to this complicated contraction is just the identity map; you have to use the non-degeneracy of the pairing to show this. And since the map is the identity for any choice of basis, K itself does not depend on the choice of basis. (There is some work to be done here, but okay.)

So now let's exploit this idea: let's compute the reproducing kernel of our scalar product. By the definition, I should take the sum over sequences lambda of h_lambda in the x variables times the monomial m_lambda in the y variables. How do I compute such a thing? Maybe we should first guess the answer and then see that it is true. The answer is this: you have to take the product over all pairs i and j of 1/(1 - xi yj). Now, how can you possibly see that this is equal to that sum? For this you should ask yourself what stands in front of a given monomial in y, that is, which polynomial in x occurs in front of, say, y1^(lambda_1). Where can y1 come from? y1 comes only from the factors 1/(1 - y1 x1), 1/(1 - y1 x2), ..., 1/(1 - y1 xn); that is the only way y1 can appear. By the identity we had for the generating series of the complete homogeneous symmetric polynomials, the coefficient in front of y1^(lambda_1) in the product of these factors is exactly h_(lambda_1) of the x's. And then the same happens for y2, for y3, and so on, so in front of the monomial y1^(lambda_1) y2^(lambda_2) ... we will have h_lambda(x), and that is why this identity is true.

Okay, now I am almost finished, I think five minutes. Let's try to use this principle, because this is how you will see its power: let's expand our reproducing kernel, this product, in some other way. How can we do it? Let's take the logarithm of it, so I want to write the kernel as the exponential of its logarithm, and the logarithm formula we already had. It will be the sum over i and j (they are indexing my variables) of the sum over m of (xi yj)^m / m: for each i and j I have this formula for the logarithm, and that is how it works. Now let's exchange the summation: I have 1/m, and here I have the sum over all i and all j of (xi yj)^m. How do I put it better: it means that I have a power sum symmetric polynomial, but evaluated at all products, x1 y1, x1 y2, x2 y1 and so on, all pairs. But remember what I said before: we had that the exponential of this kind of sum is simply the sum over all sequences lambda of p_lambda (the product of power sums) divided by m_lambda (that multiplicity, the integer number which is a product of factorials and powers). Here it is evaluated at all pairs, but it is not hard to see that whenever you have a power sum symmetric function evaluated at all products xi yj, it is the same as the product of power sums: p_m at all pairs equals p_m(x) p_m(y). So I get that the kernel is the sum over lambda of p_lambda(x) p_lambda(y) / m_lambda.

From here you see what the dual basis to the p basis is: the basis p_lambda / m_lambda must be dual to the basis p_lambda. Therefore the scalar product of two different power sum symmetric functions is zero: the scalar product of p_lambda and p_mu is 0 if lambda is not mu, and it is m_lambda if lambda equals mu. Right, I don't have much time left. So you see that the p_lambda form essentially an orthogonal basis, which is a very nice thing to have for a scalar product. In fact, my experience shows that when doing some tedious computations it is always easier to work with power sum symmetric functions. But the problem with them is that they are not very nice integrally: if you have a symmetric function with integer coefficients, then it will still have integer coefficients if you expand it with respect to the elementary symmetric functions, or the monomial symmetric functions, or the complete homogeneous symmetric functions, but with respect to the power sum symmetric functions you will have some denominators. So for computations it is a very good basis, but for some insights and for geometry it is not very good. And then there is another basis, the most interesting one, which is both orthogonal and integral: every symmetric function has integer coefficients with respect to this basis. This is the Schur basis, and with Schur functions you have lots of interesting identities with determinants; they are defined using some kind of generalized Vandermonde determinants, and they are not very easy, but they are very nice. Sorry, I have to stop here; unfortunately I did not have time for examples.
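Although the lecture stops before examples, the main identities above lend themselves to quick numerical spot checks. The sketch below (all helper names are mine, not the lecturer's) verifies the h/e cancellation identity, a Newton-type recursion k*h_k = p_1 h_(k-1) + ... + p_k (which follows from differentiating the exponential formula, a step not taken in the lecture), and a degree-truncated version of the kernel identity with two variables on each side.

```python
# Spot checks of:
#   (1) h_k - h_{k-1} e_1 + ... + (-1)^k e_k = 0            for k > 0
#   (2) k*h_k = p_1 h_{k-1} + p_2 h_{k-2} + ... + p_k       (from the exp formula)
#   (3) prod_{i,j} 1/(1 - x_i y_j) = sum_lambda h_lambda(x) m_lambda(y)  (truncated)
from itertools import combinations, combinations_with_replacement, permutations
from math import prod

def e(k, xs):
    if k == 0:
        return 1
    return sum(prod(c) for c in combinations(xs, k)) if k <= len(xs) else 0

def h(k, xs):
    # sum of all monomials of total degree k = sum over size-k multisets of variables
    return sum(prod(c) for c in combinations_with_replacement(xs, k)) if k > 0 else 1

def p(j, xs):
    return sum(x**j for x in xs)

xs = [2, 3, 5]
for k in range(1, 6):
    assert sum((-1)**i * h(k - i, xs) * e(i, xs) for i in range(k + 1)) == 0   # (1)
    assert k * h(k, xs) == sum(p(j, xs) * h(k - j, xs) for j in range(1, k + 1))  # (2)

def partitions(n, largest=None):
    # all non-increasing tuples of positive integers summing to n
    largest = n if largest is None else largest
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

def m_poly(lam, ys):
    # monomial symmetric polynomial: sum over distinct permutations of the exponents
    if len(lam) > len(ys):
        return 0.0
    padded = tuple(lam) + (0,) * (len(ys) - len(lam))
    return sum(prod(y**a for y, a in zip(ys, ex)) for ex in set(permutations(padded)))

fx, fy = [0.1, 0.2], [0.15, 0.05]           # small values so the series converges fast
lhs = prod(1.0 / (1.0 - x * y) for x in fx for y in fy)
rhs = sum(prod(h(part, fx) for part in lam) * m_poly(lam, fy)
          for d in range(13) for lam in partitions(d))
assert abs(lhs - rhs) < 1e-10               # (3), up to the truncation tail
```

Identity (3) is checked with floating-point numbers and a degree cutoff, so the comparison uses a tolerance; with the chosen values the neglected tail is far below it.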