All right, so we were talking about automorphic forms on PGL(n). Let me briefly recall what we did last time. Our group G is PGL(n), and we have three important subgroups N, A, and K. We talked about the set D of G-invariant differential operators on the generalized upper half-plane. We had the Harish-Chandra c-function, which measures the spectral density. And we talked about Whittaker functions: Whittaker functions are eigenfunctions of all the differential operators, and we had some formulas for Whittaker functions, and so on.

So the first thing I would like to start with today is the Fourier expansion. Well, there are several possible Fourier expansions, depending on which group you want to Fourier-expand on, and one of the most common versions is to expand with respect to N. Fourier expansion is always easy if the group is abelian. If n = 2, then N is abelian. But if n is not 2, then N is not abelian; it only has a smaller abelian part, sitting on the first off-diagonal, and everything else is a bit harder to treat. That's why the Fourier expansion looks a bit complicated. If you have a Maass form φ, then it has an additional sum over a lower-dimensional group, γ ∈ N_{n−1}(ℤ)\SL(n−1, ℤ), where N_{n−1} is the unipotent upper triangular group of dimension n − 1, and then a sum over the coefficients m_1, …, m_{n−1}, all different from 0. I'm being a bit sloppy with signs here; there are plus and minus signs that I'm going to ignore:

  φ(z) = Σ_{γ ∈ N_{n−1}(ℤ)\SL(n−1, ℤ)} Σ_{m_1, …, m_{n−1} ≠ 0} [ A(m_1, …, m_{n−1}) / ∏_j |m_j|^{j(n−j)/2} ] · W_μ( M · diag(γ, 1) · z ).

The coefficients A(m_1, …, m_{n−1}) are normalized in such a way that they have absolute value 1 on average. Again, there should be some absolute value signs, perhaps, but I ignore them. So m_j occurs with a certain power, and the power is j(n − j)/2; it's not so important, it's just a normalization. Then a Whittaker function: the Whittaker function has index μ, the spectral parameter of the cusp form φ, then a diagonal matrix M that I'm going to write down in a second, then a matrix that contains γ in the upper left (n−1) × (n−1) block and a 1 in the lower right corner, and then the point z. And M is the diagonal matrix

  M = diag( m_1 m_2 ⋯ m_{n−1}, …, m_1 m_2, m_1, 1 ).

Question: so you mean the product? So here this is a matrix product, this entry is a product of the m_j's, and this is just a single entry. A single entry, yes. So this is a diagonal matrix.

OK, these are the Fourier coefficients: A(m_1, …, m_{n−1}) is what we call Fourier coefficients, and they are related to the Satake parameters and to Hecke eigenvalues. In fact, we'll see later that they are Schur polynomials in the Satake parameters. And you can recover the Fourier coefficient by integrating the form φ against a character of N:

  A(m_1, …, m_{n−1}) / ∏_j |m_j|^{j(n−j)/2} · W_μ(M z) = ∫_{N(ℤ)\N(ℝ)} φ(u z) e( −m_1 u_1 − ⋯ − m_{n−1} u_{n−1} ) du,

where u_1, …, u_{n−1} are the entries of u on the first off-diagonal. So you take a character of this group and integrate it against the Maass form, and this produces the Fourier coefficient.

OK, so now I would like to speak a bit about Hecke operators and develop a bit of Hecke theory on PGL(n). Let p be a prime; we fix a prime once and for all and consider Hecke operators at p. Hecke operators are parametrized by double cosets, and the typical representative is a diagonal matrix with p-power entries. So we fix an n-tuple of exponents.
And for notational simplicity, we write capital A (it's not the group A from before, it's just a name) for the diagonal matrix A = diag(p^{a_1}, …, p^{a_n}). And Γ is always SL(n, ℤ). We decompose this double coset into single cosets, a disjoint union:

  Γ A Γ = ⊔_j Γ α_j.

OK, and to each double coset we can associate a Hecke operator; I call it T_A(p), or simply T_A. It maps a function f to the function obtained by summing over all these coset representatives applied to the argument:

  (T_A f)(z) = Σ_j f(α_j z).

This is obviously well-defined, and again Γ-invariant, because this was a double coset. And it's clear that T_{(a, a, …, a)} is just the identity, because then you have a matrix with all entries the same. It is also permutation invariant: T_A is the same as T_{σ(A)} for all σ in the Weyl group S_n. And the adjoint of T_A is T_{−A}.

In particular, without loss of generality I can normalize A: first I can order the entries, say in descending order, and then I can make sure that the last one is 0 simply by scaling. So without loss of generality a_1 ≥ a_2 ≥ ⋯ ≥ a_n = 0. I write |A| for the sum of the a_j. And then I define something that I think of as the volume of such an n-tuple: if the entries are ordered in descending order, I define v(A) = Σ_j j · a_j. Here the ordering is relevant, of course: the first entry gets weighted just a little, and the last one gets weighted a lot. All right, and then I define the Weyl orbit W(A) to be the orbit of A under the Weyl group, so all σ(A) with σ ∈ W.

OK, one of the main problems in every Hecke algebra is the multiplication of two Hecke operators: if you multiply two Hecke operators, you want to write the product as a linear combination of other Hecke operators. That's combinatorially possible, but a very complicated thing to do. Multiplication is complicated. But of course, we like to do complicated things. If you multiply two Hecke operators associated with two double cosets, then you can write

  T_A T_B = Σ_D α_D T_D

for certain matrices D. That's clear. But the question is: for which D is the coefficient α_D non-zero, and what is it? Question: can you compute it? I mean, of course it's determined; if I give you two tuples A and B, can you compute α_D? Well, you can write down an abstract formula, but it helps very little. Only those D appear for which the double coset Γ D Γ is contained in Γ A Γ · Γ B Γ, and that's something you don't want to compute. And then α_D is given as the number of pairs (j, k) such that Γ α_j β_k = Γ D, where the α_j and β_k are the given coset representatives. So in principle you can compute it, but it's very complicated. Algebraic combinatorialists have worked on this a lot, and there is something called Young tableaux: if I give you two tuples A and B, you can write them in a certain tableau and then find out which of the D's occur. This is the Littlewood-Richardson rule. OK, so this is combinatorially challenging.

Hecke operators, of course, commute with the differential operators in D, because these operators are G-invariant; they commute with each other; and they are normal with respect to the usual inner product.
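To keep the normalizations straight, here is a minimal sketch of the three quantities just defined: |A|, v(A), and the Weyl orbit W(A). The code is Python and the function names are mine, not the lecture's:

```python
from itertools import permutations

def abs_A(a):
    """|A|: the sum of the exponents a_j."""
    return sum(a)

def v(a):
    """v(A) = sum_j j * a_j, for A in descending order (j is 1-based)."""
    assert list(a) == sorted(a, reverse=True), "v(A) assumes descending order"
    return sum(j * a_j for j, a_j in enumerate(a, start=1))

def weyl_orbit(a):
    """W(A): the orbit of the n-tuple A under the Weyl group S_n."""
    return set(permutations(a))

print(abs_A((2, 1, 0)), v((2, 1, 0)))   # 3 4
print(weyl_orbit((1, 1, 0)))            # {(1, 1, 0), (1, 0, 1), (0, 1, 1)}
```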
And I denote the eigenvalue of T_A on a Maass form f by λ_A(f), or perhaps λ_A(p, f), if you want to emphasize the prime.

OK, I said earlier that Schur polynomials play a role, and for those of you who haven't seen Schur polynomials, let me define what they are. If you have an n-tuple A of natural numbers, including 0, in descending order, a_1 ≥ ⋯ ≥ a_n ≥ 0, we define the Schur polynomial S_A(x), a polynomial in the n variables x_1, …, x_n, to be a ratio of two determinants:

  S_A(x) = det( x_i^{a_j + n − j} )_{i,j} / det( x_i^{n − j} )_{i,j}.

So in the numerator the first column contains x_1^{a_1 + n − 1}, …, x_n^{a_1 + n − 1}, then you decrease the powers column by column, and the last column is x_1^{a_n}, …, x_n^{a_n}. This is one determinant. And you divide by the Vandermonde determinant, where the powers run from n − 1 down to 0, so the last column is just ones.

OK, so this thing is the Schur polynomial associated with the tuple A. It's obviously a symmetric polynomial, of degree |A|. And it turns out that the Fourier coefficients that I defined up there are just Schur polynomials in the Satake parameters. So (I need a bit of space for this)

  A(p^{k_1}, …, p^{k_{n−1}}) = S_{(k_1 + ⋯ + k_{n−1}, k_1 + ⋯ + k_{n−2}, …, k_1 + k_2, k_1, 0)}(α):

the first entry is k_1 + ⋯ + k_{n−1}, then you remove a term step by step, so the second-to-last entry is just k_1, and the last entry is 0 (not 1, by the way). So this is obviously in descending order. And α is the n-tuple of Satake parameters. In particular,

  A(1, …, 1, p) = S_{(1, 0, …, 0)}(α),

so only the first entry appears and everything else is 0, and this, as you can see from the definition, is just the sum of the α_j. So this is what is usually called the p-th Hecke eigenvalue.

OK, one way of trying to understand the Hecke algebra is to use a kind of spherical map, namely the Satake isomorphism, the Satake map. The Satake map attaches to each Hecke operator a polynomial, and it's a ring isomorphism. So multiplication becomes very easy, because you just multiply two polynomials, and everybody knows how to multiply two polynomials. The disadvantage is that it's hard to compute the image. It is useful to renormalize the Hecke operators; I renormalize them as follows:

  T̃_A(p) = p^{v(A)} · T_A(p).

OK, and now I can write down the map ω. The map ω sends T̃_A(p) to a symmetric polynomial in x_1, …, x_n with coefficients in ℤ[1/p], so rational coefficients with only powers of p in the denominator. And as I said, the image is very hard to compute: it depends on the explicit coset representatives, so you have to know the coset representatives explicitly and then do something with them. But it turns out that if you don't want to know exactly what the image is, just roughly what it is, then it's very easy:

  ω(T̃_A(p)) = S_A(x) + O(1/p),

up to a small error term, just the Schur polynomial. And by an error of size 1/p I mean a polynomial whose coefficients are bounded by 1/p. So if p gets large, then you can approximate the image under this map very well by the Schur polynomial. And in this notation, you can write down the eigenvalue.
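For those who want to see the bialternant formula in action, here is a small sanity check, a sketch in Python/sympy (my choice of tool, not the lecture's). It implements S_A(x) literally as the ratio of the two determinants and confirms, for n = 3, that S_{(1,0,0)} is the sum of the variables, matching the p-th Hecke eigenvalue example above:

```python
import sympy as sp

def schur(a, xs):
    """Schur polynomial S_A(x) as a ratio of two determinants:
    numerator has column powers a_j + n - j, denominator is the Vandermonde."""
    n = len(xs)
    num = sp.Matrix(n, n, lambda i, j: xs[i] ** (a[j] + n - 1 - j))
    den = sp.Matrix(n, n, lambda i, j: xs[i] ** (n - 1 - j))
    return sp.expand(sp.cancel(num.det() / den.det()))

x = sp.symbols('x1 x2 x3')
print(schur((1, 0, 0), x))   # x1 + x2 + x3: the sum of the Satake parameters
print(schur((1, 1, 0), x))   # x1*x2 + x1*x3 + x2*x3: symmetric, degree |A| = 2
```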
So the eigenvalue of f for the Hecke operator T_A: you take the unnormalized T_A(p), apply the map ω, and in the resulting polynomial you substitute p^{(n+1)/2} α, where α = α(f) are the Satake parameters of f:

  λ_A(p, f) = ω(T_A(p)) ( p^{(n+1)/2} α_1(f), …, p^{(n+1)/2} α_n(f) ).

So in particular we can guess how big λ_A(p, f) should be in this normalization. The Schur polynomial has bounded coefficients; I have renormalized by a factor p^{v(A)}, which contributes p^{−v(A)}, and this is a polynomial of degree |A| into which I plug p^{(n+1)/2}, which contributes p^{(n+1)|A|/2}. So the expectation is that

  |λ_A(p, f)| ≈ p^{−v(A) + (n+1)|A|/2}.

This is roughly the size that we expect.

OK. And to give you an explicit example of how one can actually work with these Hecke operators, I would like to show you how to construct an amplifier on GL(n). My aim is to produce a very explicit amplifier. The fact that an amplifier exists can be proved abstractly, and there is a beautiful treatment in a paper of Lior Silberman and Akshay Venkatesh; they give an abstract argument why an amplifier should exist and how you construct it. You can learn a lot from reading this. I would like to construct a very explicit amplifier. So see also Silberman-Venkatesh.

The whole idea of amplification is based on Hecke relations. We need a Hecke relation like λ(p)² = λ(p²) + 1; this is the prototype of a Hecke relation, and we want to have something similar in general. Obviously there must be some relations in this algebra, but I want to have something explicit. So let me first show you, in the language of the Satake map, what this looks like. OK, I have a bit of space left here, and then I need to clean the blackboard.

Under the Satake map, let's look at the case n = 2. What are the Hecke operators of interest for us? There is the Hecke operator T_{(1,0)}; that's really the p-th Hecke operator, and its image is (x_1 + x_2)/p. Then we have two degree-2 Hecke operators: T_{(2,0)}, and it turns out that its image is (x_1² + x_2² + x_1 x_2 (1 − 1/p))/p²; and T_{(1,1)}, which is simply the identity, and its image is x_1 x_2 / p³.

OK, and now we want to cook up an identity among these three operators, and we know what we have to look for: the sum of these two is the T_{p²} Hecke operator, and this is the T_p Hecke operator, so we want to square this one and sum those two, and then there should be an identity. And indeed, you can check that

  ω(T_{(1,0)}(p))² − ω(T_{(2,0)}(p)) − ω(T_{(1,1)}(p)) = x_1 x_2 / p²,

and that's just p times the image of the identity. And recalling that our Hecke operators, these T_p's that I defined here, are normalized a bit differently, we would expect the T_p eigenvalue to be roughly √p; if you square it, you get p, and you get precisely the Hecke relation that you expect.

OK, and now we want to do something similar in general. Any questions? Are the Fourier coefficients multiplicative? Yes, they are. Of course not completely multiplicative, but they are multiplicative; well, they are multiplicative if you have an eigenform of all Hecke operators.
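Since the n = 2 identity is easy to verify symbolically, here is a quick check, a minimal sketch in sympy (my choice of tool; the lecture mentions Mathematica for similar experiments). It takes the three images exactly as written on the board and confirms the relation, which in operator form reads T_{(1,0)}² = T_{(2,0)} + (p + 1)·Id:

```python
import sympy as sp

x1, x2, p = sp.symbols('x1 x2 p')

# Images under the Satake map for n = 2, copied from the board
# (images of the unnormalized operators T_A(p)):
w_10 = (x1 + x2) / p                               # T_(1,0), the p-th Hecke operator
w_20 = (x1**2 + x2**2 + x1*x2*(1 - 1/p)) / p**2    # T_(2,0)
w_11 = x1*x2 / p**3                                # T_(1,1), the identity

# Board relation: omega(T_(1,0))^2 - omega(T_(2,0)) - omega(T_(1,1))
# equals x1*x2/p^2, which is p times the image of the identity.
lhs = w_10**2 - w_20 - w_11
print(sp.simplify(lhs - x1*x2/p**2))   # 0
print(sp.simplify(lhs - p*w_11))       # 0, i.e. T_(1,0)^2 = T_(2,0) + (p+1)*Id
```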
And that's simply the Chinese remainder theorem: if you multiply two Hecke operators for different primes, then you can just multiply the diagonal matrices. So you can really treat one prime at a time.

OK, so let's do a bit of algebraic combinatorics. I define capital Π_n to be the set of all partitions of n, by which I mean the following: all n-tuples (a_1, …, a_n) in descending order, with non-negative entries, such that their sum equals n. So 0 must of course be allowed (some of the entries can be 0, and then some 1's and so on), and the sum has to be n. That's what I mean by Π_n. And I consider the vector space V of polynomials, with, say, complex coefficients (although this is completely irrelevant), in n variables, that are symmetric and homogeneous of degree n. So all symmetric polynomials of degree n in n variables, and it's obvious that the dimension is just the cardinality of Π_n. And I will construct a Hecke relation by looking at various bases of this vector space. So let's consider various bases.

The first basis is the obvious one: I take all the monomial symmetric polynomials

  Σ_{c ∈ W(A)} x^c,  A ∈ Π_n.

This generates V: these are obviously symmetric polynomials, they have the correct degree, and every polynomial in V is a linear combination of polynomials of that form. So this is sort of the canonical basis for this space.

Then I would like to look at the second basis, which is as follows: basically, I take the Schur polynomials as basis elements. For each A ∈ Π_n, I take the Hecke operator T_A, normalize it, and apply the Satake map. This gives me a polynomial, and I take the collection of these polynomials,

  { ω(T̃_A(p)) : A ∈ Π_n }.

And this is essentially the Schur polynomials, because each image is the Schur polynomial plus a small error. But of course, if this is a basis and I disturb the basis a little bit, it's still a basis. So I can really write an equality here: this is a basis.

To give you an example, in the case n = 4 we have the Hecke operators T̃_{(4,0,0,0)}, T̃_{(3,1,0,0)}, T̃_{(2,2,0,0)}, T̃_{(2,1,1,0)}, and T̃_{(1,1,1,1)}. So Π_4 has five elements; these are the partitions that we get, and the corresponding five Schur polynomials form a basis of the vector space.

OK, so the most interesting basis is the third one that I'm now going to define. I denote by [j] the n-tuple (j, 0, …, 0), and I look at the Hecke operator T̃_{[j]}(p). Its image under the Satake map is essentially just the sum over all monomials of degree j, plus a small error term. OK, and I claim that the following is a basis, and I will prove this; having proved that the third basis is a basis will automatically produce the Hecke relation. So this is probably the only proof that I'm going to include in this lecture series. I claim that V is generated by the following set: I take an n-tuple A ∈ Π_n with entries a_1, …, a_n, take the operators T̃_{[a_j]}(p), and multiply them together, j from 1 to n. So I claim that the collection of the polynomials

  ω( Π_{j=1}^n T̃_{[a_j]}(p) ),  A ∈ Π_n,

generates the space. So again, let's look at an example just to see what is happening here; it's very simple, just notationally a bit complicated. If n = 4, then these are the following Hecke operators (with a tilde everywhere): T̃_{(4,0,0,0)}; then the product T̃_{(3,0,0,0)} · T̃_{(1,0,0,0)}, corresponding to (3,1,0,0); then T̃_{(2,0,0,0)}², which corresponds to the tuple (2,2,0,0).
Then I have T̃_{(2,0,0,0)} · T̃_{(1,0,0,0)}², corresponding to (2,1,1,0), and T̃_{(1,0,0,0)}⁴, corresponding to (1,1,1,1). These are the five Hecke operators whose images I'm going to take. And you see, the important thing is that this collection does not contain the identity. So if this is a basis, then I can write the identity as a linear combination of these, and then I win. So what I have to show is that this is really a basis.

OK, here's the proof. We take a partition A. As I said somewhere up there, the image of T̃_{[a_j]} is the sum over all monomials of degree a_j. So I take the product,

  Π_{j=1}^n Σ_{|b| = a_j} x^b,

plus a small error of size O(1/p), which I ignore for a moment. And I just compute what this polynomial is. Well, it turns out that it equals

  Σ_{A′ ∈ Π_n} c_{A A′} Σ_{c ∈ W(A′)} x^c,

with some coefficient c_{A A′} that depends on A and A′. And what is this coefficient? c_{A A′} equals the number of matrices (c_{ij}) with non-negative integer entries such that Σ_i c_{ij} = a_j and Σ_j c_{ij} = a′_i; row sums or column sums, I don't remember which way around, but it's irrelevant, because the count is obviously symmetric. So if you prescribe A and A′, you have two n-tuples, and you construct all matrices with given row sums and given column sums: the vector of row sums should be the A-vector, and the vector of column sums should be the A′-vector, or the other way around.

OK, so this is the coefficient, and proving that this is a basis is equivalent to showing that the matrix (c_{A A′}), as A and A′ run over all pairs of partitions, is invertible. So I need to show that this matrix is invertible. If you state it like this, it's a perfect competition problem. You construct the matrix as follows: the matrix has dimension |Π_n|, and each entry is the number of matrices with given row and column sums. Show that the matrix is invertible. OK, if you are a good competition mathematician, then you can perhaps solve it on the spot. I'm not good at these problems, so I just did the first few cases by hand or by Mathematica to see if I could find a pattern. Show that the matrix (c_{A A′}), with A, A′ ∈ Π_n, is non-singular. I computed the determinant for n = 1, 2, 3, 4, 5, and it turned out the determinant is 1. So this cannot be a coincidence. So, more strongly: show that the determinant equals 1.

In the case n = 4, the matrix looks like this:

  1  1  1   1   1
  1  2  2   3   4
  1  2  3   4   6
  1  3  4   7  12
  1  4  6  12  24

Obviously it's symmetric; it has to be symmetric. Show that the determinant is 1. Well, in this case you can just compute that the determinant is 1, or you let Mathematica compute it. But how do you show this in general? I played around for a while. I computed eigenvalues; that doesn't help at all. But then I computed the Cholesky decomposition, and it turns out to have a very beautiful decomposition Aᵀ A, where A is triangular with 1's on the diagonal and then something funny: in this case the remaining entries are just 1, 1, 1, 1, 2, 3, 1, 2, 3. So obviously the determinant of this thing is 1. And this cannot be a coincidence; I mean, typically a Cholesky decomposition has square roots in it, and this is an integer matrix.
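If you want to try the competition problem yourself, here is a brute-force sketch (plain Python, with sympy only for the exact determinant; all function names are mine). It enumerates Π_n, counts non-negative integer matrices with prescribed row and column sums, and confirms that det(c_{AA′}) = 1 for the first few n:

```python
import sympy as sp

def partitions(total, parts):
    """Descending tuples of `parts` non-negative integers summing to `total`
    (the set Pi_n when total = parts = n)."""
    if parts == 1:
        return [(total,)]
    result = []
    for first in range(total, -1, -1):
        for rest in partitions(total - first, parts - 1):
            if rest[0] <= first:
                result.append((first,) + rest)
    return result

def count_matrices(rows, cols):
    """Number of non-negative integer matrices with the given row and column sums."""
    if not rows:
        return 1 if all(c == 0 for c in cols) else 0
    total = 0
    def first_row(i, remaining, row):
        nonlocal total
        if i == len(cols):
            if remaining == 0:
                total += count_matrices(rows[1:],
                                        tuple(c - r for c, r in zip(cols, row)))
            return
        for entry in range(min(remaining, cols[i]) + 1):
            first_row(i + 1, remaining - entry, row + (entry,))
    first_row(0, rows[0], ())
    return total

for n in range(1, 6):
    P = partitions(n, n)
    C = sp.Matrix([[count_matrices(a, b) for b in P] for a in P])
    print(n, C.det())   # the determinant is 1 in every case
```

For n = 4 the enumeration order of partitions matches the board, so C is exactly the 5-by-5 matrix above.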
And this cannot be a coincidence. So it must happen in general that this matrix has a Cholesky decomposition where these numbers count something. And OK, well, then I looked it up and I found what it is. Algebraic combinatorics people have known this for a long time: the triangular factor is the so-called Kostka matrix, and its entries are the Kostka numbers, which are well known in the theory of Schur polynomials. So it is well known that this matrix has this decomposition with an integral matrix with ones on the diagonal, and then everything is clear.

Oh, and by the way, all this time I ignored the error O(1/p). If I really work with the images of the Hecke operators themselves, it's a total mess; it clears up if you work with Schur polynomials, that is, if you let p go to infinity. So if you just look at sufficiently large primes: it turns out that this matrix, if you define it properly, really with the images of the Hecke operators, is still invertible, even for p = 2. But if you choose p large enough, then you only disturb the matrix a little bit, and it's certainly still invertible.

OK, so as a corollary: there exists a linear combination with bounded coefficients, which I call y_A, such that

  Σ_{A ∈ Π_n} y_A · Π_{j=1}^n T̃_{[a_j]}(p) = T̃_{(1,1,…,1)}(p),

which is a scaled version of the identity: it's p^{n(n+1)/2} times the identity, since v((1, …, 1)) = 1 + 2 + ⋯ + n. And as a corollary (and this is what you typically need for an amplifier): there exists 1 ≤ j ≤ n, so your amplifier has depth n, you have to go at most to the n-th power, just as in the case n = 2 you only need to go up to p², such that λ_{[j]}(p, f) is bounded from below by what you expect. This is the expectation: you really get

  |λ_{[j]}(p, f)| ≫ p^{−v([j]) + (n+1)j/2},

and you can compute what this is; since v([j]) = j, it is p^{j(n−1)/2}. Simply because here you have a sum of Hecke operators, and their eigenvalues cannot all be small if the sum equals a multiple of the identity. At least one of them must have a big eigenvalue; otherwise their sum cannot be the identity.

OK, any questions? All right. OK, so this was a little bit of a crash course on Hecke theory on PGL(n). And I want to spend the remaining 10 minutes on just briefly mentioning Eisenstein series. So far we have been talking about cusp forms most of the time, although this discussion is completely independent of whether your form is a cusp form or not; Hecke theory applies in the same way to Eisenstein series. So I would like to briefly mention Eisenstein series.

Question: so here the amplifier has efficiency 1/n? Yes. Is it a general feature, if you have a split group of rank n, that the amplifier would be 1 over the rank? So you have your amplifier, and you need to take the operators up to p to the power n; I think you can't get lower than that. Yeah, so I'm wondering whether one can achieve this for other groups, and what the relevant invariant would be: the rank? Yeah, I think it's the rank; no lower than 1 over the rank, in general, for general split groups. Maybe we'll talk about it afterwards. OK, so let me move to Eisenstein series.
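And the Cholesky observation for the board matrix can be checked directly (sympy assumed; the partition ordering is the one from the previous sketch):

```python
import sympy as sp

# The n = 4 matrix (c_{AA'}) from the board, partitions ordered
# (4,0,0,0), (3,1,0,0), (2,2,0,0), (2,1,1,0), (1,1,1,1).
C = sp.Matrix([
    [1, 1, 1,  1,  1],
    [1, 2, 2,  3,  4],
    [1, 2, 3,  4,  6],
    [1, 3, 4,  7, 12],
    [1, 4, 6, 12, 24],
])

L = C.cholesky()   # lower triangular L with C = L * L.T
print(L)           # integer entries, 1's on the diagonal: Kostka numbers
print(C.det())     # 1, since det(C) = det(L)^2 = 1
```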
Just a very brief discussion. So abstractly, they are attached to parabolic subgroups, and I just show you how it works for n = 3. So for n = 3; well, let me actually start with the case n = 2, because somehow this is an inductive process. Let me start with the decomposition of the L² space of the standard modular surface SL(2, ℤ)\H. This is the case n = 2, and it decomposes into the Eisenstein space, plus the cuspidal space, plus the constant function. And the constant function should be seen as a residue of Eisenstein series.

OK, and from this we can go one level up: there are three types of Eisenstein series for n = 3, corresponding to this decomposition. There is the minimal parabolic subgroup, the group that Akshay mentioned at the beginning of his talk (it measures the frequency of attendance in a lecture series). The corresponding Eisenstein series has a spectral parameter μ, just as a cusp form does, and a complex variable z, and it can be defined by

  E(z, μ) = Σ_{γ ∈ Γ_∞\Γ} I_μ(γ z)

(basically, this is Γ_∞ in the usual notation), where you take the power function I_μ that I also used to define Whittaker functions. So this is a very close analogue of the usual Eisenstein series on GL(2). It has Fourier coefficients that are essentially divisor functions; they are of the following form:

  A(m, 1) = Σ_{d_1 d_2 d_3 = m} d_1^{μ_1} d_2^{μ_2} d_3^{μ_3}.

This is the (m, 1)-st Fourier coefficient, and then by the Hecke relations you get all the others. The μ_1, μ_2, μ_3 are the three Langlands parameters; they add up to 0. So this is a generalized divisor function.

Question: what's I_μ(γ z) again? So I_μ is the power function. It only depends on the y-coordinate: you decompose the matrix in Iwasawa coordinates as x times y and take only the y-coordinate. And I_μ(y) is built from the y_j raised to powers that are linear in μ; in fact, it's not a sum, it's a product of such powers. And the exponents are chosen in such a way that the power function has the correct eigenvalues for all the differential operators: you just choose the powers appropriately to make sure that it's an eigenfunction of all the operators.

OK, so this is the minimal parabolic subgroup; then the maximal parabolic subgroup. And of course, in general there are many parabolics in between, but for n = 3 there are only the minimal and the maximal one. So this is the group with a 2-by-2 block in the upper left corner, and here you can insert a GL(2) cusp form u:

  E(z, s; u) = Σ_{γ ∈ P(ℤ)\Γ} det(γ z)^s · u(π(γ z)),

where P is this parabolic. You would like to write u(γ z), but this doesn't make sense, because γ z lives on GL(3) and u wants a GL(2) argument. So you first project to the upper left corner with the map π, which sends

  (1 x_2 x_3; 0 1 x_1; 0 0 1) · diag(y_1 y_2, y_1, 1)  ↦  (1 x_2; 0 1) · diag(y_2, 1).

So this makes sense. And the Fourier coefficients are twists of the Hecke eigenvalues of u; they are basically convolutions with the identity:

  A(m, 1) = Σ_{d_1 d_2 = m} λ_u(d_1) · d_2^{2s} · d_1^{−s}.

And the third type belongs to the constant function up there. So you can view it in two ways: either you put the constant function here in place of u, or you take a residue of a minimal parabolic Eisenstein series.
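The generalized divisor function is easy to play with numerically. Here is a tiny sketch (Python, using sympy's divisors; the normalization of A(m, 1) is copied from the formula above, so treat it as illustrative):

```python
from sympy import divisors

def A_min_parabolic(m, mu):
    """First Fourier coefficient of the GL(3) minimal parabolic Eisenstein
    series: a generalized divisor function, summing d1^mu1 * d2^mu2 * d3^mu3
    over ordered factorizations d1*d2*d3 = m."""
    return sum(d1**mu[0] * d2**mu[1] * (m // (d1 * d2))**mu[2]
               for d1 in divisors(m)
               for d2 in divisors(m // d1))

# mu = (mu1, mu2, mu3) with mu1 + mu2 + mu3 = 0; the degenerate choice
# mu = (0, 0, 0) recovers the classical triple divisor function d_3(m).
print(A_min_parabolic(6, (0, 0, 0)))   # d_3(6) = 9
```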
So that's a degenerate case: u is the constant function. What do you think, what are the Fourier coefficients in this degenerate case? Well, it's a stupid question: there are no Fourier coefficients, at least no non-degenerate ones. The only Fourier coefficients that exist for this degenerate case are those where at least one of the variables is 0; a coefficient A(m_1, m_2) with m_1 and m_2 both non-zero doesn't exist.

OK, any questions? OK, well, then I think that's a good point to stop. And next time, we'll do something more analytic and look at moments of L-functions. Questions?

Question: can you give a reference for the combinatorial part, the amplifier? Yeah, well, I hate to reference my own papers, but you find this in various preprints on the arXiv by myself and Péter Maga. And the whole Hecke business, Schur polynomials and things like that, you can find in the book by Macdonald (I hope I spelled that correctly); I think it's called Symmetric Functions and Hall Polynomials, published by Oxford University Press. This is where I learned all these things about Schur polynomials and so on. Comment from the audience: actually, the third chapter of Shimura's Introduction to the Arithmetic Theory of Automorphic Functions also covers this. Very nice; OK, so let's quote this too. OK.