Okay, thanks for coming. So, as I announced last time, the main application today, which will feature basically all the ingredients from the previous lectures, is a moment estimate for L-functions. In order to prepare for this, I would like to recall Stade's formula, which we had last time, or perhaps in the second lecture. This is one of the few analytic results that we have on higher-rank Whittaker functions. If I take the Whittaker function attached to the spectral parameter α, multiply it by the Whittaker function attached to β and by det(y)^s, and integrate against the Haar measure on the group A, then I get the product of gamma factors that you would expect from Rankin–Selberg theory: the product over 1 ≤ j, k ≤ n of Γ_R(s + iα_j − i β̄_k). This has to be taken with a grain of salt; I'm not sure the bars are all in the right places, but morally speaking this should be okay. It is divided by Γ_R(ns) times the product over 1 ≤ j < k ≤ n of Γ_R(1 + iα_j − iα_k) times Γ_R(1 + iβ_j − iβ_k), the latter factor barred. Okay, so we will use this a million times today. And what I would like to introduce now is Rankin–Selberg theory for GL(n), and this is also one of the few things that works exactly as in the GL(2) case. For general n, we consider the maximal degenerate Eisenstein series E(z, s). I defined this in the case n = 3, and in the general case it is exactly the same thing: the sum over γ mod P of det(γz)^s, where P is the maximal parabolic subgroup. And with this definition, the same unfolding works as in the GL(2) case. So if we consider the inner product of f·ḡ against this Eisenstein series, then this is by definition the integral over Γ\H of f ḡ E(·, s) dμ. And now we can do the usual unfolding.
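Written out, Stade's formula as just stated, a reconstruction carrying the speaker's own caveat that the placement of the bars may need adjusting:

```latex
\int_{A} W_{\alpha}(y)\,\overline{W_{\beta}(y)}\,\det(y)^{s}\,d^{*}y
 \;=\;
 \frac{\displaystyle\prod_{j,k=1}^{n}\Gamma_{\mathbb R}\bigl(s+i\alpha_j-i\overline{\beta_k}\bigr)}
      {\displaystyle\Gamma_{\mathbb R}(ns)\prod_{1\le j<k\le n}
        \Gamma_{\mathbb R}\bigl(1+i\alpha_j-i\alpha_k\bigr)\,
        \overline{\Gamma_{\mathbb R}\bigl(1+i\beta_j-i\beta_k\bigr)}},
 \qquad
 \Gamma_{\mathbb R}(s)=\pi^{-s/2}\,\Gamma\!\Bigl(\frac{s}{2}\Bigr).
```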
This is a sum over γ mod P, and this is an integral over Γ\H. By Γ-invariance we can unfold this to an integral over P\H of f(z) ḡ(z) det(z)^s dμ(z). And at this point I insert the Fourier expansion for one of these factors, say the first one. You remember the Fourier expansion was a little complicated: it had a sum over a lower-rank group outside, SL(n−1) modulo the unipotent subgroup. Here again we can collapse integral and sum, and this becomes an integral over H modulo the unipotent subgroup, together with a sum over the m's, the indices of the Fourier coefficients: A_m (so m is a vector), divided by the normalizing product of the m_j^{j(n−j)/2}, times the Whittaker function, times the exponential, where the x_j's are the first off-diagonal entries. So this is the Fourier expansion, with the additional sum collapsed into the integral, and the rest I just copy. At this point I can execute the integral over x. So if I integrate this over x, then by definition I just get back the Fourier expansion, so the Fourier coefficients of g. [Audience question.] Yes, I haven't yet performed the integration over x; that's now to come. So I integrate over x, and then I'm left only with the integral over A, which is the y-integration. Everything is killed by orthogonality of characters, and what is left are the Fourier coefficients B_m of g, divided by the normalizing product; the division by 2 in the exponent now disappears, because I get the normalizing product twice, once from A_m and once from B_m. And I get a Whittaker function attached to the form f and a Whittaker function attached to the form g. So recall M was the diagonal matrix having the m's and products thereof as entries. The x-part doesn't contribute to the determinant, so only the y contributes to the determinant, and we integrate against the Haar measure on the group A. Okay, and here we are in good shape.
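Schematically, and taking the lecture's normalizations on faith, the computation so far reads:

```latex
\bigl\langle f\,\bar g,\;E(\cdot,s)\bigr\rangle
 = \int_{P\backslash\mathbb H} f(z)\,\overline{g(z)}\,\det(z)^{s}\,d\mu(z)
 = \sum_{m_1,\dots,m_{n-1}\ge 1}
   \frac{A_m\,\overline{B_m}}{\prod_{j=1}^{n-1}m_j^{\,j(n-j)}}
   \int_{A} W_f(My)\,\overline{W_g(My)}\,\det(y)^{s}\,d^{*}y,
```

with M = diag(m₁⋯m_{n−1}, …, m₁, 1). The change of variables y ↦ M⁻¹y then extracts det(M)^{−s} = (m₁^{n−1}⋯m_{n−1})^{−s} and leaves exactly the Whittaker integral that Stade's formula evaluates.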
We can do a change of variables to remove this M, and then what we are left with is essentially the integral over two Whittaker functions. And this is precisely where Stade's formula is used. So this equals the Rankin–Selberg L-function L(s, f×ḡ) divided by ζ(ns), times the gamma ratio that comes from Stade's formula. Here the Rankin–Selberg L-function is defined as ζ(ns) times the sum over the m's of A_m B̄_m divided by (m_1^{n−1} m_2^{n−2} ⋯ m_{n−1}^{1})^s. The factor ζ(ns) is needed to have a nice Euler product, and that is why we divide by ζ(ns) here; as in the usual case, where you multiply by ζ(2s) in the Rankin–Selberg L-function in order to get a nice Euler product. So the last coordinate comes with exponent 1, and the earlier coordinates come with higher exponents. Okay, so group-theoretically this unfolding is slightly more complicated, but in principle it is the exact same idea and it gives the exact same result. So Rankin–Selberg is very robust; it works for general n. In particular, this Eisenstein series has a pole at s = 1 with constant residue. So if I take the residue at s = 1 and then take f = g, I just get the Petersson norm of f. Since the residue at s = 1 of the Eisenstein series is constant, we have that the Petersson norm is roughly the residue at s = 1 of Λ(s, f×f̄); well, times the gamma factors, so maybe I should write it like this: Λ is the completed L-function in my notation, and the completed L-function includes the appropriate gamma factors. Yes. Okay, any questions? Okay, so then let's move on to the application. Example: moments of L-functions. So this now has a real analytic number theory flavor on GL(n), and it is based on a paper of mine that was published perhaps two years ago. As a warm-up, let's do the case n = 2; that's the case we are all familiar with. Okay, and I want to prove the following theorem.
Take a classical holomorphic cusp form f of weight k, say of full level. Then we look at the following second moment: we sum, over a basis B_k of the space of cusp forms S_k, the squares of the central Rankin–Selberg values L(1/2, f×g)². (It doesn't matter whether I put a bar on g or not, because in level one everything is real, at least if we restrict to Hecke eigenforms.) Okay, so this sum has roughly k terms, and on the Lindelöf hypothesis we would expect the whole thing to be of size k. And this is what the theorem says: there is a bound of size k^{1+ε}. So this is as strong as Lindelöf on average, and it is on the edge of subconvexity: the convexity estimate is the square root of k, and if I drop all but one term, then I just get back the square root of k. So it is on the edge of subconvexity and Lindelöf on average. Okay, so how would you prove this? Well, if you are trained as a classical analytic number theorist, then the first thing you do in this situation is write down an approximate functional equation, open the square, and perform the sum over g using the Petersson formula. If you do this, then the following happens. Approximate functional equation plus Petersson formula leads to a double sum coming from the square: you open the square, so you get a sum over n and m, both roughly of size k, with coefficients λ_f(n) λ_f(m), and then there is a sum over c coming from the Petersson formula. It turns out that the sum over c is essentially bounded, so let me just take c = 1; then the Kloosterman sum is trivially equal to 1, it disappears from the formula, and only the Bessel function survives, and you get something like this. Yes, so this is the beast you would have to estimate, of size k: you have two sums of length k, so square-root cancellation would give you the desired bound k. But if you look at this, it doesn't look very nice, because both variables are of size k.
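Both the order and the argument of the Bessel function coming out of the Petersson formula are then of size about k, the transitional range. As a numerical illustration (not from the lecture, and only a sketch): in the transitional range x ≈ n, the Bessel function J_n(x) has size about n^{−1/3}, so there is essentially no decay to exploit. A stdlib-only check via the integral representation J_n(x) = (1/π)∫₀^π cos(nt − x sin t) dt:

```python
import math

def bessel_j(n: int, x: float, steps: int = 100000) -> float:
    """Integer-order Bessel function J_n(x) via the integral representation
    J_n(x) = (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt,
    evaluated with the composite trapezoidal rule."""
    h = math.pi / steps
    # endpoints: t = 0 gives cos(0) = 1, t = pi gives cos(n*pi)
    total = 0.5 * (1.0 + math.cos(n * math.pi))
    for i in range(1, steps):
        t = i * h
        total += math.cos(n * t - x * math.sin(t))
    return total * h / math.pi

# Transitional range x = n: asymptotically J_n(n) ~ 0.4473 * n**(-1/3),
# only a mild power of decay rather than the exponential decay one has
# when the argument is much smaller than the order.
for n in (10, 100, 1000):
    print(n, bessel_j(n, float(n)), 0.4473 * n ** (-1.0 / 3.0))
```

The two printed columns stay close together as n grows, illustrating the n^{−1/3} behaviour at the transition point that makes the off-diagonal analysis hard.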
So the argument is of size k and the index is of size k: you are in the transitional range of the Bessel function, and if you know something about Bessel functions, the argument being the same size as the index is a disaster. So probably you can push this through if you are a good analytic number theorist, but that is probably very hard. Instead, I would like to use a very different approach; besides, the classical approach would never work in higher rank. So instead we do the following. To fix notation, let me drop the subscript f: f(z) is the sum of λ(n) n^{(k−1)/2} e(nz). And I define the L∞-factor of f×g, the usual archimedean factor that you get in Rankin–Selberg: it is Γ(s)/π^s times Γ(s + k − 1)/(4π)^{s+k−1}. As you can see, this doesn't depend on f and g; it only depends on the common weight, and the weight is something that we don't change during the argument, so let me just denote it by L∞(s). And we have seen before, in much greater generality, that the square of the L²-norm of g is essentially L∞(1) times the residue of the finite L-function, and it turns out that this residue is bounded above and below: it is of size k^{±ε}, which is my shorthand notation for an upper and a lower bound at the same time, bounded above by k^ε and bounded below by k^{−ε}. And now let's run the following argument. So we are interested in the sum over g of L(1/2, f×g)². The first thing we do is work with the completed L-function, because the completed L-function is easier to handle than the finite part of the L-function: so this is the sum over g of Λ(1/2, f×g)² divided by L∞(1/2)². OK, so far I haven't done anything. So I need this blackboard here to keep track of what we are doing.
So here comes the first step. I use Rankin–Selberg to write this as an inner product: this is 1/L∞(1/2)² times the sum over g of |⟨f ḡ, E(·, 1/2)⟩|². (OK, here I write it a bit differently, but it doesn't matter.) Right? That's the Rankin–Selberg unfolding. This is step one, and step one is Rankin–Selberg. OK. In order to prepare for spectral analysis, I would like to have a sum over L²-normalized forms: this is a sum over Hecke eigenforms, but I would like g to be L²-normalized, so I artificially normalize it. I get k^{o(1)} L∞(1)/L∞(1/2)² times the sum over g of |⟨f ḡ, E(·, 1/2)⟩|² divided by ‖g‖². To compensate for the division, we know what ‖g‖² is: it is essentially L∞(1) up to a small error term, so let me write k^{o(1)} here so that this equality is really an upper and a lower bound. OK. So what do we need for this step? Step 2 is again Rankin–Selberg, plus good upper bounds for the residue at s = 1 of L(s, g×ḡ). Now this looks very much like applying Parseval. I move the factor out of the sum, so that I get k^{o(1)} L∞(1)/L∞(1/2)² times, by Parseval, well, maybe it is Bessel's inequality, the quantity ‖f E(·, 1/2)‖²: the sum over g is part of the spectral expansion of this square. OK. So at this point, it is not really clear how to proceed. A nice thing would be to apply Cauchy–Schwarz at this point. Unfortunately, Cauchy–Schwarz is not easily available: I would like to estimate this by the L²-norm of f times the L²-norm of E, but the L²-norm of the Eisenstein series doesn't exist. So I cannot easily apply Cauchy–Schwarz. But we can do something similar, and I will explain this later; it is a sort of regularization. So this is step 4.
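Steps 1 to 3 lined up in one display, a sketch with the normalizations above:

```latex
\sum_{g} L\bigl(\tfrac12, f\times g\bigr)^{2}
 = \frac{1}{L_\infty(\frac12)^{2}}\sum_{g}\bigl|\langle f\bar g,\,E(\cdot,\tfrac12)\rangle\bigr|^{2}
 = k^{o(1)}\,\frac{L_\infty(1)}{L_\infty(\frac12)^{2}}
   \sum_{g}\frac{\bigl|\langle f\,E(\cdot,\frac12),\,g\rangle\bigr|^{2}}{\|g\|^{2}}
 \;\le\; k^{o(1)}\,\frac{L_\infty(1)}{L_\infty(\frac12)^{2}}\,
   \bigl\|f\,E(\cdot,\tfrac12)\bigr\|^{2}.
```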
Step 4 needs some regularization. So, to recap: this was step 3, and 3 is Parseval; 4 is a regularization that I will explain in a moment. It turns out that this is bounded by k^{o(1)} L∞(1)/L∞(1/2)² times the inner product ⟨f E(·, 1 + ε), f⟩. So basically what is happening here is that you trade off two copies of E(·, 1/2) against one copy of the Eisenstein series at 1 + ε. OK, I will explain this step later; let's believe it for the moment. Now we can use Rankin–Selberg again, because this inner product is again an L-function, and this gives k^{o(1)} times L∞(1) L∞(1 + ε) divided by L∞(1/2)². And again I have to estimate a residue here. So step 5 is the same as step 2: we need Rankin–Selberg and an upper bound for the residue. OK, and that's it. Now we only have to collect these gamma factors, and you see precisely what happens: you get a half power of k for 1 against 1/2, and then you get another half power for 1 + ε against 1/2, and you end up with k^{1+ε}. It is Stade's formula in the GL(2) case that gives a precise description of these gamma factors: in this case the relevant ratio is just Γ(k) Γ(k + ε) divided by Γ(k − 1/2) Γ(k − 1/2), and this gives twice a half power of k. OK. So as you can see, we did basically nothing: we never looked at Fourier coefficients; it is a very soft argument. The only not completely trivial step is step 4, and I am going to explain it. So, concerning step 4: ‖f E(·, 1/2)‖² is by definition the integral over a fundamental domain of Γ\H of |f|² |E|². And here I estimate: I pick a special fundamental domain, namely the usual Siegel set. I mean, I am only interested in upper bounds, so it doesn't really matter.
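The collection of gamma factors at the end can be sanity-checked numerically. A hedged little script (with ε set to 0 for simplicity, so the ratio becomes Γ(k)²/Γ(k − 1/2)², which by Stirling is asymptotically k):

```python
import math

def gamma_ratio(k: float) -> float:
    """Gamma(k)^2 / Gamma(k - 1/2)^2, computed via log-gamma to avoid overflow.
    Stirling gives Gamma(k) / Gamma(k - 1/2) ~ k^(1/2), so the ratio is ~ k,
    i.e. 'twice a half power of k'."""
    return math.exp(2.0 * (math.lgamma(k) - math.lgamma(k - 0.5)))

for k in (10.0, 100.0, 1000.0):
    print(k, gamma_ratio(k) / k)  # this quotient tends to 1 as k grows
```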
So I take a fundamental domain with sufficiently large y-coordinate; F is either this fundamental domain or even the corresponding Siegel set. Then I can simply use the Fourier expansion to bound E(z, 1/2)² by (y^{1/2+ε})²: the constant term in the Fourier expansion is dominating, and everything else is rapidly decaying. And now you see how you trade off two copies of 1/2 against one copy of 1. So this is y^{1+2ε} (or y^{1+ε}, up to relabeling ε), and this in turn is bounded above by the Eisenstein series at 1 + ε, again by the Fourier expansion. So the integral is bounded by ∫_F |f(z)|² E(z, 1 + ε) dμ, because E(z, 1 + ε) contains y^{1+ε} as its constant term, and by positivity the other terms, for real argument, are all non-negative, so you can artificially add them if you want: this is the identity term, and you just add the others to get an upper bound. And this is precisely what I claimed. Okay, so Philippe told me that this argument also plays a role in his paper with Akshay. Yeah, so in any case, this is how you regularize this Eisenstein series at 1/2. Okay, so that's all, and that completes the proof. Okay, so now this was the warm-up, the n = 2 case, and, well, let's do the same thing for GL(n). So now the same on GL(n), and we have some good chances to succeed, because, as I said, we basically did nothing. I mean, we used Rankin–Selberg, we used Stade's formula, we used Parseval, and we have to do something about this regularization. Of course we do it here for Maass forms; there are no holomorphic forms in this setting. So let's do the same on GL(n) for Maass forms. Okay, so here is the theorem. [Audience:] Can I ask a real quick question? Is it really enough to know an upper bound for the residue, or do you need a bound uniformly in a neighborhood? [Speaker:] I need it uniformly in a neighborhood, that's true, because here I am evaluating it at 1 + ε. Okay, fine.
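In symbols, the regularization trick for z in the chosen Siegel set:

```latex
E(z,\tfrac12)^{2}\;\ll\;\bigl(y^{1/2+\varepsilon}\bigr)^{2}
 \;=\;y^{1+2\varepsilon}\;\ll\;E(z,1+\varepsilon),
\qquad\text{so}\qquad
\int_{F}|f|^{2}\,E(\cdot,\tfrac12)^{2}\,d\mu
 \;\ll\;\int_{F}|f|^{2}\,E(\cdot,1+\varepsilon)\,d\mu .
```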
But I mean, certainly, if you can bound the residue, then you can also bound it an ε away. Okay, theorem. Let F, G be, for simplicity, tempered (so satisfying the Ramanujan conjecture) spherical Maass forms for the group SL(n, Z), with respective spectral parameters μ_F and μ_G. Okay, now I am not consistent with what I said earlier: in one of the previous lectures I said that my unitary axis is the real axis, and now it is the imaginary axis, but I think you can cope with this. So the parameters are purely imaginary, by which I mean that the forms satisfy the Ramanujan conjecture. Okay, then: F is fixed, and I am summing over G whose spectral parameter μ_G lies in an O(1)-neighborhood of the given spectral parameter μ_F. That is essentially the same as taking the same weight: if you take the same weight, then your spectral parameter is also in an O(1)-neighborhood; it is on the nose the same thing. So μ_F and μ_G are at distance O(1), and we consider the sum of L(1/2, F×G)² over this family. And I claim that the bound is the cardinality of this set. So what is the cardinality of this set? Well, that is the spectral density at μ_F, and this is measured by the Harish-Chandra c-function. But I have to be a bit careful: if I am very close to the walls, then the measure can do something funny. So let me write c̃, and by definition this is the product over 1 ≤ j < k ≤ n of (1 + |μ_j − μ_k|). [Audience:] This will always be the measure of a 1-ball, whether or not it is near the wall, right? [Speaker:] Yes, yes. The measure itself may behave very strangely if, say, there is a pair of parameters that are equal, so I have to adjust this a bit by adding the 1 here. Yeah, but anyway. And the bound is this quantity to the power 1 + ε, so I lose an ε on the way. Okay, but in spirit it is the same theorem.
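The theorem in display form, a reconstruction from the spoken statement, with μ_F, μ_G the spectral parameters of F and G:

```latex
\sum_{\substack{G\\ \|\mu_G-\mu_F\|=O(1)}}
 L\bigl(\tfrac12,\,F\times G\bigr)^{2}
 \;\ll\;\tilde c(\mu_F)^{\,1+\varepsilon},
\qquad
\tilde c(\mu)\;=\;\prod_{1\le j<k\le n}\bigl(1+|\mu_j-\mu_k|\bigr).
```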
You are averaging Rankin–Selberg L-functions: you take a fixed form of very large spectral parameter, you average over an O(1)-ball, and the upper bound is essentially the cardinality of the set. Okay, how do we prove this? Well, we prove it in the exact same fashion. Let's see if we have all the ingredients available. Okay, so Rankin–Selberg is available. We need an upper bound for the L-function at 1 and in a neighborhood of 1, and for the residue. [Audience:] Sorry, say this again: do you require the Ramanujan conjecture in this case, or not? [Speaker:] Oh, probably I don't need it. I think it was just laziness, because if the Ramanujan conjecture is not satisfied, then maybe you have to modify things; you have to figure out what exactly Stade's formula is: do you conjugate, or do you take the negative inverse? But it is not necessary; it is just convenient, so that you don't have to bother about exceptional eigenvalues. Okay, so as I said, we need an upper bound for L(s, F×G) for s close to 1. In the literature, for n up to 4 this was proved by Farrell Brumley, and then for general n by Xiannan Li, who is probably in the audience. Okay, what else do we need? So this is available; Parseval is always available, same as step 2; then we need Stade's formula. So the endgame will be that we need to estimate the ratio of archimedean factors, L∞ at 1 against L∞ at 1/2 squared, and Stade's formula tells us what this is in terms of a gamma ratio; we just have to verify that it coincides with the spectral density c̃(μ_F), and that is what it does. Okay, so what remains is step 4: we need a similar trick for the general Eisenstein series, in order to make this transition from 1/2 to 1. Okay, well, that's a little lemma.
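The endgame, in parallel with the GL(2) computation: by Stade's formula and Stirling, since μ_G stays within O(1) of μ_F, one expects (this is a hedged sketch of the exponent bookkeeping, not a quoted formula):

```latex
\frac{L_\infty(1)\,L_\infty(1+\varepsilon)}{L_\infty(\frac12)^{2}}
 \;\asymp\;\prod_{j\neq k}\bigl(1+|\mu_{F,j}-\mu_{F,k}|\bigr)^{1/2+O(\varepsilon)}
 \;\asymp\;\tilde c(\mu_F)^{\,1+O(\varepsilon)},
```

matching the spectral density up to the ε-loss in the theorem.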
Okay, so remember, up there the important thing was to bound the Eisenstein series on a fundamental domain by essentially its first Fourier coefficient, and we do the exact same thing here too. The lemma: E(z, 1/2) is bounded by det(z)^{1/2+ε}, plus there is a dual term det(z̃)^{1/2+ε}, where z̃ = w z^{−⊤} w and w is the long Weyl element; what you do is bring z̃ back into canonical Iwasawa form and take the determinant of that canonical form. And z has to be in a suitable Siegel set; it cannot be too close to the bottom of the upper half-space. So z is in a Siegel domain, that is, z = xy with, say, the x_{ij} bounded and the y_j all not too small; the constant √3/2 plays no role here, but it is a valid constant. Okay, and assuming that this lemma is true, we get as a corollary that the integral over the fundamental domain, which is contained in the Siegel set S, of |F(z)|² E(z, 1/2)² dμ(z) produces two terms, and in the second term we make a change of variables to get back to z. The price we have to pay is that we get z̃ inside the cusp form, so this is bounded by the integral of (|F(z)|² + |F(z̃)|²) det(z)^{1+ε}, and F(z̃) is just a dual Maass form; then you can run the same argument with the form itself and with the dual form, and you get the same result. Okay, so let me sketch the proof of this lemma. This is not very exciting; it is essentially the same idea. We use the Fourier expansion of this degenerate Eisenstein series, but we use a special form of the Fourier expansion. So first we write: one of the main features of this maximal degenerate Eisenstein series is the fact that it is really an Epstein zeta function. If you slightly renormalize things, E(z, s) is det(z)^s divided by ζ(ns), times the Epstein zeta function of the matrix x^⊤ y^⊤ y x, where z = xy, evaluated at the point ns/2.
Here Z(M, ρ), for a matrix M and a complex number ρ, is the usual Epstein zeta function: one half times the sum over all non-zero integer vectors a of 1/(a^⊤ M a)^ρ. So you sum the values of the quadratic form, provided M is positive definite, and this matrix here is certainly positive definite. Okay, and this Epstein zeta function is a classical object, so its Fourier expansion can be found in the literature; one can compute the Fourier expansion inductively, following an old paper of Audrey Terras. If a matrix S is given in the following block form (I am following her notation, roughly): S equals the matrix (I, 0; Q^⊤, I) times diag(T, S₂) times (I, Q; 0, I), where T is an n₁-block and S₂ is an n₂-block, so this has n₁ rows and this has n₂ rows, and in total we have n₁ + n₂ rows, then Γ_R(2ρ) Z(S, ρ) can be written in terms of the Epstein zeta function associated with the smaller matrix S₂, and then you can inductively move on. So this is Γ_R(2ρ) Z(S₂, ρ), with S₂ a smaller matrix, plus an Epstein zeta function for T, namely Γ_R(2ρ − n₂) divided by the square root of det(S₂), times Z(T, ρ − n₂/2), and then a complicated term that takes care of the cross products. I mean, these first two are diagonal terms, and then we need a term for the cross products; but this one is rapidly decaying, because now comes the Fourier expansion in terms of Bessel K-functions, and the Bessel K-function decays rapidly, so it will not contribute too much. So there is a sum over a ∈ Z^{n₁} non-zero and b ∈ Z^{n₂} non-zero, with a factor (a^⊤ T a)^{n₂/4 − ρ/2} and, this is a bit technical, a factor (b^⊤ S₂^{−1} b)^{ρ/2 − n₂/4}, times e(b^⊤ Q a), so here you see the cross term, times the Bessel function K_{n₂/2 − ρ}(2π √(a^⊤ T a · b^⊤ S₂^{−1} b)). And that's it. Okay, and then it is an exercise to bound this: you do this
inductively, reducing the dimension step by step. The cross term can be estimated trivially using the rapid decay of the Bessel K-function, and then you continue with S₂, which has one dimension less; you apply the same argument, and you keep doing this. Eventually, let me continue here on this blackboard, you end up with a sum over j from 1 to n of some easy terms, which may contribute some poles but are otherwise easy, times (z₁ ⋯ z_j)^{−1/2} z_j^{j/2 − ρ}, uniformly in matrices S of the form x^⊤ Z x, where Z is a diagonal matrix with entries z_n, …, z₁, ordered such that z_n is the biggest and z₁ is the smallest, and none of them is really small. Okay, and if you plug this in, you get a bound for the Epstein zeta function based on the Fourier expansion; if you plug this into the formula above, then you get a bound for the Eisenstein series, and if you combine everything, then you get the lemma. So this requires a bit of case-by-case analysis, but the idea is fairly straightforward. Okay, so this proves the result, and the moral of the story is that in higher rank it is often useful to use soft techniques, and not to try things like approximate functional equations, where you end up with a total mess that you cannot handle; the soft techniques generalize more easily. Okay, well, I guess that is the end of what I would like to teach. I hope you enjoyed this a bit, and nobody will be too angry if I end 10 minutes early, but maybe you have questions. [Audience:] How far are you from an asymptotic formula? [Speaker:] Good question. And I should also say: I said that these soft techniques are more easily generalizable, but at the same time, of course, they are not strong enough to prove something really deep like subconvexity. They do give something highly non-trivial, namely a best possible moment estimate; but what we would need to prove subconvexity is morally equivalent to an asymptotic formula with a power-saving error term. Well, we can look at this
proof and see where we fail, and I mean, basically we fail almost everywhere to get an asymptotic formula. So the first small cheat is here, the factor k^{o(1)}; then the second cheat is when we apply Bessel's inequality, because the spectral expansion runs over the whole space of automorphic forms, including Maass forms of weight k. The holomorphic forms of weight k are of course the big chunk that contributes, but there are also Maass forms of weight k that contribute, so here there is a genuine inequality. And then again we have the small fluctuation in the residue. On scrap paper I worked out a version where I replaced k^ε by a power of log, but I don't know how to make this into an asymptotic formula, let alone one with any error term. [Audience:] Another question: is there any chance that this works when F is on GL(3) and G is, say, a GL(2) form? [Speaker:] Oh, well, potentially yes, but that is a totally different story. Of course you also have Rankin–Selberg there. So I said, perhaps a bit sloppily, that this is Rankin–Selberg, but it is one version of Rankin–Selberg, namely GL(n) times GL(n). You can do Rankin–Selberg in all possible combinations GL(n) times GL(m), and you are asking about GL(n) times GL(n−1). And this then is of course a totally different story, because there is a period formula, a very different period formula, and it will involve a summation over the lower-dimensional GL(n−1) spectrum. There is very beautiful work by Matthew Young in this direction: he computed several moments for GL(3) times GL(2).