Thank you for the introduction and for the invitation to speak. Today I'll be talking about L^p norm bounds for automorphic forms, and I should emphasize that everything I'll be talking about is joint work with Rizwan Khan from the University of Mississippi.

All right, so the general setup of the problem I'm interested in is looking at eigenfunctions of the Laplacian on a manifold. We'll take M to be a manifold; we'll start in n dimensions, though eventually we'll just look at surfaces, so two-dimensional manifolds. Initially we'll assume this manifold is compact and without boundary. The prototypical example you should have in mind is something like the n-sphere living in R^{n+1}. And we're going to look at Laplacian eigenfunctions on this manifold. These are functions on the manifold that are L²-normalized: the manifold comes with a natural volume measure, and we normalize the function so that its L² norm is equal to one. We also want it to satisfy the eigenvalue problem for the Laplace–Beltrami operator: Δφ = λφ. On a general manifold the Laplace–Beltrami operator is a somewhat complicated object, but we should really just think of how it looks on R^n, namely minus the sum of the second partial derivatives; the minus sign is there for reasons I'll explain on the next slide. So you can think of this as a differential operator acting on a function, and the eigenvalue λ is a priori some complex constant which, as we'll shortly see, is actually not just some complex constant but a non-negative real number.

Okay, so this is a second-order linear differential operator, and one reason it's really quite nice is that it commutes with all the isometries of M; on R^n an isometry means a rotation or a translation. It's a scalar linear differential operator of the lowest possible order that commutes with isometries — basically the simplest such nice operator.

All right, so there's a reason why we study these eigenfunctions, and one reason is that they have very many nice properties, such as the fact that they form an orthonormal basis of L²(M). So this is a Hilbert space, and it turns out that the Laplacian eigenfunctions span this space and form an orthonormal basis. Not only that: there are countably many such eigenfunctions, the eigenvalues are discrete, the eigenvalues are non-negative, and the eigenvalues form a sequence that tends to infinity. The fact that the eigenvalues tend to infinity means there's a natural ordering of these eigenfunctions in terms of increasing eigenvalue. It's only a partial order, because you might have repeated eigenvalues — two eigenfunctions with the same eigenvalue don't get ordered between themselves — but up to that you can list all of them.

All right, so in general these eigenfunctions are complicated beasts; there's really not much nice you can say about how they look. They don't necessarily have closed-form expressions; they're just some complicated special functions satisfying this second-order linear differential equation. But in various special cases you actually can write down explicitly what these eigenfunctions look like. The two prototypical examples are the sphere and the torus.
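To fix notation, here is the setup just described in display form (with the sign convention that makes the eigenvalues non-negative):

\[
\Delta \varphi = \lambda \varphi, \qquad \|\varphi\|_2^2 = \int_M |\varphi|^2 \, d\mu = 1, \qquad \Delta = -\sum_{j=1}^{n} \frac{\partial^2}{\partial x_j^2} \ \text{ on } \mathbb{R}^n,
\]

and the spectrum is a discrete sequence \(0 = \lambda_0 \le \lambda_1 \le \lambda_2 \le \cdots \to \infty\).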
So if we take, for example, the n-torus, then we can explicitly write down what these eigenfunctions look like, and they're actually really nice: they're the basic sine and cosine waves. (There shouldn't be an i here — it should be sin(2π⟨x, y⟩) or cos(2π⟨x, y⟩).) Here y is an n-tuple of integers, and for each n-tuple of integers, sin(2π⟨x, y⟩) and cos(2π⟨x, y⟩) are two different solutions to the eigenvalue problem. The eigenvalue is just 4π² times the sum of the squares of the entries of this integer tuple y. So if you're a number theorist you might already be interested, because you see that these eigenvalues are expressed in terms of sums of squares of integers. Indeed, understanding Laplacian eigenfunctions and eigenvalues on the n-torus is interesting not just in analysis but in number theory for this reason. With that being said, I won't really be focusing on the torus much in this talk, except I just want to quickly show you what these eigenfunctions look like. So this is a density plot of these eigenfunctions, and you can see they're basically periodic: they satisfy these nice periodicity relations and the densities aren't too complicated.

Okay, so as I said before, in general we can't explicitly write down what these eigenfunctions look like except on particularly nice manifolds like the n-torus or the n-sphere. A more complicated example, but still quite interesting, is the Bunimovich stadium, which is a compact surface. It actually has a boundary, so when you work with the eigenvalue problem you have to add boundary conditions, but let's sweep that under the rug. There are no closed forms for the eigenfunctions, but we can still calculate them numerically. So here are some numerical calculations, due I believe to Alex Barnett, showing density plots of the eigenfunctions: the darker the plot, the larger the eigenfunction, and the lighter, the smaller. Each of these pictures is a density plot of a different Laplacian eigenfunction, with increasing eigenvalue. You can see that for the most part these look pretty random — okay, there are dark bits, but there aren't huge clumps of darkness. So these eigenfunctions aren't large in most places; they're usually spread out. Except there are some funny examples, here and especially here, where the Laplacian eigenfunction looks like it's mostly concentrated inside the rectangular part and not inside the semicircular part. And here it looks like it's completely avoiding the semicircular part and completely concentrated in the rectangular part of this two-dimensional manifold with boundary. Okay, so this is going to come up later in the talk: we'll look at L^p norms, because we want to understand where there's concentration of mass and how big an eigenfunction may be.

So the case I'm most interested in is the case where these manifolds are locally symmetric spaces — locally symmetric spaces coming from a group. You start with a Lie group G, you quotient on the right by a maximal compact subgroup of G, and you quotient on the left by a lattice. All we should have in mind when I write this is the most basic of interesting cases, which is when we take G to be SL₂(R), the group of two-by-two matrices with real entries and determinant one,
K to be the maximal compact subgroup, which is SO(2), and Γ to be the discrete subgroup SL₂(Z) — the modular group. The reason this is interesting — for me at least — is that we'll see that these Laplacian eigenfunctions are actually automorphic forms.

All right, so in this case G/K — this group quotiented by this maximal compact subgroup — turns out to be identifiable with the upper half-plane H: the points are z = x + iy with y positive. And then this locally symmetric space is just SL₂(Z)\H, which is the modular surface, and this has a fundamental domain which I'll show pictures of over there.

Okay, so here is a bunch of different pictures of the fundamental domain; in each case there's a Laplacian eigenfunction, or rather the density plot of a Laplacian eigenfunction. The fundamental domain looks like this, living in the upper half-plane. This picture here is a density plot of a Laplacian eigenfunction on SL₂(Z)\H, on the modular surface. Each of these is a different one, so we see they all look slightly different, but they have similar patterns, and they look somewhat more concentrated down here than up here. There's a reason for that, which we'll see shortly: this is a hyperbolic surface, so distances up here in the picture are compressed much more than distances down here. All right, so let me show you some more pictures. If I zoom out and look at more of these Laplacian eigenfunctions — so this is going way further up — we'll see that the further up we go, the larger the wavelength we're looking at appears; but again, that's just because this is a hyperbolic surface, so distances up here are compressed differently from distances down here. All right, so that's what density plots of Laplacian eigenfunctions on the modular surface look like.

So let me tell you some more about this modular surface. It's a hyperbolic surface — it has negative curvature — with the hyperbolic metric coming from the upper half-plane. The Laplacian isn't quite the same as on R²: instead of being minus the sum of the second partial derivatives, you have to include this y² factor here, and this y² comes from the fact that the distance function has a y in the denominator; the Laplacian is compensating for this distance function and its scaling in the vertical direction. And finally there's a nice volume measure on this modular surface, which is just (3/π) dx dy/y², where the 3/π is a normalizing factor to ensure that this is a probability measure.

Okay, so what are these Laplacian eigenfunctions? Well, the Laplacian eigenfunction with eigenvalue zero is just the constant function one. The non-constant Laplacian eigenfunctions on the modular surface are equivalent to non-constant Laplacian eigenfunctions on the upper half-plane that are invariant under SL₂(Z). These are known as Maass forms — Maass cusp forms. So these are a type of automorphic form, and they're closely related to classical holomorphic modular forms. If you're more comfortable with holomorphic modular forms, then wherever you read "Maass form" you should just think of something like a holomorphic modular form. So let me explain why these are similar to holomorphic modular forms. Basically, instead of working with the Laplacian as before, you can pass to the weight-k Laplacian.
So instead of just having this part here, you include this extra bit here that depends on the weight k. This isn't quite the same as the Laplacian, but it's similar in many ways. Now if, instead of asking for eigenfunctions of the Laplacian, we ask for eigenfunctions of this weight-k Laplacian with this exact eigenvalue here, then we essentially get holomorphic modular forms. More precisely, you get a holomorphic modular form up to multiplication by y^{k/2}, and once we remove this factor, we find that this partial differential equation here is the same thing as the Cauchy–Riemann equations — the same thing as ensuring that we're working with a holomorphic function. Okay, so these are very similar: these Maass cusp forms, being Laplacian eigenfunctions, are very similar to holomorphic modular forms. You'll notice here that the eigenvalue for a holomorphic modular form is roughly k², so you should think of the eigenvalue of a Maass cusp form as being analogous, in some sense, to k².

All right, so here is the fundamental question that I want to ask in this talk, and then I'll say what we can show towards it. The fundamental question asks about the distribution of mass of Laplacian eigenfunctions — or rather, of a sequence of Laplacian eigenfunctions as the eigenvalue tends to infinity. This question, of course, is a bit vague; there are many ways to interpret what it means to study the distribution of mass. One of the ways, which is the one I want to stress, is to understand this through L^p norms. You've all seen the L^p norm before: you take your function, take its absolute value, raise it to the p-th power, integrate over the whole surface, and then take the p-th root. You can do this for any real number p greater than or equal to one; and when p is infinity this is just the essential supremum, which, since we're working with smooth functions, is the same thing as the supremum. So you can think of this as, in some sense, measuring the concentration of mass. If you take a large p, this is essentially asking how big the spikes of the function φ are: how big can it get, and how much mass does it concentrate in various small areas? If you take a large p and the L^p norm is not very big, that means the function does not get too big — there are no very tall, steep spikes. If you take a small p, a small L^p norm means the function is reasonably spread out and there are no regions where it's repeatedly larger than it should be.

Okay, so I'm going to study this for Laplacian eigenfunctions, and eventually the main problem we're studying is this for Laplacian eigenfunctions on the modular surface — so for Maass cusp forms. So what do we know? We know that when p is equal to two, by definition a Laplacian eigenfunction is L²-normalized, so the L² norm is just equal to one. What about the sup norm? Well, there's a rather classical result, coming from something called the local Weyl law, which tells us that the L^∞ norm can't grow faster than the eigenvalue to the (n−1)/4, where n is the dimension of the manifold. If we're looking at a surface, this says that the L^∞ norm grows no quicker than the eigenvalue to the one quarter. And there's a nice general way to get between these two norms for any p in between: this is interpolation. Think of it as a convexity principle.
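As a minimal sketch of this convexity principle (it's just Hölder's inequality; the prose below spells it out):

\[
\|\varphi\|_p \le \|\varphi\|_2^{2/p} \, \|\varphi\|_\infty^{1 - 2/p} \ll \lambda^{\frac{n-1}{4}\left(1 - \frac{2}{p}\right)} \qquad (2 \le p \le \infty).
\]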
If you know the L² norm and the L^∞ norm, then you can bound all the L^p norms in between with an explicit dependence. So in fact the L^p norm, for any p between two and infinity, is bounded by λ to some exponent that depends on both n and p, such that when p equals two you just get λ⁰, which is one, and when p equals infinity you get λ^{(n−1)/4}. This is just interpolating between the L² norm and the L^∞ norm. But you can actually do slightly better. This is due to Sogge in the late eighties. What he showed is that there's another exponent: you also get a good bound for the L^p norm when p is equal to 2(n+1)/(n−1). If n equals two — if M is a surface — this is just the L⁶ norm. And what you find is that you can actually beat what you would get by interpolation for the L⁶ norm. Then once you have the L⁶ norm, you can interpolate between the L² norm and the L⁶ norm to get L^p norm bounds for p between two and six, and interpolate between the L⁶ norm and the L^∞ norm to get the remaining bounds. So what he worked out is that the L^p norm is bounded by λ to some power, and that power varies as follows. For n = 2, I've graphed this down here, with 1/p on the horizontal axis. When p is two, 1/p is a half and the exponent is just zero. When p is infinity, 1/p is zero and the exponent is a quarter. When p is six, 1/p is a sixth and we get this point here. And then we interpolate linearly between them to get this picture.

So this is a result of Sogge, and you might ask: is this result any good — is it actually useful? The answer is yes, in the sense that it's the best you can do on certain manifolds. If you look at, say, the n-sphere, these L^p norm bounds for Laplacian eigenfunctions are actually sharp: you can show that there exist Laplacian eigenfunctions which achieve these L^p norm bounds. Okay, but that's for one particular manifold. What happens if you impose conditions on your manifold? For example, suppose your manifold has non-positive curvature — obviously the sphere is positively curved. With non-positive curvature you can actually improve these bounds: you get logarithmic savings, so you save by some power of log λ. And if you take something completely flat like a torus — take the two-torus, for example — it's known that the L^p norm is in fact uniformly bounded for p between two and four; and more generally for the n-torus we don't know it's uniformly bounded, but we only lose a very small power of the eigenvalue if p is between two and 2(n+1)/(n−1).

Okay, so here you can see that these bounds can't be improved on the sphere, but they can be improved on negatively or non-positively curved surfaces, or flat surfaces like the torus. So there's a general conjecture due to Iwaniec and Sarnak — I guess it's written down explicitly in a survey of Sarnak from 2003 — which says that if you're working on a compact surface, a two-dimensional manifold, that's negatively curved, then not only do you get this logarithmic savings in the L^p norm, but in fact you should have something that's essentially uniformly bounded: the L^p norm should grow no faster than the eigenvalue to the ε, no matter which p you choose.
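For a surface, and with λ denoting the Laplace eigenvalue (so the spectral parameter is about √λ), Sogge's exponents work out to the following — a paraphrase of the graph on the slide:

\[
\|\varphi\|_p \ll \lambda^{\delta(p)}, \qquad
\delta(p) =
\begin{cases}
\dfrac{1}{4}\left(\dfrac{1}{2} - \dfrac{1}{p}\right), & 2 \le p \le 6,\\[2mm]
\dfrac{1}{4} - \dfrac{1}{p}, & 6 \le p \le \infty,
\end{cases}
\]

so in particular δ(4) = 1/16 and δ(6) = 1/12, matching the points on the graph.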
So this is saying that Sogge's bound is very far from the truth: Sogge gives polynomial growth in the eigenvalue, and this conjecture of Iwaniec and Sarnak says that actually you should have something that grows as slowly as possible. So this is a really strong conjecture — really quite optimistic. One of the reasons is that if you take your compact surface to be a certain arithmetic surface, and you take special points on that surface, then the values of these Laplacian eigenfunctions actually relate to special values of L-functions, and the conjecture would show that these special values of L-functions are quite small. So it would show something towards the generalized Lindelöf hypothesis, which is the conjecture that the central value of an L-function of an automorphic form grows no quicker than its analytic conductor to the ε, for every ε. This generalized Lindelöf hypothesis, which I'll abbreviate to GLH, is implied by the generalized Riemann hypothesis, but is certainly not known unconditionally. So in some sense it's a proxy for the generalized Riemann hypothesis: a really strong, rigid conjecture that we all believe is true, but which is very far out of reach right now. So the Iwaniec–Sarnak conjecture implies this GLH in certain cases — it's an extremely strong conjecture.

Okay, so what's known towards this conjecture — this conjecture for negatively curved compact surfaces? We'll work with something non-compact: the modular surface, which is non-compact but has finite volume. And what Iwaniec and Sarnak showed, 27 years ago, is that, well, we don't get all the way down to eigenvalue to the ε, but we do get a power-savings improvement on the sup-norm scale. The trivial bound says the sup norm is bounded by the eigenvalue to the one quarter, and Iwaniec and Sarnak get a power saving: they shave 1/24 off the exponent. You can think of this as a subconvex bound for the sup norm. Okay, once you have this, you can interpolate between this sup norm bound and the L² norm, and you get this graph here. The fact that this surface is non-compact means we can't just use Sogge's result — you can't use his L⁶ norm result, because that only works for compact surfaces — though you could probably modify his method to show something similar.

[Audience] May I add something? You're in the arithmetic setting, so the eigenfunctions are automatically Hecke eigenfunctions there; but in general you need to take the Hecke operators into account.
[Speaker] Yes — I'm sweeping that under the rug, but yes: I need to assume these Maass cusp forms are cusp forms that are also eigenfunctions of every single Hecke operator. So they need to be arithmetic eigenfunctions.

Okay. The fact that M is non-compact surprisingly means that the previous conjecture — the L^p norms being bounded by the eigenvalue to the ε — cannot hold on non-compact surfaces. This phenomenon is essentially escape of mass into the cusp: these eigenfunctions grow quite quickly high in the cusp. But we won't really worry about that here. Okay, so this is the result of Iwaniec and Sarnak, which proves this power saving for the L^∞ norm, and the question that will occupy the rest of the talk is whether we can prove something like this when p is smaller. Their result uses the amplified pre-trace formula; it really uses the arithmetic nature of the modular surface and of these eigenfunctions. But the method only works for the L^∞ norm.
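For the record, the two sup-norm bounds being compared are (with λ the Laplace eigenvalue):

\[
\|\varphi\|_\infty \ll \lambda^{1/4} \ \ \text{(local Weyl law)}, \qquad
\|\varphi\|_\infty \ll_\varepsilon \lambda^{1/4 - 1/24 + \varepsilon} = \lambda^{5/24 + \varepsilon} \ \ \text{(Iwaniec–Sarnak)}.
\]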
So the method says nothing about other values of p. Can you improve on Sogge for smaller p? In Thomas Watson's PhD thesis in 2002, he showed the answer is yes when p is equal to four, conditionally on the generalized Lindelöf hypothesis. So again, I'm assuming these eigenfunctions are eigenfunctions of the Hecke operators. He looked at the L⁴ norm, and he showed that if you assume GLH, you do get this Iwaniec–Sarnak conjecture: the L⁴ norm grows no faster than the eigenvalue to the ε. And once you have that, you can interpolate between L² and L⁴, and between L⁴ and L^∞. Okay, so this is nice — a really quite strong result — but it's conditional on GLH.

And it turns out you can actually hope for something even better, known as the Gaussian moments conjecture. This essentially goes back to Berry — I don't know if this conjecture is written down anywhere nicely — and it states that if you take M to be a negatively curved surface (so not just non-positively curved: negatively curved) and you look at the n-th moment of an eigenfunction — so you look at its integral, its n-th moment — this should converge to the n-th moment of a Gaussian random variable. So these Laplacian eigenfunctions should behave, in distribution, like a Gaussian random variable. This is a very optimistic conjecture — really quite hopeful. Notice that if n is odd, the right-hand side vanishes, so the conjecture is that the odd moments vanish. And if n is even, you're integrating an even power of a real-valued function, which is non-negative, so this is the same thing as the L^n norm raised to the n-th power when n is an even integer. In particular, if n is four, this conjecture states that the L⁴ norm, raised to the fourth power, should converge to the fourth moment of a Gaussian random variable, which is equal to three.

In fact, this is known if we assume the generalized Lindelöf hypothesis. This is a result of Jack Buttcane and my co-author Rizwan Khan from five years ago: they improved Watson's L⁴ norm bound to an asymptotic, showing that under GLH this L⁴ norm, raised to the fourth power, converges to three with a power saving in the error term. And subsequently Rizwan Khan and I proved this result unconditionally for a special subsequence of eigenfunctions: for dihedral Maass cusp forms — CM Maass cusp forms — we can prove this asymptotic unconditionally. But that is a thin subsequence of eigenfunctions, so we only have the eigenvalue tending to infinity along this subsequence. Okay, so this is much stronger than an upper bound, because it's actually an asymptotic.

Okay, so I should quickly mention, before I move on, that these kinds of problems have been studied in many aspects. I'm posing all these questions for Maass cusp forms — Laplacian eigenfunctions on the surface — but, as I mentioned earlier, these are very similar to holomorphic modular forms. So you can ask the same questions about L^p norm bounds for holomorphic modular forms in the weight aspect: as the weight k tends to infinity, how large are these L^p norms of holomorphic modular forms? (You need to multiply them by the imaginary part of z to the k/2 to make sure this is well defined on the surface.) And here we have a nice sup norm bound due to Xia from 2007, which shows that the sup norm grows essentially like k to the one quarter.
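Coming back to the Gaussian moments conjecture stated a moment ago, in symbols it predicts (with μ the normalized volume measure and g a standard Gaussian random variable):

\[
\int_{M} \varphi^n \, d\mu \longrightarrow \mathbb{E}[g^n] =
\begin{cases}
(n-1)!! = (n-1)(n-3)\cdots 3 \cdot 1, & n \text{ even},\\
0, & n \text{ odd},
\end{cases}
\]

so for n = 4 the prediction is \(\|\varphi\|_4^4 \to 3\).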
There is also an unconditional bound for the L⁴ norm — an unconditional analogue of Watson's result: the L⁴ norm of a holomorphic modular form in the weight aspect grows no faster than k to the 1/12. And if you assume the generalized Lindelöf hypothesis, then you can prove something much stronger and show this L⁴ norm is uniformly bounded — it grows no faster than a constant. This is a very recent result from last year.

You can also look at these same problems in the level aspect, so you think of sequences of Maass cusp forms or holomorphic modular forms of increasing level. And it turns out there are two different regimes: if you let the level q tend to infinity along square-free integers, then you get one kind of L⁴ norm bound; if you let q tend to infinity along prime powers, you get an analogue of the Iwaniec–Sarnak result. And there are various hybrid results where q may be any integer, with some dependence on its factorization — it gets a bit complicated when q is neither purely square-free nor purely a prime power. There are lots of variants of these problems; for the L⁴ norm in the level aspect there's work of Rizwan Khan again, for example. And finally, you can look at these problems on average: if you look at the L⁴ norm on average, you can show bounds that are essentially sharp. Okay, so this is a very active area of research — a lot of people are thinking about L^p norms, especially the sup norm and the L⁴ norm, because you have good strategies in each of those cases.

So what I want to discuss today is some work that Rizwan Khan and I have been working on. You should take this with a pinch of salt, because we're still writing it up, so things are not yet complete and these exponents may change.

[Audience] This question — do we not care about p less than two?
[Speaker] I guess we do care about p less than two, but the problem is you have nothing to interpolate with. You can get lower bounds for L^p norms with p between one and two, but you can't interpolate upper bounds, because you have no upper bound for the L¹ norm that is in any way non-trivial. So basically the techniques go out the window for p less than two.

Okay, so this is the result that Rizwan Khan and I proved: for Laplacian eigenfunctions on the modular surface — arithmetic eigenfunctions, so eigenfunctions of all the Hecke operators — we get an improvement over what Sogge gives us by quite a large amount. Sogge tells us that the L⁴ norm is bounded by the eigenvalue to the 1/16, and we improve this exponent by a little over a factor of six: we get 3/304. So Sogge's bound is something up here, and we get something that's more than a sixfold improvement in the exponent. Okay, once we have this, we can interpolate between the L² norm being one and this L⁴ norm bound, and between the L⁴ bound and the sup norm bound. And now we get something that beats Sogge's bounds for all p. Sogge's bound is this red line, Iwaniec–Sarnak gives this blue line, and if you combine Iwaniec–Sarnak with what Khan and I prove, we get this green line that beats Sogge's bound everywhere. And the nice thing about this is that it's unconditional. Of course, we know under the generalized Lindelöf hypothesis that we can get all the way down to zero here for p at most four, but that's conditional; this is unconditional.
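In symbols, the new bound as stated on the slide (with the caveat above that this is work in progress) is:

\[
\|\varphi\|_4 \ll_\varepsilon \lambda^{3/304 + \varepsilon}, \qquad \text{versus Sogge's} \quad \|\varphi\|_4 \ll \lambda^{1/16} = \lambda^{19/304}.
\]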
[Audience] I have a question. Didn't Simon Marshall do something on this?
[Speaker] Simon — yeah, that's a good point, I didn't mention it. Simon had a paper in the Duke, which I was actually skimming this morning, from about seven years ago, where he looked at L² norm bounds for geodesic restrictions and got a subconvex bound for those. And then he used further work to show that this implies subconvex bounds for the L⁴ norm for compact arithmetic hyperbolic surfaces, the ones coming from quaternion algebras. Maybe you could push this method to work in the non-compact setting — I don't know offhand. It's a much smaller saving — something like a 1/56 improvement over Sogge's exponent, so it's a very small saving.
[Audience] I agree. Thanks.

Okay, so in the last bit of the talk — the main bit of the talk — let me explain how we prove this. The initial portion of the strategy goes back twenty years to Watson's thesis; I'm guessing Peter Sarnak is the person who suggested this strategy to Watson, who was his PhD student. The strategy starts from the fact that we want to look at this L⁴ norm; okay, we can just look at the fourth power of the L⁴ norm, removing the fourth root. So we're looking at the integral of φ to the fourth, which is the same thing as the inner product of φ² with itself. Then we have the inner product of a function with another function, and now we can use Parseval's identity to expand this over an orthonormal basis of L²(M). The nice thing is that, as I mentioned earlier in the talk, the Laplacian eigenfunctions themselves form an orthonormal basis of L²(M). So our spectral expansion here — the expansion using Parseval's identity — is in terms of the Laplacian eigenfunctions themselves, just different ones. We have the constant eigenfunction one, and then we have a sum over all the Laplacian eigenfunctions ψ of the inner product squared of φ² against ψ. Okay, I'm cheating a bit here, because this is a non-compact surface, so there's also a contribution from the continuous spectrum involving Eisenstein series. For the sake of this talk we'll just pretend those aren't there; they don't add any complication.

Okay, so the first term is just what we expect to be the main term, and now we've reduced the problem to understanding how large this sum of triple products is — by a triple product, I mean the integral of φ² against another Laplacian eigenfunction ψ. And what was shown in Watson's thesis is that we can relate this triple product to special values of L-functions. Once we do that, we can actually show that these terms decay exponentially once the eigenvalue of ψ is sufficiently large compared to the eigenvalue of φ.

All right, so let's see what this is. This is the identity that relates these triple products to L-functions. It appeared in Watson's thesis and was then fleshed out in fair generality by Ichino a few years later. It states that this triple product of the squared Laplacian eigenfunction against another Laplacian eigenfunction is exactly equal to some explicit constant times this ratio of special values of L-functions: L-functions evaluated at the central point, divided by L-functions evaluated at the edge of the critical strip. These are completed L-functions — the Λ means I'm including the gamma factors.
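Schematically — this is a paraphrase, suppressing the exact constant — the Watson–Ichino formula for the modular surface reads:

\[
\left| \int_{\mathrm{SL}_2(\mathbb{Z}) \backslash \mathbb{H}} \varphi^2 \, \psi \, d\mu \right|^2
= c \cdot \frac{\Lambda\!\left(\tfrac{1}{2}, \psi\right) \Lambda\!\left(\tfrac{1}{2}, \mathrm{sym}^2 \varphi \times \psi\right)}{\Lambda\!\left(1, \mathrm{sym}^2 \varphi\right)^2 \Lambda\!\left(1, \mathrm{sym}^2 \psi\right)},
\]

where the Λ's denote completed L-functions (gamma factors included).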
Okay, the fact that these have gamma factors isn't an issue, because we can explicitly write down what these gamma factors look like and then use Stirling's formula to understand their size. It turns out to be easiest to understand the size in terms of the spectral parameters, which are roughly the square roots of the eigenvalues: I'll write t_ψ for the square root of λ_ψ minus a quarter, and t_φ for the square root of λ_φ minus a quarter. And it turns out this ratio of completed L-functions is exactly equal to some constant, times some gamma functions, times the L-functions themselves, and these gamma functions have a certain asymptotic behavior. The first thing to notice is that if the spectral parameter of ψ is larger than twice the spectral parameter of φ, this thing decays exponentially — so, as I said on the previous slide, we can truncate the sum due to this exponential decay: it's really a finite sum rather than an infinite sum. The second thing is that we have some delicate polynomial dependence on these spectral parameters: there's decay that is polynomial in t_ψ, and then some polynomial dependence on t_φ and t_ψ that changes when these spectral parameters are close — if t_ψ is close to 2t_φ, this decay goes away, and otherwise it's present.

Okay, the upshot of this is that we can truncate the spectral sum, and we can show that the fourth power of the L⁴ norm is bounded by one plus a finite sum of special values of L-functions, weighted by this complicated polynomial weight in the spectral parameters. And the rest of the game is to understand this sum of L-functions. So this is a mixed moment of L-functions — there are two different L-functions in here — with a complicated weight. Understanding moments of L-functions is a very well-studied problem: the most classical version is understanding moments of the Riemann zeta function. You can think of this as a discrete analogue — it's a sum instead of an integral, and these are more complicated L-functions — but it's a similar problem to understanding moments of the Riemann zeta function, with the additional complication that it's a hybrid problem, which actually makes this problem quite hard. By hybrid I mean the following. For the zeta moment problem, the only parameters are the height of the moment, which we fix, and the parameter T: we want to understand how big the moment of the Riemann zeta function is in the T-aspect. But here the moment depends delicately on φ: we're summing over t_ψ, and these L-functions depend delicately on both t_ψ and t_φ — on both spectral parameters. The length of the sum depends on t_φ, and the conductors of these L-functions depend on both. So it's a genuinely hybrid problem, with two parameters to worry about.

Okay, so let me mention what's known about this spectral sum — I'm just going to omit what's inside the weights, because it's complicated. A strategy for bounding the spectral sum is to break it up into three parts, and the reason is that, if I go back to this slide, you'll notice that this weight here depends delicately on how close t_ψ is to 2t_φ. What Sarnak and Watson noticed almost twenty years ago is that the initial portion, where t_ψ is somewhat smaller than 2t_φ, looks negligible so long as we assume the generalized Lindelöf hypothesis.
And the same goes for the range where t_ψ is on the other side — where t_ψ is quite close to 2t_φ and the range is quite short. So they showed that these two contributions are very small under the assumption of the generalized Lindelöf hypothesis, and the main contribution seems to come from when t_ψ is reasonably close to 2t_φ. In this range, where t_ψ is reasonably close to 2t_φ, they were actually able to prove a bound unconditionally for this part of the spectral sum, using techniques like the GL(3) Voronoi summation formula, which had just been developed by Stephen D. Miller and Wilfried Schmid. Okay, so what they showed is that the main contribution seems to come from here, and they could bound it quite well, while these two other terms, at least under GLH, are negligibly small. And you might hope to be done at this point, because this bit looks like the main bit and these terms here should be small. What they showed is that the main bit actually isn't too large, so you would hope the small bits could be shown to be small. But somewhat surprisingly — and this wasn't noticed for a while — it's actually quite hard to show unconditionally that these bits that should be small genuinely are small. So what Sarnak and Watson's work shows is that we just need to prove that these two terms, which we know are small conditionally, are actually small unconditionally.

[Peter Sarnak] If I may interrupt — this might be a good occasion for me to confess something; I think Peter knows what I'm going to confess. In my BAMS notes, I think from 2004, there's a claim that Watson and I were going to prove unconditionally that the L⁴ norm is bounded by λ to the ε, and that claim is withdrawn. I think I've said it enough times around the world, but since you're recording this, it's now on record. Sorry to disturb you, but I think the record should be set straight, and I'm setting it straight: I'm the culprit.
[Speaker] I was tactfully skirting around that claim — because this bit here you actually can show, but these bits here, no.

All right, so for the rest of the talk, I'm just going to treat how we can unconditionally attack this first range. I'm not going to worry about the second range; the techniques there are similar. This first range is the hard bit, it turns out: it looks like it should be easy, but it's actually hard. All right, so we want to attack this sum where t_ψ is a fair bit smaller than t_φ. We break it up into dyadic ranges; we know the size of the weight in each dyadic range, and we have this moment of L-functions. What we want to show is that, once we've broken the sum into dyadic ranges with t_ψ of size T, with T smaller than t_φ and these weights out front, this moment of L-functions is bounded by one. If you assume the generalized Lindelöf hypothesis, this is certainly true; under GLH this term is not just less than one, it's less than T over t_φ. Here capital T is my dyadic parameter, which goes up to t_φ. Okay, so we know this conditionally, and the problem is how to prove it unconditionally. This is quite hard, and the reason is that this L-function is quite complicated: it has a large analytic conductor relative to the length of the sum. We're good at bounding moments when the analytic conductor is not too large compared to the length; but this is a short sum with large analytic conductor, and that's a very hard problem.
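To set notation for what follows, the unnormalized target in a dyadic range has the following shape — again a paraphrase of the slide, with the weights normalized away:

\[
\sum_{T \le t_\psi \le 2T} L\!\left(\tfrac{1}{2}, \psi\right) L\!\left(\tfrac{1}{2}, \mathrm{sym}^2 \varphi \times \psi\right) \ll_\varepsilon (T \, t_\varphi)^{1+\varepsilon} \qquad (T \le t_\varphi),
\]

whereas GLH would give roughly T^{2+ε}, since by the Weyl law there are about T² forms ψ in this range.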
[Audience] Is it known that the central values are non-negative?
[Speaker] Yes, it is known that these central values are non-negative — that's a key thing we're going to use. So we know that both of these numbers are greater than or equal to zero, which is not a priori obvious.

All right, so how do we bound this? We're going to have three strategies, which I'll talk you through. The first is simply using Cauchy–Schwarz and the spectral large sieve inequality of Deshouillers and Iwaniec. The second is a little trick using Hölder's inequality and unfolding, and then trying to understand just the first moment of this L-value here. And the third is a technique we call GL(4) × GL(2) spectral reciprocity.

Okay, so the first is quite straightforward. We have this sum of the product of these two things, and we just use Cauchy–Schwarz to separate them. Once you've separated them, you replace these L-functions by Dirichlet polynomials. This is a standard technique in analytic number theory: the central value of an L-function can be replaced by a Dirichlet polynomial, whose length depends on the analytic conductor. Then we have a Dirichlet polynomial here and a sum here, and there's a technique due to Deshouillers and Iwaniec — the spectral large sieve — that gives a really nice way of bounding these sums of Dirichlet polynomials. So we use the spectral large sieve once we've separated these things, and we end up being able to show that this sum is at most T to the three halves times t_φ. What we were trying to show is that it's no larger than T times t_φ, so we're losing T to the half, and this can be quite drastically bad, because T runs all the way up to t_φ. The reason it's lossy is the same as before: this L-function is quite complicated — it has a big conductor and the sum is short, so the large sieve isn't a great fit. What's interesting is that if we use this technique alone, and nothing else, we recover Sogge's bound — we get back exactly what Sogge gives. Okay, so that's the first strategy. It's quite lossy — in fact very lossy when the dyadic parameter T gets close to t_φ.

The second strategy is to use, as Henry mentioned, the fact that these L-values are non-negative, and just use a sup bound on this L-value here and pull it out front. We know a subconvex bound: this special value grows no faster than t_ψ to the one third. So we pull that one-third power out front, and since everything here is non-negative we don't have to worry about absolute values, and we're left with bounding this first moment. (Strictly speaking, we actually apply this via Hölder in a slightly more careful way, but this is fine for the purposes of the talk.) And the reason this works — which is quite new, and at least wasn't at all obvious to us — is that when we have this L-function here alone, we can actually explicitly write this sum out in terms of something else. We can write it as a main term, of size T², which is what you'd expect under the generalized Lindelöf hypothesis — so conjecturally this really is how big this thing is — plus a dual term. And the dual term is kind of surprising: it's equal to an integral of a different collection of L-functions. We started with a GL(2) moment — a sum over GL(2) Maass forms of a GL(3) × GL(2) Rankin–Selberg L-function — and we ended up with a GL(1) moment: an integral over t of GL(4) × GL(1) values. Really it's a GL(3) × GL(1) times a GL(1), so it's kind of a GL(4) object.
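Schematically, and suppressing the precise weights, the identity behind this second strategy has the shape (my paraphrase of the slide):

\[
\sum_{t_\psi \le T} L\!\left(\tfrac{1}{2}, \mathrm{sym}^2 \varphi \times \psi\right)
= \underbrace{c \, T^2 + \cdots}_{\text{main term}}
\; + \; \underbrace{\int \left(\text{GL(4)} \times \text{GL(1) central values}\right) dt}_{\text{dual term}}.
\]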
This turns out to be a generalization of a formula known as Motohashi's formula, which relates the fourth moment of the Riemann zeta function to the cubic moment of central values of L-functions associated to Maass forms. There's also a related result due to Conrey and Iwaniec on the cubic moment of central values of L-functions of automorphic forms, related to the fourth moment of Dirichlet L-functions — essentially a classical analogue. What's interesting is that we actually prove an exact identity: we show this is exactly equal to this main term plus a dual term, with explicit weights.

Okay, so once we have this, we have a nice strategy: to bound our moment, we just need to understand how to bound this dual moment here. And we do that by Cauchy–Schwarz again: we separate out these two L-functions — the square of this Riemann zeta factor and the square of this special value of this L-function — integrated over the range. And then we do the same as before: we replace them by Dirichlet polynomials and use a large sieve, so we have a second moment of Dirichlet polynomials integrated over an interval — I'll skip the details. Again, we end up with some complicated bounds, and again this is a bit lossy, because this L-function here is a bit long compared to the interval we're integrating over. But we get something decent: it's better than the previous bound when T is somewhat close to t_φ. The previous bound was really bad as soon as the dyadic parameter T was somewhat large; these bounds are much better when T is somewhat large, and actually worse when T is quite small.

All right, so those are the first two strategies. The third and final strategy is something similar. In the second strategy, I pulled this L-value out front, and I showed that once I remove it, I'm left with just this single L-function, and the sum is exactly equal to a main term plus a dual moment — a different moment of different L-functions with a different length: I start with a sum and end up with an integral. The third approach doesn't take out this L-value at all. We don't bound anything out front; we start with the full sum, and we don't use Cauchy–Schwarz or Hölder or anything like that. Instead we use other standard analytic techniques, like the Kuznetsov formula, the Voronoi summation formula, and the spectral expansion of shifted convolution sums. And what we're able to show is that this moment of L-functions is exactly equal to a main term, of size T², which is what we would expect under the generalized Lindelöf hypothesis, plus a dual term, which is another moment of L-functions. What's interesting is that this is similar to what we saw before: we start with a moment of L-functions and end up with a main term plus a different moment of L-functions. But here we actually end up with the same moment of L-functions, just weighted differently and with a different length: initially this was weighted by one and now it's weighted by T²/t_φ, and initially this was of length T and now it's of length t_φ/T. This is quite surprising — it's not at all obvious.

[Audience] Is it surprising? Is this what you call reciprocity?
[Speaker] Yeah — you can think of this as a generalization of Kuznetsov's and Motohashi's formula for the fourth moment of the central value of a Hecke–Maass cusp form.
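In symbols, the duality just described has roughly the following shape — a paraphrase; the precise weights involve the test functions discussed below:

\[
\sum_{t_\psi \le T} L\!\left(\tfrac{1}{2}, \psi\right) L\!\left(\tfrac{1}{2}, \mathrm{sym}^2 \varphi \times \psi\right)
\;\rightsquigarrow\;
(\text{main term}) \; + \; \frac{T^2}{t_\varphi} \sum_{t_\psi \lesssim t_\varphi / T} L\!\left(\tfrac{1}{2}, \psi\right) L\!\left(\tfrac{1}{2}, \mathrm{sym}^2 \varphi \times \psi\right).
\]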
So they showed that you have this kind of duality formula: if you look at the fourth moment, it's equal to the same fourth moment with a different weight.
[Audience] What about Valentin Blomer?
[Speaker] Yeah, so there's also work of Valentin Blomer with Xiaoqing Li and your colleague Stephen D. Miller, who looked at a GL(4) × GL(2) moment — the version of this where the L-function does not factorize. They saw something similar, except they were thinking of t_φ as being fixed. So we call this spectral reciprocity, because it relates one moment of L-functions to a different moment of L-functions — one spectral sum to a different spectral sum.
[Audience] It reminds me more of the approximate functional equation than of reciprocity.
[Speaker] Well, we don't use approximate functional equations at all — this doesn't use approximate functional equations anywhere.
[Audience] I mean the switching of the lengths, according to...
[Speaker] Yeah, I guess you can think of it in some sense as being like an approximate functional equation with very good lengths.
[Audience] Do you have an exact identity there, or just the sizes?
[Speaker] It's an exact identity. We have a test function here — instead of this sharp cutoff we actually have a test function here and a different test function here, related in terms of hypergeometric functions, and we then bound those. But it's an exact identity, not an approximate one. Yes — that F should be a psi, sorry, that's a typo. Was there a question from Giorgio? You had your hand raised? No? Okay.

Okay, so this is a generalization of the identity of Kuznetsov and Motohashi, relating this GL(2) moment to the same GL(2) moment weighted differently, as before. I said this is a reciprocity formula, and this kind of thing has been studied recently by Blomer, Li, and Miller, among others. The reason why this is a good strategy is that we started in one range and ended up in a different range. A priori that's bad, because we've still got a moment of L-functions to bound; but the good thing is that we already know how to do that, because we have our first and second strategies. So if T is large, we switch, and we end up with a sum of smaller length, for which we already have strategies. This is why the third strategy comes after the first and second: the third strategy reduces us to a moment that we can then bound by the first and second strategies.

Okay, so we combine these three strategies — plus some additional tricks, using Hölder's inequality, that I'm not going to go into, interpolating with things like the twelfth moment of special values of L-functions.
[Audience] Do you get the oscillatory weight on the dual side?
[Speaker] Kind of. We haven't actually looked at the oscillatory nature of the weights; we're just bounding them in absolute value — we have to bound them somewhat delicately in absolute value, but I doubt you can take advantage of any oscillation.

Okay, so what we're trying to show is that the initial portion of the spectral sum — broken into dyadic regimes, which is basically this sum over t_ψ up to T of these special values of L-functions — is bounded by a constant when T is somewhat smaller than t_φ.
And combining these three strategies, we're able to show this for most ranges of the dyadic parameter T — except when T is particularly small, namely when T is less than t_φ to the 3/19. In that range the best you can do is the first strategy, and that first strategy loses T to the half: instead of bounding this by one, we bound it by T to the half. If you plug in T equal to t_φ to the 3/19, you get t_φ to the 3/38. And that means the fourth power of the L⁴ norm is bounded by t_φ to the 3/38, where t_φ is basically the square root of λ. So we take fourth roots, replace t_φ by the square root of λ, and we get that the L⁴ norm is bounded by λ to the 3/304. Okay. So I'll stop there.
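For completeness, here is the bookkeeping at the end, under the normalizations stated earlier:

\[
\|\varphi\|_4^4 \ll_\varepsilon t_\varphi^{3/38 + \varepsilon}, \qquad t_\varphi \asymp \lambda^{1/2}
\;\Longrightarrow\;
\|\varphi\|_4 \ll_\varepsilon \lambda^{\frac{1}{4} \cdot \frac{1}{2} \cdot \frac{3}{38} + \varepsilon} = \lambda^{3/304 + \varepsilon},
\]

compared with Sogge's exponent 1/16 = 19/304 — a little over a sixfold improvement in the exponent.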