Thanks so much, Michael, and to all the other organizers. I'm very happy to have the chance to speak here about my work with Steiner and Khayutin on, as you said, the sup-norm problem in the level aspect. We'll begin by recalling some background on the sup-norm problem. To keep things concrete, we'll take M to be a compact Riemannian surface, let phi be an L2-normalized Laplace eigenfunction on that surface, and think about what happens when the eigenvalue tends to infinity. This is a problem in quantum chaos, or semiclassical analysis, of understanding how big such eigenfunctions can be. There's the so-called local bound, due to Hörmander and Sogge, which takes into account only how the eigenfunction behaves in small open subsets of the manifold, and which tells you that it's bounded by t to the one half, where t is the square root of the eigenvalue. One way to think of that proof (there are a few perspectives you could take) is that one can rigorously understand averages like the one I've indicated here, where you sum the squares of the values of eigenfunctions with spectral parameter in a window of size a bit bigger than one. You can prove asymptotic formulas telling you these averages are of size roughly t. If you drop all but one term in such an asymptotic formula, you recover the pointwise bound I've indicated. This bound is sharp in some situations. For example, on the two-sphere there are eigenspaces where all but one basis element vanishes at the north pole, so only one term contributes to the average, and that term has to be as big as the bound allows. But in some cases, for example when M is negatively curved or hyperbolic, it's expected that these bounds can be improved.
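In symbols, and in my own hedged notation (the talk slide has the precise statement), the average in question looks like:

```latex
% Local Weyl law on a compact surface M (schematic; constants suppressed).
% Here \phi_j runs over an orthonormal basis of Laplace eigenfunctions with
% spectral parameters t_j, and z \in M is a fixed point.
\sum_{t \le t_j \le t+1} |\phi_j(z)|^2 \;\asymp\; t \qquad (t \to \infty).
% Dropping all but one (nonnegative) term recovers the local bound:
|\phi_j(z)| \;\ll\; t_j^{1/2}.
```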
But, since this is a number theory seminar, we come to the caveat, which is that the improvements have been made in a significant way, meaning in the exponent rather than just a logarithmic factor, only when the manifold is an arithmetic manifold and the eigenfunction phi is also arithmetic, in the sense that it's a Hecke eigenform. There's a pioneering result in the subject, going back almost 30 years now, due to Iwaniec and Sarnak. What they consider is, for example, the modular surface, which I'll call M: the quotient of the upper half-plane by the modular group. And they take phi to be a cuspidal Hecke eigenform. So it continues to be a Laplace eigenfunction in the sense of the previous slide, but it now also satisfies an eigenfunction condition under the Hecke operators. This in particular tells you that phi is determined up to a scalar by the eigenvalues we know about it, which removes the multiplicity issue that causes problems when you try to understand the sup-norm problem more generally. Assuming these conditions, they managed to show the following improvement, where the exponent one half is replaced by something smaller by the fraction 1/12, up to a quantity we can take to shrink as the eigenvalue grows to infinity. Their result and their method kickstarted a whole industry; the last time I checked, there were on the order of 100 papers that have cited it and built upon it in various ways. For example, people have generalized it to the level aspect, the subject of this talk, where you don't vary the eigenvalue but instead vary the underlying manifold, and you try to understand what happens there. Many people have generalized these results to other spaces, such as quotients of more general groups by arithmetic groups.
And there are also many results where people bound not just values of eigenfunctions at points, but integrals of eigenfunctions over submanifolds, for example. It would be impossible for me to summarize, but there are lots of papers generalizing this in many directions. This talk will be specifically about the level aspect of the problem, which I'll define in a bit more detail in a second, but the methodology will play a big role in the talk, so I want to pause to explain in broad strokes what the Iwaniec–Sarnak argument actually was, since it's the prototype people have generalized, and then indicate very briefly how we're going to depart from it before going into the details of our results. The argument in a nutshell considers what we now call amplified second moments: expressions like the one on the first slide, a sum over phi with spectral parameter in a window of size roughly one of the squares of its values, but with an additional weight factor, called an amplifier, following the work of Iwaniec, Duke–Friedlander–Iwaniec, and then Iwaniec–Sarnak. The idea is that the coefficients c_ell are real numbers between minus one and one, chosen to conspire with the signs of the Hecke eigenvalues of the specific eigenfunction phi that one cares about. If one can prove an asymptotic formula for such an expression, much as one could without the amplifier, then one can hope to choose these signs so as to amplify the contribution of the given phi at the expense of the other forms. The asymptotic formula, after dropping all but one term, then gives a better-than-trivial bound for the values of that eigenfunction. To actually make this kind of argument work, one needs a few inputs.
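Schematically, and in my own hedged notation, the amplified second moment has the shape:

```latex
% Amplified second moment (schematic). Here \lambda_\phi(\ell) are Hecke
% eigenvalues, c_\ell \in [-1,1] are the amplifier coefficients, and the
% outer sum runs over eigenfunctions with spectral parameter near t.
\sum_{|t_\phi - t| \le 1}
  \Big| \sum_{\ell \le L} c_\ell\, \lambda_\phi(\ell) \Big|^2\, |\phi(z)|^2 .
% Choosing c_\ell = \mathrm{sgn}(\lambda_{\phi_0}(\ell)) makes the inner
% factor large for the distinguished form \phi_0; dropping all other
% (nonnegative) terms then bounds |\phi_0(z)|^2.
```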
For one, one needs to know that these Hecke eigenvalues are not all zero for ell in the range under consideration, and one needs that in a quantitative form: one needs to know that they're all pretty big. There's a trick that's now standard in the subject, going back to the pioneering works on amplification that I mentioned, which is to use the Hecke multiplicativity relation between the eigenvalue at a prime and at the square of that prime. That identity shows that the two can't both be too small, and it gives a source of Hecke eigenvalues that are provably bounded away from zero. That's the main leverage one has for making the amplifier large. So, OK, you can make the amplifier large; you then need to actually bound the amplified second moment. This is done using what's called the pre-trace formula, which I've written down here. The left-hand side is what you get by majorizing the condition on t by some weight function h that is, say, smooth, big on the relevant interval, and pretty small otherwise, then expanding the square and using Hecke multiplicativity to collapse products of Hecke eigenvalues into individual Hecke eigenvalues. The right-hand side is what's called the geometric side of the pre-trace formula. It involves an integral transform k defined in terms of h, and a sum over two-by-two integral matrices gamma of given determinant n, the index of the Hecke eigenvalue. And k is evaluated at something like the distance between gamma z, the image of the point z at which you're evaluating, and z itself. So there's an analytic problem of estimating k in terms of h, and then a diophantine problem of estimating how the various gamma contribute.
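The multiplicativity trick can be made concrete. For a suitably normalized newform and a prime p coprime to the level, the eigenvalues satisfy lambda(p)^2 = lambda(p^2) + 1. Here is a quick numerical sanity check, with a sweep of my own choosing, that the two can't simultaneously be small:

```python
# Sanity check of the amplification lower bound: for any real x = lambda(p),
# the Hecke relation forces y = lambda(p^2) = x^2 - 1, and then
# max(|x|, |y|) >= 1/2 always.  (If |x| < 1/2 then |y| = |1 - x^2| > 3/4.)
def hecke_pair(x):
    """Given lambda(p) = x, return the pair (lambda(p), lambda(p^2))."""
    return x, x * x - 1.0

# Sweep x over [-2, 2] in steps of 0.001 and record the worst case.
worst = min(max(abs(x), abs(y))
            for x, y in (hecke_pair(t / 1000.0) for t in range(-2000, 2001)))
print(round(worst, 3))  # the minimum over the sweep stays >= 0.5
```

The true minimum of max(|x|, |x^2 - 1|) is (sqrt(5) - 1)/2, attained where the two curves cross; either way, at least one of the two eigenvalues is bounded away from zero.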
For instance, one needs to understand how many gamma have a given determinant and distance, in this sense, at most delta, for various ranges of delta that we should think of as pretty close to zero. These are the problems they introduced and then solved in their paper. And what can we say about this? The main thing is that their paper is 30 years old and no one has managed to improve the exponent since. It seems like one runs into a wall; people have thought about it, and there doesn't seem to be much room to improve. But there is a good bit of inefficiency that comes from having to impose this condition on the Hecke eigenvalues. If we knew something more robust about them, like that they were typically bounded away from zero, then, as Iwaniec and Sarnak themselves remarked, one could improve their exponent quite a bit. But we don't know how to do that, and it's an open question to find any way to do better here. All the other directions I've alluded to for the sup-norm problem use the same basic sketch: a step where you make an amplifier big, a step where you write down something like a pre-trace formula and do some analysis, and a step where you do some diophantine counting. So this is an archetype for much of the literature on the problem, in many directions. Things changed a few years ago when one of my co-authors, Raphael Steiner, introduced a new approach to this sup-norm problem in these arithmetic settings for surfaces, where instead of using an amplified second moment as on the previous slide, you consider what happens if you try to understand an unamplified fourth moment.
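To give a feel for the diophantine step, here is a toy count, purely illustrative and not the lattice-point argument from any of these papers, of integer two-by-two matrices with given determinant and bounded entries:

```python
from itertools import product

def count_det(n, X):
    """Brute-force count of integer 2x2 matrices [[a, b], [c, d]] with
    a*d - b*c == n and all entries bounded by X in absolute value.
    In the sup-norm argument one counts the finer set of such matrices
    that move a fixed point z by hyperbolic distance at most delta."""
    rng = range(-X, X + 1)
    return sum(1 for a, b, c, d in product(rng, repeat=4)
               if a * d - b * c == n)

print(count_det(1, 3))  # matrices of determinant 1 with entries in [-3, 3]
```

Even this crude count already shows the tension in the problem: the number of matrices grows polynomially in the entry bound, while only the few that nearly fix z contribute to the main term of the geometric side.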
In some sense, the point of an amplifier is to increase the contributions of the bad phi, the ones whose sup-norm is large, so that you can prove they don't exist, and the best possible way to do that is just to put the value of phi itself into the weight. But then the obvious question is whether you can actually make sense of the asymptotics of such expressions, whether you can understand them rigorously. Steiner introduced a technique for doing this in what could fairly be described as a test case, not directly related to many of the problems people had considered previously. He showed that you can rewrite this fourth moment over a family of eigenfunctions, as I've indicated schematically here, as something like an inner product of a pair of theta functions, which you can then hope to estimate using geometry-of-numbers techniques; I'll go into more detail about what exactly is meant by that later in the talk. Once you have such an identity set up, you can estimate individual values by dropping all but one term and using that everything is positive. He wrote a very short paper, maybe six pages, introducing this technique and applying it in a basic example, and a few people or teams of people have applied the technique in the intervening years. First, the other two of my co-authors, Ilya Khayutin and Raphael Steiner, applied it to the weight aspect on arithmetic surfaces, so like the Iwaniec–Sarnak setup but with the weight varying, and they got a very strong bound there that's now the world record. In the non-compact case, for example for SL2(Z), it reproved a record that had already been established by a different method.
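Schematically, and in my own notation, the shape of the rewriting is:

```latex
% Fourth-moment identity (schematic). The unamplified fourth moment over
% the family is rewritten as an inner product of theta functions:
\sum_{\phi} |\phi(z)|^4 \;\approx\; \langle \Theta_z, \Theta_z \rangle ,
% and positivity then allows dropping all but one term:
|\phi_0(z)|^4 \;\le\; \sum_{\phi} |\phi(z)|^4 .
```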
But at the very least, it showed the promise of the technique to see things differently, and also to extend to compact quotients results that had only been known for non-compact quotients; it plays a role substituting for the Fourier expansions in Xia's argument. Then, as I've indicated, Blomer–Harcos–Maga–Milićević recently applied this to hyperbolic three-spaces and their quotients, in what I've called the K-type aspect. And what I'll talk about today is a preprint that went up a couple of months ago, joint with Khayutin and Steiner, where we apply this technique to the level-aspect version of the problem. So in the talk I'll say what I mean by level aspect, state our results, and then try to say something about what we actually did, what kind of analysis went into making it work. I should mention that as of now this technique does not seem so hopeful for attacking the motivating question where you just vary the eigenvalue, the Iwaniec–Sarnak problem: it seems very difficult to improve upon their bound, or even upon the trivial bound, using this technique, but maybe some new ideas are needed. OK, so with that overview, what do we mean by the level aspect? For the talk, I'll let N be a natural number that we think of as going off to infinity, and I'll always assume it's square-free. Natural numbers come in two extreme classes: the square-free ones, or maybe the primes, and the ones that are powers of a fixed prime, for example two to some large power. For the latter examples, it turns out the sup-norm problem has a rather different flavor, more closely aligned with the spectral and holomorphic aspects I indicated previously.
And there's a whole different discussion there that many people have worked on, in particular Abhishek Saha and Yueke Hu, in many papers going in that direction. Here we're going to take N to be square-free; you can think of it as just a prime and you won't lose anything for the talk. We'll let M be the quotient of the upper half-plane by the standard congruence subgroup Gamma_0(N). This is a quotient whose volume grows like N as N tends to infinity. We're going to assume as before that phi is a Laplace eigenfunction, now cuspidal and defined on this varying manifold M. I'll always assume it's again a Hecke eigenform, and further that it's a newform in the sense of Atkin–Lehner, and I'll assume the eigenvalue is confined to some fixed window: we fix in advance a bound on the eigenvalue and require it to lie in that window as the level N varies. For the sake of normalization, I'll assume that the L2 norm squared equals the volume of the manifold. Informally, this says that on average, in an L2 sense, the values of the form have size one. That's a convenient normalization because it motivates what I think is the folklore conjecture, which says the L-infinity norm should likewise be bounded by something like one, or something growing very slowly in N. That's to be compared with the analogue of the local bound, N to the one half, which can again be proved pretty easily using a pre-trace formula. So just as in the eigenvalue aspect, we have a gap to bridge between the trivial exponent one half and the optimistic conjecture where the exponent could be, say, zero. Many people have worked on this question. Iwaniec–Sarnak was in 1995, and the first improvement, at least by publication date, is from 2009, by Blomer and Holowinsky, who improved the exponent by the fraction 1/37.
They introduced a lot of new ideas into the subject. In particular, unlike in the eigenvalue aspect, much more serious diophantine considerations come in here: it matters a lot whether, for example, the real part of the point at which you're evaluating is well or badly approximable by rational numbers, and they give different arguments in the two cases. But the basic prototype of the proof, the overall method, was as in Iwaniec–Sarnak, and that's the case for all the results in this table. They all use an amplified pre-trace formula, with better and better ideas to handle the diophantine analysis, or to construct the amplifier, and then to do all the necessary counting. Eventually the exponent shrank to one sixth, which you can think of as one third of the way from the trivial bound to the optimistic bound. That might remind you of other natural barriers in analytic number theory: in the subconvexity problem, for instance, a third of the way from convexity to Lindelöf for the zeta function is what's called the Weyl bound, which is known as a very hard bound to improve upon; many people have, but only by a little. And there are many settings where a Weyl-type bound is the best known, and it feels like a natural limit. So, to emphasize again: all these results use the amplified second-moment approach of Iwaniec–Sarnak, improving the diophantine analysis that goes into that method. Steiner, Khayutin and I tried looking at this question using the fourth-moment approach, and we didn't really know whether it had any chance of succeeding, because by then we had seen that it succeeds pretty well in the weight aspect but seems to fail pretty miserably in the spectral aspect, the eigenvalue aspect.
The level aspect, for square-free levels, is intermediate between those two aspects in certain respects. For about six months it seemed that it was definitely going to fail, but we at least had the hope of coming up with a good reason why it fails in the level aspect that we could write up and share with people, saying: here's something where some new idea is needed. Then we ended up actually making it work, so that was nice. What we get is a further improvement from one sixth to one quarter, which is exactly halfway between the trivial bound and the optimistic bound. We were quite surprised this worked out. Well, it's a preprint, so you can tell us whether you believe it worked out, but we do. And it's a rare kind of thing: for the zeta function, the analogue would be proving a bound of t to the one eighth, which is hopelessly out of the question, whereas here you apparently can get halfway. OK, so that's the result; I'll try to say something about what went into its proof in the remainder of the talk. As was mentioned, maybe a little more about why people care. I've indicated here a few applications, somewhat internal to the subject. For example, you can interpolate this L-infinity bound with L4 bounds proved by other people to get better Lp bounds in general. You can make Wilton's estimate for additively twisted sums of Hecke eigenvalues quite uniform with respect to the level of the eigenform. And you can improve certain subconvexity estimates in some cases: Hou and Chen had proved a subconvex bound in a hybrid twisted case conditional on exactly the sup-norm bound we're considering here.
So just by plugging our new exponent into their result, you get a better result in their setting. And one of my co-authors has applied this to improve the known bounds on the diameter of certain arithmetic hyperbolic surfaces. OK, maybe that's it for now on applications. Before I go any further, are there any questions on what we're doing, what the result says, anything else? Peter, I have a question. If you take the p-adic quotients instead of this continuous quotient, you get a graph, and there are a lot of people studying L-infinity norms of eigenfunctions on graphs. So I assume your results apply there and give, for these Ramanujan-type graphs, some bound. Have you worked that out? Yeah, that's a good question. The answer is yes, and it'll in fact be on one of the slides coming up. OK, thanks. Excuse me, just a follow-up question: does it give any bound on the diameter of Ramanujan graphs? So I think what he observed is that it gives the bound that follows from Deligne, or Eichler, or whatever, but without appealing to that. So it recovers something like two times the log; it recovers the same bound, but with a proof that doesn't need those inputs, which for some purposes is what one wants. Paul, I had a quick question. Sure. The folklore conjecture you mentioned, the N-to-the-epsilon bound: I seem to remember there was a paper by Templier showing that in some setting that's not true. What was that? Yeah, so the setting would be if you, for example, allowed non-trivial central character. If you work with Gamma_1(N), then there are examples where the most optimistic thing fails. Was it N to the quarter that he got? I'd have to think to remember; no, I think it might have been even bigger than that. I don't want to get it wrong, but it happens for well-understood reasons related to the Fourier expansion and the behavior near the cusp.
It's like one term contributing to the Fourier expansion in certain regions, something of that flavor, and that just doesn't happen for the example we're considering here: for Gamma_1(N) with N square-free, that kind of thing won't happen. I just thought, in case it's a quarter, then I was going to ask: are you proving the best possible in that setting? I don't remember; that's a good question. I seem to recall it being a third. OK, if you want another coincidence though: the analogous estimate over a function field was proved essentially by Will Sawin a couple of years ago, where what he proved is that you can take delta tending to zero as the cardinality of the underlying finite field tends to infinity. So at least if p is sufficiently large, you get something of that shape. But for each individual value of p, it seems likely that one could adapt our proof to the function-field setting and give a better bound there. So maybe that's one example where the two bounds match up: what he gets using ell-adic cohomology and all that versus what we get by just averaging. OK, so I've indicated already in my response to one of these questions that our results hold a bit more generally. And I would have been happy, just for the sake of giving an informal talk, to present the proof in the case I already described, with Gamma_0(N). But as it turns out, the proof is a bit simpler to describe for a different specialization of the general result. So I'm going to state the general result and then specialize it in a different way, one that makes the proof easier to go through, I hope. So what's the more general result? It concerns a quaternion algebra B over the rational numbers.
These are classified up to isomorphism by their discriminant, which I'll call d_B, a square-free natural number. A good example: if B is the algebra of two-by-two matrices, then d_B is one; that's the example relevant to what we've been talking about so far. In that example you can define trace and determinant maps, taking values in the rational numbers, satisfying a bunch of natural properties, and if you don't know what quaternion algebras are, they're basically algebras having analogous maps satisfying analogous properties. They come in two flavors according to what happens when you extend scalars to the real numbers: you either get the algebra of two-by-two real matrices, which is called the indefinite case, or you get Hamilton's quaternions, which is called the definite case. In the definite case, the map we've called determinant, more traditionally called the reduced norm, defines a positive-definite quadratic form on B, something that in suitable coordinates looks like a sum of four squares, unlike the determinant on the two-by-two matrix algebra, which is a form of signature (2,2), hence indefinite. For some arguments, or really for some definitions, it'll be easier to switch to the definite setting. Next, we again let N be a square-free integer tending to infinity, coprime to the discriminant, and we choose an Eichler order of level N. If you don't know what an Eichler order is, you can just focus on this example, where it's the order, meaning a subring of rank four, that looks like this; an Eichler order in general is something that locally looks like this when reduced modulo N. And then, instead of our manifold, hyperbolic space modulo Gamma_0(N), we take an adelic quotient of this shape, defined using the adelization of the Eichler order and a copy of SO(2).
Again, if you don't know what adelic quotients are, don't worry about it; just think of this as the general way to reproduce that example. But then, coming back to Peter's question: as Peter knows, and as many people here know, these quotients in the definite case, provided you further quotient by SO(3), give finite Ramanujan graphs. So some of the eigenfunctions we consider will be eigenfunctions on those graphs, and we'll be bounding their sup-norms. So this is a simultaneous generalization of certain hyperbolic surfaces and certain graphs; more precisely, in the definite case the quotient is a finite union of spheres, which you can think of as the vertices of a graph. And if there are any questions on notation or anything, please feel free to jump in; I don't mind being interrupted as I go. So, OK, we'll fix some cutoff T that bounds the eigenvalues involved, independent of N. We'll let script F be an orthonormal basis of, again, Hecke–Laplace eigenforms on our manifold M with spectral parameter bounded by T. It'll be convenient, just to simplify writing, to introduce the notation V for the product of d_B and N, which is pretty close to the volume of these manifolds with respect to the natural Riemannian metric; we'll think of V as the main parameter going off to infinity. And then the main bound we prove is: whatever point you choose on your manifold, the fourth moment of your family at that point is bounded by a bit more than V. This is something you'd expect to be essentially best possible: we normalize as before, so that on average, in an L2 sense, the forms have size one, and the family has size roughly V, because we've chosen a bounded spectral cutoff and the volume grows like V.
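In symbols, the main estimate and its sup-norm consequence read as follows (schematic, in my own notation, with normalizations suppressed):

```latex
% Main fourth-moment bound, V = d_B N \asymp \mathrm{vol}(M):
\sum_{\phi \in \mathcal{F}} |\phi(x)|^4 \;\ll_{T,\varepsilon}\; V^{1+\varepsilon}
  \qquad \text{uniformly in } x \in M.
% Dropping all but one (nonnegative) term gives, for each newform \phi:
\|\phi\|_\infty \;\ll_{T,\varepsilon}\; V^{1/4+\varepsilon},
% halving the gap between the trivial exponent 1/2 and the conjectural 0.
```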
And I've stated the main estimate only in the compact case; in the non-compact case we have some additional terms that blow up as the point tends toward the boundary. But near the boundary we have other methods, using just the Fourier expansion, that give good bounds. So in all cases we get the very clean result that the L-infinity norm is bounded by the volume to the one quarter, provided you have a fixed spectral cutoff. In particular, in the case that B is the matrix algebra, this is just the congruence quotient of the upper half-plane, and we get N to the quarter, as stated earlier. Here's a minor remark: everything stated here works also in the weight aspect, where you take not Laplace eigenforms but holomorphic forms, and in that aspect we can make the whole thing uniform in the weight, getting a bound that depends on the weight like k to the quarter, which is optimal, so we're best known in that respect as well. In this Laplace-eigenform aspect, the best we seem to be able to hope for is to recover the trivial bound with respect to the eigenvalue. OK, so that's the result whose proof we're going to present. Are there any questions on that? So improving this quarter would directly improve the diameter bound? Oh, that's a good question. That sounds believable; I don't know if Raphael's here and wants to answer. I'm sure he's thought of it, and I think the answer might be yes. I see, so somehow it's a barrier: if you improve this, you improve the diameter. I think so; don't take my word on it, but you can look at Raphael's preprint, and I think he answers that in the introduction. Sure, thanks. OK. So, the overview of the proof: we're going to find a theta function, in the spirit of those defined by Shimizu, that we'll call capital Theta. It'll live on Gamma_0(V).
So Gamma_0(V) is the standard congruence subgroup of level V, the product of the discriminant and N. And Theta will satisfy something like the following identity: the fourth moment we want to estimate equals, up to normalization, the squared L2 norm of this theta function. So once we prove this identity, all we have to do is estimate that norm. Of course, it's not obvious that that's any easier, but it's something new to play with. So we're going to get our hands dirty and try to understand this norm of the theta function by integrating over a Siegel domain; we'll recall what we mean by that. That will eventually reduce the problem to counting problems of two types, which we'll call type one and type two, and then we'll solve the counting problems. As it turns out, the type one counting problems are pretty close to things that already showed up in the table of records I presented, when people optimized the amplified second-moment approach to the sup-norm problem. So maybe the main new ingredient is the type two bounds, but there are also some new things in the type one treatment. But first, I'll say something about where this basic identity comes from, to make it believable. It comes from the pre-trace formula, just as the study of amplified second moments did, but in a slightly different way. To present the proof as simply as possible, I want to go to the setting where the pre-trace formula itself is as simple as it conceivably can be. So I'm going to take B to be definite; then the determinant form is positive definite, and all of the fibers of that form on the order R are finite sets. I'll call those fibers R_n: the elements of our Eichler order with determinant n, a finite set.
I'll also take the point at which we bound the eigenfunctions to be the identity element of this adelic quotient, just for notational simplicity; one can always reduce to that by conjugating things a bit. And I'll take the cutoff T to actually be zero, which means that the Laplace eigenvalue will actually be zero. So I'm considering things that are harmonic, which means they're constant on the spheres, so we can think of them as functions on these finite sets, on these finite Ramanujan graphs. In that case the pre-trace formula has a very, very simple form: the left-hand side is just the average of the Hecke eigenvalue weighted by the square of the value of the form, and the right-hand side is, up to normalization, just the number of elements of norm n in the Eichler order. Quite simple. The definition of the Shimizu theta function is also a bit simpler in this setting, because in general theta functions are easier to define when attached to positive-definite quadratic forms than to indefinite ones; for indefinite ones you always have to introduce some majorant in the background. Here it's just a standard theta function, as in a first course on modular forms, whose coefficients are the numbers of elements of determinant n in the Eichler order. This defines a modular form of weight two on Gamma_0(V). And so the identity comes down to relating the pre-trace formula to the Shimizu theta function, and you can see the relation right away: the first is the Fourier coefficient of the second. By massaging this a little, we'll get the basic identity. So let's see how that goes real quick. We need one more ingredient to explain it, which is the notion of Jacquet–Langlands, or Eichler/Shimizu, lifts of our eigenfunctions phi.
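To make the "standard theta function" concrete, here is a toy computation with the Lipschitz order spanned by 1, i, j, k in Hamilton's quaternions standing in for the Eichler order (an illustrative choice of mine, not the order from the talk), where the n-th theta coefficient counts lattice elements of reduced norm n, i.e. representations of n as a sum of four squares:

```python
# Toy theta coefficients: r(n) = #{(a,b,c,d) in Z^4 : a^2+b^2+c^2+d^2 = n},
# the norm-n count in the Lipschitz quaternion order; Theta(z) = sum r(n) e(nz).
def theta_coeff(n):
    bound = int(n ** 0.5) + 1
    return sum(1
               for a in range(-bound, bound + 1)
               for b in range(-bound, bound + 1)
               for c in range(-bound, bound + 1)
               for d in range(-bound, bound + 1)
               if a * a + b * b + c * c + d * d == n)

def jacobi_four_square(n):
    """Jacobi's formula: r(n) = 8 * (sum of divisors d of n with 4 not dividing d)."""
    return 8 * sum(d for d in range(1, n + 1) if n % d == 0 and d % 4 != 0)

for n in range(1, 13):
    assert theta_coeff(n) == jacobi_four_square(n)
print("Jacobi's four-square formula verified for n = 1..12")
```

The closed form for these counts reflects exactly the fact used in the talk: the generating series of norm counts is a modular form, whose coefficients are therefore highly structured.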
So these are modular forms, now on Γ₀(V), that you get by just taking the q-series expansion attached to the Hecke eigenvalues of your given form. It's a theorem, proved by the authors indicated above using a comparison of trace formulas, that this actually defines a modular form, and moreover that any two distinct modular forms you get in this way are orthogonal to each other. This is one of the ways the multiplicity one theorem for these compact quotients can be proved. We're going to use the orthogonality of the Jacquet–Langlands lifts attached to our forms in a very serious way: the lifts are basically orthogonal, and the inner product of a lift with itself is roughly the covolume of the group, which is something like V; more precisely, it's given by an adjoint L-value whose size is known to be pretty close to one. All right, so that's where we've used multiplicity one, and we've defined these Jacquet–Langlands lifts. There's a basic lemma one can now write down, which says that the Shimizu theta function can be written as a weighted sum of these Jacquet–Langlands lifts, where the weights are exactly the numbers that we want to bound at the end of the day. The proof is very simple: you take the pretrace formula, multiply it by e(nz) on both sides, and sum over n. On one side you get these Jacquet–Langlands lifts, on the other side you get the Shimizu theta function, and the lemma pops out. Finally, if you take this lemma, you can use it to compute the inner product of theta against itself.
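Schematically (hedged: the weights, normalizations, and error terms are only indicative of the shape), the lemma and the identity it leads to might look like:

```latex
% Multiply the pretrace formula by e(nz) and sum over n: the counting
% side assembles into the Shimizu theta function, the spectral side into
% the Jacquet-Langlands lifts f_phi(z) = sum_n lambda_phi(n) e(nz):
\Theta(z) \;\approx\; \sum_{\varphi} \lvert \varphi(1)\rvert^{2}\, f_{\varphi}(z).

% Expanding <Theta, Theta> and using the near-orthogonality
% <f_phi, f_psi> ~ V delta_{phi = psi} of the lifts then gives the basic
% identity relating the L^4 norm of Theta to the fourth moment:
\lVert \Theta \rVert_{2}^{2}
  \;\approx\; \sum_{\varphi, \psi}
      \lvert\varphi(1)\rvert^{2}\,\lvert\psi(1)\rvert^{2}
      \,\langle f_{\varphi}, f_{\psi}\rangle
  \;\approx\; V \sum_{\varphi} \lvert \varphi(1)\rvert^{4}.
```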
You expand that first as a double sum, over forms in your family, of these inner products, but we already mentioned that these inner products detect the diagonal condition, and so you end up getting basically this fourth moment of the values you're interested in, up to some normalizing factors. So that's the basic identity; you see that we've used the pretrace formula and the multiplicity one theorem in a very serious way here. And after all that, what remains is just to prove the bound for the L^4 norm of the theta function. Are there any questions on that? I imagine this was kind of fast, but I hope it gave a flavor of what actually goes into the argument, what the inputs are. If there are no questions, where we're going now is to reduce this probably abstract-seeming thing to some concrete counting problems, which we'll then talk about in the rest of the talk. One very minor caution is that, as I've written them, these integrals are actually infinite, because there's a non-cuspidal contribution that I haven't indicated in each of these steps. One can either regularize things by subtracting that contribution, or work with differences between two different values and study sums like this instead, and those sums are adequate for proving these fourth moment bounds. So I'm just going to ignore this technicality going forward, and what I'll explain now is how we actually try to bound these expressions. So theta lives on a quotient like H mod Γ₀(V), and for that quotient I copied, from I think Diamond and Shurman's book via Math Stack Exchange, the following image of a fundamental domain. You've got some part near infinity and then a bunch of parts near zero, and you want to estimate the integral of this thing over that whole domain.
The way you do it, for the part near infinity, is to cover it by what's called a Siegel domain, the thing I drew in red here, which has the advantage that when you integrate along the horizontal contours, you can use Parseval's identity to simplify the integral quite a bit and express it directly in terms of the Fourier coefficients of theta. Then you need to do something similar for all the other regions, near the cusp zero, and you can understand how theta looks there by using, for example, Poisson summation to work out its Fourier expansion. It turns out that if you try doing this naively, you do reduce to some counting problems, but they turn out to be, at least as far as my collaborators and I are concerned, very difficult; we couldn't see any way to approach them. The contribution from the part of the fundamental domain near infinity was quite easy to bound, but the remaining ones were quite difficult. So what we needed to do to make things work was to balance out the fundamental domains in a way that makes this part a bit harder and those parts a bit easier. At the end of the day we found a pretty efficient way to do that, which is just to take the standard covering by little fundamental domains for SL₂(Z), apply the Fricke involution to it, then apply Siegel domains to the parts of the cover that we get, and unfold the integral by integrating along each of those. So that was maybe one novelty in our argument, at least relative to our ignorance. I don't want to bore you with the ways one can go wrong in trying to prove this theorem, but once one has that idea, which we believe to be on the right track, the problem reduces to the counting estimates.
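As a sketch of the unfolding step just described (again schematic; the actual weight-two measure and truncations are suppressed):

```latex
% Integrating |Theta|^2 along a horizontal contour at height y and
% applying Parseval's identity to the Fourier expansion of Theta:
\int_{0}^{1} \lvert \Theta(x + iy) \rvert^{2}\, dx
  \;=\; \sum_{n} \# R(n)^{2}\, e^{-4\pi n y},
% so integrating over the heights appearing in a Siegel domain expresses
% the contribution in terms of weighted sums of the counts #R(n)^2.
```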
The counting estimates are, for example, of this flavor. These numbers you'll recognize as the Fourier coefficients of theta, so it shouldn't be too surprising that they show up in the estimates we need when we unfold an integral like that. We need to know, for example, that the sum over n up to x² of the square of the number of elements of our Eichler order of determinant n is bounded by maybe a bit more than x², where this notation means up to some factor like V^ε, and we need this in a range of x going up to about the square root of the volume. I'll try to give a sense of why this is not a completely trivial problem after a preliminary reduction. The reduction is this: these orders have a kind of boring part, the part given by the integers Z, and the orthogonal complement of that boring part is the traceless part, which I'll call R⁰: the elements of the order of trace zero. This direct sum is pretty close, up to some factor of two, say, to the order itself, so pretty much any counting problem involving an order R can be reduced to a counting problem involving the traceless part, which is the more interesting part. When you break up this sum, which counts pairs of elements of R, in terms of quadruples of elements where two of them are in Z and two are in R⁰, there's a pretty straightforward way to use the divisor bound to reduce this kind of estimate to estimates that just involve the traceless part of the Eichler order. So Σ₁ is a first moment of cardinalities in the Eichler order and Σ₂ is a second moment; these are what we call the type one and type two estimates, and we need to show that each of them is bounded by about x. Just to think for a second about what that's actually saying.
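In symbols, the reduction just described might be summarized as follows (schematic; the exact exponents and the precise V^ε conventions are as in the talk):

```latex
% Target estimate, needed for x up to about V^{1/2}
% (here "lessapprox" means "up to a factor of V^epsilon"):
\sum_{n \le x^{2}} \# R(n)^{2} \;\lessapprox\; x^{2},

% which, after splitting off the central part Z and applying the divisor
% bound, reduces to the type one and type two estimates for the traceless
% part R^0 = {alpha in R : tr(alpha) = 0}:
\Sigma_{1} := \sum_{n \le x^{2}} \# R^{0}(n) \;\lessapprox\; x,
\qquad
\Sigma_{2} := \sum_{n \le x^{2}} \# R^{0}(n)^{2} \;\lessapprox\; x.
```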
The number of integers n up to x² is roughly x², and each of these cardinalities is an integer, so we're trying to show that at most roughly x of these cardinalities are nonzero in the first place, and that whenever they're nonzero, they're bounded by basically one. If you think about what first and second moment bounds like these are saying together, that's more or less the content. So there's not a lot of room in a bound like that if you lose a little bit as you prove it. I guess we could get by in the non-compact case a bit more simply, for both of these problems, by very explicit analysis in coordinates. But to get this bound in the generality we have it, where it works in the compact setting, where you don't have such natural coordinates on your Eichler orders, and where you can even vary the underlying quaternion algebra, so the discriminant can vary freely, we didn't see many ways to do it other than the systematic way we eventually found. For the type one estimates, I've already mentioned that these were basically treated in the existing literature on the sup-norm problem via arithmetic amplification. Blomer and Michel wrote a paper proving the δ = 1/6 estimate in the definite quaternion algebra case, in which they proved, or at least implicitly proved, something like this bound for the type one estimates, which is adequate in the range we need. It's proved using things like Minkowski theory and some explicit estimates for the successive minima of the Eichler order equipped with the determinant form. So the only novel thing we do related to type one estimates is to give analogous estimates in the indefinite case, which, I guess, could be used retroactively to prove the one sixth bound in the compact indefinite case, which apparently hadn't been done.
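As a toy numerical illustration of these fiber counts and their moments, here is a sketch with a fixed order, the Hamilton integral quaternions, whose traceless part is Z·i + Z·j + Z·k with norm form a² + b² + c². Since the order is fixed here, the level and discriminant aspects are entirely absent, so this only shows the shape of the quantities involved, not the actual estimates:

```python
from itertools import product

def r3(n):
    """Fiber count #R0(n) for the trace-zero Hamilton integral quaternions:
    the number of integer solutions of a^2 + b^2 + c^2 = n."""
    if n < 0:
        return 0
    bound = int(n ** 0.5) + 1
    return sum(1 for a, b, c in product(range(-bound, bound + 1), repeat=3)
               if a * a + b * b + c * c == n)

# A few fibers: they stay small for small n, but are not literally bounded
# by one; the actual fiber bound is an estimate up to V^epsilon for a
# varying family of orders, not a statement about this fixed order.
print([r3(n) for n in range(1, 9)])  # [6, 12, 8, 6, 24, 24, 0, 12]

# First and second moments of the fibers, the analogues of the type one
# and type two sums Sigma_1 and Sigma_2 from the talk:
X = 10
sigma1 = sum(r3(n) for n in range(1, X ** 2 + 1))
sigma2 = sum(r3(n) ** 2 for n in range(1, X ** 2 + 1))
print(sigma1, sigma2)
```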
And there, the statement is just that the same bound holds if you replace the determinant by any majorant for it; we put some work even into formulating what the right replacement is, and then we adapted the argument. So those are the type one estimates, and it remains to treat the type two estimates, which I'll try to give some flavor for. The first observation is that the type two estimate follows from the type one estimate if you can prove the following thing that I call a fiber bound, which is just that, for all n in the relevant range, these numbers are basically bounded by one, say bounded by V^ε, which is what I mean by this notation. Because then the type two estimate, which is the second moment, reduces to the first moment once we bound each term individually by roughly one, and that gives an adequate bound. So the main and most difficult point was to find a way to prove this individual bound: to show that for any n up to roughly the volume, the number of trace zero elements in our Eichler order of norm n is basically bounded by one. If you want a really simple example of what this estimate means, in a case where you can't really see the full subtlety but which gives some flavor: the whole subtlety here is that the Eichler order R is actually varying, it depends on the discriminant and on the level N. But if we took R to be the Hamilton integral quaternions for simplicity, then the trace zero elements are just Z³ embedded in the standard way, and the estimate ends up saying something like, and I guess this is really an Archimedean analog of what we're doing, that the number of integral points in a ball of radius one is O(1). So it's not a very
impressive-sounding statement, but we're trying to prove a kind of distorted analog of that here; that's roughly what we're looking at. So, maybe to end a bit early, since I've already said a lot, I'll at least describe the input to the proof of the fiber bound. The main thing we use in the end is that if we have two elements of our Eichler order, then we can take their commutator, and the determinant of that commutator always satisfies a very strong congruence condition. Maybe it's simplest to look at the example where we have the matrix algebra and our Eichler order consists of matrices that are upper triangular mod V. When you take the commutator of two things that are upper triangular mod V, the result is nilpotent mod V, and it's pretty easy to see that the determinant of anything like that is divisible by V. This turns out to be a pretty robust observation. The way we apply it is to show that if we have one element, say γ₁, in the set we're trying to bound, then all of the products we get from other elements of that same set actually lie in some set S of cardinality one. So in some sense we're using one element, together with this strong condition on its commutator with any other element, to show that the other elements have their products with the given element highly constrained, and hence that those other elements are in turn quite constrained. I have a final slide that gives the argument for anyone who really wants the details; it's quite a simple argument at the end of the day. But I'll conclude now by summarizing: we improved on these bounds in the level aspect using fourth moments over families, studied via norms of theta functions and counting problems. And it's worth remarking that it is
still open to improve upon the Iwaniec–Sarnak bound in the eigenvalue aspect; we don't even know how to improve on the trivial bound in that aspect using this method. And with that, I'll thank you all for your time.
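To make the commutator observation from the fiber-bound discussion concrete, here is a hedged sketch in the model case mentioned in the talk: the order of integer matrices that are upper triangular mod V. The choice V = 101 and the random sampling are illustrative, not from the talk:

```python
import random

def mat_mul(A, B):
    """Product of 2x2 integer matrices given as nested lists."""
    return [[A[0][0] * B[0][0] + A[0][1] * B[1][0],
             A[0][0] * B[0][1] + A[0][1] * B[1][1]],
            [A[1][0] * B[0][0] + A[1][1] * B[1][0],
             A[1][0] * B[0][1] + A[1][1] * B[1][1]]]

def commutator(A, B):
    """[A, B] = AB - BA."""
    P, Q = mat_mul(A, B), mat_mul(B, A)
    return [[P[i][j] - Q[i][j] for j in range(2)] for i in range(2)]

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

V = 101  # stand-in for the level; any modulus exhibits the congruence
random.seed(0)

def random_order_element():
    """Random integer matrix that is upper triangular mod V, i.e. an
    element of the Eichler-type order { [[a, b], [V*c, d]] }."""
    r = lambda: random.randrange(-50, 51)
    return [[r(), r()], [V * r(), r()]]

for _ in range(1000):
    A, B = random_order_element(), random_order_element()
    C = commutator(A, B)
    # Mod V, A and B are upper triangular, so [A, B] is strictly upper
    # triangular (nilpotent) mod V, forcing V | det([A, B]).
    assert det(C) % V == 0
print("det[A,B] divisible by V in all sampled cases")
```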