So, I'll begin by finishing this cliffhanger proof. Essentially the argument is here already; I just need to explain how one evaluates these sums and why they work out to give the right power of log for the lower bound.

Let's look at the left-hand side first. I can expand out $A(\chi_d)^{k-1}$: since $A(\chi_d)$ is a sum over $n$ up to $x$, multiplying $k-1$ copies gives a sum over $n$ up to $x^{k-1}$ of some divisor-like function, call it $d_{k-1}(n; x)$, over $\sqrt{n}$, times $\chi_d(n)$:
$$A(\chi_d)^{k-1} = \sum_{n \le x^{k-1}} \frac{d_{k-1}(n; x)}{\sqrt{n}}\,\chi_d(n),$$
where $d_{k-1}(n; x)$ is the number of ways of writing $n = a_1 \cdots a_{k-1}$ with each $a_j \le x$. Now use the formula we know, evaluating the first moment plus a little bit: writing $n = n_1 n_2^2$ with $n_1$ squarefree, the sum over fundamental discriminants up to $X$ behaves like
$$\sum_{|d| \le X} L(\tfrac12, \chi_d)\,\chi_d(n) \sim C\, \frac{X}{\sqrt{n_1}} \log\frac{\sqrt{X}}{n_1}$$
for some constant $C$, which we'll forget about.

Now, to get the lower bound for the $k$-th moment I only need a lower bound for this quantity, and everything is positive, so I can focus on whichever terms I want and discard the rest. The logarithm here is at least a constant times $\log X$, and let's only keep $n \le x$, so that I don't have to worry about what $d_{k-1}(n; x)$ is: it's just $d_{k-1}(n)$. Since $\sqrt{n} = n_2\sqrt{n_1}$, and the first-moment formula contributes another $\sqrt{n_1}$, the denominator becomes $n_1 n_2$, so the left-hand side is
$$\gg X \log X \sum_{n = n_1 n_2^2 \le x} \frac{d_{k-1}(n)}{n_1 n_2}.$$
This is essentially a multiplicative sum; well, the function is multiplicative, and what I mean is that I can split the $n_1$ part and the $n_2^2$ part, so this roughly factors as
$$X \log X \Big( \sum_{n_1 \le \sqrt{x}} \frac{d_{k-1}(n_1)}{n_1} \Big) \Big( \sum_{n_2 \le \sqrt{x}} \frac{d_{k-1}(n_2^2)}{n_2} \Big).$$
This is not strictly correct because $n_1$ and $n_2$ can have a common factor, but that's a very minor nuisance. Then it's a simple matter to evaluate these. The first factor gives something like $(\log x)^{k-1}$. In the second, you can again work out what the power of log should be: the important thing to calculate is $d_{k-1}(p^2)$, which equals $k(k-1)/2$, so the second factor is $(\log x)^{k(k-1)/2}$. Here little $x$ is a small power of capital $X$, say $x = X^{1/(100k)}$, so $\log x \asymp \log X$. Putting it all together, and forgetting the dependence on $k$, the left-hand side is $\gg X(\log X)^{1 + (k-1) + k(k-1)/2} = X(\log X)^{k(k+1)/2}$. So you see this $k(k+1)/2$, which is what we wanted.

Then I have to do the same calculation for the second sum in Hölder, and again this is not so bad. This is a sum over fundamental discriminants $|d| \le X$ of $A(\chi_d)^k = \sum_{n \le x^k} d_k(n; x)\chi_d(n)/\sqrt{n}$. Once again, if I interchange the sum over $d$ and the sum over $n$, only the square values of $n$ survive and everything else disappears. Using that $d_k(n; x)$ is at most the $k$-th divisor function $d_k(n)$, and that $d_k(p^2) = k(k+1)/2$, this is bounded by $\ll X(\log X)^{k(k+1)/2}$. Combining the two bounds in Hölder's inequality then gives $\sum_{|d| \le X} L(\tfrac12,\chi_d)^k \gg X(\log X)^{k(k+1)/2}$.
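As an illustrative aside (not from the lecture): the two divisor sums that drive this computation can be checked numerically. The sketch below, a minimal Python experiment, computes the generalized divisor function via its multiplicativity, $d_r(p^j) = \binom{r-1+j}{j}$, and watches the two sums grow like the predicted powers of $\log x$.

```python
# Numeric sanity check of the two sums in the lower-bound argument:
#   sum_{n<=x} d_{k-1}(n)/n      ~ C  (log x)^{k-1}
#   sum_{n<=x} d_{k-1}(n^2)/n    ~ C' (log x)^{k(k-1)/2}
# using that d_r is multiplicative with d_r(p^j) = C(r-1+j, j).
from math import comb, log
from sympy import factorint

def d_r(n, r):
    """Number of ways to write n as an ordered product of r factors."""
    out = 1
    for _, j in factorint(n).items():
        out *= comb(r - 1 + j, j)
    return out

k = 3
for x in (10**3, 10**4, 10**5):
    s1 = sum(d_r(n, k - 1) / n for n in range(1, x + 1))
    s2 = sum(d_r(n * n, k - 1) / n for n in range(1, x + 1))
    # The two ratios should stabilize as x grows.
    print(x, s1 / log(x) ** (k - 1), s2 / log(x) ** (k * (k - 1) // 2))
```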
So in other words, all we are saying is that the $L$-function $L(\tfrac12, \chi_d)$ doesn't really look like this approximation $A(\chi_d)$, which is a very short truncation of the Dirichlet series defining $L$; but at least as far as size is concerned, they more or less have the same size. And so when I use Hölder's inequality, there's really no loss in this setting, and this gives the correct lower bound.

OK, now one can make some refinements to this argument. As I said, you can also prove the same kind of result if $k$ is a rational number bigger than 1, say $k = r/s > 1$. The trick before was to set up Hölder's inequality so that you only have to evaluate integer powers of some Dirichlet polynomial: evaluating a fractional power is hard, but with an integer power you can always expand out, interchange sums, and play around. We want to do something similar here. Now I don't want to take $A(\chi_d)^k$, because that's a fractional power of a Dirichlet polynomial, which I don't know how to handle. But it would be nice if $A(\chi_d)$ itself were roughly the $s$-th power of some other Dirichlet polynomial $B(\chi_d)$; then $A(\chi_d)^k$ would just be $B(\chi_d)^{ks} = B(\chi_d)^r$, again an integer power of a Dirichlet polynomial, and I can handle everything exactly as before. The coefficients of $A$ were easy to understand: they were just $\chi_d(n)/\sqrt{n}$. So I need to figure out coefficients which, when I take the $s$-th power, look like 1. And that's easy: I should just choose the coefficients of $\zeta(z)^{1/s}$ ($s$ is a bad variable name here, so use $z$), that is
$$\zeta(z)^{1/s} = \sum_n \frac{d_{1/s}(n)}{n^z},$$
the $1/s$-th divisor function. That's all you have to do. So define
$$B(\chi_d) = \sum_{n \le x} \frac{d_{1/s}(n)}{\sqrt{n}}\,\chi_d(n),$$
with $n$ going up to some very small power of capital $X$. Then expand, and play around with $\sum_d L(\tfrac12, \chi_d)\, B(\chi_d)^{s(k-1)}$, still an integer power of a Dirichlet polynomial; by Hölder this is bounded in terms of $\sum_d L(\tfrac12,\chi_d)^k$ and $\sum_d B(\chi_d)^{ks}$, and the latter is again an integer power of a Dirichlet polynomial, so everything goes through.

Now one thing you have to worry about here: how large can $x$ be? The whole thing should be a short Dirichlet polynomial, and its length is governed by $B(\chi_d)^{ks} = B(\chi_d)^r$. So I really want little $x$ to be a small power of capital $X$ like $X^{1/(100r)}$, rather than just $X^{1/(100k)}$. And now you can see there's a small problem: even if $k$ is very close to 1, say $k = 101/100$, you actually need to choose a much smaller value of $x$ than you could for $k = 1$ or $k = 2$. So when you carry out this argument, you get a lower bound of the right order of magnitude, but the bound depends on the height of $k$ (the size of $r$), rather than just on the value of $k$.
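To make the fractional-divisor coefficients concrete, here is a small sketch (my illustration, not from the lecture). It computes $d_\alpha(n)$ via the generalized binomial formula $d_\alpha(p^j) = \binom{\alpha+j-1}{j}$ and verifies that Dirichlet-convolving $d_{1/2}$ with itself recovers the coefficients of $\zeta$ itself, i.e. all 1's, which is exactly the property $B^s \approx A$ relies on.

```python
# zeta(z)^alpha = sum_n d_alpha(n)/n^z with d_alpha(p^j) = C(alpha+j-1, j),
# a generalized binomial coefficient.  Convolving d_{1/s} with itself s times
# should recover the coefficients of zeta, i.e. the constant function 1.
from math import prod
from sympy import factorint

def d_alpha(n, alpha):
    out = 1.0
    for _, j in factorint(n).items():
        # generalized binomial coefficient C(alpha + j - 1, j)
        out *= prod(alpha + i for i in range(j)) / prod(range(1, j + 1))
    return out

def dirichlet_convolve(f, g, N):
    h = [0.0] * (N + 1)
    for a in range(1, N + 1):
        for b in range(1, N // a + 1):
            h[a * b] += f[a] * g[b]
    return h

N, s = 50, 2
b = [0.0] + [d_alpha(n, 1 / s) for n in range(1, N + 1)]
conv = dirichlet_convolve(b, b, N)  # (d_{1/2} * d_{1/2})(n)
print(conv[1:11])                   # ~ [1.0, 1.0, 1.0, ...]
```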
So let me describe very briefly the result, due to Maksym Radziwiłł and me, which fixes this and gives a bound that is continuous in $k$. Already for zeta this was new, so let me state it just for zeta: completely explicitly, for large $T$, you get a lower bound of the shape
$$\int_0^T |\zeta(\tfrac12+it)|^{2k}\,dt \ \ge\ c(k)\, T (\log T)^{k^2},$$
with an explicit constant $c(k)$, and the point is to get the right sort of dependence on $k$: something like $e^{-k^2}$ rather than something like $e^{-k^4}$. This holds for all real $k \ge 1$.

As for the proof: I kind of like this proof, because it's like a nice joke. You can use this thing called the Sylvester sequence to write
$$1 = \tfrac12 + \tfrac13 + \tfrac17 + \tfrac1{43} + \cdots,$$
where at each stage you multiply all the previous denominators together and add one. This is a very rapidly increasing sequence whose reciprocals sum to 1; call the denominators $b_1, b_2, \ldots$. You don't have to use this, but it's kind of fun to. Then do the same thing for $1 - 1/k$: write down a sequence of natural numbers $a_1, a_2, \ldots$ whose reciprocals add up to $1 - 1/k$, again greedily, choosing the largest fraction you can at each stage, then the next largest, and so on. Both sequences, the $a_j$ and the $b_j$, increase very rapidly; think of the Sylvester sequence, where you're multiplying all the previous numbers and adding one, which certainly grows much faster than exponentially.

OK, so now the idea is to use versions of those polynomials $B$, here for the zeta function, taking shorter and shorter truncations, with the $a_l$ and $b_l$ as parameters. Let me explain how that works. I'm going to start with
$$\int_T^{2T} \zeta(\tfrac12+it)\, \prod_{l \ge 1} A_l(\tfrac12+it)\, \prod_{l \ge 1} B_l(\tfrac12-it)\,dt,$$
where
$$A_l(s) = \sum_{n \le T^{1/(100 a_l)}} \frac{d_{k/a_l}(n)}{n^s}, \qquad B_l(\tfrac12-it) = \sum_{n \le T^{1/(100 b_l)}} \frac{d_{k/b_l}(n)}{n^{1/2-it}},$$
each $n$ going up to some very small power of $T$. The idea is that $A_l$ is a stand-in for $\zeta(s)^{k/a_l}$, and $B_l$ is a stand-in for $\zeta(\tfrac12-it)^{k/b_l}$. So when I multiply all these powers of zeta out: the explicit $\zeta$ factor together with the product of the $A_l$'s is a stand-in for $\zeta(\tfrac12+it)^k$ (the product of the $A_l$'s alone approximates $\zeta(\tfrac12+it)^{k-1}$, since $\sum_l 1/a_l = 1 - 1/k$), and the whole product of the $B_l$'s is a stand-in for $\zeta(\tfrac12-it)^k$, since $\sum_l 1/b_l = 1$. So if that intuition is correct, what I've written down is kind of a proxy for the $2k$-th moment of zeta. But on the other hand, these are all short Dirichlet polynomials: each $n$ here only goes up to $T^{1/(100 a_l)}$, so when you multiply all of them out, the product still only goes up to $T^{1/100}$, and the same on the other side. So these are two short Dirichlet polynomials being multiplied against zeta. Then you can use Hölder's inequality to say this is bounded by
$$\Big(\int |\zeta|^{2k}\Big)^{1/(2k)} \prod_l \Big(\int |A_l(\tfrac12+it)|^{2a_l}\Big)^{1/(2a_l)} \prod_l \Big(\int |B_l(\tfrac12-it)|^{2b_l}\Big)^{1/(2b_l)}.$$
If you don't believe this inequality, at least check that the exponents add up to 1: this is $1/(2k)$, plus $\sum_l 1/(2a_l) = \tfrac12(1 - 1/k)$, and those two add up to one half; and $\sum_l 1/(2b_l)$ adds up to the other half.
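A small sketch (my illustration) of the two greedy expansions behind this Hölder setup: the Sylvester sequence for $1$, a greedy Egyptian-fraction expansion for $1 - 1/k$, and the exact check that the exponents sum correctly. For rational $k$ the greedy expansion terminates; for irrational $k$ it is infinite, which is exactly why the construction accommodates all real $k \ge 1$.

```python
# Sylvester: b_{l+1} = b_1*...*b_l + 1 gives 1 = 1/2 + 1/3 + 1/7 + 1/43 + ...
# Greedy expansion of 1 - 1/k gives the a_l's.  Check that
# 1/(2k) + sum_l 1/(2 a_l) = 1/2 exactly, so all Hoelder exponents sum to 1.
from fractions import Fraction
from math import ceil

def sylvester(terms):
    seq, prod_so_far = [], 1
    for _ in range(terms):
        seq.append(prod_so_far + 1)   # multiply all previous terms, add one
        prod_so_far *= seq[-1]
    return seq                        # [2, 3, 7, 43, 1807, ...]

def greedy(target):
    """Greedy Egyptian-fraction denominators for a rational in (0, 1)."""
    denoms, rest = [], Fraction(target)
    while rest > 0:
        d = ceil(Fraction(rest.denominator, rest.numerator))
        denoms.append(d)
        rest -= Fraction(1, d)
    return denoms

k = Fraction(7, 5)                    # an illustrative rational k > 1
a = greedy(1 - 1 / k)                 # e.g. [4, 28] for k = 7/5
print(sylvester(5))
print(1 / (2 * k) + sum(Fraction(1, 2 * d) for d in a))  # exactly 1/2
```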
So that inequality is correct, and then you can again evaluate everything in sight, because the first factor is the zeta moment we want to bound, and each $|A_l|^{2a_l}$ and $|B_l|^{2b_l}$ is an integer power of a short Dirichlet polynomial. So everything can be evaluated, and it gives you a lower bound for the $2k$-th moment of zeta which, I claim, depends only on the parameter $k$ and not on any rational approximations to $k$.

Let me roughly explain why that is, without too much detail. Think about one of these integrals, with $n$ going up to $T^{1/(100 a_l)}$. When you expand things out, the $a_k$-type main-term contribution comes from $\sum_{n \le y} d_k(n)^2/n$: if $y$ goes up to $T$, you get the constant times $(\log T)^{k^2}$. Truncating at something like $T^{1/(100 a_l)}$ roughly means that when you carry out the expansion, you keep only the terms with $n$ up to that small power of $T$. What's the loss in doing this? The sum is still asymptotic to $(\log)^{k^2}$ with the usual constant $a_k$, but since you only go up to $T^{1/(100 a_l)}$ instead of $T$, you lose a factor like $(100 a_l)^{k^2}$. So we've certainly lost something by truncating: some fixed power of $a_l$. But on the other hand, I'm not losing that factor to the power 1; in Hölder it appears only to the power $1/(2a_l)$. So what I've lost is really $(100 a_l)^{k^2/(2a_l)}$. And that's OK, because the $a_l$ are rapidly increasing, so when I multiply over all $l$, the product of these losses is at most a constant; I don't lose more than a constant factor (the sketch below makes this numerical). In other words, when you expand everything out, the upper bounds you get for these truncated quantities are off by only a constant from the untruncated $\sum_n d_k(n)^2/n$, and the same for the lower bounds. And that means you get a lower bound which is uniform in $k$, OK?

So this is an idea which is in some ways related to the sieve; it will also appear in the work on upper bounds, which I'll talk about next. Maybe I'll try to make the analogy with the sieve clearer as well. If you have to sieve by some set of primes, it's the initial primes that make the biggest contribution, so you want to sieve the initial primes really well; the primes further and further out make progressively smaller contributions, so you don't have to work so hard on them. Here, the small values of $n$ get a big weight like $d_{k/2}(n)/n^s$ and a long sum; we get them out of the way carefully. The later ranges contribute progressively less, so we can truncate them more and more sharply.
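Here is the numeric illustration promised above (illustrative numbers only, with Sylvester-type denominators standing in for the $a_l$): the product of the per-factor losses $(100 a_l)^{k^2/(2a_l)}$ is a fixed constant, because the terms decay super-exponentially.

```python
# The truncation losses multiply out to only a constant: each Hoelder factor
# costs roughly (100*a_l)^(k^2 / (2*a_l)), and the a_l grow like the Sylvester
# sequence, i.e. faster than exponentially, so the infinite product converges.
from math import exp, log

k = 2.0
seq, prod_so_far = [], 1
for _ in range(8):                    # Sylvester-type denominators 2, 3, 7, 43, ...
    seq.append(prod_so_far + 1)
    prod_so_far *= seq[-1]

loss = exp(sum((k**2 / (2 * a)) * log(100 * a) for a in seq))
print(loss)   # a fixed constant, independent of T: prod_l (100 a_l)^(k^2/(2 a_l))
```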
OK, so the situation for lower bounds for $L$-functions is pretty well understood, except for one final point: these arguments tell you that if you know something like the first moment, then you get lower bounds for all the larger moments. So the one thing remaining is what happens for lower bounds for small moments. For example, take the argument I gave, now for Dirichlet characters: look at $\sum_{\chi \bmod q,\ \text{primitive}} |L(\tfrac12, \chi)|^{2k}$. Starting from the second moment in this family, the argument shows that if $k \ge 1$, then we have the right lower bounds. But what happens if $k < 1$? In general, we actually don't have good lower bounds for moments in the small range. But this is one example in which we do: they were worked out by Chandee and Li, who obtained the right order of magnitude. They carried it out, I think, for rational $k$, but you can then also incorporate what Maksym and I did, and probably get the right result for all real $k$ between 0 and 1.

This seems like a technical thing, but I should say that I think it's actually an important question. Let me give you an example of something where we would like to prove results like this but don't know how. Fix a modular form $f$, and look at the family of quadratic twists $L(\tfrac12, f \otimes \chi_d)$. We know the right lower bound for the $k$-th moment when $k > 1$, and it would be very nice to have the same lower bounds when $k < 1$. The reason I say this: suppose you had the right lower bound even for small values of $k$; then you can imagine letting $k \to 0$. As $k \to 0$, the quantity $L^{k}$ essentially picks up 1 if the $L$-value is nonzero and 0 if it is zero:
$$\lim_{k \to 0^+} |L(\tfrac12, f \otimes \chi_d)|^{k} = \mathbf{1}\{L(\tfrac12, f \otimes \chi_d) \neq 0\}.$$
So if you have the right lower bound as $k \to 0$, you would know that a positive proportion of these $L$-values are nonzero: the conjectured lower bound for small moments would imply a positive proportion of non-vanishing. Now, it's expected that this $L$-value can be zero essentially only when the sign of the functional equation is $-1$. That happens 50% of the time, and 50% of the time the sign is $+1$, and in that case we expect the value to be nonzero most of the time. But we don't even know that a positive proportion of such values are nonzero.

Well, the problem of non-vanishing and the problem of these small moments are quite closely related. The way you prove non-vanishing results is the mollifier method: compute the first and the second moment, and if you can attach a mollifier, then you can prove non-vanishing results. We can do that in the case of Dirichlet characters, and this is what underlies the work of Chandee and Li: essentially they use a kind of a version of a mollifier to get lower bounds for small moments. We don't have that in the quadratic-twist situation. But apart from these very small moments, the story for lower bounds is pretty well understood.

So now, for the last part, I want to talk about upper bounds for these moments and also connections to the central limit theorem of Selberg that I already mentioned. Maybe to think about this, let me tell you one more way to see why we get the various powers of log that we are supposed to get in these moment conjectures.
So let me start by recalling Selberg's theorem that I mentioned yesterday. Look at $\log \zeta(\tfrac12 + it)$; you can take the real or the imaginary part, but let me look at the real part here, with $t$ varying between $T$ and $2T$. In this range, this is approximately Gaussian with mean 0 and variance $\tfrac12 \log\log T$. What this means: if I fix any number $v \in \mathbb{R}$, then as $T \to \infty$,
$$\frac{1}{T}\,\mathrm{meas}\Big\{ t \in [T, 2T] : \log|\zeta(\tfrac12+it)| > v \sqrt{\tfrac12 \log\log T} \Big\} \;\longrightarrow\; \int_v^\infty e^{-u^2/2}\,\frac{du}{\sqrt{2\pi}}.$$
OK, so the theorem as it stands is a theorem for fixed $v$ as $T \to \infty$. You can make it uniform in certain ranges, but to gain some insight, let's assume this kind of behavior persists for all values of $v$. This is, of course, false, and let me give you one reason why. Selberg's theorem speaks to both large and small values, so imagine $v$ very, very negative, $v \to -\infty$: then you're making a claim about the frequency of very small values of zeta. But there's one easy way to produce small values of zeta: take a zero, take a small neighborhood around it, and you get very small values. That mechanism produces more small values than Selberg's theorem would predict, so you should not use Selberg's theorem in that range: it is not true for very small values of zeta, near zeros. But maybe you can assume it for large values, or at least as an upper bound, so that you can get upper bounds for moments.

So let me normalize this slightly differently. Suppose we assume that
$$\mathrm{meas}\{ t \in [T, 2T] : |\zeta(\tfrac12+it)| > e^{v} \} \;\lesssim\; T \exp\Big( -\frac{v^2}{\log\log T} \Big).$$
I'm being a little crude here; this is exactly the same as the integral I wrote down, except for the normalization: I put $e^v$ instead of $e^{v\sqrt{\text{variance}}}$, and the Gaussian tail is roughly (not exactly, but roughly) $e^{-v^2/2}$, which after the renormalization is what I've written.

So let's suppose something like this is true. If you know this, then you can immediately get an upper bound for moments, because what is the $2k$-th moment of zeta? It's built out of the values where $|\zeta|$ exceeds $e^v$, weighted by the measure of the set where it is that large:
$$\int_T^{2T} |\zeta(\tfrac12+it)|^{2k}\,dt = \int_{-\infty}^{\infty} e^{2kv}\, \big({-}d\mu(v)\big), \qquad \mu(v) = \mathrm{meas}\{ t \in [T,2T] : |\zeta(\tfrac12+it)| > e^v \},$$
which you can integrate by parts in order to use the bound above. (The minus sign is because $\mu$ is a decreasing function; when you integrate by parts, you get two minus signs, which cancel out.)
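The optimization carried out in prose in the next paragraph can be checked numerically; here is a minimal sketch (my illustration, with arbitrary values of $T$ and $k$) locating the peak of the integrand $e^{2kv}\,T\exp(-v^2/\log\log T)$ and confirming the identity $e^{k^2\log\log T} = (\log T)^{k^2}$.

```python
# The integrand 2k*exp(2kv)*mu(v), with mu(v) <= T*exp(-v^2/loglogT), peaks at
# v = k*loglogT, where it has size T*exp(k^2*loglogT) = T*(log T)^(k^2).
from math import exp, log

T = 1e30
L = log(log(T))          # loglog T, the variance scale
k = 2.0

# locate the maximizing v on a grid and compare with the prediction v = k*L
best_v = max((i * 0.01 * L for i in range(1000)),
             key=lambda v: 2 * k * v - v * v / L)
print(best_v / L)                         # ~ k
print(exp(k * k * L), log(T) ** (k * k))  # the two agree: (log T)^(k^2)
```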
So now suppose you have a uniform upper bound for the measure on which zeta can be large. Then you would get
$$\int_T^{2T} |\zeta(\tfrac12+it)|^{2k}\,dt \;\lesssim\; 2k \int_{-\infty}^{\infty} e^{2kv}\, T \exp\Big({-}\frac{v^2}{\log\log T}\Big)\,dv,$$
and you can just ask on which range of $v$ the integrand is as large as possible. That's a simple calculus exercise: it is maximized when $v$ is about $k \log\log T$, and for this value of $v$ the integral is roughly of size $T(\log T)^{k^2}$. In other words, what the $2k$-th moment of zeta is picking out is values of zeta of size $(\log T)^k$; that's $e^v$. And the set where zeta has roughly that size has measure about $T/(\log T)^{k^2}$. So if you take the $2k$-th moment over this set, you get $(\log T)^{2k^2}$ times a measure of $T(\log T)^{-k^2}$, which gives us the right bound $T(\log T)^{k^2}$. So you can see there is kind of an intimate connection between Selberg's theorem on the value distribution of $\zeta(\tfrac12+it)$ and these moment conjectures.

And so you can ask, for the Keating-Snaith conjectures, what would be the analogous conjecture on the value distribution of the logarithms of $L$-functions that would produce these exponents $k(k+1)/2$ or $k(k-1)/2$. These are conjectures due to Keating and Snaith, analogues of Selberg's theorem. So suppose we take $\log L(\tfrac12, \chi_d)$ and vary over fundamental discriminants $|d|$ up to $X$, and we want to know the distribution of this. You can see this is already a more subtle problem than Selberg's theorem for zeta, because, well: I looked at $\log|\zeta(\tfrac12+it)|$, and it could be that $\zeta(\tfrac12+it)$ is 0, but it's 0 only on a set of measure 0, so I can ignore that in any statistical calculation. Here I have a problem: if $L(\tfrac12, \chi_d)$ happens to be 0 very often, then I don't get a distribution at all, and I can't ignore those values. But we do conjecture that there are very few $d$ for which $L(\tfrac12, \chi_d)$ is 0; in fact, we conjecture that no Dirichlet $L$-function vanishes at $\tfrac12$. So this is never expected to be 0, but if it were 0 a lot of the time, we would have a problem, and we can't prove that it is never 0, or even nonzero on a set of density 1. That's one reason why these analogues are not theorems, but just conjectures.

But OK, the conjecture is now quite nice: this is approximately normal, with one slight difference. Instead of the mean being 0, the mean is about $\tfrac12 \log\log X$, and the variance is a little bit larger: $\log\log X$. So why is the variance larger? Well, $\log \zeta(\tfrac12+it)$ has a real part and an imaginary part, and both can be big; they share the variance 50-50, each getting $\tfrac12\log\log T$. Whereas $\log L(\tfrac12, \chi_d)$ is always a real number, so the full variance sits in the real part, and you get this variance being slightly larger. I'll explain the mean in just a minute. (Sorry? Yes, this is a conjecture.) And similarly, one can make a conjecture for $\log L(\tfrac12, f \otimes \chi_d)$: fix a modular form $f$, an eigenform, and look at its quadratic twists. Here we should be careful not to look at all discriminants, but only at the fundamental discriminants up to $X$ for which the sign of the functional equation is positive.
If the sign is negative, then these values are all 0, and there's nothing more to be said. And the conjecture is that this, too, is approximately normal, now with mean $-\tfrac12 \log\log X$ and variance $\log\log X$. OK. And then what I want to say is: if you do the analogue of the calculation I did for the zeta function, take the moments in each of these families, rewrite them in terms of values being of size $e^v$, and use these two conjectures, you will produce exactly the exponents $k(k+1)/2$ in the symplectic case of quadratic Dirichlet characters, and $k(k-1)/2$ in this orthogonal case. Indeed, if $Z \sim N(\mu, \sigma^2)$ then $\mathbb{E}\,e^{kZ} = e^{k\mu + k^2\sigma^2/2}$, and with $\sigma^2 = \log\log X$ and $\mu = \pm\tfrac12 \log\log X$ this is $(\log X)^{k(k\pm1)/2}$. So you can see the difference coming from the mean: pointing in the negative direction in the case where the moment is smaller, $k(k-1)/2$, and in the positive direction in the other case.

So let me give you a kind of heuristic explanation for where these central limit theorems come from, and then I'll state some recent results on upper bounds for moments. To start with, you can more or less think of
$$\log \zeta(\tfrac12+it) \approx \sum_{p \le x} \frac{1}{p^{1/2+it}}.$$
Of course, the full sum over all primes doesn't converge, but this is not very far from the truth: if you truncate at some height $x$, some power of $t$, say $x = t^{1/A}$, then this is usually a good approximation, up to an acceptable error. This is not completely correct, but let's pretend that something like it is true. It's also not completely correct for one more reason: the logarithm of the zeta function is a sum not just over primes but over all prime powers, $\sum_{p,\,j} 1/(j\, p^{js})$. So we have to worry about the prime squares, prime cubes, and so on. The prime cubes and prime fourth powers are all irrelevant, because those series converge on the half-line. But the prime squares could still be slightly problematic: they contribute something like $\tfrac12 \sum_p p^{-1-2it}$. In the zeta function case, this really doesn't affect us very much; it's a bit like the problem of $\log L(1, \chi)$ that I started out with. This is essentially almost surely convergent, and if you like, you can forget about the contribution from the prime squares.

So then you're left with trying to think about $\sum_{p \le x} p^{-1/2-it}$. And here you want to make use of the fact that the $p^{it}$, for different primes, behave like independent random variables as $t$ varies. You can justify this, at least if $x$ is not very big compared to $t$: for example, if $x$ is like $t^{1/100}$, then you can compute the first 100 moments of this object, and you know how the distribution behaves. So if this behaves like a sum of independent random variables, then the central limit theorem applies to it. The idea is that there is a central limit theorem for $\sum_{p \le x} p^{-1/2-it}$: the $p^{it}$ behave randomly, so they more or less cancel out, and then you should compute the variance. And I should say: if I want to do this properly for $\log|\zeta|$, I should take the real part here.
So take the real part here, then, and compute the moments of that. This gives a heuristic justification for Selberg's theorem, and you can make it precise in the $t$-aspect, because the initial approximation that I wrote down can actually be made precise if you're willing to average over $t$ in a certain sense. Even though $\zeta(\tfrac12+it)$ could be 0, where the logarithm would be $-\infty$ and the formula doesn't make sense pointwise, it holds true in a sense of measure.

But now imagine making the same argument for $\log L(\tfrac12, \chi_d)$, for quadratic Dirichlet characters. In this case, I would try to formally write down something like
$$\log L(\tfrac12, \chi_d) \approx \sum_{p \le z} \frac{\chi_d(p)}{\sqrt{p}} + \frac12 \sum_{p \le z} \frac{\chi_d(p)^2}{p} + \cdots,$$
with the prime cubes and so on irrelevant as before. But you can see that now the prime-square terms are actually determined explicitly: $\chi_d(p)^2$ is always equal to 1 (for $p \nmid d$). As for the truncation point $z$: I said before that it should be some power of the conductor, so some power of $X$; but $\sum_{p \le X^{1/A}} 1/p$ grows like $\log\log(X^{1/A}) = \log\log X - \log A$, and $\log\log$ varies so slowly that it really doesn't matter where I truncate. It doesn't even have to be a power of $X$; it could be $X^{1/\log\log X}$ or so, and it'll still be OK; nothing changes very much. So it's this deterministic prime-square feature that becomes the mean of the distribution, which is about $\tfrac12 \log\log X$. And the contribution of the primes still behaves roughly randomly, with the signs $\chi_d(p) = \pm1$ taken equally often, so that part should be normal with mean 0 and variance the sum of the reciprocals of the primes, which is about $\log\log X$. Again, because of this $\log\log$ feature, it doesn't matter where you truncate: anything remotely sensible gives the same answer. OK, so that's the Keating-Snaith conjecture for the family of quadratic Dirichlet $L$-functions.

And let me just explain the analogue of this calculation for the quadratic twists of a modular form. Write the Euler product of $f$ over primes $p$ with local roots $\alpha_p, \beta_p$, so that $\alpha_p \beta_p = 1$ and $\alpha_p + \beta_p = a_p$, the coefficients. If you do the analogue of this calculation with $\log L(\tfrac12, f \otimes \chi_d)$, you have to evaluate the logarithms of these local factors, and formally it looks like
$$\sum_{p \le z} \frac{(\alpha_p + \beta_p)\,\chi_d(p)}{\sqrt{p}} + \sum_{p \le z} \frac{(\alpha_p^2 + \beta_p^2)\,\chi_d(p)^2}{2p} + \cdots,$$
with the higher-order terms irrelevant, and with $\chi_d(p)^2$ just 1 again. In the first sum, the $a_p$ are some fixed numbers, whatever they are, and we would expect it to be normal with mean 0 and variance $\sum_{p \le z} a_p^2/p$, which by Rankin-Selberg is asymptotic to $\log\log X$ at any reasonable truncation point. For the second sum: you can check that $\alpha_p^2 + 1 + \beta_p^2$ gives the coefficients of the symmetric-square $L$-function attached to $f$, and those average out to 0. Which means that $\alpha_p^2 + \beta_p^2$ averages to $-1$, almost always. So this second sum averages to $-\tfrac12 \log\log X$, and the mean is shifted, in this example, to something negative.
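A quick Monte Carlo sketch (my illustration, not from the lecture) of the quadratic Dirichlet heuristic above: replace $\chi_d(p)$ by independent random signs and watch $\sum_{p \le z} \varepsilon_p/\sqrt{p}$ approach a normal distribution with variance $\sum_{p\le z} 1/p \approx \log\log z$, while the deterministic prime-square term $\sum_{p\le z} 1/(2p)$ supplies the mean.

```python
# Model log L(1/2, chi_d) as (random prime sum) + (deterministic mean term)
# and check the empirical mean and variance against sum 1/(2p) and sum 1/p.
import random
from sympy import primerange

z = 10**4
primes = list(primerange(2, z))
var_pred = sum(1 / p for p in primes)          # ~ loglog z + const
mean_sq = sum(1 / (2 * p) for p in primes)     # the prime-square contribution

samples = []
for _ in range(4000):
    s = sum(random.choice((-1, 1)) / p**0.5 for p in primes)
    samples.append(s + mean_sq)                # model for log L(1/2, chi_d)

m = sum(samples) / len(samples)
v = sum((s - m) ** 2 for s in samples) / len(samples)
print(m, mean_sq)   # empirical mean   ~ (1/2) sum 1/p
print(v, var_pred)  # empirical variance ~ sum 1/p
```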
OK, so those are the conjectures in these contexts. And what I've said is that if suitable uniform versions of these conjectures were true, then you would get upper bounds of the right order of magnitude for the moments in these various examples.

So let me end by stating some results on upper bounds, which I'll describe tomorrow. The principle for lower bounds was: if you can evaluate some moment plus a little bit, then you get lower bounds for all the higher moments. There's a complementary principle, which Maksym and I have been working on recently; let me state it roughly as: if you know some moment plus epsilon, then you get the correct upper bounds for all smaller moments. I'll illustrate this with a theorem in the context of quadratic twists of a modular form, or of an elliptic curve, say. Here the only moment we know how to calculate unconditionally is the first: we know asymptotics for it, and it's the only moment known unconditionally. And what our work does is prove, from this, the right upper bound for all smaller moments.

Now, apart from this, we have pretty good results if we assume the Riemann Hypothesis; this problem has recently been fully understood under RH. Some years back I proved that on GRH one almost gets the right upper bound: for example, the $2k$-th moment of zeta, assuming RH, is bounded by $T(\log T)^{k^2 + \epsilon}$ for any $k$. And last year there was a beautiful refinement of this by Harper: one gets exactly the sharp upper bound $T(\log T)^{k^2}$, without the epsilon. So in some ways we essentially understand the size of these moments. Let me give maybe one more example of the result with Maksym: the $2k$-th moment of zeta is bounded by $T(\log T)^{k^2}$ for all $k < 2$, which was previously known if you assume the Riemann Hypothesis, but this result is unconditional. So we have good lower bounds very often, and we also have good upper bounds essentially all the time if you're willing to assume GRH. Harper's result is on RH, for zeta, but the idea is that if you assume GRH for whatever family you're working with, you get the corresponding result.

And then the last thing: I'll discuss these results tomorrow along with one other refinement. I mentioned the Keating-Snaith conjectures on log-normality. Part of this work with Radziwiłł also gives one-sided bounds towards them: we can prove an upper bound for the frequency of large values which is exactly the conjectured upper bound, but we can't produce a corresponding lower bound. The real problem is that one cannot show the approximation of $\log L$ by a sum over primes holds often enough to run the analogue of Selberg's theorem. For example, I proved that seven-eighths of these $L$-values are nonzero; but what if one-eighth of them were zero? Then there would be no such theorem. (Question: But on RH we know it, right? Answer: No, you don't know that, even on RH. On RH, seven-eighths would be replaced by fifteen-sixteenths, but not one.)
There is one result I can mention in that direction, due to Bob Hough: if you assume the Riemann Hypothesis, and you also assume that there are essentially no zeros very near the central point, in the sense that the discriminants giving a zero very close to $\tfrac12$ have density zero, then you can prove the analogue of Selberg's theorem. But you need more than RH.

(Audience remark: One remark is that this conjecture implies Goldfeld's conjecture, so it's quite a strong conjecture, to say the least. The other remark is that these terms giving the main term, this positivity, is exactly the same phenomenon that goes into the theorem of Goldfeld. Do you want to explain it?) So the theorem that Andrew mentions concerns one of the original calculations on the Birch and Swinnerton-Dyer conjecture, where they were looking not at the $L$-values but at the Euler product taken up to some height, in order to formulate the conjecture. It turns out that if you take the Euler product up to some point, which is kind of like taking this prime-sum side of the logarithm, then you're off by a constant, like $\sqrt{2}$, which comes from exactly this phenomenon. And the same applies to quadratic Dirichlet $L$-functions, probably going in the other direction.

(Question: The proof for the moments of the zeta function is conditional. Have you tried doing it unconditionally, and how far can you get?) Well, I think lots of people have tried to do something unconditionally. It's an interesting question, but we don't know how to get anything for, say, the 4.1-st moment, or anything larger than the fourth. In some sense the issue is this: suppose I ask you for the measure of the set on which $|\zeta(\tfrac12+it)|$ is bigger than $V$, unconditionally. I told you Selberg's theorem, which in its range gives a bound of the shape $T$ times something exponentially small in $V$, like $e^{-V^2/\ldots}$. Unconditionally, essentially the only thing we know in a wide range of $V$ is that this measure is $\ll T/V^4$ for large $V$, which is the content of the fourth moment. When $V$ is in certain small ranges, you get better bounds, but uniformly this is all we have, and nothing we know beats this large-value estimate. If we could beat it, there would be consequences for primes in short intervals: not bounded gaps between primes, but every short interval containing a prime.

(Question: When $V$ is large, what is known?) Yes, there's a range in which you don't know anything. If $V$ is bigger than $t^{1/6}$ or so, then we know the measure is 0. But there's some range, say $V = t^{0.13}$: then I don't think we have a good estimate for what this measure should be. So it doesn't have to be a large moment that we don't know how to handle; it could be a fairly small fractional moment in that range, which we still don't know how to handle.
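To spell out the unconditional large-values bound quoted in that answer (my gloss; this is just Chebyshev's inequality against the fourth moment, with the power of $\log T$ suppressed in the spoken version):
$$\mathrm{meas}\{ t \in [T, 2T] : |\zeta(\tfrac12+it)| > V \} \;\le\; \frac{1}{V^4} \int_T^{2T} |\zeta(\tfrac12+it)|^4\,dt \;\ll\; \frac{T (\log T)^4}{V^4}.$$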
(Question: A very ignorant question, or half-ignorant: once you know the moments, if you knew all the moments, what wouldn't you know?) Well, if you had a compactly supported distribution, which this one certainly isn't, then once you know the moments, you know the distribution. Of course, here we don't have something compactly supported, and we will never know the moments exactly: at best, conditionally, the main terms with some good error terms. (Question: But are there interesting distributional questions that would not follow from a good knowledge of the moments? Because almost everything would follow if you knew the moments to high enough accuracy; that's part of the motivation, I imagine.) Well, I don't know whether the moments, for example, imply pair correlation; I don't see how they would. That's not clear. Actually, I can make a precise conjecture on moments which would imply the maximal size of zeta. (Really?) Yes, to the root-log scale. Well, I might as well say it rather than be accused of having cliffhangers: I conjecture that
$$\int_0^T |\zeta(\tfrac12+it)|^{2k}\,dt \;\le\; T (\log T)^{k^2}$$
uniformly in all values of $k$, for $T$ bigger than, let's say, a million, or a billion, just to keep it safe. (Question: Does this agree with the CFKRS conjecture?) It agrees with the CFKRS conjecture; that I checked. (But in that formula there would be secondary terms.) Yes, but you can check that the formula satisfies this bound. The point is that the constants that go in front are actually very small, so this is in fact a very weak conjecture to make. But if you knew it uniformly in all $k$, it would control the large values of zeta.

(Question: It's not a fair question, but what would be an example of something that would not follow? Moments only control, I don't know, the large-scale behavior; local statistics like pair correlation?) Right. And note that I want to assume this uniformly for all $k$: for example, from the very small moments, letting $k$ go to 0, you would recover Selberg's central limit theorem, so you'd be controlling the distribution of the size of zeta quite finely. So it's hard to say what can't be done; what reasonable answer can one give? I think pair correlation, for example, cannot be recovered from moments; except that if you take complex moments, then perhaps it can, but you need some uniformity.
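As a gloss on how this uniform conjecture would control the maximal size (my reconstruction of the remark, not worked out in the lecture): for any $k$ and $V$,
$$\mathrm{meas}\{ t \le T : |\zeta(\tfrac12+it)| \ge e^{V} \} \;\le\; e^{-2kV} \int_0^T |\zeta(\tfrac12+it)|^{2k}\,dt \;\le\; T (\log T)^{k^2} e^{-2kV},$$
and choosing $k = V/\log\log T$ (this is where uniformity in $k$ is essential) gives $T \exp(-V^2/\log\log T)$, which drops below 1 once $V \gg \sqrt{\log T \,\log\log T}$. That is the root-log size of the maximum alluded to above.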
(Question: What do you believe to be the limits of the Selberg distribution, for small values?) Oh, this is something worked out in Chris Hughes's thesis. Roughly, and I'm doing this from memory, so I'm not completely sure: say $v$ is around $-\log\log T$, so you're looking at values of zeta of size about $1/\log T$. Then at that point, I think, there's a flip: instead of looking like $e^{-v^2/\log\log T}$, the measure just flips to having something of size $e^{-|v|}$ from there on. Does that make sense? And we don't believe there's a similar transition for large values. This is simply because you can get very close to a zero, and there the small values are much more frequent than the Gaussian prediction: if you're within about $e^{-|v|}/\log T$ of a zero, then $|\zeta|$ is that small, and summing over zeros you get a set of measure like $T e^{-|v|}$, something like this.
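A back-of-envelope version of that last heuristic (my gloss, assuming the typical sizes: about $(T/2\pi)\log T$ zeros in $[T, 2T]$, and $|\zeta'(\rho)|$ typically of size about $\log T$):
$$\mathrm{meas}\{ t \in [T, 2T] : |\zeta(\tfrac12+it)| < e^{-|v|} \} \;\gtrsim\; \frac{T \log T}{2\pi} \cdot \frac{e^{-|v|}}{\log T} \;\asymp\; T e^{-|v|},$$
which overtakes the Gaussian prediction $T e^{-v^2/\log\log T}$ precisely when $|v| \gg \log\log T$, matching the transition point $v \approx -\log\log T$ from Hughes's thesis.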