Let me recall where I ended last time. I had begun discussing the problem of moments of L-functions on the critical line. For zeta, there are conjectures that the 2k-th moment behaves like a constant c_k times T (log T)^{k^2}, and the constant c_k factors as a product of two relatively well understood pieces. The first factor, a_k, is very easy to understand: it can be written explicitly as an Euler product which converges absolutely. The second factor, g_k, is the mysterious one, for which I wrote down a conjectural formula last time. I also mentioned that you can look at analogs of these in other families of L-functions. For example, you could average over primitive characters chi mod q (these are all conjectures); this should be asymptotic to some other constant c_k times q (log q)^{k^2}. Let's say q is prime, so that I don't have to worry about how many primitive characters there are. Or you could look at quadratic characters chi_d attached to fundamental discriminants d; if you make an average over d, you will know the conjecture for one more value of k, and I'll mention results in that case. Or you could look at quadratic twists of a modular form, where you expect something like x (log x)^{k(k-1)/2}. So first let me explain where these conjectures come from, and then I'll explain what we know towards them. These are representative of many other conjectures of this kind that you could write down. Why do the asymptotic formulas have these shapes? Well, there is a connection between these moment conjectures and the distribution of zeros of L-functions and random matrix theory.
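Schematically, and suppressing the precise constants (c_k denotes a different constant in each line; the exponent k(k+1)/2 for the quadratic family is the one that reappears later in the lecture), the conjectures just described take the following shape:

```latex
% Schematic moment conjectures; c_k = a_k g_k in each family.
\begin{align*}
\int_0^T \bigl|\zeta(\tfrac12+it)\bigr|^{2k}\,dt
  &\sim c_k\, T (\log T)^{k^2}, \\
\sideset{}{^*}\sum_{\chi \bmod q} \bigl|L(\tfrac12,\chi)\bigr|^{2k}
  &\sim c_k\, q (\log q)^{k^2}, \\
\sum_{\substack{0<d\le x \\ d \text{ fund. disc.}}} L(\tfrac12,\chi_d)^{k}
  &\sim c_k\, x (\log x)^{k(k+1)/2}, \\
\sum_{\substack{0<d\le x \\ d \text{ fund. disc.}}} L(\tfrac12, f\otimes\chi_d)^{k}
  &\sim c_k\, x (\log x)^{k(k-1)/2}.
\end{align*}
```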
Let me say this for the zeta function. For zeta, suppose we assume the Riemann hypothesis, write the zeros as 1/2 + i gamma, and arrange the gammas in ascending order. There is a conjecture which goes back to Montgomery, with numerics by Odlyzko and extensions by Rudnick and Sarnak. The zeros get closer and closer to each other as you proceed up the critical line: the n-th zero gamma_n is roughly of size 2 pi n / log n, because there are about (T / 2 pi) log T zeros up to height T. So you can normalize by looking at gamma_n log(gamma_n) / (2 pi); the n-th normalized zero is then about size n, and you can ask about the spacings between consecutive normalized gamma_n's. Terry mentioned that for primes, the normalized spacings between primes should behave like spacings between random numbers thrown down with mean spacing 1. This case is expected to be different: the spacings between these ordinates are expected to correspond to spacings between eigenvalues of large random matrices. To give one ensemble you could take here, look at the unitary group U(N) and choose a matrix uniformly with respect to Haar measure on this group. It will have eigenvalues e^{i theta_1}, ..., e^{i theta_N}, and we arrange the angles theta_1, ..., theta_N so that they lie between 0 and 2 pi. What you're interested in are the normalized angles theta_j times N / (2 pi), so that the mean spacing between consecutive normalized eigenangles is 1. In other words, for each matrix you look at the spacing distribution, and then you average that over all matrices in the group U(N).
When you average, there is a measure governing how theta_1, ..., theta_N are distributed, and it is not simply d theta_1 ... d theta_N: it is proportional to the product over pairs 1 <= j < k <= N of |e^{i theta_j} - e^{i theta_k}|^2, times d theta_1 ... d theta_N. The difference from taking the angles independent of each other is that the angles repel: two angles do not want to get close to each other, because the part of the group where any two angles are very close is comparatively very small. Okay, so that's the conjecture on the zeros of L-functions: the spacings between zeros should look like the spacings between eigenvalues of large random matrices, where at the end you let the parameter N go to infinity. Keating and Snaith had the idea of making this a bit more precise, as an actual model: if you look at statistical properties of zeros around height T, that should correspond to random matrices of a specific size. So you don't just want to say that the limit T to infinity corresponds to the limit N to infinity on the random matrix side; you want some relation between T and N. The relation they postulated comes from matching mean spacings: the spacing between two consecutive zeros at height T is roughly 2 pi / log T, and the spacing between two consecutive eigenvalues is roughly 2 pi / N. Setting these two equal to each other, you should take N to be, roughly speaking, on the scale of log T: you look at random matrices of size log T. If you believe this, it gives you a way of thinking about other problems on L-functions — for example, properties of zeta(1/2 + it) around height T.
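To make the eigenangle picture concrete, here is a small numerical sketch (my own illustration, not from the lecture): sample a Haar-random unitary matrix by the standard QR-of-Ginibre trick, then compute the normalized eigenangle spacings described above. The function name `haar_unitary` is mine.

```python
import numpy as np

def haar_unitary(n, rng):
    """Draw a Haar-distributed matrix from U(n): QR-factor a complex Ginibre
    matrix and correct the column phases so the distribution is exactly Haar."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))  # column-wise phase correction

rng = np.random.default_rng(0)
n = 50
g = haar_unitary(n, rng)

# Eigenangles in [0, 2*pi), sorted, as in the lecture.
theta = np.sort(np.angle(np.linalg.eigvals(g)) % (2 * np.pi))

# Consecutive spacings (with wraparound), rescaled by n/(2*pi) so the
# mean spacing is exactly 1.
spacings = np.diff(np.append(theta, theta[0] + 2 * np.pi)) * n / (2 * np.pi)
```

Averaging the histogram of `spacings` over many draws of `g` produces the CUE spacing distribution, which exhibits the repulsion near 0 that the Weyl measure builds in.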
Then you might expect that zeta(1/2 + it) is modeled by the object whose zeros are the eigenvalues of your random matrix: the characteristic polynomial of a random matrix of size N, which is about log T. And you can imagine that the conjecture for the 2k-th moment of zeta(1/2 + it) should correspond to whatever we can find for the average over the unitary group of the 2k-th power of the absolute value of the characteristic polynomial of g. Now this side one can calculate exactly. In fact it was computed by Selberg, in a famous paper that he wrote in a Norwegian high school journal; from his work it follows that this average is a polynomial in N of degree k^2. And this should correspond to (log T)^{k^2}, so you can see the matching; the leading order asymptotics involve a constant g_k times N^{k^2}. This is the motivation for the Keating–Snaith conjecture. Notice that in this analogy you don't see the arithmetic factor a_k, which I said was easy to understand and could be written as an Euler product involving primes — of course, there are no primes on the random matrix side of the calculation. You can form an analog of this in all the other cases that I wrote down; the group that you average over may change depending on the family you are interested in, and you can form analogs of these conjectures. (Question: where are you evaluating the characteristic polynomial? — You can evaluate it anywhere; say at angle zero. In the t-aspect it makes no difference where you evaluate it. In one of these other families, you might have to evaluate it at a special point.)
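As a sanity check on the degree-k^2 claim, here is a small computation (my own, not from the lecture) using the closed form for this unitary-group average, E_{U(n)} |characteristic polynomial|^{2k} = prod_{j=1}^{n} Gamma(j) Gamma(j+2k) / Gamma(j+k)^2, which follows from Selberg's integral; for k = 1 it telescopes to n + 1, and for k = 2 to (n+1)(n+2)^2(n+3)/12, a polynomial of degree 4 = k^2.

```python
from math import lgamma, exp

def cue_moment(k, n):
    """Average of |char. polynomial|^{2k} over U(n) with Haar measure:
    prod_{j=1}^{n} Gamma(j) * Gamma(j + 2k) / Gamma(j + k)^2.
    A polynomial in n of degree k^2 (e.g. n + 1 when k = 1)."""
    log_m = sum(lgamma(j) + lgamma(j + 2 * k) - 2 * lgamma(j + k)
                for j in range(1, n + 1))
    return exp(log_m)
```

Taking n on the scale of log T, the n^{k^2} growth of this quantity is what matches the conjectured (log T)^{k^2} in the 2k-th moment of zeta.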
There is no special point in the t-aspect situation, which is why it doesn't matter there. The other conjectures are formulated similarly. There were refinements of the conjectures of Montgomery and of Rudnick–Sarnak in work of Katz and Sarnak on families: given some family of L-functions, Katz and Sarnak prescribe a symmetry group, which tells you what family of random matrices you should look at in order to understand the zero spacings in that family. And you use that to model the moments as well; it gives you these different predictions, and it seems to work. There are three cases. Unitary, where the examples are the zeta function in the t-aspect, or all Dirichlet L-functions. Orthogonal, where the typical example is the family of quadratic twists of a modular form — and here you see this feature of k(k-1)/2 coming in. And the last is symplectic, where the prototypical example is the family of quadratic Dirichlet L-functions. So this is one way of approaching these moment conjectures. It's very pretty, in that it tells you conceptually where this constant g_k should come from. But one part of it which is unsatisfying is that you have to put in the a_k by hand; there is no way in which you see it arising from this heuristic. So perhaps one heuristic along these lines is to think of the zeta function as having two parts: one part looks like an Euler product up to some point, and another part comes from the zeros. The Euler product part one handles separately, and it must produce the a_k times some well understood factor.
The average of the zero part should then give you exactly this random matrix calculation. There is work along those lines by Gonek, Hughes and Keating — I'm not sure I'm remembering the names correctly; maybe Farmer? Max would know. No? Okay, I think it might be Farmer rather than Keating; well, we can give Keating some credit as well. They formulate hybrid models, writing L-functions in terms of both the primes up to some point and a contribution from the zeros, and then you can study each of these two pieces separately: the contribution from the primes produces the a_k, and the contribution from the zeros produces the g_k. Okay. Now I want to tell you a different way of approaching these conjectures, due to Conrey, Farmer, Keating, Rubinstein and Snaith, just for integer moments (whereas the random matrix argument works for real moments as well). For integer moments there is a conjecture which — and this is one advantage — identifies all the lower order terms in the moment conjecture. For example, in the case of the 2k-th moment of the zeta function, it produces a polynomial P_k in log T, of degree k^2, and it identifies all the coefficients of P_k in the expansion. But it's a very messy polynomial; as you can see, it also has to include arithmetic coefficients corresponding to generalizations of this a_k. And yet this conjecture is very simple in a certain sense. So I want to explain how it works, and I'm going to do it in a special case: we are going to look at all primitive Dirichlet characters chi mod q.
Assume that q is prime if you like, and large. I want to make a conjecture for the sum over chi mod q of |L(1/2, chi)|^{2k}. The idea here goes back some way: I mentioned last time that Ingham computed the fourth moment of the zeta function, but he computed it with shifts — with four parameters. The idea is to do something similar here: not to look at this as the 2k-th moment of a single object, but to introduce 2k parameters. It's convenient to work with the completed L-function instead, obtained by multiplying by the appropriate gamma factor; this involves a parameter a, which is 0 or 1 according as the character is even or odd, and the root number epsilon_chi, a normalized Gauss sum of absolute value 1. I won't worry too much about this a. So, instead of |L(1/2, chi)|^{2k}, we look at the following object. I introduce 2k new variables alpha_1, ..., alpha_k and beta_1, ..., beta_k and shift the 2k factors of the completed L-function by them. Of course, if I set all the alphas and betas equal to 0, I recover the 2k-th moment I am interested in; there are some leftover factors of the epsilon_chi's, but I can just remove them at the end. Now, whatever this object is, you can see that I am allowed to permute some things and the answer should not change. I can permute all the alphas, and clearly nothing changes; I can also permute all the betas, and nothing changes. But I claim more. Relabel the variables by setting alpha_{k+1} = beta_1, alpha_{k+2} = beta_2, and so on up to alpha_{2k} = beta_k. Then I claim I can permute all 2k of these variables, and still nothing should change. This is the step you have to check.
The reason is that if I make a permutation of all 2k variables, some of the plus signs become minus signs: say alpha_1 becomes beta_1 and beta_1 goes back to alpha_1. If I do that, I am using the functional equation twice — once to replace the factor shifted by alpha_1 with the factor at 1/2 - alpha_1 with chi bar, and once more to bring in the factor at 1/2 + beta_1 with chi. Using it twice, the signs of the functional equation cancel out as well: epsilon_chi times epsilon_{chi bar} is 1. So nothing changes, and what we see is a slightly non-trivial relation: I can permute alpha_1, ..., alpha_k, alpha_{k+1}, ..., alpha_{2k} in this notation, and again nothing should change. So whatever answer I want to conjecture here, it should be symmetric under the action of S_{2k} permuting these 2k variables. Okay, so now let's make a guess as to what the answer should be, and we'll make a very naive guess: expand all the factors into Dirichlet series, even though nothing might actually converge, and just identify diagonal terms. There are various factors of (q/pi)^{1/2 + alpha_j} coming from the gamma factors (again I'll forget the a that appears; it's just a nuisance factor, so let's not worry about it). Then I have a sum over chi mod q, and I expand out all of these L-functions as Dirichlet series, not worrying very much about convergence: a sum over k variables m_1, ..., m_k and another k variables n_1, ..., n_k, with a character value chi(m_1 ... m_k) chi bar(n_1 ... n_k), something like that. Okay, so now we do one more step which is not justified. We take the sum over the characters chi mod q and use the orthogonality relation, which says that for a term to contribute, the product m_1 ... m_k must be congruent to the product n_1 ... n_k, mod q.
We are going to think of that orthogonality relation as just picking out the diagonal terms m_1 ... m_k = n_1 ... n_k. So the guess from this summation is: the number of characters, times the gamma factors, times the sum of the diagonal terms — with all the variables coprime to q, if you like. Now this diagonal object is nice and multiplicative: each prime p must appear the same number of times on the left-hand side as on the right-hand side. If p divides one factor on the left, it might show up as, say, p^{-(1/2 + alpha_7)}, and on the right as some p^{-(1/2 - beta_11)}, or something like that. So one can formally write this as a product over all primes p of various expressions: p can appear in a term more than once, but the terms where it appears once look like 1/p^{1 + alpha_j - beta_l}, summed over all j and l, plus higher order terms. And you can see that the higher order terms involve larger powers of p; think of the alphas and betas as being relatively close to zero, so that anything of size 1/p^2 or so gives a convergent series. The main contribution is this sum of p^{-(1 + alpha_j - beta_l)}, which I can think of as coming from zeta(1 + alpha_j - beta_l). So we compare this with the product over all j and l from 1 to k of zeta(1 + alpha_j - beta_l), and what is left over is analytic in a wider region. Okay, so that's my guess for what happens if I blindly expand the Dirichlet series and restrict to the diagonal terms.
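The orthogonality relation invoked in this step is easy to check numerically. Here is a sketch (my own illustration): for a prime q, the Dirichlet characters mod q can be built from a primitive root, and summing chi(m) chi-bar(n) over all of them gives q - 1 when m is congruent to n mod q, and 0 otherwise — which is exactly what picks out the diagonal.

```python
import cmath
from math import pi

def dirichlet_characters(q):
    """All q-1 Dirichlet characters mod a prime q, via a primitive root.
    Returns a list of dicts mapping n in {1,...,q-1} to chi(n)."""
    def is_primitive_root(g):
        x, seen = 1, set()
        for _ in range(q - 1):
            x = x * g % q
            seen.add(x)
        return len(seen) == q - 1

    g = next(g for g in range(2, q) if is_primitive_root(g))
    ind, x = {}, 1
    for a in range(q - 1):      # discrete-log table: ind[g^a mod q] = a
        ind[x] = a
        x = x * g % q
    return [{n: cmath.exp(2j * pi * a * ind[n] / (q - 1)) for n in range(1, q)}
            for a in range(q - 1)]

chars = dirichlet_characters(7)
# Orthogonality: sum over chi of chi(m) * conj(chi(n)) is q-1 on the
# diagonal m = n (mod q), and 0 off the diagonal.
diag = sum(c[3] * c[3].conjugate() for c in chars)
off = sum(c[3] * c[5].conjugate() for c in chars)
```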
So I would conjecture that the answer is this product of zeta functions, times some other function which is nice and tame and can be understood easily. And this is roughly of the right shape: if you imagine the alphas and betas being small, the product is like a pole of order k^2 of the zeta function. But now look at this answer: it is not really symmetric in the variables. You're allowed to permute the alpha_j's, that's okay; you can permute the beta_l's, that's okay; but you cannot make some of the alphas become betas and some of the betas become alphas. So the guess misses some of the symmetry that the true answer must satisfy, and we must do better. The guess is symmetric under S_k x S_k, but not under S_{2k}, and the conjecture of Conrey, Farmer, Keating, Rubinstein and Snaith is very simple: you just symmetrize this answer. Call the guess Z(alpha_1, ..., alpha_{2k}); you take a sum over all permutations in the quotient S_{2k} / (S_k x S_k), applying each permutation to the parameters of the guess. Now, each term in this sum has a whole bunch of singularities: the guess has singularities whenever one of the alpha_j's equals a beta_l, or if one of the alpha_j's happens to be 0 — a lot of singularities. But if you symmetrize, the resulting object is actually regular in all the alphas and betas: each term has many singularities, but in the symmetrized answer all the singularities cancel out, and it is entire in the alphas and betas. And where do you get powers of log? You can see that the powers of q vary across the terms of this answer: you might have q to the sum of the alphas minus the sum of the betas in one term, and that exponent gets flipped around in the other terms.
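In symbols, and schematically (my notation; Z is the naive diagonal guess described above, with its q-powers and tame Euler product left unspecified), the recipe is:

```latex
% Schematic CFKRS recipe: symmetrize the naive diagonal guess over
% the cosets of S_k x S_k inside S_{2k}.
\sum_{\chi \bmod q}
  \prod_{j=1}^{k} \Lambda(\tfrac12+\alpha_j,\chi)
  \prod_{l=1}^{k} \Lambda(\tfrac12-\beta_l,\overline{\chi})
\;\approx\;
\sum_{\sigma \in S_{2k}/(S_k\times S_k)}
  Z\bigl(\sigma(\alpha_1,\dots,\alpha_k;\beta_1,\dots,\beta_k)\bigr),
\]
\[
Z \;=\; (\text{$q$-powers})\cdot
  \prod_{j,l=1}^{k} \zeta(1+\alpha_j-\beta_l)\cdot
  (\text{tame Euler product}).
```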
So the powers of log come from the powers of q that you have there. And finally, if you set all the variables equal to 0 in this answer, it matches the Keating–Snaith conjecture exactly. (Question: don't they also use the functional equation in the alphas and betas? — Yes, in a certain sense they do; but that only accounts for one permutation, the one where you replace all the alphas by betas and all the betas by alphas. Here you demand the full symmetry. It's the same idea, though.) So in a certain sense, this moment conjecture is the simplest conjecture you can write down which is likely to be true: the answer must have this S_{2k} symmetry, and this is the simplest object with that symmetry. It's a very intuitive conjecture in some ways. But here is what we are missing. In all the ways we know to attack these problems, we start — as Emmanuel was saying — with something like the approximate functional equation, an expression for your L-function as two terms. If you put in the shift variables, one term is symmetric under pi being the identity permutation, and the other under the permutation where all the alphas are replaced by all the betas — just an involution. You miss all the other symmetries, and so you have to make a long computation which eventually recovers all the symmetries that you had; but we don't know how to do that in any straightforward way. We don't know how to keep the symmetry from the start. So what do we know towards this conjecture? Well, we have theorems for small values of k. For zeta, we only know k = 1 and k = 2: the second and fourth moments.
Next, look at the family of Dirichlet L-functions: the sum over chi mod q of |L(1/2, chi)|^{2k}. This is exactly analogous to the zeta function, but already there are features here which are harder than for zeta. We now know this for k = 1 and k = 2. But whereas for the fourth moment of the zeta function we have long known an asymptotic formula with a power saving in the error term, in this family precise information on the error term seems quite complicated: it is known only fairly recently, by work of Matt Young, in the case where q is prime. Then one can look at moments of quadratic Dirichlet L-functions, the sum over fundamental discriminants d up to x of L(1/2, chi_d)^k. Here k can be any integer; it doesn't have to be even. We know the first three moments: the first two are due to Jutila, and the third is due to me. The fourth is very much on the edge — it looks like one should be able to prove the fourth moment. There is a nice result of Heath-Brown, his large sieve for real characters, which implies that for the fourth moment you can get a bound which is almost sharp: the right power of x, namely x^{1 + epsilon}. In situations where you can get a sharp upper bound for a moment, you might hope to refine it to an asymptotic; this is one such example, but we don't know how to do the refinement. Another result in this direction is that on GRH, Matt Young and I can get an asymptotic. Actually, strictly speaking that's not correct, so let me put it in quotes: we have a method, which we worked out in a different case, which should extend to give an asymptotic in this case, but nobody has written it down. (You're using quadratic reciprocity? — Yes, you do use quadratic reciprocity.)
Another example where we know very little is the family of quadratic twists of a modular form, with f a fixed eigenform. In this case we know only the first moment, k = 1. Again this is not strictly speaking in this exact form, but in principle it was done by the Murtys and by Bump, Friedberg and Hoffstein; they dealt with derivatives and obtained the first moment in that setting, but the same method works for the central values themselves. Knowing this for k = 2 would also be an interesting extreme case, and we are almost there: the same result of Heath-Brown gives the right upper bound for this moment, x^{1 + epsilon}. Again, we don't quite know how to do this unconditionally, but on GRH — and this is what Matt and I actually proved — we get an asymptotic. This asymptotic does not verify the full polynomial that you're supposed to see; it only provides the leading order term, and you can't say anything more. Now, another kind of result, done recently in work of Conrey, Iwaniec and myself, concerns what happens if you average over q as well. You would like to prove something where one can see this 42 that appears in the moment conjectures: the sixth moment of Dirichlet L-functions, averaged over chi mod q and over q, should be 42 times something. There are about Q^2 characters in the average over q up to Q, and the power of log should be log^9; the a_k's (here a_3) average out, in some sense. We would like to prove exactly that result as stated. We don't quite do that, but we do something pretty close. Replace the central value by the completed L-function at 1/2 + iy, and introduce a little integral over y — without any weight, if you like, because it's the completed L-function.
Since it's the completed L-function, the gamma factor decreases exponentially as y gets large, so you should think of this as a small average of the L-function over an interval of length one. For this we can verify an asymptotic formula — in fact, we prove something more precise: we get the full asymptotic expansion conjectured by Conrey, Farmer, Keating, Rubinstein and Snaith, with an error term of size something like q^{1.9}, so a power saving in the remainder term. Then there is very nice work of Chandee and Li, from maybe a couple of years back, where they prove the analog of this for the eighth moment, and the main thing is that they see the 24024 that was conjectured. In that result you don't get a power saving in the remainder term — it's really just verifying the leading asymptotic, not the full asymptotic expansion — and it is conditional on GRH. But it is perhaps the largest moment for which we can check these moment conjectures so far. Okay. So those are the special results that are known. What I'll do in the next couple of lectures is discuss what to do when we cannot prove asymptotics: we might still want good upper and lower bounds for these moments. I'll discuss lower bounds now, and in the next lectures I'll discuss upper bounds. The one-line summary is that we can get the right lower bounds, up to constants, in essentially any family in which we can compute some moment. Usually you would like to compute the smallest moment you can — say the first moment, or rather the first moment "plus epsilon", which I'll explain: you should be able to insert a short Dirichlet polynomial and still be able to compute the moment. Then this implies the correct lower bounds, apart from constants, for all larger moments.
Let me try to explain how this works. This is something which was developed by Rudnick and me, with a recent extension by Maksym Radziwiłł and me. Results like this for the zeta function are classical, going back maybe to Titchmarsh: you can get a lower bound for the 2k-th moment of zeta(1/2 + it) of the right order of magnitude. You can think of that as just an application of Bessel's inequality. Zeta(s)^k is a complicated object when k is large, but at least you can understand its inner products with functions like n^{it}, for small n, up to t^{1 - epsilon}. Bessel's inequality then says that if you understand these inner products, the mean square of zeta(s)^k is bounded below by, basically, the sum of the squares of those inner products. But this proof doesn't by itself extend nicely to families of L-functions, and you can also ask what happens when k is not a natural number. Here there was work by Ramachandra and by Heath-Brown, who got the same lower bound whenever k is a positive rational number. You might ask: if you have the bound for all rational numbers, why don't you have it for all real numbers? The reason is that the bound they obtain depends on the height of the rational number k — the constant depends not just on k but on its height — so you get a slightly weaker result for all real k. So now let me first describe the work with Rudnick, and let me illustrate it by giving a proof that the sum over fundamental discriminants d up to x of L(1/2, chi_d)^k — let me first do all k which are natural numbers, and then I'll extend to rationals in a little bit — has a lower bound of the form x (log x)^{k(k+1)/2}.
The main input, as I said, is that if I can compute the first moment "plus epsilon", then I can produce results of this type. Let me illustrate what I mean by that. I would like to compute the first moment, the sum over fundamental discriminants d up to x of L(1/2, chi_d), twisted by chi_d(l), say, where l is some number which is a small power of x — maybe l up to x^{1/100} or something. You can do much better, but let's start with this. How should I estimate this? Well, first start with a relation for the L-function L(1/2, chi_d) called the approximate functional equation — though there are many exact versions of it, so there's nothing approximate about it except the name. It says that you can essentially write the L-function as two sums which go up to roughly the square root of the conductor: sums of chi_d(n) / sqrt(n) for n up to basically sqrt(x). (There would be a sign of the functional equation involved, but for quadratic characters the sign is always 1, so I can write it as just two copies of this kind of object. Also, it should not really be sqrt(x); it should be like sqrt(d)/pi or something like that, but let's not worry about that.) So put this in and exchange sums: the moment becomes twice the sum over n up to sqrt(x), of 1/sqrt(n), of the sum over d up to x of the Jacobi symbols (d/n)(d/l). (Sorry, what is written here? — It's (d/l).) So I take the sum over n outside and bring the sum over d inside, and I would like to understand twice the sum over n up to sqrt(x) of the sum over d up to x of (d/nl).
Now, instead of just viewing this as a sum over the characters chi_d, by quadratic reciprocity I can view it as a character sum to the modulus n times l. And n is fairly small — only of size sqrt(x) — and l is fairly small, only some small power of x. So it's a character sum where d is pretty large and the modulus nl is pretty small, and I can understand character sums of that type quite easily, even by something as simple as Pólya–Vinogradov. Of course, the larger the moment, the more complicated the argument needed to handle this would be; but in this case even a simple Pólya–Vinogradov argument implies that I only have to worry about the terms where the character is principal — that is, where n times l is a square. Those give the main term, and everything else goes into the remainder term. Okay, so what is the main term? If I write l = l_1 l_2^2, where l_1 is squarefree, then n has to be l_1 times some other number squared, n = l_1 m^2. So one can now work this out fully: the answer, apart from some constants, is x / sqrt(l_1), times the sum of 1/m over the appropriate range, which gives a log of sqrt(x)/l_1. So that's the answer for this twisted first moment, and you can see that we can evaluate it quite easily, with a very good main term and a good remainder term, at least for small values of l. Okay. Now we can use this to get bounds for all the moments, as follows. I will look at the sum over d of L(1/2, chi_d) times A(chi_d)^{k-1} — let's say k is an integer — where A(chi_d) is a truncation: something I can think of as an approximation to the L-function, but not quite.
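Schematically, with constants suppressed (this is my summary of the computation just described, not a precise statement):

```latex
% Twisted first moment: the diagonal terms are those with n\ell = \square.
% Writing \ell = \ell_1 \ell_2^2 with \ell_1 squarefree forces n = \ell_1 m^2.
\sum_{0 < d \le x} L(\tfrac12,\chi_d)\,\chi_d(\ell)
  \;\approx\; c \cdot \frac{x}{\sqrt{\ell_1}}\,
  \log\frac{\sqrt{x}}{\ell_1}
  \;+\; \bigl(\text{small remainder}\bigr).
```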
If the truncation length were like sqrt(x), then A would really be a good approximation to the L-function; but I would like to choose it to be some small power of x, maybe x^{1/(100k)}. If I choose that, then even when I expand A(chi_d) to some large power, it is still a short Dirichlet polynomial, and so I should be able to evaluate the whole object, the sum of L(1/2, chi_d) A(chi_d)^{k-1}. I'll finish this calculation after lunch. On the other hand, just by using Hölder's inequality, this sum is bounded by the k-th moment of L(1/2, chi_d) to the power 1/k, times the sum of A(chi_d)^k to the power (k-1)/k. And I can evaluate the latter too, because it's just a short Dirichlet polynomial. Then of course, if you compare those two, you get a lower bound for the k-th moment of L(1/2, chi_d); and, as I'll show, you don't lose anything in this argument — you get the right lower bound. So let me stop here and I'll finish off this proof afterwards. Are there any questions? (Do you have a heuristic for fractional moments? — Is the heuristic CFKRS? I think they don't have a good heuristic for fractional moments based on extrapolating this recipe. For example, I think it will not be a polynomial, and I don't think anyone has a good idea of what the conjecture should be there. — Something that obtains a power saving? — CFKRS conjecture, for these integer moments, that there should be an error term of size about the square root of T, or square root of x, or whatever; that conjecture may or may not be true. But there is not even a plausible conjecture of that kind for fractional moments.) Thank you again.
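The Hölder step just described, written out (my notation, with A the short Dirichlet polynomial above):

```latex
\sum_{d\le x} L(\tfrac12,\chi_d)\,A(\chi_d)^{k-1}
\;\le\;
\Bigl(\sum_{d\le x} L(\tfrac12,\chi_d)^{k}\Bigr)^{1/k}
\Bigl(\sum_{d\le x} A(\chi_d)^{k}\Bigr)^{(k-1)/k},
\]
so that
\[
\sum_{d\le x} L(\tfrac12,\chi_d)^{k}
\;\ge\;
\frac{\bigl(\sum_{d\le x} L(\tfrac12,\chi_d)\,A(\chi_d)^{k-1}\bigr)^{k}}
     {\bigl(\sum_{d\le x} A(\chi_d)^{k}\bigr)^{k-1}}.
```

Both sums on the right involve only short Dirichlet polynomials (together with the twisted first moment), which is exactly what the "first moment plus epsilon" computation supplies.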