So I'll continue from last time. We were talking about upper bounds for moments of L-functions, and everything in this talk is joint work with Maksym Radziwiłł. I told you last time that the principle here is complementary to what I described earlier for lower bounds, where if you have an asymptotic for some moment, you get the correct lower bounds for all higher moments. Here it's the complementary principle: if you have an asymptotic for some moment, with a little epsilon of room, then you get the right upper bound for all smaller moments. And I'm going to illustrate this with one particular family: the family of quadratic twists of an elliptic curve.

The specific curve is not so important. Say you're given some elliptic curve E: y² = f(x), for some cubic polynomial f over the integers. We want to look at those twists by fundamental discriminants d for which the twisted curve E_d has sign of the functional equation equal to 1, since in the other case the L-function is trivially 0. Then we want to understand moments of L(1/2, E ⊗ χ_d). Or again, thinking of the analogy with Selberg's central limit theorem and the Keating–Snaith conjectures, we might also want to understand the distribution of log L(1/2, E ⊗ χ_d).

This is an example of a family where we can compute very little. Usually you can compute one or two moments; in this case exactly one moment is known. But it is known with some degree of flexibility: you can put in a short Dirichlet polynomial — in fact, a fairly long Dirichlet polynomial — and still evaluate the first moment. So that's the "1 plus epsilon" kind of situation that we know. In particular, for all moments larger than the first, you get the right lower bound by the method of Rudnick and me and its extensions by Maksym and me.

Now let me state the kind of results we can prove. First, since we know the first moment, we can prove the following upper bound for all k less than 1: the k-th moment is at most the conjectured asymptotic, whose exponent of log x in this case is k(k − 1)/2. Secondly, there is the analog of the Keating–Snaith conjecture, which is the analog of Selberg's theorem: if you look at log L(1/2, E ⊗ χ_d), the conjecture of Keating and Snaith is that this is approximately normal with mean −(1/2) log log x and variance log log x — here I'm thinking of the discriminants d as being of size about x. As Andrew mentioned at the end of his talk, this is a very powerful conjecture, because it implies Goldfeld's conjecture in the form that when the sign of the functional equation is positive, then 100% of the time the rank is 0. So this is a refinement of that conjecture, if you like.
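To record the two statements just made in one place — this is my paraphrase of what was said, writing E(x) for the set of fundamental discriminants of size up to x in our family with root number +1, and C_{k,E} for some unspecified positive constant:

```latex
\frac{1}{|\mathcal{E}(x)|}\sum_{d\in\mathcal{E}(x)} L(\tfrac12, E\otimes\chi_d)^k
\ \sim\ C_{k,E}\,(\log x)^{k(k-1)/2}
\quad\text{(conjecturally, for each fixed $k>0$)},
```

```latex
\frac{\log L(\tfrac12, E\otimes\chi_d) + \tfrac12\log\log x}{\sqrt{\log\log x}}
\ \longrightarrow\ N(0,1)
\quad\text{as $d$ varies over $\mathcal{E}(x)$, $x\to\infty$.}
```

The second statement is the Keating–Snaith conjecture for this family; in particular it forces the central value to be non-zero for 100% of the twists.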
[Audience discussion.] Goldfeld's conjecture, of course, is usually stated for the full family of quadratic twists — 50% rank 0 and 50% rank 1 — so in what family is this? Well, he stated the conjecture for quadratic twists, but just by twisting by a character you control the root number, and you can pass to whatever subfamily you like; here we are in the family where the sign is +1. [Question about whether curves with additive reduction need to be excluded.] There's no problem there: the additive reduction controls the root number very easily, so a plain statement like that is fine. But what are you disagreeing with — that 100% of the L-values are non-zero here? One can certainly write down special families — if you look at a family of rank-2 twists, say, you'd get a different answer — but that's a specialized family, and I am specializing the family here: for this particular family of quadratic twists, with the sign of the functional equation being 1, the conjecture says 100% non-vanishing, and it implies this conjecture of Goldfeld that 100% of the time the rank is 0. I don't know the history of it — whether somebody stated this refinement before; it's not a deep refinement. OK, that's between you and Mr. Goldfeld.

OK, so proving this Keating–Snaith conjecture is quite hard, because even if you assume GRH, we don't know how to prove that 100% of the time these central values are non-zero. But what we can prove is one half of this Keating–Snaith conjecture, namely an upper bound. So let me state this. [Another exchange with Henryk, about earlier non-vanishing results.] I agree, Henryk, but those are for more complicated families; the root number does not do anything interesting here — it was the one thing that was well understood.

Maybe to make the point: the root number in this family is very simple. It just depends upon the arithmetic progression that d lies in. Say the conductor of the curve E is fixed to be some number N — and maybe I should also specify that d is assumed coprime to N and odd. Then it's a very simple thing to write down what the sign of the functional equation is, and it depends only on the progression of d. It's easy to compute, and you can change it by twisting by some kind of character, so you can easily control it; it's not a big deal in this case. [Inaudible exchange about curves with additive reduction.]

OK. So in this case, what we can prove is the following. Look at the number of discriminants d up to x that lie in this set E, with the sign of the functional equation equal to 1, and count those for which the log of the L-function is large. We want to take into account that the mean is expected to be the negative quantity −(1/2) log log x, so let's subtract the mean — which is the same as adding (1/2) log log x — divide by the square root of the variance, and count those d for which this quantity is bigger than some V. Now, this implicitly assumes, for example, that the value is not 0, so you cannot prove a lower bound for this count. But what we can prove is an upper bound: the count is at most the number of discriminants up to x lying in E times what you would guess, namely the Gaussian tail, up to a factor 1 + o(1). In other words, what we would like very much is an equality here, which we don't know how to prove, but we can prove a one-sided bound towards that conjecture.
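Concretely, the theorem has the following shape (again my paraphrase of what was said; V > 0 fixed, or growing slowly with x):

```latex
\#\Big\{ d\in\mathcal{E}(x) :\ \frac{\log L(\tfrac12,E\otimes\chi_d)+\tfrac12\log\log x}{\sqrt{\log\log x}}\ \ge\ V \Big\}
\ \le\ (1+o(1))\,|\mathcal{E}(x)|\,\frac{1}{\sqrt{2\pi}}\int_V^{\infty} e^{-t^2/2}\,dt .
```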
Now, for this family of quadratic twists, you can get an interpretation of this result for orders of the Tate–Shafarevich group. Let's assume, if you like, the Birch–Swinnerton-Dyer conjecture — though you really only need to assume it for rank zero twists, where it's almost a theorem, but not quite. Almost all of it is known; the group is known to be finite in that setting. But on the other hand, we don't know exactly what powers of very small primes divide its order, so you can't quite control the full order: you don't know how many powers of 2 or 3 divide it. I tried to look in the literature for some clean statement of what I'm going to write down, but it doesn't really seem to exist.

So, by Birch–Swinnerton-Dyer, the order of Sha(E_d) should be equal to this L-value times various factors: the square of the order of the torsion group over Q, some real period Ω_d, and some Tamagawa factor — and I think what I've described is roughly correct. The torsion is a bounded object, and it's not very hard to understand how it behaves as a function of d. The real period, roughly speaking, decays like 1/√d. So, in other words, the order of Sha essentially goes like √d times the L-value, divided by the Tamagawa factor.

The last thing to understand is this Tamagawa factor. It factorizes as a product over primes of some function t_p(d), so it's a local computation. Most of these factors are 1, and you essentially only have to worry about the primes p dividing d. (Maybe you also have to worry about the primes dividing the conductor N, but that's a fixed set of primes, so we don't have to worry about it.) If p divides d, then t_p(d) can be described as 1 plus the number of solutions of the congruence f(x) ≡ 0 (mod p), where f is the cubic polynomial appearing in the definition of our curve y² = f(x).

So now you can see that the Keating–Snaith conjecture on the log-normality of the L-values should translate into a conjecture on the log-normality of Sha as you vary over this family of quadratic twists. Everything is easy except for this Tamagawa factor, and the Tamagawa factor is not so hard, because it only depends in a nice way on the primes that divide d. If you carry that out, then the conjecture that Maksym and I make is the following. You have to understand how these Tamagawa factors behave. Let K be the field generated by the 2-torsion points of E. This is a Galois field whose degree over Q is 1, 2, 3, or 6; let G be the Galois group of K over Q. The Galois group is a subgroup of GL₂(F₂), if you like, or — as I'm going to think of it — a subgroup of S₃: it could be the trivial group, or generated by a 2-cycle or a 3-cycle, or all of S₃. Then define the following parameter: for g in G, let c(g) be 1 plus the number of fixed points of g, interpreting g as a permutation in S₃ acting on the three roots of f. So c(g) is always 1, 2, or 4 — exactly as t_p(d) is always 1, 2, or 4. That's the sense in which c(g) is supposed to model these t_p's.
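As a quick illustration of these local factors, here is a minimal sketch in code — assuming the recipe t_p(d) = 1 + #{x mod p : f(x) ≡ 0} exactly as stated, and ignoring the caveats at 2, 3, and primes dividing the conductor. The sample curve y² = x³ − x is my choice, not one from the talk:

```python
def roots_mod_p(f_coeffs, p):
    # number of x in {0, ..., p-1} with f(x) = 0 (mod p);
    # f_coeffs lists the coefficients of 1, x, x^2, x^3
    return sum(1 for x in range(p)
               if sum(c * pow(x, j, p) for j, c in enumerate(f_coeffs)) % p == 0)

def prime_divisors(n):
    n, ps, p = abs(n), [], 2
    while p * p <= n:
        if n % p == 0:
            ps.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        ps.append(n)
    return ps

def tamagawa_model(d, f_coeffs):
    # product over p | d of t_p(d) = 1 + #roots of f mod p
    t = 1
    for p in prime_divisors(d):
        t *= 1 + roots_mod_p(f_coeffs, p)
    return t

# E: y^2 = x^3 - x has full 2-torsion, so f has three roots mod every odd p,
# and each local factor is 4; e.g. d = 7 * 23 * 31 gives 4^3 = 64.
print(tamagawa_model(7 * 23 * 31, [0, -1, 0, 1]))
```

The point of the model: for p not dividing the conductor, the number of roots of f mod p equals the number of fixed points of the Frobenius at p acting on the three roots, so by Chebotarev the factor t_p behaves like c(g) for a random element g of G.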
Then I want to define two parameters: the mean and the variance in the log-normality conjecture for the Tate–Shafarevich group that I want to make. The mean is μ_E = −1/2 − E[log c(g)]: the −1/2 is the minus half from the Keating–Snaith conjecture, modified by the expected value over the group of log c(g) — that's the contribution of these Tamagawa factors, which appear in the denominator. And the variance is σ_E² = 1 + E[(log c(g))²]: the Keating–Snaith variance contributes the 1 in front of log log x, and then there is a variance coming from the Tamagawa factors, which is this expectation. This is the kind of thing you would expect if you play around with the Chebotarev density theorem and a simple Erdős–Kac type argument: an Erdős–Kac argument tells you that the logarithm of the Tamagawa factor is Gaussian with a certain mean and a certain variance, and these are the mean and variance that come out.

If you put everything together, the conjecture is: as d varies over this family E, log of |Sha(E_d)| divided by √d — the √d coming from the size of the real period — is approximately normal with mean μ_E log log x and variance σ_E² log log x. That's the conjecture. This is related to some work of Delaunay, who made somewhat related conjectures for moments of |Sha(E_d)|, but he didn't quite formulate it in this sense of being log-normal. And towards this conjecture, again, we can prove one half: an upper bound for the frequency of large values.

[Question: what are the values of μ and σ in each case?] There are four possible values; they're written down in our paper, but this is a unified way of thinking about what all four are. Clearly, if E has complete 2-torsion, you can figure out what this is: it's just −1/2 − log 4 for μ, and then 1 + (log 4)² for the variance. For the other cases you have to work out what they are. As I said, we wrote down all four cases, and you don't see any obvious pattern in them; this is one way of describing it. I found it convenient to describe it just in terms of S₃; there are some ways you could try to describe it in terms of GL₂(F₂), which is the same group and maybe more natural if you think of the 2-torsion points, but there's nothing terribly illuminating about it. I asked Brian Conrad about it as well, and he didn't have a very illuminating expression either.

So the theorem: consider log(|Sha(E_d)|/√d) minus μ_E log log x, divided by √(σ_E² log log x); the number of d up to x in E for which this is bigger than V is at most what you would expect, namely the Gaussian tail times the number of discriminants.

[Audience: so the easy parts of these computations are done, and somebody has to do the hard part of showing that the two sides actually match. Also, a comment that the quantity, normalized by √(log log x), is a mixture of something Gaussian and something that's really Poisson.] Well, the other piece is also Gaussian, by Erdős–Kac — though you're right that at finite scale an Erdős–Kac quantity is really better approximated by a Poisson distribution, and the Gaussian only emerges in the limit. OK, that's fair.
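To spell out where the four pairs (μ_E, σ_E²) come from, here is a small sketch — assuming the mean and variance shifts enter exactly as described above, as the group averages of log c(g) and (log c(g))², with c(g) = 1 + #fixed points of g on the three roots of f. The full-2-torsion line reproduces the −1/2 − log 4 and 1 + (log 4)² just quoted; the other three lines are my computation, to be checked against the paper:

```python
import math

# subgroups of S3, each element recorded by its number of fixed points on 3 letters
SUBGROUPS = {
    "trivial (full 2-torsion over Q)": [3],               # identity only
    "C2 (one rational 2-torsion point)": [3, 1],          # identity + transposition
    "C3": [3, 0, 0],                                      # identity + two 3-cycles
    "S3 (generic cubic)": [3, 1, 1, 1, 0, 0],
}

for name, fixed_points in SUBGROUPS.items():
    logs = [math.log(1 + f) for f in fixed_points]        # the values log c(g)
    mu = -0.5 - sum(logs) / len(logs)                     # -1/2 - E[log c(g)]
    sigma2 = 1 + sum(t * t for t in logs) / len(logs)     # 1 + E[(log c(g))^2]
    print(f"{name:40s} mu = {mu:+.4f}   sigma^2 = {sigma2:.4f}")
```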
[Question: do you gain anything by normalizing by something slightly less than √(log log x), or is this the natural normalization?] This is the natural way to normalize, I think, in this case. You could do other things. In some sense, the Tamagawa factor is an artifact of the fact that we are looking at all discriminants rather than just, say, prime discriminants. And we can prove analogs of our theorems for prime discriminants too — we can prove an upper bound result there, and in that case all these μ_E and σ_E disappear and are just replaced by the plain Keating–Snaith values. But it was kind of nicer to write down something with four cases rather than something with one case. [You still assume BSD, right?] In this we assume BSD for rank zero curves, which I think of as a mild assumption because it might be removed any day — but so far not.

OK, so let me tell you something about the proofs of these theorems, and then I'll try to connect it back to the conditional theorems about moments — where, if you assume GRH, you can prove the right upper bounds for all moments, thanks to my earlier work and Harper's recent work. These techniques also give you a way of thinking about that; they're actually quite closely related to Harper's work. We were working on this independently of him and realized afterwards that the two arguments are very close to each other.

Let me start with the upper bound in the central limit theorem, because the proof is in some ways extremely simple. In fact, the same techniques give a much simpler proof of Selberg's original theorem on the normality of log |ζ(1/2 + it)|. The proof also goes backwards in history a little bit: in problems involving the zeta function a hundred years back, people started thinking about these questions using Euler products — this is the work of Bohr and Jessen and others very early on — and then, as time progressed, people started replacing Euler products by Dirichlet series, as in Selberg's theorem and mollification arguments and so on. Now we're going to go backwards and replace Dirichlet series by Euler products again. Dirichlet series are much easier to deal with in a certain sense, but Euler products are really much more flexible for thinking about many of these problems. And they're also related to sieves, in a way that I'll explain.

In our discussion of Selberg's central limit theorem, I made the case that you can try to write down the logarithm of the zeta function, pretend it's given by a Dirichlet series over primes, and compute moments of that Dirichlet series, restricted to some range. That's what we're going to do. Define P(d) to be the sum, over primes p up to some small point z — something like z = x^{1/(log log x)²}, a very small power of x — of a(p)χ_d(p)/√p. Here the a(p) are the normalized coefficients of my L-function for the elliptic curve, which I write as Σ a(m)/m^s, normalized in such a way that |a(p)| ≤ 2.

Now, I claim that I can understand everything about this object on average. If I average over d up to x, with d in my set of fundamental discriminants E, I can compute as many moments of P(d) as I like: I can evaluate the k-th moment for a wide range of values of k — essentially all k up to (log log x)²/10.
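Here is a toy numerical model of P(d) — purely illustrative, and not the actual averaging over discriminants: I replace χ_d(p) by independent random signs and draw the coefficients a(p) from the Sato–Tate distribution (both are modeling assumptions on my part), and check that the sample mean and variance of P come out near 0 and Σ a(p)²/p, which is of size log log z:

```python
import math, random

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [p for p in range(2, n + 1) if sieve[p]]

def sato_tate_ap():
    # rejection sampling from the density (2/pi) sin^2(theta); a_p = 2 cos(theta)
    while True:
        theta = random.uniform(0.0, math.pi)
        if random.random() <= math.sin(theta) ** 2:
            return 2.0 * math.cos(theta)

z = 10 ** 5
ps = primes_up_to(z)
a = {p: sato_tate_ap() for p in ps}               # modeling assumption
predicted_var = sum(a[p] ** 2 / p for p in ps)    # ~ log log z (Rankin-Selberg)

samples = []
for _ in range(500):
    # modeling assumption: chi_d(p) behaves like independent random signs
    samples.append(sum(a[p] * random.choice((-1, 1)) / math.sqrt(p) for p in ps))
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
print(f"predicted variance {predicted_var:.3f}, sample mean {mean:.3f}, sample variance {var:.3f}")
```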
The point is that this is a very short Dirichlet polynomial: when you expand the k-th power out, you get terms which only go up to x^{1/10}, and you can evaluate the average over d of any such short Dirichlet polynomial.

And what will the answer be? When you expand this out and average over these real characters, recall that you get a contribution only from square terms: the average of χ_d(ℓ) is small unless ℓ is a square, where there's a main term. So if k is odd, then when you expand you get an odd number of primes appearing, and you get complete cancellation: the odd moments all vanish. What happens to the even moments? You need the product p₁ · · · p_k, with k even, to be a square, and the main way k primes can multiply to a square is for half of the primes to pair up with the other half. The number of ways of pairing them off is exactly the Gaussian moment count, so these moments will work out and match the moments of a Gaussian. The mean of the Gaussian is zero, because the first moment of P is zero. And the variance: you need each prime to match with another prime, so you get the sum over p up to z = x^{1/(log log x)²} of a(p)²/p, and by Rankin–Selberg this is asymptotic to log log x. You can see why log log x is so robust in these problems: the sum is really log log of x^{1/(log log x)²}, but that's exactly log log x with very little error.

So, at least formally, this short truncation of the logarithm has a Gaussian distribution with mean zero and variance log log x. The mean is zero because I did not include here the p² terms, which are what would contribute the −(1/2) log log x.

So now I'm trying to explain the theorem. Suppose we want to count the discriminants for which the counting condition holds — call it (★): the event that log L(1/2, E ⊗ χ_d) + (1/2) log log x ≥ V √(log log x). Suppose (★) holds. Then I claim that one of the following must happen. Either (1): P(d) is bigger than (V − ε)√(log log x). This is really what I would like, because I would like to say that the logarithm of the L-function behaves like this polynomial P(d), once I subtract out the mean coming from the p² terms. Or (2) — a slightly technical condition, which I'll explain a little; it's just a nuisance and not very important — P(d) happens to be very small, very negative. Or (3): both (1) and (2) fail, but it still happens that L(1/2, E ⊗ χ_d) times (log x)^{1/2} — the (log x)^{1/2} recentering the mean — times exp(−P(d)) is big. Indeed, the assumption (★) is that the L-value is big, and when (2) fails, exp(−P(d)) is not too small, so this product must still be bigger than exp(ε√(log log x)) — still pretty big. So one of these three cases must happen, and if I can estimate the probability with which each case happens, then I'm happy.
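Schematically, the trichotomy is (my paraphrase, with ε > 0 a small fixed parameter; the precise threshold in (2) is flexible — anything far out in the Gaussian tail works):

```latex
\begin{aligned}
&(1)\quad \mathcal{P}(d)\ \ge\ (V-\varepsilon)\sqrt{\log\log x};\\
&(2)\quad \mathcal{P}(d)\ \le\ -\log\log x;\\
&(3)\quad L(\tfrac12, E\otimes\chi_d)\,(\log x)^{1/2}\,e^{-\mathcal{P}(d)}\ \ge\ \exp\!\big(\varepsilon\sqrt{\log\log x}\big).
\end{aligned}
```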
Well, the first case we understand already, because I know that P(d) is normal with the right mean and the right variance, so the probability of (1) is dominated by the Gaussian tail — it's in fact approximately equal to the Gaussian tail. (I didn't realize that you were inviting other people to heckle me as well.) Case (2) is also very easy to handle: since P(d) normalized by √(log log x) is Gaussian, and log log x is much bigger than √(log log x), this event is still far out in the tail of the Gaussian, so it happens rarely.

The last thing is to bound how often the product in case (3) can get large. One way would be to compute the average over d of L(1/2, E ⊗ χ_d) exp(−P(d)) — forget about the power of log x. If I can evaluate this, and it turns out to be small, then I know it cannot happen very often that the product is as large as exp(ε√(log log x)). So evaluating this average would be enough. But it is not so easy to evaluate, because this is what I mean by multiplying not by a Dirichlet series but by an Euler product: exp(−P(d)) is like the Euler product, over the primes up to z, of some function. [Henryk: if you mollify with an Euler product over primes up to some point, you can take the inverse.] That's right — that's exactly why we deal with Euler products and not with Dirichlet series. That's the trick. Euler products have the advantage that I can invert the exponential, which I would not be able to do with Dirichlet series, as Henryk points out.

But we can get back to Dirichlet series from the Euler product, using a very simple calculus fact: if t is small, then e^{−t} is well approximated by its Taylor series. If you truncate the Taylor series at length ℓ, it is a good approximation provided |t| is not too big compared to ℓ — say |t| ≤ ℓ/10. This is easy enough to verify. And we are exactly in that situation: if (1) and (2) fail, it means P(d) is not too large and not too negative. So — where was P(d)? It was up there; let me say I defined it by taking p up to z = x^{1/(10 (log log x)²)} — I forgot my factor of 10, as usual. Using this, I can replace exp(−P(d)) by the sum over j from 0 to about log log x of (−1)^j P(d)^j / j!. In the case I'm interested in, where (1) and (2) fail, this is a good approximation to exp(−P(d)). And if you make the truncation length an even number, the truncated series is also always positive, which is another useful thing for us.

So instead of evaluating the moment with exp(−P(d)), which I don't know how to evaluate, I replace it essentially by this Dirichlet-polynomial approximation and compute the moment of that — and that's enough for us. The key point is that since I chose z so small, even raising this polynomial to the power 2 log log x only produces terms of size about x^{1/(5 log log x)}, which is still a very short Dirichlet polynomial. So I can evaluate everything. That's the idea of the proof.
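The two facts about the truncated exponential that get used — positivity for even truncation lengths, and good approximation when the argument is at most about a tenth of the length — are easy to check numerically; a quick sketch:

```python
import math

def trunc_exp_neg(t, J):
    # partial sum  sum_{j=0}^{J} (-t)^j / j!  of the series for e^{-t}
    term, total = 1.0, 1.0
    for j in range(1, J + 1):
        term *= -t / j
        total += term
    return total

J = 40  # an even truncation length
for t in (1.0, J / 10, J / 2, 2.0 * J, -J / 10):
    approx, exact = trunc_exp_neg(t, J), math.exp(-t)
    print(f"t = {t:7.1f}: truncated = {approx:.6g}, exp(-t) = {exact:.6g}, "
          f"positive: {approx > 0}")
# For even J the partial sum is positive for every real t, and for |t| <= J/10
# it agrees with exp(-t) to many digits; once t is comparable to J it badly
# overestimates -- which is harmless when it is only used as an upper bound.
```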
And to prove the other central limit theorem, for Sha, it's a very similar argument: you combine this with an Erdős–Kac argument to show that the Tamagawa factors behave nicely, and then you also have to check that the Tamagawa factors don't interfere with the L-function; but that's easy enough to arrange as well.

OK. So now let me quickly describe the argument for our moment bounds. [Henryk: here you lose something, but you don't care what you lose.] Yes — here you lose something but don't care; in the next argument I'll show you why you lose nothing, and it's connected to sieve ideas. As Henryk says, the fact that we stop P(d) at some very small power of x usually means there is a loss of logs. That's true: you lose some powers of log log, but that's OK here, because the e^{ε√(log log x)} is much bigger than any power of log log that you need to lose. But next I want to explain how we get the sharp bounds for moments of L-functions, where you cannot afford to lose these powers of log log. Let me tell you how that argument works — or rather, let me first explain where the argument comes from.

The argument really comes from Brun's sieve, which Henryk started talking about today. Think of the very first Brun sieve, the pure Brun sieve. Suppose you want to count the primes up to x, and you want to sieve by the primes up to some level z. You try to do this by inclusion–exclusion: you want to understand the sum over n up to x of the sum, over d dividing both n and P(z) — the product of the primes up to z — of μ(d). This is the sieve of Eratosthenes–Legendre. Brun's idea is that you get an upper bound by restricting to those d for which you stop at an even number of steps in the inclusion–exclusion, say ω(d) ≤ 2k. That is Brun's upper bound for the number of primes, and it's a very powerful method. What is important here is how you choose the truncation k: how far do you have to go?

There are two balancing forces when you carry this argument out. On one side, at each stage of the sieve, each term A_d carries an error which in this case is O(1), so the total error in the sieve is of the order of the number of integers with at most 2k prime factors, all of them up to z — crudely, at most z^{2k}. On the other side, there's the main term, which looks like the sum over d | P(z), d ≤ x, with ω(d) ≤ 2k, of μ(d)/d — the condition d ≤ x is perhaps not important, but the condition ω(d) ≤ 2k certainly is. And the question is whether this truncated sum still approximates nicely, as we would like, the product over p ≤ z of (1 − 1/p), which it should if this is to be a good sieve. So you want to choose k large enough that this is true, but small enough that the error term z^{2k} is not too big. That's the game in Brun's sieve.
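A minimal numerical illustration of the truncated inclusion–exclusion (the parameters are tiny and chosen only for speed): truncating at an even level 2k always gives an upper bound, by the Bonferroni inequalities, and the bound stabilizes once 2k is large enough:

```python
from itertools import combinations

def primes_up_to(n):
    return [p for p in range(2, n + 1) if all(p % q for q in range(2, p))]

def brun_upper_bound(x, z, two_k):
    # sum over squarefree d | P(z) with omega(d) <= 2k of mu(d) * floor(x/d)
    ps = primes_up_to(z)
    total = 0
    for r in range(two_k + 1):
        for combo in combinations(ps, r):
            d = 1
            for p in combo:
                d *= p
            total += (-1) ** r * (x // d)
    return total

def exact_count(x, z):
    # integers n <= x with no prime factor <= z
    ps = primes_up_to(z)
    return sum(1 for n in range(1, x + 1) if all(n % p for p in ps))

x, z = 100000, 13
print("exact:", exact_count(x, z))
for two_k in (0, 2, 4, 6):
    print("truncation level", two_k, "->", brun_upper_bound(x, z, two_k))
```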
Now, the original idea of Brun — Henryk, this is not a historical comment, just a sketch — is to think of the product over p ≤ z of (1 − 1/p) as, morally, the exponential of −Σ_{p ≤ z} 1/p. And the terms in the sieve with ω(d) = j — the sum over d | P(z) with j prime factors of μ(d)/d — are like the j-th term of the Taylor expansion of this exponential, (−1)^j/j! times (Σ_{p ≤ z} 1/p)^j. So the truncated sieve sum is like the first 2k terms of the Taylor series of the exponential, and the question is: when do the first 2k terms of the Taylor series approximate the exponential? Well, 2k has to be larger than the size of the exponent, which is just log log z. In other words, we get a good approximation as long as k is bigger than log log z. That's what you catch.

So now you can see Brun's choice: k is like log log z, and if the error term z^{2k} is to be small, z is chosen to be something like x^{1/(10 log log x)}, and then k to be 2 log log x — and you're done; you have a sieve that works. The only problem with this sieve is that you are not able to sieve up to a small fixed power of x, which is what we would like in order not to lose log logs; you have to stop at this much smaller power of x.

But you can iterate this pure Brun sieve. For free, you can sieve up to x^{1/(10 log log x)}. The next step is to use this sieve first — d | P(z) with z = x^{1/(10 log log x)}, the truncation taken to be an even number — and then put in the next set of primes: those p lying between x^{1/(10 log log x)} and x^{1/(10 log log log x)}. You see what I want to do here: for this new range I can get away with truncating at only twice log log log x. And I claim I can do this with no loss as well, because the numbers involved are only of size x^{1/10} from the first range times another x^{1/10} from the second, maybe going up to x^{1/5} — one does this carefully so that the exponents never add up to more than a fixed power of x. So the error terms are completely under control. And why is truncating at the triple log a good approximation in the new range? Because the sum of the reciprocals of the primes in that range is small: it's only about log log log x now, instead of log log x.

And now you keep iterating. At each stage you get some number of primes, and I want to set it up this way because this is what I'm going to transport to the moment bounds. So you start out with some ℓ₁, which might be 100 log log x; then take ℓ₂ = 100 log ℓ₁; then ℓ₃ = 100 log ℓ₂, and so on. At stage j you use the primes lying between x^{1/(10ℓ_{j−1}²)} and x^{1/(10ℓ_j²)}, and you take numbers with at most 2ℓ₁ prime factors from the first range, at most 2ℓ₂ from the second, and so on. That's a version of Brun's sieve — a kind of iterated pure Brun sieve — and it does not lose anything more than a constant: you gain back the powers of log log that you lose in the usual Brun sieve.
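A sketch of the bookkeeping with the parameter choices just quoted (the constants 100 and 10 are as stated in the talk; the toy value of log x is mine). The lengths ℓ_j decrease to the fixed point of t = 100 log t, which is an absolute constant, so after a bounded number of stages the sieve reaches a fixed power of x, while the total exponent spent on error terms stays tiny:

```python
import math

def brun_stages(logx, c=100.0):
    ell = c * math.log(logx)             # ell_1 = 100 log log x
    stages = [ell]
    while True:
        nxt = c * math.log(ell)          # ell_{j+1} = 100 log ell_j
        if nxt >= ell - 0.5:             # stop near the fixed point of t = 100 log t
            break
        stages.append(nxt)
        ell = nxt
    return stages

logx = 1e6                               # toy choice: x = exp(10^6)
ells = brun_stages(logx)
for j, ell in enumerate(ells, 1):
    print(f"stage {j}: ell = {ell:9.1f}, primes up to x^(1/{10 * ell * ell:,.0f})")
# numbers built from at most 2*ell_j primes of size up to x^(1/(10 ell_j^2))
# contribute exponent at most 2*ell_j / (10*ell_j^2) = 1/(5*ell_j) per stage:
print("total exponent spent on error terms:", sum(1 / (5 * e) for e in ells))
```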
Now you can take this argument and use it to prove the bounds on moments. When I want to use an Euler product, I will use it in pieces: a piece from this range of primes, a piece from that range, and so on — I split my Euler product up. So take these ℓ_j's, and let P_j(d) be the sum of a(p)χ_d(p)/√p over the primes lying in the j-th interval, from x^{1/(10ℓ_{j−1}²)} to x^{1/(10ℓ_j²)}.

What I would like is a pointwise bound for the k-th power of the L-value which involves an interpolation: the first moment times some Dirichlet polynomial, plus some other Dirichlet polynomial. That's going to be the aim. And I don't quite know how to do that with Dirichlet series, because I would want to put in the inverse of a Dirichlet series, and I can't take inverses there. But I can do it with Euler products. So I take L(1/2, E ⊗ χ_d)^k, and with Euler products I can write down a bound — this needs to be worked out carefully, and I'm running out of time a bit — involving exp((k − 1)(P₁(d) + P₂(d) + · · ·)) multiplying the L-value, plus the exponential of k times these same sums.

[Question: why does one term carry the exponent k − 1?] So, k is going to be a real number less than 1, and I can only evaluate the first moment in this family. Since k < 1, the exponent k − 1 is negative, if you like: I take the first power of the L-function and multiply by a negative power of the Euler product to bring it down to the k-th moment. I can write down an interpolation inequality of this type with Euler products, but I cannot do it with Dirichlet series. [Remark that elsewhere one interpolates between integers.] Right — here k is between 0 and 1; in other places, like the k-th moment computations earlier, it was an integer in various places, but here it's real, I have to say.

[Question: what happens if the central value is 0?] This is still an inequality: the left-hand side is then 0, and it's bounded by the right-hand side, which is non-negative — it's an upper bound. I haven't written down the exact inequality, but a true inequality of this shape can be written down.

So how do I set this up? The idea is that I replace each of these exponentials — of (k − 1)P_j(d) or of kP_j(d), it doesn't matter which — by its Taylor series truncated at length ℓ_j, as above. The ℓ_j were all even as well, and if you take them to be even, you can replace each exponential by the truncated series as an upper bound — most of the time; not always, but most of the time. This is the idea of Brun's sieve transported to this situation. Once you do that, you are left with Dirichlet polynomials, and they are short Dirichlet polynomials, exactly as in the Brun sieve argument. Therefore you can evaluate everything you want — and if you arrange this carefully, there are no losses in the method, and you get exactly tight bounds.
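To record the shape of the interpolation step: this is my reconstruction from the description above, using the elementary inequality u^k ≤ k·u + (1 − k) for u ≥ 0 and 0 < k < 1, applied with u = L·e^{−P} (the paper's actual inequality also handles the truncations and the "most of the time" caveat):

```latex
L(\tfrac12, E\otimes\chi_d)^k
\;\le\; k\,L(\tfrac12, E\otimes\chi_d)\,e^{(k-1)\mathcal{P}(d)}
\;+\;(1-k)\,e^{k\mathcal{P}(d)},
\qquad \mathcal{P}(d)=\mathcal{P}_1(d)+\mathcal{P}_2(d)+\cdots,
```

after which each exponential is replaced, range by range, by its Taylor polynomial of even length ℓ_j. Note that both terms on the right are non-negative, so the inequality survives the L-value being 0.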
OK, I have only a few minutes, so maybe I won't tell you the simple real analysis that goes into making inequalities of this type precise — it's not very hard. Let me instead tell you, in the last few minutes, how you can get bounds for moments if you're willing to assume the Riemann hypothesis — tight bounds for all moments, assuming GRH.

The key ingredient is this. If you look at log L(1/2, E ⊗ χ_d), this log might have singularities wherever the function happens to be 0 — but there the log is to be interpreted as −∞, and we are only interested in upper bounds, so the singularities coming from zeros of L-functions on the critical line are helpful to you, and they don't hurt. That might convince you that you can prove a general upper bound for this logarithm: a sum over primes up to some point x, plus the term from the prime squares — which gives the −(1/2) log log x — plus an error term. Here x is a free parameter: you can take it as you please. Of course, if you choose it very small, you have to pay an error term of size log of the conductor over log of little x — log X over log x. If x is like a power of the conductor, there's only a constant loss; the smaller you take x, the bigger your losses. This is a lemma I proved in my work on moments, and it's very flexible, because you have complete freedom in how you choose x.

Now you want to find an analog of the Brun sieve argument, which goes as follows. There are two things. First, you can just take x to be some small power: if x is like X^{1/k}, you can compute the first k moments of this prime sum. If you play around with this, choosing little x small enough, you can compute as much as you want, and you can make precise a uniform version of the Selberg-type central limit theorem. In this way, you can show that the number of d up to x for which L(1/2, E ⊗ χ_d) is bigger than e^V behaves like e^{−V²/(2 log log x)} times the number of d. As an asymptotic this is not known at all unconditionally — for fixed multiples of √(log log x), the one-sided bound is the theorem of Maksym and me that I explained first. But on GRH, if you only want a crude bound of this type, say with a 1 + o(1) in the exponent, then it holds uniformly for V going up to log log x times log log log x. That's the range in which I established this. Now, if you recall the calculation we did yesterday on the connection between Selberg's theorem and bounds for moments, you only need such a statement for V around k log log x, where k is the moment you're trying to bound — and this log log log x still goes to infinity, so it beats every fixed k. So this argument gives the right upper bounds apart from a log to the epsilon.

The idea of Harper, in getting rid of the log to the epsilon, is that you don't have to go through the central limit theorem and then use it to bound moments: you can instead deal with the moments themselves from the start.
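For reference, the lemma has roughly the following shape — this is schematic: I'm writing w for a smoothing weight supported on the primes up to x and suppressing its exact form, which is in the moments paper; X denotes the size of the conductor of L(s, E ⊗ χ_d):

```latex
\log L(\tfrac12, E\otimes\chi_d)
\;\le\; \sum_{p\le x}\frac{a_p\,\chi_d(p)}{\sqrt p}\,w(p)
\;+\;\sum_{p\le x}\frac{(a_p^2-2)\,\chi_d(p)^2}{2p}\,w(p^2)
\;+\;O\!\Big(\frac{\log X}{\log x}\Big).
```

The second sum is the prime-square term: since a_p² averages to 1 by Rankin–Selberg, it contributes the mean −(1/2) log log x.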
And Harper's argument is very closely related to the one sketched here. The idea is that you take a sequence of values of x: maybe the first one is X^{1/(log log X)²}, then X^{20/(log log X)²} — this is Harper's choice — then X^{400/(log log X)²}, and so on, multiplying the exponent by 20 each time. You consider the Euler products up to these successive heights, and you check, at each point, whether the Dirichlet polynomial involved in the Euler product is small or not. If it is small, you use the Euler product up to that stage; and if it is big, you stop at the last stage you are allowed to use and estimate the tail differently.
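A toy sketch of that stopping-rule bookkeeping (everything here is illustrative: the values stand in for the prime sums over the successive ranges, and the thresholds for the allowed sizes):

```python
def last_good_stage(prime_sums, thresholds):
    """Largest j with |prime_sums[i]| <= thresholds[i] for all i <= j (or -1).

    In a Harper-style argument, the moment is then bounded using the Euler
    product over the first j ranges; the event that some earlier range is
    already too big is rare enough to be estimated separately.
    """
    j = -1
    for i, (s, t) in enumerate(zip(prime_sums, thresholds)):
        if abs(s) > t:
            break
        j = i
    return j

# toy data: sums over ranges up to X^(1/loglog^2), X^(20/loglog^2), ...
sums = [0.7, 1.9, 5.2, 0.4]
bounds = [2.0, 2.0, 2.0, 2.0]
print("use the Euler product through stage", last_good_stage(sums, bounds))
```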
This is not an elaborate treatment of it, but I am out of time, so I'll stop now. Thank you.

[Question: can you summarize the main results?] Well, there were three things I mentioned, plus one more. One: you get tight upper bounds for all the moments up to the first moment in this family. Two: you get one half of the Keating–Snaith conjecture — an upper bound in the Keating–Snaith central limit theorem: you can prove that the frequency of large values is dominated by the expected frequency, which is that of the Gaussian. Three: the analog for Sha, which we conjecture to be log-normal with a certain mean and a certain variance in this family; once again, we can prove a tight upper bound for the frequency of large values. And then the other result, which I didn't say much about: on GRH, we now have the right upper bound for all values of k, by work of Harper refining my earlier work.

[Question: with these complicated conditions on the number of prime factors and so forth, is there any thought of getting rid of them by putting in some factor that goes to 0 if the number of prime factors is big in any interval?] The flexibility of the method is really in thinking of the exponential series as an approximation. You can rig it in various ways — you can smooth in terms of the sizes of the prime factors — but the conditions on the number of prime factors you want to keep. Of course, you can play around with other versions of the combinatorial sieve, and I'm sure one can strengthen this; I don't know what the right answer is. It is a strangely backwards argument: as a sieve, this is weaker than any sieve anyone would write down, but it actually gets the right result in this problem.

[Question: what about the family of prime discriminants?] We can do that too. In the case of primes, it's interesting that we cannot handle the first moment — we don't know an asymptotic for it. But you can still run the argument, because you can combine it with the Selberg sieve: you can get upper bounds of the right order of magnitude for the first moment in the family of prime twists, even with an amplifier, because you can put in the Selberg sieve for free. Then all the rest of the arguments go through, and you get the tight upper bounds for moments. You also get the upper bounds in the central limit theorem — precisely, without any loss, I should say — when you restrict yourself to prime discriminants. [That was my question.]

[Question: what about L′, if you look at the odd twists?] Yes, you can do L′ as well. The one other ingredient that's used is the positivity of the central L-values — I did not emphasize this before. For L′ that's still OK: by Gross–Zagier you have the positivity of the values, and then everything else goes through.

[Question: having all these bounds on all the moments, can you get better percentages of non-vanishing?] No — this is the question I mentioned two lectures back: the interesting problem here would be to get lower bounds for these small moments, the moments less than one. Now we have the upper bounds, but we don't know lower bounds. If you had lower bounds for all the moments less than one, then unconditionally you would get a positive proportion of non-vanishing in this family, which is still not known.

[Comment: it would also be interesting to get lower bounds for averages of central L-values over very sparse, special sets of discriminants.] You have in mind a very sparse set — like S-units, or? [No, very sparse. Partly inaudible; the discussion turns to discriminants like sums of two squares, or numbers composed only of primes congruent to 3 mod 4, where a Selberg-sieve-type majorant is available.] I see — interesting, yes. [Question: why choose these particular discriminants? Partly inaudible exchange, with mention of related work.] Anyway, it's an interesting subject to encourage people to work on now. [No further questions.] Then let's thank Sound again.