So the title was vague, but I'm not going to talk about everything to do with L-functions; I'm going to talk about some topics related to the value distribution and moments of L-functions. So to start with, we have some notion of a family of L-functions, and this might mean things like looking at the values of, say, the Riemann zeta function ζ(σ + it), where σ is some fixed number and t varies and is large, say T ≤ t ≤ 2T. And then you can ask for the distribution of values of this object. Or, to give some other examples, you could take a character χ mod q, let's say primitive, and ask for the distribution of values of L(σ, χ), again for a fixed value of σ, as χ varies mod q, and then let q go to infinity. Or you could look at a special class of characters, namely quadratic characters. These are parameterized by fundamental discriminants d, and you could look at objects like the quadratic Dirichlet L-function L(σ, χ_d), again at some point σ, as d varies over all fundamental discriminants up to some point x. So these are some sample problems that you could consider, and you could look at other variations of this. You could take some automorphic form, maybe, and twist that by characters. So let me give a few more examples. You could look at L(σ, f), where f varies, let's say, over all holomorphic Hecke eigenforms of weight k. You could vary the level if you like, but let's say we vary the weight, and the weight k is supposed to get large. And in everything that I'll say, we'll assume that these L-functions are normalized so that there's a functional equation which connects s to 1 − s; so 1/2 will always be the central point. Or, something related to this would be to fix your favorite modular form and then twist it by quadratic characters.
OK, so these are some sample objects that we would like to study. And in almost all of these cases, the only thing that we completely understand is what happens when σ is bigger than 1, when we are in the range of absolute convergence. So there you can say quite a bit. Let's say we take L(σ, χ_d), which is just given by its Euler product; then it's easy enough to say something like: this is at most as large as what happens when all the χ_d(p) are +1, and at least as large as what happens when all the χ_d(p) are −1. So it varies between two constants, and it can get arbitrarily close to either constant by choosing the first few primes to point in a given direction. So this is, you could say, easy enough to understand. But even this problem of looking at values of L-functions to the right of 1 can be non-trivial if you're interested in automorphic forms where you don't know the Ramanujan conjectures. So even here, some problems are not easy; there are some subtleties, which we don't know in general, for the coefficients of these L-functions. So maybe I'll just say that we can say things here, but I'm not really going to focus on this. Let me just mention that there is work of Molteni and of Xiannan Li which deals with problems of this type, bounding L-functions at the edge of the critical strip, or just a little bit to the right, in situations where you don't have the Ramanujan conjectures. Think, for example, of a Maass form. So the first problem which is non-trivial would be to ask for the value distribution at the edge of the critical strip, namely when σ = 1. So maybe let me just focus on one problem here which is of special interest, which is the case of twists by quadratic characters. That's especially interesting if, say, the discriminant d is negative, with |d| ≤ x, let's say.
Then we would be interested in L(1, χ_d), since we know that, multiplied by a constant — w√|d|/(2π), where w is the number of roots of unity in the field, usually 2 — it is equal to the class number. So up to a factor involving π, L(1, χ_d) gives you information about the class number of the field Q(√d). So asking for the distribution of values of L(1, χ_d) is the same problem as asking for the distribution of class numbers in this example. And that's a problem that we don't understand very well. Unconditionally, the only kinds of bounds that we know are that L(1, χ_d) is at most about log d — this is fully explicit, you can put in a slightly smaller constant here as well; this is easy — and then we have the lower bound that it is at least d^{−ε}, which is Siegel's theorem, which is ineffective, and making it effective remains an important open problem. But this is not the truth of what happens for L(1, χ_d). If you know something like the generalized Riemann hypothesis, then we have much better bounds: we know that it's at most some constant times log log d, and at least some constant divided by log log d. I write c everywhere, but the c might be a different constant in each occurrence. So how should one think about this? And how do these large and small values come about? And what, usually, is the size of L(1, χ_d)? That's the first question. OK, so to think of what the size of L(1, χ_d) should be, we should go back to the trivial example that I did when σ is bigger than 1, where I can just write it as a convergent Euler product and get upper and lower bounds that way. So you could ask: how far do I have to go before I can approximate L(1, χ_d) by a truncated Euler product over p ≤ z? So z will be something depending on x, and d you should think of as a discriminant of size about x.
And z will, of course, go to infinity as x goes to infinity — you will certainly need something of this type — and you're interested in making z as small as possible with the result still being true. Unconditionally, we'd have to take z very, very large, larger than x at any rate, maybe even larger than that, maybe like exp((log x)²) or something like that. But if the generalized Riemann hypothesis is true, then one can take z to be fairly small, like (log x)². And once you know that, then these bounds follow, because the truncated product can be at most as large as the product of (1 − 1/p)^{−1} over p up to (log x)², which gives you the upper bound, and at least as large as the corresponding product with plus signs, which gives you the lower bound. And this principle is related to some other problems that we have, like: what is the least quadratic residue or non-residue? So find the smallest prime p for which χ_d(p) = 1, or for which χ_d(p) = −1. And these are problems for which unconditionally we don't have very good results. Unconditionally, we would know something like: for the least non-residue, Burgess's work gives p ≤ d^{1/(4√e) + ε}, and for the least prime residue, maybe only p ≤ d^{1/4 + ε}. But the Riemann hypothesis would tell you that in both these cases you see positive and negative values once you're at size (log d)². Now, maybe the first thing to get us started on thinking about these value distributions would be to understand what the truth should be in problems of this type, like this least quadratic residue or non-residue. GRH tells you something like (log x)², but that's not really the truth. The truth in these cases should be that you can take anything which is a little bit larger than log x.
So let me put (log x)^{1+ε}, but you'll see that I can make it even a little bit more precise, like log x times log log x. And the reason is simply to think of randomness. Each prime p could give +1 or −1, roughly with probability 1/2 each. So I can think of χ_d(p) as being like a random variable X_p, taking values +1 and −1 with equal probability. But you might want to be a little bit careful, because there is a third possibility: the value could be 0 when p divides d. So let's allow for that possibility, and then you can work out that the right probabilities are: +1 with probability p/(2(p+1)), −1 with probability p/(2(p+1)), and 0 with probability 1/(p+1), which is approximately 1/p. It's 1/(p+1) rather than 1/p because d is conditioned to be square-free. But just pretend it's ±1 with equal probability. Then the probability that the values point in a fixed direction — that χ_d(p) takes some given signs ε_p for all p up to z — is about 1/2^{π(z)}. And so you can imagine that if this probability is much smaller than 1/x, then maybe you don't get any discriminants which have that property at all. So stop when 2^{π(z)} is about x. That's a plausible conjecture for what we should expect in this quadratic residue or non-residue problem. And you might also think that it's a plausible conjecture for what the values of L(1, χ_d) should be: that we should be able to take the Euler product up to essentially log x, or (log x)^{1+ε}, and that's a good approximation to the value of L(1, χ_d). So, by the way, this conjecture is stronger than GRH, in the sense that it predicts upper bounds and lower bounds for L(1, χ_d) which are about half the size of the GRH bounds.
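Just to see this counting heuristic numerically, here is a minimal sketch (the function names are mine, and the cutoffs are purely illustrative): it finds the smallest z with 2^{π(z)} ≥ x, which the heuristic says is where fixed sign patterns on the primes up to z should stop being realized among discriminants up to x, and compares it with log x · log log x.

```python
from math import log

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_p in enumerate(sieve) if is_p]

PRIMES = primes_up_to(10_000)

def heuristic_cutoff(x):
    """Smallest z with 2^pi(z) >= x: below z a fixed sign pattern on the
    primes up to z has probability >= 1/x, so among ~x discriminants some d
    should realize it; much beyond z, none should."""
    needed = log(x) / log(2)  # want pi(z) >= log_2(x)
    for count, p in enumerate(PRIMES, start=1):
        if count >= needed:
            return p
    raise ValueError("sieve range too small for this x")

for x in (10 ** 6, 10 ** 12):
    z = heuristic_cutoff(x)
    print(f"x = {x:.0e}: z = {z}, log x * log log x = {log(x) * log(log(x)):.1f}")
```

The two quantities come out on the same scale, which is all the heuristic claims.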
OK, so that's completely open, but it gives you a good first model for how to think about the value distribution of L(1, χ_d). We could say that L(1, χ_d) should be modeled by a random Euler product, ∏_p (1 − X_p/p)^{−1}, where the X_p are taken independently for different primes and are ±1 with equal probability. So I have an infinite product here, but you can check that this infinite product converges almost surely. And the reason it converges is that, taking logarithms, the convergence of the product is related to the convergence of sums of the form Σ X_p/p. And the convergence of that sum is OK because the X_p are ±1 equally often, so the partial sums of the X_p exhibit square-root cancellation, and they're weighted down by more than that, which gives a convergent sum. So this converges almost surely. This model has been studied in the context of understanding L(1, χ_d) for a long time — it goes back to work of Chowla and Erdős, and also of Elliott in the 70s. And they proved that L(1, χ_d) has a nice distribution function, a smooth distribution function: you can compute the probability that L(1, χ_d) is bigger than 10, let's say, or the probability that it's less than 1/100. And some time back, maybe 10 years ago now, Granville and I studied this carefully, trying to determine with what uniformity we can match the distribution of values of L(1, χ_d) with the distribution of these random Euler products. The way you do it is by computing the number of fundamental discriminants up to size x for which L(1, χ_d) is bigger than some number, which let me normalize as e^γ times τ. The e^γ is for the Mertens-type constant that comes up in this Euler product.
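Here is a small sketch of this random Euler product model (names and cutoffs are mine; the truncation at p ≤ 1000 is the only cheat, since the infinite product converges almost surely): it samples X_p with the probabilities above and estimates how often the product exceeds e^γ τ.

```python
import random
from math import exp

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_p in enumerate(sieve) if is_p]

def sample_x_p(p, rng):
    """X_p = +1 or -1 with probability p/(2(p+1)) each, and 0 with
    probability 1/(p+1) (the case p | d; d is conditioned square-free)."""
    u = rng.random()
    q = p / (2 * (p + 1))
    if u < q:
        return 1
    if u < 2 * q:
        return -1
    return 0

def random_euler_product(primes, rng):
    """One sample of prod_p (1 - X_p/p)^(-1), truncated at the given primes."""
    value = 1.0
    for p in primes:
        value *= 1.0 / (1.0 - sample_x_p(p, rng) / p)
    return value

rng = random.Random(20240101)
primes = primes_up_to(1000)
samples = [random_euler_product(primes, rng) for _ in range(5000)]
gamma = 0.5772156649
for tau in (1.0, 1.5, 2.0):
    frac = sum(s > exp(gamma) * tau for s in samples) / len(samples)
    print(f"P(model > e^gamma * {tau}) ~ {frac:.4f}")
```

You can already see the tail probabilities collapsing very fast in τ, and the bulk of the samples sitting comfortably between 0.1 and 10.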
And you would like to figure out when this is approximately given by the probability that the random Euler product — let's call it L(1, X) — is bigger than e^γ τ. Let's divide here by the number of discriminants up to size x, so both sides are proportions. And we proved that these two objects more or less match in a very uniform range: it's true for τ up to something like log₂ x plus a term involving log₄ x — here log₄ means four iterated logs, log log log log — and this is unconditional; if you assume GRH, then you can replace the log₄ by log₃. So what does this mean? Well, you can ask: what does the probability of L(1, χ_d) being large look like as a function of τ? It's some crazy function — nothing nice like a Gaussian in τ. It actually behaves very strangely: it decays doubly exponentially. It behaves like exp(−C e^τ/τ). Is that still legible when I write that? Yep, OK. This is an asymptotic, not an exact equality, of course. And this constant C is some funny constant — I wrote it down because I can't remember it — it's about e^{−0.8187}, something like that. And this constant that appears in the exponent is some weird thing. I write it down just to illustrate that this is not some universal distribution. It's something you can compute in this case, and you get some answer; if you compute it for some other family, it doesn't have anything to do with just the family of L-functions in the abstract — it has to do with the actual coefficients, and what d you're ranging over, and so on. OK, so what does this mean? Because this probability is doubly exponential, once τ is of size log log x this probability becomes something like less than 1/x; and because of the division by τ, once τ is on the scale of log log x + log log log x, the proportion becomes less than 1/x.
And you don't expect this equality to hold once τ is bigger than log₂ x + log₃ x + 20, let's say — there should be nothing which satisfies that inequality. So essentially there's a very wide range of values of τ in which this equality can possibly hold, and we have a theorem, at least on GRH, that it holds in almost the entire range in which it can. So the fact that the two distributions match for such a wide range might lead you to believe this conjecture, which again, as I said, is in some sense beyond GRH. And if this is true, then you could say that we completely understand the distribution of values of L(1, χ_d): it behaves like a random Euler product, and it seems to behave like one in essentially the whole range that it can. And whenever you see a value of L(1, χ_d), of course, the chances are very good that it just lies between 0.1 and 10; you're never going to see a value which is not in a range like that. OK. But this is not the only question that you can ask about values, even at the edge of the critical strip. So let me ask you one more question along these lines, about which we know extremely little. Let me just think about negative discriminants, where, as we said, the values of L(1, χ_d) are basically the same as class numbers, at least when d is not −4 or −3. Now, the class number, of course, is an integer, which means that these values are not arbitrary real numbers: they have to lie in certain buckets, near integers divided by square roots of integers. And if I want to understand this integer, then I'm not really interested in just understanding how the values of L(1, χ_d) are distributed. I'm really interested in understanding them in very small intervals.
Like, if I take an interval of length 1/√x, I would like to understand the distribution of L(1, χ_d) in such a short interval. Now, this is of course impossible from the random model, because the model is not seeing anything about the arithmetic of these class numbers, OK? So let me give you one conjecture here. We would certainly expect that every number is the class number of an imaginary quadratic field. I think this is an obvious conjecture, and the reason I say it's obvious is a counting argument: you have about x discriminants up to size x, and their class numbers are all of size about √x. So you have x numbers mapping down to √x numbers, and each number should get its fair share of fields for which it's the class number. So given a number h, there should be about h fields with class number h, OK? The relevant discriminants go up to about h², and the chance of landing exactly on h might be like 1/h. But this is forgetting some things. For example, if h is odd, then genus theory tells you that the discriminants for which you can have a field with class number h have to be prime, so maybe it's not actually h — maybe it's like h/log h in that case. And if h is divisible by a large power of 2, then the discriminants can be divisible by a large number of primes, and maybe there are more discriminants in certain cases. So certainly, if I let F(h) denote the number of fields with class number h — F(1) = 9 is a famous theorem — then I would expect that this is always bounded above by something like h log h, and bounded below by something like h/log h. Maybe one can make more precise conjectures of this; although, so far as I can tell, nobody has a good guess for what the asymptotics here should be.
But we know very little about this. All we know — and it follows from this work on the distribution of L(1, χ_d) that I've just been talking about — is how to compute the average of this function F(h). And it turns out to be a fairly nice constant. But the error term is remarkably weak: I can save a square root of log h, and I have no idea how to save anything more than that. Something like h² over (log h)^{10} would be very nice, but I have no idea how to prove that. And because you can get some asymptotic formula with some error term, you can also prove something like: the number of fields with class number h is at most h² times some power of log log h. This is very weak. All it says is that if you look at all the fields with class number up to h, they can't basically all accumulate on one value, or maybe on log h values. So it rules out some things. I don't quite know how to put my quantifiers here, so let me say it like this, although it sounds idiotic: it is not the case that almost all fields have class number equal to a power of 2 times a bounded odd number. So I hope it's clear what this means — there is in fact zero density of fields whose class numbers are a power of 2 times a bounded odd number. That you can prove. But I don't know how to prove the same question with a power of 2 times a power of 3 times a bounded number. And certainly if you ask it with 2, 3, 5, 7, then it's wide open to figure out how to say anything about this. [Audience: Sorry — can you not use Cohen–Lenstra heuristics to guess the asymptotics? I assume it's very uniform.] I don't know how to guess it. So there are fluctuations: let's say if 3 divides the class number, 3 divides h, then that seems to bump up F(h) a bit. So there are deviations that you see by looking at the tables. But I don't know exactly how I would formulate that.
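The function F(h) is easy to tabulate for small h, since the class number of an imaginary quadratic field can be computed by counting reduced binary quadratic forms; here is a minimal sketch (the cutoff X is illustrative, and the function names are mine).

```python
def class_number(d):
    """h(d) for a negative discriminant d, by counting reduced
    positive-definite forms a x^2 + b x y + c y^2 with b^2 - 4ac = d:
    |b| <= a <= c, and b >= 0 whenever |b| == a or a == c."""
    assert d < 0 and d % 4 in (0, 1)
    h, a = 0, 1
    while 3 * a * a <= -d:          # reduction forces 3a^2 <= |d|
        for b in range(-a + 1, a + 1):
            if (b * b - d) % (4 * a) == 0:
                c = (b * b - d) // (4 * a)
                # for b < 0 the range already gives |b| < a, so we only
                # need to exclude a == c; for b >= 0 everything is allowed
                if c >= a and (b >= 0 or a < c):
                    h += 1
        a += 1
    return h

def is_fundamental(d):
    """Fundamental discriminant test for d < 0."""
    def squarefree(n):
        i = 2
        while i * i <= n:
            if n % (i * i) == 0:
                return False
            i += 1
        return True
    if d % 4 == 1:
        return squarefree(-d)
    if d % 4 == 0:
        m = d // 4
        return m % 4 in (2, 3) and squarefree(-m)
    return False

# F(h): number of imaginary quadratic fields with class number h,
# among fundamental discriminants with |d| <= X
X = 2000
F = {}
for d in range(-3, -X - 1, -1):
    if is_fundamental(d):
        h = class_number(d)
        F[h] = F.get(h, 0) + 1
print({h: F[h] for h in sorted(F)[:8]})
```

Already in these tiny tables you can see F(h) growing roughly linearly, with the odd values of h noticeably sparser, as genus theory predicts.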
[Audience: Yeah, Cohen–Lenstra could tell you...] Right, so you then have to assume that the different primes are all independent of each other and make some formulation, and then you have to put in the genus theory thing for powers of 2. So I wrote down somewhere a version of this with h/log h and a power of log log h, and that it should be on that order — that I feel kind of confident about. But I don't feel confident about the constants that go in front. So this is a very strange kind of conjecture, because it's telling you that even though the values of L(1, χ_d) converge to some nice smooth distribution function, the distribution could still be very granular: at any point, you could find that they accumulate in very short intervals around a small number of integers. And we don't know how to rule things like that out. OK, so that's what happens at the edge of the critical strip. You can ask the same for any other value of σ which is bigger than 1/2 and less than 1. If you fix σ bigger than 1/2, then there has been some work on this recently due to Lamzouri, who also discusses things like the analogue of what I was talking about in the zeta function case: if you look at ζ(1 + it), you can think of how it is distributed in the complex plane — its real and imaginary parts, or its modulus and its argument, if you like. And he also has extensions of this to values of σ that lie strictly bigger than 1/2. The story is more complicated, but it's similar in spirit: you can still understand things pretty well using the random model. So for L(σ, χ_d), you would analyze it by looking at the random product ∏_p (1 − X_p p^{−σ})^{−1}.
And what makes the random model work is, again, convergence: if you look at Σ X_p/p^σ, and the X_p are canceling out to square-root cancellation because they are ±1 with equal frequency, you see that the sum still converges almost surely precisely when σ is bigger than 1/2. So here's one kind of corollary of this work. If you fix 1/2 < σ < 1 and look at the values of L(σ, χ_d), then one way to say it is that these values come arbitrarily close to any given positive real number for infinitely many discriminants d. So there's a distribution function for these values, and around any given positive real number there's some positive density of discriminants landing in that neighborhood. The reason I hesitated before writing something down is that we would, of course, love to be able to prove that the image consists only of positive real numbers. But we don't know how to prove that, because that would be some part of the generalized Riemann hypothesis — saying that these functions have no real zeros to the right of σ. On the other hand, in the density sense you can say it, because even for the characters which might have a zero at σ, there are zero-density results which tell you that that happens very infrequently. So certainly these values are dense in the set of positive real numbers. OK, so that tells you what happens to the right of the critical line. And now, for the rest of the time, I'm going to talk only about what happens on the critical line. So now we discuss σ = 1/2, which is in some ways the hardest case. And you can see one way in which things fail: the random model is no longer meaningful. If I write down a random product of this type, then the series does not have to converge anymore; in fact, it diverges almost surely.
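The convergence criterion here is just a variance computation — by Kolmogorov's theorem the random series Σ X_p/p^σ converges almost surely exactly when Σ 1/p^{2σ} is bounded — and a quick numerical sketch shows the dichotomy at σ = 1/2 (cutoffs are illustrative):

```python
def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_p in enumerate(sieve) if is_p]

PRIMES = primes_up_to(100_000)

def variance_sum(sigma, cutoff):
    """sum_{p <= cutoff} p^(-2 sigma): the variance of sum_p X_p / p^sigma.
    The random series converges a.s. iff this stays bounded as cutoff grows;
    it does for sigma > 1/2 and fails for sigma = 1/2 (sum 1/p ~ log log)."""
    return sum(p ** (-2.0 * sigma) for p in PRIMES if p <= cutoff)

for sigma in (0.75, 0.5):
    tail = variance_sum(sigma, 100_000) - variance_sum(sigma, 10_000)
    print(f"sigma = {sigma}: variance added between 10^4 and 10^5 = {tail:.4f}")
```

For σ = 0.75 the increments are already negligible, while for σ = 1/2 each decade of primes keeps adding a fixed amount of variance — the log log divergence.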
So the sum Σ X_p/√p diverges almost surely, and the model doesn't quite work. And you can see it in other ways, too, because there are zeros of L-functions on the critical line; near such a zero, it doesn't make sense to approximate by any kind of Euler product. OK, and it's also reflected in the fact that some very basic questions about the values of, say, ζ(1/2 + it) are unanswered. So here's a conjecture which goes back to Ramachandra: the values of ζ(1/2 + it), as t varies over ℝ, are dense in ℂ. This is open, but it maybe is not so hopeless — maybe somebody will solve it. There is work on this by Kowalski and Nikeghbali, connecting it to moment conjectures for ζ(s). OK, this is in some ways an aside; I'm not going to talk much about this problem. So one difference is that while the values to the right of the critical line actually have a value distribution, the values on the critical line don't. What you have is a different kind of result, due to Selberg, and we would like to find analogues of it for families of L-functions, which we don't have. Selberg has a theorem that says that as t varies in, let's say, [T, 2T], and you take the log of the zeta function and look at its real part or its imaginary part — if you're at a zero the real part of the log would be −∞, but that happens on a set of measure 0, so it doesn't affect any of the calculations — the values of this logarithm are distributed like a Gaussian, a normal random variable: approximately normal with mean 0 and variance about (1/2) log log T. OK, so I'll explain this theorem in a little more detail tomorrow, and also what it means for moments. For the moment, let me just say that this variance is growing. So what it means is that if you take a value of ζ(1/2 + it) and you just look at it, there are two cases.
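A standard random model for Selberg's theorem replaces p^{−it} by a uniform random point on the unit circle, so Re log ζ(1/2 + it) is modeled by Σ_p cos(θ_p)/√p; this sketch (names and parameters mine) checks the mean and the (1/2)Σ 1/p ≈ (1/2) log log T variance numerically:

```python
import random
from math import cos, pi, sqrt

def primes_up_to(n):
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_p in enumerate(sieve) if is_p]

PRIMES = primes_up_to(10_000)

def sample_log_zeta(rng):
    """One sample of sum_p cos(theta_p)/sqrt(p) with independent uniform
    phases theta_p: the random model for Re log zeta(1/2 + it)."""
    return sum(cos(rng.uniform(0.0, 2.0 * pi)) / sqrt(p) for p in PRIMES)

rng = random.Random(7)
samples = [sample_log_zeta(rng) for _ in range(4000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
# E[cos^2 theta] = 1/2, so the model's variance is (1/2) sum_{p <= T} 1/p,
# which is (1/2) log log T + O(1): the variance in Selberg's theorem.
theory = 0.5 * sum(1.0 / p for p in PRIMES)
print(f"empirical mean {mean:.3f}, variance {var:.3f}, theory {theory:.3f}")
```

The point of the growing variance is visible here: push the prime cutoff up and the empirical variance keeps creeping up like (1/2) log log of the cutoff.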
Either it's some large number in size or it's some small number in size; the probability that you see a value of size 10, say, is going to be 0. So therefore you can't make progress this way on the conjecture that the values are dense in ℂ, because that dense set actually has measure 0. Now, OK, so now we can get to the main topic that I'll discuss in the next lectures, which is a classical theme going back to Hardy and Littlewood: to understand the value distribution of the zeta function by studying the moments of zeta, ∫ |ζ(1/2 + it)|^{2k} dt. Here k is, let's say, some natural number, or you could also take k to be a positive real number; and it also makes sense to consider complex moments as well — in fact, for the application to Ramachandra's conjecture, you need to understand something about complex moments of zeta. And one reason why they were interested in it is easy to see: this is just the L^{2k} norm of the zeta function. So of course, if you can understand this for large values of k, you can say something about the L^∞ norm of the zeta function, which is the Lindelöf hypothesis — Lindelöf is equivalent to suitable bounds for these moments. So there's been a lot of work on this question, from the 1920s on, due to Hardy and Littlewood and others. But so far we have asymptotic formulas in only two cases, k = 1 and k = 2. For k = 1, the moment is asymptotic to T log T, which is due to Hardy and Littlewood. And for k = 2 — let me write it, anticipating some later conjectures, as 2 times 1/(4π²) times T (log T)⁴ — this is due to Ingham. Actually, Ingham's paper is quite interesting because he considers more general objects: instead of just considering the fourth moment, he also puts in four shift variables — this may not quite be his notation.
If you set the shifts all equal to 0, then you get the fourth moment of zeta. But he works out the asymptotic formula in this generality, saying that it's more transparent to see what the shape of the asymptotic formula is when you disentangle the variables like so. Now, for a long time these were all that was known, and it was not even clear what the right conjectures for these moments were — people guessed the right exponent. There was a folklore conjecture — I don't quite know who to attribute it to; maybe Titchmarsh would be a reasonable person — that the 2k-th moment (let me give this a name, M_k(T), to stop writing it all the time) is asymptotic to some constant c_k times T (log T)^{k²}. And then maybe in the early 80s there was a suggestion by Conrey and Ghosh that this constant c_k factorizes nicely as the product of two constants, a_k and g_k. And, well, at the moment it's of course not so profound to write this down. But the point is that there's a natural object you can associate with the k-th power of the zeta function: it has a Dirichlet series, Σ d_k(n)/n^s with d_k the k-th divisor function, at least in the range of absolute convergence. Now, you could think of this moment as being the mean square of the k-th power and think of using a Parseval-type argument. A Parseval-type argument would suggest that the asymptotic should depend on an object which looks like Σ_{n ≤ T} d_k(n)²/n — you're on the half line, so there's an n^{1/2 + it}, and if you take the square of that you get d_k(n)²/n. And what is bogus about this is where I have truncated it, choosing to truncate at n up to T, which is not motivated by anything at all, except that that's maybe the natural scale on which this behaves. Then you can show that it's very easy to get an asymptotic formula for this object.
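The object in this Parseval heuristic is easy to compute directly: d_k is just the (k−1)-fold Dirichlet convolution of the constant function 1, and the sum Σ_{n≤T} d_k(n)²/n grows like a constant times (log T)^{k²}. A minimal sketch (function names mine, cutoffs illustrative):

```python
def divisor_dk(k, N):
    """d_k(n) for n <= N: the number of ordered factorizations of n into k
    parts, built by k-1 Dirichlet convolutions with the constant function 1."""
    dk = [0] + [1] * N        # d_1(n) = 1 for all n >= 1
    for _ in range(k - 1):
        nxt = [0] * (N + 1)
        for d in range(1, N + 1):
            for m in range(d, N + 1, d):
                nxt[m] += dk[d]   # (dk * 1)(m) = sum_{d | m} dk(d)
        dk = nxt
    return dk

def parseval_sum(k, T):
    """sum_{n <= T} d_k(n)^2 / n, the quantity in the Parseval heuristic;
    it grows like a constant times (log T)^(k^2)."""
    dk = divisor_dk(k, T)
    return sum(dk[n] ** 2 / n for n in range(1, T + 1))

for T in (100, 1000, 10_000):
    print(f"T = {T:5d}: sum for k = 2 is {parseval_sum(2, T):10.2f}")
```

For k = 2 the successive values grow roughly like (log T)⁴, which is the (log T)^{k²} scale in the folklore conjecture.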
And it turns out to lead to exactly this a_k T (log T)^{k²}. So this is the natural order on which the moments of the zeta function should behave, and then g_k is measuring some kind of deviation from what you would expect very naively. In this notation, the Hardy–Littlewood and Ingham results say that g₁ = 1 and g₂ = 2. And then the other values of g_k were not obvious for a long time. In the 90s, Conrey and Ghosh conjectured that g₃ = 42, and Conrey and Gonek conjectured that g₄ = 24,024. And roughly around, maybe exactly, the same time, Keating and Snaith found a general conjecture which predicts what g_k should be. The conjecture is quite nice, at least for integers: g_k = (k²)! ∏_{j=0}^{k−1} j!/(j+k)!. If I remember this correctly, this should work out to those numbers for the small values. So this is the conjecture of Keating and Snaith. And actually, I said this for integers, but you can make sense of it for non-integer values as well: you simply replace factorials by gamma functions, and you basically get something related to the Barnes double gamma function — some ratio of double gamma functions. OK. Moreover, this is the case of the zeta function, but there are other families of L-functions that you could consider as well, and you could consider moments in these families. So let me give you two other examples which are kind of typical. You could look at all fundamental discriminants up to size x and look at the average of L(1/2, χ_d)^k, where k is some natural number. These central values are all expected to be non-negative, but we don't know how to prove that — that would be an important result, also giving lower bounds for L(1, χ_d). But anyway, they should be non-negative if you're willing to assume the Riemann hypothesis: there are no zeros between 1/2 and 1.
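The Keating–Snaith formula is easy to check against the conjectured values; here is a sketch using exact rational arithmetic (the function name is mine):

```python
from fractions import Fraction
from math import factorial

def g(k):
    """Keating-Snaith prediction g_k = (k^2)! * prod_{j=0}^{k-1} j!/(j+k)!
    for integer k.  For non-integer k, the factorials become Gamma
    functions and the product a ratio of Barnes double Gamma functions."""
    value = Fraction(factorial(k * k))
    for j in range(k):
        value *= Fraction(factorial(j), factorial(j + k))
    return value

for k in range(1, 5):
    print(f"g_{k} = {g(k)}")  # 1, 2, 42, 24024
```

So the random-matrix formula reproduces Hardy–Littlewood (g₁ = 1), Ingham (g₂ = 2), Conrey–Ghosh (g₃ = 42), and Conrey–Gonek (g₄ = 24,024) in one stroke.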
And here the analogue of the Keating–Snaith conjecture is a little bit different: this average should be asymptotic to some constant C_k — which can also be specified nicely in terms of factorials, but which I don't remember offhand — and what's interesting is that you get a different power of the log here, (log x)^{k(k+1)/2}. And to give one more example, let's say you fix your favorite modular form f and take quadratic twists of it; here you get some constant C_k, maybe a different constant, and again a characteristic power of the log — k(k−1)/2 in this case. So maybe in the last five minutes, let me just tell you what I plan to do in the rest of the lectures. [Audience question, partly inaudible: can't you naively guess what these answers should be, say just from the a_k piece?] Yes — well, it's not clear exactly how you would formulate it, but you would get the right power of log, and that's something I'll talk about. Selberg's theorem should predict that you get some constant times T (log T)^{k²}. And that will also be the key to thinking about what the distribution of these central values should be: we would like analogues of Selberg's theorem in these contexts, which we don't know how to prove. So OK, the first point is what Terry just said: I'll explain the link between moments and value distribution — things like Selberg's theorem and its expected analogues for L-functions. The second thing I want to explain is where the conjectures for moments come from. There's a particularly nice way to formulate these conjectures, which is due to Conrey, Farmer, Keating, Rubinstein, and Snaith. The heuristic is very simple, and it gives a very elegant answer for what all the moments, in all the families you can think of, should be.
But on the other hand — and this has been verified in many small cases; I'll mention some of them tomorrow — what's maybe unsatisfying is that while there is a very nice conjecture, all the proofs in which we can check it in small cases are very unsatisfactory: you have a huge mess, and then you check somehow that the mess you get matches up with this nice conjecture. The proofs are not very illuminating. Third, we have general techniques which allow us to give lower bounds for all higher moments: in other words, if you know some moment, and you have a little bit to spare, then you can get lower bounds of the right order of magnitude for all higher moments than that. And the fourth point is a complementary principle to this: if you have an upper bound for some moment, then that implies, automatically, upper bounds for all smaller moments. Of course, this is obvious by Hölder's inequality if you don't insist on the right power of log; the point is that you can get the right power of log in both these cases. And the last point is related to this kind of argument: we now know that, on GRH, you can very generally prove upper bounds of the right order of magnitude in essentially any case you can think of. So I'll stop. Thank you. [Audience: We have a question, of course. How do we guess the coefficients of the powers of the log?] Yeah, I'll explain that tomorrow. [Audience: Can you guess it for a bigger class of L-functions — for any automorphic L-function, or the Selberg class, in fact?] So it depends: in the t-aspect, everything will have the same (log T)^{k²}-type phenomenon, and if you vary the family, then you can still guess what the answer is going to be. [Audience: Is it going to work for any L-function in the Selberg class?] In the t-aspect, yes. [Audience: Primitive?] Primitive, OK — primitive, yes, right.