Felina, thank you; Philippe, thank you; Michael, thank you for organizing this seminar. I want to tell you about what is somewhat of a recreational topic, which blends together very classical things in lattice point theory and mathematical physics with very classical things in prime number theory, things which I think have not been blended before and maybe will not be blended again. This is joint work with Bingrong Huang, who was my postdoc at Tel Aviv and is now back at Shandong University in Jinan, China. OK, so let's start with the famous Gauss lattice point problem. This is the question: if I give you a large disk, how many lattice points does it contain? The standard guess is that the number of lattice points is roughly the area of the disk. You look at a large disk, so the asymptotic parameter is the radius, and in the limit of large radius you want the number of lattice points to be asymptotically equal to the area of the disk, which is pi r squared. I'm going to give you one proof, because that proof is actually insightful: it will tell you what to expect and what not to expect. So let's go through the proof. How do you prove that the number of lattice points is roughly the area, with a remainder which is roughly the length of the perimeter? Around each lattice point, you put a unit square, centered at the lattice point. You get a bunch of unit squares, which are disjoint except for their boundaries, and their union, this yellow shape here, is some kind of polygon. The area of this polygon is just the number of unit squares, because the area of each square is 1; that's the way I defined the squares. So the area of this polygon is exactly the number of lattice points. Now, the area of the polygon is roughly the same as the area of the disk. In fact, we can say something a little more precise.
If we look at a disk which circumscribes that polygon, its radius can be taken to be the radius of the original disk plus 1, say. The polygon is wholly contained in this big disk, so its area is less than the area of the big disk, which is pi times (r plus 1) squared. In turn, the polygon circumscribes a smaller disk of radius r minus 2, so the area of the polygon is bigger than the area of that smaller disk, which is pi times (r minus 2) squared. And if you expand these squares of binomials, you get pi times r squared, which is over here, and the difference is something like a constant times r. This is what it says over here: the difference between the number of lattice points, which is the area of the polygon, and the area of the disk is at most on the order of the circumference of the disk. So this is how you prove that the main term is the area and the remainder term is at most the length of the perimeter. OK? So the question people asked is: is the length of the perimeter really the right answer? Very quickly they understood that it's not, and then there was a race to improve the remainder term. Over 100 years ago, people improved the remainder term from the length of the perimeter to a smaller power, the radius to the two thirds. Then there was a huge amount of work over the last 100 years trying to improve this two thirds to something else, with many famous names associated to it. The current world record is very recent; it's work of the late Jean Bourgain and Nigel Watt, which gets some exponent. The only thing I want to note about this exponent is that it's clearly less than two thirds; however, it is bigger than 0.5, bigger than a half. And the conjecture, which is due to Hardy, is that the remainder term is roughly of size the radius to the one half. OK? So that's the conjecture. I'm going to prove it to you in a second; the next slide will be a proof. This is the point where you guys may want to disconnect me.
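The counting argument above is easy to check numerically. Here is a minimal sketch (my illustration, not from the talk): count the lattice points in a disk of integer radius r and compare with pi r squared; the observed discrepancy sits far below the perimeter-scale bound the proof guarantees.

```python
import math

def lattice_points_in_disk(r: int) -> int:
    """Count integer points (m, n) with m^2 + n^2 <= r^2."""
    count = 0
    for m in range(-r, r + 1):
        # For each fixed m, n ranges over |n| <= floor(sqrt(r^2 - m^2)).
        k = math.isqrt(r * r - m * m)
        count += 2 * k + 1
    return count

for r in (10, 100, 1000):
    n = lattice_points_in_disk(r)
    # The remainder is well below the perimeter scale 2*pi*r.
    print(r, n, n - math.pi * r * r)
```

For r = 100 the count is within about 1 of pi r squared, while the perimeter bound would allow a discrepancy of several hundred.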
We said no lunatics; we send them to a waiting room. OK? Now, never mind the upper bound; let's forget that, because I have nothing to add to the upper bound. But it was already known 100 years ago that you cannot do better than the square root of the radius. Cramér, for instance, computed the mean square, the average of the square of this remainder term once you normalize it by square root of r. If the remainder were a power smaller than a half, this normalized difference would be a negative power of the radius, and when you average it, you would get something which tends to 0. But he showed that the mean square actually has a limit which is not 0, which is positive. That immediately tells you that you cannot get a smaller remainder term all the time. And there's work about how much worse than square root of r you can get: you can get some power of log times a power of log log, and again there's a huge amount of work on this. The current world record, which is probably very close to the truth, is due to Soundararajan; I'm not certain enough to even quote it here. OK, so this was my promise to try to prove the remainder term. I did say to try, so don't send me to the waiting room. Let's go back to the proof of the estimate that the difference between the number of lattice points and the area is at most the length of the perimeter. When we compared the area of the polygon with the areas of the two disks, the inscribed and circumscribed disks, we were off by things that lie on the boundary. Some of the things we overcounted, and some of them we undercounted. So the difference between the area of the polygon, which is the number of lattice points, and the area of the disk is a sum of terms along the perimeter, which come with signs: some of them are plus and some of them are minus. And each of them is part of a square, maybe this part here. So let's model this.
So let's assume that I have a sum of terms which are roughly the same size, and that the signs are random. If you give me that, let's remember the central limit theorem. The central limit theorem says that if you take a sum of n independent coin flips, a sum of n independent plus-minus ones, and divide by the square root of n, then typically that ratio is bounded. Therefore, if we believe this analogy, then we believe that typically the remainder term is really of size square root of r, which is exactly this Hardy conjecture. So if I'm a physicist, I'm done; I've proved it. OK? Don't send me to the waiting room. Now, not only does this heuristic give me the exponent (it tells me that the size of the difference is not the length of the perimeter but the square root of the length of the perimeter, which is r to the half), it also predicts something else. The central limit theorem says that the ratio, suitably normalized, has a limiting distribution: the probability that the sum of the n plus-minus ones divided by square root of n lies between 1 and 2 tends, as you increase the number of coin flips, to the area of the Gaussian over that interval, which is this probability distribution. So here is a prediction made by this heuristic: if you look at the normalized remainder term and think of the radius as a random quantity, then you expect a Gaussian distribution. OK? So that's what my attempt to solve the circle problem led me to. It gave me the right exponent, and it gave me a prediction for the distribution of the remainder term. So let's check this. Already in 1940, Aurel Wintner proved that this ratio, the normalized remainder term, has a limiting distribution. This was later rediscovered, because Wintner's papers are very difficult to read; they are sort of stream-of-consciousness papers. I had the misfortune of reading his collected works, and it's very hard to extract a statement from them.
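The coin-flip heuristic is easy to simulate. A small sketch (again my own illustration): sums of n random signs divided by the square root of n have mean about 0 and variance about 1, as the central limit theorem predicts.

```python
import math
import random
import statistics

def normalized_coin_sum(n: int, rng: random.Random) -> float:
    """Sum of n independent +/-1 coin flips, divided by sqrt(n)."""
    total = sum(rng.choice((-1, 1)) for _ in range(n))
    return total / math.sqrt(n)

rng = random.Random(0)  # fixed seed so the experiment is reproducible
samples = [normalized_coin_sum(500, rng) for _ in range(1000)]

# By the CLT these samples should look standard normal: mean ~ 0, variance ~ 1.
mean = statistics.mean(samples)
var = statistics.pvariance(samples)
```

A histogram of `samples` would trace out the Gaussian bell that the heuristic predicts for the circle remainder.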
But once you know what to look for, it's there. Anyway, this was rediscovered by Heath-Brown. The statement is that indeed there is a limiting distribution; that's number one. Here is a picture of this limiting distribution. It does not look like a Gaussian; for instance, it's not symmetric. This was in fact proved by Kai-Man Tsang, who was Heath-Brown's postdoc at the time: he showed that the third moment of this distribution is not 0. That says it cannot be a Gaussian, because a Gaussian is a symmetric distribution, so all its odd moments vanish; it only has non-zero even moments. So that puts a damper on my attempt to prove the Hardy conjecture for the circle problem using independence, because independence predicts a Gaussian, while the reality is that the limiting distribution does exist but is not a Gaussian. And we know quite a lot about the distribution now. It is real analytic, and it decays much faster than a Gaussian; it's almost compactly supported. Instead of decaying like e to the minus u squared, it decays like e to the minus u to the fourth. So there is a distribution, it's not Gaussian, and we can't use this heuristic to generate a proof because it predicts something wrong. OK, so now that we understand the circle, let's look at more general planar domains. I'm going to play the same game; it's just repeating the slides with a slightly different shape. Instead of taking a circle and dilating it, you take your favorite shape in the plane. There have to be some caveats. Let's take a convex shape; this is a serious caveat. And with a smooth boundary; this is another serious caveat. And with no zero curvature; that's another non-trivial condition. An example is an ellipse, and I'll show you a few other examples. Let's call a domain like this an oval. And we play the same game: we take this oval, dilate it, count the number of lattice points inside, and then repeat the same argument as we did before.
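One can see this limiting distribution empirically. A rough sketch (mine, with arbitrary sampling ranges): sample the normalized remainder F(r) = (N(r) - pi r^2) / sqrt(r) at random radii; the values stay of modest size, and a histogram of them would show the asymmetric, non-Gaussian shape on the slide.

```python
import math
import random

def lattice_points_in_disk(r: float) -> int:
    """Count integer points (m, n) with m^2 + n^2 <= r^2."""
    count = 0
    rm = int(r)
    r2 = r * r
    for m in range(-rm, rm + 1):
        k = int(math.sqrt(r2 - m * m))  # floor of sqrt(r^2 - m^2)
        count += 2 * k + 1
    return count

rng = random.Random(1)
samples = []
for _ in range(400):
    r = 100 + 500 * rng.random()  # random radii in [100, 600]
    f = (lattice_points_in_disk(r) - math.pi * r * r) / math.sqrt(r)
    samples.append(f)

# The normalized remainder stays modest (the limiting density decays like exp(-c u^4)).
largest = max(abs(f) for f in samples)
```

Plotting a histogram of `samples` reproduces the lopsided curve; the third moment of such samples is visibly nonzero, in line with Tsang's theorem.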
And you will find that the number of lattice points is asymptotically the area of this dilated oval, which is the area of the original oval times r squared. The remainder term can be proved to have roughly the same upper bounds as we know for the circle; I think everything we know about upper bounds for the circle carries over to these more general shapes, under these caveats of smooth boundary with no zero curvature and convexity. And again, by the same kind of heuristic, the conjecture is that the correct exponent in the remainder term is a half, and we also know that you cannot do better. Once you know this, you again play the game of looking at the remainder term, normalizing it, and thinking of the dilation parameter as a random variable. And one proves, and this was done by Pavel Bleher almost 30 years ago, that there is always a limiting distribution, which has pretty much the same features as the limiting distribution for the circle. But that distribution is not universal. So not only is it not a Gaussian, it's not universal: it depends on the shape. Here are two examples, both of them ellipses. We take an ellipse, and here is the limiting distribution when the eccentricity parameter is pi over 10. Two different distributions, so they really depend on the shape. Was that a question? I saw the chat flashing. No, I don't think so. OK. All right, there is a question. Sorry. Yes, please unmute your microphone and ask the question, Kevin. Yeah, I just had a comment, really about the circle problem itself. There's another random approach, where you shift the center by a random amount, and in that case Ruzia also recovered the Hardy-Littlewood conjecture, I think with a bound of something like r to the one and a half log r, and the limiting distribution is known in that sense. This is with a fixed radius, so r is fixed. Oh, with r fixed; that's somewhat orthogonal to what I'm doing. Yeah.
This is an orthogonal question, but I understand it; that's an interesting statement. For me, the shape is fixed and the only random parameter is the dilation, and you're saying you move the shape around in a random manner, without dilating it. That's an interesting statement which is, as you say, orthogonal to what I'm saying, and certainly an interesting thing to study. People have also studied the hybrid version, where you take a random center but also take the radius to infinity in a random way. This was done because of motivation coming from mathematical physics, which I will touch on a little later; it was done by Pavel Bleher, Joel Lebowitz, the late Freeman Dyson, Cheng, and some others. OK, any more questions? This is a good time to ask. OK, let's continue. So we have established that every shape has its own limiting distribution. They're all different; they're not universal, but they do exist. OK, so this is all known stuff, and now I want to mention an open problem. Does this distribution vary continuously in a and b? That is, is it continuous in the shape? That's the question: if you change the shape in a continuous manner, does the distribution change continuously? I think so, I think so. There may be a precise statement; it looks plausible to me from the formula, but I'm not sure. So if you look in the space of distributions and think of the shape as depending on a parameter, say you look at ellipses and take the eccentricity as the parameter, does the distribution depend continuously on the parameter? I believe the answer is yes. Andrew, please ask your question. Yes. With your assumptions on the shape, you always have these coefficients, and the Fourier transform of the shape comes into it.
You know, because it's a shape, the characteristic function, the indicator function, is not smooth, but it has a Fourier transform. And you're right that the condition of no zero curvature dictates something about the decay of that Fourier transform. This is really important. If you take a fractal shape, God forbid, then you have a problem. If you take a shape with corners, or, even if it's smooth, with points where the curvature is zero, then you will get a different answer; the exponent no longer needs to be a half. And this is dictated by the smoothness and curvature of the boundary. So Andrew put his finger on it: that's one reason why I made this assumption, to have sort of a clean statement. You can and should study shapes which have corners, or where you allow points of zero curvature, and so on. I don't think people have done that yet, but it should be doable, or at least somewhat doable, and it would be interesting to do. So one more question, please. Ofir, please ask your question. Ofir Gorodetsky. Yes, I wanted to ask if there was some empirical work to study this quantity, even for non-convex bodies or something. So Ofir is asking if someone has tried to do numerics, and I don't think so. Again, the problem as I took it came from the mathematical physics literature, even though it should have come from the number theory people; they didn't, or at least, as I said, Wintner studied it, but it got lost. By the way, Ofir, the curves I'm showing here are not empirical. They are purely theoretical curves, because I know the answer; I will tell you what the answer is later, and I just use the answer to generate the curves. But what happens when you take a non-convex, non-smooth, or flat shape?
I don't think anyone has even tried to look at this empirically. Again, I agree it would be fun to do, but they haven't done it. OK, are there more questions? No. OK. Right, so this is again stuff that is more or less known, with the assumptions I described; once you drop them, you can still stick to planar domains, and there's new physics to be done there, I would say. But let's move to three dimensions. Instead of looking at lattice points in the plane, you look at lattice points in a ball; it's the same question. And the answer is the volume: the number of lattice points in a large ball is asymptotically its volume, a constant times r cubed. You prove it the same way as we did for the circle, and the proof will give you a remainder term which is the area of the boundary surface, namely of order r squared. And if you make the heuristic that the contributions of the individual cubes are random, you will get the square root of the surface area, which is r to the one. So that's the conjecture; I don't know who it's due to, but it's very old. And analogous to the two-thirds exponent in the plane, there's a four-thirds exponent here. The problem is actually quite difficult; it's still not known. This ball problem, the sphere problem, is hard. The world record, as far as I know, is due to Heath-Brown, and is some exponent which is bigger than one but less than four thirds. And once you have that, you look at the normalized remainder term: you normalize the remainder by the square root of the surface area, which is what you think is the right answer, and you ask, does this have a limiting distribution? And the answer is no, because already in 1940, Vojtěch Jarník computed the mean square of this normalized remainder term.
And he showed that it blows up. Not only does it blow up, there's an asymptotic: the mean square of this normalized remainder term is asymptotic to a multiple of log of the radius. Now this gives rise to a new conjecture, which as far as I know is due to Pavel Bleher and Freeman Dyson, in a paper where they rediscovered Jarník's work, which was embarrassing for them, but they put forward a nice conjecture. The conjecture is that this remainder term is Gaussian after you normalize correctly, meaning not by the square root of the area of the bounding sphere, but by that times the square root of log r. The conjecture is that after you further renormalize, it now has a Gaussian limiting distribution. So it's very, very different from the two-dimensional case. And I should say that if you take a generic ellipsoid, I think you will no longer have the log feature, and then you will be back to the case of a non-Gaussian limiting distribution; I don't remember if that's known or not. But in any case, for the ball, where the remainder term actually needs to be further renormalized, the limit is supposed to be Gaussian. I was looking for numerics on this; I once saw some very rough numerics, but I couldn't find them today. I think that's a great problem to try. The worst-case question, which is the circle problem and the ball problem as I stated them, is the kind of thing only mathematicians do; physicists will look at the typical behavior, and the typical behavior is the distribution. So I think that's a very attractive conjecture; well, even if it's wrong, it's attractive to me. OK, so this is dimension three. Now, I keep telling you that my motivation for looking at this comes from problems in mathematical physics, so let me just spend two slides telling people where to look. Where do such problems arise? Let's look at what is called Weyl's law.
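The three-dimensional count is just as easy to check by brute force. A small sketch (mine, not from the talk): count lattice points in a ball of integer radius r and compare with the volume 4/3 pi r cubed; the discrepancy sits well below the surface-area scale 4 pi r squared.

```python
import math

def lattice_points_in_ball(r: int) -> int:
    """Count integer points (m, n, p) with m^2 + n^2 + p^2 <= r^2."""
    count = 0
    for m in range(-r, r + 1):
        for n in range(-r, r + 1):
            s = r * r - m * m - n * n
            if s >= 0:
                # p ranges over |p| <= floor(sqrt(s))
                count += 2 * math.isqrt(s) + 1
    return count

r = 50
n_pts = lattice_points_in_ball(r)
volume = (4.0 / 3.0) * math.pi * r ** 3
surface = 4.0 * math.pi * r ** 2
# n_pts - volume is far smaller than the surface-area bound.
```

Repeating this over many radii and averaging the square of (n_pts - volume) / r would be a way to watch Jarník's log r divergence emerge, though that needs larger radii than this toy loop.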
So Weyl's law: you're trying to count the eigenvalues of the Laplacian. You take your favorite planar domain, for instance, and look at the Euclidean Laplacian with suitable boundary conditions. You can take Dirichlet boundary conditions, which roughly mean that the function vanishes at the boundary in a suitable sense, or Neumann boundary conditions, where the normal derivative vanishes. Then you look for eigenvalues of this problem, for the Laplacian acting on functions with these boundary conditions. The statement is that there are actually a lot of these eigenvalues; in fact, the eigenfunctions give you an orthonormal basis of the functions on this shape, and the eigenvalues are a discrete set of numbers which only accumulate at infinity. So you can actually count them. Here is a picture of the staircase function of these eigenvalues: you jump by one every time you hit an eigenvalue. You see this staircase, which is climbing up all the way to heaven, because there are a lot of eigenvalues, and you can see that it looks like it has linear growth; that's the blue curve over here. And this was one of Hilbert's problems: to prove that there is an asymptotic law. The asymptotic law says that the number of eigenvalues up to x grows linearly in x as x goes to infinity, and the pre-factor tells you the area of the domain. The way Mark Kac formulated it is that we can hear the area of a drum, because these eigenvalues correspond to frequencies of a drum; you think of this shape as a membrane. So we have a counting problem, and then, of course, you can ask about the remainder term and play the same game. Now, in very special situations, really a handful of situations, you can actually compute these eigenvalues; you can write a formula for them. This is extremely rare. This is not what usually happens.
For instance, if your shape is a rectangle, then the eigenvalues are just sums of two squares, with coefficients which depend on which rectangle you're looking at. And then, when we want to count the number of eigenvalues, it's exactly counting the number of values of this quadratic form when you restrict the variables to be positive, so you're looking at a quarter of a circle, let's say. The statement of Weyl's law is then just counting lattice points in a quarter of a circle or a quarter of an ellipse, and everything you know about the remainder term for an ellipse you also know in this case. OK, so this is how to prove Weyl's law in this special case. Then there are some, again very rare, other cases where the eigenvalues can be parameterized, not as directly, but at least approximated by things which come from lattice points. For instance, if you take a disk as your domain, the eigenvalues are squares of zeros of Bessel functions, and those have a semi-classical approximation in terms of lattice points; similarly for surfaces of revolution and so on. In some of these cases, for instance some surfaces of revolution, the remainder term has been investigated as far as the limiting distribution, and you find something that looks similar to what happens in the circle problem: there is a non-Gaussian limiting distribution. This is not known, for instance, for the eigenvalues of the disk, but in some cases it is known. So I think that's an interesting motivation to look at these cases. And as I said, for most shapes in the plane, the eigenvalues are not explicitly solvable; we don't know what these eigenvalues are, except that we know that they exist. Otherwise, Weyl's law wouldn't be such a big thing.
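The rectangle case can be checked directly. A sketch under my own conventions (not a computation from the talk): for the Dirichlet Laplacian on the unit square, the eigenvalues are pi^2 (m^2 + n^2) with m, n >= 1, so counting eigenvalues up to x is counting lattice points in a quarter disk, and Weyl's main term is area times x over 4 pi, here x / (4 pi).

```python
import math

def weyl_count(x: float) -> int:
    """Number of Dirichlet eigenvalues pi^2 (m^2 + n^2), m, n >= 1, of the
    unit square that are <= x, i.e. lattice points in a quarter disk."""
    r2 = x / math.pi ** 2  # count m^2 + n^2 <= r2 with m, n >= 1
    count = 0
    m = 1
    while m * m <= r2:
        count += math.isqrt(int(r2 - m * m))  # number of n >= 1 with n^2 <= r2 - m^2
        m += 1
    return count

x = 100000.0
ratio = weyl_count(x) / (x / (4 * math.pi))  # approaches 1, per Weyl's law
```

The smallest eigenvalue of the unit square is 2 pi^2, which is about 19.74, so the staircase first jumps just below 20.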
But you can still try to study these remainder terms, and there is a very interesting conjecture that says the following. Take a shape where the billiard flow is chaotic; I don't want to say exactly what that means, but an example is to take a stadium, which means you take a rectangle and stick two semicircles on the sides, and then desymmetrize it by just taking a quarter of the shape. Then the eigenvalues, we don't know what they are, but one can compute them numerically, using finite element methods for instance, and look at the statistics of the remainder term. And there's an interesting, let's call it conjecture, coming from work by physicists, Frank Steiner and his team in Ulm, indicating that maybe for these chaotic situations the limiting distribution is Gaussian. I don't know whether this is true or not, but it's something that one should bear in mind; I had some little project many years ago about the modular domain for the hyperbolic Laplacian where I gave some support for this. OK, so I want to move on to my last subject, which is much more arithmetic: it has to do with primes. I want to do a two-dimensional prime number theorem, but let's remember the one-dimensional prime number theorem first. So pi of x is the number of primes up to x; in this audience, I don't think we need to explain what this is. And the prime number theorem is the statement of how many primes there are up to x, and the answer is that the number of primes up to x is asymptotic to the logarithmic integral, which is this ugly function, which in turn looks like x over log x; there's an asymptotic series in descending powers of log which describes this logarithmic integral. But you want to keep the full logarithmic integral; you don't want to take just the leading term, because that would screw up everything else I'm going to say.
And in case you're not happy with this ugly function, which most people are, instead of counting primes you can count primes and prime powers weighted by the logarithm of the corresponding prime. This weight is called the von Mangoldt function, and the resulting count is the Chebyshev function. Instead of being asymptotic to li of x, it's now asymptotic to x: you've smoothed out the main term. So this is the prime number theorem, which was proved over 120 years ago, and the Riemann hypothesis is a statement about the remainder. The Riemann hypothesis is completely equivalent to the statement that the difference between pi and li grows like square root of x, roughly, or that the difference between psi of x and x grows like the square root of x. So that's the remainder term that we want to study from the point of view of distributions. OK? And here's another theorem of Aurel Wintner, again from the late 30s and the 40s. The statement is that you normalize the remainder term by square root of x; you have to do that because that's what the Riemann hypothesis tells you, and it's well known that you can't do better, so square root of x is the right thing here. Once you normalize, you look at the logarithmic quantity: instead of taking x, you take e to the t and think of t as your random variable, or equivalently take psi of x minus x over root x, but now with a logarithmic measure on x. Then this has a limiting distribution; that's the statement. So in particular, square root of x is the right size, but you needed to assume the Riemann hypothesis. And here's a picture of what it looks like. This one is not an empirical picture from just taking a lot of primes; it is a theoretical picture, because I know the formula for this limiting distribution. Well, when I say I know, you will see later that I know in a very loose sense. OK. So this is the one-dimensional story.
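The Chebyshev function and its normalized remainder are easy to compute with a sieve. A small sketch (my own; the smallest-prime-factor sieve is one standard way to tabulate the von Mangoldt function):

```python
import math

def von_mangoldt_table(limit: int) -> list:
    """lam[n] = log p if n = p^k for a prime p, else 0."""
    spf = list(range(limit + 1))  # smallest prime factor of each n
    for p in range(2, math.isqrt(limit) + 1):
        if spf[p] == p:  # p is prime
            for q in range(p * p, limit + 1, p):
                if spf[q] == q:
                    spf[q] = p
    lam = [0.0] * (limit + 1)
    for n in range(2, limit + 1):
        p, m = spf[n], n
        while m % p == 0:
            m //= p
        if m == 1:  # n is a pure prime power p^k
            lam[n] = math.log(p)
    return lam

LIMIT = 100000
lam = von_mangoldt_table(LIMIT)

def psi(x: int) -> float:
    """Chebyshev function psi(x) = sum of Lambda(n) over n <= x."""
    return sum(lam[2 : x + 1])

# The normalized remainder (psi(x) - x) / sqrt(x) stays of modest size,
# consistent with the square-root-of-x scale in the Riemann hypothesis.
remainder = (psi(LIMIT) - LIMIT) / math.sqrt(LIMIT)
```

For example psi(10) = 3 log 2 + 2 log 3 + log 5 + log 7, about 7.832, already quite close to 10 on the logarithmic scale of the problem.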
And the theme of this lecture, in the last 10 or 15 minutes, is how to combine these two things: instead of looking at lattice points in a planar domain, or at primes in an interval, to look at prime lattice points in planar domains, which again is something that you should only do after a long drink, which we can't do together nowadays. Just one second, please. Yes, you want to ask your question? Hey, yeah, it seems that my question was answered by Peter Humphries. Sorry. OK, do we want to know what the question was? I asked what the decay rate of the limiting distribution is, and Peter says that it's doubly exponential. Thank you, Peter. Yeah, in the lattice point problem, this distribution here decays very quickly. OK. So let me move to the two-dimensional case, where you'll see something even more extreme happening. Here's the two-dimensional case. Instead of counting all lattice points, you take your favorite domain in the plane, for instance a circle, an ellipse, or some more general thing, dilate it, and look at lattice points where both coordinates are prime. When I say prime, I mean the absolute value is prime. Or, based on our experience from counting primes in the one-dimensional case, you may as well look at lattice points where both coordinates are prime powers, and weight each coordinate by the logarithm of the corresponding prime, by the von Mangoldt function; you get a related counting function. And then I want a two-dimensional prime number theorem, that is, to understand the asymptotic behavior of this two-dimensional prime counting function. Now again, I'm going to put some assumptions on the domain; we need them for the analysis we've done. I'm sure one can do better, but we want a baseline, a benchmark, for what can be done later. So we assume the planar domain is bounded by a smooth convex curve which has no zero curvature, as we did before, and we call this an oval.
And moreover, I'm going to assume that the oval is symmetric, just because if I count a prime p, I also want to count minus p. There are many shapes like this; here are three examples, and here's an example which is not good, because it's not convex and so on. So we're happy that there are shapes like this. So here is the statement. First of all, there is a two-dimensional prime number theorem: the number of prime lattice points is asymptotically the area of the dilated oval divided by the square of the logarithm of the dilation parameter. Equivalently, the Chebyshev-type function, where you count with von Mangoldt weights, is asymptotic to the area of the dilated oval; this is the nicer thing to look at. This is unconditional; that's the last unconditional thing you'll see here, and unconditionally the remainder term you get is not very good, of course. If you assume the Riemann hypothesis, we can get a remainder term which is better than r squared: the remainder term is now r to the three halves. Now, in the standard lattice point problem, the remainder term that was known over 100 years ago was r to the two thirds. This is not a misprint; here it is three halves, so much bigger. So you could say, in the decline of the generations, that the people working 120 years ago were probably better than myself: that's why they got two thirds and I get three halves. However, three halves turns out to be sharp here. The next statement is that if you assume RH, and you normalize, you look at the difference between the main term and the actual count and divide by r to the three halves, which I was hinting might be the wrong normalization, but I was just teasing, then there is a limiting distribution, which of course is not identically zero. Again, you take a logarithmic distribution on the radius: instead of r, you work with e to the t, and t is now going to be the uniform random parameter.
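The main term of the two-dimensional prime number theorem can also be sanity-checked numerically. A rough sketch (mine, for the unit disk dilated by R, with a von Mangoldt weight on each coordinate): the weighted count over all four sign quadrants should approach the area pi R squared.

```python
import math

def von_mangoldt_table(limit: int) -> list:
    """lam[n] = log p if n = p^k for a prime p, else 0 (SPF sieve)."""
    spf = list(range(limit + 1))
    for p in range(2, math.isqrt(limit) + 1):
        if spf[p] == p:
            for q in range(p * p, limit + 1, p):
                if spf[q] == q:
                    spf[q] = p
    lam = [0.0] * (limit + 1)
    for n in range(2, limit + 1):
        p, m = spf[n], n
        while m % p == 0:
            m //= p
        if m == 1:
            lam[n] = math.log(p)
    return lam

R = 1000
lam = von_mangoldt_table(R)
psi_partial = [0.0] * (R + 1)  # psi_partial[k] = sum of Lambda(n), n <= k
for n in range(1, R + 1):
    psi_partial[n] = psi_partial[n - 1] + lam[n]

# Weighted count over the quarter disk m, n >= 1 with m^2 + n^2 <= R^2,
# times 4 for the sign choices allowed by the symmetric oval.
quarter = sum(lam[m] * psi_partial[math.isqrt(R * R - m * m)]
              for m in range(2, R + 1))
weighted_count = 4 * quarter
area = math.pi * R * R
ratio = weighted_count / area  # should drift toward 1 as R grows
```

Even at R = 1000 the ratio is within a few percent of 1, which is consistent with an error term of a lower order than R squared.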
So if you look at this normalized and reparameterized remainder term, the statement is that, assuming RH, assuming the Riemann hypothesis, there is a limiting distribution: the probability that this normalized remainder term lies between, say, zero and one has a limit. OK, so that's the third of these results. Just one question, please. Hector, would you please ask your question? Thanks. I'm wondering, instead of assuming GRH, what if you just use zero-free regions? Do you get some error term for item one on this slide? Yes, yes. In item one, and again I don't use GRH, I use RH, not that anyone cares, you will of course get a remainder term if you use the standard zero-free region, with a very similar result; I just don't like writing it. You'll get r squared over e to the square root of log r, or something like that. All right. So when you write RH, you really mean for the Riemann zeta function? Yes, yes, yes. I really mean for the Riemann zeta function. Thank you, Hector. OK, Hector knows me: I usually pooh-pooh anyone who makes a distinction between RH and GRH, because if you give me one, I'll probably do the other the next day. But here I don't need GRH, I just need the Riemann hypothesis. OK, so we continue. Now we are doing science fiction, right? We had this one unconditional statement, the main term, and as Hector was saying, you can also get a remainder term unconditionally. Then, assuming the Riemann hypothesis, you get a power saving, as the experts would say. But the claim is that this is the correct remainder size, in the sense that if you normalize by it, then you get something which is not going to zero, which has a limiting distribution. This only assumes RH. So I don't know if it's science fiction; it's just not the science of today, maybe the science of tomorrow. But I am going to move much further away from reality than this. Yes, Andrew.
Do you mean big O of R to the three over two, or R to the three over two times a log? No. Okay, sorry. When I say three-halves is the correct exponent, it's the statement here: if you normalize by R to the three-halves, then you're not going to zero, and moreover you're bounded most of the time. I'm not saying what happens — okay, I haven't said yet. I will in a second. No, but in part two, you're saying the error term is absolutely bounded? Yes. Yes. Right. Okay. So, right. So first of all, there's no log here. And I'm claiming there shouldn't be a log: this is the right exponent and this is the right size. So the difference is bounded by 10 times this. And the limiting distribution of this normalized remainder term is compactly supported. So it's unlike what happens in these other problems: there are none of these funny logs and log logs and horrible things like that here. The difference is really bounded by 10 times R to the three-halves. Okay. Right. So I think once I write the formula that we have for the normalized remainder term, all these questions will be resolved. We have a formula, which is not surprising — you just use the explicit formula, for people who know what that is; others can just ignore this remark. We have a formula for this remainder term, and the formula is that it's a sum over the Riemann zeros. So, going back to Hector's question, it's just the zeros of the Riemann zeta function. And the formula is that there's a certain coefficient, which depends on which zero you take, times e to the i t gamma n. It's like a Fourier series. And moreover, that coefficient does depend on the domain, but we know how it decays; we even have an asymptotic for it. It decays in gamma, the imaginary part of the zeros, like 1 over gamma to the three-halves. Now, the nth zero of the Riemann zeta function grows essentially linearly in n. So let's pretend that gamma n is n.
So this looks like 1 over n to the three-halves. So you get an absolutely convergent series here, and therefore this normalized remainder term is bounded. And that explains why I put a big O and didn't put any logs and didn't put quotation marks and didn't lie. Okay. So here's the corollary. If you assume the Riemann hypothesis — here's where you have to assume it: we assume that the zeros, the way I've written them here, have these gammas as their imaginary parts, and that these are real numbers — then the remainder term is bounded by R to the three-halves. Okay. This follows from this formula. And now I'm going to show you, just very quickly, a formula for these coefficients. You won't get too much from looking at it; I just want to show you that it exists. Right. So here, I remind you, is the normalized remainder term. And I'm claiming there's a formula for it: it's a Fourier series with coefficients, and here's the formula for these coefficients. You take a Mellin transform of the functions describing the graph of this shape, and the Mellin transform is evaluated at the zeros. And these are those functions. And this is the formula. Again, I don't want to say anything more than that, except to convince you that there is a formula; if you look at our paper, you will find it there. And that formula, among other things, has the geometry of the boundary built into it, because one of the quantities that comes in is the curvature of the shape at the vertices. These are special points because of the symmetry; they are local extrema of the curvature. So the geometry is built into the coefficients somehow, and they decay like 1 over gamma to the three-halves. This is really crucial, because it makes this series absolutely convergent, which makes it much easier to handle than in the one-dimensional prime number theorem, where the analogous series does not converge. Okay. So this was science fiction.
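To fix ideas, here is the shape of the statement in symbols — a schematic reconstruction from the talk, in my own notation, not the exact normalization of the paper:

```latex
% Schematic explicit formula for the normalized remainder term
% (notation mine; see the paper for the precise statement).
\[
  \frac{\psi_\Omega(R) - \operatorname{area}(R\,\Omega)}{R^{3/2}}
    \;=\; \sum_{\rho = \frac12 + i\gamma_n} c_n(\Omega)\, R^{i\gamma_n},
  \qquad
  |c_n(\Omega)| \ll \gamma_n^{-3/2}.
\]
% Since the n-th zero has gamma_n growing essentially linearly in n,
% the series is dominated by \sum_n n^{-3/2} < \infty, so the left-hand
% side is uniformly bounded -- with no logarithms.
```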
Now let's move into science fantasy, or whatever you call it. So I want to describe what the value distribution of this function is. For this I need to put in a hypothesis about the Riemann zeros which goes way beyond the Riemann hypothesis. So, the Riemann hypothesis is clearly correct. And I will assume one more thing: that if you look at the imaginary parts of the Riemann zeros, then they are linearly independent over the rationals. I only need to look at the positive imaginary parts, because there's a symmetry. So this is the hypothesis. Good luck if you want to prove it. It was used already by Wintner in the 1940s, and it's a popular hypothesis; it is used in prime number races and other things. I have no idea if it will ever be proved. If anyone wants to comment on that, we can do it after the talk; I'm sure we can amuse ourselves. But it is plausible, and it has been checked numerically — to the extent that such a thing can be checked, since it's like checking that something is transcendental. So, for instance, a few years ago, Best and Trudgian checked that the first 500 zeros don't admit any linear relations with integer coefficients of size at most 10 to the 5. Which, okay, is some evidence at least. So this is what I need beyond the Riemann hypothesis, and it is really way beyond the Riemann hypothesis. But if you give me that, then the statement is that the limiting distribution is smooth. It's compactly supported — this I know from the Riemann hypothesis, because, as was pointed out, the remainder term is bounded by R to the three-halves, really bounded. Moreover, it's a symmetric distribution. So unlike what we saw for ordinary lattice points in ellipses, where each ellipse has its own weird shape, here the distribution is symmetric. It still depends on the shape, and I'll tell you what it is: it's the probability distribution function of a sum of random cosines.
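That kind of numerical check can be imitated at a toy scale. Here is a sketch — my illustration, not the actual computation referenced in the talk — that brute-forces all small integer coefficient vectors on the first five zeros and reports how close any nonzero combination gets to zero:

```python
import itertools
import numpy as np

# First five imaginary parts of the nontrivial zeta zeros (~6-digit accuracy).
GAMMAS = np.array([14.134725, 21.022040, 25.010858, 30.424876, 32.935062])

def smallest_combination(gammas, B):
    """Smallest |sum_i a_i * gamma_i| over nonzero integer vectors with
    |a_i| <= B: a toy version of numerical linear-independence checks."""
    best = np.inf
    for a in itertools.product(range(-B, B + 1), repeat=len(gammas)):
        if any(a):
            best = min(best, abs(np.dot(a, gammas)))
    return best

print(smallest_combination(GAMMAS, 3))
```

No combination vanishes, consistent with the hypothesis; of course, at six digits of precision this rules out only fairly crude relations, which is why the serious checks use many more zeros and much larger coefficient ranges.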
So you replace the exponentials that I had before with just a sum of cosines of uniform, independent random variables, with the same coefficients as I had before, which are parametrized by the Riemann zeros. And this is some random variable, which has a distribution, a value distribution function, that you can compute as a convolution. And that's the distribution of our remainder term. So this is the answer. Of course, you may want a formula for it, so I'll give you the formula in the last slide or so. But again, this is all assuming that the zeros are linearly independent. So here's the formula for the distribution. Just look at this beautiful formula. It's a Fourier series where the coefficients are products of Bessel functions evaluated at these coefficients, which depend on the Riemann zeros. Okay, so you may or may not like it, but the good thing is that you can take this formula and stick it in a computer, because there are tables of the Riemann zeros. And once you have these tables, Mathematica will generate a plot of this function for you. So here are two examples. If our shape is the circle, then we get this dashed curve, which may or may not look like a Gaussian — it's not, because it's compactly supported. And if you take an ellipse and you choose the eccentricity cunningly, then you get a bimodal distribution. It's still symmetric, but with two humps. This is the difference between an Asian camel and an African camel; I'm allowed to use terminology from my part of the world. So, using this really outrageous hypothesis, you get this result. And okay, I think my time is up, so I will just skip this slide — this is how to prove everything — let me just skip it. So, just to summarize, I've shown you a bunch of problems about the value distribution of the remainder term; some are old, some are new. In all cases, there is a limiting distribution. In some of them, it can be proven and has been proven.
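The sum-of-random-cosines model is easy to simulate. Here is a hedged sketch: I take toy coefficients decaying like gamma to the minus three-halves (the true coefficients also involve the geometry of the oval, which I omit) over the first ten zeros, and sample the random variable X, the sum of the coefficients times cosines of independent uniform phases. The samples are symmetric about zero and never exceed the sum of the coefficients, matching the symmetry and compact support from the talk:

```python
import numpy as np

rng = np.random.default_rng(0)

# First ten imaginary parts of the nontrivial zeta zeros (~6-digit accuracy).
GAMMAS = np.array([14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
                   37.586178, 40.918719, 43.327073, 48.005151, 49.773832])

# Toy coefficients with the gamma^(-3/2) decay stated in the talk; the
# true coefficients additionally depend on the oval's boundary curvature.
coeffs = GAMMAS ** -1.5

def sample(n_samples):
    """Draw from X = sum_n c_n * cos(theta_n), theta_n i.i.d. uniform."""
    thetas = rng.uniform(0.0, 2.0 * np.pi, size=(n_samples, len(coeffs)))
    return (np.cos(thetas) * coeffs).sum(axis=1)

xs = sample(100_000)
print("mean:", xs.mean())                     # symmetry: close to 0
print("max |X|:", np.abs(xs).max(), "<= sum |c_n| =", coeffs.sum())
```

A histogram of `xs` gives a compactly supported, symmetric density in the spirit of the plots shown on the slide, though with these toy coefficients it is only a caricature of the circle or ellipse curves.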
In this case of counting prime lattice points, I know what it is, but to prove anything, it's not enough to assume the Riemann hypothesis; I need this ridiculous linear independence hypothesis, which seems true and gives you good numerics, but I don't expect anyone to prove it soon. Thank you.