Welcome everyone and thanks for coming. As Philip says, this is my second time in the seminar; the last one was almost two years ago. It's hard to believe that we've been doing this for two years. Hopefully next time we do it there will be other options. So what I want to discuss today is what's been going on beyond the classical theory of uniform distribution in the last few years. This is a very introductory level talk, so I apologize in advance to the people who are more expert, and I see several here in the audience; it's not for them, it's more for people from outside the subject. The plan is to give a five minute survey of the classical theory of uniform distribution. That's a theory that started out about a century or so ago, with works by Hardy and Littlewood, by Weyl, by van der Corput and other people. This is a theory that has been going on for a while, and much more recently the focus of the research has moved to much finer quantities, which I will describe presently, namely nearest neighbor spacings, pair correlations and related quantities. Some of this research has been driven by questions that have come from outside of number theory, or even mathematics, which have to do with problems in quantum chaos, and also from new features of the zeros of the Riemann zeta function which were discovered about 50 years ago; these will also be very briefly described. So the plan is to spend a few minutes on the classical theory of uniform distribution, then define these finer statistics and describe how they occur in random models, and then in the theory of the Riemann zeta function and in quantum chaos. This will be just a few minutes, and then in the second half of the talk I will discuss how these quantities are manifested in the standard examples of classical uniform distribution theory, where there are new things to be done. So let's just start with the classical theory of uniform distribution.
In this theory we take a sequence of real numbers and look at the fractional parts, which are numbers between zero and one. We say that the sequence is uniformly distributed mod one if the proportion of points that any subinterval of the unit interval (or the unit circle) contains is asymptotically the length of the interval. So that's the definition. In particular, a uniformly distributed sequence must be dense. Here are three numerical examples. In the first one you take the numbers to be an arithmetic progression with an irrational step: take root two times n, to be definite, and look at the fractional parts of that sequence. Here is the histogram of these fractional parts. I took 10,000 points, n running from one to 10,000, and I binned them into, I think, 20 bins, each of length one over 20. The height of the bars is the number of points that fall into each bin, so there are roughly 500 points in each bin. So this is uniform distribution: whatever the definition is, this is uniform distribution, because each bin visibly contains essentially the same number of points. Here's another example: instead of root two times n, let's take the sequence log n and look at its fractional parts. Now if you look at this experiment, where I take 10,000 points and divide them into 20 bins, visibly not all bins contain roughly the same number of points. So this sequence is experimentally not uniformly distributed, and as an exercise I suggest that you prove that it is not uniformly distributed. So these are two easy examples. Here is another example, which is not too difficult, where you take the sequence to be the square roots of integers and look at the fractional parts. And then you see that it is roughly uniformly distributed, because the bin heights seem to oscillate around 500, which is the mean value.
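To make the experiment concrete, here is a small Python sketch of the binning just described; the helper name and bin count are my own choices, not from the talk:

```python
import math
from collections import Counter

def bin_counts(seq, n_bins=20):
    """Histogram of the fractional parts of seq into n_bins equal bins."""
    c = Counter(int((x % 1.0) * n_bins) for x in seq)
    return [c.get(b, 0) for b in range(n_bins)]

N = 10000
kronecker = bin_counts(math.sqrt(2) * n for n in range(1, N + 1))
logs      = bin_counts(math.log(n) for n in range(1, N + 1))
roots     = bin_counts(math.sqrt(n) for n in range(1, N + 1))

# For sqrt(2)*n every bin holds close to N/20 = 500 points,
# while for log(n) the bin counts are wildly uneven.
print(max(kronecker) - min(kronecker))   # small spread
print(max(logs) - min(logs))             # large spread
```

Running this reproduces the qualitative picture of the three histograms: a flat profile for root two times n, visible clumping for log n, and mild oscillation around 500 for root n.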
It's not as uniformly distributed as the so-called Kronecker sequence of root two times n, but it's certainly uniformly distributed if you compare it to the picture for log n. So these are three good examples to keep in mind, and I think the experiments are very convincing. Let's look at more examples, without the numerics this time. The example that I discussed can be generalized: instead of root two times n you take alpha times n. If alpha is one then the fractional parts will always be zero, so that's not so interesting, so you make alpha irrational, and then the statement is that the fractional parts of alpha times n are uniformly distributed when alpha is irrational. The example that we looked at before, the fractional parts of log n, can be proven not to be uniformly distributed. So this was all known in the beginning of the 20th century. Hermann Weyl made a huge breakthrough in this theory and invented a method that allowed one to prove that various sequences are uniformly distributed. For instance, if you look at alpha times n squared, with alpha irrational, then Weyl proved that this is uniformly distributed. For this purpose he invented the method of what are now called Weyl sums, Weyl differencing, and so on. More generally, if you take the values of any polynomial at integer points, as long as there is one non-constant coefficient which is irrational, then the sequence that you get is uniformly distributed. Again, this is Hermann Weyl. Another example that will be particularly relevant for me is, instead of n squared, you take n to the theta, where theta is any number. If theta is an integer then this falls under the Weyl example, but you can take non-integer thetas; for instance, the case I'm interested in is theta equal to a half. The statement is that for any non-zero alpha and any non-integer theta, the sequence alpha times n to the theta is uniformly distributed. So again, this is old stuff.
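Weyl's criterion (a sequence is uniformly distributed mod one exactly when the normalized exponential sums tend to zero for every non-zero integer frequency) can also be tested numerically. This is a rough sketch of my own, with thresholds chosen only for illustration:

```python
import cmath, math

def weyl_sum(points, k):
    """|(1/N) * sum_n exp(2*pi*i*k*x_n)| -- small for every k != 0
    exactly when the sequence is uniformly distributed (Weyl's criterion)."""
    pts = [x % 1.0 for x in points]
    s = sum(cmath.exp(2j * math.pi * k * x) for x in pts)
    return abs(s) / len(pts)

N = 100000
alpha = math.sqrt(2)
# alpha*n^2: the normalized Weyl sum is tiny, consistent with Weyl's theorem
v_quad = weyl_sum((alpha * n * n for n in range(1, N + 1)), 1)
# log n: the k=1 Weyl sum does NOT go to zero (the sequence is not u.d.)
v_log = weyl_sum((math.log(n) for n in range(1, N + 1)), 1)
print(v_quad, v_log)
```

The contrast between the two printed values is the quantitative shadow of the two histograms from before.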
For people who want to read up on this, there's a classic book by Kuipers and Niederreiter, which describes this theory very succinctly. Another classical result, which I think is due to Weyl, is that if you take any sequence of distinct integers and look at the dilates, by multiplying them by alpha, then for almost all alpha you get a sequence which is uniformly distributed. Almost all is in the sense of Lebesgue measure. Now if you apply this to the sequence of powers of 10, for instance, the statement is that almost all dilates of 10 to the n are uniformly distributed, and that turns out to be equivalent to the statement that alpha is normal. What is normal? You take the base 10 expansion of alpha, and, for instance, you ask how often the digit 3 appears: what proportion of the digits are equal to 3? The answer should be one tenth, because 3 shouldn't be different from 5, 7, 9 or 2. Likewise, if you take any string of k digits, with k fixed, let's say k is 7, then any string of 7 digits should occur equally often, that is, with frequency one over 10 to the 7, in the decimal expansion. A number that satisfies this is called normal, and Weyl's metric theorem says that almost all alphas are normal in base 10, or base 3, or base 2, any of your favorite bases. Normality is a subject that I think was started by Émile Borel in the beginning of the 20th century, who gave a proof that almost all numbers are normal. But we don't really know any natural, nice number that is normal. For instance, we don't know that root 2 is normal, or cube root of 2, or pi, or e; no algebraic number is known to be normal, even though we believe that any algebraic irrationality is normal in any base. There are explicit examples that you can write down that are provably normal for one base.
For instance, Champernowne, I believe when he was an undergraduate, wrote down this number: you just concatenate the base 10 expansions of the consecutive integers, like 0.123456789101112 and so on. And this you can prove is normal to base 10, but it is not known, to the best of my knowledge, to be normal in any other base. There are other examples like this; I wrote down one which is due to Stoneham, which is normal to base two but was proven not to be normal to base six. So the study of normal numbers is still a big and ongoing subject (for instance, there is an expert on this here in the audience), and there are many interesting questions about them, but for me the most interesting question is whether root two is normal, and that is not known. There are many preprints proving it, but none have survived. Okay, so this, in a few minutes, is the classical theory of uniform distribution, some of the highlights. Now let's move beyond. Once we know that a sequence is uniformly distributed, we look at finer statistics. People have started looking at these only very recently, on the time scales of this theory. I'll discuss two such quantities: the level spacing distribution and the pair correlation function. Here's the definition. You take a sequence of points in the unit interval, or the unit circle, say x1 up to xN, and now let's order them. Ordering them is a non-trivial operation, so we give the ordered points a different name; people call these the order statistics. In the example we had before we took square root of two times n; that's an ordered sequence of real numbers, but when you reduce mod one the order gets mixed up.
It's something completely different, so to highlight this I give them different names. The order statistics are the same sequence of points, but now labeled according to the order in which they lie in the unit interval. Once you order them, you look at the gaps between nearest neighbors in this order. If there are N points in the unit interval, then on average the size of a gap is one over N; that's the definition of an average. And you ask for the distribution of these gaps on the scale of the mean gap. You look at the proportion of nearest neighbor gaps between these ordered elements which are, let's say, between zero and x over N, where the typical gap, the average gap, is one over N. You take the limit and hope that the limit exists, and if it does exist, you hope that it's given by some continuous density; that density is called the level spacing distribution. The fine print is that the limits are not guaranteed to exist, and they need not exist, but in many interesting cases we believe they do. So this is the level spacing distribution: the distribution of the gaps between nearest neighbors of the sequence once you order it. That's one quantity; I'll show you examples in the next slide. The second quantity is simpler. It's called the pair correlation function. Again, it's not guaranteed to exist, but I'll give you the definition. In the level spacing distribution I looked at the gaps between nearest neighbors; for the pair correlation function I look at the gaps between any two points of the sequence, so now I don't need to order them. I look at the number of gaps between any two points of the sequence which are less than a given multiple of the mean gap; again, the mean gap is one over N. Then I divide by N and take a limit, and hope that the limit exists, and if it exists, that it's given by some nice density; that density is called the pair correlation function. Now, let's go back to the level spacing distribution.
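As a concrete reference for the two definitions, here is a naive O(N squared) Python sketch of my own (the function names are mine), illustrated on the picket fence example that comes up shortly:

```python
def nearest_neighbor_gaps(points):
    """Order the fractional parts and return the nearest-neighbor gaps,
    including the wrap-around gap on the circle, scaled by the mean gap 1/N."""
    xs = sorted(p % 1.0 for p in points)
    N = len(xs)
    gaps = [xs[i + 1] - xs[i] for i in range(N - 1)] + [1.0 - xs[-1] + xs[0]]
    return [N * g for g in gaps]          # normalized so the mean gap is 1

def pair_correlation_count(points, t):
    """(1/N) * #{i != j : circle-distance(x_i, x_j) <= t/N} -- no ordering needed."""
    xs = [p % 1.0 for p in points]
    N = len(xs)
    count = 0
    for i in range(N):
        for j in range(N):
            if i != j:
                d = abs(xs[i] - xs[j])
                if min(d, 1.0 - d) <= t / N:
                    count += 1
    return count / N

# Picket fence n/N: every normalized nearest-neighbor gap equals 1,
# and each point has exactly two neighbors within 1.5 mean gaps.
N = 200
fence = [n / N for n in range(N)]
print(nearest_neighbor_gaps(fence)[:3])          # all approximately 1.0
print(pair_correlation_count(fence, 1.5))        # approximately 2.0
```

Note that dividing by N (not N squared) in the pair correlation is exactly the normalization discussed next.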
By definition, it is a probability distribution, because you are asking for an asymptotic proportion. It may not exist, but if it exists, it's a probability distribution. The pair correlation function, on the other hand, is not a probability distribution. Let's look at the definition: how many pairs of elements do I have? Roughly N squared pairs. So why am I dividing by N and not by N squared? Here is the reasoning, for people who haven't seen this. It's true that there are N squared pairs, but the average gap is one over N. So as soon as j minus i is big, the difference will not be less than t over N; each x_i will only have a bounded number of other points within distance t over N from it. And since there are N choices of x_i, that's why we divide by N. Okay, I hope people have digested that. So these are the two quantities. The pair correlation is easier to define, because it doesn't need the non-trivial operation of ordering the levels; it's easier to define and easier to study, but it's not a probability distribution, and that gives you some other problems. The level spacing distribution is very intuitive if you just stare at it, but it requires this really non-trivial operation of ordering the levels. So let's look at examples, not of deterministic sequences but of random sequences, because I want to know what to aim for when trying to prove something. The first example is what is called a picket fence: you just look at capital N points which are equally spaced, n over capital N, with little n running from one to capital N. The difference between successive points here (you don't need to order them, they come ordered) is one over capital N, so the nearest neighbor spacings are all the same. When you normalize them, the normalized nearest neighbor spacings are all one.
And so the level spacing distribution is just a delta function. This is not very exciting, but it is what it is, very simple to study. Let's look at the next, more complicated example. You take random points in the unit interval; random means you take capital N independent, uniformly distributed points in the interval. And the question is: is there a limiting level spacing distribution? It's an exercise in probability theory that the answer is yes, and that the level spacing distribution (let's say its expected value, which is a simple thing to discuss) is given by the exponential distribution. In the plot here it's the green curve; I plotted the level spacing distribution for the case of uncorrelated levels, which is called the Poisson ensemble, or Poisson model. Another thing that is easy to compute in this model is the pair correlation function, and it turns out to be one. That's the plot of the pair correlation function. Again, it's not a probability density; it doesn't integrate to one. So that's the second example: we had the picket fence, now we have the Poisson model. The third example is much more complicated. Instead of looking at random points in the interval, you look at the eigenvalues of a random matrix. There are many flavors of random matrices; there is a whole subject called random matrix theory. I think the original, or almost original, flavor was to take the so-called Gaussian ensembles, for instance the Gaussian orthogonal ensemble, which means you take a random N by N real symmetric matrix. So these are random matrices with real eigenvalues. Randomness means that the matrix entries are independent Gaussians, and you choose the variances appropriately; I won't describe it.
Another example: instead of N by N real symmetric you can take complex Hermitian matrices; again you get real eigenvalues, and that's called the Gaussian unitary ensemble. A flavor which is technically easier to study is to take a random N by N unitary matrix and look at its eigenvalues. The matrices are random with respect to Haar measure on the unitary group; the eigenvalues are then random, but not independent, there are correlations between them. This is called the circular unitary ensemble, which is Dyson's name for this particular way of choosing random matrices. In all these cases you can work out the level spacing distribution, the distribution of nearest neighbor gaps. It's a non-trivial thing, and it was done in the 60s. Instead of writing down a complicated formula, I just plotted here this red curve, which is what you get from the Gaussian orthogonal ensemble; the Gaussian unitary ensemble is visibly very close. You can also compute the pair correlation function, which is easier, and for instance for the Gaussian unitary ensemble it is given by one minus the square of the sine kernel. Here is a plot of the three pair correlation functions for these models. The green one is Poisson, which was one; the blue one is for GOE; and the orange one is for GUE, which has this nice explicit formula. What you see here is that for random matrix theory the pair correlation functions vanish at the origin, as does the level spacing distribution, while for the Poisson ensemble the distributions do not vanish at the origin. The Poisson behavior is called level clustering, while for random matrix theory you get what's called level repulsion: there is very small probability of having close eigenvalues on the scale of the mean spacing. So that's what we are now aiming for. Now let's see how this is encountered in real life. One version of real life is quantum chaos.
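One cheap way to see level repulsion numerically is the 2 by 2 version of the GOE, where the eigenvalue gap can be written in closed form. This is only a sketch of mine (the 2 by 2 spacing density is the Wigner surmise, which vanishes at zero, while the Poisson gap density does not):

```python
import math, random

random.seed(1)

def goe2_spacing():
    """Eigenvalue gap of a random 2x2 real symmetric matrix [[a, b], [b, c]]
    with a, c ~ N(0, 2) and b ~ N(0, 1): gap = 2*sqrt(((a-c)/2)^2 + b^2)."""
    a = random.gauss(0, math.sqrt(2))
    c = random.gauss(0, math.sqrt(2))
    b = random.gauss(0, 1)
    return 2 * math.sqrt(((a - c) / 2) ** 2 + b ** 2)

M = 50000
s = [goe2_spacing() for _ in range(M)]
mean = sum(s) / M
s = [x / mean for x in s]                    # normalize the mean spacing to 1

small = sum(1 for x in s if x < 0.1) / M     # GOE: repulsion, very few tiny gaps
poisson_small = 1 - math.exp(-0.1)           # Poisson prediction, about 0.095
print(small, poisson_small)
```

The tiny-gap fraction for the matrix model is an order of magnitude below the Poisson value: that is level repulsion versus level clustering in two numbers.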
Here's a particularly simple model of a quantum system: a quantum billiard. A billiard, for our purposes, is a planar domain with, let's say, piecewise smooth boundary, like this one. You think of this domain as a membrane and ask for its characteristic vibrations; think of it as a drum, you beat it and you ask for the frequencies. The way you compute the frequencies is by solving an eigenvalue equation, meaning you look for eigenfunctions of the Laplacian with appropriate boundary conditions; for instance, the Dirichlet boundary condition just asks for vanishing on the boundary. That there are eigenvalues at all is a non-trivial thing. It was proved by Hermann Weyl that not only are there lots of eigenvalues, we can in fact count how many there are asymptotically, and Weyl's law says that in the case of a planar domain, the number of eigenvalues up to X grows linearly with X, and the coefficient tells you the area of the drum. Here is a plot of the so-called spectral staircase; there is the staircase and there is a linear approximation, and clearly you see that they agree asymptotically. So the mean spacing here is constant: if the growth is linear, it means that the average spacing is one over this prefactor. So we can ask: what is the level spacing distribution? It's a well-defined question; that doesn't mean it has an answer, but at least you can ask it. And it turns out to be an extremely complicated question. We don't really have an answer that we can prove in any reasonable case, but some fascinating conjectures have emerged about it. There are two extreme cases here. One is, for instance, what happens when you take a rectangle. In the case of a rectangle you can actually write down the eigenvalues; they are glorified sums of two squares. And here is an experiment for the level spacing distribution of a rectangle whose aspect ratio is pi over three.
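One can redo this rectangle experiment directly, since the eigenvalues are explicit sums of the form m squared plus c times n squared. Here is a sketch of my own, using c equal to (3 over pi) squared as a stand-in for a generic irrational aspect-ratio parameter (the talk's exact normalization is not specified here):

```python
import math

# Dirichlet eigenvalues of a rectangle are proportional to m^2 + c*n^2;
# c = (3/pi)^2 is one generic (irrational) choice, standing in for
# the aspect-ratio pi/3 example from the talk.
c = (3 / math.pi) ** 2
X = 100000.0
levels = sorted(m * m + c * n * n
                for m in range(1, math.isqrt(int(X)) + 2)
                for n in range(1, int(math.sqrt(X / c)) + 2)
                if m * m + c * n * n <= X)

# Weyl's law: the counting function grows linearly, so one global
# mean gap is an adequate normalization.
mean_gap = (levels[-1] - levels[0]) / (len(levels) - 1)
gaps = [(levels[i + 1] - levels[i]) / mean_gap for i in range(len(levels) - 1)]

# Poisson statistics predict about 1 - e^{-0.1} ~ 9.5% of gaps below one
# tenth of the mean; level repulsion would make this fraction tiny.
frac_small = sum(1 for g in gaps if g < 0.1) / len(gaps)
print(len(levels), round(frac_small, 3))
```

The small-gap fraction comes out near the Poisson value rather than near zero, which is the numerical content of "you can clearly see Poisson here."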
You can clearly see Poisson here. I don't know how to prove this, but the numerics seem very convincing to me, and it's a purely arithmetic question, because the eigenvalues are given explicitly. This is expected to hold for other so-called integrable systems, unless there's a reason for it not to be true: for instance, for a square it's not going to be true, because there are lots of multiplicities; for the sphere it's even less true, because of multiplicities. But once you recognize this, you think, okay, this is a reasonable conjecture. The other extreme is when the dynamics are chaotic, whatever that means. Here is an example: you take a square and remove a concentric disk from the inside. You look at the eigenvalues of the Laplacian with Dirichlet boundary conditions, meaning the functions vanish on the boundary of the disk and the boundary of the square. I don't know how to write down the eigenvalues in any explicit way, but you can compute them numerically, at least if you're an expert in that kind of numerical analysis. This is one such early computation that was done a long time ago, and you clearly see something completely different from Poisson: you see level repulsion. The smooth curve here is the GOE distribution, and if you're a betting man you would say, okay, this is a reasonable thing. There are a lot of traps you can fall into here, but I will not go into them. So the universality conjecture is that in these two extremes, generically at least, you will get these two distinct distributions: Poisson for integrable systems and random matrix theory for chaotic dynamics. There are exceptions: for instance, for the geodesic flow on the modular surface the distribution looks like Poisson, but we can explain this; after you see it, you can explain it. And there is no case where these conjectures are proven, not a single example. Okay, so the next example is the Riemann zeros. For people who don't remember:
This is the Riemann zeta function. It's defined by a series, or an infinite product over primes, and it has an analytic continuation and a functional equation relating the values at s and one minus s. The zeros of the completed zeta function are supposed to all lie on the critical line, the line of symmetry of the functional equation; that's the Riemann hypothesis, which numerically is certainly correct. And the Riemann-von Mangoldt formula tells us how many zeros there are supposed to be up to height T, and the answer is roughly T log T, which means that the mean spacing between consecutive zeros behaves like one over log T. So to look at things like level spacings or correlations you will have to rescale; the zeros are too dense, and getting denser and denser, and you want the mean spacing to be of size one. And there was an absolutely fundamental discovery made by Montgomery about 50 years ago. He studied the pair correlation function, which I'll describe in the next slide, and as a result of his studies the conjecture is that the zeros of the zeta function have random matrix statistics, specifically those of the Gaussian unitary ensemble. Here is a plot from a computation of 10 to the 5 zeros near the 10 to the 12-th zero, done many years ago. You see a scatter plot, the squares, and you see a smooth curve, which is the GUE level spacing, and it's clearly the right fit. By now there are plots containing hundreds of millions of zeros near the 10 to the 24-th zero, I think, is the latest figure, with an even better fit. Now, what Montgomery originally looked at is the pair correlation function, and his conjecture was in this form: you look at pairs of zeros with different indices, you ask how many of the differences are at most a times the mean spacing, which is two pi over log T, and you divide by the number of zeros up to height capital T.
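In symbols, Montgomery's pair correlation conjecture as just described (writing gamma, gamma prime for the ordinates of the zeros and N(T) for the zero-counting function) reads:

```latex
\frac{1}{N(T)}\,\#\Big\{(\gamma,\gamma'):\ 0<\gamma,\gamma'\le T,\ \gamma\ne\gamma',\
\tfrac{\log T}{2\pi}(\gamma-\gamma')\in[a,b]\Big\}
\;\xrightarrow[T\to\infty]{}\;
\int_a^b \Big(1-\Big(\frac{\sin \pi u}{\pi u}\Big)^2\Big)\,du,
\qquad N(T)\sim\frac{T}{2\pi}\log\frac{T}{2\pi}.
```

The kernel on the right is exactly the GUE pair correlation density mentioned earlier.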
And the conjectured answer is that it's the integral of this kernel. The story goes that Dyson pointed out that this is something he had worked on in the 60s, in a series of papers on the statistical theory of the energy levels of complex systems, and he recognized that this is what you get for the pair correlation of the GUE. So this is another plot of the pair correlation function; the smooth curve is this factor here, one minus the square of the sine kernel, the scatter plot is the numerics for, again, the first 100,000 zeros, and it looks good. These conjectures are not proven yet, but there is some evidence: Montgomery proved that you get this, not exactly, but at least the Fourier transforms of the two agree in a restricted range. And you can compute, so to speak, moments of these distributions, which are not the pair correlation but the three-level and higher-level correlations, and under similar restrictions you again get agreement. So there's actually quite reasonable evidence for this, much more so than in the case of quantum chaos. Okay, so these are examples of the new statistics as you see them in nature. For the second half of the talk, or the last third of the talk, I will discuss how you see these pair correlation and level spacing statistics in the examples of classical uniform distribution theory. We can take our favorite sequence, like alpha n mod one or alpha n squared mod one, and ask for its pair correlation function and level spacing distribution. This is something it took a while for people to look at; there was a gap of some years after Montgomery before this was done. And the result, which in retrospect tells us what the nearest neighbor spacings of the Kronecker sequence alpha times n are, is called the three gap theorem; it has a number of parents, for instance Vera Sós in the fifties. Here is the way to find out about the theorem the hard way: you do numerics and you don't see anything stabilizing.
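The three gap theorem is easy to watch in action. A quick sketch of my own (the tolerance is mine, chosen to absorb floating point noise when grouping equal gaps):

```python
import math

def distinct_gaps(alpha, N, tol=1e-9):
    """Sort the fractional parts of alpha*n, n = 1..N, and return the
    distinct nearest-neighbor gap lengths (including the wrap-around gap)."""
    xs = sorted((alpha * n) % 1.0 for n in range(1, N + 1))
    gaps = [xs[i + 1] - xs[i] for i in range(N - 1)] + [1.0 - xs[-1] + xs[0]]
    distinct = []
    for g in sorted(gaps):
        if not distinct or g - distinct[-1] > tol:
            distinct.append(g)
    return distinct

for N in (100, 1000, 100000):
    # at most three distinct gap lengths, and they change with N
    print(N, len(distinct_gaps(math.sqrt(2), N)))
```

However large you take N, the count never exceeds three, which is exactly why the empirical spacing histogram refuses to stabilize.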
Then you make up all kinds of excuses for why it doesn't stabilize, but this is the reason. The theorem says that if you take the first 100,000 elements of the sequence and order them (this is the non-trivial thing), then there are at most three distinct gaps. If you take the first 200,000, again you get at most, in fact typically exactly, three distinct gaps; not the same ones as before, but only three. So in this particular case there is no level spacing distribution: the limit does not exist. So it wasn't an imaginary question whether the limit exists; it need not exist, even in this very natural example. This is disappointing, so let's move to something more interesting. Instead of alpha times n, let's look at alpha times n squared, which is uniformly distributed as long as alpha is irrational. I looked at this with Peter Sarnak and Alexandru Zaharescu in the late 90s; it was a quarter of a century ago, I'm ashamed to say. And the conjecture that we arrived at is that if alpha is Diophantine, for instance square root of two, my favorite example, then the level spacing distribution of alpha times n squared mod one is Poisson; and not only the level spacing, but the pair correlation function and the more complicated correlation functions as well. Diophantine covers all algebraic irrational numbers. The formal definition is that you cannot approximate alpha too well by rationals, in a suitable sense. Algebraic numbers are known to be like that, and almost all numbers, in the sense of measure theory, are Diophantine. We don't know how to prove this conjecture; one of the things that we did manage to show is that for almost all alpha the pair correlation of alpha n squared is Poisson. And we had the same result if you take alpha times any polynomial with integer coefficients, as long as it has degree at least two.
Because if the polynomial has degree one, we are back to the Kronecker sequence, which we now know doesn't have a level spacing distribution. That was the state of the theory almost 25 years ago, and it was dormant for a couple of decades until a new generation of people started looking at it, and there are actually new developments here. For instance, there's a nice criterion for when you can expect to get an almost sure Poissonian pair correlation function. They abstracted some of the arguments from that theory and determined that the relevant quantity turns out to be the additive energy of the sequence of integers. I take a sequence of integers a(n); I want alpha times a(n) to almost surely have Poissonian pair correlation. Weyl's theorem says that alpha times a(n) is almost surely uniformly distributed; the criterion for the pair correlation has to do with additive energy. The additive energy is defined as the number of quadruples (k, l, m, n) with a(k) + a(l) = a(m) + a(n), the number of coincidences between pairs. That's the definition. It's at most N cubed, because if I know k, l and m then I know the fourth one, and it's at least N squared, because if I take k equals m and l equals n I automatically get a coincidence. So the additive energy is between N squared and N cubed, always, for any sequence of distinct integers. And the criterion is that if the additive energy is slightly less than the trivial upper bound, where slightly less here means N cubed divided by a power of log N, then almost surely alpha times a(n) has Poissonian pair correlation. So that's a positive result. There's also an interesting negative result. If you look at the sequence of primes, the additive energy can be computed; it doesn't fall under this bound, but this is not a necessary and sufficient condition. Aled Walker showed that almost surely the pair correlation is not Poissonian. So here's an interesting sequence where you can show that almost surely you do not have Poissonian pair correlation. So much for integer sequences. I'm not looking at the chat.
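The additive energy is straightforward to compute for small sequences via the standard identity E(A) = sum over s of r(s) squared, where r(s) counts the ordered representations s = a(k) + a(l). A sketch (the example sequences are my own choices):

```python
from collections import Counter

def additive_energy(a):
    """E(A) = #{(k, l, m, n) : a_k + a_l = a_m + a_n},
    computed as sum of r(s)^2 with r(s) = #{(k, l) : a_k + a_l = s}."""
    r = Counter(x + y for x in a for y in a)
    return sum(v * v for v in r.values())

N = 200
ap = list(range(1, N + 1))                  # arithmetic progression: energy ~ N^3
squares = [n * n for n in range(1, N + 1)]  # squares: much smaller energy
lacunary = [2 ** n for n in range(40)]      # powers of 2: near the N^2 floor

print(additive_energy(ap), additive_energy(squares), additive_energy(lacunary))
```

The arithmetic progression sits at the N cubed ceiling (hence no Poisson pair correlation), while a lacunary sequence like the powers of two sits essentially at the N squared floor.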
Now let's look at non-integer powers. Instead of n squared, I want to do n to the half, which is a particularly interesting sequence. As I mentioned, Fejér had proven that alpha times root n is uniformly distributed, back in the 1920s or 30s. In the late 90s, Michael Boshernitzan made some numerical experiments on this and discovered a picture like this for the level spacing distribution. That doesn't look like Poisson; it doesn't look like anything we've seen before. And Noam Elkies and Curt McMullen proved that the level spacing distribution of the fractional parts of root n does exist, and they computed what it is. It's a definite thing, but it's not Poisson and it's not random matrix theory; you don't see level repulsion. It involves measures that come up in homogeneous dynamics, and they used homogeneous dynamics to study it. The distribution has this funny property that it is actually constant in an interval near the origin, and then it decays, but it doesn't decay exponentially; the decay is algebraic. So it's not Poisson: the distribution can be determined, but it's not Poisson. So this is development number one. The second development is a very curious one. Daniel El-Baz, Jens Marklof and Ilya Vinogradov computed the pair correlation function for root n mod one. You have to remove the n which are perfect squares, otherwise you get multiplicities. Once you do that, you can use these methods of homogeneous dynamics to study the pair correlation function, and surprisingly, to me and to them, they showed that the pair correlation is Poisson. We don't know what to make of this; it is what it is. So let's perturb the problem: instead of looking at square root of n, let's look at cube root of two times square root of n, mod one. And here is an empirical plot of the level spacing distribution. And this looks Poisson.
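One can redo Boshernitzan's experiment in a few lines. The flatness of the Elkies-McMullen distribution near the origin shows up as two nearly equal small-gap bins, whereas an exponential (Poisson) density would make the first bin about 28 percent heavier; this crude diagnostic is my own, not from the talk:

```python
import math

N = 100000
# fractional parts of sqrt(n), skipping perfect squares (which give 0)
pts = sorted(math.sqrt(n) % 1.0 for n in range(1, N + 1)
             if math.isqrt(n) ** 2 != n)
M = len(pts)
gaps = [M * (pts[i + 1] - pts[i]) for i in range(M - 1)]

# Elkies-McMullen: the spacing density is constant near the origin, so the
# bins [0, 0.25) and [0.25, 0.5) should be nearly equal; for the Poisson
# density e^{-x} the first would exceed the second by roughly 28%.
b1 = sum(1 for g in gaps if g < 0.25)
b2 = sum(1 for g in gaps if 0.25 <= g < 0.5)
print(round(b1 / b2, 3))
```

The ratio lands close to one, visibly away from the Poisson prediction of about 1.28.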
If you compare it to what happens for one times root n, they are clearly different; except that one is a theorem, and this one is a conjecture. I can't prove this, but I think it's an interesting conjecture: take, say, cube root of two times root n, and numerically the level spacing distribution, the nearest neighbor gap distribution, is clearly Poisson. So I was very intrigued by this and tried for a while to say something about the pair correlation function of this sequence. And eventually Niclas Technau, at that time a postdoc in Tel Aviv (he is usually everywhere dense, but I think physically he is at Caltech right now), managed to show that the pair correlation function of alpha times root n is Poisson for almost all alpha. Without giving a single example; that will come later. So what do we know about this theory? This is all very, very recent. We started in the beginning of the 20th century; now let's move to the last year or two. Christoph Aistleitner, Daniel El-Baz and Marc Munsch understood how to do alpha times n to the theta, provided theta is a non-integer bigger than one, and they showed that almost surely the pair correlation here is Poisson. But they kept complaining that the method doesn't work for theta less than one. I'm quite happy that they failed, because I was really interested in theta equals a half. And we succeeded in doing this for theta between zero and one. So now we know that for all theta, alpha times n to the theta mod one almost surely has Poisson pair correlation, except when theta is one. The integer case is what I did with Peter Sarnak a quarter of a century ago, and now any non-integer theta is known. And in this case of alpha n to the theta there is also a deterministic result, which I think is very beautiful, by Christopher Lutsko and Niclas Technau.
And they prove that alpha times n to the theta is Poissonian not for almost all, but for all alpha, provided theta is less than one third, or even slightly better than that. But they can't get to one half; the method fails there. So this is the state of the art for alpha times n to the one half, which is where this section started. Let me end with some of the ingredients, just two slides; people will be happy if I finish ahead of time. At the end of the day this is number theory, even if until now that was hidden, so let me explain what it is that you need to do here. Let's start with the case when theta is two, or an integer. After doing some harmonic analysis, what we reduce to is counting solutions of a Diophantine inequality, which in the case when theta is an integer is actually an equality. Okay, so I look at the number of sextuples of integers, six integers, roughly all of the same size, say of size capital M, such that the size of this polynomial expression is less than one. If theta is two, these are all integers, so the inequality means that the difference is zero. So in the case theta equals two we are asked to count how many solutions this Diophantine equation has in variables which are all roughly of size capital M, and what the theory needs is that the number of solutions is slightly less than M to the fourth. So when theta is two, let's understand how many solutions the equation j1(x1^2 − y1^2) = j2(x2^2 − y2^2) has. What I do is I fix the right-hand side, that is, I fix j2, x2 and y2, so those are M cubed choices, and then I ask how many triples j1, x1, y1 there are for which j1 times the difference of these squares equals that definite number. And the answer is that there are very few such choices, because now you use that you have x squared here, so x1^2 − y1^2 factors as (x1 − y1)(x1 + y1). This is deep stuff.
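The theta = 2 counting argument can be checked numerically. The sketch below (my own brute-force illustration) tabulates the values j(x^2 − y^2) with the trivial x = y solutions removed; the number of sextuple solutions is then the sum of squared multiplicities, and it stays far below the M^4 barrier, in line with the M^(3+epsilon) bound from the divisor argument.

```python
from collections import Counter

def count_solutions(M):
    """Number of sextuples (j1, x1, y1, j2, x2, y2) in [1, M]^6 with
    x1 != y1, x2 != y2 and j1*(x1^2 - y1^2) == j2*(x2^2 - y2^2).
    The theory needs this count to be about M^(3 + eps), well below M^4."""
    values = Counter(j * (x * x - y * y)
                     for j in range(1, M + 1)
                     for x in range(1, M + 1)
                     for y in range(1, M + 1)
                     if x != y)
    # A value attained c times contributes c^2 sextuples.
    return sum(c * c for c in values.values())

for M in (4, 8, 16):
    print(M, count_solutions(M), M ** 4)
```

The divisor bound is visible here: each nonzero value j(x^2 − y^2) has very few representations, so the sum of squares stays close to the number of triples, which is of order M^3.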
And then you ask how many triples j1, x1, y1 there are so that this product equals a definite number of size at most M cubed. The answer is controlled by the divisor function, and numbers have very few divisors: at most M to the epsilon choices. There are M cubed choices of j2, x2 and y2, and therefore we find at most M to the three plus epsilon solutions, which is within what we need. So this is how you solve the problem, roughly speaking, when theta is an integer, as long as that integer is bigger than one; when theta is one it doesn't work. The last slide, thankfully, is what to do with non-integer theta, where this trick doesn't work. I still need to solve this Diophantine inequality, but now it's a genuine inequality; think of theta being one half. So I have six variables, hence M to the six options, and I need fewer than M to the fourth solutions. Just as a heuristic: j is of size M and x to the theta is M to the theta, so this polynomial is roughly of size M to the one plus theta. So you take M to the six points and put them in an interval of length M to the one plus theta; if every point were equally likely, you would expect M to the four minus theta solutions, and that would be enough for what you need. Except if theta is big this is actually wrong, because of the diagonal solutions: if you take j1 = j2, x1 = x2, y1 = y2, you clearly get zero for this polynomial, and there are M cubed such points. So you shouldn't take the heuristic too literally. What Aistleitner, El-Baz and Munsch did is to use decoupling, let's call it that. They reduce this system to a system in four variables, essentially this kind of system: we got rid of the j's, which is important, and you ask for the number of solutions of this Diophantine inequality. Robert and Sargos actually did this not that long ago: you can give an upper bound for the number of solutions of this Diophantine inequality.
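The four-variable inequality is also easy to explore by brute force. Here is a sketch (my own illustration; the small ranges and theta = 1/2 are choices for the demonstration), which counts the solutions and prints them next to the diagonal term M^2 and the random-heuristic scale M^(4 − theta):

```python
def count_inequality(M, theta):
    """Number of quadruples (x1, y1, x2, y2) in [1, M]^4 with
    |x1^theta - y1^theta + x2^theta - y2^theta| < 1."""
    p = [x ** theta for x in range(M + 1)]   # precompute the powers
    count = 0
    for x1 in range(1, M + 1):
        for y1 in range(1, M + 1):
            d1 = p[x1] - p[y1]
            for x2 in range(1, M + 1):
                for y2 in range(1, M + 1):
                    if abs(d1 + p[x2] - p[y2]) < 1.0:
                        count += 1
    return count

theta = 0.5
for M in (5, 10, 20):
    # diagonal solutions give M^2; the random heuristic suggests M^(4 - theta)
    print(M, count_inequality(M, theta), M ** 2, M ** (4 - theta))
```

The diagonal x1 = y1, x2 = y2 always contributes M^2 solutions, so the count can never drop below that; the interest is in how little it exceeds the two comparison terms.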
The diagonal solutions x1 = y1, x2 = y2 give you M squared solutions; that is this first term. And this other term is the random heuristic that I explained to you, so this upper bound makes sense. Now, decoupling plus the Robert–Sargos theorem was sufficient for theta bigger than one, but I wanted theta equals one half. For that you need some extra steps, and we managed to do these extra steps; I won't get into this. Aistleitner, El-Baz and Munsch needed to know something about the Riemann zeta function: that's what you use for the counting. You need a subconvexity bound, and any subconvexity bound will do, and we needed some other things. In any case, this is the state of the art now for this particular example. So let me conclude and leave you with some homework. I mentioned these problems: for alpha Diophantine, I want to show that the level spacing distribution of alpha times n squared is Poissonian. We know the pair correlation; we don't have the level spacing. And for alpha times root n, I explained that we know the pair correlation is almost surely Poissonian, and I want the level spacing distribution. Here's the experiment: it's clearly true, but I don't know how to prove it. So this is homework for people. Okay, thank you. I will stop here.
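For anyone who wants to try the homework numerically before trying to prove it, here is a sketch of the experiment for the first problem (alpha = sqrt(2) and N = 10000 are my own choices; the Poisson prediction for the rescaled nearest-neighbor gaps is the exponential law, so the proportion of gaps exceeding s should be about e^(−s)).

```python
import math

def rescaled_gaps(points):
    """Gaps between consecutive points on the circle R/Z, rescaled to mean 1."""
    pts = sorted(points)
    n = len(pts)
    gaps = [pts[i + 1] - pts[i] for i in range(n - 1)]
    gaps.append(pts[0] + 1.0 - pts[-1])   # wrap-around gap
    return [g * n for g in gaps]

alpha = math.sqrt(2)                       # a concrete Diophantine alpha
N = 10000
pts = [(alpha * n * n) % 1.0 for n in range(1, N + 1)]
gaps = rescaled_gaps(pts)

# Poisson prediction: P(gap > s) = exp(-s); at s = 1 that is about 0.368.
tail = sum(1 for g in gaps if g > 1.0) / len(gaps)
print(tail, math.exp(-1.0))
```

The printed empirical tail sits close to e^(−1), which is exactly the experimental evidence for the conjecture; turning this plot into a theorem is the homework.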