Thank you very much for the invitation to speak at this wonderful seminar, which I have very much enjoyed attending. This talk will be about the alternative hypothesis and point processes. As the first slide says, it reports on work in two joint papers with Brad Rodgers: "Higher correlations and the alternative hypothesis" (2020) and "Band-limited mimicry of point processes by point processes supported on a lattice" (2021). The results in the first paper were obtained independently by Terence Tao in a blog post; I'll say a little about that later on. The topics I would like to cover are: the history of the GUE hypothesis and the alternative hypothesis; point process models; and then, in the third part of the talk, a point process that behaves like the alternative hypothesis and cannot be discriminated from the GUE hypothesis on the known data. Finally, there is a trade-off involving band-limited functions, which I will describe briefly at the end of the talk. I prepared slides for more, but when I tried to fit them in I ran out of time.

So the GUE hypothesis, for the Gaussian Unitary Ensemble, concerns the distribution of the normalized zeta zeros up to height T, as T goes to infinity. The alternative hypothesis concerns an alternative distribution of the normalized zeta zeros up to height T, which we have been unable to rule out and which is believed to be false; the number-theory goal is to rule it out. This hypothesis is related to the possible existence of exceptional zeros of Dirichlet L-functions with real quadratic characters, which show up in estimating class numbers of quadratic fields, and a goal, ever since Siegel's paper in 1935, is to show there aren't any exceptional zeros.

So this talk is concerned with the spacings of the zeta zeros. The number N(T) of zeros of ζ(s) in the critical strip, with real part between zero and one and imaginary part between zero and T, has the well-known formula of Titchmarsh, N(T) = (T/2π) log(T/2π) − T/2π + O(log T), but we are only concerned here with the main term, which is (1/2π) T log T + O(T). It says the average spacing of the zeta zeros at around height T is 2π/log T. There are two ways to handle normalized spacings. You can take the ensemble of all the zeros up to height T and rescale all the spacings uniformly, multiplying by (log T)/2π; or else you can rescale the zeros separately, one at a time, setting γ̃_j = γ_j (log γ_j)/2π, using the current value log γ_j instead of log T, and then you do it once and for all. With that definition the normalized spacing is γ̃_{j+1} − γ̃_j, which achieves average spacing one. Why is one interested in this? If you take random Dirichlet series, even just a little bit to the right of the half-line, and look at the spacings of the points where the oscillating function vanishes, you get the same density of zeros, but they are spaced completely evenly, like a clock, and they show no fluctuations. But on the critical line apparently they do, at least on the Riemann hypothesis.
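To make the two normalization conventions concrete, here is a minimal sketch in Python; the list of zero ordinates is just illustrative input, and the factor log(γ_j)/2π implements the second, one-at-a-time rescaling.

```python
import numpy as np

def normalized_spacings(gammas):
    """Rescale zero ordinates gamma_j one at a time by log(gamma_j)/(2*pi),
    then return the consecutive differences tilde_gamma_{j+1} - tilde_gamma_j,
    which have average spacing approximately 1 at large heights."""
    gammas = np.asarray(gammas, dtype=float)
    tilde = gammas * np.log(gammas) / (2 * np.pi)   # tilde gamma_j
    return np.diff(tilde)

# Illustrative usage with the first few zeta zero ordinates:
# print(normalized_spacings([14.1347, 21.0220, 25.0109, 30.4249, 32.9351]))
```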
So Conrey and Iwaniec showed that if you can find a large enough fraction of zeta zeros having spacing a little bit less than half the average spacing, then you can effectivize Siegel's theorem: you can prove that for all sufficiently large q and every real Dirichlet character χ (mod q), L(1, χ) is bounded below by (log q) to the power −A, for some explicit constant A. By the Dirichlet class number formula, this gives that the class number of an imaginary quadratic field is bigger than the square root of q divided by a fixed power of log q, which would then allow you, for example, to prove the effective finiteness of the one-class-per-genus discriminants, the idoneal numbers, a long-standing open problem due to Euler.

Okay. Let me remind you of the history of what's known about the normalized spacings of zeros. The breakthrough result of Montgomery in 1973 shows that indeed there are fluctuations: he proved a positive proportion of the zeros have spacings less than 0.68 of the average spacing. That method has been refined, and the record is due to Carneiro, Chandee, Littmann, and Milinovich, who, on RH, got the number down to 0.60. There is an averaging method due to Montgomery and Odlyzko that puts in weight functions and allows you to show there are infinitely many zeros with smaller spacings, though without getting a positive proportion; they originally got down to 0.5179. The record there is 0.5154, on the Riemann hypothesis, and a preprint put up earlier this year analyzes the method and says the Montgomery–Odlyzko approach cannot in principle get below a smidgen above 0.5. So we do not get below a half.

The GUE hypothesis. It has evolved through many forms; this is the form most convenient here. You take the sinc function sin(πx)/(πx), which is a nice smooth function; it interpolates to the value one at x = 0, and its improper integral is equal to one. The hypothesis is: if you take any test function that is a Schwartz function on n-dimensional space and average it from T to 2T, sampling it against all n-tuples of distinct ordinates of the zeros in the range from T to 2T, then the limit as T goes to infinity converges to the integral of the test function against the sine-kernel determinant, the determinant of the n-by-n matrix with entries K(x_i − x_j), which is the n-point correlation function of a point process, the sine-kernel process.

I remind you of one of the many famous plots of Odlyzko, a histogram of the normalized consecutive zero spacings, maybe the one for zeros near the 10^20-th zero. The curve plotted through the points is the GUE prediction, giving great agreement, and a positive proportion of the normalized spacings are below 0.5, so that, by the Conrey–Iwaniec theorem, this would give an effective version, if we could prove it. So that's what the data says. The known results: Montgomery in 1973 for the pair correlation, a substantial calculation by Hejhal handling the triple correlation, and then the seminal work of Rudnick and Sarnak on the n-point correlations. There is a misprint here: the result is for fixed n and band-limited test functions in S(R^n), whose Fourier transform is required to be supported in the bounded domain given there.
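To pin down the objects on this slide in symbols (my notation, and glossing over the exact normalization of the averaging): the sinc kernel and the determinant built from it are

$$K(x)=\frac{\sin \pi x}{\pi x},\qquad K(0)=1,\qquad \int_{-\infty}^{\infty}K(x)\,dx=1,\qquad \rho_n(x_1,\dots,x_n)=\det\big(K(x_i-x_j)\big)_{i,j=1}^{n},$$

and the GUE hypothesis, stated loosely, is that the average of a Schwartz test function $f$ over distinct $n$-tuples of normalized zero ordinates in $[T,2T]$ converges, as $T\to\infty$, to $\int_{\mathbb{R}^n} f(x)\,\rho_n(x)\,dx$, with $f$ restricted as described next.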
The fact that the Fourier transform of the test function is supported in a bounded domain is equivalent to the function being band-limited; that is essentially the definition. And then we get the convergence I mentioned: you take the integrated test function and, as T goes to infinity, it converges to the correct limit for the sine kernel, so to the GUE distribution. What this means is that we do not see the GUE distribution directly; we see it only when we probe it with a test function, and we have a limited class of test functions with which we can probe it this way. The GUE prediction is verified only against these test functions.

The alternative hypothesis. Take the normalized zeta zeros; the alternative hypothesis says you get a very different distribution: for all sufficiently large j, the consecutive differences γ̃_{j+1} − γ̃_j look like half of an integer, with an error going to zero as j goes to infinity. This is a very strong form of it; it says almost all of the pairs do this. Then, if you take zeros that are far apart, their difference will still likely be close to a half integer, so it is saying that the differences are mostly near half integers. This is a straw man; it is discussed in Conrey's Riemann hypothesis survey in the Notices of the AMS in 2003, though I am not sure that is the first place it appeared in print. In the last ten years there have been a number of variations studying the consequences of this hypothesis in detail. We assume RH, and we allow the value zero, which allows multiple zeros, although we expect simple zeros.

I'd like to mention the conventions on Fourier transforms used in this talk. The Fourier transform here is the integral against exp(−2πi x·ξ), with exp(+2πi x·ξ) for the inverse Fourier transform. The important thing is that with this definition the sinc function, as I gave it, is the Fourier transform of the characteristic function of the interval [−1/2, 1/2]: it is a band-limited function with frequencies in the band [−1/2, 1/2]. Signal-analysis people use the Fourier transform without the 2π in the exponent, so their definitions of band-limited, and of how wide the bands are, may differ from these by a factor of 2π.

More statistics: the GUE two-point correlation function, if you allow coincidences on the diagonal, is the delta function at zero plus 1 − (sin πx/πx)^2. The form factor introduced by the physicists is essentially the Fourier transform of this. The transform of (sin πx/πx)^2 is the convolution of the boxcar with itself, so it is a triangular function supported on [−1, 1]. So I can put up the one plot where you can tell the GUE apart from the alternative hypothesis: the two-point form factors. The GUE form factor has a triangular dip near zero, a spike at the origin which is the delta function, and it is flat outside [−1, 1]. The form factor for the alternative hypothesis is the same between −1 and 1, which is exactly where you can apply those test functions, and then it repeats periodically with period two. It is easier to draw the form factor, because if you drew the histogram instead, the spacing histogram for the alternative hypothesis would just have spikes at the half integers.
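For reference, the two densities being compared, in the conventions above, are

$$R_2^{\mathrm{GUE}}(x)=\delta(x)+1-\Big(\frac{\sin\pi x}{\pi x}\Big)^{2},\qquad \widehat{R_2^{\mathrm{GUE}}}(\xi)=\delta(\xi)+\min(|\xi|,1),$$

while the alternative-hypothesis form factor, as described on the slide, agrees with $\delta(\xi)+\min(|\xi|,1)$ on $[-1,1]$ and continues periodically with period $2$ outside that interval; the corresponding spacing histogram would be a sum of spikes at the half integers.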
Now I would like to flash back to the past and discuss the prehistory of the alternative hypothesis. One source is a talk of Heath-Brown given at the AIM conference on the Riemann hypothesis in 1996, which a few of us here attended. The talk is available in a video recording, which you can find posted on the web. Heath-Brown never wrote the paper, so what we have is the talk; but what the talk does, looking at it, is find something like the alternative hypothesis produced by an exceptional zero. The starting point in that talk is to assume you have an L-value (it should read L(1, χ); there is a misprint on the slide) less than q^(−1/4 − δ). This violates Siegel's ineffective lower bound, which shows there are only finitely many such q. But we are going to see the consequences of assuming such an L-value, which would force a real zero very close to s = 1. The idea is to study things and eventually go back and get a contradiction; still, this analysis contains some genuinely true things.

He takes the zeros of the Dedekind zeta function of the imaginary quadratic field, which is ζ(s) L(s, χ), and he takes the function F(s), a normalized version with the gamma factors put in, so it has the functional equation F(s) = F(1 − s). He then approximates it with a function that replaces L(s, χ) by ζ(2s) and fixes all the Euler factors at primes up to q where it differs from L(s, χ); there is also a difference at the primes dividing the conductor q, a finite set. So you have to switch the signs of some of the Euler factors, those where χ(p) = +1 for p less than q. Then H(s) + H(1 − s) is supposed to approximate F(s), having been symmetrized with the functional equation; there may be a factor of one-half somewhere, I am not sure about that. Anyway, the Siegel zero would imply most small primes have χ(p) = −1, so this last correction product changes very little.

The first theorem he announces is that, if you had such a thing, then the mean value of F(s) − (H(s) + H(1 − s)) is very small, with this factor q^(−δ), and he said it applies from q^(A_0), for some large A_0, up to a very large height; but we are going to be interested in what happens at the bottom end, near q^(A_0). He also stated that H(s) + H(1 − s) satisfies the Riemann hypothesis, or at least he can prove what is really needed, namely for the range of heights where he is going to work. That result is interesting, and I'd like to see more on it.

The second result he announces concerns N(T), the count of zeros of ζ(s) L(s, χ), which grows like (1/π) T log T. He says this g-function, H(s) + H(1 − s), satisfies the Riemann hypothesis, so its zeros are of the form 1/2 + i t_n, and presumably his proof shows they are simple. For each zero of this function he finds a unique simple zero of ζ(s) L(s, χ) very close to it. So he is getting essentially the full set of zeros, because the number of such zeros grows like (1/π) T log T, which accounts for almost all of them. And then his assertion is: if you are down at the bottom end of the range, all the differences t_m − t_n, suitably normalized, differ from half integers only by a small error,
which is just what the alternative hypothesis asserts. In this equation I have divided his equation by 2π, the normalization coming from the density of the zeros of ζ(s) L(s, χ), so you can see the half integer on the right. He also states at the end of the talk something like: the Fourier transform of the pair correlation is close to that of the alternative hypothesis form factor, namely it exhibits the period-two periodicity beyond [−1, 1], over some finite range. So the ingredients of the alternative hypothesis are already in that talk. Now, what has happened since, of course, is that he assumed a very strong hypothesis which we know is wrong. But if you replace the exponent one-quarter with a much weaker hypothesis, saying that the Siegel zero is within a distance involving a power of log log q, which we have not ruled out, then some weaker version of the same statements would still follow, and so you would still see something like the alternative hypothesis.

Okay, on to point processes. The GUE and the CUE have been studied in great detail with the finite matrix models, but we look at the GUE limit from the viewpoint of infinite point processes rather than finite random matrix theory. On the random-matrix side the statistics involve taking a limit where the size of the matrices goes to infinity; in the point-process viewpoint we go directly to n equals infinity. We will look at infinite point processes on the line with average spacing one. There is a lot of machinery needed to define point processes rigorously, but informally we are throwing down a random set of possibly infinitely many points onto a space X; the actual measure space is a configuration space of the allowed configurations, equipped with a topology, Borel sets, and a measure. Here I will consider only the real line, or else a lattice of points on the real line, and we are throwing points down onto these spaces. The points may have finite multiplicity, and when you sample you get a sample configuration, which is a set of points. A process is simple if in all of its configurations every point has multiplicity one. These are sometimes called fermionic point processes.

The information describing a point process, in analogy with the moments of a measure, is the family of n-point correlation functions, or intensity measures. The correlation functions, when they are given by continuous densities ρ_n(x_1, …, x_n) dx_1 ⋯ dx_n, encode the expected weighted counts of n-point configurations when you integrate over some finite region, a compact region for example; but we are going to integrate them against test functions. If the test function is one inside a region and zero outside, that corresponds to getting the correlations in that finite region, and the correlation functions encode the expected values of these test-function statistics. We can use either compactly supported smooth test functions or, under some circumstances, Schwartz functions. If you have a discrete process on a lattice, you replace the integral with a sum, and you will notice the sum has a form resembling what goes into the Poisson summation formula, which is certainly what will be used. Under some restrictions, the correlation functions uniquely determine the point process.
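To fix ideas, the defining property being described is, schematically,

$$\mathbb{E}\Bigg[\sum_{\substack{x_1,\dots,x_n \in X\\ \text{distinct}}} f(x_1,\dots,x_n)\Bigg]=\int_{\mathbb{R}^n} f(x_1,\dots,x_n)\,\rho_n(x_1,\dots,x_n)\,dx_1\cdots dx_n,$$

for suitable test functions $f$, where the sum runs over $n$-tuples of distinct points of the random configuration $X$; for a process on the lattice $a\mathbb{Z}$ the integral on the right is replaced by a sum over $(a\mathbb{Z})^n$, with whatever normalizing power of $a$ one builds into the discrete correlation functions.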
That, let's say, is the standard state of affairs; but recall that in the moment problem for measures it is known there are bad examples, measures that are not uniquely determined by their moments. If the moments don't grow too fast, this doesn't happen. In the papers we impose assumptions that put you in the case where the correlation functions uniquely determine the point process, and all the point processes we are going to discuss satisfy those conditions, so the correlation functions determine everything: they uniquely determine the process. The conditions are uniform local moment bounds, with some extra exponential bounds in the uniformity. It is known that there exist examples where the correlation functions do not uniquely determine the point process. A really big issue with point processes is that if you are given a putative set of correlation measures, there are a lot of necessary conditions they have to satisfy, but does there actually exist a point process having these correlation functions? The answer is sometimes yes and sometimes no; even when all the reasonable conditions are satisfied, there can be extra reasons that stop it from existing.

The next two slides are about how one could think of producing a point process from the zeta zeros. You can consider the normalized zeta zeros at a particular height T. The finite ensemble of these normalized zeros gives you a single set of points, but we now allow random translations of these points: we shift them to the origin and then allow random translations, say up to some fixed power of T, and then snip things off at an interval of that length. Then you get a configuration space of sample point sets on a finite interval. You can then run a sequence of heights T_j off to infinity; you get bigger and bigger intervals, and you can hope that these sample spaces, which behave like point processes on finite intervals, converge in some limiting process to a translation-invariant point process on the whole real line. Or you could ask for limit points of these sets of zeros at heights T_j that converge to a limit translation-invariant point process. To carry out the mathematics you would form empirical correlation functions for the finite ensembles and then hope that these empirical correlation functions all converge in the limit to the correlation functions of a point process; that is the kind of mathematics you would do (a small illustrative sketch follows this paragraph).

Anyway, a very strong form of the GUE hypothesis from this viewpoint might assert that all sequences of normalized zeta zeros converge in the limit to the sine process (this limiting GUE process is called the sine process in the point-process literature), in the sense that all the correlation measures converge. We are not doing any of that in this talk. However, things like this have been studied in the literature: Chhaibi, Najnudel, and Nikeghbali, for example, take random characteristic polynomials from the CUE distribution, pass to a scaling limit as the matrix size goes to infinity, and produce a model of random functions whose zeros then give a point process; they succeed in doing this. So our question could be: could there be such a limit of normalized zeta zeros? If there were, for example, infinitely many Siegel zeros, and you sampled at heights T_j corresponding to the behavior in the talk of Heath-Brown, then maybe you could extract, in some scaling limit, a point process realizing the alternative hypothesis.
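Going back to the phrase "form empirical correlation functions": here is a minimal sketch of an empirical pair statistic for a finite set of normalized points, probed against a function of the differences; the function name and the choice of probe are mine, not from the papers.

```python
import numpy as np

def empirical_pair_statistic(points, test_fn):
    """Average test_fn over differences of distinct ordered pairs,
    normalized by the number of points: a finite-sample analogue of
    integrating a test function against the pair correlation."""
    points = np.asarray(points, dtype=float)
    n = len(points)
    diffs = points[:, None] - points[None, :]
    mask = ~np.eye(n, dtype=bool)          # exclude the diagonal i == j
    return test_fn(diffs[mask]).sum() / n

# Hypothetical usage: probe with the band-limited function
# f(x) = (sin(pi x)/(pi x))^2, whose Fourier transform lives in [-1, 1]:
# value = empirical_pair_statistic(normalized_zeros, lambda x: np.sinc(x) ** 2)
```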
Okay, well, if you are going to do that, then you would like to know that there actually exists a point process that could fill the bill. So you have to solve the existence problem for point processes, and that is what this talk is about: to exhibit a specific point process that does have the alternative-hypothesis-type statistics, and which agrees with the Rudnick–Sarnak formulas on all the n-point correlations that are in the Rudnick–Sarnak theorem, so that you are unable to tell it apart from the GUE prediction.

[Audience remark from Peter about my saying "sadly."] Yes, I did say "sadly" that you can't distinguish them; I'm just lamenting the fact that you can't rule the alternative hypothesis out. Thanks, Peter; I was expecting you to say that I had made a huge mistake.

Okay. So let's go back to translation invariance. A point process on R is said to be translation invariant if, when you translate the process, you get exactly the same distribution as before, the identical distribution. In particular, that forces all of the n-point correlation functions to be translation invariant as well, and then we say the process is translation invariant in the correlation sense. In our case the correlation functions determine everything, so the two notions of translation invariance are equivalent, though not in general. And when we took those limits, we were shifting things around precisely to get translation-invariant point processes in the limit. Let me just say that the Poisson processes are all translation invariant, and the sine process, which we will come to, is translation invariant.

The sine process is a determinantal point process; these are point processes that have level repulsion near the origin. A process is determinantal if there exists a kernel function K from R × R to C such that, restricted to compact regions, it is in L², and such that the correlation functions are given by the determinant det(K(x_i, x_j)) for i, j from 1 to n. If the process is translation invariant, that kernel has the form K(x_i, x_j) = K(x_i − x_j). There is a complete characterization of which kernel functions K give a realizable determinantal process, first in Macchi and then with detailed proofs in Soshnikov: the conditions are that on a finite region the kernel gives a self-adjoint, trace-class operator with all eigenvalues between zero and one, or something like that. It is known that determinantal point processes are uniquely determined by their correlation functions; in particular, there is only one process with a given set of such correlation functions, and if you have a set of correlation functions you can decide whether or not it could come from a determinantal process. If it does, you know the process is determinantal for sure: there is no non-determinantal process that matches it.

So the first existence theorem is that there exists a determinantal point process, the sine process, whose kernel is the translated sinc function; this process is a simple point process, and it is translation invariant under R. There are several places in the literature where you can find proofs; they go by verifying Macchi's conditions.
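As a compact summary of the determinantal formalism (my phrasing of the standard conditions): the correlation functions are

$$\rho_n(x_1,\dots,x_n)=\det\big(K(x_i,x_j)\big)_{i,j=1}^{n},$$

and, for the process to exist, the integral operator with kernel $K$ restricted to any compact set should be self-adjoint and trace class with spectrum contained in $[0,1]$ (the Macchi–Soshnikov conditions, roughly stated). In the translation-invariant case $K(x,y)=K(x-y)$, and for the sine process $K(x-y)=\sin\pi(x-y)/\pi(x-y)$.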
Okay, so what we are going to need is a discrete version of the sine process: a version of the sine process that sits on the lattice aZ. Several papers have discussed this process and used it, but we could not find a proof of existence, so we included one, checking Macchi's theorem, in the paper. The proposition is that for each parameter a between zero and one there exists a process, the a-discrete sine process, whose configurations obey the expected-value formula, except that on the right-hand side you use a discretized form of the sine kernel on the lattice (aZ)^n for the n-point correlation function. This process is a simple point process, and it is aZ-translation invariant. But it does not exist for a bigger than one, so there is a genuine constraint here.

Okay. So now I want to discuss the construction of an alternative hypothesis point process. We start with the a-discrete sine process and show that its correlation functions, summed against a certain range of band-limited test functions, agree exactly with those of the sine-kernel process. This applies only for a between zero and one-half, even though the process itself is defined all the way up to a equal to one. For each n and any Schwartz function in this class whose Fourier transform is supported in the box (−1/(2a), 1/(2a))^n, and that box always contains (−1, 1)^n because a is at most one-half, the claim is that the left-hand side, the discretized sum of the expected value, equals the right-hand side, which is now the continuous integral of that function over all of R^n. So this is the key lemma.

There is a proof of this lemma on this slide, and of course no one can follow a proof that is flashed by quickly. To prove this equivalence, we want to verify, for all test functions with Fourier transform supported in that box, that the equation (*) holds: the discretized sum on the left-hand side, over the lattice, exactly equals the continuous integral over R^n. The idea is to apply Poisson summation to the left-hand side of the equation, which turns it into a sum of Fourier transform values over the dual lattice; the original lattice has spacing at most one-half, so the dual lattice has spacing at least two. The key point in the proof is to check that the sine determinant kernel has all of its Fourier frequencies in [−1, 1]^n; that is a detailed calculation. The original sinc kernel has its frequencies in [−1/2, 1/2], which I assume you know is true. Then the function f(x) det(K(x_i − x_j)) has Fourier transform vanishing outside the closed box [−(1 + 1/(2a)), 1 + 1/(2a)]^n, and since a is at most one-half that box is contained in [−1/a, 1/a]^n. But since we are dealing with Schwartz functions, the Fourier transform is continuous, so it must also vanish on the boundary of the box, and we may work with the open box (−(1 + 1/(2a)), 1 + 1/(2a))^n. With that Fourier support, all the Fourier coefficients on the dual lattice vanish except the zero Fourier coefficient, and the zero Fourier coefficient is exactly the integral on the right-hand side.
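A compressed version of that argument, with my own normalization of the lattice sum: write $g(x)=f(x)\,\det\big(K(x_i-x_j)\big)_{i,j=1}^{n}$ for the integrand. Poisson summation over the lattice $(a\mathbb{Z})^n$, in the Fourier conventions above, reads

$$a^{n}\sum_{x\in(a\mathbb{Z})^{n}} g(x)=\sum_{\ell\in(\tfrac1a\mathbb{Z})^{n}} \hat g(\ell)=\hat g(0)=\int_{\mathbb{R}^{n}} g(x)\,dx,$$

where the middle equality holds because $\operatorname{supp}\hat g\subset\big[-(1+\tfrac{1}{2a}),\,1+\tfrac{1}{2a}\big]^{n}\subset\big[-\tfrac1a,\tfrac1a\big]^{n}$ when $a\le\tfrac12$, and $\hat g$ is continuous and vanishes on the boundary, so every nonzero point of the dual lattice contributes nothing.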
So that is the proof. Okay, now we have this discrete a-sine process that perfectly matches those correlation functions, but it is supported on the lattice, so it does not have the translation invariance property under R: it is invariant under the lattice, but not under all of R. To fix the process, we make a new point process: we take the point configurations and shift them by a random number drawn uniformly from a fundamental domain of the lattice, which is [0, a). That does nothing to the correlation statistics we are interested in, because they only involve differences of points in the process, and the differences of points are completely unchanged; so all of the correlation statistics from n = 2 on up are unaffected, and they match the sine kernel, which is translation invariant.

The previous construction allowed us to do this for a from zero to one-half, so we choose the extreme point a = 1/2, and we call the process obtained this way the alternative hypothesis point process. Now its correlation functions are supported on R; they differ from those of the 1/2-discrete sine process because of that averaging, and the main result says that this process, when integrated against all the band-limited test functions in the Rudnick–Sarnak result, together with additional ones when n is greater than or equal to three, gives the matching values. So this is the statement of the result: take the 1/2-discrete sine process and a random variable drawn uniformly from [0, 1/2), independent of the process; then the new point process you get is simple, translation invariant, and satisfies the matching condition for all band-limited test functions whose Fourier transforms are supported in the region where ξ_1 + ⋯ + ξ_n = 0 and |ξ_1| + ⋯ + |ξ_n| is at most two. Moreover, any pair of points in any realized configuration is separated by a positive half-integer distance: one-half, one, three-halves, and so on. So this is a completely explicit thing, and it does exist (the construction is summarized in the display following this paragraph).

Now, I'd like to say that Terence Tao, in a 2019 blog post, while we were writing this up, constructed an alternative hypothesis distribution for the CUE ensemble at a fixed finite N. You have N eigenvalues on the unit circle, so the average spacing is 1/N, and he showed you can find a distribution on matrices in which all spacings between eigenvalues are integer multiples of half the mean spacing 1/N, and this new distribution agrees in all of its low moments exactly with the moments of the CUE distribution, with the allowed set of moments growing as N gets large. He also observed at the end of the post, regarding the GUE, that there is an alternative construction which essentially matches what we described above: discretize on a lattice and then integrate over a fundamental domain of the lattice to smooth out. I want to say that the finite-N construction and its variants are a very nice thing, because there are now finite versions of this, and the way the moment statistics are handled is very elegant.
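Before moving on, here is the construction in summary form, with notation of my choosing: let $Y$ be the $a=\tfrac12$ discrete sine process on $\tfrac12\mathbb{Z}$ and let $U$ be uniform on $[0,\tfrac12)$, independent of $Y$; the alternative hypothesis process is the random shift

$$X_{\mathrm{AH}}=Y+U=\{y+U: y\in Y\},$$

whose $n$-point correlation sums against the band-limited test functions above agree with those of the sine process, while every gap between its points lies in $\{\tfrac12,1,\tfrac32,\dots\}$.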
Another remark I want to make: when you look at the higher correlation statistics with this domain of test functions, they are not enough to uniquely reconstruct the higher-order correlation functions. So anything that behaves like the alternative hypothesis must have the same two-point correlation function as on the previous slide, but for the higher correlations the whole cube is not covered, and there are some regions where we do not know the correlations; so there are potentially other alternative-hypothesis-like processes that do something different there and could still be consistent with the Rudnick–Sarnak test function data. The one we constructed is very specific: its correlation functions are completely determined. For example, for this distribution, this alternative hypothesis process, you could in principle compute all the statistics, the probabilities of configurations. If you want to calculate, given a point, the distribution of the gap to the next point: the gap is one-half with probability 1/2 − 2/π², and you get a specific computable distribution; these are the first few values for gaps of one-half, one, three-halves, or two between consecutive points.

Okay, that's the end of the alternative hypothesis part. Now I would like to turn to the underlying trade-off, which is the subject of the second, more probabilistic paper: something we call band-limited mimicry of point processes. We found a point process on R, the sine-kernel process, and another point process, the discrete sine-kernel process supported on a lattice aZ, with the property that their n-point correlation functions cannot be distinguished using band-limited test functions of a restricted bandwidth. In this situation we say that the point process on the lattice is mimicking the point process on R at bandwidth B. Then you can ask: what are the allowable trade-offs between the parameters a and B, for a fixed process, for which this mimicry phenomenon can occur, where the discretized process matches exactly? By the way, we do not know in general whether this can be done at all; we looked at two cases in detail, the Poisson point process of any intensity and the sine process, where you can work out what happens, and the answers differ in the two cases.

Okay, so the most basic point process is the Poisson point process, which is specified by its n-point correlations being the constants λ^n, where λ is the intensity of the process, basically the expected number of points in a unit interval. If you want to compare with the GUE you take λ = 1, so you get an average of one point per unit interval. The distribution of the number of points in an interval is the Poisson distribution with mean λ times the length of the interval. So these processes do not have level repulsion. One would say, though I didn't put it on the slide because I don't know quite how to say it, that things are as independent as possible for the Poisson process.
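As a concrete illustration of the Poisson point process just described (intensity λ, count in an interval Poisson with mean λ times its length), here is a minimal simulation sketch; the function name and the seed are mine.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_poisson_process(lam, length):
    """Homogeneous Poisson process of intensity lam on [0, length]:
    the number of points is Poisson(lam * length); given that count,
    the points are independent and uniform on the interval."""
    n = rng.poisson(lam * length)
    return np.sort(rng.uniform(0.0, length, size=n))

# With lam = 1 the expected number of points per unit length is 1,
# matching the GUE normalization, but with no level repulsion.
pts = sample_poisson_process(1.0, 1000.0)
print(len(pts) / 1000.0)   # close to 1
```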
Okay, well, you can do the same thing on a lattice: you can make a discrete Poisson process of intensity λ, which is the point process in which the numbers of points at the individual sites are independent, identically distributed Poisson random variables with mean aλ. These processes exist for all a and all λ, and they are uniquely determined by their correlation functions for all a and λ. The thing is, they are never simple point processes: you are putting balls into boxes when you lay down this process, and there is always a chance a box contains more than one ball.

Okay. I thought I would only have time for this; I had meant to include many more slides on these things, but I removed them from the talk, so I will just state the result. If you are given a > 0 and a bandwidth B, then the Poisson process can be mimicked on the lattice aZ for the band [−B, B] for any value of B less than 1/a, for any a between zero and infinity; and it cannot be mimicked on aZ whenever B is bigger than 1/a; and the mimicking process is exactly a discrete Poisson process. The paper proves a general theorem which says that for any translation-invariant process at all, you can never mimic at a bandwidth above 1/a; the Poisson process saturates that bound, and that proves the upper boundary in the picture. The picture has the lattice spacing a on the x-axis and the bandwidth B on the y-axis; the red region, for those of you who are not color blind, is the region you cannot achieve, and the green region is the region where you can, to the left. I say this because I am color blind, red-green color blind.

Okay, and now I come to the interesting case of the sine process. There again, as for any process, there is a band-limited mimicry trade-off, and the answer is different. The sine process can be mimicked on aZ for the band [−B, B] whenever B is less than or equal to (1 − a)/a, and a can never go above one in this result. The sine process cannot be mimicked when B is greater than (1 − a)/a, which applies for a between zero and one-half; and it also cannot be mimicked when the bandwidth is bigger than 1/(2a), which applies when a is bigger than one-half. In case you could not follow what I just said, here is the picture of what we know about this process at the moment: there is a red region where we know you cannot mimic, a green region where we know you can, and a white region where we do not know what happens. The actual process we used in the construction of the alternative hypothesis sits right at the triple point in this picture; it is an extremal thing. So what we can observe from this is that the band-limited mimicry trade-off differs between processes, and it is not completely resolved for the sine process. The alternative hypothesis process presented earlier was constructed using the process at the triple point in that mimicry plot.
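Collecting the thresholds just stated, with $a$ the lattice spacing and $B$ the bandwidth, the picture, as far as it is known, is:

$$\text{Poisson: mimicry on } a\mathbb{Z}\ \text{possible for } B<\tfrac1a,\ \text{impossible for } B>\tfrac1a\quad(\text{any }a>0);$$

$$\text{sine: possible for } B\le\tfrac{1-a}{a}\ (0<a<1);\ \text{impossible for } B>\tfrac{1-a}{a}\ (0<a\le\tfrac12)\ \text{and for } B>\tfrac{1}{2a}\ (a>\tfrac12),$$

with a region in between where the answer is open. If I have recorded the thresholds correctly, the curves meet at $(a,B)=(\tfrac12,1)$, the triple point where the alternative hypothesis construction lives.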
I want to conclude with some open questions about this mimicry phenomenon. First of all, is it generic or is it rare? Are there processes that you simply cannot mimic at all when you put them on a lattice, where you just cannot get agreement? I don't know. I also don't know whether this phenomenon is common or rare; maybe it only occurs in frameworks where the process is extremely nice, for example for the sine-beta processes, where beta varies over the random matrix parameter and beta equal to two is the usual sine process. We don't know. So, to conclude: the alternative hypothesis point process shows that the alternative hypothesis cannot be ruled out using the current results on band-limited test functions. We have the open problem of determining the exact trade-off for the sine-kernel process, for when you can or cannot do band-limited mimicry. And I would say one main point is that this AH point process does exist; whatever the status of the alternative hypothesis, the process may show up in other mathematical situations, as the a-discrete sine process certainly has, and therefore it may prove interesting in the future, even in number theory. Thank you.