Thank you very much for the invitation. It's really great to be here, and thank you all for joining. I'm going to talk about the three-gap theorem and higher-dimensional generalizations. It is, of course, an old theorem from the 1950s, and it still inspires a lot of new work; it has many proofs. I hope I can show you today some new aspects that we have recently discovered with Alan Haynes in Houston, built, as you will see, on ideas developed with Andreas Strömbergsson in Uppsala.

Now, to put this into context, let's start with what I think is one of the most beautiful theorems in number theory, in mathematics in general: the uniform distribution mod 1 of fractional parts of polynomials, proved by Weyl in 1916. You will all know this result, but let me just state it here. I'm focusing here on the simple monomials n^d alpha mod 1. If alpha is irrational, this sequence is uniformly distributed, and what that statement means is just written down here: for any continuous function on R/Z, the average of the function over the values n^d alpha mod 1 converges to the integral against Lebesgue measure. Now, that was a real breakthrough paper; that's something Hardy and Littlewood couldn't do. If you like, it's really the birth of harmonic analysis, because what Weyl understood is that in order to prove such a statement, you just need to prove it for the fundamental harmonics, which are the exponentials. That paper from 1916 is, I think, still one of my all-time favorites. In the case of linear polynomials, the proof came earlier and can be done by elementary techniques. I've made a little illustration of what this means: we count points in bins, and uniform distribution just means that the proportion of values in each of the bins is asymptotically the same.

Now, what I, and many other people, and I see many in the audience, am really interested in is, once you have a sequence that is uniformly distributed mod 1, to ask about higher-order correlations, to really test the pseudo-randomness of the sequence. One of those statistical tests is to look at the gap distribution. You order your sequence on the unit interval mod 1, and it partitions the interval into N subintervals; these are the gaps between the elements of your sequence. Now, we have capital N points, so the average gap size is 1/N, and we rescale the gaps by multiplying them by N, so that they become quantities that do not go to 0. Then we simply ask for the distribution of the gaps, which is the probability measure P_N defined here in this way. Another great statistic is the two-point correlation function, which is a little easier to handle analytically because you don't need to understand how your sequence is ordered: it is simply the density of all spacings between the elements of your sequence mod 1, and I've written this down here. Note again that we blow everything up by the number of elements we have, normalize by 1/N, and ask whether there is a limit. Now, if the points of our sequence were independent uniformly distributed random variables, then classic results from probability theory tell us that the gap distribution converges to a limit, and that limit is given by the exponential distribution. The pair correlation density, on the other hand, converges to a limit which is just uniform; so it's not a probability density, but a density that is constant.
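This baseline is easy to simulate. Here is a minimal sketch, not from the talk's slides, checking the exponential gap law for independent uniform points; numpy is assumed, and all parameter choices are mine:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000
x = np.sort(rng.random(N))             # N independent uniform points on [0, 1)
gaps = np.diff(x, append=x[0] + 1.0)   # wrap-around gap closes the circle
scaled = N * gaps                      # rescale so that the mean gap is 1

# Empirical tail versus the exponential prediction P(gap > s) = exp(-s):
for s in (0.5, 1.0, 2.0):
    print(s, (scaled > s).mean(), np.exp(-s))
```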
And in this case, if we see the same limits for a deterministic sequence, we say that the gap statistics are Poissonian, or that the two-point statistics are Poissonian. So those are, if you like, two tests of the randomness of a sequence that is uniformly distributed mod 1. And of course the exciting thing is that you don't always see the answer of independent random variables. One example arises in random matrix theory, where the gap distribution is given by a more complicated function, the so-called Gaudin distribution. You take the eigenvalues of a unitary N-by-N matrix; they lie on the unit circle; you cut the unit circle and map it to the interval [0, 1], and you can look at the gaps in the same way. Now, if the matrix size goes to infinity, one can show that the gap distribution converges to the Gaudin distribution, which looks approximately like this. It is certainly not an exponential distribution; in particular, small spacings are less likely, whereas in the exponential distribution small spacings are actually the most likely ones. Similarly, the two-point correlation function R_N also converges to a limit, which has a very nice, simple expression and is in fact the two-point correlation function of a certain determinantal point process. So you get different answers. And of course one of the big, big open problems in number theory is to prove Montgomery's old conjecture that the two-point correlation function of the Riemann zeros does converge to the random matrix answer, and furthermore that their gap distribution converges to the Gaudin distribution. That's Odlyzko's data that you see here, spectacular numerics from the 80s. There have been some really fantastic advances on this question, and if you want to read more, have a look at the beautiful paper by Rudnick and Sarnak in the Duke Mathematical Journal from the 90s, where they prove that all n-point correlation functions of the Riemann zeros are compatible with random matrix theory, though for a very restrictive class of test functions; if one could extend the class of test functions, one would be able to prove the conjecture.

Now, that's not really what I wanted to talk about; I just wanted to give you a little bit of the background that motivates me, and why on earth you would still think about things like the three-gap problem. So let me come back to polynomials mod 1 and see what we can say about their randomness. There's a beautiful paper, which I actually just gave to my new PhD student to read, about the pair correlation function of n^d alpha mod 1. Rudnick and Sarnak proved that for almost every alpha it indeed converges to the uniform density, so it is Poissonian. We know much less about the gap distribution. We have some results, due to Rudnick, Sarnak, and Zaharescu, along subsequences for well approximable alpha. Unfortunately, in the case of well approximable alpha we don't expect convergence along all subsequences, because one also finds subsequences for which the result fails. We do expect, though, that the gap distribution and the two-point correlation function converge to the Poisson answer for badly approximable alpha; unfortunately, we have no results in this direction. So we only have results for almost every alpha, but not for a concrete example such as alpha equal to the square root of 2. And here on the right-hand side you see a little numerics.
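That kind of numerics is easy to reproduce. Here is a rough sketch of the experiment, with alpha = sqrt(2); the choices of alpha and N are mine, and double precision limits how large N can be taken, since n^2 * alpha grows quickly:

```python
import numpy as np

alpha, N = np.sqrt(2), 10_000           # moderate N: n**2 * alpha strains float64
x = np.sort((alpha * np.arange(1, N + 1) ** 2) % 1.0)
gaps = np.diff(x, append=x[0] + 1.0)    # wrap-around gap closes the circle
scaled = N * gaps                       # rescale so that the mean gap is 1

# Empirical tail versus the conjectured Poisson answer exp(-s):
for s in (0.5, 1.0, 2.0):
    print(s, (scaled > s).mean(), np.exp(-s))
```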
A beautiful thing about this subject is that anyone, even I, can go on the computer and calculate these gap distributions. And you see they look exponential, absolutely close, yet no one knows how to prove this for this example. Right, so let's start with something simpler, and that is linear polynomials. The results of Rudnick and Sarnak work for polynomials of degree 2 and higher; so what about linear polynomials? Now, you use the same program that I've just described, you run it for your favorite alpha, here alpha is the square root of 2, and we just plot the gap distribution. What you see here is certainly not an exponential distribution; in fact, you only see three gap sizes appearing. That, of course, is the famous three-gap theorem. So let's explore a little bit what the three-gap theorem is. I know many of you have seen this, and you've seen the many proofs of it. But what I want to tell you is a story about how you can prove the three-gap theorem in a very simple geometric way, in a sense in one picture. And once you have this one picture, you have a natural generalization of the three-gap theorem to higher dimensions, and we'll discuss several of them in a minute. But let me first try to explain to you how you can see the proof of the three-gap theorem in just one image.

First of all, what is the three-gap theorem? I said it already: you take an alpha, you take the first N values of n alpha mod 1, and you look at the gaps. What you'll see is that there are at most three distinct gap sizes for every fixed capital N, and the bound three is independent of N; you will never see more than three gap sizes. The gaps themselves will be different for different N, and you see this in the numerics, where I've taken capital N to be 50,000 on the left-hand side and 200,000 on the right-hand side; you clearly see that the gaps have moved. Here it's roughly one half, and here it's less than one half. So these move, and in fact one can show that they don't converge. That's the three-gap theorem. Also, if you have three gaps, then you know that the third, largest gap is the sum of the two smaller gaps.

Now, how can we get a proof of the three-gap theorem, if you like, in one picture? I'm cheating, of course: not just one picture, but you will see what I mean when I get to it. The idea goes back to a paper that I wrote with Andreas Strömbergsson in the American Mathematical Monthly three years ago, and it is very simple. It's the following. Here s_{k,N} is the gap between xi_k, the k-th element, so k alpha mod 1, and its nearest neighbor to the right. You can write it like this: here we have our k alpha, and this is the set of all possible gaps that you can have as l runs through the elements between 1 and capital N; we take the smallest, because that gives us the gap. That's what it is. Then we shift: we call this quantity here m, we shift everything, and you get something that looks like this, where m, which is now the difference l minus k, runs between 1 minus k and N minus k. And now the key observation is that you can view this quantity on the right-hand side as the smallest second component over the points of a lattice whose first component falls into a prescribed interval. Since it's early in the morning or late at night, depending on which time zone you are in, and I don't want to disadvantage anyone, let me just do that calculation for you here. So we have a lattice point (m, n).
And we multiply it by this matrix A_1, which is this one, and this is our (x, y). If you compute this, what you get is m alpha plus n in the first component; it's very, very simple. And here is the picture of the situation: the green domain is where we select our lattice points, we take the smallest one, and that gives us the gap size for the k-th element. OK. Now you already see two-by-two matrices coming in. The real insight comes from simply saying: I'm not going to think only of this particular matrix A_1; I'll just look at a general matrix. So this was A_1, and I replace it by M, a general two-by-two matrix with real coefficients and determinant one. And I now define this function f_M(t): M is the matrix, t is a point in the unit interval, and t, as you can see, plays the role of the shift. I am simply selecting the smallest second component over the points of my lattice whose first component falls into the shifted interval. And then you realize that if you make the particular choice M equal to A_N, which is this matrix here, and you choose t to be k over N, you get exactly the gap. So this is now the important function, f_M(t): if I understand f_M(t), I understand the three-gap problem, because it arises from special choices of the values that I input into the function.

And in fact, we are now going to prove a generalization of the three-gap theorem, because what we'll see is that this function f_M(t), for any fixed matrix M, viewed as a function of t, can take at most three values. Why is this a generalization? Well, for the three-gap theorem I only need this at the rational points t equal to k over N; if I can prove it for all t in the interval, the three-gap theorem follows. And that's exactly what the statement says. First of all, we want to make sure the function is well defined, and in fact what we observe here is that it is well defined not just as a function of M, but as a function of M mod Gamma. If you remember, Gamma here was SL(2, Z); SL(2, Z) leaves the lattice Z^2 invariant, and that's why f_M(t) is in fact an invariant function: in other words, if I replace M by gamma M, the function stays the same. Yeah? Everyone's happy? Good. OK, so this means I have an automorphic function f_M(t). And what I'm showing now is that, if I fix M and think of it only as the function t maps to f_M(t), this statement here says that it is piecewise constant as a function of t and takes at most three values. That holds for any choice of M, but M has to be fixed, because if you move M, the values change. And that's the proof of the three-gap theorem.

Now, how do you prove Proposition 2? And that's the picture; that's how you see the three-gap theorem in one picture. The green strip is again the one that indicates the interval from minus t to 1 minus t, and what we need to show is that, as we take the smallest second component, the height of a lattice vector in this strip, so this height here, we can pick up at most three distinct values. And the way you prove Proposition 2 is: you start with the lattice vector of smallest height over the full interval (-1, 1); that will be this one. Then you can show that the vector of second-smallest height that is not collinear with the first one must be on the left-hand side; that would be this lattice vector. And then the only third possibility is the sum of the two, which would be here.
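Proposition 2 lends itself to a direct numerical check. Below is a small brute-force sketch of the function f_M(t) as just defined; the example matrix, the search range, and the rounding are my own choices, and the half-open interval (-t, 1-t] is one plausible convention:

```python
import numpy as np

def f(M, t, search=60):
    """Smallest positive second component y over nonzero lattice points
    (x, y) = (m, n) M whose first component x lies in (-t, 1 - t]."""
    best = np.inf
    for m in range(-search, search + 1):
        for n in range(-search, search + 1):
            x, y = np.array([m, n], dtype=float) @ M
            if -t < x <= 1 - t and 0 < y < best:
                best = y
    return best

M = np.array([[0.3, 1.7], [0.5, 3.5]])
M /= np.sqrt(np.linalg.det(M))           # normalize so that det M = 1
values = {round(f(M, t), 9) for t in np.linspace(0.01, 0.99, 99)}
print(sorted(values))                    # Proposition 2: at most three values
```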
So, returning to the picture: you know now that the three smallest heights over the strip for the interval (-1, 1) have to have this structure. And now all you need to show is that, as you shift your green strip here from the left to the right, these are the three values you can pick up. Because the strip has width one, you can't pick up any higher point; for instance, you would never pick up this point here, because it is already shadowed by that one as the minimum. If I had a thinner strip, then you could pick up more points, but luckily the width is one. And that's the picture of the three-gap theorem: the green strip can pick up only those three basis vectors of our lattice of minimal height, minimal second component.

Now, the nice thing about this approach is that you can generalize it to higher dimensions, because we know how to deal with higher-dimensional lattices. Previous analyses of the three-gap theorem involved continued fractions, and of course continued fractions do exist in higher dimensions, but they are very, very difficult to work with in many circumstances, at least in my view. The space of lattices in higher dimensions has a beautiful geometry, which we can exploit in these questions. OK, so let me now talk about the first higher-dimensional generalization. That is my recent paper with Alan Haynes, which is available online, where we look at higher-dimensional Kronecker sequences. So n alpha mod 1 is the one-dimensional Kronecker sequence, named after Kronecker because he was the first to prove that for irrational alpha the sequence is dense. OK, so here is the setting: we take a fixed vector alpha on a multi-dimensional torus, and we look at the sequence n alpha mod 1 on this higher-dimensional torus, taking the first N values. Now, there are many, many papers in the literature on higher-dimensional generalizations, and I won't be able to list them all; we have tried to give a comprehensive review in our paper. But let me just mention the two most important closely related works: one is by Chevallier, who looked at various interesting higher-dimensional generalizations, and one by Vijay, with a paper with a nice title, 'Eleven Euclidean distances are enough'.

So what would you say is the obvious generalization if you look at a point set on a two-dimensional torus? Here's my two-dimensional torus; I identify opposite sides, and now I plot my sequence of points, which looks something like this; I wrap around, so I come back up here. And you can now ask yourself: the gap distribution is something about distances between nearest neighbors, so we could simply take the Euclidean norm here and look at nearest neighbors. You'd say, well, this guy here is my nearest neighbor, so I'm going to record that distance; and you quickly see that in this particular setting each point records only one nearest neighbor. That's how we are going to think of our first higher-dimensional generalization: we simply ask how many distinct nearest-neighbor distances there are between the points of our Kronecker sequence for any fixed capital N. So we look at capital N points and the distances between nearest neighbors, and I'm going to call the number of distinct values that I can get, as I look at all my nearest neighbors, g_N. OK, here's another illustration, where we have two rational values of alpha, with nine points on the left-hand side and twelve points on the right-hand side. And what we see here are five distinct lengths appearing as nearest-neighbor distances.
So let me just tell you what is displayed here. This is the point alpha, this is the point 2 alpha, and then I continue: 3 alpha, 4 alpha, 5 alpha, yeah, you see it, 6 alpha, 7 alpha, and I always wrap around, of course, and then 8 alpha and 9 alpha. The blue lines indicate the nearest neighbors. This is the shortest nearest-neighbor distance you can see; then you have this distance; and this one appears twice, here and here. Overall, I get five distinct values, and the right-hand side is read in the same way. Now we can do the same in three dimensions, of course, and here's an example with N equal to 15 where, again for a rational vector, we have seven distinct distances. So certainly in dimensions 2 and 3 we don't have a three-gap theorem; that's clear, because I already have examples with 5 and 7 values. But what can one say? What is a good upper bound for the number of distinct distances in this case? Well, it turns out that in two dimensions the answer is 5, so we have a five-distance theorem, and this is really the principal result of my recent paper with Alan Haynes: no matter what alpha you take, and no matter what capital N you take, you get at most five distinct nearest-neighbor distances.

I just want to say one thing here about why I've listed the one-dimensional case again, where the bound is three. It's a trivial observation that the nearest-neighbor distances are not quite the gaps: the gaps are the lengths of the intervals into which your sequence partitions the unit interval, and, for instance, these two points here are each other's nearest neighbors. So the set of nearest-neighbor distances is a subset of the set of gaps, and in principle it could have been that we only get a two-distance theorem for nearest neighbors, because you might always miss out on the longest gap. In other words, the gap is about the nearest neighbor to the right: you don't care about the nearest neighbor to the left, and even if that one is closer, you still have the gap to the right. But in any case, one can very simply show that there can indeed be three nearest-neighbor distances in dimension one. In dimension two, that's our main result: we have an upper bound of five, and with the examples I showed you in the two-dimensional case, the result is sharp, at least for one alpha and for one N. In higher dimensions our bounds are not as sharp as we would like them to be: in dimension three we have a bound of 41, and in higher dimensions this is the result. It is related to coverings of the unit sphere, the surface of the unit ball, by balls of radius one half: the number you see here on the left-hand side is exactly two times the minimal number of balls of radius one half that you need to cover the unit sphere.

Now what else can we say here? Sorry, I've just activated my voice recognition. Before I go on, here are the numerical values: three, five, 41, and so on. We don't think the higher ones are sharp, so it's really a great challenge to try to improve our upper bounds in higher dimensions. Our results also hold if you replace the cubic torus by any torus, and you can also replace the Euclidean metric that we've chosen here, which is the standard one, by any flat Riemannian metric on the torus. In fact, things become simpler when you replace the Euclidean metric by the max norm: there we have a result of Chevallier in dimension d equal to 2, who proved that one also gets at most five distinct distances, and that has recently been generalized.
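This nearest-neighbor count is easy to experiment with. Here is a brute-force sketch for the two-dimensional torus; the choice of alpha, the value of N, and the rounding tolerance are arbitrary choices of mine:

```python
import numpy as np
from itertools import product

def g(alpha, N, decimals=8):
    """Number of distinct nearest-neighbor distances among the points
    n * alpha mod 1, n = 1..N, on the flat torus with the Euclidean metric."""
    alpha = np.asarray(alpha, dtype=float)
    pts = np.outer(np.arange(1, N + 1), alpha) % 1.0
    # the torus distance is the minimum over integer shifts in {-1, 0, 1}^d
    shifts = np.array(list(product((-1.0, 0.0, 1.0), repeat=len(alpha))))
    nn = []
    for i in range(N):
        diff = np.delete(pts, i, axis=0) - pts[i]
        d = np.linalg.norm(diff[:, None, :] + shifts[None, :, :], axis=2).min(axis=1)
        nn.append(d.min())
    return len(np.unique(np.round(nn, decimals)))

print(g([np.sqrt(2), np.sqrt(3)], 200))   # the five-distance theorem says <= 5
```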
[From the audience] Jens, your screen sharing just stopped. [Jens] OK, sorry. Can we stop this? Now it's gone. I think that was my voice recognition or something, which just started. Let me share again. [From the audience] OK, I see the PDF slides again. [Jens] OK, super, thank you. So I was just saying that if you replace the Euclidean norm by the maximum norm, the problem becomes simpler, because it is, if you like, simpler to cover by cubes than by spheres. And there's a very nice short paper by Alan Haynes where, in the case of the max norm rather than the Euclidean norm, the proof really becomes very short; I also recommend that. He did that work with an undergraduate student at Houston.

OK, now how about lower bounds? We've seen that the upper bound is five in dimension two, and we have other upper bounds in higher dimensions. The theorem here enables us to take any example, any numerical example like the ones I showed you a few slides ago, and turn it into a result that holds for almost all alpha and for an infinite subsequence of N. Why is that? Well, this theorem says that we can find a set of full Lebesgue measure such that for any alpha in that set, the lim sup of the number of distinct distances is bounded below by the number from the example. So here, let me choose some nice color: this, if you like, is the example that we have; for instance, in two dimensions we've seen an example with five values, so this quantity here will be at least five, and that gives us a lower bound on the number of distances via this theorem. Now, this theorem uses ergodic theory on the space of lattices; in fact, it uses the fact that certain trajectories in the space of lattices, whose initial condition is related to alpha, are dense. I'll come back to how exactly that works a little later in the talk. And don't worry too much about this formulation: what we're really saying is that it doesn't just work for a specific subsequence of integers N_i; it holds even for very sparse subsequences N_i, as long as they are sub-exponential. So these are lower bounds.

And now, let's just remember, something very funny is going to happen. We have our five-distance theorem in two dimensions, and we can prove that in any dimension the number of distinct distances is finite; that alone is already quite a striking result, I think. Now, let's remember what the original three-gap theorem was about. It was about the gaps, and, as I said, not the nearest neighbors. We can think of a gap as the distance to the nearest neighbor to the right: I always start here and look to the right. And this suggests another higher-dimensional generalization, where we don't just look for the nearest neighbor in any direction, but fix a cone of directions. Let me make a little drawing over here. We are now on the unit torus, which I won't draw; I just draw my point, here it is; well, let me draw two points. These are points of my sequence on the torus, if you like. And now we are only going to look in a certain direction: we only ask whether we can find a nearest neighbor within this cone of directions. For each of our points we choose the same cone; we always want to look in the same direction. So we sort of scan the horizon, but not in every direction around us, just within a given cone of directions, and we record the nearest neighbors in this cone of directions.
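By analogy with the earlier sketch, the cone-restricted count can also be explored numerically. A rough version follows; the direction, the opening angle, and the tolerances are my own arbitrary choices:

```python
import numpy as np
from itertools import product

def g_cone(alpha, N, direction, tau, decimals=8):
    """Distinct nearest-neighbor distances on the 2-torus when each point only
    'sees' displacements within a cone of opening angle tau about direction."""
    pts = np.outer(np.arange(1, N + 1), np.asarray(alpha, dtype=float)) % 1.0
    shifts = np.array(list(product((-1.0, 0.0, 1.0), repeat=2)))
    u = np.asarray(direction, dtype=float) / np.linalg.norm(direction)
    nn = []
    for i in range(N):
        best = np.inf
        for j in range(N):
            if j == i:
                continue
            for s in shifts:                  # nearby torus representatives
                v = pts[j] - pts[i] + s
                r = np.linalg.norm(v)
                # accept v only if its angle to u is at most tau / 2
                if r > 0 and v @ u >= r * np.cos(tau / 2) and r < best:
                    best = r
        if np.isfinite(best):
            nn.append(best)
    return len(np.unique(np.round(nn, decimals)))

print(g_cone([np.sqrt(2), np.sqrt(3)], 150, (1.0, 0.0), tau=2.0))
```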
So in the example here on the slide: even though this point is closer, I'm not going to pick it, because it's not in my cone. If you like, you have a bad radar that only looks in one direction: good radars sweep all directions, but this one only sweeps a particular sector. And I'm going to call that cone angle tau; so this is my tau, and I fix the same cone for all points. It turns out that the result is independent of the direction in which I fix my cone; the only important thing is the opening angle. And so we have another theorem here, which gives an upper bound on the number of distinct distances in that cone of directions. And this is just dimension two, really just two-dimensional, and now the answer depends on the cone angle. Notice that the cone angles here are large, bigger than pi, so I'm actually allowed, in fact required, to look a little bit behind my back. And the answer is five: five is the original theorem, that's looking in every direction, cone angle two pi. Then, as the angle decreases, we get nine and the other bounds you see here, and finally a certain formula for cone angles that are close to pi. Yeah, is it clear what we're doing here? This case down here would be: if this is my point, my cone would look a bit like this; so this is tau, and this is the direction I'm looking in. And this bound, as you can see, goes to infinity; it becomes really bad as tau approaches pi, so we don't really have an upper bound for tau equal to pi.

But now here's the surprise, OK? If I look at acute cone angles, less than pi, I don't have a finite-distance theorem anymore; it breaks down. That's the statement of this theorem: if I look into focused directions, and this works in arbitrary dimension, so I take a cone and assume that the cone is contained in some open hemisphere (in the two-dimensional case this just says the cone is contained in a half plane, which means that tau is less than pi), then for almost every alpha, and for any sub-exponential sequence of N's, I get infinitely many distances; that's what this statement says. On the other hand, for the same alpha I can also find a subsequence along which the number of distances stays bounded; that's this statement. Furthermore, what we can prove is that if alpha is badly approximable, and those alphas form a set of measure zero, then in fact we have a finite-distance phenomenon.

Now, as you will see from the second variant of the three-gap problem, which I'm now going to discuss with you, these results again use the ergodic theory of flows on homogeneous spaces, and in particular our homogeneous space here is the space of lattices; I hope I'll have some time to illustrate that for you. But before I go into that, let me talk about the second natural generalization of the three-gap theorem. This time we are not plotting points in higher dimensions, but rather taking the values of a linear form. So again we start with a vector alpha. We fix a bounded convex set D; just think of D as being a ball, that's good enough. And we now look at all the values of m dot alpha mod 1, where m now is an integer vector running over all lattice points inside our ball; so that's a linear form, taken mod one. And we again ask: what is the number of distinct gaps between the elements of this sequence, which is again a sequence on the unit interval with its endpoints identified?
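Here, too, a quick brute-force experiment is possible. This sketch is mine; the vector alpha, the radius T, and the tolerance are arbitrary, and floating point limits how far one can push it:

```python
import numpy as np
from itertools import product

def distinct_gaps(alpha, T, decimals=8):
    """Distinct (rescaled) gaps between the values <m, alpha> mod 1,
    taken over integer vectors m in the Euclidean ball of radius T."""
    alpha = np.asarray(alpha, dtype=float)
    rng = range(-int(T), int(T) + 1)
    ms = np.array([m for m in product(rng, repeat=len(alpha))
                   if np.linalg.norm(m) <= T])
    vals = np.sort((ms @ alpha) % 1.0)
    gaps = np.diff(vals, append=vals[0] + 1.0)   # wrap-around gap
    gaps = gaps[gaps > 10.0 ** (-decimals)]      # drop numerical coincidences
    return np.unique(np.round(len(vals) * gaps, decimals))

for T in (5, 10, 20):
    print(T, len(distinct_gaps([np.sqrt(2), np.sqrt(3)], T)))
```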
So, as you may see, this is a little bit closer to the original question, because we again look at a one-dimensional sequence, and we want to understand whether we can get upper bounds as we make our ball larger and larger and larger; that plays the role of the original N going to infinity, because we take more and more values. Now, this is again a very old problem, studied by Boshernitzan and by Dyson; Erdős looked at this, Geelen and Simpson, and many other people have studied this problem. And again, what I'll try to explain to you is that this geometric approach in terms of the space of lattices actually gives you really deep insight into this question. There is, as I mentioned, a private correspondence between Dyson and Boshernitzan, and both of them are heroes of mine. You see here, in his letter to Boshernitzan, how Dyson asks him about this question and asks him to check this paper. And of course these were outstanding mathematicians: Freeman Dyson passed away just this year, and Michael Boshernitzan last year. A lot of the things that I'm interested in and working on were motivated by these two; the problem of square root n mod 1 also comes from Boshernitzan, as Zeév Rudnick, whom I've seen on the participant list, knows well. So it's great to see that they were also interested in this problem. And here you see Dyson's theorem, which says that if the components of alpha are algebraic integers belonging to a number field of degree d plus 1 and are independent, then we only get a bounded number of gaps, no matter what the choice of alpha is.

Now, our theorem here, published just this year, makes statements very similar to the ones you've seen for the first higher-dimensional generalization, the Kronecker sequences. We can prove that for almost every alpha this boundedness fails: we do not have a bounded number of distinct gaps between the elements of the sequence, so the sup is infinity; and again we also find a subsequence along which the gaps stay bounded. Again, this follows from the ergodic-theoretic approach. Of the many results in this direction, the best is due to Bleher, Roeder, and a group of students, the group of students that won the Siemens Prize a few years ago; they showed the same result not for almost every alpha but for a smaller set, of Hausdorff dimension three halves. And Erdős actually asked whether we have an infinite number of distinct gaps whenever we have linear independence; as you've seen, Dyson answered that question: it's not true. You can have alphas which are linearly independent but badly approximable, and then you get a finite number of gaps no matter how large your scaling parameter is. OK, and that's basically the statement of this theorem, which I attribute here to Boshernitzan and Dyson and to Bleher, Roeder, and their group.

So now what remains is just for me to give you an impression of why the geometric approach that I explained for the three-gap theorem really works here, and what kind of ergodic theory is involved. I'm going to do that very quickly, on just two slides, because really what I see as the key input here is what I showed you at the start, that one picture for the three-gap theorem; that's really what's behind this. The ergodic theory we're using is by now pretty well established. And you see the similarity with the three-gap theorem: here we have our linear form, we have the element k dot alpha, and we look at the gaps.
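The formula on the slide is not captured in the transcript; by analogy with the one-dimensional case above, it should have roughly the following shape (my reconstruction, in my notation):

```latex
s_k \;=\; \min\bigl\{\, \langle \ell - k, \alpha\rangle + m \;:\;
   \ell \in \mathbb{Z}^d \cap T\mathcal{D},\;\; m \in \mathbb{Z},\;\;
   \langle \ell - k, \alpha\rangle + m > 0 \,\bigr\},
```

where s_k is the gap between the point k dot alpha mod 1 and its nearest neighbor to the right, and the integer m implements the mod-one condition.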
So again, the gaps satisfy this formula. These are all the spacings; l runs over the neighbors, all the neighbors over which we minimize, and we simply take the minimum. This m here gives us the mod-one condition. We rewrite it as before, and, just as in the one-dimensional setting, where a two-dimensional lattice emerged, we find again that the k-th gap is given by the smallest height, and by height I mean the last component of the lattice vector, over all the lattice points that fall into a certain cylinder. Yeah, and the cylinder is again shifted. So I define the analogous objects: my function f_M(t) as before, except that it now lives on Z^{d+1} M, the lattice Z^{d+1} sheared by a matrix M; so that's a (d+1)-dimensional lattice. And as before, we get the k-th gap by evaluating this function at a specific matrix, the analogue of A_N from before, and at a specific value of t. So everything works as before, and we can go through the same procedure.

Now the key input, the key estimate: the geometry is of course much more complicated in the space of higher-dimensional lattices, as you will appreciate, and that's where all the work goes. So I don't want to sell this as a completely trivial approach: the idea, I think, is in how you translate the problem, but then the real work starts. And the real work here, if you want to get the lower bound, if you want to show that there really are infinitely many distinct gaps, is to prove something like this for your function f_M(t), where M is fixed and t varies; it's again a piecewise constant function of t. And, since I want to be fast: this G(M) is the number of distinct values that our function f_M(t) takes as t varies. What we produce here, and it's just the same as for the Kronecker sequence in that setting, are little neighborhoods in our space of lattices on which my function can exceed an arbitrarily large value. So G(M) is the number of gaps, and G is again invariant, G(gamma M) equals G(M), so it's a function on the space of lattices. And what we find are little neighborhoods on which G(M) is bigger than R, and I can do this for any R. Our manifold is a non-compact space, and these neighborhoods will be in the cusp of the space. The really crazy thing is: why doesn't this work in dimension one, where we have SL(2, R) and SL(2, Z), that is, the modular surface? There we don't find these neighborhoods: when d is equal to one, this function G(M) really is bounded. Yeah, so that's really the interesting thing: the higher-dimensional cusps really are critical for finding those large values. And once we have that, once we know we are looking at a function that takes arbitrarily large values on open neighborhoods, we can use density of orbits to prove all the statements that we want.

So, if you remember, where did we evaluate our function G? Well, at this matrix (1, alpha; 0, 1), and then here we have, now I forget what I called the parameter, I think it was T or something like that, no, I don't remember. It would be something like 1 over T times the d-by-d identity matrix in one block, zero, zero, and T to the d in the corner. And you see, you can write this matrix in this way; I'm writing it like this because it then parametrizes a flow, with time parameter s. And what you can now use is a fact about the trajectory of the initial point M, where our initial point M here is this matrix (1, alpha; 0, 1).
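In symbols, what is being described is presumably the following; this is my reconstruction, and the exact normalizations and the placement of alpha depend on the conventions of the paper:

```latex
M(\alpha) \;=\; \begin{pmatrix} 1_d & \alpha \\ 0 & 1 \end{pmatrix},
\qquad
\Phi^{s} \;=\; \begin{pmatrix} e^{-s}\,1_d & 0 \\ 0 & e^{ds} \end{pmatrix}
 \;=\; \begin{pmatrix} T^{-1}\,1_d & 0 \\ 0 & T^{d} \end{pmatrix}
\quad\text{with } T = e^{s},
```

so that det Phi^s = 1, and the relevant orbit in the space of lattices SL(d+1, Z)\SL(d+1, R) is the one through M(alpha) under the flow Phi^s, s >= 0.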
So I've drawn it here. As we look at this trajectory, it becomes dense in the manifold, and if it becomes dense in the manifold, it will visit those little neighborhoods in which my function takes arbitrarily large values. And that's why we find arbitrarily large values for this particular choice of alpha; and that happens for almost every alpha. Now, on the other hand, if alpha is badly approximable by rationals, we have a theorem due to Dani, it's down here, which says that if alpha is badly approximable, this kind of trajectory will remain in a compact subset of the manifold. And if it remains in a compact set, then the function will never visit those neighborhoods where it takes arbitrarily large values, and therefore we have a bounded-gaps phenomenon.

OK, so that's just a quick tour; I'm running out of time. I hope I've given you a quick insight into what is really the core of the ideas of these two papers. Of course, there are many, many geometric estimates that go into proving that the upper bound for the Kronecker sequence is exactly five, or 41 in three dimensions, and so on; I haven't talked about that at all. My intention here was really just to explain to you the geometric starting point of our theory and how we exploit it. And the space of lattices is really extremely helpful, because, you see, in our analysis we can forget about alpha and we can forget about N: we just think about a fixed lattice and about how many values a certain function can take on that fixed lattice. It's no longer an analysis where you need to worry about large parameters and so on; that's really what makes it work.

But in the last two minutes, if I have them, what I'd like to do is circle back, because I started by telling you about the motivation for this problem, which comes from understanding the pseudo-randomness of these sequences. And as we've seen for the linear polynomials mod one, we don't see randomness at all: we don't even get convergence of the gap statistics. On the other hand, when we look at n^2 alpha mod 1, we have almost no proofs; I mean, there are some beautiful papers there, but we can't prove that there is a limiting gap distribution. So there's one way out here, and that's to randomize alpha in the linear case. If you do that, you can show that you actually do get a limiting gap distribution. When we look at n alpha mod 1, we only ever have three gaps, but now we randomize: we just average over alpha, and then we get a beautiful, nice gap distribution. There's a recent paper of Kolanko, Schultz, and Zaharescu where they get nice remainder estimates, but the problem really goes back to some really nice work by Bleher, by Mazel and Sinai, and in particular by Greenman, who was the first to compute this limiting distribution that you see here in the picture; it's an explicit, beautiful formula. That's really, I think, where I want to stop, except to say that our ergodic-theoretic machinery also allows us to prove the convergence of gap distributions, and of their higher-dimensional variants, in the same way; in fact, it gives a very nice, transparent proof of these things. And finally, just as a little take-home message, there's something really incredible here. We've looked at linear sequences and we've looked at higher-order polynomials; how about logs? And I'm not talking about these logs, this is my log shed out in the garden here; I'm talking about log n mod 1.
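This last experiment is also a one-liner; here is a sketch, with the base b = e^{1/5} as described below (my reading of the slide):

```python
import numpy as np

b = np.exp(1 / 5)                 # base: transcendental and close to 1
N = 100_000
x = np.sort((np.log(np.arange(1, N + 1)) / np.log(b)) % 1.0)
gaps = np.diff(x, append=x[0] + 1.0)
scaled = N * gaps                 # rescale so that the mean gap is 1

# The tail looks deceptively close to the exponential exp(-s), but the
# true limit is a different, explicitly computable curve.
for s in (0.5, 1.0, 2.0):
    print(s, (scaled > s).mean(), np.exp(-s))
```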
And you see here: if we take log n mod 1 and we choose the base b carefully to be e to the one fifth, the point being that it is transcendental, that's the only important thing, and that it is close to one; so take a base that's close to one and transcendental, and you plot your gap statistics. You see a curve like that: it's almost exponential. The red curve here is an exponential distribution, and the true answer is this other curve, which we can compute. And I leave you with this mystery. On the one hand, log n, this slowly increasing sequence, looks so random that we can even prove the limit here; on the other hand, we can't prove anything about n^2 alpha mod 1, I mean, 'anything' is an overstatement, but we can't prove that its gap distribution converges. And n^2 alpha mod 1 should be much more random than log n mod 1; I think you'll agree with that. So I leave you with that mystery. If you want to learn more, have a look at my paper with Andreas Strömbergsson from a few years ago; and if you want to read more on the other topics, here is the list. I'm grateful for you being here. Thank you for your attention, and I'm very happy to take questions.