Thank you very much for the introduction, Elena. It's an honor and a pleasure to be here. My talk is about finite point configurations in Euclidean space. And while the talk is not exactly about number theory, I'm going to give you a survey of finite point configuration problems, and whenever possible I'm going to emphasize number theoretic aspects. Please interrupt me any time for any reason. If I don't cover everything, the world is not going to end, at least not for that reason. All right. So many problems in mathematics can be phrased in the following extremely general way. You have a function from a space X to a space Y. The space X is suitably large, where I'm being deliberately imprecise about what that means, and the function f is suitably non-trivial. Then the image f(X) is large. Many problems, not only in mathematics but perhaps especially in theoretical computer science, can be viewed through this lens. An example from classical mathematics is Picard's theorem: if a function mapping the complex numbers to themselves is entire and non-constant, then the set of values that f(z) assumes is either the whole complex plane or the plane minus a single point. One could easily give several talks on examples of this phenomenon, but what I'm going to do in this talk is focus on it in the context of finite point configurations. So let us begin with the predecessor of all of these configuration problems, as far as I'm concerned, and this is the Erdős distance problem. The question we ask is: how small can the distance set of a set E be as the size of E gets larger and larger? This is in Euclidean space. Let me be more precise. Say we are in R^d and the size of the set is N. Then the total number of pairs is, of course, N choose 2, which is approximately N squared. However, some distances may repeat. So how few distinct distances are necessarily determined by E?
What is the smallest number over all sets of size N? So the question is: given sets of size N — and N should be at least as large as the American national debt — we want to estimate the number of distinct distances. So what is believed to be the critical example? And let me say "believed," because we are nowhere near proving that this example is critical in any reasonable sense. Take the integer lattice and intersect it with a cube of side length N^{1/d}, encompassing approximately N lattice points. Then how do you compute the size of the distance set? It is just the number of values of the quadratic form x_1^2 + x_2^2 + ... + x_d^2, where each x_j is between 0 and N^{1/d}. This is, of course, a classical number theory problem that is very well understood. However, for our purposes, doing baby number theory on this problem is more than sufficient. These values are all integers between 0 and a constant multiple of N^{2/d}, and they are separated, so the number of these values cannot possibly exceed a constant multiple of N^{2/d}. As everybody in the audience knows, when d is three or higher, this is, up to a constant, the correct count of the number of values of the quadratic form. When d equals 2, the correct count is N divided by the square root of log N, having to do with what is to me one of the most beautiful facts in mathematics: a positive proportion of positive integers cannot be written as a sum of two squares. And what this tells us is that if we have N points in d-dimensional space, where d is at least two — in one dimension this problem is not interesting — then you cannot expect the number of distinct distances to be larger than N^{2/d}. And this is indeed what Erdős conjectured.
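The lattice count just described is easy to probe numerically. Here is a small sketch of my own (not from the talk): count the distinct values of the quadratic form on a grid and compare against the trivial bound of a constant times n^2.

```python
# A quick numerical check of the lattice counts described above (my own
# illustration, not from the talk). For points x in {0,...,n}^d, the
# distinct squared distances are values of x1^2 + ... + xd^2, so there
# are at most about d*n^2 = O(N^{2/d}) of them, where N = (n+1)^d.

def distinct_squared_distances(n, d):
    """Number of distinct values of x1^2 + ... + xd^2 with 0 <= xj <= n."""
    values = {0}
    for _ in range(d):
        values = {v + x * x for v in values for x in range(n + 1)}
    return len(values)

n = 40
# d = 2: noticeably fewer values than the trivial bound 2*n^2, since a
# positive proportion of integers is NOT a sum of two squares.
print(distinct_squared_distances(n, 2), 2 * n * n)
# d = 3: here essentially a constant proportion of integers up to 3*n^2
# IS a sum of three squares (all except those of the form 4^a(8b+7)).
print(distinct_squared_distances(n, 3), 3 * n * n)
```

The gap between the two printed numbers in the d = 2 case is the elementary trace of the sum-of-two-squares phenomenon mentioned above.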
He conjectured that if the size of E is N, then the size of the distance set is bounded from below by roughly N^{2/d}, where one squiggle means that we are ignoring constants and two squiggles means that we are also ignoring logarithms. And we just saw exactly where the exponent 2/d comes from. So this is a picture of Paul Erdős that I shamelessly lifted off the internet. All right, so what is known about this problem? After more than 60 — by this point, 70 — years of effort by many outstanding mathematicians, the Erdős distance conjecture was finally solved in R^2 by Guth and Katz in 2011, using a combination of group actions, polynomial partitioning, and incidence theory ideas. These are Larry Guth and Nets Katz. In higher dimensions, the best known results, which are roughly 2/d — the conjectured exponent — minus O(1/d^3), are due to Solymosi and Vu, but the conjecture is still open. Let me stress that while the two-dimensional conjecture is essentially solved, the bound that Guth and Katz obtained is N/log N, whereas the sharpness example coming from the lattice is N divided by the square root of log N. So what this means is that we are not yet anywhere near proving that the lattice example is in any way critical, and this, of course, would be an interesting barrier to breach someday. All right. One point that should be made here is that the metric matters — in other words, what sort of distances we are considering. First, an extremely superficial example. Suppose that in place of the Euclidean metric we consider the little ell-1 metric, the Manhattan metric. Then the number of distinct ell-1 distances determined by E is approximately N^{1/d}. This is not difficult to prove for the little ell-1 metric, and the result is sharp, which can be easily read off from the lattice example.
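The ell-1 claim can be seen directly on the lattice; here is a tiny brute-force check of my own. On the grid {0,...,n}^d the ell-1 distance between two lattice points is an integer between 0 and d*n, so the N = (n+1)^d points determine at most d*n + 1, roughly N^{1/d}, distinct ell-1 distances.

```python
# Sketch (my own illustration): distinct ell-1 distances on a grid.
from itertools import product

def distinct_l1_distances(n, d):
    pts = list(product(range(n + 1), repeat=d))
    return len({sum(abs(p - q) for p, q in zip(P, Q))
                for P in pts for Q in pts})

n, d = 6, 2
# Every integer in [0, d*n] is attained, so the bound is exact here.
print(distinct_l1_distances(n, d), d * n + 1)
```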
In fact, for any distance induced by a norm in the plane, one can show that the number of distinct distances with respect to that norm is always bounded from below by a constant multiple of N^{1/d}. This is a very nice theorem due to Julia Garibaldi from about 20 years ago. Okay. And so the metric matters. Then the question is, what is the difference between the little ell-1 distance and the little ell-2 distance? Well, there are lots of differences, but the paradigm that's been driving these problems is that the circle with respect to the little ell-2 distance, the Euclidean distance, is curved, whereas the circle with respect to the little ell-1 distance — the rhombus in this picture — is flat. So the guiding paradigm is curvature, and this guiding paradigm has led to many interesting results in this area. However, there are glimpses, little hints, suggesting that this does not tell the whole story. Here is a kind of tantalizing example that suggests there may be much more to it than curvature. Consider the same example where we intersect the lattice with a large cube encompassing approximately N points, and now consider the little ell-d metric in d dimensions. Of course, let's stick to dimensions three and higher, as the two-dimensional case is exactly what we have already discussed. Suppose you want to apply this metric to the lattice example. It leads to the following innocent-looking question — it looked innocent to me; I'm certain it doesn't look innocent to a number theory audience: can a positive proportion of positive integers be expressed as a sum of d d-th powers of positive integers? To the best of my knowledge, there has been no significant progress on this problem for a very long time, and the existing bounds are rather bad.
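One can at least probe this question empirically. The following sketch (my own illustration; the function name and setup are mine) computes the observed density of sums of three nonnegative cubes up to X. Whether a positive proportion is attained in the limit is, as just said, wide open.

```python
# A small numerical probe of the question above (illustration only):
# what fraction of the integers up to X are sums of k nonnegative
# d-th powers?  The ell-d metric on the lattice in R^d leads to sums
# of d d-th powers of the coordinate differences.

def sums_of_k_dth_powers(X, k, d):
    """Set of integers in [1, X] expressible as a sum of k d-th powers
    of nonnegative integers."""
    reachable = {0}
    for _ in range(k):
        step = set()
        for r in reachable:
            a = 0
            while r + a ** d <= X:
                step.add(r + a ** d)
                a += 1
        reachable = step
    return {m for m in reachable if 1 <= m <= X}

X = 10000
hits = sums_of_k_dth_powers(X, 3, 3)
print(len(hits) / X)  # observed density of sums of three cubes up to X
```

Of course, a density computed up to 10000 proves nothing about the asymptotic question; it only gives a feel for how sparse these sums are.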
If somebody in the audience knows otherwise, please let me know. But this is an indication — and these indications are all over the place in these distance problems — that in any direction you poke, you arrive, almost by accident, at very difficult questions from different areas of mathematics. This is one of the principal reasons why I like these problems so much. A continuous variant of the Erdős distance problem is the Falconer distance problem, which was introduced by Kenneth Falconer around 1985. Here, once again, we consider a subset of R^d, this time compact, and we define the distance set again to be the set of Euclidean distances. I have already explained why the metric matters, but from this point on I'm going to mainly stick to the Euclidean distance, until I start discussing sharpness examples, where once again the number theory associated with different distances is going to kick in. The question we ask here changes, because the notion of large changes: in the case of the Erdős problem, the notion of large was simply the count; in this case, the notion of large is the Hausdorff dimension. So how large does the Hausdorff dimension need to be to ensure that the Lebesgue measure of the distance set is positive? Just for the sake of perspective, the problem that tends to pop up on analysis preliminary exams is that if E has positive Lebesgue measure, then E − E (or E + E) contains an open ball. This is the celebrated Steinhaus theorem. In that particular case, of course, the Lebesgue measure of the distance set is positive. But what happens if E is much smaller? What we are aiming at here is still the conclusion that the Lebesgue measure of the distance set is positive, but we want to weaken the assumption considerably, to the Hausdorff dimension of the set being sufficiently large. So this is the Falconer problem.
And let me go straight to the construction, because I want you to have a feel for what it looks like on the level of examples — this is a sort of central theme of this talk. This time we take the integer lattice, we intersect it with a large cube, we scale it back down to the unit cube, and then we take a q^{-d/s} neighborhood. Those of you who are used to working with fractal sets know what I'm doing: I'm trying to imitate, or simulate, a set of dimension s. It's a classical and not very difficult theorem that if you take a sequence of q's going to infinity really, really fast — super-exponentially — and you take the intersection of all of these sets, then the Hausdorff dimension is equal to s. And just as when you work with Cantor sets, the point is that you can work with the various stages, and the stage is described in the displayed formula right here. So what can we say about the Lebesgue measure of the distance set at a given stage? Well, there are two types of interactions, and I'm going to show you a picture in a moment. This set consists of little balls of radius q_i^{-d/s}. If you measure distances inside each ball, you will get intervals in the distance set of length roughly q_i^{-d/s}. But then you have to multiply this by the number of distinct distances between the centers of the balls, and this is precisely the problem of counting the values of the quadratic form — the sum of squares — that I described earlier in this talk. We've already seen, by completely elementary considerations, that the number of distances between the centers of these balls is bounded by a constant multiple of q_i^2. And what this means is that the Lebesgue measure of the q_i-th stage of the distance set is bounded by q_i^{-d/s} — the short-distance interactions, the length of each interval — times q_i^2, the number of distances between the centers.
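The arithmetic of this bound is worth seeing numerically. A minimal sketch of my own (constants ignored): the stage bound q^{-d/s} * q^2 decays as q grows exactly when s < d/2, and grows otherwise.

```python
# Illustrating the decay/blow-up of the stage bound
# |Delta(E_q)| <= C * q^{-d/s} * q^2  (constants ignored):
# the right-hand side tends to 0 exactly when s < d/2.

def stage_bound(q, d, s):
    return q ** (-d / s) * q ** 2

d = 2
for s in (0.8, 0.99, 1.01, 1.5):
    vals = [stage_bound(q, d, s) for q in (10, 100, 1000)]
    trend = "-> 0" if vals[-1] < vals[0] else "grows"
    print(f"s = {s}: {trend}")
```

With d = 2 the transition happens at s = 1 = d/2, in line with the threshold in the Falconer conjecture.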
And what you can see is that this goes to zero if s is less than d/2. So this tells us that we cannot expect positive results for the Falconer problem, in general, when the Hausdorff dimension is below d/2. Moreover — and this statement is less precise, but I still believe it is important — what we have seen here is that this d/2 dimensional threshold in the Falconer problem comes, on the level of examples, from the same source as the conjectured exponent 2/d in the Erdős distance problem. This raises a number of fascinating questions which I will not have time to address today, but one obvious question is: can one prove a theorem showing that either Erdős implies Falconer, or vice versa? In general the answer is no, in the sense that we don't know how to do that, but there are some partial results in this direction, due to Izabella Laba and myself, and in another paper with Steve Hofmann. All right, so here's the diagram, just to make sure that everything is clear. Once we've scaled the set into the unit cube, we have these balls of radius q^{-d/s}, and within each ball we get distances of size roughly q^{-d/s}. This gives us intervals of length q^{-d/s}, and the number of these intervals is roughly the number of distinct distances between the centers of the balls, where elementary number theory was used. So here's the Falconer conjecture, from roughly 1986, and it sums up what we saw in the sharpness examples: if E is a compact subset of R^d with Hausdorff dimension bigger than d/2, then the Lebesgue measure of the distance set is positive. What I'm often asked is: why bigger than d/2? The counterexamples that I showed you only rule out strictly less than d/2. But first of all, the number theory in the two-dimensional case does rule out the endpoint.
As far as the higher dimensional case goes, we simply have bigger problems; once we start getting closer to the critical exponent, the discussion of the endpoint will become more meaningful. Okay, and this is Kenneth Falconer — picture also shamelessly lifted off the internet. All right, so let me give you a brief timeline of the work on the Falconer problem, which has intensified considerably in recent years due to the explosion of activity driven by the incredible developments in decoupling theory by Bourgain, Demeter, Guth and others. So Falconer established the threshold (d+1)/2, meaning that if the Hausdorff dimension is bigger than (d+1)/2, then the Lebesgue measure of the distance set is positive. He did it essentially using the method of stationary phase — a very beautiful and simple argument. Bourgain improved the two-dimensional exponent to 13/9, but, as usual, Bourgain did much more: he introduced Fourier restriction theory to the study of this problem, and in one form or another this has guided the developments on this problem ever since. In other words, this was a normal day in the life of Jean Bourgain. In 1999, Tom Wolff improved Bourgain's exponent to 4/3 in R^2, again using a restriction-type argument. Burak Erdogan obtained the exponent d/2 + 1/3 in dimensions three and higher using bilinear restriction theory. And here is superficial evidence that the problem is probably not trivial: to go from d/2 + 1/2 to d/2 + 1/3, an improvement of one sixth, took 20 years and much of the then cutting-edge machinery of harmonic analysis. So there was a long period of relative inactivity on this problem, at least in this form, but then there was an explosion of activity in recent years.
So Du, Guth, Ou, Wang, Wilson and Zhang obtained the threshold 9/5 in R^3 using decoupling. Decoupling, as probably everybody in the audience knows, has also had a tremendous impact on analytic number theory: the Vinogradov mean value conjecture was proved in this way, and the best currently known exponent for the Gauss circle problem was obtained by Bourgain and Watt, also using decoupling techniques. Then Du and Zhang soon after obtained the exponent d^2/(2d − 1), and there was an improvement to d/2 + 1/4 in even dimensions by Du, myself, Ou, Wang and Zhang — again, all using decoupling as the main tool. And in two dimensions, the threshold 4/3 was improved to 5/4, and extended to other smooth metrics, by Guth, Ou, Wang and myself. Okay, so this gives you a rough timeline of the developments on this problem. What I'm going to do now, before going on to finite point configurations, where other elementary number theoretic examples are going to arise, is give you a glimpse of the range of applications of these distance problems. Again, as I mentioned before, it seems that in any direction you look, you will find distance sets lurking nearby. So here's an example — a result by Nets Katz, Steen Pedersen and myself from a long time ago. We proved that L^2 of the unit ball does not possess an orthogonal basis of exponentials. We've all known from an early age that L^2 of the cube — or the torus, however you want to look at it — does have an orthogonal basis of exponentials, generated by the integer lattice. But the answer to the same question for the ball is no. This was a question posed by Bent Fuglede in the 70s. Let me give you a brief idea of how this is done and what it has to do with distance sets. If you have an orthonormal basis of exponentials for L^2 of the ball, then the set Lambda that generates it is separated. Okay, this is very easy to see.
Moreover, the density of this set is equal to the volume of the ball — this is the celebrated Beurling density theorem — but the truth is, all we need for this argument is positive density, and this can be done by hand using methods that are quite standard both in harmonic analysis and in analytic number theory. What does orthogonality mean? Orthogonality means that the Fourier transform of the indicator function of the ball, evaluated at the difference of two elements of the putative spectrum, is zero. But the Fourier transform of the indicator of the ball can be written in terms of Bessel functions, and all we need to know is that the zeros of the Bessel function are uniformly separated. So there are two things. From density, we know that the number of elements of the putative spectrum in a large cube of side length R is approximately R^d. But the separation of the zeros of the Bessel function tells us that the number of distinct distances determined by the elements of the putative spectrum inside the cube of side length R cannot exceed a constant times R. So we have produced a magical set of approximately R^d points in d dimensions that determine at most a constant times R distances. And this is simply impossible by the known results on the Erdős distance problem — and it doesn't require any of the latest results; the results known by 1953, from Leo Moser's thesis, were more than sufficient to derive a contradiction here. Okay, so this is an example that's particularly near and dear to my heart, because it illustrates how a seemingly difficult problem in functional analysis is resolved almost trivially once one takes the right point of view, and the right point of view in this particular case is geometric combinatorics. And this makes one wonder — it certainly made me wonder — how many other problems like that there are. And I think there are thousands.
Okay, a related question, introduced by Dennis Gabor, is the following: for which g in L^2(R^d) does there exist a set S in R^{2d} such that the collection of functions g(x − a) e^{2 pi i x·b}, with (a, b) in S, is an orthogonal basis? Here g is called a window function. If you're not familiar with this, you should think of g as a kind of sliding window, and the exponentials e^{2 pi i x·b} provide a basis inside this window, roughly speaking. This was invented by Gabor for practical applications that I don't have time to go into. Okay, so this is Dennis Gabor, who got the Nobel Prize in physics in the 70s. Okay, and here's a theorem. One rather poorly developed aspect of this theory is what happens if the window function is the indicator function of a set. There's been some recent activity on this problem, and a simple result is joint work with Azita Mayeli from a few years ago. We proved that if B is a symmetric convex body with a smooth boundary and non-vanishing Gaussian curvature, then, if the dimension is not congruent to 1 mod 4, there does not exist a set S that makes this an orthogonal basis of L^2(R^d). To put it simply, there does not exist an orthogonal Gabor basis where the window function is the indicator function of a ball. If the window function is the indicator function of a cube, there is no problem — that is the classical Gabor basis. And the distance set ingredient of this proof is a generalization, by Misha Rudnev and myself, of the celebrated Erdős integer distance principle — another one of my favorite problems — which says that if A is an infinite set and all the distances between its points are integers, then A is contained in a line. What we had to do, for the purposes of applications to these Gabor problems, was prove an approximate version of this principle, which says, roughly speaking, that if the distances are approximately integers with sufficiently good error bounds, then the same conclusion can be reached.
And what is interesting here is that even though the hypothesis is an approximate integer condition, the conclusion is still exact, namely that A is contained in a line. So, going back to the distance problems and point configuration problems: much of the work, at least as an initial setup, uses the following framework introduced by Pertti Mattila. There's a certain integral, which I'm going to show you in a moment, that I used to call the Mattila integral until Pertti Mattila told me not to do that, so instead I'm calling this the Mattila paradigm. We define the measure nu on the distance set in a completely natural way. What does the measure nu do? It counts the number of pairs separated by distance t, but it does this in a continuous way — it is simply a continuous counting function on the set of pairs separated by a given distance. If we apply the Cauchy-Schwarz inequality, what we see is that if we can bound the L^2 norm of the distance measure from above, then we can bound the Lebesgue measure of the distance set from below. You can rightfully start complaining: why do I even get to talk about the L^2 norm of a measure? Well, I don't — but if I can make sense of the L^2 norm of a measure and bound it from above, then this idea will go through. Okay. So what does it take to estimate the L^2 norm squared of the distance measure? Well, this comes down to "counting" — again, I'm using the word counting in quotations — quadruples x, y, x prime and y prime such that the distance from x to y is equal to the distance from x prime to y prime. I just noticed that there are questions in the chat; maybe I should take a look. Oh, there's a question — oh my God, Andrew Nikas is here. Hello Andrew. Okay. So what is the measure mu? Mu is a Borel measure on the underlying set. This is correct, and there's a typo on the slides. Thank you, Yuma, I will correct it. All right.
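The quadruple count just introduced has a clean discrete analogue, which can be computed by brute force on a grid. A sketch of my own (function names are mine): by Cauchy-Schwarz, N^4 divided by the quadruple count lower-bounds the number of distinct distances.

```python
# Discrete analogue of the L^2 quantity above (my own illustration):
# count quadruples (x, y, x', y') of grid points with |x-y| = |x'-y'|.
from itertools import product

def quadruple_count(n):
    pts = list(product(range(n + 1), repeat=2))
    freq = {}
    for P in pts:
        for Q in pts:
            d2 = (P[0] - Q[0]) ** 2 + (P[1] - Q[1]) ** 2
            freq[d2] = freq.get(d2, 0) + 1
    # quadruple count = sum over t of (#pairs at squared distance t)^2
    return sum(c * c for c in freq.values())

n = 5
N = (n + 1) ** 2
Q4 = quadruple_count(n)
print(Q4, N ** 4 // Q4)  # quadruple count, and the Cauchy-Schwarz
                         # lower bound on distinct distances
```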
So we are counting quadruples such that the distance from x to y is equal to the distance from x prime to y prime. Okay. Equivalently, this means that x prime minus y prime is a rotation theta applied to x minus y. So high school geometry kicks in, which is always pleasant. In two dimensions this rotation is essentially unique, except for the obvious degenerate cases, and in higher dimensions it's not unique, but it is easy enough to deal with the multiplicity. And what this does is reduce the estimation of the squared L^2 norm of the distance measure to the estimation of another L^2 norm — this time, the L^2 norm of the natural measure on the set E minus theta E. The beautiful idea here — and this idea is due to Elekes and Sharir in the discrete setting — is that studying the L^2 norm of the distance measure (or of functions; it doesn't really matter what the context is) comes down to understanding, in a suitable sense, the structure of E minus theta E, where E is the underlying set and theta is a rotation. Equivalently — this slide is called "Fourier magic," and you'll see why — one can define this measure on E minus theta E in a straightforward way: we define its action on a function g as the double integral of g(u − theta v) d mu(u) d mu(v). (On the slide the function should have the same name on both sides — this should be g as well; sorry about that.) And what this allows us to do is take the function to be a character. This argument works equally well in whatever setting — for example, it works just as well in finite fields — and it allows us to easily compute the Fourier transform of this measure on E minus theta E. When we go through this computation and use spherical coordinates, we arrive at the key object in the study of distance sets, either in Euclidean space or in vector spaces over finite fields.
So the reason I chose not to discuss these problems in vector spaces over finite fields is that the theme is how number theory arises in Euclidean problems, not how number theory arises in number theory problems, and the finite field setting is essentially a number theoretic setting. So we have this key object, and its key feature is the spherical average of mu-hat of R omega squared, integrated over omega. This is the key object that needs to be controlled — this right here. One quick point, which I'm not going to derive here but which is not difficult, is that the Falconer conjecture would follow if we could get the best possible estimate on this spherical L^2 average — namely, if we could bound it by R to the power minus s, where s is roughly the Hausdorff dimension. We can be off by an epsilon here; that's not the point. But if one could obtain any result of this type, it would imply the Falconer conjecture. And this is where we run into very entertaining number theoretic considerations. First of all, this estimate is in general false — but it's false in an interesting way. In higher dimensions, one can prove that if the current approach to Falconer were to succeed — in other words, if we could prove this optimal estimate, which we can't, as I've already mentioned — then this would imply the following, again innocent-looking, conjecture: if S is a closed smooth symmetric convex surface with non-vanishing curvature in R^d, then in dimensions three and higher the number of lattice points on an R-dilate of this surface cannot, up to logarithms, exceed R^{d−2}. This, I believe, is known as the Schmidt conjecture in analytic number theory. And — please correct me if I'm wrong — it is my understanding that the prospects of proving this conjecture in the near future are dim.
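For the sphere in R^3 this lattice point count is easy to look at empirically. A quick sketch of my own (not a statement about the general convex case, which is the hard part of the conjecture): count integer solutions of x^2 + y^2 + z^2 = R^2 and watch the roughly linear growth in R, consistent with the R^{d−2} = R prediction.

```python
# Empirical look (my own illustration) at lattice points on dilated
# spheres in R^3: r3[n] = number of integer points with x^2+y^2+z^2 = n.
from collections import Counter

def r3_counts(max_n):
    r = Counter()
    m = int(max_n ** 0.5)
    for x in range(-m, m + 1):
        for y in range(-m, m + 1):
            for z in range(-m, m + 1):
                n = x * x + y * y + z * z
                if n <= max_n:
                    r[n] += 1
    return r

r = r3_counts(2500)
for R in (10, 20, 30, 40, 50):
    print(R, r[R * R])  # grows roughly linearly in R
```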
And the key point here is this: what you saw on the previous page was a spherical integral, whereas all of a sudden I'm now talking about general closed smooth symmetric convex surfaces with non-vanishing curvature. This is because, if you look at the totality of the literature on this subject, or the related literature involving restriction theory, decoupling, et cetera, you will not find a single result, to the best of my knowledge, that distinguishes between, say, a sphere, a piece of a paraboloid, or any other convex surface with non-vanishing curvature. So this is a cautionary tale: unless we learn to distinguish between spheres and other surfaces with non-vanishing curvature, we will keep running into these rather improbable paradigms where our results, if they were true, would imply results from number theory that are very, very difficult. So, just a few comments about the higher dimensional situation before diving into higher point configurations. There is a variety of approaches — some due to Misha Rudnev and myself, others due to Bennett, Carbery and their collaborators, and to Du and Zhang — all involving a paraboloid in one form or another, which show that you cannot obtain a sharp estimate for the spherical average if you are allowed to replace the sphere by a general surface with non-vanishing curvature. And the idea behind this — at least the idea due to Misha Rudnev and myself; different ideas were pursued by other people — is that if you take the average corresponding to the boundary of a smooth convex surface with non-vanishing curvature, and if you take as your measure the measure that I presented earlier in this talk as the sharpness example yielding the d/2 exponent for the Falconer problem, then when you plug it in and crunch the integral, you arrive at counting integer lattice points in the neighborhood of dilated convex sets.
So this is the technical aspect of this paradigm. Okay — finite point configurations in higher dimensions. Let me see; let me maybe skip the sets of positive density and go straight to finite point configurations in thin sets. So, in the realm of subsets of R^d, d ≥ 2, of a given Hausdorff dimension, we're going to consider the following problem, which I believe can be viewed as a generalization of the Falconer problem. Given a compact set E in R^d, d ≥ 2, let T_k(E) denote the set of congruence classes of non-degenerate k-dimensional simplices with vertices in E. So, for example, if the dimension is two and k equals two, we are looking at congruence classes of triangles. And the question here, in analogy with the Falconer problem — and this is a question that's received a lot of attention over the last decade or so — is: how large does the Hausdorff dimension of E need to be to ensure that the (k+1 choose 2)-dimensional Lebesgue measure of the set of these congruence classes is positive? So why (k+1 choose 2)-dimensional? This is the generalization of the side-side-side principle from high school geometry: if you want to determine the congruence class of a non-degenerate triangle, it's enough to know the triple of side lengths. And the same thing is true of k-dimensional simplices, where k is less than or equal to the ambient dimension — the (k+1 choose 2) pairwise distances do the job. You can also consider congruence classes of point configurations where k is bigger than d, and this is a fascinating subject that I don't have time for today. So let me give you a sample result.
I'm not going to describe the full zoology of results in this area, just a sample result, due to Greenleaf, Liu, Palsson and myself from a few years ago: if E is a compact subset of R^d of Hausdorff dimension bigger than (dk + 1)/(k + 1), then the (k+1 choose 2)-dimensional Lebesgue measure of the set of congruence classes of k-dimensional simplices is positive. The sharpness examples lead to some really interesting considerations that I'm going to get to in a couple of minutes. Let me mention in passing that while these exponents have been improved slightly, the biggest mystery left in this problem is what the sharpness exponents should be, because the number theory questions that arise here seem to be of a more difficult nature. So let me go ahead and describe this. But before I go there, let me point out that the group action method that I described earlier is also a gateway to studying these problems. The original proof of the Mattila paradigm that I described, which led to the key integral involving a spherical average, went by the method of stationary phase, and there's absolutely nothing wrong with that. However, the group action approach that we established in this paper is what provides the gateway to the study of higher point configurations. And the idea is that the same method that I described allows one to control the L^2 norm squared of the counting function on k-dimensional simplices: this time, instead of controlling it by the L^2 norm squared of the counting function on E minus theta E, you bound it by the L^{k+1} norm, raised to the power k + 1, of the counting function on E minus theta E. In fact, the argument gives this to you for free. And what this allows one to do is reduce these problems to analytic considerations that are very similar to the ones that arise in the study of the distance problem itself.
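Going back to the side-side-side principle underlying the (k+1 choose 2) count: a quick numerical sanity check of my own (the function names are mine) that the sorted pairwise squared distances are indeed a congruence invariant, i.e., unchanged under rotation and translation.

```python
# Sanity check of the side-side-side principle (my own illustration):
# the (k+1 choose 2) pairwise distances of a simplex are unchanged by
# any rotation + translation.
import math
import random

def pairwise_sq(points):
    """Sorted squared pairwise distances of a point configuration."""
    m = len(points)
    return sorted(sum((points[i][t] - points[j][t]) ** 2
                      for t in range(len(points[0])))
                  for i in range(m) for j in range(i + 1, m))

random.seed(0)
tri = [(0.0, 0.0), (3.0, 0.0), (1.0, 2.0)]
theta = random.uniform(0, 2 * math.pi)
c, s = math.cos(theta), math.sin(theta)
shift = (random.uniform(-5, 5), random.uniform(-5, 5))
moved = [(c * x - s * y + shift[0], s * x + c * y + shift[1])
         for x, y in tri]

before, after = pairwise_sq(tri), pairwise_sq(moved)
print(all(abs(a - b) < 1e-9 for a, b in zip(before, after)))
```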
And so let me describe some of the examples involved and some of the difficulties that one runs into with higher dimensional examples. Suppose that we consider two-dimensional simplices in R², namely triangles. Here what we must do, and I will explain this in more detail, is count congruence classes of triangles in Z². This example is due to Burak Erdogan and myself, and it was computed at a bar near the University of British Columbia after a couple of beers. Our biggest shock was when we checked the next morning and it was still correct. So here's the example. First of all, what are we doing? We're counting these triangles. And as I described to the organizers an hour before the talk, I discovered to my horror that I forgot to prepare diagrams for this talk. So every diagram you see was prepared by ChatGPT, okay? So these are triangles; I think it looks okay. So what do we do? We again take the q^{-2/s} neighborhood of the integer lattice in a large square, scaled down to the unit square, and then let's estimate the three-dimensional Lebesgue measure of the set of congruence classes of triangles. Well, first of all, again we have small-scale interactions: it's (q^{-2/s})³ times the number of congruence classes of triangles in the lattice, and this is not difficult to compute. The point is, you can put one vertex at the origin, you have q² choices for one distance, you have another q² choices for the other distance, and then you use the fact that a circle in Z² of, say, radius q cannot contain more than q^ε lattice points. In fact it's much less, but this is good enough for our purposes. And this tells us that the three-dimensional Lebesgue measure of the set of congruence classes is bounded by q^{-6/s + 4}, up to this epsilon factor.
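The lattice count just described can be checked by brute force on small grids. The following sketch (mine, not from the talk) labels each non-degenerate triangle with vertices in {0, ..., q}² by its sorted triple of squared side lengths and counts the distinct labels; the heuristic above predicts growth roughly like q⁴, up to q^ε factors.

```python
import itertools

def sq_dist(p, q):
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def is_degenerate(a, b, c):
    # Zero signed area <=> the three points are collinear.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]) == 0

def triangle_classes(q):
    """Count congruence classes of non-degenerate triangles with vertices
    in the (q+1) x (q+1) grid {0,...,q}^2, using the sorted triple of
    squared side lengths (SSS) as the class label."""
    pts = [(x, y) for x in range(q + 1) for y in range(q + 1)]
    classes = set()
    for a, b, c in itertools.combinations(pts, 3):
        if is_degenerate(a, b, c):
            continue
        classes.add(tuple(sorted((sq_dist(a, b), sq_dist(b, c), sq_dist(a, c)))))
    return len(classes)

for q in (1, 2, 3, 4):
    print(q, triangle_classes(q))
```

For q = 1 there is exactly one class (the right isosceles triangle with legs 1), and the count grows quickly from there.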
And what this tells us is that the three-dimensional Lebesgue measure of these congruence classes is not in general positive if the Hausdorff dimension of the set is less than three halves. The best known exponent in the positive direction is eight fifths. So we're getting close, but there's still a gap, and my guess is that this example is sharp, okay? However, let me describe another situation where sharp examples can be computed. Here, the non-trivial number theory that's used is the fact that a circle of radius q in Z² does not contain many lattice points. That's the idea here. Now, suppose that we wanted to study quadrilaterals: not simplices on four vertices, simply quadrilaterals. So we only specify distances between consecutive vertices, and it loops back around, okay? And as you can see, ChatGPT produced a picture of this in exactly the same way, okay? So how do we count these? Again, the best known positive threshold for quadrilaterals is twelve sevenths. So what is the best sharpness example that I know how to produce? Starting with the same integer lattice example, okay? We see that the four-dimensional Lebesgue measure of the set of these two-dimensional quadrilaterals, four because we have four links here, one, two, three, four, okay, is bounded by what? It's the small-scale interactions, (q^{-2/s})⁴, times the number of distinct quadrilaterals, with respect to these consecutive distances, determined on the integer lattice. And it's not difficult to see that that's approximately q⁶, right? You have q² choices for one distance, q² choices for another, the remaining point can be chosen in roughly q² ways, and each choice yields a different class. And what this tells us is that we cannot in general have a positive result here if the Hausdorff dimension is less than four thirds.
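Spelled out, the arithmetic behind the four-thirds threshold goes like this (my reconstruction of the computation being described):

$$
\mathcal{L}^4(\text{classes}) \;\lesssim\; \left(q^{-2/s}\right)^4 \cdot q^6 \;=\; q^{\,6 - 8/s},
$$

and the exponent $6 - 8/s$ is negative exactly when $s < 4/3$, so for such $s$ the measure tends to zero along the sequence of lattice examples. The triangle computation is the same with different exponents: $\left(q^{-2/s}\right)^3 \cdot q^{4+\varepsilon} = q^{\,4 + \varepsilon - 6/s}$, which degenerates when $s < 3/2$.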
So please note that there's a significant gap between twelve sevenths and four thirds, at least according to my calculator, okay? And so what happens in higher dimensions? In higher dimensions, the same approach to sharpness examples leads to counting k-dimensional simplices in a large cube in Z^d. And to put it simply, several groups of people have tried this, and no one has obtained anything resembling reasonable bounds. There could be a variety of reasons for this. One is that we may not be counting these simplices efficiently; this is where it would be extremely welcome if people with deeper knowledge of number theory than I have were to take a look. It's also possible that a more subtle sharpness example is needed. However, it is almost certain that such an example would have to be of a number theoretic nature. Why? Because there is a belief, and it's a reasonable belief, that sharpness examples should be based on the integer lattice. Why should they be based on the integer lattice? Because we want distances to repeat as often as possible in the worst case, and it's hard to think of a setting where that happens more than on the integer lattice. It's a superficial argument, but it's the best that we have at the moment. Okay, so I think I'm going to stop there.