Thank you, Philip. Thank you very much for the invitation to speak in this seminar, and thanks to all three of you, Alina, Michael and Philip, for organizing this wonderful seminar during the pandemic. I'll speak about some work that has been ongoing for a long time. So here is the very basic setup: we choose an integer n up to x uniformly at random, and then we want to see what we can say about its multiplicative structure. More precisely, we would like to say things about the statistics of its prime factors, or about the statistics of its divisors. In the beginning I'll give a brief account of the history of the first question, which is very well understood, and then I'll pass to the second question, which is the main topic of this talk. So let me write p_1 ≤ … ≤ p_k for the prime factors of n, where k = ω(n), the number of distinct prime factors. We know from the time of Hardy and Ramanujan that for almost all n — meaning that the exceptional set has cardinality o(x) — ω(n) is very close to log log x. But a lot more is known. Starting with work of Landau, and then of Selberg and Delange, we know the pointwise distribution of ω(n): how frequently ω(n) takes a given value k. It is a slightly perturbed version of the Poisson distribution with parameter log log x. So it does concentrate around log log x, as a Poisson distribution would, and this refines the first result. And we know a lot more: if we fix disjoint sets I_1, …, I_m that satisfy various technical conditions that I'm not going to state,
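As a quick numerical sketch of the Hardy–Ramanujan phenomenon (not part of the talk), one can sample random integers and compare the average of ω(n) to log log x; the two agree up to an additive constant (Mertens' constant). The function names here are mine.

```python
import math
import random

def omega(n):
    """Number of distinct prime factors of n, by trial division."""
    count, p = 0, 2
    while p * p <= n:
        if n % p == 0:
            count += 1
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        count += 1
    return count

random.seed(0)
x = 10**7
sample = [random.randint(2, x) for _ in range(2000)]
mean_omega = sum(omega(n) for n in sample) / len(sample)
# mean_omega should be close to log log x (up to a bounded additive shift)
print(mean_omega, math.log(math.log(x)))
```

The concentration is slow: log log x grows extremely slowly, which is why ω(n) looks almost constant over huge ranges of n.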
Then if you look at how many prime factors n has from each of these sets I_j, these counts are random variables that are roughly independent of each other and Poisson distributed with parameters λ_j, the sum of 1/p over the primes p in each set I_j. I'm not 100% sure to whom to attribute this result; it seems to be known in some version for a long time, and there is recent work establishing uniform versions of it. We can also order the prime factors and look at the intermediate range, say from the jth up to the (k−j)th. If we look at this vector and ask statistical questions about its joint distribution, the question reduces to the distribution of the middle part of k order statistics: they behave basically like ordered uniform random variables. In particular, we know that log log p_j is approximately equal to j for most n. The big prime factors behave quite differently — they are much more irregular — and the right scale to look at them is the logarithmic one: if I normalize the logs of the top j prime factors by log n, then they look like the first j entries of a Poisson–Dirichlet distribution. So all kinds of questions you might want to ask about the statistics of prime factors are very well understood, and we have quite satisfactory answers to them. This is really not the case for the distribution of divisors: that theory is far from complete. The two guiding problems of the subject have been two questions of Erdős, which are very simple to state. The first one asks about the local distribution of divisors.
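One concrete consequence of the Poisson–Dirichlet behavior of the large prime factors (a classical fact, not stated explicitly in the talk) is Dickman's theorem: the proportion of n whose largest prime factor is at most √n tends to 1 − log 2 ≈ 0.307. A small simulation, with function names of my choosing, illustrates this:

```python
import math
import random

def largest_prime_factor(n):
    """Largest prime factor of n >= 2, by trial division."""
    p, largest = 2, 1
    while p * p <= n:
        if n % p == 0:
            largest = p
            while n % p == 0:
                n //= p
        p += 1
    return n if n > 1 else largest

random.seed(1)
x = 10**6
sample = [random.randint(2, x) for _ in range(4000)]
frac = sum(1 for n in sample
           if largest_prime_factor(n) <= math.isqrt(n)) / len(sample)
# Dickman: the limiting proportion is 1 - log 2
print(frac, 1 - math.log(2))
```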
Is it true that if you pick an integer n at random, then there are two divisors that are close together — close meaning that if you take logarithms, they lie within log 2 of one another, i.e. d < d' < 2d? That's the first question. And the second question is: how many distinct entries appear in the N by N multiplication table? Of course some entries appear in many ways; if you remove all the repetitions, how many do you have left? The motivation for the first one is to really start understanding more local properties of the sequence of divisors, because from a more global point of view we do have some rough estimates: we know that log log d_j, for the jth smallest divisor, is roughly equal to (log j)/log 2. If I take this very literally, I might think that the logs of consecutive ratios d_{j+1}/d_j tend to infinity — but this is very false, a very misleading model to use. Erdős conjectured that the divisors really tend to form very large clusters, so he thought the answer to the first question is yes. As for the second problem, there is the elementary-school way of stating it, but there is also a more general way of asking it: given an interval [y, z], how many integers up to x have a divisor in this interval? It's a very natural question that might appear in all sorts of other problems. If you take x = N², then to appear in the N by N multiplication table basically means you have two factors roughly of size N. So the second, more general question captures the first if you take [y, z] to be, let's say, a dyadic interval around N.
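A brute-force count (my own illustration, not from the talk) makes Erdős's multiplication table phenomenon visible even at small N: the fraction of distinct entries in the N by N table is well below 1 and decreases slowly with N.

```python
# Count distinct entries of the N x N multiplication table directly.
N = 300
products = {a * b for a in range(1, N + 1) for b in range(1, N + 1)}
ratio = len(products) / N**2
# The ratio is noticeably below 1 and decays slowly, like a small
# negative power of log N (Ford's theorem gives the exact order).
print(len(products), ratio)
```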
Okay, so these are the two problems that have inspired most of the work in the subject, and I will discuss these problems and also two generalizations of them. The generalization of the first one: we don't just ask for two divisors, we ask for many divisors that are all packed together in a small logarithmic interval. So maybe I think of k as a function of x, and I ask: is it true that for most integers n up to x I can find k divisors that are close together in this dyadic sense? This question can be reformulated in terms of the Delta function introduced by Hooley. The Delta function Δ(n) measures the maximal concentration of divisors in a dyadic interval: you count how many divisors of n you can find between y and 2y, and then you take the maximum over all y. So problem A*, let's say, asks how big Δ(n) is for a typical integer; it's equivalent. When Hooley introduced this function — in the paper where he introduced it — he studied its average value, and his motivation for doing so was actually quite different: he noticed that if he had good estimates for this average, he could deduce corollaries about various Diophantine problems, about counting solutions to equations and inequalities. So it's quite an original idea that he had. In this talk I will be interested in the almost-sure behavior of Δ(n). And the generalization of the other problem, the multiplication table problem, is going to be in k dimensions. In problem A* we have k divisors we want to pack together; here we have k divisors whose product we want to control — we want to write n as a product of k factors. And again, I can think of this problem from a more general point of view: I give you some intervals [y_j, z_j].
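Hooley's Δ function, in the dyadic form used in the talk, can be computed directly for small n; this little sketch (my own, with the maximum taken over windows starting at a divisor, which suffices for half-open windows [u, 2u)) shows how clustering of divisors is measured.

```python
def divisors(n):
    """Sorted list of the divisors of n."""
    divs, d = [], 1
    while d * d <= n:
        if n % d == 0:
            divs.append(d)
            if d != n // d:
                divs.append(n // d)
        d += 1
    return sorted(divs)

def delta(n):
    """Dyadic Hooley Delta: max number of divisors in a window [u, 2u).

    For half-open windows it is enough to let u run over the divisors.
    """
    divs = divisors(n)
    return max(sum(1 for e in divs if d <= e < 2 * d) for d in divs)

# 12 has divisors 1,2,3,4,6,12: at most two fit in a dyadic window.
# 30030 = 2*3*5*7*11*13 clusters eight divisors in [55, 110).
print(delta(12), delta(30030))
```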
Then the question is: for how many n is there a factorization of n with each factor lying in the corresponding interval? So I'm going to first talk about the first problem of Erdős. In all these problems there is a very common principle that I hope will come across in my talk: the right scale to look at all of them is the logarithmic one. So in the Erdős problem, what we want to find is two divisors whose ratio is very close to one — equivalently, the logarithm of the ratio is very close to zero. So let me consider the quantities log(d'/d) for all possible pairs of distinct divisors d, d' of n — I should have said that d and d' here are different from each other — and let me put an interval of radius log 2 around each of them. Then you can check the following: there are distinct divisors d, d' that lie in the same dyadic interval if and only if this set R(n), the union of the intervals around the logs of the ratios, contains zero. Okay, so now we can think of this in a more geometric way. The heuristic is that as long as R(n) is big enough — as long as R(n) covers pretty much as much as it can cover — then zero will lie in it, or actually any given point will lie in it, with high probability. So that's the kind of condition we have on R(n). Notice that all of the points log(d'/d) lie between −log n and log n, so naturally R(n) is contained in the interval from −log 2n to log 2n. I view this as a geometric constraint on how big R(n) can be. Against it there is a combinatorial count: how many distinct fractions d'/d I have. And the number of distinct fractions is not 4^ω(n) but 3^ω(n), because you have to cancel common prime factors.
Okay, so just think of n being squarefree. Then for every prime I have three options: either I put it in the denominator, or in the numerator, or I don't put it anywhere. So you have about 3^ω(n) possibilities for these intervals. So then the question is: what do I need in order to have enough intervals to have a chance of covering all of this interval, or a big portion of it? Well, I would need the combinatorial count to be bigger than the geometric count. And here, because a typical integer n has about log log n prime factors, 3^ω(n) is about (log n)^{log 3}, and because log 3 is bigger than one, I have many more intervals than the measure of the interval I'm trying to cover. So unless there is some crazy conspiracy between these intervals, I ought to be able to cover this interval from −log 2n to log 2n really quite well. And then there is a side argument to show that if this happens — if the measure of R(n) really is this big — then we can locate a ratio whose log is close to zero; we want the ratio to be close to one, or taking logarithms, we want zero to be in R(n). There is a method for doing this, which is to use some additional big primes to shift the set around, and to go from covering well — sort of well, but maybe with some holes — to really covering any given point you like. This is quite technical, so I don't want to really discuss it. But this is, in a nutshell, the argument of Maier and Tenenbaum from 1984.
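The count 3^ω(n) of distinct fractions d'/d can be checked by brute force for a small squarefree n; this sketch (names are mine) enumerates the three choices per prime and verifies that no two choices give the same reduced fraction.

```python
from fractions import Fraction
from itertools import product

def distinct_ratios(primes):
    """For squarefree n = product of `primes`, count distinct fractions d'/d.

    Each prime independently goes to the numerator (+1), the
    denominator (-1), or neither (0).
    """
    ratios = set()
    for choice in product((-1, 0, 1), repeat=len(primes)):
        num = den = 1
        for p, c in zip(primes, choice):
            if c == 1:
                num *= p
            elif c == -1:
                den *= p
        ratios.add(Fraction(num, den))
    return len(ratios)

print(distinct_ratios([2, 3, 5]))      # 3**3 = 27
print(distinct_ratios([2, 3, 5, 7]))   # 3**4 = 81
```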
And what Maier and Tenenbaum noticed — they proved this conjecture of Erdős, and actually they proved a lot more — is that there is a lot of room here between (log n)^{log 3} and log n. Indeed, Maier and Tenenbaum showed that you can even shrink the diameters of these intervals and still get a large measure, because you have so many intervals. And the conclusion is that for almost all n there are distinct divisors d, d' whose ratio is smaller than 1 + (log n)^{−c} for a positive constant c; so it gets really close to one. That's the heuristic reason, a very rough explanation of why this ought to be true. So now let me switch gears. Before I speak about problem A, let me speak a little bit about problem A*, the generalization concerning the Hooley Delta function. Because this argument has really a lot of room between (log n)^{log 3} and log n, there are two directions in which one can improve it. One is the one I mentioned already, which is to get divisors in even shorter intervals. But Maier and Tenenbaum noticed that there is also a different way: one can modify this argument, by being more clever, to prove that you can actually cluster lots and lots of divisors in a small interval. So recall the notation R(n). Now let me fix two parameters y and z, and consider the portion n(y, z) of my integer n that consists of the prime factors between y and z; and let me assume that everything is squarefree, so I don't really have to worry about squares and things like that. Okay, so now I will try to construct two divisors that are close together using this restricted set of prime factors between y and z. And now I have two competing constraints.
The first one is again a geometric one. All of the divisors of n(y, z) are of course at most n(y, z), and because this is a z-smooth integer, most of the time it is z^{O(1)}, so its log is bounded by a constant times log z. This means that the set R(n(y, z)) basically lies between −C log z and C log z for most n, with very high probability. So this is the geometric constraint, and the combinatorial constraint is how many distinct intervals I have: I have 3 to the number of prime factors of n between y and z, and for a typical integer the number of prime factors between y and z is the double-logarithmic length of the interval, so it should be log(log z / log y). Okay, so what do you need in order to have a chance of covering a good portion of the big interval? You would need this combinatorial bound, (log z / log y)^{log 3}, to be bigger than log z. And this happens when log z is bigger than a suitable power of log y, namely (log y)^{log 3/(log 3 − 1)}. In this case we have enough intervals to have a good chance of covering a big portion of the interval, and then we have a really good chance of finding a ratio d'/d that is close to one — I keep making this mistake between "close to zero" and "close to one" — where the prime factors of d and d' all lie between y and z. Okay, that's great, because now I can iterate this process and use not just one interval but j disjoint intervals. For each interval I get two divisors, but divisors coming from different intervals are completely coprime, so I can take all possible products of these divisors to actually get 2^j divisors of n that are close together.
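The product trick can be illustrated numerically. Here are three hand-picked "close pairs" built from disjoint sets of primes (my own illustrative numbers, standing in for the pairs produced by the covering argument); taking one factor from each pair gives 2³ = 8 products, all within a small multiplicative window.

```python
from itertools import product as cartesian
from math import prod

# Three pairs of close divisors, built from pairwise-disjoint sets of primes:
pairs = [(14, 15),    # 2*7 vs 3*5, primes from {2,3,5,7}
         (209, 221),  # 11*19 vs 13*17
         (851, 899)]  # 23*37 vs 29*31

# All 2^3 = 8 products of one element from each pair are distinct divisors
# of n = 2*3*5*7*11*13*17*19*23*29*31*37, and their logs are close together:
products = sorted(prod(choice) for choice in cartesian(*pairs))
print(len(set(products)), products[-1] / products[0])  # 8 divisors, ratio < 1.2
```

The maximal ratio is the product of the three pair ratios (15/14)(221/209)(899/851) ≈ 1.197, so all eight divisors land in a single dyadic interval.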
So this is the product trick that gets you from two divisors to many. And using this argument, Maier and Tenenbaum showed that the Hooley Delta function gets quite big: Δ(n) ≥ (log log n)^{h_1} for almost all n, where h_1 = log 2 / log(log 3/(log 3 − 1)). This is a positive exponent, about 0.28754. All right, so that is what I'll say for now about problems A and A*. Let me now discuss problems B and B*. So here we're trying to find integers with a divisor in some interval; I will focus on the dyadic case, where we want a divisor between y and 2y. Right, so again in this case I'm going to work at the logarithmic scale: I look at all the numbers log d for divisors d of n, and put an interval of length log 2 around each log d. Well, it's an asymmetric interval in this case, but that's just a technicality. And what I want is that the union L(n) of these intervals contains a specific number, which is log y. So you can see it's very similar in spirit to the previous situation. And here is the heuristic: the probability that this set L(n) contains log y should basically be the average of its measure divided by log x, because L(n) here lives in an interval of length log x. If L(n) were somehow a random set, then this is what you would expect. This is quite a reasonable guess, and this first question mark can indeed be justified and made rigorous. And then there is a leap of faith — where I have these two question marks — that says that maybe this measure looks like the minimum of τ(n) log 2 and log x. Okay, so τ(n) is the number of divisors; I use this standard notation for the divisor function.
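The Maier–Tenenbaum exponent can be evaluated directly; this one-liner (my reconstruction of the formula from the stated value) reproduces the constant 0.28754… quoted in the talk.

```python
import math

# Maier-Tenenbaum (1984) exponent: Delta(n) >= (log log n)^h1 for almost all n
h1 = math.log(2) / math.log(math.log(3) / (math.log(3) - 1))
print(h1)  # ≈ 0.28754
```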
So this minimum represents the two types of constraints one has, like before: in this case too we have a geometric constraint, log x, which is that L(n) is a subset of [0, log x], and a combinatorial constraint, which is that L(n) is the union of τ(n) intervals of bounded length. It's the balance between these two constraints that is important. Now, this is quite easy to analyze, and one can show that all the action happens when τ(n) has size about log x — when you have exactly as many intervals as you would need to cover all of your interval, no more than that, and no less either. This goes back to the work of Erdős, then improved by Tenenbaum in the 80s: the crucial range of n to look at is when n has log log x/log 2 + o(log log x) prime factors, so that the two constraints balance each other. And this leads, at least when y is a fixed power of x, to the guess that the probability of having a divisor between y and 2y is about (log x)^{−δ}. It's just a calculation of this right-hand side that can be done quite easily; δ is a constant that is quite well known and appears a lot in this subject. And Ford, in his work on this problem, showed that this heuristic — the double question mark — is not really that accurate. In order to make it accurate, he showed that there are other constraints one needs to take into account: you cannot have too many small prime factors at any scale. Basically, at any scale up to y, you have to have at most about log log y/log 2 prime divisors.
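The constant δ mentioned here has the well-known closed form δ = 1 − (1 + log log 2)/log 2 (this formula is my reconstruction; it is the standard constant of the multiplication table problem and matches the exponent in Ford's theorem):

```python
import math

# The Erdos-Ford constant of the multiplication table problem
delta = 1 - (1 + math.log(math.log(2))) / math.log(2)
print(delta)  # ≈ 0.086071
```

Its small size is why the density of entries in the multiplication table decays so slowly.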
And in order for this measure to be as big as it can be, which is log x — and this is not a very frequent event; it only happens with probability about 1/log log x — I should have used a different variable here: the condition should hold with a variable t, for all t up to y, with y fixed. So Ford used this new idea to prove that the probability of having a divisor between y and 2y is about the naive guess, but with an extra power of log log y in the denominator. Okay. All right, so now let us study the higher-dimensional analogue of this problem. Here we want to understand the probability that n has a factorization into k factors, and really you just need to fix the sizes of k − 1 of the factors and the size of your integer; then the last one is also fixed. So this question is basically equivalent to understanding the probability that, if you pick n up to x, there is a product of k − 1 integers that divides n, with each of the k − 1 integers in its own dyadic interval [y_i, 2y_i]. And like in the previous case, I should come up with an appropriate set whose distribution I need to understand; in this case it is this set L_k(n). Right, so what do you do? This set now lives in k − 1 dimensions: for each (k−1)-tuple (d_1, …, d_{k−1}) of divisors with d_1 ⋯ d_{k−1} dividing n, you put a cube of side length log 2 around the point (log d_1, …, log d_{k−1}). And of course, because you want the d_i to be at most about y_i, we should be assuming that the d_i are y_i-smooth; you don't want to consider all of these cubes but only the relevant ones. And the question is now: what kind of constraints does one have on this set?
And these constraints should give me some sort of heuristic guess for how big this probability should be, in analogy with the previous setup. So, okay, let me write ω_i for the number of prime factors of n between y_{i−1} and y_i, where y_0 is just 1. Only the prime factors up to y_{k−1} are important, so I partition them into intervals according to which thresholds they have surpassed or not. And then there are two competing bounds on the volume of L_k(n). One is the geometric bound: the d_i here are y_i-smooth, so their logs can really be as big as log y_i at most, and I just take the product of all these — the first dimension is bounded by log y_1, the second dimension by log y_2, etc. The other competing constraint is a combinatorial one that counts how many tuples, hence how many cubes, I have. And it's an easy exercise to show that the total count of tuples is given by the product k^{ω_1} (k−1)^{ω_2} ⋯: a prime in the ith interval can divide any one of d_i, …, d_{k−1}, or none of them. Okay, and even though this naive heuristic turns out to be not quite correct, it still comes really close to the truth, at least as far as the correct power of log x is concerned. So let's just ignore powers of log log x for now, and let us use the previous heuristic to guess the right answer for this probability. Following the heuristic, I should look at when these constraints balance each other: I look at all the possible choices for the cardinalities ω_i such that the combinatorial product is basically equal to the geometric product, up to constants, and I ask what the proportion of integers is such that ω_i(n) = m_i for each i.
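The combinatorial count k^{ω_1}(k−1)^{ω_2}⋯ can be verified by brute force on a tiny example (my own, with k = 3): each prime in the first interval has 3 options (go into d_1, into d_2, or into neither), and each prime in the second interval has 2 options (into d_2 or neither).

```python
# k = 3, intervals I1 = {2,3} (omega_1 = 2) and I2 = {5,7} (omega_2 = 2);
# predicted number of tuples (d1, d2): 3**2 * 2**2 = 36.
n = 2 * 3 * 5 * 7

def divisors(m):
    return [d for d in range(1, m + 1) if m % d == 0]

count = 0
for d1 in divisors(2 * 3):       # d1 is y1-smooth: primes from I1 only
    for d2 in divisors(n):       # d2 is y2-smooth: any prime of n
        if n % (d1 * d2) == 0:   # d1*d2 divides n (n squarefree, so coprimality is automatic)
            count += 1
print(count)  # 36
```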
So the proportion of such integers is roughly given by k − 1 independent Poisson distributions with parameters λ_i, as I explained in the second slide of the talk. So this is the expected proportion of integers with ω_i(n) = m_i, and then I just take all admissible combinations and maximize over all of them. Okay, so this is the guess one might make for this probability using this sort of interplay between combinatorial and geometric conditions. And then understanding this maximum is just a calculus exercise: using, for example, Stirling's formula and Lagrange multipliers, you can calculate it. Okay, so is this actually true? Well, back in 2014 I showed that if k is at most six, then this heuristic is true — and actually it's true in a strong sense, in the sense that one can get the exact order, like in Ford's theorem, if one considers additional constraints of this form, which are too complicated to state now, so I'll just skip all of that. But if k is at least seven, then things change. In general, you do have this kind of heuristic being true if all the side lengths are sort of the same. You have a little bit of room here: you can let the biggest of the log y_i be as large as the smallest one raised to the power 1 + ε, for some ε > 0. So really, in a sense, this is best possible. And here is a particular case where the heuristic fails. Take k to be seven, take the first five thresholds to be all the same, and take the sixth one free and bigger. Then there is a particular constant c, which we can calculate to arbitrary precision — these are a few of its digits — such that if log y_6 is smaller than (log y_1)^{c−ε}, then the heuristic is true.
But if the sixth dimension is really much bigger than the other ones, then the situation changes. And this really makes sense, because now you are considering a multiplication table where all the dimensions are really tiny and there is one that is much bigger. Right, so the insight, when considering this k-dimensional multiplication table problem, was to think about lower-dimensional constraints that come from these other sub-configurations — constraints that are somehow a hybrid version of the geometric and combinatorial constraints I mentioned before. So here's an example in the case k = 7. Let's factor our integer n into seven parts according to these thresholds: n_i is whatever part of n is composed of primes between y_{i−1} and y_i. Then d_i divides n_1 ⋯ n_i, because it's y_i-smooth. What I can do is fix the first five coordinates d_1, …, d_5 — just fix all those d's — and look at the last dimension only, which is really imposing a lower-dimensional constraint, and then take a union bound in this fashion. And this gives me another possible constraint on how big the measure of L_7(n) can be: it's bounded by the measure of a lower-dimensional object — always one dimension down — times the number of all the possible choices of (d_1, …, d_5). Okay, so you have all these other constraints to fight with, and in this particular example this is the important one: this is what comes into play and ruins the naive heuristic.
Okay, so these kinds of lower-dimensional constraints really come into play, and one has to consider them quite carefully when one starts going into the higher-dimensional multiplication table problem, the higher-dimensional theory of divisors. So now let me finally switch gears and speak a bit about the A* problem — the generalization of the first problem of Erdős, about the Hooley Delta function — which is recent work that I did with Kevin Ford and Ben Green. Okay, so recall how one can improve on the original 1984 Maier–Tenenbaum result. First of all, let me recall their argument. What did they do to produce a large value of Δ(n) and to pack many divisors close together? They used j disjoint intervals (y_i, z_i] such that for each interval we can find two distinct divisors d_i, d_i' that are close together and consist only of primes from that interval. Right, and then I can take every possible product and construct 2^j divisors close together. Okay, so this is the original 1984 Maier–Tenenbaum result. Then in 2009 they revisited their result, and they said: well, wait a minute, this is really a suboptimal construction, because after I've used all the primes in (y_1, z_1] to find d_1 and d_1', when I'm trying to construct the divisors d_2 and d_2', I can use all the primes in (y_2, z_2], but there are also some remaining primes in (y_1, z_1] that maybe I could use as well. Then I have more primes to work with, so maybe I can construct these divisors in a more efficient way. And indeed they showed that this new idea leads to an improved lower bound on Hooley's Delta function, almost surely: the previous exponent was 0.28 something, and this one is 0.33 something. So in recent work with Ford and Green, we took a different point of view.
We have a question in the chat by Christian Tuffalo — Christian, maybe you just want to unmute and ask directly? So the question is why it breaks exactly at seven; I think it refers to the previous slide. It's because this type of lower-dimensional constraint is not important before then: it doesn't give an upper bound better than the trivial one, the product log y_1 ⋯ log y_6. The trivial bound one was always comparing against is the total volume, the product of the log y_i's; you optimize, you find whatever the optimal configuration is, and it turns out that for smaller k these lower-dimensional upper bounds are not relevant — they bound the measure by something even bigger than this product. "I see, thank you." Okay, so, right, I explained to you how Maier and Tenenbaum improved their previous result in 2009, and now I'll explain how in recent work we improved the 2009 result using a different idea. This is much closer in spirit, in a sense, to the first argument. Instead of finding j disjoint intervals each containing two distinct divisors, why not try to find j disjoint intervals (y_i, z_i] such that in each one you can find k distinct divisors, for some fixed k that is bigger than two? Okay, let's try and play this game and see what it gives. And what it ended up giving is a better lower bound on Δ(n): an exponent h_3 which is a little bit bigger than 0.35332, a little bigger than the previous lower bound of Maier and Tenenbaum. And it does have a precise definition — not explicit, but precise — which I'm going to give towards the end; but it's really complicated, so I'm skipping it for now.
Okay, so let me try to explain the key point: how can one modify the Maier–Tenenbaum construction so that, instead of constructing two divisors that are close together, we construct k divisors that are close together? It turns out this is basically a linear algebra problem merged with some complicated combinatorics. All right, so what we want to understand is when there are distinct divisors d_1, …, d_k, all consisting of prime factors of our typical random integer n between y and z, that are all close together — or, if I take logarithms, their logs are close together. I can view this as a linear condition on the logs of the prime factors of n between y and z. Right, because of course I can factor each d_i into its prime factors, and then the numbers log p play the role of the unknown variables. Okay, and then you have this linear system that is going to determine some of these random variables, and you want to do this as efficiently as possible in order to pack k divisors close together. Okay, so the first step is understanding the combinatorics, because this is a total mess now: all the d_i's might interact with each other. In the generalized multiplication table problem the d_i's do not interact with each other, because they consist of completely disjoint sets of primes — the product d_1 d_2 ⋯ d_{k−1} must divide n, so all the d_i's are completely disjoint, and you have a very simple Venn diagram between the prime factors of the d_i's. Here the Venn diagram can be a total mess, and we really need to understand this combinatorial object if we want to make progress towards this problem. All right, so the cells of the Venn diagram can be indexed by vectors ω in the unit cube {0, 1}^k.
Right, so for each ω I consider D_ω, which is the portion of the divisors built out of the primes that appear exactly in those d_i's for which ω_i = 1 and do not appear in the other ones. This is how you partition the Venn diagram; you index everything with this notation. I can now write the vector of the logs of the d_i's as a linear combination over the vectors of the unit cube, where the coefficients are the log D_ω's. The advantage of doing this is that the D_ω's are of course coprime by construction, because they have disjoint sets of prime factors, so they behave sort of independently of each other — at least they're decoupled. So what you need to understand is the possible distribution of these log d_i's, and whether you have a chance of detecting something that is very close to zero mod one. And of course, the distribution of those log d_i's will depend on the kind of configuration you condition on, the configuration you're going to examine. The configurations are going to be determined by the parts of the Venn diagram that are actually non-trivial: maybe you only want to construct divisors whose Venn diagram is not the full Venn diagram — not everything is non-trivial, only a small part of it. Then you're going to have geometric conditions and combinatorial conditions. The geometric conditions arise by linear algebra, by studying the structure of the space spanned by these ω's, modded out by the constant vector 1. The combinatorial conditions come from the distribution of the prime factors of the D_ω's, because this tells you how many such divisors you have.
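In symbols, the decoupling just described reads (again in my notation):

```latex
(\log d_1, \ldots, \log d_k)
  \;=\; \sum_{\omega \in \{0,1\}^k \setminus \{0\}} (\log D_\omega)\,\omega ,
% where the D_\omega are pairwise coprime by construction, since they are
% built out of disjoint sets of primes.
```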
Okay, so it gets quite complicated, and I don't want to explain it in too much detail, but the most important — and perhaps the easiest — thing to explain is the geometric constraints. The question is: in what kind of box do these vectors lie? In the previous cases, when I was looking at the Maier–Tenenbaum construction or the k-dimensional multiplication table problem, it was pretty obvious what the box was in which everything was lying. But now it's not really that obvious, and one has to think a little bit about it. So how can I understand what the box is? Well, I need to understand the longest possible dimension of this vector. The longest possible coordinate of this vector will be whatever the projection is onto some vector ω^(1), chosen so that log D_{ω^(1)} is as big as possible among all the choices I have. And because the log of an integer is basically controlled by the log of its largest prime factor — at least most of the time — it suffices to look at whichever part of this Venn diagram has the largest prime in it. Then, what is the second longest dimension? You look at an ω^(2) that is linearly independent from the first one — you remove everything that is already in the span of the first direction — and you maximize again: you take whichever has the largest prime among all the remaining choices. In general, assuming you have constructed the j longest dimensions, the next one, ω^(j+1), is the one with the maximum prime factor among those not linearly dependent on the previous ones. So this is how you construct ω^(1) up to ω^(r), say, and this is also a way to construct a special basis of this vector space V.
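The iterative choice of ω^(1), ω^(2), … can be sketched in code. This is a toy version of my own, not the paper's algorithm: each cell ω carries a score standing in for log P^+(D_ω), and we greedily pick the highest-scoring ω that is linearly independent of the ones already chosen together with the all-ones vector, which plays the role of modding out by the constant direction.

```python
# Toy greedy basis selection (my own sketch, not code from the paper).
# cells maps an incidence vector omega (a 0/1 tuple) to a score that
# stands in for log P^+(D_omega), the log of the largest prime in that cell.
import numpy as np

def greedy_basis(cells):
    """Return [omega^(1), omega^(2), ...] chosen greedily by score,
    keeping each new vector linearly independent of the span of the
    all-ones vector and the previously chosen vectors."""
    k = len(next(iter(cells)))
    ones = np.ones(k)
    chosen = []
    # Visit cells from the largest score downwards.
    for omega, _score in sorted(cells.items(), key=lambda item: -item[1]):
        current = np.vstack([ones] + [np.array(w, dtype=float) for w in chosen])
        extended = np.vstack([current, np.array(omega, dtype=float)])
        # Keep omega only if it genuinely increases the dimension.
        if np.linalg.matrix_rank(extended) > np.linalg.matrix_rank(current):
            chosen.append(omega)
    return chosen
```

With k = 3 and scores {(1,1,1): 9, (1,0,0): 7, (0,1,0): 5, (0,1,1): 3}, the top-scoring cell (1,1,1) is skipped because it coincides with the all-ones vector, and the procedure returns [(1,0,0), (0,1,0)].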
And this basis tells you what geometric constraint you need to impose on these objects, so that it then becomes meaningful to ask whether they are well distributed or not: your measure of comparison is this product here, determined by this algorithm. Right, so what ends up being important is the size of these things, and the quantity that captures it is the biggest prime factor, P^+(D_{ω^(j)}) — sorry, there is a typo on the slide, a pretty bad typo. So you end up with a filtration of vector spaces, caused by this process of constructing the basis in this iterative way; you have some thresholds that basically control how big your coordinates are; and you have some probability measures whose role I'll explain in a minute. Then the combinatorial construction, in order to get k divisors close together: you look only at the part of the Venn diagram lying in V_r, the top vector space; you want all the prime factors of the cells corresponding to vectors in V_j to be controlled by the j-th threshold; and then you control the distribution of the prime divisors of the D_ω's between consecutive thresholds. So the conditions get really complicated — this is just a rough idea. You have this argument that produces these vector spaces, and then you also want to control the combinatorics. And given all of this data, there are various kinds of constraints — geometric and combinatorial constraints — one might consider that are sort of your enemy: they might force these vectors to lie in a subspace of small dimension, a set of small measure.
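Schematically, and with the caveat that I am reconstructing the notation from the description above, the data consists of a flag of subspaces with matching thresholds:

```latex
% A flag of subspaces of the span of the omega's (modulo the constant vector):
V_1 \subseteq V_2 \subseteq \cdots \subseteq V_r,
% thresholds c_1 \le c_2 \le \cdots \le c_r controlling coordinate sizes, and
% probability measures \mu_1, \ldots, \mu_r; the cells D_\omega with
% \omega \in V_j are required to satisfy \log P^{+}(D_\omega) \le c_j.
```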
And so these are all these lower-dimensional constraints, which take a specific shape. I'm basically out of time, so I'm going to skip this, but just to tell you: basically, just like in the k-dimensional problem — though now in a much more complicated way — we have all these possible lower-dimensional constraints that we need to consider, and they are expressed in terms of the thresholds c_j and the entropies of these measures μ_j. What you then have to do is solve this quite complicated optimization problem, where you need to find the optimal value of c for which such a system of data exists. And what we proved is basically an equivalence: you can find such a threshold c — allowing you to construct k divisors close together using prime factors between thresholds like these — if and only if there is a choice of vector spaces, thresholds c_j, and probability measures satisfying these constraints. Right. So this is the theorem — the abstract theorem — and then we needed to exhibit a specific configuration that does better than the Maier–Tenenbaum construction, which can itself be seen as the configuration with k = 2^r. And we came up with something called the binary flag, which does better. Okay, but I'm out of time, and it's getting quite technical, and I don't want to rush through all these really technical slides, so thank you for your attention.