Thank you very much for the invitation. It's really a pleasure and an honor, especially as I'm not a number theorist. I communicate with many number theorists and I always feel like an idiot, because they're so clever and know so many things, while I still have trouble distinguishing the Möbius function from things that merely sound like it. But this talk, I think, actually is a talk in number theory proper, and that is the main message I would like to pass along. This is a talk about the sum-product problem, which presumably everybody knows. It has usually been treated and studied as a question of arithmetic or additive combinatorics, or perhaps geometric combinatorics; everything I've seen done on this problem was done with those methods. But this is, I think, the first time it is attacked with number theory proper, and now that this work has been done, I think of it as an open problem in number theory more than anything else. This is a reasonably recent paper — it came out in late spring or summer — with Brandon Hanson, Ilya Shkredov and Dmitry (Dima) Zhelezov. Against common practice, I emphasize Brandon, because he was really the brain behind this project: the work could easily have been done without me, but it definitely wouldn't have been done without him. Okay, now let me get my cursor to work. So, Erdős and Szemerédi. I'm going to quote from the 1983 Erdős–Szemerédi paper. I was practicing my fake Hungarian accent, but it sounds like rubbish, so I'll just read it in English. Suppose we have n integers, and consider the integers of the form which are either pairwise sums or pairwise products.
They then write: it is tempting to conjecture that for every ε > 0 there is a threshold value n₀, so that for every n > n₀ the number of distinct sums or products is at least n^(2−ε) — almost n², almost as many as one can possibly get. Since that paper, this has been known as the sum-product conjecture. Furthermore, they conjecture more: that for every k, and n big enough, there are more than n^(k−ε) distinct integers of the form which are k-fold sums or k-fold products — again, almost as many as you can possibly get. And even more carefully, they say: perhaps our conjectures remain true if the a's are real or complex numbers. I will say just a few words about these many-fold sums and products, because the problem is still wide open in that case as well. By the way, if anyone has questions, it would be really good if you just stop me and ask. Right, so what is best known? First of all, most of the studies — pretty much all of the studies I am aware of — have been done over the reals, or perhaps finite fields; I am not going to talk about finite fields in this talk. The world record over the reals is that the number of distinct sums or products is at least |A| to a power slightly bigger than 4/3. In other words, one can take ε slightly smaller than 2/3, where ε measures how far we are from the holy grail exponent 2. For multiple sums or products the known result is much weaker, and in fact is known only over the integers; we don't know much about multiple sums and products over fields.
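As a toy numerical illustration of the phenomenon behind the conjecture (my own check, not from the talk): an interval has few sums but many products, a geometric progression has few products but many sums, and in both cases the larger of the two is big.

```python
# Brute-force sumsets and product sets of two extreme examples:
# an interval (additively structured) and a geometric progression
# (multiplicatively structured).

def sums(A):
    return {a + b for a in A for b in A}

def products(A):
    return {a * b for a in A for b in A}

interval = set(range(1, 21))           # A = {1, ..., 20}: |A+A| = 2|A| - 1
geometric = {2**i for i in range(20)}  # A = {1, 2, ..., 2^19}: |AA| = 2|A| - 1

print(len(sums(interval)), len(products(interval)))    # few sums, many products
print(len(sums(geometric)), len(products(geometric)))  # many sums, few products
```

Even at this tiny scale, max(|A+A|, |AA|) comfortably exceeds |A|^(4/3) for both examples.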
Basically, this rather convoluted-looking estimate says that if you have n elements in the set A and you want to reach cardinality n^b, then roughly 2^(b²) is the number of times you need to add or multiply the set A with itself. This is a 2020 result of Brandon Hanson, Oliver Roche-Newton and Dmitrii Zhelezov. (Question: n here is the cardinality of A, right? — Yes, n is the cardinality of A, thank you.) This estimate follows from a more specific one, which is really the essence of a lot of previous work: an asymptotic case of the conjecture that some people call "few products imply many sums", or FPMS. The specific result behind the estimate is this: in the asymptotic case, when the product set is reasonably small — |AA| = M|A|, where you can think of M as |A| to a very, very small power (not a log, but a very small power) — the size of the k-fold sumset is |A|^k, which is as much as you would want, up to a small correction with an ε sitting in it. You are welcome to take ε very small compared to, say, k, so the correction can be made pretty small. However, the price you pay is that M, the multiplicative doubling constant, comes in a huge power: you have to divide by M to the power 2/ε. This is the best few-products-many-sums estimate, and we are only going to be talking about the case k = 2 — just the sumset and the product set proper. So from now on k = 2, and I'll rewrite the estimate for k = 2. In this form the estimate was obtained by Dmitrii Zhelezov and Dömötör Pálvölgyi, in about 2016 — or maybe 2019, actually.
Actually, I think it came out in 2016. It was what I call a demystification of a famous paper of Bourgain and Chang from 2004. Bourgain and Chang had qualitatively the same type of estimate, maybe with slightly weaker numerology, but that paper is still inaccessible to a lay reader like myself. On top of this, I should add that the Zhelezov–Pálvölgyi proof — I really like this proof — builds on an earlier paper with more authors; Dömötör wasn't on that one, but it was Imre Ruzsa, Dmitrii Zhelezov, George Shakan, and Matolcsi Jr., the son of Máté Matolcsi — I don't remember his first name. So they wrote what I call the demystification of Bourgain and Chang. However, just recently, in 2023, this summer, there is a paper of the trio Ben Green, Freddie Manners and Terry Tao, where they reprove this estimate even more slickly, using entropy. So the proof is even shorter now — quite mysterious as far as I'm concerned — but there is now also an entropy proof of this estimate. Right, now a few technical things that probably everyone knows. First of all, we are going to be talking about energy, mostly additive energy: the second moment of convolution. Roughly, we take every sum in the sumset, let r(s) be its number of realizations, and add up the squares of these numbers. And since the underlying equation can be rearranged — the terms can be moved around — the energy counts either realizations of sums or realizations of differences; s stands for sum, obviously, and d for difference.
Multiplicative energy is defined, obviously, in a similar way. Looking at this energy quantity: if we had power one instead of two, we would just count the total number of realizations of all sums, which is |A|². And if every sum had the maximum number of realizations, |A| itself, we would get |A|³. So there are trivial estimates on both sides: |A|² ≤ E(A) ≤ |A|³. Cauchy–Schwarz, of course, tells us that the cardinality of A ± A is bounded below by the ratio |A|⁴/E(A). In other words — and this is how so many things have been done — in order to bound a cardinality from below, you bound the second moment from above. This is what happens in, say, Guth and Katz's resolution of the Erdős distinct distances conjecture, which I still think is one of the most impressive papers I have ever seen, if not the most impressive: instead of estimating the cardinality of the distance set from below, you estimate an energy from above, and then use Cauchy–Schwarz. And, just for bookkeeping, exponential sums can be used: define a function on the circle as an exponential sum with frequencies in your set A; then the energy is just the fourth power of the L⁴ norm of this exponential sum, because complex conjugation corresponds to the minus sign — two of the four factors come from here, and two more come from there. This is tautological: when you take the fourth power of this expression and integrate over the circle, any term in which the relation between the exponents fails simply vanishes. So traditionally, the sum-product problem was approached in the following way: one would try to bound energy of one type via the set of the opposite type.
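The definitions just given can be made concrete in a few lines (a small sketch with a hypothetical choice of A): the energy as the sum of squared representation counts, the trivial bounds, and the Cauchy–Schwarz bound |A+A| ≥ |A|⁴/E⁺(A).

```python
from collections import Counter

def additive_energy(A):
    """E+(A) = #{(a, b, c, d) in A^4 : a + b = c + d},
    computed as the sum of squared representation counts r(s)^2."""
    r = Counter(a + b for a in A for b in A)
    return sum(v * v for v in r.values())

A = list(range(1, 11))    # A = {1, ..., 10}
E = additive_energy(A)
sumset = {a + b for a in A for b in A}

# Trivial bounds |A|^2 <= E+(A) <= |A|^3, and Cauchy-Schwarz:
# |A + A| >= |A|^4 / E+(A).
print(E, len(sumset), len(A) ** 4 / E)
```

Large energy means many heavily represented sums, hence a small sumset; that is exactly why bounding the second moment from above bounds the cardinality from below.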
One version is a well-known 2008 estimate of Solymosi, which bounds the multiplicative energy via the sumset: the multiplicative energy is bounded, pretty much, by the sumset squared, times a log that you cannot actually get rid of. This estimate immediately implies FSMP — few sums, many products. In other words, if we have few sums — if |A + A| is comparable to |A| — then the multiplicative energy of A is nearly trivial. As we've seen, energy always sits between |A|² and |A|³; so if |A + A| is roughly |A|, the multiplicative energy is pretty much |A|². This is the trivial case, and via Cauchy–Schwarz it automatically implies that the number of products is almost as big as it gets, up to the log. So few sums, many products: issue closed — it is true for the reals, hence for the integers as well. Now, few products, many sums: here we only know the answer over Z; we don't have an analogue over the reals, only some partial results that I am not going to talk about. These are the estimates that were started by Bourgain — that come from Bourgain and Chang and their demystifications. The estimate now takes this form: if the multiplicative doubling constant is M — if the product set is M times bigger than A — then there is a large subset of A (throughout, A′ will always denote a large subset of A; basically you can think of A′ as A, at least in cardinality) such that this subset has reasonably small energy: |A′|², the minimum possible, up to an exponent 4ε that we have to pay — okay, we often have to pay in various epsilons. But on top of this, unfortunately, there is a factor which grows pretty wild as ε goes to zero.
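Solymosi's inequality can be sanity-checked numerically (my own toy check, constants ignored) on an interval, which is the archetypal few-sums set: its multiplicative energy comes out far below |A+A|² log|A|.

```python
import math
from collections import Counter

def mult_energy(A):
    """E_x(A) = #{(a, b, c, d) in A^4 : a * b = c * d}."""
    r = Counter(a * b for a in A for b in A)
    return sum(v * v for v in r.values())

A = list(range(1, 11))    # an interval: |A + A| = 2|A| - 1, the minimum
sumset_size = len({a + b for a in A for b in A})
E_mult = mult_energy(A)

# Solymosi-style bound (up to constants): E_x(A) <~ |A+A|^2 * log|A|.
# Small multiplicative energy then forces, via Cauchy-Schwarz, a
# near-maximal product set.
print(E_mult, sumset_size ** 2 * math.log(len(A)))
```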
So these estimates are really asymptotic: they hold for M equal to, say, |A|^(1/1000) or something like that. But for the genuine sum-product conjecture — when we think of an A whose product set is, say, of size |A|^(3/2) — this is completely useless. Still, this estimate — we basically prove that if A has a small product set, then it has a large subset whose energy is pretty much as small as we wish — suggests a conjecture: the following product-set-versus-energy statement, which would be stronger than the sum-product conjecture. Can we say the following: is it true that every set A has a large subset A′ (the way I read the known estimate, A′ doesn't have to be large, but let's think of it as large) such that either the product set, or what Cauchy–Schwarz would give me for the cardinality of A′ + A′ — that is, |A′|⁴ divided by the energy of A′ — is almost |A|²? This is a stronger statement than the sum-product conjecture. The answer is no. The best you can expect is the exponent 5/3 rather than 2, and the example is very simple; it is due to Balog and Wooley, 2017. There it is: we take the interval from 1 to n², and n dilates of it, so the cardinality of A is n³.
So we have the interval [1, n²], and then we take n dilates of it by powers of some huge prime p, bigger than everything in sight. The dilates differ from one another so much that different dilates have no additive interaction with one another. Now look at the product set. When we multiply, the powers of p — which form an arithmetic progression between 1 and n — add up, and the maximum power is just 2n. When we multiply the interval [1, n²] by itself, we get almost n⁴ distinct products — not quite n⁴, but n⁴ divided by log n to some power. On top of this, we get about 2n different powers of p. So altogether we get almost n⁵, and since |A| = n³, the size of the product set is roughly |A|^(5/3). On the other hand, consider the energy. Every element of A is a member of an arithmetic progression of length n² — one of the dilates — so, viewed as an element of A + A, each sum within a dilate has about n² realizations. Squaring this n² and summing over A gives about n⁷, which is |A|^(7/3), for the energy. And that accounts for the bound: 5/3 is the best we can get from this estimate, and the o(1) there is genuine. And on top of this, nothing will change —
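Here is a tiny instance of the Balog–Wooley construction (n = 3, so |A| = n³ = 27, with a hypothetical prime p = 83 > n⁴ chosen so the scales cannot collide); even at this size the product set stays below |A|^(5/3):

```python
# n dilates of the interval [1, n^2] by powers of a prime p larger than
# any product of two interval elements, so different power-of-p scales
# never produce the same integer.

n, p = 3, 83
A = {g * p**j for g in range(1, n**2 + 1) for j in range(1, n + 1)}

products = {a * b for a in A for b in A}
print(len(A), len(products), len(A) ** (5 / 3))
```

The product set factors as (products of the interval) × (values of p^(j1+j2)), which is where the loss against the maximal size |A|² comes from.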
This argument will not break down if, instead of A itself — a union of arithmetic progressions which live on different scales and do not interact with each other additively — we take reasonably large subsets of these arithmetic progressions. So 5/3 is the best we can get. And this is the main result I am going to talk about. Suppose A is a subset of Z. Then there exists a subset A′ of A, large in cardinality, such that the estimate I've just claimed holds, with exponent 5/3. The o(1) here subsumes all sorts of logarithms — well, this specific logarithm — and there is also a o(1) that depends on r. But there is no r in this formulation yet, because there is an unfortunate condition: each member of the set A has at most r prime factors. This does not mean, of course, that the set A is spanned by a few primes — not at all; each member of A has its own prime factors. What matters is that the number of prime factors of each a ∈ A is bounded by almost log |A|. I wish I could make it log |A| itself, but I can't, so there has to be an ε. If we return to the example: the theorem is certainly sharp apropos of this example, because instead of the interval I can take an arithmetic progression of primes — we know there are arithmetic progressions of primes as long as we want — and then take the extra prime p gigantic in comparison to this arithmetic progression of primes. I will still get 5/3, and in this case my r is just equal to two: every member of A has exactly two prime factors, the common gigantic p and one of the primes from the arithmetic progression.
So already for two prime factors the stronger statement fails. In fact, it was Szemerédi who asked this question. At some point I told him something like: the Erdős distance problem is solved. And he said: oh no, no, it's not solved — what about the single distance problem? That is the real problem. And then he said: you guys are working with these sum-products; can you actually do something, say over the integers, in the case when every member of your set has, say, 10 prime factors? That is how we started thinking about it. Then we realized that even two prime factors pose a problem, so we started with just two prime factors, and then it went on to more prime factors; now we can handle this. Okay, and there is an interesting corollary which follows from our methodology — it is not obvious from the theorem itself. In parallel to the sum-product problem, people have studied the quantity |A ± AA|, and here is an interesting point. If we take A to be an arithmetic progression — the interval from 1 through N — then AA is pretty much everything from 1 through N², and A + AA is still everything of that order, so the size is about |A|² (with all constants equal to one). Or we can take A to be a geometric progression: then A · A is essentially A itself, but adding a geometric progression to itself squares its size as a sumset. The corollary says that this quantity |A ± AA| is always much bigger than |A|², unless we are in one of two endpoint cases: either the product set is as large as it can be — M comparable to |A| — or the product set is very small.
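The two extreme examples for |A + AA| are easy to check numerically (my own illustration): for an interval, |A + AA| stays around |A|², while for a geometric progression AA is tiny but A + AA already exceeds |A|².

```python
# Compare |A + AA| for the interval and for a geometric progression.

def a_plus_aa(A):
    AA = {b * c for b in A for c in A}
    return {a + x for a in A for x in AA}

interval = set(range(1, 11))           # A + AA lands in [2, n^2 + n]
geometric = {2**i for i in range(10)}  # AA = {2^k : 0 <= k <= 18}, only 19 values

print(len(a_plus_aa(interval)), len(a_plus_aa(geometric)))
```

For the geometric progression, the sums 2^m + 2^k are distinct for distinct unordered pairs {m, k} (distinct binary expansions), which is why the count beats |A|² = 100.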
So, when M is very small. Now, what are the ingredients, and why is this number theory? There are three ingredients. The first ingredient is a structure theorem, and the structure theorem is combinatorics. All three ingredients would break down if we removed the few-prime-factors assumption; some of them are more stringent than others about the number of prime factors, but weirdly enough they are quite compatible — the number of prime factors each of the three can digest is more or less the same. So, one: the structure theorem, which is just basic combinatorics and pigeonholing. Two: we worked on this one pretty hard, and did most of it ourselves, and then discovered a paper called "A martingale that occurs in harmonic analysis" — that is the title of the paper, by Gundy and Varopoulos, from 1976 — and realized that it can be used as a black box for what we want. And three: an application of a result from around the same time, Schmidt's subspace theorem — a special case of the subspace theorem. This, of course, is number theory miles beyond my own understanding of number theory, and interestingly enough it also seems indispensable. The martingale ingredient is genuine harmonic analysis, not just bookkeeping: I wrote exponential sums above and said those were just bookkeeping — no, this is harmonic analysis proper; you really cannot see it on just one side of the Fourier transform. And the subspace theorem — well, the subspace theorem is the subspace theorem. There it is, the version we are going to use: suppose we have r primes, and they generate a multiplicative group Γ. Then we take some number L and compose a linear combination.
So we take a bunch of coefficients a₁ through a_L — whatever they are, we fix them — and we count the solutions of the equation in which a linear combination of these fixed a's with elements of our group Γ sums to one. We need non-degenerate solutions, of course, because if some subsum were equal to zero, we could manufacture a much larger number of solutions altogether. Counting these non-degenerate solutions, the number is bounded by L — the length of the linear combination — raised to a large power; and most importantly, the rank appears in this bound, as L times r in the exponent. And then these three pieces will come together. Okay. First let me show you that without any number theory and without any harmonic analysis, the exponent 3/2 is easy: all it needs is the structure theorem and basics. Here is the statement of the structure theorem. It looks very convoluted, but I'd like to demystify it, because it is very simple and very easy to prove. It says that there is a large subset of A which has structure, as follows. Every member of A has at most r prime factors, and there exists a set of common primes, r′ of them, such that every member of the subset A′ is divisible by one of the common primes. Now every member of A′ looks like this: there is some set B, which I think of as a base set, and each element of B is multiplied by a bunch of powers of these common primes. In other words, every member of A′ is b, an element of the base set, times an element of Γ, where Γ is exactly the group generated by the common primes.
So every element of A′ is an element b of the base set B times some element of Γ. Moreover, everything can be regularized. The sets of group elements multiplying different b's can be different, but — and this is general pigeonholing, nothing deep — essentially the number of powers of each common prime multiplying every b is roughly the same, up to, say, a factor of two. That is what this expression says: every b in the base set is multiplied by some L₁ powers of the first prime, L₂ powers of the second prime, and so on through the last prime. In other words, every element b of the base set is multiplied by approximately the same number L of elements of Γ, and this L is pretty much the same for every b. Therefore, roughly speaking, |A′| is just L — the number of group elements multiplying each b — times |B|; let's just think of L·|B| as equal to |A| in size. One more important thing I didn't say: 50% of the pairs from the base set are coprime. So when we multiply B by B, we get |B|² products — B, in other words, has pretty much no multiplicative structure; all the multiplicative structure has been absorbed into the group Γ over here. And so the size of the product set is at least |B|²·L = |A|²/L: when we multiply, we lose a factor of L from the maximum possible value, because we know nothing about the powers of the common primes — all these powers can sit in arithmetic progressions of some kind. That's the structure theorem.
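The basic factorization underlying the structure theorem can be sketched in code (a hypothetical toy set with a single common prime; this is just the split a = b·g, not the regularization argument):

```python
# Split each a in A as a = b * g, where g carries all the powers of the
# "common primes" and the base part b is coprime to them.

def split(a, common_primes):
    g = 1
    for p in common_primes:
        while a % p == 0:
            a //= p
            g *= p
    return a, g   # (base part b, group part g), with a == b * g

A = [6, 12, 24, 10, 20, 40, 14, 28, 56]   # base set {3, 5, 7} times powers of 2
for a in A:
    b, g = split(a, [2])
    assert a == b * g and b % 2 == 1
    print(a, "=", b, "*", g)
```

Here the base set is {3, 5, 7} (pairwise coprime, so no multiplicative structure), each multiplied by L = 3 powers of the common prime 2 — exactly the shape |A′| ≈ L·|B| from the theorem.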
In addition to this, there is an old lemma of Chang, from her 2003 Annals paper on the Erdős–Szemerédi conjecture; that whole paper was based on this lemma, and the lemma is very easy. It bounds the additive energy of the set A. Suppose A is a subset of Z, symmetric — symmetric means A = −A, so we don't have to distinguish between sums and differences. Let p be some prime, and let A_v be the part of A whose p-adic valuation equals v: the elements of A divisible by p^v exactly — not a bigger power of p, not a smaller one. Then the energy of A is bounded by the sum over v — over the different powers of p — of the energies E(A_v, A). What does that mean? The total energy is the number of solutions of the equation a₁ + a₂ = a₃ + a₄, and each of these a's is divisible by some exact power of p. All the lemma says is that the powers of p dividing these four a's cannot all be different. Because if they were all different, we would get a contradiction: say this one is not divisible by p at all, and the other three are divisible by distinct positive powers of p — then the equation would force the first one to be divisible by p after all. That's it; that is the content of Chang's lemma. Now take this estimate and Cauchy–Schwarz it — this is just the usual Cauchy–Schwarz inequality in energy notation; if you haven't seen it, it doesn't matter. Cancel the square root, and combine it with the structure theorem. Combined, the lemma tells you that the energy of A′ is at most about L² times the energy of B, while A′ itself has size L times |B|.
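The divisibility observation at the heart of the lemma is easy to brute-force (my own check, with p = 2 and a small range): in every solution of a₁ + a₂ = a₃ + a₄ with nonzero terms, the minimal p-adic valuation is attained by at least two of the four terms.

```python
# If exactly one term had strictly minimal p-adic valuation v, the other
# three would be divisible by p^(v+1), and the equation would force the
# exceptional term to be divisible by p^(v+1) too -- a contradiction.

def val(a, p):
    """p-adic valuation of a positive integer a."""
    v = 0
    while a % p == 0:
        a //= p
        v += 1
    return v

p, N = 2, 30
for a1 in range(1, N):
    for a2 in range(1, N):
        for a3 in range(1, N):
            a4 = a1 + a2 - a3
            if 1 <= a4 < N:
                vs = sorted(val(a, p) for a in (a1, a2, a3, a4))
                assert vs[0] == vs[1]   # the minimum is attained at least twice
print("checked all quadruples with entries below", N)
```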
The worst possible case for the energy of A′ would be L³·|B|³; instead you get L²·|B|³ — in other words, when you estimate the energy of A′, you win a factor of L. On the other hand, as we said, on the product set we lose exactly a factor of L. When we put these two things together and optimize, we get the estimate with exponent 3/2, done. Everything else that I'm going to talk about in the last minutes is how to improve this 3/2 to 5/3. The 3/2 is basic: there is no analysis, no number theory in a sense — only the pigeonhole principle and divisibility, that's it; this can be taught to anyone. Now, from 3/2 to 5/3 — this is the main improvement. The main claim is that the energy of A satisfies a much stronger bound. In Chang's lemma, all we required is that the p-adic valuations of two of the four terms are the same, and the valuations of the other two can be anything. The strengthening is that, as a matter of fact, the valuations on the left-hand side have to be equal and the valuations on the right-hand side have to be equal. I've written ν_Γ(a₁) = ν_Γ(a₂), which means that the valuations of a₁ and a₂ must be equal relative to all the common primes p₁ through p_r: all valuations of the two terms on the left-hand side have to match, and likewise for the two terms on the right-hand side. This is a much stronger bound, and proving it is harmonic analysis proper, in a sense: it uses maximal functions and stopping times — standard, but genuine twentieth-century stuff.
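Spelled out, the optimization just described looks like this (my reconstruction from the quantities in the talk, with constants and epsilons suppressed):

```latex
% Structure theorem: |A'| \approx L|B|, and the product set loses a factor of L:
\[
  |AA| \;\gtrsim\; |B|^{2} L \;=\; \frac{|A|^{2}}{L}.
\]
% Chang's lemma: E^{+}(A') \lesssim L^{2} E^{+}(B) \le L^{2}|B|^{3} = |A|^{3}/L,
% so Cauchy--Schwarz gives
\[
  |A' + A'| \;\ge\; \frac{|A'|^{4}}{E^{+}(A')}
            \;\gtrsim\; \frac{|A|^{4}}{|A|^{3}/L} \;=\; L\,|A|.
\]
% The two bounds balance at L = |A|^{1/2}, giving
\[
  \max\bigl(|AA|,\;|A'+A'|\bigr) \;\gtrsim\; |A|^{3/2}.
\]
```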
From here the conclusion is more or less immediate. If we assume this claim is true, then the energy of A′ — the A′ we got from the structure theorem — is at most |A′|², the trivial term (the trivial solutions, where these two terms are equal and those two terms are equal), plus the remaining term. Why is that so? Because, by the structure theorem, each of these a's is b times some element of Γ, and the claim says the corresponding group elements have to be the same on each side. So there are two b's underlying these a's, with some g here and some g′ there, and when I divide them, what I get is an element of the group. Hence the energy is bounded by this expression. In other words, I can think about it like this: suppose a₁ is anything; then what do I know about a₂?
I know that the group element multiplying a₂ is the same, so for a₂ I only have the choice of b. As far as a₃ and a₄ are concerned, all their options are subsumed in this quantity: the difference b₃ − b₄ falls into the group. Now, how do I evaluate this quantity? This is where the number theory kicks in. Unless the rank equals one — if there is only one common prime, I can do it by hand, with a one-page induction proof — I need the full power of the subspace theorem, and this is the statement. It says that this quantity is pretty much |B|: |B| times something very small, that is, |B| to the power 1 + o(1). In other words, if I look at the equation b₁ − b₂ = 1, then of course the number of solutions can be as large as |B|; but if b₁ − b₂ merely lies in a whole multiplicative group, the number of possible solutions is not much worse — still pretty much |B| times something which is little-o of the size of B. That's the subspace theorem. And if I put these things together — you can trust me — I get my statement, my exponent 5/3. Essentially, what does this tell me? I know that I have lost a factor of L on the product set. And now, when I look at the energy, how much do I win? I win |A′| times |B|², and since |A′| = L|B|, this is L·|B|³: I have won L² on the energy. In the trivial case I had won only L, but here I win L², and when I put this together and optimize, instead of 3/2 I get 5/3.
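The improved bookkeeping, again with epsilons suppressed (my reconstruction of the step just described):

```latex
% Strengthened energy bound, via the subspace theorem:
\[
  E^{+}(A') \;\lesssim\; |A'|\,|B|^{2+o(1)}
            \;=\; L|B|^{3+o(1)} \;=\; \frac{|A|^{3+o(1)}}{L^{2}},
\]
% so Cauchy--Schwarz now yields |A'+A'| \gtrsim L^{2}|A|,
% while still |AA| \gtrsim |A|^{2}/L.  Balancing:
\[
  \max\Bigl(\frac{|A|^{2}}{L},\; L^{2}|A|\Bigr) \;\ge\; |A|^{5/3},
  \qquad \text{with the worst case at } L = |A|^{1/3}.
\]
```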
Right, so in the last five minutes I'll just give you a few hints on how this is proven, and really just drop a few names. Suppose we have just one common prime, and assume A is the same as A'. Then our set A looks as follows: there is a base set B, and every element of B is multiplied by a bunch of powers of the prime p. These powers, the set V(b), can be different for different b's, but the cardinality of each V(b) is roughly the same and equal to L. And this is the main proposition in the rank-one case: in the energy count, the p-valuations must pair up, so a1 and a2 share some p-valuation v, and a3 and a4 share some p-valuation v'. Therefore, when we calculate the energy, there is again the trivial term, corresponding to a1 = a2 and a3 = a4, and for the non-trivial terms we count as follows: we have |A'| choices for a1; knowing a1 we know the p-valuation of a2, so we have |B| choices for a2; and since a3 and a4 have equal p-valuations, if b corresponds to a3 and b' to a4, then b − b' must lie, well, maybe not in the group itself, but in a fixed coset of it defined by this quantity. And of course it does not matter whether it is the group or a coset of it, because we can just divide b and b' by the coset representative gamma, so the estimate is unaffected. An estimate like this is very easy here, because the group has rank one, just the powers of the prime p: I can write all the elements of B in base-p digits, shift things around, and prove the estimate, and the o(1) there is actually a log. So there is not much to do, it is a one-page exercise, but it only works for rank one; for rank one I do not need any subspace theorem. For rank one, namely
for one common prime, namely for the Balog–Wooley example in a sense, or rather for a refined version of it, where an arithmetic progression of primes is multiplied by powers of some other prime, this five-thirds follows. The main thrust of the five-thirds is in this martingale/harmonic-analysis proposition, so just one word about the proof; again this is name- and term-dropping, because there is obviously no time and it would be too much. What I do is consider an exponential sum over the set A. I have one common prime, and I partition this exponential sum by the powers of the prime p multiplying the elements of A, that is, by the exponents v. Then I define the square function, which is a standard quantity in harmonic analysis, in maximal function theory, in Littlewood–Paley theory and whatnot; the definition is standard. I take each piece of the exponential sum, the piece where the frequency is divisible by a specific power of p exactly, take the square of its modulus, sum over the v's, and take the square root: this is the square function. And there is a theorem from 1970 which tells me that for any q bigger than one, the L^q norm of f itself is of the same magnitude as the L^q norm of the square function. This is exactly what my statement about the energy says, about equal p-valuations on the left-hand side and the right-hand side, so the application of this theorem is a tautology. There is also a maximal function here, but again we do not have time for this; these are just the standard additional constructs that have to be created in order to prove this inequality. Moreover, this inequality works in the opposite direction as well, and this is
necessary to embrace more primes. So on the proof, all I want to say, and I will definitely finish at this point, is the following. I take my exponential sum, just a sum of complex exponentials whose frequencies lie in my set A, and I split it into pieces according to where a is divisible exactly by p^v. But I also need to consider a cutoff of this sum: I cut off the sum at those a's which are divisible by, say, p^10 or a higher power of p. Then all I need to notice is that I can rewrite this in the following form: if I take the part of the exponential sum over those a's which are divisible by p^10 or higher, I can obtain its value at a point t as follows. I take the point t on the unit circle and average my sum over the regular polygon with p^10 vertices. That is why these cutoffs form a reverse martingale: I take a point t on the unit circle and average my exponential sum over the regular polygon with p^v vertices, and the bigger v, the finer the polygon and the more detailed the average, which is why it is a martingale. Then all I need is to, basically, go to Wikipedia, dig out some martingale inequalities, apply them in the correct way along with Cauchy–Schwarz, and get the result, the main proposition for rank one.
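Both identities just described, the polygon averaging that produces the cutoffs and the q = 2 square-function equality, can be checked numerically in a few lines. This is only a sanity-check sketch with a made-up set A and prime p = 3, not the paper's argument:

```python
import numpy as np

p = 3
A = [1, 3, 9, 5, 15, 45, 7, 21, 63]   # {b * 3^v : b in {1, 5, 7}, v in {0, 1, 2}}
N = 243                                # grid finer than every frequency, so discrete Parseval is exact
t = np.arange(N) / N

f = sum(np.exp(2j * np.pi * a * t) for a in A)

def cutoff(v):
    """Average f over the regular p^v-gon: only frequencies divisible by p^v survive."""
    return np.mean([sum(np.exp(2j * np.pi * a * (t + j / p ** v)) for a in A)
                    for j in range(p ** v)], axis=0)

# Polygon-averaging identity: the average equals the direct cutoff sum.
direct = sum(np.exp(2j * np.pi * a * t) for a in A if a % p ** 2 == 0)
assert np.allclose(cutoff(2), direct)

# Pieces with valuation exactly v are differences of consecutive cutoffs; with
# disjoint frequency supports, ||f||_2^2 equals the L^2 norm squared of the
# square function (both sides are |A| by Parseval) -- the q = 2 case of the
# square-function theorem.
S2 = sum(abs(cutoff(v) - cutoff(v + 1)) ** 2 for v in range(3))
assert np.allclose(np.mean(abs(f) ** 2), np.mean(S2))
assert np.allclose(np.mean(S2), len(A))
```

For q = 2 the equivalence is exact orthogonality; the content of the 1970-vintage theorem mentioned above is that the comparison survives, up to constants, for other q.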
And for rank higher than one I need more harmonic-analysis tricks: I need the Khintchine inequality, some randomization, the inequalities going the other way. But this is all doable, and if I race through these pages I arrive at this quantity, which is again the same thing, b1 − b2 falling into the group, where the group now has r generators, and for that I have the subspace theorem. Getting there I use this formula, and there is a little lemma in there, but this r is crucial: this is really the most stringent imposition of this condition. And I guess it is time to stop.
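Since the Khintchine inequality carries the randomization step, here is an exact toy check at the fourth moment (my own computation, enumerating all sign patterns, not part of the paper): for Rademacher signs, E(sum e_i x_i)^4 = 3 (sum x_i^2)^2 − 2 sum x_i^4, which is at most 3 (sum x_i^2)^2, the q = 4 Khintchine bound.

```python
from itertools import product

x = [1, 2, 3, 4, 5]
n = len(x)

# Exact fourth moment over all 2^n Rademacher sign patterns.
total = sum(sum(e * xi for e, xi in zip(eps, x)) ** 4
            for eps in product([-1, 1], repeat=n))
moment4 = total // 2 ** n

s2 = sum(xi ** 2 for xi in x)
s4 = sum(xi ** 4 for xi in x)

# Exact Rademacher identity, and the Khintchine inequality at q = 4.
assert moment4 == 3 * s2 ** 2 - 2 * s4
assert moment4 <= 3 * s2 ** 2
```

The point of such inequalities in the proof is precisely this comparison between an L^q moment of a random signed sum and the square function sum of squares.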