So firstly, yes, again, thank you to the organizers for giving me an opportunity to talk about some of my recent work towards variations on the sum-product conjecture. And I should also mention, Misha Rudnev gave a very nice talk about this sort of problem a couple of months ago, and he defined everything. But again, I'm going to redefine a lot of things and do some of the basic examples. So for the experts and people who are already familiar with this, I apologize; things might be a bit slow at the beginning, but hopefully there'll be something for everyone by the time the talk ends. Okay, so with that, I'll start talking about the sum-product conjecture. The sum-product conjecture revolves around two very central objects in additive combinatorics known as the sum set and the product set. So given a set A of n integers and some integer s, I can define the s-fold sum set of A to be the set of all s-fold sums where all the summands are from A. And similarly, the s-fold product set of A is just the set of all s-fold products where all the elements being multiplied arise from A. So in particular, you pick any s elements from A, you add them together, and then you see how many possible such sums you can get. And in fact, a big question in additive combinatorics concerns how large the sum set or the product set is compared to the original size of the set. So to start off, there are some super trivial bounds on these guys: the cardinality of the sum set and the cardinality of the product set of any set A of n integers must be bounded between n and n to the power s. The upper bound is trivial because you have n to the power s many possibilities for all of the elements individually. And the lower bound is also trivial, because you can fix a2, a3 up to as and just let a1 vary, and that will give you n distinct sums and n distinct products.
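To make the definitions concrete, here is a minimal sketch (the function names and the example set are my own, not from the talk) that computes the s-fold sum set and product set of a small set of integers and checks the super trivial bounds just described:

```python
from itertools import combinations_with_replacement
from math import prod

def s_fold_sumset(A, s):
    # all sums a1 + ... + as with every ai drawn from A (repeats allowed)
    return {sum(c) for c in combinations_with_replacement(A, s)}

def s_fold_productset(A, s):
    # all products a1 * ... * as with every ai drawn from A
    return {prod(c) for c in combinations_with_replacement(A, s)}

A = {1, 2, 3, 5, 11}
n, s = len(A), 3
for S in (s_fold_sumset(A, s), s_fold_productset(A, s)):
    assert n <= len(S) <= n ** s   # the super trivial bounds
```

Since order of the summands doesn't affect a sum or a product, iterating over multisets (`combinations_with_replacement`) gives exactly the same set as iterating over all tuples, just faster.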
So these are the super trivial upper and lower bounds. And essentially a big conjecture in this area is: can you say anything about the set A — given no other description of it — if I just tell you that the sum set sA is very close to n, or is at least a bit far away from the upper bound? And the reason why people care about this is because the sum set is a very good way to measure how much additive structure you've got in your set, and similarly, the product set is a very good way to measure the multiplicative structure of your set. Maybe I can make this a bit more precise by doing some examples. So in particular, when I let A be 1, 2, 3 up to n — so this is an arithmetic progression, which is very, very additively structured — I see that the sum set does not have that many elements. That's just because the sum set is contained in the interval from s up to s times n, so the sum set actually grows quite slowly; it's very close to the trivial lower bound. This is actually a good time to mention that throughout this talk, I'm thinking of n as the parameter that's going to infinity, and the rest of the parameters like s should be treated roughly as constants. So I really care about how things grow with respect to n, in particular what kind of an exponent I'm getting on n. And to again exemplify this, we can see that the product set of A actually blows up, and it's quite close to the upper bound that we have. To get this product set, you can just take s distinct primes in 1 up to n. The number of ways in which you can choose s distinct primes is roughly this, and if you multiply s distinct primes, you genuinely get a new product every time. So that's a very easy way to see this lower bound, and it's quite close to n to the power s. Also, for anyone who's unfamiliar with this notation, I'm just hiding some absolute constant that depends on s — in particular, I'm hiding something like 1 over s factorial in this case.
But you should just think of this as greater than or equal to some constant times the main term. The other obvious example is when A is a geometric progression, say powers of 2. And here you can see that the product set is quite small, because the product set will just consist of powers of 2, and there aren't too many possible exponents. On the other hand, the sum set explodes; it's quite close to the trivial upper bound that we had. And that's because if you take any s distinct powers of 2, they give a distinct sum, by uniqueness of binary representations. So just to summarize: a very additively structured set has a very small sum set but a very large product set, and a very multiplicatively structured set has a very small product set but a huge sum set. So somehow this is already saying that there's some sort of incongruence between having additive structure and multiplicative structure. Something that's very additively structured will have very little multiplicative structure, and that's exemplified by the product set being quite large. With this heuristic, one would be tempted to conjecture that these two are essentially the extremal examples. And this leads us to a very nice conjecture of Erdős and Szemerédi, which says that if you look at any set A of n integers, either the sum set or the product set must be as large as possible. So in particular, if, say, the sum set was small, then your set should somewhat resemble an arithmetic progression, and that should make the product set explode, and the other way around. But this is just a heuristic, and again, it's very rough — I think it's a very hard question to genuinely describe A given only that the product set is small. So that's why I'm just writing this as a heuristic: if the product set is somewhat small, then your set should resemble a geometric progression, while if the sum set is small, you should resemble an arithmetic progression.
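A quick numerical sanity check of the two extremal examples (my own illustration, with n = 30 and s = 2): the AP's sum set stays linear in n while its product set is much bigger, and the GP behaves in exactly the opposite way.

```python
n = 30
AP = set(range(1, n + 1))           # arithmetic progression: very additive
GP = {2 ** k for k in range(n)}     # geometric progression: very multiplicative

def sums(A):
    return {a + b for a in A for b in A}

def prods(A):
    return {a * b for a in A for b in A}

# AP: sum set has just 2n - 1 elements, product set is much larger
assert len(sums(AP)) == 2 * n - 1
assert len(prods(AP)) > len(sums(AP))

# GP: product set has just 2n - 1 elements (exponents 0..2n-2), sum set
# is maximal: all sums 2^i + 2^j are distinct by uniqueness of binary
# representations, so we get one sum per unordered pair {i, j}
assert len(prods(GP)) == 2 * n - 1
assert len(sums(GP)) == n * (n + 1) // 2
```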
And these two things just look very different; they behave very differently. And a lot of results towards this conjecture try to quantify this heuristic. Before we go too deep into how we can prove results towards this, it's worth asking why we should prove results towards this: what is the motivation behind this conjecture? To me, it looks intrinsically nice, because it studies this incongruence between addition and multiplication over finite sets. But there is some external motivation as well. In this particular setting, it's quite important to study what happens if an arbitrary set has a small sum set, and that's why this connects very naturally to Freiman-type inverse theorems in additive combinatorics, especially deep conjectures like the polynomial Freiman–Ruzsa conjecture, which I'll maybe mention again at the end. So it has very nice connections to questions in additive combinatorics. It turns out you can also ask this question in other settings; you don't have to focus on integers. You can also ask this in finite fields. So one possible example is: if you have a set A of size n in F_p, where p is sufficiently large with respect to n, can you get some sort of expansion between the sum set and product set? And it turns out the finite field case has a lot of applications to questions of a combinatorial geometric nature and to questions of a number-theoretic nature in finite fields. I think this was first studied by Bourgain, Katz and Tao. They proved a non-trivial sum-product estimate in F_p, and they used it to prove an incidence-geometric result in F_p. So it had a lot of nice applications, and then Bourgain used these ideas in other papers as well. And finally, you can consider a discretized version: instead of having a finite set, you can think of A as a union of a few intervals.
So now this is a genuine measurable subset, let's say, of the real line, and you can ask a similar question. It's, again, related to some very nice problems in analysis. So I guess at some level, studying this conflict between addition and multiplication can naturally be used to study a lot of other problems; there's some external motivation as well. With this, I hope I have provided enough motivation, so maybe I can mention what sort of results are known towards this conjecture. As far as I know, there's some contrast between the various cases. So let me focus on the s equals 2 case, which has seen loads and loads of nice results towards this problem. Erdős and Szemerédi proved a non-trivial result, and then Nathanson made this a bit more quantitative. Then came Elekes, who actually came up with a combinatorial geometric idea to study this problem, which was very nice. Then Solymosi's result was another highlight, where he wrote up a very, very nice combinatorial argument to get the very nice exponent 4/3. And then the question was, can you get beyond 4/3? There has been a series of results in this direction, culminating in some very recent work of Misha Rudnev and Sophie Stevens, where they show that if you have a set of n integers, either the sum set or the product set must be quite large. So we are still super far away from n squared, but we have at least made some progress beyond the trivial lower bound n. And the nice thing is that a lot of these results have involved a lot of geometric ideas. In contrast to this is the case when s is large, where by large I mean, say, take s to be at least 20, take s to be at least 100. But now, if you're taking the s-fold sum set and the s-fold product set, you want your exponent to grow with s; using these geometric techniques, you only get a fixed exponent like n squared or n cubed or something.
But the question is, can you genuinely have an exponent that goes to infinity with s? Because that's the least you'd expect — you expect this to grow like n to the power s. But even getting an exponent which went to infinity with s was quite a hard question. And it was solved by Bourgain and Chang, I think in 2004, in their breakthrough result, where they showed that the sum set or the product set grows like n to the power (log s) to the power 1/4. And in fact, contrary to the s equals 2 case, there has only been one further improvement in this regime. This was by Pálvölgyi and Zhelezov, who proved that you can have the exponent log s over log log s. And again, in comparison, this uses more analytic and combinatorial ideas, as compared to the heavy geometric flavor of the s equals 2 case. But that's essentially the state of the art. So we are still super far away from n to the power 2 minus epsilon and n to the power s minus epsilon respectively, but there's been a lot of nice progress and a lot of nice techniques. Something that I personally would find very interesting is if one could even get a lower bound of something like n to the power s divided by 10,000 — some exponent that grows linearly with s. But that also seems quite hard. So that's the current status of the sum-product conjecture, which again was a question about how the sum set grows and how the product set grows; these were both notions of additive and multiplicative structure. In recent years, it's turned out that you can also study other notions of additive and multiplicative structure. In particular, this allows me to give a definition of what's known as additive and multiplicative energy. So I define the additive energy to be the count of solutions to a linear equation — to this very specific, nice additive equation — where all my variables are in my set A.
And similarly, the multiplicative energy is the number of solutions to this very nice multiplicative equation where all my elements are in my set A. And again, it's quite intuitive why this is a good measure of additive structure: if you have a set which is very additively structured, you should expect a lot of solutions to this equation, and similarly, if a set is multiplicatively structured, you should expect a lot of solutions to the multiplicative equation. So we'll just do the standard examples again. First, we have a trivial bound: both these counts of solutions lie between the trivial lower bound n to the power s and the trivial upper bound n to the power 2s minus 1. The lower bound counts exactly the diagonal solutions — the solutions where a1 equals a_{s+1}, a2 equals a_{s+2}, and so on. These are sort of the trivial solutions, because I can let the variables on one side vary and then fix the other side to be a permutation of them. And the trivial upper bound holds because you have one equation in 2s variables, and that's exactly what you get. So these are the trivial bounds. And as you can imagine, if I put A to be 1, 2, 3 up to n, something that's very additively structured, I exactly start achieving these bounds. In particular, via a standard application of the Cauchy–Schwarz inequality, the additive energy is quite large when the sum set is quite small. And remembering that the sum set roughly grew like n, we actually get a lower bound of n to the power 2s minus 1 when A is 1 up to n; it's super close to the trivial upper bound. On the other hand, the multiplicative energy is bounded by n to the power s plus epsilon. And yeah, I should probably not focus on this too much in a number theory seminar, where everyone would be super familiar with how to prove this. But just to mention: to show that the product set was large, we just took s distinct primes and said you get unique products.
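Here is a small sketch of the s = 2 energies for the two standard examples (function names are mine). For n = 20, the AP's additive energy and the GP's multiplicative energy both reach the same near-maximal order n cubed, while the GP's additive energy is essentially just the diagonal count:

```python
from collections import Counter

def additive_energy(A):
    # number of quadruples (a, b, c, d) in A^4 with a + b = c + d  (s = 2)
    counts = Counter(a + b for a in A for b in A)
    return sum(c * c for c in counts.values())

def multiplicative_energy(A):
    # number of quadruples (a, b, c, d) in A^4 with a * b = c * d
    counts = Counter(a * b for a in A for b in A)
    return sum(c * c for c in counts.values())

n = 20
AP = list(range(1, n + 1))
GP = [2 ** k for k in range(n)]

assert additive_energy(AP) == 5340            # order n^3: near the upper bound
assert multiplicative_energy(GP) == 5340      # same count, via the exponents
assert additive_energy(GP) == 2 * n * n - n   # 780: essentially diagonal only
assert multiplicative_energy(AP) < additive_energy(AP)
```

The identity `energy = sum of squared representation counts` used here is exactly the standard reformulation that makes the Cauchy-Schwarz link between energy and sum set size work.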
You can't apply that sort of approach here, I think. In particular, it could happen that you are getting solutions which come from numbers that are not primes — maybe numbers with a lot of small divisors. You can tackle those using what are known as estimates on the divisor function. But my point is that showing this count is small already requires a lot more effort than showing that the product set is large. So somehow showing that these counts of solutions are small is much harder than showing that the corresponding product set is quite large. Similarly, when A is a geometric progression, the additive energy can be shown to have almost diagonal-type behavior — it's quite close to the trivial lower bound — using a much more involved version of the trick where I take s distinct powers of 2. And the multiplicative energy must be large, again because if I apply Cauchy–Schwarz, I can show that when the product set is small, which is true in this case, the multiplicative energy must be quite large. So again we are in the situation where, for an additively structured set, the additive energy is super large and the multiplicative energy is super small, with the analogous statement for a multiplicatively structured set. So putting my naive sum-product cap on, I might be tempted to conjecture that in fact those two are the extremal cases in general: for any set A of n integers, either the additive energy or the multiplicative energy must be quite small. Another motivation to think of this idea is that if I could genuinely show this is true, I'd get the sum-product conjecture for free: if the additive energy is small, then the sum set must be large, and if the multiplicative energy is small, then the product set must be large. So just by having this bound, I'd get the sum-product conjecture for free by Cauchy–Schwarz.
I did mention this is a naive question, because if you think about it for a bit, you realize that it's obviously not true. I can take a union of an arithmetic progression and a geometric progression: I get my solutions to the additive equation from the arithmetic progression, and I get my solutions to the multiplicative equation from the geometric progression. So in fact, if I take a union of an AP and a GP, both my energies are quite large, and this sort of conjecture fails absolutely horribly for very simple reasons. But it turns out that this is essentially the only counterexample. So recently, Balog and Wooley proved this very nice result that you can still have some sort of incongruence between additive and multiplicative energy if you consider partitions of your set. So if you have any set A of n integers, you can partition it as B union C, where the additive energy of B has a non-trivial power saving and the multiplicative energy of C has a non-trivial power saving. This power saving is somewhat small, but it's still quite non-trivial. In particular, if you consider this example, my set B will be exactly the geometric progression, which has a small additive energy, and my set C will be exactly the arithmetic progression, which has a non-trivial saving over the multiplicative energy. So yeah, this is genuinely the only counterexample. At least for me, this is already quite aesthetically nice, but another reason people studied this was because you can feed these results back into the sum-product technology to get better exponents. And given the time I have for this talk, I won't show how one would properly do that, but I'll just show a very trivial calculation. So trivially, the sum set of A and the product set of A are at least the size of the sum set of B and the product set of C, because A contains B and C.
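The counterexample is easy to verify numerically. In this sketch (my own), A is a union of an AP and a GP, and both energies come out far above the diagonal count, which is roughly the square of the size of A:

```python
from collections import Counter

def energy(A, op):
    # count quadruples (a, b, c, d) in A^4 with op(a, b) == op(c, d)
    counts = Counter(op(a, b) for a in A for b in A)
    return sum(c * c for c in counts.values())

n = 16
A = set(range(1, n + 1)) | {2 ** k for k in range(n)}  # AP union GP
N = len(A)

add = energy(A, lambda a, b: a + b)
mul = energy(A, lambda a, b: a * b)
# both energies exceed the diagonal count (about N^2) by a wide margin:
# the AP part feeds the additive equation, the GP part the multiplicative one
assert add > 2 * N * N and mul > 2 * N * N
```

Since energy can only grow when you enlarge a set, the AP subset alone already forces the additive energy of A up, and the GP subset forces the multiplicative energy up.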
Applying Cauchy–Schwarz, I can show that it suffices to get upper bounds on the energies. And now if I assume the Balog–Wooley result, I know that both energies are small; but also, because A is partitioned as B and C, at least one of B or C must be quite large. So if B has size at least n over 2, then by Cauchy–Schwarz the sum set of B is at least the size of B to the power 2s divided by the additive energy of B, and the power saving in the energy gives me a non-trivial power saving in the sum set. So in particular, this guy would be large. And if C is large, then I can use the analogous estimate to show that the product set is large. So in particular, just by using the Balog–Wooley decomposition as a black box, I get a non-trivial sum-product result. I'm using this very trivially; the experts in the area can use these decompositions much more cleverly to get even further improvements, but just via a Cauchy–Schwarz inequality, for all values of s, you get a non-trivial sum-product result for free. So this was quite nice, but the Balog–Wooley constant was 2 over 33. There have been loads of improvements, but the constant somehow still stayed below one third. And Balog and Wooley asked a very natural question: given that we know the sum-product exponent goes to infinity with s, can we actually get a corresponding decomposition where the power savings here also go to infinity with s? In other words, could we qualitatively generalize the Bourgain–Chang theorem for energies instead of sum sets and product sets? So that's one of the main results I wanted to talk about today. I was able to show that any set A can be written as B union C, where the additive energy of B has a non-trivial power saving and the multiplicative energy of C has a non-trivial power saving, and this exponent goes to infinity with s. It goes quite slowly to infinity with s — it has a couple of logs — but it does genuinely go to infinity with s. So this proves the Balog–Wooley conjecture for integers quantitatively.
And so this is a genuine generalization of the Bourgain–Chang result — not quantitatively, but it already gives an exponent that goes to infinity with s, and it applies more generally to energies. A year after this result, Shkredov was able to remove one of the logs here: Shkredov improved this sort of estimate by showing that the exponent can now grow like (log s) to the power half. And in some more recent work, where instead of only looking at solutions to a linear equation and solutions to a multiplicative equation I was considering solutions to more general polynomial equations, I could actually get the exponent log s over log log s, which matches the best known exponent towards the sum-product conjecture for large values of s. So this was a genuine quantitative generalization of the Bourgain–Chang and Pálvölgyi–Zhelezov sum set and product set estimates, and it's kind of nice to get the same order of exponent that we had in the sum set/product set case. So it felt nice that we had this nice quantitative power saving for energies, which could give you the sum set/product set estimate for free via Cauchy–Schwarz. But a very reasonable question to ask is: sure, you can do this, but could you access some other results using this sort of energy decomposition that are not accessible via the sum-product estimates? Are there genuinely new applications that you can find using these notions of additive and multiplicative structure instead of the standard sum set and product set? It turns out that is genuinely the case. These sorts of results had further applications to the problem of finding additive and multiplicative Sidon sets. And I'll take a — well, it won't be brief — but I'll outline what this problem is and why one should think about it. So let me just give you a definition of these sets.
So given parameters s and g, I say a set X of integers is an additive B_s[g] set if any integer N has at most g representations as an s-fold sum coming from X. And again, you should think of g as a constant; if you want, g can be one. And here I'm counting solutions up to permutation, so x1 plus x2 is the same as x2 plus x1, at least for the count of these solutions. These sets are quite prevalent in combinatorial number theory — one studies how big an additive B_s[g] set can be inside some ambient set of integers. And similarly, I can define a multiplicative B_s[g] set: Y is a multiplicative B_s[g] set if for any N, the number of representations of N as an s-fold product from Y is at most g. Already from these definitions, you can see that an additive B_s[g] set is a set which has very little additive structure — there's very little additive structure that could account for solutions to this nice additive equation. And similarly, a multiplicative B_s[g] set is a set that is devoid of any multiplicative structure. An interesting problem in this sort of study is: can you find the largest additive B_s[1] set in 1, 2, up to N? I've mentioned a lot of names here. Erdős and Turán studied this question when s is 2. And again, there are people in the audience who would be much more knowledgeable about this question than me, but when I was surveying the literature, I couldn't find the most precise reference for this question when s is at least three. Jia, I think, was the one who studied upper bounds for this sort of question in 1993. Cilleruelo had a lot of beautiful work towards studying the largeness of B_s[1] sets. And in a survey article of Tim Gowers, I was able to find the question where he asked: what is the size of the largest B_s[1] set for s greater than or equal to three in 1 up to N? And I think the best known results for that question, at least up to some constant, are by Ben Green from 2001.
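A tiny checker for the additive B_s[g] condition just defined (representations counted up to permutation of the summands, as in the talk; the function name and examples are my own):

```python
from collections import Counter
from itertools import combinations_with_replacement

def is_additive_Bsg(X, s=2, g=1):
    # B_s[g]: no integer has more than g representations as an s-fold sum
    # from X, counted up to permutation of the summands
    counts = Counter(sum(c) for c in combinations_with_replacement(sorted(X), s))
    return max(counts.values()) <= g

assert is_additive_Bsg({1, 2, 5, 11})        # a classical Sidon (B_2[1]) set
assert not is_additive_Bsg({1, 2, 3, 4})     # fails: 1 + 4 = 2 + 3
```

Iterating over multisets rather than ordered tuples is what implements "counting up to permutation".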
So again, if someone in the audience knows more proper references, I'd be more than happy to hear them. But these are just some of the many highlights towards this question. Just to briefly mention what's known: if X is the largest additive B_2[1] set in 1 up to N — so the case s equals 2 — we know that X has size roughly N to the power half. You have this lower bound from Singer from the 1930s. There's a little o(1) here, but that's just to account for all N; it's known that the lower bound is genuinely N to the power half for infinitely many N, so you should sort of ignore the o(1). And the upper bound was N to the power half plus N to the power one fourth plus 1, or something like that, from Erdős and Turán, which was the record until 2021, when there was a very nice result of Balogh, Füredi and Roy, who got a slightly better lower-order term. And I think a really interesting question is: can you get an upper bound with a sharp lower-order term here? The question seems much harder when s is at least three. So again, you have a lower bound which roughly looks like N to the power one over s. But in this case, for the upper bound, we don't even have the right constant — we don't have 1 as the constant here plus lower-order terms; it's genuinely some constant c that depends on s. And I think the best known estimate on this constant comes from work of Ben Green from 2001. I think it's a very interesting question in combinatorial number theory, and many people seem to have studied it, asking for some sort of asymptotic on X — whether this is genuinely N to the power one over s plus lower-order terms. I don't know, but it's a very interesting question. So, okay, this is a really hard question. And maybe one philosophy for why it's hard is: you have 1 up to N, which is a very, very additively structured set, and you want to find a subset of it which has no additive structure, and that seems to be quite hard.
You can ask the same question for multiplicative B_s[1] sets. And if I write Y to be the largest multiplicative B_s[1] set in 1 up to N, a very quick example actually gives me a very large set. In particular, the set of primes in 1 up to N is a multiplicative B_s[1] set, because of unique factorization. So the largest multiplicative B_s[1] set in 1 up to N is actually quite large. And in fact, a very classical result of Erdős says that Y genuinely looks like the set of primes, up to some lower-order terms. So, at least to me, it's very interesting that we can get — not exact, but quite strong — asymptotics for this problem. And I guess that's because I'm trying to find a set which has no multiplicative structure inside a set which has very little multiplicative structure, and that probably makes the question slightly easier. So just to summarize: the largest additive B_s[1] set in 1 up to N has size roughly N to the power one over s, while the largest multiplicative B_s[1] set is much larger. And now, we've played this game twice, so you can expect that in a geometric progression, the largest additive B_s[1] set is large while the largest multiplicative B_s[1] set again is quite small. And you can almost see where I'm going with this. Just to mention another result: there's a very classical result of Komlós, Sulyok and Szemerédi which says that any set of n integers contains an additive B_s[1] set of size n to the power one over s. So not only 1 up to N — any set of n integers will always contain an additive B_s[1] set of size n to the power one over s. And very naturally you might ask: given these examples, could I break this n to the power one over s threshold if I'm considering both the largest additive B_s[1] set and the largest multiplicative B_s[1] set? Again, because a set ideally should only be additively structured or multiplicatively structured.
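The primes example can be checked directly: by unique factorization, no integer has more than one representation as an s-fold product of primes. A quick sketch (the sieve and helper function are my own):

```python
from collections import Counter
from itertools import combinations_with_replacement

def primes_up_to(N):
    # simple sieve of Eratosthenes
    sieve = [True] * (N + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, b in enumerate(sieve) if b]

def max_product_reps(Y, s):
    # largest number of representations of any integer as an s-fold
    # product from Y, up to permutation of the factors
    counts = Counter()
    for c in combinations_with_replacement(sorted(Y), s):
        p = 1
        for x in c:
            p *= x
        counts[p] += 1
    return max(counts.values())

# the primes in [1, 100] form a multiplicative B_3[1] set:
# every 3-fold product of primes arises from exactly one multiset of primes
assert max_product_reps(primes_up_to(100), 3) == 1
```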
So at least I should win in one of the two structures, and for that structure I should be able to get a much larger set. And in some joint work with Yifan Jing, we were able to prove an approximate version of this result. So instead of B_s[1], we get B_s[g] for some absolute constant g: there exists some absolute constant g so that for any set A of n integers, if I write X and Y to be the largest additive B_s[g] and multiplicative B_s[g] sets in A, at least one of them must be much larger than this n to the power one over s threshold. And by larger, I mean we get a genuinely positive gain in the exponent. And again, g here is a very computable constant. We haven't precisely computed it, but if one works hard — goes through our arguments and writes everything down — this is genuinely something one can write down, so it's quite computable. In fact, when s equals 2, you can take g to be 31. One reason why we were thinking of this result was because we were motivated by a question of Oleksiy Klurman and Cosmin Pohoata, which I'll mention in just a second. But yeah, my point is that you can genuinely compute this g, and we have computed it in this specific case. And again, this gives you non-trivial sum-product results, because the sum set of A and the product set of A are lower bounded by the sum set of X and the product set of Y. And because X has very little additive structure, its sum set genuinely blows up, and because Y has essentially zero multiplicative structure, the product set of Y blows up. And if one of these guys is large, then you get a genuine non-trivial sum-product result. And as before: this is nice, but you'd ask, can I get this exponent to go to infinity? And yes — if you consider s to be sufficiently large, you can take g to be one, and this exponent grows exactly like what you would expect it to grow. So this was an improvement on our earlier joint work with Yifan Jing.
So with Yifan, we could previously get (log log s) to the power half, and with the better energy decompositions, we just get a better exponent. And just to mention, by the earlier heuristic, this is another quantitative generalization of the Bourgain–Chang result. So maybe a couple more remarks about the Sidon set case, because I was quite interested in some other ideas that were involved here. I think it's quite natural to ask, now that we've got these right formulations, can we actually expect them to give us the sum-product conjecture? So for example, in the Sidon set case, can I genuinely expect this entire exponent to be close to one minus epsilon? And that's quite reasonable to expect, given that in our examples of 1 up to N and the geometric progression, I could genuinely get n to the power one minus epsilon here. So it's quite a reasonable question to ask. Klurman and Pohoata asked this question for the s equals 2 case: they asked whether any set A of n integers contains either an additive Sidon set or a multiplicative Sidon set of size n to the power one minus epsilon. And again, this would be a genuine generalization of the sum-product conjecture in the s equals 2 case, because you do the calculations and you get the sum-product conjecture for free. So it was very interesting when it turned out that this fails to be true. This is the sort of generalization that you'd expect from the sum-product heuristic, but it still genuinely fails. And I think this example also popped up in Misha Rudnev's talk, so I won't go into too much detail about it, but it's basically a union of arithmetic progressions of size n to the power half, where the arithmetic progressions are all dilates of each other by powers of two. So it's some sort of a twist between a geometric progression and an arithmetic progression.
And the reason why — for example, if you write Y to be the largest multiplicative Sidon set in A — the reason why Y can't grow too much is because the product set of A is not too large: you get some sort of a non-trivial power saving because these dilates multiply with each other very nicely. So in fact, the largest multiplicative Sidon set in A has size at most n to the power three quarters, where the size of A is n. And for the largest additive Sidon set in A — whatever that Sidon set is — if you think about it, the intersection of that Sidon set with each of these arithmetic progressions is still a Sidon set, and we know very well how to control those: such an intersection has size around the square root of the size of the arithmetic progression. So over each sub-arithmetic-progression of length L, my set has size at most roughly L to the power half, and I get a non-trivial power saving by summing over the union as well. And this just comes from the fact that in an AP, the largest additive Sidon set has size close to the square root of the size of the AP. So my claim is that you can genuinely construct this twisted set in which the largest additive and multiplicative Sidon sets both have size at most n to the power three quarters — so you genuinely can never get n to the power one minus epsilon. I think this construction was noted by Oliver Roche-Newton, and then a couple of days later there were three different constructions — one by Ben Green and Sarah Peluse, one by Roche-Newton and Audie Warren, and one by Shkredov — and they all constructed different sets of size n for which the largest additive and multiplicative Sidon sets were smaller than this n to the power three quarters. And these were a lot cleverer. And with Yifan Jing, we generalized this sort of construction: we constructed, for every even s, a set A of size n so that the largest additive B_s[1] set and multiplicative B_s[1] set have size n to the power half plus something small.
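The key feature of the twisted construction — the dilates multiply into each other, which keeps the product set of A small and hence caps any multiplicative Sidon subset — is easy to see numerically. In this sketch (the parameters are my own), A is the union of m dilates of {1, ..., m} by powers of two:

```python
m = 16
# union of the arithmetic progressions 2^j * {1, ..., m} for 0 <= j < m
A = {(2 ** j) * i for j in range(m) for i in range(1, m + 1)}
N = len(A)

AA = {a * b for a in A for b in A}
# the products 2^(j1 + j2) * i1 * i2 collide heavily: roughly 2m choices
# of power-of-two exponent times an m-by-m multiplication table, which is
# far below the N^2 pairs we started from
assert len(AA) * 4 < N * N
```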
So it's kind of nice, because you genuinely have this one-half-plus-something threshold that you can't cross. It was also nice because we actually used some extremal graph theory: from Sidon sets you can construct Cayley graphs which forbid short cycles — a very standard idea throughout the study of Sidon sets — and we used this to get some nice bounds. So again, the lower bound here is roughly N to the power log S over S, and the upper bound is N to the power one half. And I think it's quite an interesting question what the truth is: whether the right exponent tends to zero with S, or whether it should stay bounded below by a constant. So, in the roughly fifteen minutes I have remaining, I will try to go into some of the proof ideas of the energy theorem — the theorem that said A can be written as a union B ∪ C where B has small additive energy and C has small multiplicative energy. The proof of that theorem goes in four steps. The first step says: take your set A of N integers; if the set itself has large multiplicative energy, then you can find a large subset of your set which lies in a very nice ambient set, one with a lot of multiplicative structure in some sense. So, given that a set can exhibit a lot of multiplicative structure via this energy, it turns out you can convert that into some sort of product set estimate by passing to a large subset. Once you have this product set estimate — and this is where a lot of inverse results from additive combinatorics come in — you can actually show that a set with a small product set must have small multiplicative dimension. By multiplicative dimension, I mean it lies in the multiplicative span of few prime numbers.
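The Sidon-to-graph connection alluded to here can be sketched concretely (my own toy illustration, not from the talk): if the connection set of a Cayley sum graph is Sidon, then no two distinct vertices share two common neighbours, so the graph has no 4-cycles. Below, {0, 1, 3, 9} mod 13 is a perfect difference set, hence Sidon:

```python
n, S = 13, {0, 1, 3, 9}   # perfect difference set mod 13, hence additive Sidon

def neighbours(x):
    # Cayley sum graph on Z_n: x ~ z iff x + z (mod n) lies in S (no loops).
    return {z for z in range(n) if z != x and (x + z) % n in S}

# If z is a common neighbour of x != y, then x + z = s and y + z = s' with
# s, s' in S and s - s' = x - y; the Sidon property allows at most one such
# pair (s, s'), hence at most one common neighbour — so no C4.
max_common = max(len(neighbours(x) & neighbours(y))
                 for x in range(n) for y in range(n) if x < y)
assert max_common <= 1
```

This C4-freeness is exactly the kind of extremal graph-theoretic leverage that standard Sidon-set arguments exploit.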
So I need very few prime numbers to generate my set in some sense, and for my original set A I can find a large subset of A contained in that span. So if my set had large multiplicative energy, I can pass to a large subset contained in the multiplicative span of few prime numbers. Notice that I haven't actually used any interaction between multiplication and addition here: this is purely a result about how multiplication behaves — how multiplicative energy interacts with this multiplicative span of primes. So this is where a lot of the additive combinatorics — or rather multiplicative combinatorics — has been done. Now is where we start the sum-product interaction. In particular, if I have a large subset of A which lies in the multiplicative span of few prime numbers, then all additive energies of this subset are actually quite small, and that is where we genuinely use the collision between addition and multiplication. You can think of this condition as saying that my set A′ is contained in a product of few geometric progressions: take a few geometric progressions, each with a different ratio; my set is contained in the product of those sets, and there are very few ratios. This is where we genuinely see some interaction — such sets must have small additive energy. This sort of argument is known as Chang's lemma, even though there is a more famous Chang's lemma in additive combinatorics; this argument was used by Chang. And the final step is that you just iterate these steps. So if you have a set with large multiplicative... [At this point there was a question from the audience: the only proof of such a result known to the questioner is one using linear forms in logarithms — is it that deep, or easier to obtain?] That's a really good question.
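To illustrate what "multiplicative span of few primes" means, here is a small sketch (my own; trial division is fine at toy sizes). For integer sets the multiplicative dimension can simply be read off the prime factorizations:

```python
def prime_support(A):
    # The set of primes dividing some element of A, by trial division.
    primes = set()
    for a in A:
        d = 2
        while d * d <= a:
            while a % d == 0:
                primes.add(d)
                a //= d
            d += 1
        if a > 1:
            primes.add(a)
    return primes

# A set inside the multiplicative span of few primes: {2^i * 3^j} needs only {2, 3} ...
structured = {2 ** i * 3 ** j for i in range(6) for j in range(6)}
assert prime_support(structured) == {2, 3}

# ... while a generic set of the same size needs many primes.
generic = set(range(2, 2 + len(structured)))
assert len(prime_support(generic)) > 10
```

The structured set here is exactly a product of two geometric progressions (ratios 2 and 3), the shape the Chang-type argument feeds on.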
So because I'm only dealing with sets of integers, you can actually just use prime factorization to get this. But if you ask the same question for sets of real numbers, and define an appropriate notion of multiplicative dimension, then one way to prove such a result would be exactly via a result like the Schmidt subspace theorem. So for sets of integers you can use much more elementary tools here, but in the case where you don't have this nice prime factorization structure, you need much deeper tools to study this question. Thank you for the question. Anyway: if you have large multiplicative energy, you can get a large subset with small additive energy. And there's nothing stopping us from iterating, so I can pluck A′ out, look at the remaining set, and do the same operation again. In particular, iterating these steps gives me a remaining set C which will have small multiplicative energy. It could happen that C is empty, but the iteration will stop at some point. And I get a union of sets, each with small additive energy, and I can sum over them to get a non-trivial power saving for the total cumulative additive energy of the union — and that union will exactly be my set B. So that's roughly how the proof goes. The first step is where most of the additive combinatorics happens — passing from energy to an estimate on product sets. The second step is where the second piece of additive combinatorics happens, where you go from information on the product set to genuine algebraic information; this is the step very closely related to questions such as the polynomial Freiman–Ruzsa conjecture. And the third step is where most of the number-theoretic ideas happen, where we show that additive energies of sets which look like products of geometric progressions are small.
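The dichotomy the decomposition theorem formalizes can be seen numerically on the two model examples (my own toy sizes; S = 2 energies): an AP has huge additive energy and comparatively small multiplicative energy, while a geometric progression behaves the opposite way.

```python
from collections import Counter

def additive_energy(A):
    # E_+(A) = #{(a, b, c, d) in A^4 : a + b = c + d} = sum_s r(s)^2.
    r = Counter(a + b for a in A for b in A)
    return sum(v * v for v in r.values())

def multiplicative_energy(A):
    r = Counter(a * b for a in A for b in A)
    return sum(v * v for v in r.values())

AP = set(range(1, 33))             # very additively structured
GP = {2 ** i for i in range(32)}   # very multiplicatively structured

assert additive_energy(AP) > multiplicative_energy(AP)
assert multiplicative_energy(GP) > additive_energy(GP)
# For an additive Sidon set of size n, E_+ is the minimum possible, 2n^2 - n.
assert additive_energy({1, 2, 5, 11}) == 2 * 16 - 4
```

The theorem says every set splits into a piece that looks energy-wise like the AP case and a piece that looks like the GP case.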
So in my remaining six or seven minutes, let me try to outline some of these steps and give you the brief ideas. Step one: experts in the area will be very familiar with the idea that you can use Balog–Szemerédi–Gowers to go from energies to product sets. To describe this to everyone: remember we had the nice Cauchy–Schwarz inequality which said that if the product set is small, then the multiplicative energy must be very large — and that was just by Cauchy–Schwarz. A natural question is whether you can get some sort of converse: if the multiplicative energy is large, can you conclude a small product set? As you can imagine, that's not literally true, but an approximate converse is true if you are happy to pass to a subset: if the multiplicative energy of a set is large, you can find a large subset which has a small product set. I think in this case the constant C can be made to be something like four or six. So this is quite nice: if K looks like log |A|, or |A| to some very small power, this bound roughly looks like |A| to the power epsilon times S, and if epsilon is small, you're good to go. But if K is even as large as |A| to the power 1/100, this becomes |A| to the power S/100. So you started with the hypothesis that the multiplicative energy is |A| to the power 2S − 1, minus 1/100 in the exponent — something quite close to the trivial maximum — and yet your product set bound grows like |A| to the power S/100, which is huge. So the standard Balog–Szemerédi–Gowers theorem is very ineffective when K is large, and one of the crucial ingredients in our proof is a more efficient Balog–Szemerédi–Gowers for large values of S.
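The Cauchy–Schwarz direction mentioned at the start of this step is easy to check numerically (S = 2 case; random toy sets of my own choosing). Since the total number of representation pairs is |A|², Cauchy–Schwarz gives E_×(A) ≥ |A|⁴ / |A·A|:

```python
import random
from collections import Counter

def mult_energy(A):
    # E_x(A) = #{(a, b, c, d) in A^4 : ab = cd} = sum_x r(x)^2.
    r = Counter(a * b for a in A for b in A)
    return sum(v * v for v in r.values())

random.seed(0)
for _ in range(20):
    A = set(random.sample(range(1, 10_000), 40))
    E = mult_energy(A)
    prodset = len({a * b for a in A for b in A})
    # Cauchy-Schwarz: |A|^4 = (sum_x r(x))^2 <= |A*A| * sum_x r(x)^2.
    assert E * prodset >= len(A) ** 4
```

Balog–Szemerédi–Gowers is precisely the (much harder) approximate converse: large energy forces a large subset with small product set.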
So the statement is roughly like this: if the multiplicative energy is large, and a slightly lower multiplicative energy is somehow regularized — I'll go into a bit more detail about this; the regularization factor is L — then you can get a large subset A′, where the largeness depends only on L, whose product set is almost K times |A′|. This is exactly the kind of thing you'd expect, and you lose only by this regularization factor. This can be seen as a generalization of the Balog–Szemerédi–Gowers theorem, because under the original hypothesis you can always set L to be K here: |A| to the power S − 1 is a trivial upper bound for this quantity, and if you set L to be K you genuinely recover the usual conclusion. But you don't have to set L to be K: if you can control this lower multiplicative energy by something much nicer, you get much better estimates. And just to quickly mention, by the triangle inequality for multiplicative energies, this quantity must be at least |A| to the power S − 1 over K. So if you ignore the L, this is genuinely a lower bound, and the hypothesis is measuring how irregular your lower multiplicative energy is. The reason I mention this is that, using a dyadic decomposition argument, you can genuinely prove a regularization like this, with L behaving like K to the power C over log S — that's exactly where the C over log S factor in our loss comes from. And now, using this more efficient Balog–Szemerédi–Gowers, we get a much more controlled conclusion. Obviously this conclusion is not literally the same as the earlier one, but you can derive that one again with these sorts of ideas.
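The dyadic trick underneath is just pigeonholing (a sketch of my own, on a toy representation function): split the values of r into dyadic levels 2^j ≤ r(x) < 2^(j+1); since there are only O(log max r) levels, some single level carries a 1/log fraction of the total energy, and that level is automatically "regularized" within a factor of two.

```python
from collections import Counter

A = set(range(1, 101))
r = Counter(a * b for a in A for b in A)   # multiplicative representation function

# Bucket each x by the dyadic size of r(x); record its energy contribution r(x)^2.
levels = Counter()
for x, v in r.items():
    levels[v.bit_length() - 1] += v * v

total = sum(v * v for v in r.values())
best = max(levels.values())
# Few levels exist, so the best one captures at least a 1/(#levels) share.
assert len(levels) <= max(r.values()).bit_length()
assert best * len(levels) >= total
```

Inside the chosen level all representation counts agree up to a factor of two, which is exactly the kind of control the refined Balog–Szemerédi–Gowers hypothesis asks for.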
The second step, which I think is really cool, is where the polynomial Freiman–Ruzsa conjecture comes in. So suppose you have some ambient set U, where a large subset of my set A lies in U, and U has a small product set. One way to see this: if I write down all the prime numbers p such that p divides some u in U, then I can describe U purely in terms of these primes — any u has a unique factorization over them. So if I look at the set of valuation vectors, there is a bijection between those vectors and my original set U; in particular, adding the vectors corresponds to multiplying the integers. And this is where the key idea — the connection to PFR — comes in, because the vectors lie in some higher-dimensional space Z^D. If I close my eyes and pretend this set of vectors behaves like some sort of nice compact set, behaving continuously, then its sumset grows like 2^D times |U|, by a Brunn–Minkowski type estimate. And then, comparing my hypothesis with this lower bound, I get that the dimension can't be too large: D must be at most roughly the logarithm of the doubling constant. Of course, this is not always true — there are genuinely cases where the sumset grows like D times |U| instead of 2^D times |U|. But this question — when can you get Brunn–Minkowski type behavior for discrete sets — is actually one of the deepest questions in additive combinatorics. And very recently there was breakthrough work by Tim Gowers, Ben Green, Freddie Manners and Terry Tao, where they showed you can run this sort of argument if you pass to a large enough subset. This was quite an amazing result, which actually makes these ideas a bit simpler to explain — though of course all the sum-product results I mentioned came quite a bit before it.
So they didn't exactly follow these ideas: they defined a variation on this multiplicative dimension which was much more inductive, quite different from this notion of dimension, and for that notion Pálvölgyi and Zhelezov proved an inverse result. That's what we were using in our set of results. But now, with this amazing breakthrough, it's a bit simpler to explain. Since I'm almost out of time, I'll skip the last step — how to go from this multiplicative structure to showing that the additive energies are small. It's quite an elementary step, but quite standard in this area. So that was the sketch, and I would like to say thank you.