Thanks to all the organizers, especially for continuing this great seminar series, which I think is a real service to the community. So, I'm returning to a topic which was partly in my thesis, and I remember being asked, thirty-some years ago, why are you still working on Waring's problem. So maybe this is an answer. I'm not presuming that everybody is familiar with Waring's problem, so I'll have a little introductory spiel, and then I'd like to talk about some new results; most of this talk is joint with Jörg Brüdern, and that will be made clear again later on. So, the obligatory blah blah. The conjecture of Waring goes back to 1770. This is Edward Waring, not the famous British sports commentator Eddie Waring, about whom probably nobody in the audience knows anything. Waring made a conjecture which, in the usual way for the time, was written in Latin; I won't reproduce it here. The gist, in language a bit more palatable for modern readers, is this: every natural number is a sum of at most nine positive integral cubes, also a sum of at most nineteen biquadrates, and so on. Just to fix notation we can use later: g(k) is traditionally used to denote the least number s such that every natural number is a sum of at most s k-th powers of natural numbers. So Waring's conjecture, in this language, claims that g(3) is at most nine, g(4) is at most nineteen, and g(k) is finite in general. The point is that this bound on the number of summands should not depend on the number being represented. Okay, and just to show I have time to doodle: Lagrange proved that every natural number is a sum of at most four positive squares, and 2022 does not disprove this theorem; it is, after all, a theorem. I think it's fair to say that there are special reasons why this little g(k) problem is not the one we study these days, and if you haven't encountered this subject much before, that's why I'm giving this very brief introductory spiel. Very small numbers require an unusually large number of k-th powers in their representation. Pick an integer which is a little smaller than 3 to the k: you can only use 1 to the k and 2 to the k in its representation, and that forces the number of summands to be far higher than you might otherwise expect; it grows exponentially. And it's fair to say that we really have a complete understanding of little g(k) at this point, and have done for forty years. So this conjecture is essentially known, in the sense that the lower bound I just described even holds with equality. That's more or less true by work going through the twentieth century, the most recent steps of which are not an easy thing; little g(k) took a long time to complete. But the upshot is that this conjecture is now settled, or at least one knows how to calculate g(k): give me your favourite value of k and you can calculate it easily. Little g(k) is, in this sense, easy, and the reason it's so easy is that the small k-th powers are, in a certain sense, widely spaced apart relative to their size. Once the integers being raised to k-th powers become large, the spacing becomes more reasonable. The next slide is just for the aficionados, and I guess people can look at it if they want when they look back over the talk video.
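Just to make that back-of-the-envelope remark concrete, here is the standard computation I have in mind (usually attributed to J. A. Euler); it isn't spelled out on the slides, so treat it as my gloss. Take
\[
n \;=\; 2^{k}\bigl\lfloor (3/2)^{k} \bigr\rfloor \;-\; 1 \;<\; 3^{k},
\]
so that only \(1^{k}\) and \(2^{k}\) are available as summands. The most economical representation uses \(\lfloor (3/2)^{k}\rfloor - 1\) copies of \(2^{k}\) and then \(2^{k}-1\) copies of \(1^{k}\), whence
\[
g(k) \;\ge\; 2^{k} + \bigl\lfloor (3/2)^{k} \bigr\rfloor - 2,
\]
which already gives \(g(3)\ge 9\) and \(g(4)\ge 19\), and which grows exponentially with k.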
This is what I mean by complete understanding: it's still a conjecture, but the top line is all you need to know to calculate little g(k), and if you give me your favourite value of k, you know exactly what to do in order to determine the correct formula. So there's still work to do to really show that the conjecture holds, but that's a problem in Diophantine approximation, not a problem in analytic number theory. Okay, so good. Let's study the serious problem, for this talk anyway, which is big G(k). Here we just seek to represent all sufficiently large positive integers as sums of s k-th powers, and we'd like to make s as small as possible; that's what big G(k) is traditionally used to denote. So again, Lagrange's theorem shows that G(2) is four; you can't get away with three squares, that's a classical result. The modern contribution to the subject comes with Hardy and Littlewood's work a century ago. It wasn't the first of their Partitio Numerorum papers, but it's the one where they actually obtained an explicit bound; you can figure out exactly what comes out of their machinery. They showed an upper bound: G(k) is at most something roughly k times 2 to the k minus 1. And then there was a lot of development in the subject. It's worth saying that there are local solubility issues one has to take account of, similar to the obstruction to using three squares to represent all large integers. So the powers of two, when k is a power of two, produce issues one has to contend with: big G(k) might be as big as 4k, even conjecturally, when k is a power of two, and there are powers of three which also make an appearance, but in general big G(k) should be rather smaller. So certainly the correct order of magnitude for big G(k) is expected to be no worse than 4k, very much smaller than 2 to the k. And very rapidly work moved in that direction. I've summarized some of this, but not all of it, here. The first polynomial bounds came in the 1930s, and very rapidly, using Vinogradov's mean value theorem and all kinds of ideas, this line of work was able to get a bound of order k log k. There were then progressively interesting developments which caused this constant times k log k to be reduced. In some sense, with the natural original ideas, it's 3k log k, by Vinogradov in 1947. And then the subject kind of ground to a halt, until the last time Vinogradov took up the problem, in 1959, with a bound of order 2k log k. It's fair to say that the argument of the proof there is quite involved. I mean, it's more and more the case that people tend to look only at the recent literature and the recent records, and don't dig into the historical development of subjects; that's partly a factor of online access to materials. It's worth looking back at Vinogradov's 1959 paper for the ideas there, which many modern workers have forgotten: some very clever ideas which allow him to use some of the developments surrounding Vinogradov's mean value theorem, together with other ideas, to get to the 2k log k bound. Quite a difficult argument. And what I've marked here as a sort of minor refinement is really something of a breakthrough bound. Okay, so this bound remained essentially unimproved until the mid-1980s, when Karatsuba, and then Vaughan, made use of smooth numbers. Certainly Karatsuba uses integers with small prime divisors to imitate some of the features of Vinogradov's approach.
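To illustrate the kind of local obstruction at powers of two that I mean, here is the classical Kempner-style argument in the simplest case k = 4; it's not on the slides, just the standard sketch. Every fourth power satisfies
\[
x^{4} \equiv 0 \ \text{or}\ 1 \pmod{16},
\]
so if \(31\cdot 16^{m} = x_1^{4}+\cdots+x_s^{4}\) with \(s \le 15\), then reducing mod 16 shows every \(x_i\) is even; writing \(x_i = 2y_i\) and dividing by 16 gives \(31\cdot 16^{m-1} = y_1^{4}+\cdots+y_s^{4}\), and repeating this we arrive at \(31 = z_1^{4}+\cdots+z_s^{4}\) with \(s \le 15\), which is impossible, since \(31 = 16 + 15\cdot 1\) needs sixteen biquadrates. So infinitely many integers need at least sixteen fourth powers, and \(G(4) \ge 16 = 4k\) in the case k = 4.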
Oh, Trevor, apologies, could I just interrupt to pass on a question from Hershy Kisilevsky about what the situation becomes if you allow the cubes, for example, to be negative. So we've got much more by way of resources for the so-called easier Waring problem. In many circumstances there are much sharper bounds available, but in general it's not necessarily that much easier. The major advantage one has is that the summands may be much larger than the integer you're representing, because we've got some negative terms, and one makes use of polynomial identities, for example. Can I just interrupt a minute? It might actually be a harder problem, because the corresponding analogue of little g(k) or G(k) could actually be very small. That's true: the upper bounds that you get, you can expect to be smaller, but the truth may also be expected to be much smaller, so that makes it a more challenging problem in some ways. Exactly. I hope that answers, at least in part, the question there. Okay, so back to the plot. We've got Vinogradov's bound, and I mentioned these improvements by Karatsuba and Vaughan using integers with only small prime factors. And I'd really like to highlight the contribution that Bob Vaughan made here. He really reworked the use of integers with small prime factors in a way which makes the method very flexible and capable of iterative processes and so on. If I can offer an analogy: Karatsuba's treatment, and some of the other developments in this line, is a sort of made-for-purpose device that is very complicated to apply; you have to reconfigure all the components every time you use it. What Vaughan gave us was a sort of USB plug-and-play device: you just pull it out of the cupboard, plug it in and use it for whatever purpose you want. It's a very flexible device and it was an important contribution. So Vaughan, as you can see, improved on Vinogradov's bound. You can see where the improvement is, maybe: it's in this log log k term, one of the log log k's disappears, and there's some explicit control here as well. But it's still 2k log k. And then in my thesis I was able to save a factor of roughly two, and this is with the method which I call efficient differencing. You shouldn't be confused: efficient differencing and efficient congruencing are different methods. I notice in Larry Guth's ICM paper he sort of suggests that I was working on efficient congruencing in the 1990s; that's not the case, it's a different method. Well, I won't say very much about the actual method here, because really the focus is somewhere else. Okay, I went past that by a finger slip, there you are; so I've put one of the big reveals at the top anyway. We've now got an improvement on this bound. The constant in front of the k log k hasn't changed, sorry, but the log log k term has disappeared, and all the big-O stuff has disappeared as well. So there's an explicit bound which is valid for all natural numbers k. It even works for k equal to one, although the bound is not very impressive there, because big G(1) is one. But still, it's definitely an improvement. And what I'd like to highlight is that progress in the subject has been very slow: since Vinogradov we had rather little progress for thirty years, and then another thirty years and more have gone by, and at least we've saved the log log k term. And the ideas are applicable to other topics as well.
But I thought we'd just dwell on Waring's problem a little more. So, one slight improvement on this bound. I'd like to indicate the limit of the method, and the numerics work out very nicely to obtain this bound: if you work a bit harder, you get a slight improvement, which is also explicit, but which more closely indicates the limits of the method. So there are some constants which are defined by transcendental equations, and the upper bound that you actually get, again for all natural numbers k, is k log k plus C1 times k, where C1 is 4.200189 or so. You can see it's very slightly smaller than the other constant there. For large k that makes a difference, but for small k, up until the 20,000s or so, the first theorem is stronger. And this kind of indicates the limits of the method, so really the second theorem is the challenge for improvement, even just improving the constant C1. Okay, so this suggests that this is largely about large values of k. One can say something about small values too, and here's just a bit of history and the state of the art on that problem. There are the classical results of Lagrange, Linnik on sums of seven cubes, and Davenport's famous result on sixteen fourth powers. These days, if you include local solubility conditions, you could take that down to twelve, maybe even eleven if you impose an extra condition. So one has reasonable upper bounds on these things; for small values of k, people have worked this all out. And I thought I'd give a table of results here, which gives you some impression of what's going on. These are upper bounds for big G(k) when k is between seven and twenty. You can see the conjectural bounds are impacted by local solubility conditions: there are large conjectural bounds at the powers of two, like eight and sixteen, but in general they're quite small. Then there are Vaughan's 1989 bounds, and you can see that with the repeated efficient differencing methods, which asymptotically saved a factor of two, there was quite a sizable improvement even for modest values of k. And then Vaughan and I spent some time, and really this work goes up to about 1995, I guess, although it took a bit longer to publish. You gain something, whatever it may be, a small amount, but still, these were the strongest bounds that we knew at the time; you save a few extra variables. And then these remained unimproved until 2016, when I came up with a minor arc estimate which utilized the main conjecture in Vinogradov's mean value theorem, which has now been proved. For relatively moderate values of k one saves something from these mean values; it's a very efficient minor arc bound for complete Weyl sums. You can see it saves maybe four variables in this intermediate range, but it ceases to have an impact for k larger than sixteen or so. And then this recent work picks up these intermediate ranges: if the 2016 work hadn't existed, the improvements would already start coming with k as small as eight; as it is, they start picking up for k at least fourteen, and I've gone up to twenty, where you save five variables. This log log k term which has been saved kicks in only slowly, but you can see it all has some sort of impact. Now I'm just going to give a hint about what's new. When I was writing this talk I initially did not have this section three; you'll see some other applications in just a moment, but I thought it would be unfair not to hint right now at what the new idea is, because we are doing something novel.
If you're an expert, you'd kind of like to know why on earth we would have an improvement. If you're not involved in this kind of subject, maybe you're not going to get much out of this slide, but I promise that later in the talk there will be something with more details that probably gives you more of an idea. So, just in brief. These approaches use the circle method, and in the modern versions, smooth numbers: integers all of whose prime divisors are small, at most a small power of the main variable, whatever size that variable is. These are objects which have been much studied. So let me set up the framework for the circle method. Just to recap, and experts know this: the object is to obtain minor arc estimates for moments of these smooth Weyl sums. Minor arcs are sets of real numbers which are poorly approximated by rationals; that's what I'm talking about. You try to use a sup norm bound for some of these Weyl sums and a mean value for the rest, and you hope to win something over the expected size of the corresponding main term, the major arc contribution: you want to win something over P to the t plus u, give or take. Then you try to optimize the choices of t and u, and, speaking loosely, big G(k) will be the smallest that you can take t plus u so that this bound is achieved. Okay, so that's just to provide a framework to talk in. Now, we have quite good upper bounds for the mean values of these smooth Weyl sums, where you win an amount; well, you don't quite achieve the optimal estimate, but you lose an amount which drops off exponentially. Here I haven't made this an even moment, and I don't quite have the P to the u minus k shape here, but this is acceptable once u is a little larger than k. And it's the other factor where all the interesting stuff is going on. So one can obtain upper bounds on the classical set of minor arcs, which I described before, where you win, relative to the trivial estimate, an amount which is something like one over k log k in the exponent. And it's this log k factor which delivers the log log k term. Now, there is work which addresses what happens on what I like to think of as an extreme set of minor arcs, where the alpha is somehow as poorly approximated by rationals as you can imagine. This was investigated by Karatsuba; the idea also occurs in work going back to the 1950s on the zero-free region for the Riemann zeta function. There you get a bound on this extreme set of minor arcs where the log k isn't present; it's actually a bound slightly better than this, and the constant here comes from the smooth-number treatment. So the log k disappears, and if you could use this, then the log log k term would disappear from G(k). That's what you would like to use, and so the problem that Brüdern and I address is how to handle the intermediate set of arcs which lie between the extreme set and the classical minor arcs, which is exactly the problem. We were motivated by work of Jianya Liu and Lilu Zhao; they have a method, which I'll say a bit more about, but it's not as flexible as what we do, and it wouldn't handle the situation we need in this problem. Okay, so that's a hint of the new idea that saves the log log k term. Hopefully, if you're an expert and you're wondering when I'm going to tell you what's going on, well, I've hinted at it, and we'll see more details later on. Okay, so let's carry on.
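Schematically, and purely to fix ideas (sigma here is a placeholder for the sup-norm saving on the minor arcs and \(\Delta_u\) for the loss in the mean value, not the precise exponents on the slides), the shape of the scheme is
\[
\int_{\mathfrak m} |f(\alpha)|^{t+u}\, d\alpha \;\le\; \Bigl(\sup_{\alpha\in\mathfrak m}|f(\alpha)|\Bigr)^{t} \int_0^1 |f(\alpha)|^{u}\, d\alpha \;\ll\; \bigl(P^{1-\sigma}\bigr)^{t}\, P^{\,u-k+\Delta_u},
\]
and this is \(o\bigl(P^{\,t+u-k}\bigr)\), the size of the expected main term, as soon as \(t\sigma > \Delta_u\); big G(k) is then essentially the least \(s = t+u\) for which the whole scheme can be made to work.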
So how about some applications? First, the ascending powers problem. Roth, really, I guess as part of his thesis, looked at the problem of representing integers as sums of a square, a cube, a fourth power, and so on. Can I just interject: I think it was a master's thesis, not a doctoral thesis. Okay, well, there's an interesting story to this which would take too much time to tell; I always think of it as an analogue of what in chess would be called a finger-slip variation. Anyway, Bob can tell the story on a different occasion maybe. Okay, so Roth managed to show that you can do this with at most fifty summands, and this has been a popular topic for various workers over the years; other people have looked at it, and the most recent contribution is that, well, we showed that you can take s to be at most thirteen. So I want to jazz this up a little bit: what if you take the exponents of the summands in arithmetic progression? It's something you can definitely do, and there's a bit of notation there to set it up: the starting exponent is of the shape kq plus a, and then you just keep adding q to the exponent each time. I do this just because I can. In Roth's original problem q is one, a is zero, and you start with the square term. Okay, so this kind of problem has also been studied by Kevin Ford in 1995. He looked at the variant where you start with the k-th power and just add one to the exponent each time, and he showed that you could get away with about k squared log k summands. Good. Well, what we have is a sort of announcement of this, and even, that's why there's a plus here, this is being improved in real time. So you can get an upper bound for this kind of result, and in particular, in the variant of the problem which Kevin looked at, instead of k squared log k you can get away with a little less than 99 k squared variables. So the log k term has disappeared altogether. And you might look at what happens if a is zero and q is k, for example, so that the exponents run through multiples of k; you can see that we can bound all of these things from this kind of result. So again, this is a situation where the log k disappears because of the new technology that's available, which is quite satisfying. Now for a cognate problem: what if you look at sums of an r-free number and a number of k-th powers; what can you do here? Maybe the simplest version is a square-free number plus a number of k-th powers. Square-free numbers have positive density, of course, so one ought to be able to get a pretty good bound. Sorry to interrupt, Trevor, just on the previous slide, what does the capital A depend upon? Yeah, the capital A is just an absolute constant. Okay. So maybe we'll even write a suitable absolute constant; probably it will end up being something like 99. Okay, so back to the r-free numbers, or square-free numbers. One observation is that if you have roughly half of big G(k) variables, where big G(k) is really, if I'm honest, the bound you're able to prove using the circle-method technology we're talking about here (I'm using it as a shorthand and being a little sloppy), then the technology in the subject allows you to show that almost all integers are sums of roughly half G(k) k-th powers.
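Before spelling that out in words, here is the simple-minded version of the observation in symbols; s here stands for roughly half of the G(k) bound, and this display is my paraphrase of the argument rather than a slide. If
\[
\#\{\, m \le N : m \neq x_1^{k}+\cdots+x_s^{k} \,\} = o(N) \qquad \text{while} \qquad \#\{\, q \le N : q \ \text{squarefree} \,\} \gg N,
\]
then for every large n at least one of the many shifts \(n - q\), with q squarefree, avoids the exceptional set, and so
\[
n = q + x_1^{k} + \cdots + x_s^{k}.
\]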
In particular, you've got a set of positive density to shift by, and the exceptional set has density zero. So your set of representable integers, which excludes only a set of density zero, has to meet the set of shifts by a square-free number, or an r-free number, and you end up representing all large integers. So the upper bound for this quantity is certainly at most half of big G(k), half of k log k, let's say, and you can jazz this up a bit and actually do a little better: for r-free numbers you can get something quite easily which gives one over r plus one, times k log k, as your upper bound. Here I'm thinking of r as being fixed and k as being sufficiently large; you shouldn't think of r as being a super large number in terms of k, don't think of r as being 2 to the k, that's silly talk. Think of r as fixed. Okay, so that's a constant times k log k. The kind of result you can prove using these new ideas is a constant times k, so the log k has again disappeared. And you can write down explicitly what the constant is: for square-free numbers it's a little bit larger than one, times k. When r is large, well, think of r as growing slowly with k maybe, then this constant is very small. I'd just like to expand a little bit on that, to draw a consequence. If you think of r as growing slowly with k, all of this still works, and then you've got essentially epsilon k positive integral k-th powers, and this set of sums of k-th powers is a very thin set of integers. So we've got an r-free number plus a very thin set representing all large integers. This is certainly a subconvexity result in a simple sense: you have to beat the square-root barrier, and that's exactly what happens. It's not so surprising that with r-free numbers you can hope to do that, but still, it's a kind of challenge to come up with a better approach which delivers stronger results. So that's one problem, and here's another cognate problem, which will be the last example. You can look at a prime plus a number of k-th powers; this is something that Hardy and Littlewood looked at a century ago. They had all kinds of conjectures about problems involving primes and k-th powers. It's worth highlighting that this is an analogue of a Goldbach-type problem, in the sense that it's sort of inhomogeneous: there's a difference between this kind of problem and representing primes as sums of k-th powers, and you see this most evidently in the problem with two squares. So Fermat, and probably this was known to people before Fermat (I'm sure historical experts in the audience will have their own ideas on who first proved it, and I don't want to get into that), at least three centuries ago: primes congruent to one mod four are the sum of two squares. I knew there was going to be a typo somewhere: the sum of two squares. The corresponding inhomogeneous problem, where we try to represent n as a prime plus two squares, was tackled by Linnik, as you can see, sixty years ago basically, and it's a much harder problem. Okay, so what about the general situation? The same line of reasoning as on the earlier slide shows that half k log k variables will do. And one can do a bit better in the homogeneous problem: Kawada and I, about twenty years ago, used, I guess, sieve methods and related ideas to get by with roughly eight-thirds k k-th powers in the representation. Okay, so what about the inhomogeneous problem?
So, in general one can get by with a little bit more than two times k; it's something like 2.144 times k. And again this uses some of the ideas in the present discussion; it doesn't need quite as much, but certainly the same ideas play a role. And this is unconditional; if you are prepared to assume GRH, then you can go a little bit lower, towards two times k. So again, I would view this as something of a challenge: these are results you can get from the circle method, so it's a challenge to do better by using other ideas, maybe much heavier use of sieves or hybrid methods. Okay, so those are some results which are accessible, and now I'd like to get into the methods a little bit. You need a bit of background to really get into what it is about the methods that changes what's accessible. If you're an expert, some of this will look quite easy and familiar; if you're not, I admit you're only going to get an impression of what's going on. Certainly, I sketched earlier what some of the motivation is, so some of this will be working towards that. We've got smooth numbers: integers that are divisible only by small primes. People have different ideas about what smooth numbers should be called; I stick with smooth, it's a perfectly good name. So we have an exponential sum over these smooth numbers, and you should think of P as large and R as a very small power of P; that's the way to think about this. We have mean values of these objects, which also play a role. That's all material I covered before. Now I want to specialise the situation somewhat, to make life a bit easier in explaining the ideas, so think of s as being at least as large as the number of variables we need in Waring's problem. So, if you're interested in representing the integer n, then P is going to be n to the one over k, and remember the smooth Weyl sum is defined in terms of P. By orthogonality, this integral here counts the number of representations of n as the sum of s k-th powers of smooth integers. If you're able to give a good lower bound for R_{s,k}(n), this mean value, then we'll be able to show that big G(k) is bounded by this value of s, and that's going to give us a result. This is the standard setup for the circle method. If you haven't encountered this before, you may be wondering about the smooth numbers. Well, I've chosen the smoothness parameter so that the density of the set is positive: a positive proportion of all the integers have this level of smoothness. And one can write down asymptotics for the counts of these things, and it's fairly easy to show that the major arc contribution to this representation problem is a positive proportion of the conjectured asymptotic formula, the conjectured lower bound for the number of representations, which is written in terms of gamma functions with the standard singular series in front. In other words, smooth numbers play well with the usual major arc analysis of the circle method; that's something we'll take for granted. Okay, so the convenient choice of major arcs consists of points with rational approximations whose denominators are at most a power of log, in intervals of width a power of log times P to the minus k. In reality, there's some pruning that has to go on, you have to expand these regions, but I'll hide that. I just mention it in order to say that the major arc analysis is by now very well understood; even with s as small as 2k plus three it's very easy to handle. So we can focus on the corresponding minor arcs, and that's what these new ideas address.
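In the usual notation for the smooth-number circle method (this is my transcription of what the slides encode, so take the details as indicative rather than exact):
\[
\mathcal A(P,R) = \{\, x \in [1,P] : p \mid x \Rightarrow p \le R \,\}, \qquad f(\alpha) = f(\alpha;P,R) = \sum_{x \in \mathcal A(P,R)} e\bigl(\alpha x^{k}\bigr),
\]
with \(R = P^{\eta}\) for a suitable fixed \(\eta > 0\), and then, with \(P = n^{1/k}\),
\[
R_{s,k}(n) = \int_0^1 f(\alpha)^{s}\, e(-n\alpha)\, d\alpha
\]
counts, by orthogonality, the representations of n as a sum of s k-th powers of R-smooth numbers not exceeding P.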
I want to show that the corresponding minor arc contribution is little-o of P to the s minus k, and that's where the fun starts. Now, this is not a very satisfactory slide, but I'm going to present it anyway, just so that experts get a better idea of what goes on. You can subdivide the set of minor arcs according to what I think of as a standard measure of complexity. You have rational approximations a over q to alpha, where, for the points inside each subset, the width of the intervals is measured in terms of P to the minus k, and how close you are to a over q is measured in terms of Q: you make sure that you're never too close to a over q, you're at least roughly capital Q over two times the basic width away, and also that the denominators involved are not larger than capital Q over two, say. So the complexity of these alpha, in the sense I like to think of it, is of order of magnitude Q: you know how complicated these points are. It's a fairly standard idea these days. Efficient differencing plays a role here, as I alluded to earlier, so you get good bounds for these mean values. And now I want to apply Dirichlet's theorem to obtain a rational approximation to alpha which is somehow as efficient as possible: the denominators are at most P to the k over two, and the quality of approximation is of the corresponding strength. This is exactly the situation where the work of Karatsuba, and of Brüdern himself, comes in. If you apply all of that work, you get an upper bound in this extreme region, where the denominator is almost as big as it could possibly be, which wins over the trivial estimate by one over a constant times k in the exponent. The best constants here come from the repeated efficient differencing method; there's a formula there which you don't need to pay attention to, it's there so I'm honest. And, for comparison, as I said before, the classical minor arc bounds would have a one over k log k there instead of one over a constant times k. That's where we win. So I define this new object, delta star; it's the exponent of the best possible situation, in a moment where I pull out a sup norm of the Weyl sum on these extreme minor arcs, which is where the saving of t over a constant times k comes in, and estimate the remaining mean value using this repeated efficient differencing bound. So it's kind of like a minor arc exponent. And one version of the result which we're using is this minor arc bound for the intermediate arcs. It doesn't matter too much what Q is; you get a bound which saves almost optimally, and what's left over depends on this minor arc exponent. So if delta star is negative, then we're really winning something here: you're getting a bound which is good enough to get the minor arc estimates we're after. That's really the objective. Now, just some words on what Liu and Zhao do. This intermediate arc setup takes out the low complexity part, and Liu and Zhao don't do that; their method doesn't allow them to do that. Even then, they're not really using this set of major arcs: they have to keep some piece of the major arcs in order for a large sieve inequality to be applicable. So you end up with something which doesn't play well with many applications, so it's a little bit awkward. Also, s has to be large, which would exclude these applications, for example the one with r-free numbers.
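Very schematically, with the precise exponents suppressed (so this is my shorthand for what the slide encodes, with c standing for the constant coming from the repeated efficient differencing bounds, and \(\mathfrak m(Q)\) for the arcs of complexity about Q), delta star arises like this: with \(s = t+u\),
\[
\int_{\text{extreme arcs}} |f(\alpha)|^{s}\, d\alpha \;\le\; \Bigl(\sup_{\text{extreme arcs}}|f(\alpha)|\Bigr)^{t} \int_0^1 |f(\alpha)|^{u}\, d\alpha \;\ll\; P^{\,t\left(1-\frac{1}{ck}\right)}\, P^{\,u-k+\Delta_u} \;=\; P^{\,s-k+\delta^{*}}, \qquad \delta^{*} = \Delta_u - \frac{t}{ck},
\]
and the intermediate-arc statement then takes the shape
\[
\int_{\mathfrak m(Q)} |f(\alpha)|^{s}\, d\alpha \;\ll\; P^{\,s-k}\, Q^{-\delta} \qquad (1 \le Q \le P^{k/2})
\]
for some fixed \(\delta > 0\), provided \(\delta^{*} < 0\).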
Also, the Weyl sums you look at in their approach have to be of a rather special smooth type, so they're less flexible for applications. So this bound here, the one you'll see in the form used in my other examples, is more in the plug-and-play mould that Vaughan gave us, rather than that very carefully arranged setup where a new apparatus is assembled every time you use it; the latter doesn't travel as well. And our bound actually gives you minor arc estimates, which is something the work of Liu and Zhao does not do. Okay, so now I can say where this bound comes from, at least give a sketch of where it comes from. The idea is to use the smoothness property of the smooth numbers. All the integers involved are divisible only by small primes, so from every summand in a smooth Weyl sum one can extract a factor which is very close to your favourite size. My favourite size I'll call W, and we'll choose it later on. If you do this, what we're really saying is that I'm choosing a divisor w of x of the right size; because the prime factors are all very small, I miss my favourite size only by a very small factor, and so I'm going to pretend that I hit that size bang on, more or less. One then applies Hölder's inequality to factorise the Weyl sums and force these w factors all to be of the same size. And so you get an upper bound for the mean value; I'll use this set, and the subscript q just means that I'm looking at denominators q, in notation defined earlier that I don't expect you to remember. In the mean value, where I've forced one factor to be the same in each summand, the Weyl sum has been shortened: instead of having length P it has length more like P R over w. Okay. And the point is that these arcs are built out of intervals, small neighbourhoods of a over q, of a specified width. When I multiply by w to the k, the width changes: alpha times w to the k now lies in an interval which is w to the k times wider, and I've written that in a form which emphasizes how it depends on the length of the new sum, roughly P over w. So this looks very nice. There's an a w to the k over q here; but if a and q are coprime, and w and q are coprime, then I'm just, in a sense, permuting the reduced residues mod q when I multiply by w to the k. So what we're doing is mapping the original set of arcs, under the scaling, to a new set of arcs which corresponds nicely to the shorter length of the exponential sum. And I can make a change of variable in the integration to replace alpha w to the k by beta, and that gives me a factor of w to the minus k. So everything is perfectly set up for this. The problematic part, which takes some effort to deal with, and this is where the skill set somehow comes in, is the following: to do this one uses techniques very similar to what I used in work on breaking classical convexity in Waring's problem. You have to work with a very careful decomposition of the exponential sums over smooth numbers, to arrange that you've got this factor of suitable size coprime to q which you can use. That takes some delicacy, especially since we're not looking at mean values over a complete unit interval; we really have to do this precisely, because we've got integrals that don't have any underlying diophantine interpretation. Well, I'm sticking with a slightly simplified version of the problem.
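In symbols, the scaling step looks roughly like this; I'm suppressing the smoothness conditions on the shortened variable and writing \(\Delta\) for the original arc width, so this is a sketch rather than the exact statement on the slides. With \(x = w y\) and \(\beta = \alpha w^{k}\),
\[
\int_{|\alpha - a/q| \le \Delta} \Bigl| \sum_{y \le P/w} e\bigl(\alpha w^{k} y^{k}\bigr) \Bigr|^{s}\, d\alpha \;=\; w^{-k} \int_{|\beta - a w^{k}/q| \le w^{k}\Delta} \Bigl| \sum_{y \le P/w} e\bigl(\beta y^{k}\bigr) \Bigr|^{s}\, d\beta,
\]
and when \((a,q) = (w,q) = 1\) the new centre \(a w^{k}/q\) again has numerator coprime to q, so the scaled arcs are honest arcs of the same complexity q for the shorter sum of length about \(P/w\).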
We can now make a choice of w, of our favourite size, so that when we insert this choice, with w of size roughly what we want, the upper bound that we get is in terms of arcs where the length of the new exponential sum is like Q to the two over k. This has been scaled so that we're now at exactly the kind of extreme end of the minor arcs which we wanted to target, which is kind of perfect. With this set of arcs, I've actually written something like the complete interval here, that's what I wrote down, but in fact I'm able to do this with arcs where I take out the middle, low-complexity part, so I really do have complexity measured by the parameter Q. And that's what we do, and even this gives you minor arc information, using the very strong minor arc bound from the work of Karatsuba and Brüdern. So anyway, we know what to do with all of this at this point. We need a saving of Q to the minus two over k or so; this is an exponential sum of length Q to the two over k, we've got the exponent s together with the delta saving from the mean value, and after a bit of bookkeeping this comes out as o of P to the s minus k, and everything works beautifully. So that's the scaling idea, and we can do all of this relative to sets which are still minor arcs. And just to summarize, here is what I regard as the key result, something which for many, many purposes is probably a plug-and-play result for people who might want to use this kind of machinery. What the result is saying is this: imagine the exponent that you would get by assuming you have this very strong minor arc bound, which in truth you only get on the extreme set of minor arcs, in your traditional version of the circle method, plus something else which is innocent and isn't going to hurt you. Then what you get is an upper bound for minor arcs of the traditional type, where the complexity of the denominators is measured in terms of Q, which could be anything between one and P to the k over two, and the upper bound is as good as you would expect: you win a power of capital Q, as long as delta star is negative. That ought to be good enough to apply in almost any situation you can think of. Good. So I think that's where I can end, and the advertised new world record is sitting right there. That's a challenge for anybody to do something better. Thank you for your attention.