Thanks very much for the introduction. It's a pleasure to be speaking here. So my topic is short exponential sums of the primes, and this is based on several joint works with Kaisa Matomäki, Maxim Radziwiłł, Xuancheng Shao, Terry Tao, and Tamar Ziegler. I'll start by explaining what sort of exponential sums we were looking at and what was previously known about them. Then I'll state the new main theorems we can prove about these short exponential sums, discuss some applications of those exponential sum results, and finally say a few words about the proofs of some of these results. OK, so the main question in this talk is the following. Given some interesting arithmetic function f, what can we say about short exponential sums of that function? By a short exponential sum, I mean the usual sum of f twisted by an additive character, taken over a short interval of length x^θ, or more generally with a polynomial phase twist: you twist by e(P(n)), where P is any polynomial of a fixed degree. We'll mostly be looking at this more general case of polynomial phase twists. The key parameter here is the interval length x^θ: how small can you make θ? Of course, it depends on your function, but that's the parameter we try to minimize. And the kind of bound we want for these sums depends on whether your polynomial is major arc in a certain sense, in which case you get an asymptotic, or minor arc, in which case you expect cancellation in the sum. For the time being, we'll consider all intervals, but one could also consider this exponential sum for almost all choices of the interval; we'll do that a bit later in the talk. We'll mostly concentrate on three very natural arithmetic functions. One of them is the Möbius function.
So it gives the parity of the number of prime factors of the integer n if n is squarefree, and 0 otherwise. For the Möbius function, we would always expect cancellation in this kind of exponential sum, no matter what the polynomial is. Secondly, we'll consider the primes, or rather the von Mangoldt function, a weighted indicator of the primes: it gives weight log p when n is a power of the prime p, and 0 otherwise. For the exponential sum of the von Mangoldt function, you expect cancellation when your polynomial is minor arc, and you expect a certain main term when your polynomial has coefficients close to rationals with small denominator. And finally, we'll consider the higher order divisor functions: d_k(n) is the number of ways to write n as a product of k natural numbers. So these are the three classes of functions that we look at in this talk. One could certainly consider plenty of other interesting multiplicative functions as well, and some of our results could possibly be adapted to those functions, things like the indicator of sums of two squares or various others, but I'll concentrate on these three for the rest of the talk. Now, if we want to understand the short exponential sums of any of these functions, the Möbius function, the von Mangoldt function, or the divisor functions, it's first worth reviewing what's known about the long sums, that is, the dyadic sums. Here P is any polynomial of fixed degree. It turns out that these long exponential sums with a polynomial phase twist are well understood by classical works of Vinogradov and of Davenport and Hua, basically from the 1930s. They studied the Waring-Goldbach problem, and this kind of exponential sum arose there. In particular, for any of these functions one can get an arbitrary power of log of cancellation in the long exponential sum, unless the polynomial is major arc in a certain sense. And what do we mean by major arc?
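As a concrete reference point, here is a minimal sketch in Python of the three arithmetic functions just defined (naive trial-division implementations for illustration only; the function names are my own, not from the talk):

```python
from math import log

def factorize(n):
    """Return the prime factorization of n as a dict {prime: exponent}."""
    factors = {}
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def mobius(n):
    """mu(n): (-1)^(number of prime factors) if n is squarefree, else 0."""
    f = factorize(n)
    if any(e > 1 for e in f.values()):
        return 0
    return (-1) ** len(f)

def von_mangoldt(n):
    """Lambda(n): log p if n is a power of the prime p, else 0."""
    f = factorize(n)
    if len(f) == 1:
        (p, _), = f.items()
        return log(p)
    return 0.0

def d_k(k, n):
    """d_k(n): number of ways to write n as an ordered product of k factors,
    computed via the convolution identity d_k = 1 * d_{k-1}."""
    if k == 1:
        return 1
    return sum(d_k(k - 1, n // d) for d in range(1, n + 1) if n % d == 0)
```

For instance, d_2 is the usual divisor-counting function, and the recursion makes the identity d_k = 1 * d_{k-1}, used again later in the talk, completely explicit.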
I mean that all the coefficients of your polynomial are very close to rationals of small denominator. To make it more precise: there is some number q, not too large, bounded by a power of log, such that if the α_j are the coefficients of your polynomial, then q·α_j is extremely close to an integer, in a precise quantitative sense. Okay, so that's the major arc case, and in the complementary case you get cancellation. And what if we are in the major arc case? Well, then we also understand what happens for these long exponential sums, basically because all the coefficients of P are very close to rationals of denominator q, so P(n) is essentially a q-periodic function with q bounded by a power of log. Therefore, if you're twisting f by a q-periodic function, you just need to understand the mean value of f in residue classes modulo q, and that you can do by the Siegel-Walfisz theorem, or variants of that theorem for the Möbius function and the divisor functions. So also in the major arc case we can understand these sums and get a main term for them, although that main term may look a little complicated; it's the term that comes out of the circle method. Now, there is a simplification that we'll use in this talk, which avoids the need to split into major and minor arcs and to define precisely the parameters that determine those arcs, and also means we don't need to worry about what kind of main terms we would get. It works as follows: we compare our arithmetic function f to a simpler model function. So f^model will be some function which behaves very much like f, in the sense that the exponential sum of f is very close, pointwise, to the exponential sum of f^model, and, crucially, this model function will be a lot simpler to work with.
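In symbols, the major arc condition just described has roughly the following shape (this is my reconstruction of the standard condition; the exact powers of log depend on the formulation):

```latex
% Major arc condition for P(n) = \alpha_d n^d + \dots + \alpha_1 n:
\exists\, q \le (\log x)^{C} \quad \text{such that} \quad
\| q \alpha_j \| \le \frac{(\log x)^{C}}{x^{j}} \quad (1 \le j \le d),
% where \|t\| denotes the distance from t to the nearest integer.
```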
So we want to find a model function such that on the Fourier side it's very close to the original function, but easier to work with. If we do this, then we don't need to worry about the major and minor arc splitting and which of those cases we are in; we just compare f to this model. Of course, the question arises: what models should we pick for the Möbius function, the von Mangoldt function and the divisor functions, so that we can hope to easily compare the exponential sums of the original functions with those of the models? For the Möbius function it's very simple: we just take the model function to be zero, because we don't expect any main term in exponential sums of the Möbius function. By the Möbius randomness heuristic, you expect cancellation for every single polynomial, so there's no need to subtract any model function from the Möbius function. For the von Mangoldt function, we do need to subtract a model function to avoid the major and minor arc decomposition. The model that we chose is what's called the W-tricked model: basically, you approximate the indicator of the primes by only taking into account the small prime factors. You use the fact that primes never have prime factors less than some parameter w, you don't worry about the large prime factors, and you take the indicator of numbers with no prime factor below w, renormalized so that its mean value is one. The truncation parameter w that we take for the small prime factors is roughly exp((log x)^{1/10}), but the precise value is not too important; there's quite a lot of wiggle room, and one could change this parameter quite a bit. We chose this particular value just to balance some error terms.
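A sketch of the W-tricked model just described (this is the standard normalization; the precise truncation parameter w is as in the talk):

```latex
% W-tricked model for the von Mangoldt function:
\Lambda^{\sharp}(n) \;=\; \frac{W}{\varphi(W)}\,\mathbf{1}_{(n,W)=1},
\qquad W \;=\; \prod_{p \le w} p,
% the factor W/\varphi(W) renormalizes the indicator so that
% the mean value of \Lambda^{\sharp} is 1.
```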
For this model function, one can show, for example, that the partial sums behave very much like those of the von Mangoldt function, basically by the fundamental lemma of sieve theory. For d_k it's slightly trickier to find a good model function, but one can take a certain truncated divisor sum: you take the divisor sum over only small divisors, going up to a very small power x^{η_k} of x, where η is a very small number, and you have certain polynomials evaluated at log n, where these polynomials are explicit polynomials of degree k-1. I won't give the definition of these polynomials; they're fully explicit, but it takes some space. So it's not an entirely trivial model function, but the key point is that the divisor sum is short: the inner variable only goes up to a tiny power of x, and therefore this is a nice type I sum that's a lot easier to understand than the original d_k function. Similarly for the model of the von Mangoldt function: for that function we can, for example, evaluate its correlations using the fundamental lemma without any problem, because the W parameter is not too large. And likewise for the d_k model we can evaluate its correlations, because again the divisor sum is truncated at a small power of x. This is a bit like what often happens in additive combinatorics, especially in connection with the transference principle: if someone has an application to counting linear patterns weighted by a function f, then it's often simpler to first model f by a simpler function, and then evaluate the count of those patterns weighted by the simpler function. One could also formulate all our results with the usual major and minor arc decomposition; it would just look slightly clunkier. Okay, and actually there are several possible models for each of these functions; the model is not unique, and one could use different ones and get largely similar results.
So for example, for the von Mangoldt function one could also take a truncated version of the convolution identity that Λ is μ convolved with the logarithm function. If one truncates that identity, one gets an approximation to the von Mangoldt function: you just truncate the divisor sum at some parameter, and you could take, for example, this value for the parameter. Or for d_k, there are actually simpler model functions if one only aims for a small amount of cancellation in the exponential sum. The model that we chose gives power savings, but if one only wanted to save a small power of log, one could use a simpler model: you take the identity that d_k is 1 convolved with d_{k-1} and you truncate it, so the divisor only runs up to n^α, and you normalize by a factor like α^{-(k-1)}. That's the model introduced by Andrea Smith. But the reason we chose these particular models is, for the von Mangoldt function: firstly, it's easy to compute the correlations of the W-tricked function, and secondly, it's a non-negative function, which is helpful for some applications to additive combinatorics, as I'll mention later. Basically, in the theory of Gowers norms you need pseudorandom majorants, and if you have a non-negative function it's a lot easier to construct those majorants. For the d_k function, we chose our model because with that model you can get power-saving error terms. But again, if you don't care about the quality of the error terms, you could use something simpler; there's flexibility in which model you choose. Okay, now let me discuss what was previously known about the short exponential sums, starting with the von Mangoldt case. If you have the von Mangoldt function twisted by a polynomial phase, there's work of Zhan, who handled intervals of length x^{5/8+ε} and degree-one polynomials.
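For reference, the two alternative models just mentioned can be sketched as follows (my reconstruction from the talk's description; treat the exact truncation points and normalizations as indicative):

```latex
% Truncated version of the identity \Lambda = \mu * \log:
\Lambda^{\flat}(n) \;=\; \sum_{\substack{d \mid n \\ d \le R}} \mu(d)\,\log\frac{n}{d},
\qquad
% Simpler model for d_k, from the identity d_k = 1 * d_{k-1},
% truncated at n^{\alpha} and renormalized:
d_k^{\flat}(n) \;=\; \alpha^{-(k-1)} \sum_{\substack{d \mid n \\ d \le n^{\alpha}}} d_{k-1}(d).
```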
So for the phase e(αn), he proved cancellation in this sum in the minor arc case and a main term in the major arc case. A few years ago, Matomäki and Shao handled the case of higher degree polynomials, and they got an interval length of x^{2/3+ε}; two thirds is slightly bigger than five over eight. For the Möbius case, one can actually do a bit better: Matomäki and myself proved that for Möbius you can take intervals of length x^{3/5+ε}, again in the degree-one case, although the price you pay is that the amount of cancellation is a small power of log, as opposed to an arbitrary power of log. And finally, for the divisor functions: if one, say, adapts the method of Matomäki and Shao, one can prove cancellation in this exponential sum for intervals of length x^{1/2+ε} in the case of d_2, and for intervals of length x^{2/3+ε} in the case of d_3 and higher, the same interval length as for the von Mangoldt function. As I already mentioned, one can also consider what happens in almost all intervals. So what if you only consider this exponential sum for almost all values of x up to X, meaning that we can throw away o(X) bad values of x where we don't know what happens? Then you can ask a similar question, but now with a supremum: for every short interval, take the worst possible polynomial, the one that makes the exponential sum largest, and ask whether for almost all intervals the sum is still small for that worst possible polynomial. So it's a supremum problem in short intervals, which is considerably harder than if you fixed the polynomial, say e(√2 n²) for every interval, because here we're allowed to vary the polynomial depending on the interval. There are not too many results about this, but there is a result of Matomäki, Radziwiłł and Tao.
They proved that for the Möbius function you can take intervals of length x^ε and, for almost all intervals, get cancellation in this short exponential sum with the supremum. For the von Mangoldt function, one could of course consider the simplest possible case, where the degree of the polynomial is zero; in other words, there's no polynomial phase at all. Then you're looking at primes in short intervals, and for almost all intervals the best result is due to Huxley: intervals of length x^{1/6+ε}. So that's certainly a limit on what one can hope for for the primes, because that's what we can do even in the degree-zero case. Matomäki, Radziwiłł and Tao also considered the d_k case with degree zero, so just a short sum of d_k, and there again one can take intervals of length x^ε and get an asymptotic. Okay, so that's basically what's known about the almost-all problem. Now we come to our main results. We have results about all intervals and about almost all intervals, so let's start with the all-intervals case. This is joint work of Matomäki, Shao, Tao and myself. The setup is the following: take your function f, which is either μ or Λ or d_k, subtract the model so that we don't need to worry about major and minor arcs, and look at this exponential sum, taking the supremum over all polynomials of a fixed degree d and looking for cancellation in the sum. The trivial bound is H. And recall the model functions: the model for Möbius is zero, and the model for the von Mangoldt function is the W-tricked model. In any of the following cases we get cancellation in the sum. First, if f is either the Möbius or the von Mangoldt function, then we can take intervals of length x^{5/8+ε}, and the saving we get is an arbitrary power of log.
Secondly, if f is either the Möbius function or d_k with k at least four, we can take intervals of length x^{3/5+ε}, but then we only get a small power of log saving, as opposed to an arbitrary power of log. So the exponent is smaller, but the saving is also smaller. And finally, if f is the divisor function d_k: for d_2 we can take intervals of length x^{1/3+ε}, for d_3 intervals of length x^{5/9+ε}, and for higher d_k intervals of length x^{5/8+ε}, all with power-saving error terms. Here perhaps the most interesting case is the d_2 case, where we get down to intervals of length x^{1/3}. Note that to some functions you can apply several of these results. For example, to the Möbius function you can apply either part one or part two, but they don't imply each other, because there's a trade-off here: if you want arbitrary power of log savings, then you need to take 5/8 as your exponent, while if you settle for saving a small power of log, you can do a better exponent. Similarly with d_k: if you want power savings, for k at least four the best we can do is 5/8, but if you just want a power of log saving, you can do 3/5. And again, the point of the model function is just that you can take this supremum over all polynomials. Are there any questions about this theorem or anything so far? A question from the audience: "What does c_K look like as K grows?" So it's quite a small power, maybe something comparable to 1/K, some constant divided by K, I think, if I'm not mistaken. There's one question in the chat, from Jakob Staipel: "Is there some simple intuition for why we get these uniform results for d_k?" Okay, so I assume you mean why you can separate this model from the d_k function, so that you don't need to worry about what your polynomial looks like.
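Schematically, the all-intervals theorem just stated bounds, for H = x^θ in the ranges listed,

```latex
\sup_{\deg P \le d}\ \Bigl|\sum_{x < n \le x+H} \bigl(f(n) - f^{\sharp}(n)\bigr)\, e(P(n))\Bigr|
\;=\; o(H),
% with:
%   \theta = 5/8 + \varepsilon  (arbitrary power of log saving),  f \in \{\mu, \Lambda\};
%   \theta = 3/5 + \varepsilon  (small power of log saving),      f = \mu or d_k, k \ge 4;
%   \theta = 1/3, 5/9, 5/8 (+\varepsilon, power saving)           for d_2, d_3, higher d_k.
```

Here f^♯ denotes the model function: zero for Möbius, the W-tricked model for von Mangoldt, and the truncated divisor sum for d_k.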
Basically, you would expect to be able to do that for almost any reasonable function. Say you start with the convolution identity for d_k; if we're thinking, for example, of the simpler model of Andrea Smith that I mentioned, which was this function here, it just comes from the fact that d_k is the convolution of 1 and d_{k-1}, which you then truncate. You expect that on the Fourier side the Fourier characters only see the small divisors, the divisors going up to a certain small power, and for the rest of the sum you expect cancellation. Therefore this sum should be a good model for d_k; it should be normalized so that it also agrees with d_k in the major arcs. Andrew Granville, would you like to unmute? "Well, it's related to the question you just answered, but in your model for d_k you looked at divisors less than n^ε. So you take no account of the large prime factors at all, and so you'd expect the divisor function to be off by a constant factor. So it's surprising that you claim it's a good model." Yeah, in fact, it's not such a good model that you would get very strong error terms, and that's exactly the reason we had the more complicated one, which was a sum involving suitable polynomials in log n, coming basically from a residue calculation. So indeed, this simpler model only agrees with the divisor function up to a small power of log, and if you want a model that agrees up to power-saving error terms, then you get something like ours: starting again with the convolution identity, approximating sums by integrals and then completing those integrals, something like this pops up. "Okay, thank you." Okay. So a few remarks are in order. The first part of the theorem generalizes Zhan's result, because Zhan got the same exponent for degree-one polynomials.
It also improves on the exponent of Matomäki and Shao, who had two thirds for arbitrary polynomials. The second part extends the work of Kaisa and myself, where we handled the degree-one case for Möbius and got intervals of length x^{3/5}. And finally the d_2 case, the third part: if you take degree-zero polynomials, in other words no polynomial twist at all, then it basically recovers the classical Voronoi-type result that you can handle short sums of d_2 of length x^{1/3+ε}. So x^{1/3} is a natural exponent; it's the one that comes up in Voronoi's work as well. Okay, so that's the result about all intervals. What about almost all intervals? This is still work in progress with Matomäki, Radziwiłł, Shao and Tao. If we now look at the same exponential sums for almost all values of x, allowing an exceptional set of size X times an arbitrary negative power of log, then again we get cancellation in any of the following cases. For the von Mangoldt function, we can now do intervals of length x^{1/3+ε} for almost all short sums, with an arbitrary power of log saving. And for either the Möbius function or the divisor functions, we can do intervals of length x^ε, but now δ is fixed; in other words, we get a qualitative saving and not a quantitative one. Okay, again a few remarks. As far as I know, the result for von Mangoldt is new even in the degree-one case; the fact that we have the supremum here, taking the worst possible polynomial for every interval, makes the problem quite a bit harder. Secondly, the d_k result is also new, and the result for the Möbius function we proved in an earlier paper with Matomäki, Radziwiłł, Tao and Ziegler, but with only a qualitative saving on the exceptional set: there we were saying that there are o(X) exceptional values of the interval.
Here we get a quantitative saving, a power of log, on the set of intervals. Okay, let me also mention this earlier result, from 2020, because it's not superseded by the new one, for the following reason. There we just considered the Möbius function, or in fact bounded multiplicative functions, and we got qualitative cancellation in the short exponential sum with the supremum for almost all values of x, but we could take the length parameter h quite small: exp((log X)^{5/8+ε}), whereas on the previous slide it was x^ε. This theorem generalizes what Matomäki, Radziwiłł and Tao did in the degree-one case, where they had h = x^ε. And what's interesting here is that if one could do considerably shorter intervals still, so if one could take h as small as (log x)^{o(1)}, then Chowla's conjecture would follow. Chowla's conjecture, let me recall, is the statement that the Möbius function does not correlate with its own shifts: if you take any shifts a_1 up to a_k, multiply together the correspondingly shifted copies of the Möbius function, and sum, you should get cancellation. That's Chowla's conjecture. And if somehow one could do much, much better in the h aspect, one could prove Chowla, but we don't know how to do that. Let me also mention that we can deal with a larger class of phase functions than just the polynomial exponential phases: in fact, we can handle the class of nilsequences of degree d, and we get the same results for them. So in the first result I mentioned, about all intervals, and in the second result, about almost all intervals, instead of the supremum over polynomials you can take a supremum over the larger class of nilsequences. Okay, so what are nilsequences? Well, we don't need the precise definition in this talk.
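Chowla's conjecture, in the form just recalled, states that for any fixed distinct shifts a_1, ..., a_k,

```latex
\sum_{n \le x} \mu(n + a_1)\,\mu(n + a_2)\cdots\mu(n + a_k) \;=\; o(x)
\qquad (x \to \infty).
```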
Examples are more than enough; we won't really use this term later. Firstly, any polynomial exponential phase is a nilsequence. Secondly, you can also have things like bracket polynomials: you combine polynomial operations with the floor function in any way, and you get objects such as this, which are also essentially nilsequences. And finally, let me just briefly mention the general definition, although we won't need it. You take a nilmanifold G/Γ: G is a nilpotent Lie group and Γ is a lattice inside it. Then you take a polynomial map g from the integers to the Lie group, and finally a Lipschitz function F on the nilmanifold, and you compose them as F(g(n)Γ); that's your nilsequence. To give the simplest possible example: if your Lie group is the real numbers and the lattice is the integers, then you have R/Z, the torus, which is a nilmanifold. On the torus, you would just have a polynomial modulo one and a Lipschitz function of that; if you fully Fourier-expand the Lipschitz function, you get basically these polynomial exponential phases. So nilsequences on that nilmanifold boil down to the polynomial case. The reason I'm mentioning the nilsequence case is that it's what we need for some applications to additive combinatorics, and the connection to nilsequences comes via the Gowers norms. So let me briefly mention what the Gowers norms are; again, we won't need the concept after this slide. If you have a function on the integers with finite support, you can define a certain counting object: it counts k-dimensional parallelepipeds weighted by your function f. And then you can also define this object for a function defined on a short interval.
That's the case we're interested in, the short interval Gowers norms: you restrict your function to the interval, take the U^k Gowers norm, and normalize by the U^k norm of the indicator function of the interval. These are objects that naturally arise when you try to count any kind of patterns in your set. For example, if f were the indicator of the primes and you were counting linear equations in the primes, you would naturally run into these Gowers norms. And the key point is that there's something called the inverse theorem for the Gowers norms, which says that proving bounds for these Gowers norms is equivalent to bounding exponential sums of f twisted by nilsequences. That's where the nilsequence results come into play, and they immediately imply the following. If we again take f to be either the Möbius function or the von Mangoldt function, and we look at the short U^k Gowers norm of f minus its model, we get some cancellation here: the trivial bound would be O(1), and we get o(1) in any of the following cases. These cases directly correspond to the two previous theorems that we had, the all-intervals case and the almost-all-intervals case. So for example, for Möbius we can take intervals of length x^{3/5}, and for von Mangoldt intervals of length x^{5/8}; these are exactly the same exponents as before, and for exactly the same reasons. It's really just an application of those results with a nilsequence twist. And we can also take the Möbius function with intervals of length x^ε for almost all x, or the von Mangoldt function with intervals of length x^{1/3+ε} for almost all x; these correspond to the almost-all results that we had. I'm mentioning this Gowers norm result because it's the one that we apply for most of our applications. So let me now mention a few applications that we have of these results.
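For orientation, the Gowers norm objects just described are usually written as follows (standard definitions, with C denoting complex conjugation; the short interval norm is the normalized restriction described in the talk):

```latex
\|f\|_{U^k}^{2^k}
\;=\; \sum_{n,\,h_1,\dots,h_k \in \mathbb{Z}}\ \prod_{\omega \in \{0,1\}^k}
\mathcal{C}^{|\omega|} f\bigl(n + \omega_1 h_1 + \dots + \omega_k h_k\bigr),
\qquad
\|f\|_{U^k(x,\,x+H]} \;=\;
\frac{\bigl\|f\,\mathbf{1}_{(x,x+H]}\bigr\|_{U^k}}{\bigl\|\mathbf{1}_{(x,x+H]}\bigr\|_{U^k}}.
```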
So firstly, we can prove a short interval version of the linear equations in primes theorem of Green, Tao and Ziegler. This uses our all-intervals result, and in particular part two of this corollary about Gowers norms. Using the fact that the Gowers norm of von Mangoldt is small on intervals of length x^{5/8}, we can say the following: if you take any system of k linear forms in several variables, and these linear forms are pairwise independent, then we can count how often all of them are simultaneously prime, with, crucially, the vector n restricted to a short interval. So all the variables are restricted to short intervals of length x^{5/8}. I won't write the asymptotic here, but it's the prediction of the local-global principle; it's what you would expect to get. So you can do things like the following: say you have the ternary Goldbach equation and you want all of your primes to be very close to each other, within n^{5/8+ε} of each other, and you also add some extra conditions, like say 2p_1 - p_2 is also prime. We can find solutions to this kind of system, and you could add more equations if you wanted. For the Goldbach equation alone there are already results of this shape with a better exponent, but we can take any number of these equations, as long as the system doesn't degenerate to something like twin primes or the binary Goldbach problem. This result directly generalizes the linear equations in primes theorem of Green, Tao and Ziegler, as I said; they proved the same result for variables coming from long intervals. Another application of the Gowers norm results, as briefly mentioned, is to ergodic theory: there's a result of Frantzikinakis, Host and Kra, and of Wooley and Ziegler, about multiple ergodic averages along the primes, and we can now prove a short interval version of that result.
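Schematically, the short interval count just described has the following shape, for pairwise independent linear forms ψ_1, ..., ψ_k : Z^t → Z (my paraphrase; the main term is left implicit, as in the talk, and is the singular product predicted by the local-global principle):

```latex
\sum_{\mathbf{n} \in (x,\, x + H]^{t}}
\ \prod_{i=1}^{k} \Lambda\bigl(\psi_i(\mathbf{n})\bigr)
\;=\; \bigl(\text{local-global main term}\bigr)\cdot H^{t} \;+\; o\bigl(H^{t}\bigr),
\qquad H = x^{5/8+\varepsilon}.
```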
Basically, that result is based on the Gowers uniformity of the von Mangoldt function, which we can now establish in short intervals, so we get a short interval version of their result as well. Okay, we also have applications to correlations on average. In particular, I already mentioned Chowla's conjecture, the claim that the shifts of Möbius are independent of each other, so they don't correlate. This is currently known only for k = 2 and for odd k, but using our result from 2020, so with Matomäki, Radziwiłł, Tao and Ziegler, we proved that for almost all shifts h up to x^ε you have cancellation in this correlation of Möbius, for any fixed k; k could be anything here. And of course you try to minimize the amount of averaging: if you could take h up to a bounded quantity, that would be precisely Chowla's conjecture, and we could handle the range of h up to x^ε. So the range of h is what you want to minimize: how little you need to average. Also, using our results on the von Mangoldt and divisor functions in almost all short intervals, and translating those again into the language of Gowers norms, we can prove some results about the Hardy-Littlewood conjecture and the divisor correlation conjecture on average over shifts. For the von Mangoldt case, the Hardy-Littlewood conjecture, we can say that for almost all h up to x^{1/3+ε}, if you look at this one-dimensional correlation of the von Mangoldt function, you get the expected asymptotic. I won't write down the expected main term S(h), but it's a certain explicitly computable Euler product, the singular series, and you get that as your main term for almost all h up to this threshold. For the k = 2 case, there's an earlier result of Matomäki, Radziwiłł and Tao, who got an exponent of 8/33, which is about one fourth. But as far as I know, for larger values of k the previous result was the Green-Tao result, which handled the case of long sums.
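In symbols, the averaged Hardy-Littlewood result says that for almost all shifts 1 ≤ h ≤ X^{1/3+ε},

```latex
\sum_{n \le X} \Lambda(n)\,\Lambda(n+h)
\;=\; \mathfrak{S}(h)\, X \;+\; o(X),
\qquad
\mathfrak{S}(h) \;=\; 2\,\Pi_2 \prod_{\substack{p \mid h \\ p > 2}} \frac{p-1}{p-2}
\ \ \text{for even } h,
```

where Π_2 is the twin prime constant and 𝔖(h) = 0 for odd h (this is the standard singular series; the talk only says "an explicitly computable Euler product").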
So with h going up to X. And for the divisor correlations, we only need to average h up to x^ε: you take some divisor function d_l, you take its correlation with d_k over a single shift parameter h, and we get the expected main term, again for almost all h. For k = 2, something like this was known, but not for higher values of k. Okay, so those are the main applications that we have. In the remaining time, I'll tell you just a bit about the proofs. Obviously there are several theorems here, and I'll concentrate on the all-intervals theorem for the von Mangoldt function: the statement that for intervals of length x^{5/8+ε}, the exponential sum of the von Mangoldt function minus its model, twisted by any polynomial phase, has cancellation. So let's concentrate on that proof, and the first step, as is natural, is Heath-Brown's identity. Heath-Brown's identity allows you to split the von Mangoldt function into type I and type II sums, so you get a linear combination of sums which essentially look like this: e(P(n_1 ⋯ n_k)), where the product n_1 ⋯ n_k is between x and x + h and the n_i are in dyadic intervals, say n_i of size about x^{α_i}, where the α_i are exponents summing to one. So we need to analyze this kind of sum, and the analysis of course depends on the values of the α_i. Depending on the ranges of the x^{α_i}, we'll use either a type I estimate, a type II estimate, or a kind of type I_2 estimate, which is for the divisor function. Firstly, if we are in the case where k = 2, then, as we noted, our sum is a sum over the lattice points under a hyperbola: we have the hyperbola m·n = constant, and we're counting the lattice points under that curve.
And so the way we approach this is by decomposing the set of lattice points under the hyperbola into a union of two-dimensional arithmetic progressions. Basically you approximate the slope of the hyperbola, just as in the Hardy–Littlewood circle method you approximate real numbers by rationals with small denominator. So we want to get not too many arithmetic progressions, and with not too large moduli. Once we make this decomposition, splitting all the points under the hyperbola into a bunch of two-dimensional arithmetic progressions, we're left with sums of the form e(P(n_1 n_2)) with (n_1, n_2) lying in a two-dimensional AP. And sums of this kind we can understand, basically because this corresponds to e of another polynomial: on the progression, n_1 and n_2 are linear forms in the progression parameters, so you get some quadratic expression inside, and therefore it's another polynomial phase that you're looking at, and that kind of sum we can certainly understand. And the key thing here is that this decomposition only works for H bigger than X to the one third plus epsilon. That's where the limitation for d_2 comes from: if the interval gets shorter than X to the one third, the decomposition no longer works. You could also have the case where one of the variables is very long, bigger than X to the one minus theta, where theta is five over eight. Then we prove a type I estimate. This is the simpler estimate, so I won't concentrate on that. Let's instead consider the ranges in which we can prove a type II estimate. It turns out that if, among the alpha_i — these were the sizes of our divisors — you can find a subsum lying between one minus theta and theta, where theta is five over eight, then we can prove a type II estimate in the minor arc case. By the minor arc case — or rather, let me define the major arc case — I mean the case where our polynomial phase e(P(n)) looks like an Archimedean character n to the it.
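To see why these sums reduce to another polynomial phase, one can parametrize a two-dimensional progression of common difference q (my notation for illustration) and substitute:

```latex
% On the progression (n_1, n_2) = (a_1 + q t_1,\, a_2 + q t_2):
e\bigl(P(n_1 n_2)\bigr)
= e\bigl(P\bigl((a_1 + q t_1)(a_2 + q t_2)\bigr)\bigr)
= e\bigl(Q(t_1, t_2)\bigr),
% where Q is a polynomial in (t_1, t_2) of degree 2 \deg P.
```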
So note that this can actually happen: if, for example, P(n) is a truncation of the Taylor series of t times log n (suitably normalized), then e(P(n)) is indeed approximately n to the it. So this is what I call the major arc case. And if you are not in the major arc case, then by following the earlier approach of Matomäki and Shao, who obtained the exponent two thirds, we can get this kind of type II information. For the nilsequence case, there are some extra complications in the type II estimate: we need a large sieve for nilsequences as well as a multi-parameter factorization theorem, but I won't say anything more about those. Then we're left with the type II major arc case. So what if e(P(n)) does look like n to the it? Well, then we basically have a multiplicative problem that we're facing: e(P(n_1 ··· n_k)) just becomes n_1 to the it times ··· times n_k to the it, and here we can apply Dirichlet polynomial methods — things like the Baker–Harman–Pintz estimates for Dirichlet polynomials. And it turns out that if there is a suitable decomposition of the index set — so the alpha_i can be decomposed into three sets such that the subsum of one of them lies in the right range and the other two subsums are close enough to each other — then we can prove a type II major arc estimate, that is, handle the case where e(P(n)) does look like an Archimedean character. Finally, the key point is that if theta is five over eight plus epsilon, we can combine all these things. You can use a bit of combinatorics to check that no matter what your alpha_i are, you're always in one of these cases: either the type I case, the d_2 case, or one of the type II cases — at least one of them holds. But as soon as you reach five over eight, there is a case you can't handle: exactly four variables, all of size one fourth, in the type II major arc case.
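The major arc approximation mentioned above can be sketched like this (up to the normalization of e(·), which I leave implicit):

```latex
% If 2\pi P(n) matches the Taylor expansion of t \log n around X to high order,
e\bigl(P(n)\bigr) \approx n^{it} = e^{it \log n}, \qquad X < n \le X + H,
% so the twisted sum becomes a multiplicative (Dirichlet polynomial) problem:
e\bigl(P(n_1 \cdots n_k)\bigr) \approx n_1^{it} \cdots n_k^{it}.
```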
So in this final case, if theta is five over eight, the interval from two theta minus one to four theta minus two becomes the interval from one fourth to one half, and you can never find a subsum of these four numbers lying strictly between one quarter and one half. So that's the bottleneck in our proof — a d_4-type case — and that's the reason we get this five over eight exponent. The proofs of the other all-interval results are similar. For the Möbius function we also use Heath-Brown's identity, and for d_k we already have the necessary decomposition into these type I and type II sums; then you just need to check in which ranges these type I and type II estimates are applicable. So that's basically how we proved the result for the von Mangoldt function in intervals of length X to the five over eight plus epsilon. And I think that's all I wanted to say. Thanks a lot for listening.
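The combinatorial bottleneck at theta equal to five over eight can be checked mechanically; here is a small sketch (the interval endpoints follow the ranges quoted above):

```python
from itertools import combinations

# At theta = 5/8, the type II major arc range (2*theta - 1, 4*theta - 2)
# degenerates to the open interval (1/4, 1/2).  With exactly four
# variables of size alpha_i = 1/4 each, no nonempty subsum of the
# alpha_i lands strictly inside that interval, so the estimate fails.
theta = 5 / 8
lo, hi = 2 * theta - 1, 4 * theta - 2      # the interval (0.25, 0.5)
alphas = [0.25, 0.25, 0.25, 0.25]

subsums = {sum(c) for r in range(1, len(alphas) + 1)
           for c in combinations(alphas, r)}
print(sorted(subsums))                      # [0.25, 0.5, 0.75, 1.0]
print(any(lo < s < hi for s in subsums))    # False: the bottleneck case
```

For theta slightly larger than five over eight, the upper endpoint four theta minus two exceeds one half, so the subsum one half falls inside the range and this configuration is covered — matching the five over eight plus epsilon threshold in the theorem.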