So today, I'm going to talk about how we prove equidistribution estimates for the primes. To remind you where we're at: the main conjecture in this area is the Elliott–Halberstam conjecture, which is the statement that for any fixed θ up to 1, you take the von Mangoldt function in arithmetic progressions, let's say between x and 2x for some large x, and measure its discrepancy. You take the worst discrepancy in a given modulus, and then sum over all moduli up to the level x^θ. The conjecture is that you beat the trivial bound, which is x log x, by any power of log that you please. So this is the Elliott–Halberstam conjecture. If we had this conjecture, we could do quite a lot of things, especially if we had a certain technical generalization of it; for instance, we could prove that gaps between primes are bounded by 6. But currently we only have the Bombieri–Vinogradov theorem, which proves this statement for θ up to 1/2. And even after the work of Zhang, we don't actually have any progress on the Elliott–Halberstam conjecture itself for any θ bigger than 1/2; in fact, even θ equal to 1/2, I think, is still open.

But what we do have is the following, due to Zhang: there exist some small parameters, which I'll call ϖ and δ. (Actually, in his paper he essentially just has the one parameter ϖ: he takes these parameters to be equal, and less than about 1/1000 or so.) There exist small constants ϖ and δ such that you have a weaker version of this. The supremum is outside, and the q can get a little bit above 1/2, a small amount above 1/2. We restrict q to be squarefree, and x^δ-smooth (or x^δ-friable), which means that the only primes that divide these q are less than x^δ. So you only take smooth, squarefree numbers. You do the same thing: you measure the discrepancies in absolute value, and you take the supremum over all a that are coprime to everything, that is, coprime to all primes less than x^δ. (If you wish, you could take a in the profinite integers or something, but it doesn't really matter.) So you take this discrepancy, and you can get an arbitrary log saving. This is what Zhang proved. And then later a Polymath project improved, if I recall correctly, this threshold by a factor of about 10.

[Audience: strictly speaking, that's not exactly what Zhang proved.] Yeah, OK: he didn't take the supremum over all a; his a was a specific a. But the proof goes through with a bit of tweaking. So Zhang has a specific a, OK. For the application to bounded gaps between primes, the specific a you need comes from a certain class of congruence classes produced by the Chinese remainder theorem. But you can abstract away that hypothesis and state it like this, which I think is actually the cleaner way to state it, because it's not clear you can use any special properties of a; you may as well take a arbitrary.

A key point here is that a can be unbounded. For fixed a, there were previous results like this by Fouvry, and by Bombieri, Friedlander and Iwaniec. Not exactly like this: they didn't quite have this absolute value here, which is very useful for us; instead they had signed sums, weighted by some nice weight. But the absolute value turns out not to be a serious obstacle; it just means you have to Cauchy–Schwarz a few more times than you otherwise would. The important thing is that this a can grow with x.
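In symbols, and with my own choice of normalization (the literature varies in where the 2ϖ goes and how the main term is written), the two statements are roughly the following:

```latex
% Elliott-Halberstam conjecture: for every fixed 0 < \theta < 1 and A > 0,
\sum_{q \le x^{\theta}} \max_{(a,q)=1}
  \Bigl|\sum_{\substack{x \le n < 2x \\ n \equiv a \ (\mathrm{mod}\ q)}} \Lambda(n)
   - \frac{1}{\varphi(q)}\sum_{\substack{x \le n < 2x \\ (n,q)=1}} \Lambda(n)\Bigr|
  \ll_A \frac{x}{\log^A x}.

% Zhang-type estimate: for some fixed \varpi, \delta > 0, with the supremum
% over residue classes a coprime to every prime below x^{\delta},
\sup_{a}\; \sum_{\substack{q \le x^{1/2+2\varpi} \\ q \text{ squarefree, } x^{\delta}\text{-smooth}}}
  \Bigl|\sum_{\substack{x \le n < 2x \\ n \equiv a \ (\mathrm{mod}\ q)}} \Lambda(n)
   - \frac{1}{\varphi(q)}\sum_{\substack{x \le n < 2x \\ (n,q)=1}} \Lambda(n)\Bigr|
  \ll_A \frac{x}{\log^A x}.
```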
So we don't assume a is bounded, and yet we have a bound which is uniform in a. And that's one of the main novelties in Zhang's method: previous techniques, which relied mostly on automorphic form methods, needed a fixed or bounded. OK, so that's the objective.

All right, so the first step is what I said before: you first use the Heath-Brown identity. What this does is split the von Mangoldt function into a bunch of pieces, polylog many pieces, and each piece falls into one of three types, as it turns out, once you do the combinatorics. There's Type 0, there's what one calls Type I and Type II, and there's a third type. (There's a distinction between Type I and Type II which is not showing up yet; it's not so important, the two types are almost the same.)

Type 0 pieces of the von Mangoldt function look like convolutions of two weights: this guy α supported on, say, [M, 2M], and this guy ψ on [N, 2N]. (Actually you don't make it exactly [N, 2N]; you make it a little narrower than this, but that's not important.) So α is just an arbitrary rough weight. It has some bounds: it's of polylog size, maybe up to some multiple of a divisor function, so it's basically bounded up to polylog errors. But ψ is smooth. It comes from some smooth function supported on the unit interval, with all derivatives bounded: you take a smooth function on [N, 2N] and sample it at integers, so you get a sort of discretized smooth function. This is a nice smooth sequence; this is your friend. The other guy is rough: same size, bounded, but it can oscillate in a way that has all kinds of nasty arithmetic structure in it, and we don't try to exploit any structure inside this coefficient. So Type 0 is something like this, where MN is comparable to x and N is pretty big: bigger than x^{3/5}, it turns out, the way you run the identity.

Type I and Type II sums are similar, except now you're convolving two rough functions, again supported on [M, 2M] and [N, 2N]. (Let me just make sure my M's and N's are where I want them to be; just so I don't confuse myself, I want the β to be at scale N.) So these guys are now both rough: they obey magnitude bounds, but no smoothness. Again MN is about x, and the way we normalize it, N is the small guy and M is the big guy; they're between x^{2/5} and x^{3/5}. So you have two rough functions at scales close to x^{1/2}, within about x^{1/10} of x^{1/2}, and you're convolving two rough guys together, so you get these bilinear rough sums. The distinction between Type I and Type II is that, for technical reasons, when N and M are both very close to √x the argument for Type I doesn't quite work, and you have to use a slightly different argument. That's a relatively minor point.

And then finally, the Type III sums look like a quadruple convolution. This guy α is rough and supported at some scale [M, 2M]; these guys are smooth and supported at scales N1, N2, N3. And again, the scales have to multiply up to about x, otherwise they won't contribute to your sum. And these N's are fairly big. Well, medium-sized, actually.
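Schematically (my notation, with the scales as just described), the three types of pieces look like this:

```latex
% Type 0: one rough factor \alpha, one wide smooth factor \psi.
\sum_{\substack{m \sim M,\; n \sim N \\ mn \equiv a \ (\mathrm{mod}\ q)}} \alpha(m)\,\psi(n),
\qquad MN \asymp x, \quad N \ge x^{3/5}.

% Type I/II: two rough factors at scales near \sqrt{x}.
\sum_{\substack{m \sim M,\; n \sim N \\ mn \equiv a \ (\mathrm{mod}\ q)}} \alpha(m)\,\beta(n),
\qquad MN \asymp x, \quad x^{2/5} \le N \le x^{1/2} \le M \le x^{3/5}.

% Type III: one rough factor and three smooth factors.
% (Model case: drop \alpha and take N_1 = N_2 = N_3 = x^{1/3}, which is
%  essentially a smoothed \tau_3 = 1 \star 1 \star 1 in progressions.)
\sum_{\substack{m \sim M,\; n_i \sim N_i \\ m n_1 n_2 n_3 \equiv a \ (\mathrm{mod}\ q)}}
  \alpha(m)\,\psi_1(n_1)\,\psi_2(n_2)\,\psi_3(n_3),
\qquad M N_1 N_2 N_3 \asymp x.
```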
N1, N2, N3 are between x^{1/5} and x^{2/5}, but their products are big: N1 N2, for example, is at least x^{3/5}. The precise numbers are not so important; these are medium-sized. A good example to keep in mind: if you just drop the α, you should think of a triple convolution where all three factors are supported at scale x^{1/3}. That's a good model case. This is also a component of the third divisor function. (I don't know the conventional notation; is it τ2 or τ3? τ3? OK.) If you convolve 1 with itself three times, you get pieces that look like this; we call that τ3. So the Type III sums are very similar to the third divisor function, which has been studied quite intensively. In fact, before Zhang's work there were existing results on the distribution of τ3 in arithmetic progressions: first by Friedlander and Iwaniec, I think, then Heath-Brown, and then Fouvry, Kowalski and Michel. And in fact, Zhang had a slightly different way of dealing with that sum, but it turns out that the recent work of Fouvry, Kowalski and Michel gives quite good results for this Type III sum. To the extent, in fact, that we now understand this sum quite well, and it's no longer the worst sum: it's the Type I and Type II sums that we understand the least.

OK, so this is what the Heath-Brown decomposition gives you. I should remark that if you enlarge the range of the Type I and Type II sums, so instead of being between x^{2/5} and x^{3/5} you enlarge this class to lie between x^{1/3} and x^{2/3}, so distance 1/6 from 1/2 rather than 1/10, then you don't need the Type III terms anymore. And in fact, you can just use Vaughan's identity to get the decomposition; you don't need this more difficult, or not that much more difficult, Heath-Brown identity. So there are some games you can play: if you wanted a shorter proof of these equidistribution estimates, with maybe a worse value of these ϖ and δ parameters, you could just prove Type I and Type II estimates and not deal with Type III estimates at all.

[Audience question about the specific exponents.] Oh yeah, these are the numbers; we've reoptimized them over Zhang's. Zhang has slightly different numbers here; there was a 3/8, as I remember. Yeah, it evolved over time, because as our technology improved, we improved the balance between the Type I, Type II and Type III estimates, and every time we did that, we had to re-derive the optimal thresholds between them. But this is where they ended up.

Yeah, so nowadays the Type III estimates are proven with, ultimately, Deligne's work on the Riemann hypothesis for varieties over finite fields as the main tool, which allows you to control higher-dimensional exponential sums. And currently we don't know of any way around it: I don't think we have any way to do the Type III sums without using Deligne's input right now. The Type I and Type II sums, you can use Deligne there too if you want to get the best balance. But actually, it turns out that you don't have to.
You can bound the Type I and Type II sums using the more elementary, or at least easier, Riemann hypothesis for curves, due to Weil, which gives you control of one-dimensional exponential sums. That turns out to be enough for the Type I and Type II sums, although you can stick in the more advanced Deligne estimates to push the exponents up a little bit more if you want. Which means that if you mess around with the thresholds and eliminate the Type III estimates using Vaughan's identity, you can, if you want, prove some equidistribution estimates without ever having to use Deligne's work. But the exponents you get are worse. For the best results, you need both.

OK. So now what you have to do is prove equidistribution estimates for each of these three types of pieces. The Type 0 estimates are very easy. If you have a Type 0 sum, the thing is that this smooth coefficient is so smooth and so wide: wider than x^{3/5}, which is wider than this q. Remember, q is going to be basically x^{1/2} plus a little bit. So you don't need to do any number theory to count this; you just go ahead and split the sum up into a double sum. ψ is smooth at some scale of size x^{3/5}, and q is less than x^{1/2} plus a little bit, so n is being restricted to an arithmetic progression mod q, which is much smaller in scale than the scale on which the smooth function is living. And so almost any technique will work here to estimate the sum: you can use Poisson summation, or quadrature, or Euler–Maclaurin, or something like that. Lots of things will work, and you get great bounds here. So this is easy to estimate; the Type 0 sums are not the difficult ones.

So let's talk about the Type I sums. And I think I need a full board, so I'll start over. The actual details and bookkeeping are really messy, so I'm not going to show you actual computations with any precision; I just want to give you the flavor of the type of things you do. In particular: why is it important (I just erased it) that you have squarefree and smooth moduli and so forth? This will show up very shortly.

OK, so to estimate the Type I sums, the type of thing that you care about: you're summing over all q up to a level just a little bit bigger than x^{1/2}; I'll just write x^{1/2} plus. Squarefree, and I'll just call it smooth; I'm not worried about exactly how smooth. And then you've got the absolute value of a sum over n ≡ a mod q of a convolution of two rough pieces. And let's say, for the sake of concreteness, that this guy α is rough at the scale x^{3/5} and this one β at the scale x^{2/5}; so this is the extreme case of the Type I sums. Minus the main term, which I won't write out explicitly. And you want to control this sort of sum.

All right, so how do you do this? So the first thing: these absolute values are annoying, so I want to delete them. But to delete them, you have to pay a price, which is an arbitrary coefficient. It's just some sign, plus or minus 1, depending on the sign of what's inside, and we don't control much about it. So eventually, the only thing we can do with it is kill it at some point, either by the triangle inequality or by Cauchy–Schwarz. But we don't do that just yet, because we need to do some other maneuvers first. So we unpack that.
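After deleting the absolute values, the sum we're unpacking has the shape (my notation; c_q is the sign just introduced, and the subtracted main term is suppressed):

```latex
\sum_{\substack{q \le x^{1/2+\varepsilon} \\ q\ \text{squarefree, smooth}}} c_q
\sum_{\substack{n \sim x \\ n \equiv a \ (\mathrm{mod}\ q)}} (\alpha \star \beta)(n),
\qquad c_q \in \{-1, +1\},
```

with α rough at scale x^{3/5} and β rough at scale x^{2/5}.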
Then this, of course, you can split into a double sum. And then, of course, there's another term, which I'm not going to write down. OK, and then there's this. Now, this q: here's where we start using the fact that q is smooth. Smooth is sort of the opposite of prime; it means that you are the product of many, many small factors. Picture the numbers from 1 to q. If q is prime, then q has no factor between 1 and q. But if q is smooth, then you have lots and lots of factors between 1 and q. So if, for example, q is x^δ-smooth, so that no prime factor is bigger than x^δ, then it's easy to see, just from the greedy algorithm, that for any intermediate number R there must be at least one factor of q in the interval between R and R x^δ. Because you just keep multiplying in the factors of q, the primes of q, one at a time, and because you always take small jumps, at some point you must land in this interval; it's a discrete version of the intermediate value theorem. So basically, forgetting this factor of x^δ, you can factorize q wherever you want: q is well-factorable, in some sense. So basically, you can split q into two factors, which, q being squarefree, are coprime, and you can place the split basically wherever you want.

So, with a bit of prescience, I'll tell you where you want to factor. It turns out that if q is of size about x^{1/2} plus a little bit, you'll factor it into two pieces, r and s, where r is going to be about x^{2/5} minus a little bit, so s is going to be x^{1/10} plus two little bits, to compensate. So you can factor any q into these two pieces, r and s. (Actually, so that my notation stays consistent with what follows, let me rename: the original modulus I'll call q', and I'll write q' = qr, where q is now the small factor. I'm sorry, this is bad of me, but I'll confuse myself if I don't follow my own notation.) OK: so you can split your original modulus into a modulus of size about x^{1/10} and a modulus of size x^{2/5} minus a little bit. And the point of this minus is that it's just a little bit, so basically the scale of r is just a tiny bit lower than the scale of N. And this turns out to be the important thing.

OK, so you can factorize q. And you should do this before you take the absolute values and turn them into signs. So actually: keep the absolute values here, split up the sum, and then introduce the sign. If you do all that, you start staring at something that looks like this, with n of size x. It looks kind of scary because there are so many summations over there, but actually having lots of summations is your friend, because you have many opportunities to cancel and do Cauchy–Schwarz and so forth. (And then there's some other term which I'm not going to write down.) OK, so we now have two moduli, we have two parameters here, and there's this constraint, which is now mod qr. And so you have this sum here.

So the whole game is to rearrange things in such a way that you can actually do something useful, like a Cauchy–Schwarz. Here we're following what Zhang did; I imagine he must have proceeded by trial and error for many years, actually. It's not always obvious how to arrange these things in such a way that you actually get somewhere. Though again, there were previous works of Bombieri, Friedlander and Iwaniec that did similar things.
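By the way, here is a minimal sketch, in Python, of the greedy factorization argument from a moment ago, assuming q is handed to us as its list of prime factors (the function name and the toy numbers are mine, purely for illustration):

```python
def factor_near(q_primes, target):
    """Greedy factorization of a squarefree smooth q = prod(q_primes).

    Multiply the prime factors in one at a time until the partial product
    first reaches `target`.  If every prime factor is below some bound B
    (think B = x**delta), the returned factor r lies in [target, target*B),
    because the final jump multiplies by a prime smaller than B: this is
    the discrete intermediate value argument from the lecture.
    """
    r, s = 1, 1
    for p in q_primes:
        if r < target:
            r *= p   # keep growing the first factor
        else:
            s *= p   # everything else goes into the coprime cofactor
    return r, s      # q = r * s, and gcd(r, s) = 1 since q is squarefree

# Toy example: q = 2*3*5*7*11*13 = 30030, aiming for a factor near 100.
r, s = factor_near([2, 3, 5, 7, 11, 13], 100)
assert r * s == 30030 and r >= 100
print(r, s)  # prints "210 143": r lands within a factor of one prime of 100
```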
OK, so the way we're going to control this: first of all, the r sum. You can exploit the averaging in r, but actually you can get Type I estimates even without averaging in the r sum, so I'm just going to forget the r sum; think of r as now just fixed. And then I want to rearrange, with the m sum out in the front, and there's the constraint. And then we have what's left: there's a β(n) somewhere and there are some coefficients. And then there's another term. So you rearrange the sum a little bit.

Now, why is this the right way to rearrange things? So r, remember, is of size x^{2/5} minus a little bit. OK, we can count how many terms are in this inner sum. The number of n's and q's here is about x^{2/5} times x^{1/10} plus a little bit. But we're not summing over all of them; we have to quotient, because we have this congruence condition. We're only summing over those n and q for which mn is equal to a mod qr. And if you pick n and q at random, the chance that this congruence condition is satisfied is like 1/(qr). So really, the number of terms here should be this times this, divided by qr. And because we've chosen r to be just a little bit shy of N, you have a non-trivial sum here. There's just a tiny bit of gain; there's a tiny bit of genuine summation going on here. This is a slightly non-trivial sum.

But even if you have just a tiny bit of summation, you can set up what is called Linnik's dispersion method, which is basically a particular way to apply Cauchy–Schwarz. Because now that you have a non-trivial sum here, you can apply Cauchy–Schwarz profitably. (I mean, you can apply Cauchy–Schwarz with a trivial sum, but you're not going to get anything interesting.) So you apply Cauchy–Schwarz here. And this factor is something that you understand: the whole point is that you get rid of at least one rough guy. α was one of these rough things you didn't understand, and you can eliminate it. Actually, strictly speaking, you don't completely eliminate it; you replace it by a smooth cutoff. But maybe I'll just hide that: this factor should be sort of a smooth sum. So this thing you understand, and you just need to understand the other factor.

So you can expand that square out as a double sum, over pairs (n, q1) and (n', q2) obeying various conditions, with some big thing in the middle here. And as I said last time, when you have that square, there are two terms: a diagonal term, when these pairs are the same, and an off-diagonal term, when they're not. And the whole point of having a slightly non-trivial sum here is that the diagonal term is a little bit smaller than the trivial bound. And we only need to gain just a few logs over the trivial bound. So the diagonal term is negligible, because we have a non-trivial sum here. And so, therefore, we can disperse the sums and deal with the case when these pairs are not the same.

So if you do that, what do you end up looking at? You end up with an m of a certain size, n and n' of a certain size, q1 and q2 of a certain size, with nm ≡ a mod q1 r and n'm ≡ a mod q2 r. And then you've got some coefficients: β(n), β(n'), c_{q1 r}, c_{q2 r}.
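Schematically, the dispersion step just performed is a Cauchy–Schwarz in the m variable (I'm suppressing the r average and the subtracted main term):

```latex
\Bigl|\sum_{m \sim M} \alpha(m) \sum_{\substack{n \sim N,\; q \sim Q \\ mn \equiv a \ (\mathrm{mod}\ qr)}} \beta(n)\, c_{qr}\Bigr|^2
\;\le\; \Bigl(\sum_{m \sim M} |\alpha(m)|^2\Bigr)\,
   \sum_{m \sim M} \Bigl|\sum_{\substack{n \sim N,\; q \sim Q \\ mn \equiv a \ (\mathrm{mod}\ qr)}} \beta(n)\, c_{qr}\Bigr|^2.
```

The first factor is trivial to bound, since only |α|² appears; expanding the square in the second factor gives the double sum over pairs (n, q1), (n', q2), whose diagonal sits below the trivial bound precisely because the inner sum has about NQ/(qr) > 1 terms per m.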
It looks kind of bad here, because we don't control anything about these coefficients. But it's remarkable that there's still cancellation in this sum; I mean, there's also the minus term, and there's actually a lot of cancellation here, it's just not so apparent yet. OK, so this looks kind of messy; we're now talking about five variables. But we can cut things down a little bit. Right now, nm and n'm are the same mod r, which means that n and n' are the same mod r. (Yeah, you can ensure that various things are coprime; a is coprime to everything, so OK, you can invert m.) So n and n' actually have to differ by a multiple of r. But n and n' are already of size x^{2/5}, and r is of size x^{2/5} minus. So basically, n determines n' up to a very small error: you can think of n' as n plus a small shift ℓ times r, with ℓ small. OK, so just for simplicity, I'm going to ignore the shift and look at the diagonal term n' = n; it's already pretty typical. So just for simplicity, I will just look at the diagonal term. That simplifies things a little bit.

OK, so we'll just take the diagonal term in n. And the q's are different, because we're assuming that the pairs (n, q1) and (n', q2) are different, so q1 and q2 are different. Actually, with a bit of work, you can ensure that q1 and q2 are not only different but actually coprime; let me just ignore that as a sort of technical step. (To do that, you have to go back to the earlier factorization and make sure q has no small prime factors; all the small prime factors go into r.) Anyway, OK, so you have an expression like this.

All right, so we rearrange again. This expression is kind of messy, but there's one variable in which things are really good, which is m, because none of the rough coefficients depends on m. So all right, we can just rearrange this sum. There's some smooth sum over m at scale x^{3/5}, and there's a constraint: m has to be congruent to a times the inverse of n, mod q1 r, and similarly mod q2 r. So there were two constraints here; I'm sorry, I should not have erased that one. You have to combine them by the Chinese remainder theorem. (If I'd kept the shift ℓ, it would be more complicated, but OK, let's just do it like this.)

OK, now, how big is this modulus here? So r is of size x^{2/5} minus, and these q's are of size x^{1/10} plus a little bit. So if you put it all together, this modulus is x^{3/5} plus a little bit, if you work it out. And so this is pretty good: the spacing here is only just a little bit wider than the width of the sum. So this is almost something that is easy to compute. If this modulus were just a bit smaller, then Poisson summation or whatever would work really well, and you would get a very good bound on the sum; but it's just a little bit bigger. Which means you actually have to care a little bit about what this residue class is, because sometimes the progression doesn't hit this interval at all. But you don't have to care too much: in this cyclic group, you're summing over a really big range, almost the full size, and it's so dense that Fourier-analytic methods begin to become effective. So what you should do here, actually, is take this cutoff function, the cutoff at this scale, and break it up into Fourier modes, and then estimate each Fourier mode separately, taking absolute values.
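Concretely, the sum being expanded here, and the expansion itself, are as follows (this is just Poisson summation, in my notation, with γ the residue class produced by the Chinese remainder theorem and d = q1 q2 r the combined modulus):

```latex
\sum_{\substack{m \sim x^{3/5} \\ m \equiv \gamma \ (\mathrm{mod}\ d)}} \psi\!\Bigl(\frac{m}{M}\Bigr)
  \;=\; \frac{M}{d}\,\sum_{h \in \mathbb{Z}} \hat{\psi}\!\Bigl(\frac{hM}{d}\Bigr)\, e_d(\gamma h),
\qquad
e_d(t) := e^{2\pi i t / d}, \quad d = q_1 q_2 r \approx x^{3/5+}.
```

Since d/M is only a tiny power of x, only a few modes h contribute, and the h = 0 mode is exactly the main term that gets subtracted off.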
You will lose a factor based on the ratio of the modulus to the length, but that's a tolerable loss. So what you should do is a Fourier expansion, or Poisson summation; they're the same thing, really. So if you break this up into Fourier modes, what ends up happening is that you start staring at something that looks like this. OK, so these rough coefficients haven't changed; there's still a bunch of scary coefficients here. But the m sum becomes basically an exponential phase, something like this. OK, so here we're working in the cyclic group mod q1 q2 r, and e_{q1 q2 r}(·), which is just e to the 2πi times the argument over the modulus, is the fundamental additive character of this group. And h is some Fourier mode; h is of size about the ratio of the modulus to the length of the sum, so h is pretty small, and actually you sum over h. (And if you want to be really efficient, you actually want to exploit the averaging in h too, but let's not talk about h.) And you can assume h is non-zero, because of that main term I kept hiding from you: actually, at this point, the main term happens to be exactly the zero mode, and you can cancel it off here. So there's a non-zero h here.

And so now you have an expression that begins to look more like an exponential sum. OK, we've got an exponential sum, and you also have this division by n: m was constrained to a congruence class involving the inverse of n, so when you expand that out by Fourier, you get phases with n-bar in them. [Audience question about the notation.] Yeah, this is division in this ring; I guess people also write n^{-1} if you like instead. OK, all right.

OK, so you still have these rough coefficients β, and you have to deal with them. Right, so you have to Cauchy–Schwarz a few more times to get rid of the last few things you don't understand. So what you do first is pull out the n sum; there's a certain phase here. OK, so you pull this out, and you Cauchy–Schwarz to get rid of this β. And if you do that, you start staring at what is looking to be quite complicated. But you have to have a certain amount of faith that things are getting better; I mean, you have to be sort of an optimist to do this sort of business, OK? So when you split things up, it temporarily looks bad, because you're getting more and more unknown coefficients, not fewer. But so you do a Cauchy–Schwarz, and you get an expression that looks like this. OK, so it looks like there are more terms. But finally, there is one variable which is not hitting any of the coefficients you don't understand, which is n. OK, so you rearrange that, finally.

As long as you have enough summations floating around, the Cauchy–Schwarz eventually wins for you; it will give you some cancellation, unless things are really degenerate. Although, because you're losing things elsewhere (we lost a little bit from the Fourier expansion and so forth), it's not always clear that the gains outweigh the losses. And so there's a delicate game to play still. But at least when ϖ and δ are very small, this will win. OK. So when you rearrange, basically, it all comes down to understanding some expression like this.
So if you can understand the absolute value of these guys and get a good bound, if you can beat the trivial bound by a bit, then you're in good shape. Now these moduli you can combine (sorry, by the Chinese remainder theorem again), and the worst case is when these factors are all coprime. So there's this. OK, it turns out that you can combine all these terms into a single expression like this, and this becomes an incomplete Kloosterman-type sum. [Audience: is that exactly a Kloosterman sum?] In general, it's not quite as simple; there was this shift ℓ that I set to zero earlier, and in general, you have to deal with a slightly more complicated sum. But sorry, yes. OK, all right. So the kind of thing you're staring at is a sum like this.

And the size of this modulus: OK, so we do the math again. r is x^{2/5} minus a little bit, these q factors are each x^{1/10} plus a little bit, and so, yeah, this adds up to about x^{4/5} plus a little bit, if you work it out. Whatever, OK. (All right, you can sort of see why we only get constants like 1/1000 and so forth.) OK, so here it's a relatively short sum: the length of the sum is about the square root of the modulus. Now, I was inefficient; there are actually slightly smarter ways to Cauchy–Schwarz, in such a way that when you do all this, you end up with a sum which is a little bit bigger than the square root of the modulus. Here, the way I did it, it came out a little bit less than the square root; but if you're smarter, you can make it a little bit bigger than the square root, which is then good.

So the type of problem you're faced with is just a special case of a more general question: can you control incomplete exponential sums of some rational function f, over some modulus q (some nice big smooth modulus), over some range n up to N? And you should think of N as being a lot less than q; so it's a very incomplete sum.

OK, so the Riemann hypothesis is your friend. If you have a complete sum, then as long as f is not completely trivial, the Riemann hypothesis over finite fields gives you bounds of the order of square root of q. (Actually, because q is not prime, it's q^{1/2} times q^ε, but OK.) But essentially, you get square-root cancellation for the complete sum. Now, you have an incomplete sum, but one of the standard ways to convert an incomplete sum into complete sums is the Pólya–Vinogradov method, which is basically just to take this cutoff and Fourier expand it. And so you can Fourier expand this guy into combinations of complete sums, and there'll be a bunch of these guys. But completion is a little bit inefficient; there's an inefficiency which is based on the ratio between this scale and this scale. And so if you do completion and then apply this bound, what you get is a bound of about root q times q^ε, which, unfortunately, is non-trivial only for, sorry, only for N a lot bigger than root q. OK, that's the standard square-root barrier of Pólya–Vinogradov.

OK, so you want to control shorter sums, and in general, this gets hard. So even in the simplest situation, character sums, if you want to shrink below root q, you can do it, but you have to use this method of Burgess, which is, well, I mean, it's a little bit tricky, even for characters.
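Before going on, the two bounds from the completion discussion above, in symbols, for a suitably non-degenerate rational phase f (this is the shape, not a precise theorem statement):

```latex
% Complete sum, via the Riemann hypothesis over finite fields:
\Bigl|\sum_{n \in \mathbb{Z}/q\mathbb{Z}} e_q\bigl(f(n)\bigr)\Bigr| \;\ll\; q^{1/2+\varepsilon}.

% After completion (Polya--Vinogradov), an incomplete sum of length N \le q obeys:
\Bigl|\sum_{n \le N} e_q\bigl(f(n)\bigr)\Bigr| \;\ll\; q^{1/2+\varepsilon},
```

which beats the trivial bound N only once N is somewhat larger than q^{1/2}.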
And then to apply it to Kloosterman-type sums: again, you can in principle do it, but it is quite tricky. Strangely enough, see, the conventional wisdom is that prime moduli are simpler than composite moduli, because we have the Riemann hypothesis for prime moduli, and we always try to do the prime case first. But actually, there are some cases in which having a very composite or smooth modulus is very helpful. So rather than use Burgess-type methods, it turns out to be more efficient to use a different method, due to Graham and Ringrose, and also Heath-Brown (although I'm not sure exactly what the relationship between the two is): the q-van der Corput method. So this turns out to be a very good way to control these medium-length exponential sums.

So if you have a short exponential sum like this, then you can introduce some intermediate averaging. The van der Corput averaging trick is that you introduce an additional averaging parameter, much smaller than N: you take this sum, shift it around by some h in some parameter set, and average. And then you can interchange the sums, well, apply Cauchy–Schwarz; it's basically the same thing. And at the end of the day, there's more averaging going on here: you can Cauchy–Schwarz one of these sums into a sum of differenced phases that looks something like the display below, times another Cauchy–Schwarz factor that you understand very well.

OK, now if f were a polynomial, then this is the Weyl differencing trick, and it works very well, because if f is a polynomial of some degree, the differences have lower degree. And so this gives you good estimates. In our case, f is a rational function, and if you difference a rational function, actually, things get worse: the degree gets bigger rather than smaller. So ordinarily, this isn't a win. Except that you get to pick where you shift. And so the idea of the q-van der Corput method is that if q is very smooth, it can be factored into pieces, and as I said before, you can pick where the factors are. And you choose your shifts to be multiples of one of these factors: let's say that these h's are multiples of r, so you only shift by multiples of r. And the reason you do that is that the difference then becomes divisible by r. And so rather than being over this big modulus, this reduces to a phase e_s over a smaller modulus, of some other rational function, which depends on various parameters. So you replace one exponential sum by another exponential sum over a smaller modulus, whereas the length of the sum is unchanged. And so if you had a short sum, you've turned it into a relatively longer sum, relative to its modulus, than what you started with before. And in particular, you can get past this root-q threshold.

So if you apply this method: the Pólya–Vinogradov method gives you non-trivial results when your length is bigger than root q. If you apply this van der Corput method once, you can enlarge the range for which you have a non-trivial bound: you can get a non-trivial bound for all sums as short as q^{1/3}. If you apply it again, you can iterate this, and you can get down to q^{1/4}, q^{1/5}, and so forth. Although in practice, because of the Cauchy–Schwarz, every time you apply this method, the gain you get halves. And actually, it turns out that at some point it is not a win.
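Here is a schematic of the q-van der Corput A-process, under the simplifying assumption q = rs with r and s coprime, and with the shifts restricted to multiples of r:

```latex
% Average over H shifts by multiples of r (boundary terms cost O(Hr)):
\sum_{n \le N} e_q\bigl(f(n)\bigr)
  \;=\; \frac{1}{H}\sum_{h=1}^{H}\, \sum_{n \le N} e_q\bigl(f(n+hr)\bigr) \;+\; O(Hr).

% Cauchy--Schwarz in n and expand the square: since f(n+hr) \equiv f(n) \ (\mathrm{mod}\ r),
% the differenced phase descends to the smaller modulus s:
e_q\bigl(f(n+hr) - f(n+h'r)\bigr) \;=\; e_s\bigl(g_{h,h'}(n)\bigr)
```

for some new rational function g_{h,h'} depending on the shifts. The length N is unchanged, but the modulus has dropped from q to s, which is how you get past the square-root barrier.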
Actually, we ran the van der Corput step only once. Because actually, we experimented with multiple van der Corputs, and it actually ended up with worse exponents than what you started with. But it does improve the range for which you can get non-trivial exponents. (In fact, this is called the q-van der Corput A-process. There's also a q-van der Corput B-process, which is Plancherel, and if you apply them both, then you can actually get all the exponent pairs, if you know what those are.) But for this application, you just need to get a little bit below square root of q, and so this would be enough, at least if your original α and β are pretty close to square root of x. If you want to get this larger range of Type I sums, between x^{1/3} and x^{2/3}, then you have to use this twice or maybe three times. But basically, using this method, you can get non-trivial bounds for this sum, which will eventually feed back to give you enough cancellation for non-trivial Type I and Type II estimates.

OK, so for the last few minutes, I'll talk about Type III sums. Which, actually, nowadays, if you are willing to take Deligne's theorems as a black box, are actually simpler than this. I mean, there's a lot less Cauchy–Schwarzing; I mean, there's still lots of Cauchy–Schwarz, but it's not quite as intricate. OK, so let me just look at a model case. We're just going to take a triple convolution where these guys are all smooth of size about x^{1/3}. OK. And yes, it's this minus the main term, and then absolute value, which is the same as sticking in some sign coefficient here. OK, so we wish to understand a discrepancy like this for the Type III sums. And q is of size x^{1/2} plus a little bit.

OK, so if you expand this out, then what you get is a sum over n1, n2, n3, all of size about x^{1/3}, and q, which is bigger, x^{1/2}. And then you're summing, basically, your cutoffs, and there's a congruence-class condition on n1 n2 n3. OK, so the modulus is much bigger than each one of these ranges. So these ranges are living in some sort of reasonably small sub-interval of Z/qZ, if you like. But what you can do, and this is expensive, is to try to expand these guys by a Fourier expansion in Z/qZ. Now, this is not particularly efficient, because the width of these intervals, x^{1/3}, is a lot narrower than the total modulus here. So you're going to lose something here, and you'll need to win quite a bit of it back later on. And this is where you need Deligne, because Deligne is the only guy who can give you the cancellation.

OK, but if you expand this out into Fourier series, you start looking at expressions that look something like this. So there'll be a sum over h1, h2, h3, which are of size about the ratio of these scales, about x^{1/6}. And OK, then you have a sum over n1, n2, n3 in the cyclic group, with the congruence condition, and you have some nice exponential phase in the middle here. OK, and there's still a coefficient here. So this is what you get after Fourier expansion. Now, it's inefficient: if you bounded this trivially, you would not recover the trivial bound on the original sum. The price, the inefficiency here, is the ratio between the width of the interval and the modulus, three times over; so it's like x^{1/6} three times, and you lose about x^{1/2} plus a little bit here because of the Fourier expansion. OK, so you've lost a whole square root, plus a little bit more, which is a bit bad. But OK, we keep going.
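After the three Fourier expansions, the shape of the expression is (schematically; constants and the smooth Fourier weights c_{h_i} suppressed):

```latex
\sum_{\substack{h_1, h_2, h_3 \\ |h_i| \,\lesssim\, q/x^{1/3} \,\approx\, x^{1/6}}}
  c_{h_1} c_{h_2} c_{h_3}
  \sum_{\substack{n_1, n_2, n_3 \in \mathbb{Z}/q\mathbb{Z} \\ n_1 n_2 n_3 \equiv a \ (\mathrm{mod}\ q)}}
  e_q\bigl(h_1 n_1 + h_2 n_2 + h_3 n_3\bigr),
```

at a cost of three factors of about x^{1/6}, i.e. about x^{1/2} plus a little, over the trivial bound.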
Now, we have this expression. These h's: you can change variables. Now that you're in this ring, you just dilate the n's by the inverses of the h's. (You have to cut out the case when h has a common factor with q; that's technical, really annoying to be honest, but let's just ignore that.) So if h is coprime to q, you can scale the h's out, and they just come in here, into the congruence condition.

OK, and this has a nice feature. I think this was first exploited by Heath-Brown. [Audience: it's much earlier; it was used for divisor sums in short intervals, among other things.] OK, all right; I did not realize. OK. So apparently this is quite an old trick: the whole sum depends on h1, h2, h3 only through their product. So you can just make a change of variables, and so you can concatenate three small sums into one big sum. You pay a small price, essentially a divisor function, like a truncated divisor function, because of the multiplicity here, but it's still a win.

So you end up with an expression like this. Now, this is just a pure, complete exponential sum over a finite ring. It has a name: this is the hyper-Kloosterman sum; in fact, the third hyper-Kloosterman sum Kl3 of b at q. And the way it's usually normalized, the hyper-Kloosterman sum is 1/q times the sum over n1 n2 n3 ≡ b (mod q) of this phase. So it's an exponential sum over about q² terms: yeah, because there are q³ triples here, and one congruence relation, that's q² terms. But so if you expect square-root cancellation, then this normalized sum should be of bounded size.

OK, so this is the type of expression that you have. And this is one of the things that Deligne can certainly deal with. So if you use the Riemann hypothesis for this hypersurface here, what you find is that it's actually bounded; well, up to negligible errors coming from the prime factorization of q, but this is essentially bounded. OK, so you get square-root cancellation: you gain a factor of about q. [Audience exchange about the bookkeeping: you lost x^{1/2} and gained q?] Right: I lost about x^{1/2} from the Fourier expansion, and I gained a factor of about q, which is about x^{1/2}, yeah. So, if I did things correctly, the gain should exactly balance the first loss. So there's that much cancellation in this estimate: the trivial bound here is basically q, and q is about the same size as x^{1/2}, so you've almost recovered all the loss that you had before.

Now, this doesn't end the story, because you're still just a little bit short, OK? So we're almost back to where we started. But you just need just a little bit more cancellation than what we have here, and now anything non-trivial you do will give you a win, OK? (By the way, if you try the same thing with τ4, it doesn't work, because Deligne doesn't save you enough to counter the loss, and we just don't know how to do τ4. But τ3 is just within the reach of all these methods.)

OK, so the last step, then. All right, so you do all this; OK, so again, you exploit the smoothness of q. You split q into two pieces, both of size about x^{1/4}. One of the sums I'll just throw away; you can exploit it if you want.
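To keep the bookkeeping straight before this last step, here is the key object from the discussion above in display form (the usual normalization, stated for squarefree q, with b standing for the product a h1 h2 h3):

```latex
\mathrm{Kl}_3(b;\, q) \;:=\; \frac{1}{q}
  \sum_{\substack{n_1,\, n_2,\, n_3 \in (\mathbb{Z}/q\mathbb{Z})^{\times} \\ n_1 n_2 n_3 \equiv b \ (\mathrm{mod}\ q)}}
  e_q\bigl(n_1 + n_2 + n_3\bigr),
\qquad
|\mathrm{Kl}_3(b;\, q)| \;\ll\; q^{\varepsilon} \quad \text{(via Deligne)},
```

which is square-root cancellation among the roughly q² summands, regaining a factor of about q ≈ x^{1/2}.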
So, as I said, I'll just throw away, say, the s sum. And if you do this, you end up with some expression which looks something like this, OK? So this is the type of thing you're looking at. And then you Cauchy–Schwarz to get rid of this annoying coefficient. So if you do a Cauchy–Schwarz, OK? Then you get a bunch of coefficients, which look a little scary. But then you have this, well, this is kind of scary too: you get the product of two of these Kloosterman-type weights. And then these guys don't depend on h. So what you should do is put this factor in here, and you Cauchy–Schwarz one more time. Or not: actually, no, you don't need to do that, OK? Yeah, the h sum is already complete, yeah, so OK. So what you have here is like an incomplete exponential sum. But rather than exponential phases, it's an incomplete sum of Kloosterman-type values: the exponential functions are replaced by these more complicated expressions, which are still bounded, but they're sort of generalized exponential phases. They're what are called trace weights. And it turns out that if you understand Deligne's theory well enough, you can get cancellation for these correlations as well; I guess maybe Emmanuel will talk about this in a week or two. So it turns out that you can use Deligne one more time and get some non-trivial cancellation, quite good non-trivial cancellation actually, in here. And if you push that through, you will get actually quite a good Type III estimate. It's so strong, in fact, that it's no longer the dominant barrier to further improvements of these estimates. Partly because we're not Cauchy–Schwarzing nearly as much, somehow, as with the other sums; and also, we're exploiting really strong cancellation, square-root cancellation, rather than, say, the 1/6-type gains the van der Corput method gives you, which are not so strong. But this part we actually understand quite well, modulo, of course, having to use Deligne's theory. But yeah, you put that all together and you actually get non-trivial equidistribution estimates for the primes. OK, that's a good place to stop. Thank you. Are there questions?

[Audience] Maybe I should ask: you didn't mention that this version of van der Corput was not used by Zhang; that was your contribution? Yes. Yes, yeah. So Zhang instead did a more efficient Cauchy–Schwarz, so he could get by with just Pólya–Vinogradov. In Polymath, we used both his efficient Cauchy–Schwarz and the van der Corput method; we used everything, to get the best possible bound, yeah. [Audience: and without the q-van der Corput, the Type III range has to be combined with Type II; that's where the maximum sits, that's where the best result comes from.] Right; the van der Corput use is one of the biggest gains. Yeah, and we also exploited a lot more averaging than Zhang did. There are a lot of averaging parameters that he just threw away, which I also threw away here for simplicity, but you can save a lot from the averages. Basically, any averaging that doesn't affect the modulus, that only affects whatever's inside the parentheses, is something that you can exploit by these exponential sum methods. What is difficult is trying to exploit cancellation when you vary the modulus. That's what automorphic methods would do, but we can't use them here, for a variety of reasons; that's the only thing I know that would do that. We don't do that.
[Audience] I don't think that's quite the right way to put it. In the automorphic approaches, a must be fixed, but it doesn't necessarily have to be small; the point is just that it's fixed. It could be very large. It's really the fact that a moves with the modulus that was the obstacle; that's what was inaccessible before Zhang. And he made a very clever use of the Chinese remainder theorem, and then there were other innovations; I think you've integrated this, with the supremum over a outside. That's right. So if we were to use automorphic methods here, we would have to control a growing with q. And it's not a weakness of any particular form of the spectral theory; it is just the fact that the averaging over q cannot be taken advantage of, because a moves with q. For instance, at the point where you have n1 congruent to n2 modulo q, and q is a bit smaller than n: in order to get a good result, you switch moduli, and (n1 - n2)/q becomes the new, complementary modulus. And this switching requires controlling a with respect to the new modulus. So it's not really the spectral theory of automorphic forms which prohibits a from being large or small; it's just a moving with q. If you switch moduli to get the complementary modulus, how you control this a in terms of the new modulus, that's the point. Sometimes the innovation is precisely how to use that switch. That's my point.

Any other question or comment? [Audience] Just one comment: I will talk about these things in the next weeks. I'm planning to try to finish the course with this estimate, which anyone can prove nowadays, knowing a little bit of Deligne's work as a black box. Great, I'll look forward to it; we'll be in touch next week. Stick around.