Well, thanks, Emmanuel. It's good to be back here at IHES. So I was slated to give three lectures on bounded gaps between primes. I forget when I agreed to do this, but I think it was before Maynard's work, actually. Events have moved faster than anticipated, so the theorem about bounded gaps between primes is now so easy that I can present it in one hour; the other two lectures will be on equidistribution estimates on primes, which is still relevant for this.

So let's talk about bounding gaps between primes. Let p_n denote the nth prime, and for any natural number m define H_m to be the lim inf, as n goes to infinity, of the gap p_{n+m} − p_n between the nth prime and the (n+m)th prime. So H_m is the smallest length of interval into which you can squeeze m + 1 primes infinitely often, and what we care about is the values of these numbers. For example, the twin prime conjecture is just the assertion that H_1, the gap between consecutive primes, is equal to 2 infinitely often. More generally, the Hardy–Littlewood prime tuples conjecture, which generalizes the twin prime conjecture, would imply as a special case that the higher H's are given by H_m = H(m+1), where H(k) is defined to be the minimal diameter h_k − h_1 of an admissible k-tuple h_1 < ⋯ < h_k. An admissible k-tuple is a k-tuple of integers, say in increasing order, which avoids at least one congruence class mod p for every prime p. For example, (0, 1) is not admissible, because it doesn't avoid any congruence class mod 2, but (0, 2) is admissible; it's the first admissible 2-tuple, and this is why H_1 should equal 2. Similarly, the narrowest admissible 3-tuple is (0, 2, 6), and (0, 4, 6) would also work; (0, 2, 4) doesn't work, because it doesn't avoid anything mod 3. So H_1 should equal 2, H_2 should equal 6, H_3 should be (I should know this) 8, and then 12, 16, and so forth. These are what the H_m should be.

These numbers H(k) are well understood, by the way. Asymptotically, for large k, the narrowest admissible k-tuple has diameter at most (1 + o(1)) k log k and at least (1/2 + o(1)) k log k; both bounds are fairly easy. For the upper bound you just observe that the first k primes larger than k are admissible (think about it for a minute and you'll see that's true), and the prime number theorem tells you that the diameter of this set is about k log k. The lower bound is a consequence of the Brun–Titchmarsh inequality: if you remove one congruence class mod p from a block of integers, for every p, Brun–Titchmarsh tells you an upper bound on how many elements are left, and if you plug that in you get this lower bound. These are basically the best known bounds; you can improve upon the o(1)'s on both sides using slightly sharper constructions, or slight refinements of the Brun–Titchmarsh inequality such as the Montgomery–Vaughan large sieve inequality, but this is where we are. Also, for small k, like k up to 160 or so, you can use computers and compute exactly what these H(k) are, and for medium-sized k, like a million or something, there are lots of numerical upper and lower bounds we can get.
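A quick computational sketch of my own (not from the lecture; the helper names are mine), checking admissibility and brute-forcing H(k) for tiny k:

```python
# A sketch (not from the lecture): checking admissibility and brute-forcing H(k).
from itertools import combinations

def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def is_admissible(tup):
    """Admissible: for every prime p, the tuple misses some class mod p.
    Only p <= k matters, since k residues can never cover p > k classes."""
    return all(len({h % p for h in tup}) < p for p in primes_upto(len(tup)))

def H(k, limit=40):
    """Minimal diameter of an admissible k-tuple, by exhaustive search."""
    return min(t[-1] for t in combinations(range(limit), k)
               if t[0] == 0 and is_admissible(t))

print(is_admissible((0, 1)), is_admissible((0, 2)))   # False True
print([H(k) for k in range(2, 6)])                    # [2, 6, 8, 12], as in the talk

# The upper-bound construction: the first k primes larger than k are admissible
# (they all avoid the class 0 mod p for every prime p <= k).
k = 10
tup = tuple(p for p in primes_upto(10 * k) if p > k)[:k]
print(tup, is_admissible(tup))                        # ... True
```

Exhaustive search like this is only feasible for tiny k; the exact values for k up to 160 or so, and the medium-k bounds, come from much cleverer searches.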
So that side is well understood, but this conjecture is not. It's not even obvious that these H_m are finite. Of course you have the prime number theorem, which tells you that the nth prime is roughly n log n, but that doesn't tell you these guys are finite; all it tells you is that if you normalize the gaps p_{n+m} − p_n by log p_n, then the lim inf is at most m, just from the pigeonhole principle. So infinitely often the gap is of the order of log p_n, but you can't get much better than that without using more than the prime number theorem. Conjecturally we should have much more to say about these ratios: they should be Poisson distributed and so forth. Well, that's not quite right; the distribution should be something connected to the Poisson process. But those conjectures are kind of out of reach.

All right. The first big breakthrough in this area was Goldston, Pintz, and Yıldırım; I've forgotten the precise year, the early 2000s, I should have written that down. They introduced a method to control these things. Basically, what they found was that if you have a good equidistribution estimate on primes, then you can insert it into the machinery of the Selberg sieve, and you can get control at least on H_1, although initially the method couldn't say much about the higher H_m's.

So what do I mean by equidistribution estimates on the primes? Let's introduce the von Mangoldt function: Λ(n) is log p when n is a power of the prime p, and zero otherwise. Essentially it's the indicator function of the primes, normalized to have some nice multiplicative properties; for example, this particular choice of function has the nice property that Λ ∗ 1 = log, which is really why we care about this function. So this is the von Mangoldt function. We have the prime number theorem, of course, which says the sum of Λ(n) for n up to x is roughly x, and we have the prime number theorem in arithmetic progressions: if you restrict to a primitive arithmetic progression a mod q, then, at least for fixed q and large x, the sum should be x/φ(q); the primes should be equidistributed among the primitive congruence classes. This is true for fixed q and large x; that's the prime number theorem in arithmetic progressions. And you can ask what happens if q is not fixed, if you make q a bit bigger. In sieve theory you don't care about a single q so much; you care about what happens to q on average. On the other hand, you do want to be uniform in the residue class a. So let me write down the definition. Given any θ between 0 and 1, we say that we have the Elliott–Halberstam conjecture at a level θ if the following estimate holds: you take the discrepancy, the error in the prime number theorem in arithmetic progressions, and you take the worst discrepancy over all the primitive classes a for each q. Taking the worst a for each q makes things difficult, but then you get averaging in q, which makes things better again: you average over all q up to the level x^θ. The statement is written out below.
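Written out in standard notation (my transcription of what's on the board):

```latex
% Elliott--Halberstam at level \theta, as just described:
\mathrm{EH}[\theta]:\qquad
\sum_{q \le x^{\theta}}\;
\max_{\substack{a \\ (a,q)=1}}
\Bigl|\sum_{\substack{n \le x \\ n \equiv a\ (q)}} \Lambda(n)\;-\;\frac{x}{\varphi(q)}\Bigr|
\;\ll_A\; \frac{x}{\log^{A}x}
\qquad\text{for every fixed } A>0 .
```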
The way this is normalized: the averaging is really just summation (the restriction on q is doing the averaging for you), and what you want is for this quantity to be bounded by x with a saving of any power of log you wish: for any fixed power of log, this should be much less than x. The trivial bound here, by the way, is x log x, so you want to save an arbitrary power of logarithm over the trivial bound. The statement is saying, basically, that for most q (for, say, 99% of all q up to x^θ) you have a good prime number theorem in arithmetic progressions, for all primitive classes a mod q. So this is the Elliott–Halberstam conjecture at level θ. The prime number theorem in arithmetic progressions is, in some sense, the θ = 0 endpoint of this; actually the Siegel–Walfisz theorem may be the more accurate comparison. [Question from the audience.] Yes, this is a definition; no, I'm not claiming this is a theorem. So let me now say what's known.

There is a quantitative version of the prime number theorem in arithmetic progressions called the Siegel–Walfisz theorem, which corresponds, sort of, to θ = 0: it says that the estimate, call it (*), is true as long as q is polylogarithmic, if you only restrict to q of polylogarithmic size; in fact it's equivalent to having it for each such q separately. And then the Bombieri–Vinogradov theorem says that you can in fact take q all the way up to the square root of x, up to a logarithmic loss. What you lose here depends on A (it might even be like A + 1/2 or something, I forget exactly what it is), but you can almost go up to x^{1/2}; in particular you get Elliott–Halberstam for any θ just a little bit shy of 1/2. And this is still pretty much the best that is known for this sum as written here. The conjecture is that this is true for all θ up to 1. At 1 it fails: you can't take q up to x, and in fact you can't even take q within a fixed power of log x of x, by results of Friedlander, Granville, Hildebrand, and Maier. But we can't get anywhere close to 1; we're still stuck at 1/2 for this exact estimate. Later we'll see that there are weaker versions of this for which we can go beyond 1/2. So this is the standard way to describe equidistribution estimates on primes.
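For reference, the Bombieri–Vinogradov theorem in the same notation (the standard statement):

```latex
% Bombieri--Vinogradov: for every A > 0 there is B = B(A) such that
\sum_{q \le x^{1/2}\log^{-B}x}\;
\max_{\substack{a \\ (a,q)=1}}
\Bigl|\sum_{\substack{n \le x \\ n \equiv a\ (q)}} \Lambda(n)-\frac{x}{\varphi(q)}\Bigr|
\;\ll_A\; \frac{x}{\log^{A}x},
\qquad\text{hence }\mathrm{EH}[\theta]\text{ holds for every }\theta<\tfrac12 .
```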
I should remark, by the way, that if you knew the generalized Riemann hypothesis, GRH, that would give you the Elliott–Halberstam conjecture up to 1/2, and without even needing to average: you'd get a good estimate for every q, not just on average. But even with GRH we still can't push past 1/2. You do get a better error term if you have these conjectures, but I still don't think you can beat the 1/2 even with GRH. They're somewhat different universes: sieve theorists care about the average-case q, while multiplicative number theorists somehow focus more on individual q, so the concerns are somewhat orthogonal. But Bombieri–Vinogradov is sometimes called GRH for sieve theorists; it's sort of as good as GRH, and it's unconditional, it's true.

It's not the main focus of my talk, but of course, as with many things in number theory, there's the issue that this constant depends on A (and on θ), and unfortunately, because it depends on Siegel's theorem, and Siegel's theorem is ineffective (we don't understand L(1, χ) as well as we ought to), all the constants here are ineffective, which is a little bit annoying. [Question.] Yes, for any fixed A; there's an issue of what this constant is and how it depends on A, and this unfortunately is ineffective. No explicit bound is known for this constant. In practice, if you want effective versions of the theorems I'm going to state, it's not such a big issue, because the failure can be localized to basically a single exceptional modulus: there's a single q for which something goes bad, or maybe a single set of primes, but for most q, outside of a few exceptional ones, things are still good. But I'm not going to focus on effectivity issues today.

OK, so this is how you can measure equidistribution of the primes. What Goldston, Pintz, and Yıldırım showed in the first paper, back in the 2000s: first of all, if you assume everything, if you assume Elliott–Halberstam for all θ, then you can bound H_1, with quite a good bound, 16; you can find infinitely many pairs of primes just 16 apart. In fact they don't need all θ, just θ sufficiently close to 1, like 0.996 or something; that's enough. More generally, if you can get anything beyond 1/2 (any improvement over 1/2, which nowadays we normalize as 1/2 + 2ϖ), then you get some finite bound on H_1. But these are not known, so just using Bombieri–Vinogradov they couldn't get finiteness. What they were able to show unconditionally is that if you normalize by the log, you at least get zero: the lim inf of (p_{n+1} − p_n)/log p_n is zero, so you can find prime gaps which are smaller than any fixed multiple of the average spacing log p_n, but you can't get them bounded this way. Later on they were able to replace this log by roughly the square root of the log, and then, in unpublished work (it wasn't quite these three authors, a different set), a cube root of the log. But these are all results on H_1; the method initially could not handle even H_2. The best they could do for H_2 is that if you assume everything, if you assume the Elliott–Halberstam conjecture, they could extend the result from the first gap to the second gap. But, for reasons which only got removed later, they were not able to get beyond that.

So that's where things were ten years ago. Later on, it was observed independently by Motohashi and Pintz, and independently by Zhang (although this was several years later), that this conjecture is a bit too strong for what's needed: if you're just interested in bounding H_1, for instance, you don't need the full strength of Elliott–Halberstam; you can weaken the Elliott–Halberstam conjecture beyond 1/2 and still get boundedness. [Question about the 1/2 + 2ϖ notation.] Yes, this is Zhang's notation; you could call it ε if you like. I'm just used
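To summarize the Goldston–Pintz–Yıldırım results just described:

```latex
% Goldston--Pintz--Yildirim, in summary (as quoted in the lecture):
\mathrm{EH}[\theta]\ \text{for some }\theta>\tfrac12\ \Longrightarrow\ H_1<\infty;
\qquad
\mathrm{EH}[\theta]\ \text{for }\theta\ \text{near }1\ \Longrightarrow\ H_1\le 16;
\qquad
\text{unconditionally,}\ \ \liminf_{n\to\infty}\frac{p_{n+1}-p_n}{\log p_n}=0 .
```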
to calling it ϖ. All right: if you can get anything beyond 1/2, you get bounded gaps between primes on the Elliott–Halberstam conjecture, but actually you don't need the full Elliott–Halberstam conjecture. You can weaken it in two ways. First, you can take the sup over a outside the sum: rather than choosing a different worst residue a for each q, you pick a single fixed integer a and reduce it mod each of these moduli (it has to be coprime to all the moduli that appear). Now this by itself doesn't actually gain you very much, because of the Chinese remainder theorem: if all the q's were coprime, then picking a single a for each q is the same as picking a single residue jointly. But what you also do is restrict q: you keep the level x^{1/2 + 2ϖ}, but you also force q to be what's called x^δ-smooth (I guess in French I should call it friable), which means it has no prime factors larger than x^δ. So now there are two parameters, ϖ and δ, and this estimate becomes easier when δ is small, because the moduli become more and more smooth, more friable. Restricting to smooth moduli means these q's have a lot of shared factors, so taking the sup outside actually has some content; by constraining the moduli to have factors in common you can get some mileage out of that. So this is a weaker statement; nowadays we call it the Motohashi–Pintz–Zhang conjecture MPZ(ϖ, δ), with the two parameters ϖ and δ. It's weaker than the Elliott–Halberstam conjecture at the corresponding level, and the claim is that if you can prove this conjecture for some value of ϖ bigger than zero, then you still get a bound on H_1. This implication isn't quite stated explicitly in their work, but it's pretty close to being in these papers.

And then of course last year Zhang: what he did, in this language, was actually prove this conjecture for some specific choice of ϖ and δ. It's not so important what value he got; he got some small positive value, ϖ = 1/1168, and with that he was able to get an explicit finite value of H_1, namely 70 million. Later on, I and many others (including many people in the front row here) formed what we called the Polymath8 project, the eighth of these collaborative online math projects started by Tim Gowers about 10 years ago, and we were able to improve this result. First of all, we got a better distribution estimate, with ϖ as big as anything up to 7/600 minus a small multiple of δ, so you can get ϖ as close to 7/600 as you want by shrinking δ to be sufficiently small. That's a bound about 10 times better than Zhang's, still about 20 times off from the truth, which is 1/4. And using that we were able to get a somewhat better bound, 4,680, but still basically using the Goldston–Pintz–Yıldırım method.
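The MPZ(ϖ, δ) estimate, roughly; I'm glossing over technical conditions (squarefreeness of the moduli, the exact constraints on the residue a) that vary between formulations:

```latex
% Motohashi--Pintz--Zhang MPZ(\varpi, \delta), roughly: for each fixed a \neq 0,
\sum_{\substack{q \le x^{1/2+2\varpi} \\ q\ x^{\delta}\text{-smooth},\ (q,a)=1}}
\Bigl|\sum_{\substack{n \le x \\ n \equiv a\ (q)}} \Lambda(n)-\frac{x}{\varphi(q)}\Bigr|
\;\ll_A\; \frac{x}{\log^{A}x} .
```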
But then, more recently, in October of last year, it was realized by James Maynard (and independently by myself, at least some of what he did) that rather than improving the distribution side of the argument, you can improve the Selberg sieve side of the argument; and in fact you then don't need any difficult distribution estimate at all, other than Bombieri–Vinogradov. Just from the classical Bombieri–Vinogradov estimate, avoiding these more difficult distribution estimates, he was able to show that H_1 is bounded, in fact with an even better bound than we had before: he got about 600 initially. And the same methods show that if you had the full Elliott–Halberstam conjecture, you can shave the 16 of Goldston–Pintz–Yıldırım down a little bit, to 12. But perhaps more interestingly, for the first time his argument was also able to control the higher gaps: unconditionally he could get a bound on H_m which is basically exponential in m, and on Elliott–Halberstam he could get something with about half the exponent, but still exponential in m. For the first time we could get bounded intervals with many primes in them.

And then after that we actually joined forces; he joined the Polymath group, and as of about a month ago we finished up. What we can do now, unconditionally, just using Bombieri–Vinogradov together with our previous bounds, is H_1 ≤ 246. We can also improve the H_m bounds slightly, a modest gain in the exponent; unfortunately we have no way to remove the exponential loss. It should be like m log m, as I said up there, but we can only get exponential, even conditionally. And if you assume not just the Elliott–Halberstam conjecture but something which we call the generalized Elliott–Halberstam conjecture, then in fact we can get H_1 ≤ 6. And this is best possible: well, not in the sense that H_1 actually is 6, because we all believe it's 2, but it's the best that one can ever hope to accomplish by purely sieve-theoretic methods, and all these methods are ultimately sieve-theoretic. I'll try to explain why: there's the parity problem of Selberg, which was already known to block any sieve-theoretic attempt to prove the twin prime conjecture, and a variant of the obstruction shows that you can't even prove, for example, H_1 ≤ 4 from sieve theory; you have to live with at least 6.

The generalized Elliott–Halberstam conjecture is the same as Elliott–Halberstam except that you generalize it by replacing the von Mangoldt function by more general Dirichlet convolutions of arithmetic sequences obeying certain bounds and estimates, which I think I won't write down; a slightly more general class of functions. [Question.] Good question. Bombieri–Vinogradov does extend to this setting; that's a result of Motohashi. Zhang's results, unfortunately, do not extend to functions of this generality. At some point, what's needed in Zhang's argument (I'll talk about this in the later lectures) is that you can decompose the von Mangoldt function in many different ways, some of which look like Dirichlet convolutions of two functions, and some of which look like convolutions of three or more functions.
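To give the flavor of such a decomposition: Zhang and Polymath8a actually work with Heath-Brown's identity, but the classical prototype is Vaughan's identity, which for parameters U, V splits Λ into convolution pieces:

```latex
% Vaughan's identity (a classical decomposition of this type): for n > V,
\Lambda(n)=
\sum_{\substack{d\mid n \\ d\le U}}\mu(d)\log\frac{n}{d}
\;-\;\sum_{\substack{de\mid n \\ d\le U,\ e\le V}}\mu(d)\Lambda(e)
\;+\;\sum_{\substack{de\mid n \\ d> U,\ e> V}}\mu(d)\Lambda(e) .
```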
It's important that you have this flexibility to decompose different components of the von Mangoldt function into these pieces, and that structure is still used in all the Zhang-type arguments; we do need that extra structure. So unfortunately we have no Zhang-type result beyond 1/2 in this generality yet. But in principle there could be; I think that's more of a limitation of current technology. And in principle, at least, if you have enough equidistribution, then you can get the optimal sieve-theoretic result, which is 6.

All right, so that's the state of knowledge on results. Let's talk about proof methods. So what we actually do, the way we established bounded gaps, is actually by making progress on the prime tuples conjecture. In the notation of Pintz, we introduce what's called the Dickson–Hardy–Littlewood conjecture DHL with two parameters k and j: given two natural numbers k ≥ j, we denote by DHL(k, j) the assertion that for every admissible k-tuple (h_1, …, h_k), there exist infinitely many n such that the shift (n + h_1, …, n + h_k) of this k-tuple contains at least j primes. So given any admissible k-tuple, if you shift it appropriately you can catch at least j primes, infinitely often. The prime tuples conjecture of Hardy and Littlewood says that you should in fact be able to make all these guys prime, that you should get k primes in a k-tuple. That would be wonderful if we could prove it, but it's way out of reach: even getting 2 primes in a 2-tuple would give us the twin prime conjecture. So that we can't prove. But what we can prove are weaker versions where k is bigger than j. For example, among the final results: unconditionally, the Dickson–Hardy–Littlewood conjecture is true for (50, 2). So given any admissible 50-tuple, I can find for you a shift of it which contains at least two primes; and it turns out that you can write down an admissible 50-tuple of diameter 246, and this is why H_1 ≤ 246. On the generalized Elliott–Halberstam conjecture we can actually get (3, 2), which is about as close as you can get to (2, 2) without actually getting there: given any admissible 3-tuple, for example 0, 2, and 6, we can find shifts of this triple in which two of the entries are prime. And this is what gives you 6.

At this point maybe I'm supposed to make an amusing note. Here we're taking very simple linear forms, just n plus a constant, n plus a constant; but the same argument works for other affine-linear forms. You can put a 2n here, or you can do N − n, whatever; you can take other linear forms and a similar argument holds.
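Recording the definition and how the numerology goes:

```latex
% DHL(k, j), stated formally:
\mathrm{DHL}(k,j):\quad
\text{for every admissible }(h_1,\dots,h_k),\ \
\#\{1\le i\le k:\ n+h_i\ \text{prime}\}\ \ge\ j
\ \text{ for infinitely many } n .
% Thus DHL(50, 2) plus an admissible 50-tuple of diameter 246 gives H_1 \le 246,
% and DHL(3, 2) (known on GEH) plus the tuple (0, 2, 6) gives H_1 \le 6.
```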
So one amusing thing: rather than playing with n, n + 2, n + 6, there's a connection with the Goldbach conjecture. The same argument shows, on GEH, that if N is a sufficiently large multiple of 6, then there exists n such that at least two of n, n + 2, N − n are prime. This is a slight variant of DHL(3, 2), but with n, n + 2, and N − n for a fixed large N, rather than n, n + 2, n + 6. And this has the following amusing consequence: again assuming GEH, you get a dichotomy, a disjunction. We can't prove the twin prime conjecture using this technology, and for similar reasons we can't prove the Goldbach conjecture, but I can tell you, in some sense, that one of them is true: if you have the generalized Elliott–Halberstam conjecture, then either the twin prime conjecture is true, or every sufficiently large multiple of 6 (and hence every large even number) is within 2 of a sum of two primes. I can't tell you which of these is true; I mean, they should both be true, but all you can prove, and even that is conditional on this rather ambitious conjecture, is that one of them is true. The reason is just the pigeonhole principle, a case check: either the pair n, n + 2 is prime infinitely often, in which case H_1 = 2, or one of the pairs involving N − n occurs for each large N, in which case N is within 2 of a sum of two primes. So it's still an open problem to get an unconditional Goldbach-type result from this method. Using all this machinery, it's tantalizingly close to being able to prove that every large even number is within a bounded distance of a sum of two primes, but strangely enough the methods are not quite good enough to get that; this seems to be the best thing of this type one can do. So in some sense the even Goldbach conjecture and the twin prime conjecture are considered very close to each other, but there are senses in which Goldbach is strictly harder than the twin prime conjecture: various partial results we have on twin primes do not necessarily translate to partial results on Goldbach.

OK, so the main thing is to prove this sort of conjecture: you want to find k-tuples where at least j of the entries are prime. Now this would be very easy if the primes had positive density, say density at least δ. Of course they don't, but let's suppose they do, just temporarily. Then this would imply the Dickson–Hardy–Littlewood conjecture DHL(k, j) whenever kδ > j, and the reason is very simple. You pick a number n at random inside some big interval; rather than [1, N], let's make it [N, 2N], and in fact let's call it [x, 2x], because that will be my notation very shortly. If you pick n uniformly at random and the primes had positive density, then probabilistically n + h_1 is prime with probability at least δ, n + h_2 is prime with probability at least δ, and so forth, so just from linearity of expectation, the total expected number of primes inside this k-tuple is at least kδ. And if kδ is bigger than j, then this must happen at least once; then you let x go to infinity, and you get infinitely many such tuples. So if the primes had positive density, it would be very easy to get these sorts of results. Now of course the prime number theorem tells you the primes have density zero with respect to the uniform distribution.
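Schematically, the heuristic just described:

```latex
% The positive-density heuristic: if the primes in [x, 2x] had density at least
% \delta and n is drawn uniformly from [x, 2x], then
\mathbb{E}\,\#\{1\le i\le k:\ n+h_i\ \text{prime}\}
=\sum_{i=1}^{k}\mathbb{P}\bigl(n+h_i\ \text{prime}\bigr)\ \ge\ k\delta ,
% so if k\delta > j, some n in [x, 2x] has more than j primes among the shifts;
% letting x go to infinity gives infinitely many such n.
```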
So you can't run this argument directly. But there's no reason why you have to pick the uniform distribution: you can pick n by some other distribution, and the same argument works. That's basically what Goldston–Pintz–Yıldırım do. More formally, what they observe is that to prove a partial Dickson–Hardy–Littlewood conjecture, it suffices to find a good probability distribution on [x, 2x]. Actually, the way we set things up, we don't quite use a probability distribution; we unnormalize it, and look for a function ν from [x, 2x] to the non-negative reals (it has to be non-negative; this is our unnormalized probability density) such that we have some upper bound on its total mass: the sum of ν(n) is at most (α + o(1)) B, where α is a constant, B is some normalizing factor which depends on x, and every time we write o(1) it means something that goes to zero as x goes off to infinity (x being a parameter going off to infinity). And we want a lower bound on the probability of being prime, which in this language is the same sum weighted by, let's say, the function θ, where θ(n) is log n when n is prime and zero otherwise; basically the von Mangoldt function again. We want the sum of ν(n) θ(n + h_i) to be at least (β_i + o(1)) B log x, with the log x there because of the log n in θ, and this for all i from 1 to k. This is like saying that the probability that n + h_i is prime, when n is sampled using this weight, is at least β_i/α. So if the total expected number of primes you get when you sample n using this weight function is bigger than j, then you can prove this conjecture. (I guess B has to be non-zero, but it doesn't matter what B is; and α has to be non-zero too.)

So basically: the uniform weight doesn't work, but if you can pick any weight for which you have a good upper bound on the total sum and good lower bounds on the sums weighted by one of these prime-counting functions, and the ratio between this bound and that bound is good enough, bigger than some j, then you can find j primes in this k-tuple. Once you have that observation, all you need to do is find a good sieve. These ν's are called sieve weight functions: they have to be non-negative, so that you actually get a probability measure (this doesn't work if ν is negative in some places; it has to be non-negative everywhere); you need to be able to get upper bounds on the raw sum and lower bounds on the weighted sums; and the ratio has to be good enough. So then the trick is to choose a good ν.

But this is a sieve theory question, and this is something sieve theorists understood 30, 40 years ago. There are many sieves that you could try (we tried many other sieves), but the most efficient sieves, as it turns out, are the Selberg sieves. And the Selberg sieve idea is basically this: in order to make ν always non-negative, it's not necessary but certainly sufficient to pick a square, so you take ν to be the square of a sum of sieve weights λ_d.
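Recording the criterion schematically (B = B(x) is the normalizing factor, and the j versus j + 1 bookkeeping is as follows):

```latex
% The Goldston--Pintz--Yildirim criterion: to prove DHL(k, j+1) it suffices to
% find \nu : [x, 2x] \to [0, \infty) with
\sum_{x\le n\le 2x}\nu(n)\ \le\ (\alpha+o(1))\,B,
\qquad
\sum_{x\le n\le 2x}\nu(n)\,\theta(n+h_i)\ \ge\ (\beta_i+o(1))\,B\log x
\quad(1\le i\le k),
% and \beta_1+\cdots+\beta_k > j\,\alpha: then, since \theta(n+h_i) \le \log(3x),
% some n must have at least j+1 primes among n+h_1, ..., n+h_k.
```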
In the initial work of Goldston–Pintz–Yıldırım, the sum ran over the d's dividing the product P(n) = (n + h_1)⋯(n + h_k). The motivation for this is that you want your sieve to already have a good chance of picking out those n for which n + h_1, …, n + h_k are prime, or mostly prime. A good initial choice: if you pick the sum of μ(d) log^k(P(n)/d) over d dividing P(n), the k-fold analogue of the von Mangoldt function applied to this product, then this function, as it turns out, is basically non-zero only when all these guys are already prime (well, maybe some can be powers of primes), and it is non-negative. If you assume the Hardy–Littlewood prime tuples conjecture, this choice should give you all the right bounds, and you should be able to prove the twin prime conjecture. Now, unfortunately, for this particular choice of sieve, proving these bounds is basically exactly as hard as proving the prime tuples conjecture itself; so this function doesn't work. But you can take truncations of these sieve weights, which have a better chance of working: you don't restrict all these guys to be exactly prime, but basically restrict them to be almost prime. Goldston–Pintz–Yıldırım tried essentially the classical Selberg sieve, a truncated version of this, and then you can tweak what the weights are.

But then, as was observed by Maynard (and this is something we really should have observed many years ago), you don't have to use these one-dimensional sieves. It turns out to be more efficient to use a multi-dimensional sieve: rather than basing the sieve entirely on what divides the product of these numbers, you can, more flexibly, find sieves based on the individual factorizations of the n + h_i rather than their joint factorization. This is a more general sieve than the one-dimensional one, and in retrospect it's what we should have used all along; these are the best sieves to use. Now you still have to choose the weights: the square guarantees that the sieve is non-negative, and when you form the sums above you get certain quadratic forms in these coefficients, so you have this linear algebra problem of choosing the λ's which will make this quadratic form small and that quadratic form big. After a lot of experimentation (it turns out there are actually a couple of choices that work well), the precise sieve we end up using is this: we weight by the Möbius function a few times, and then we take some function of the normalized logarithms of the d_i, and all of this has to be squared. And then, for technical reasons (it's not so important, but we find it convenient), we restrict to a single congruence class b mod W, where W is a product of the small primes up to w, and w is something very slowly growing, say log log log x. If you don't do this, then you have to deal with a singular series, which is not such a big hassle, but it's convenient not to have to care about it. So you pick a congruence class b such that b + h_i is coprime to W for all i; you can do this precisely because your tuple is admissible.
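The sieve weight just described, in this normalization (R is the sieve level, a small power of x, and F is the smooth cutoff described in a moment):

```latex
% The multidimensional Selberg sieve weight:
\nu(n)\;=\;1_{n\equiv b\ (W)}
\Bigl(\;\sum_{\substack{d_1,\dots,d_k \\ d_i\mid n+h_i\ \forall i}}
\mu(d_1)\cdots\mu(d_k)\,
F\Bigl(\tfrac{\log d_1}{\log R},\dots,\tfrac{\log d_k}{\log R}\Bigr)\Bigr)^{2}
% (the d_i are implicitly squarefree, thanks to the Mobius factors).
```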
And then F is some smooth, compactly supported function on the positive orthant, with a cutoff localizing d_1 ⋯ d_k to be bounded by some small power of x: the sieve level R is like the square root of x, or x to the 0.4, or something. So you have this localizing function, which you get to pick. You pick a function, you create the sieve, and then, if all goes well, you should be able to prove bounds on those expressions from before, where α and β will be given in terms of this smooth cutoff function F; and then the remaining game is a calculus-of-variations problem, where you have to maximize the ratio of one integral involving F over another integral involving F. After some trial and error, this turns out to be pretty much the optimal choice of sieve weight.

OK, so if you choose such a sieve, how would you try to bound the sums I mentioned before? This is actually very standard sieve theory. It turns out that how much you have to do depends on where F is supported: if it has small support you don't need to do very much, while in some of the more recent results, where the support of F is a bit bigger than what classical sieve theory can handle, you have to do some tricks. Suppose for example you want to control just the raw sum, which is simpler than the one with the primes. You stick in the definition, expand out the square, and rearrange, and after a little bit of work you get a monstrous sum; well, not that monstrous, moderately complicated. If you expand out everything you get this mess: a sum over the moduli d_1, …, d_k, d_1′, …, d_k′, then some weight function depending on those moduli, times a factor which is basically a count, summing 1 over a certain arithmetic progression, over all n in the interval [x, 2x] which obey certain congruence conditions. Now, because you're only summing 1, we know how to do this; summing 1 along an arithmetic progression is one of the things we actually can do, at least when the modulus is smaller than the width of the interval. First of all, the congruence conditions are only compatible if d_1 d_1′, …, d_k d_k′, and W are pairwise coprime, and once they are, this count equals x/(W ∏_i [d_i, d_i′]), with [d_i, d_i′] the lcm, plus an error of O(1) from the round-off. Now this error turns out to be OK as long as the function F doesn't have too big a support: the error is negligible if the support of F is contained in those k-tuples (t_1, …, t_k) in the positive orthant ℝ₊^k where the sum t_1 + ⋯ + t_k is strictly less than 1/2. So as long as you're supported in this simplex of size at most 1/2, the total modulus can't get much bigger than x: there are two factors, d_i and d_i′, so each of the two halves of the square contributes at most about x^{1/2}. As long as your support isn't too big, this error is negligible, and then you just have the main term; so you have this big sum of main terms, but this you can bound using multiplicative number theory,
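Schematically, the expansion just described (the F-arguments, which I'm eliding, are the normalized logarithms of the d_i and d_i′):

```latex
% Expanding the square in \sum_n \nu(n):
\sum_{x\le n\le 2x}\nu(n)
=\sum_{\substack{d_1,\dots,d_k \\ d_1',\dots,d_k'}}
\Bigl(\prod_{i}\mu(d_i)\mu(d_i')\Bigr)\,F(\cdots)\,F(\cdots)\,
\Bigl(\frac{x}{W\prod_i [d_i,d_i']}+O(1)\Bigr),
% where only tuples with d_1 d_1', ..., d_k d_k', W pairwise coprime contribute,
% and the O(1) round-off is negligible while the total modulus stays below x.
```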
and the multiplicative number theory needed is actually not very difficult; it's elementary, on the same level as knowing that the zeta function has a pole at 1. You don't need any zero-free regions or anything fancy; you just need the simple pole at 1. To cut a long story short, if you do that, what you find is that you do get an asymptotic (α + o(1)) B, where B is some normalizing factor which is not very important, and α is the integral of F differentiated once in each direction: you take the mixed k-fold derivative, square it, and integrate. So you get this particular quadratic expression in F, and that is your α; this will be the denominator in the Goldston–Pintz–Yıldırım method. And you can do that as long as your support is not too big. So that's pretty easy; well, straightforward. It's like a page of calculation.

Now, the trickiest sum is the one with the primes in it. For example, you might be interested in what happens when n + h_k is prime, so you also have to control the sum weighted by θ(n + h_k). You can expand out in a very similar way, but one simplification is that prime numbers don't have very many factors, so the d_k will turn out to be just 1. It becomes a similar sum, but you only go up to k − 1: you get lots of similar things, μ's and F's, but the last argument of the F is now zero. You get a similar sum over an arithmetic progression (I won't write down exactly the conditions), but rather than summing 1, which we can do very easily, we have to sum the prime-counting function θ. So this again will be a main term plus an error, but now the error is not just O(1); it's a more complicated error, and if you want to control it, you need one of these distribution estimates. This is controlled by Bombieri–Vinogradov, or one of the Elliott–Halberstam conjectures, or maybe Motohashi–Pintz–Zhang. And it turns out that one of these guys will work if your function F has support in a certain range: the more powerful your distribution hypothesis, the larger the range on which your function F can be supported. There are certain conditions, and maybe for lack of time I won't state exactly what conditions come up, but if F is supported in a small enough range, depending on what level of distribution you're willing to accept, you can deal with this error. Then you have a main term, and after some more multiplicative number theory you find that this equals (β_k + o(1)) B log x, where β_k is a similar thing to α: it's now a (k−1)-dimensional integral, where you only integrate over k − 1 variables and differentiate in k − 1 variables, with the final argument set to zero, and you square that. So you have a different quadratic form in your function F; that's β_k, and there are similar forms β_1, …, β_{k−1}, where you permute the indices. So you get these numbers, and, as long as your function F satisfies certain support conditions (which I've not written down, but which you can write down explicitly), you have this calculus-of-variations problem: you sum all these β's, divide by α, and try to make that ratio big.
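Explicitly, the two quadratic forms:

```latex
% The numerator and denominator of the variational problem:
\alpha=\int_{[0,\infty)^k}
\Bigl(\frac{\partial^k F}{\partial t_1\cdots\partial t_k}\Bigr)^{2}
dt_1\cdots dt_k,
\qquad
\beta_k=\int_{[0,\infty)^{k-1}}
\Bigl(\frac{\partial^{k-1} F}{\partial t_1\cdots\partial t_{k-1}}
(t_1,\dots,t_{k-1},0)\Bigr)^{2} dt_1\cdots dt_{k-1},
% with \beta_1, ..., \beta_{k-1} given by permuting the roles of the indices.
```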
So you have to choose a good cutoff F to make this ratio as big as possible. For example, it turns out that if you choose a good F, you can make this ratio grow like log k: basically F has to be a tensor product of one-dimensional functions, cut off to the simplex, and if you pick F to optimize this, the ratio grows like log k. That's why, for k large enough, you can get about log k primes in admissible k-tuples, which is where the bounds exponential in m come from. And for small k, you can use computers: you plug in an explicit polynomial for F, numerically compute what α and the β's are, and for small k this is how we get the explicit bounds on gaps.

OK, so in the last zero minutes, let me just say this. There are these support conditions, which are a bit restrictive. If you want to maximize this ratio, you want to let F range over as large a class as possible, and so a lot of our Polymath project was obsessed with finding larger and larger support ranges for which you could actually run this argument. There are a couple of extra tricks you can use here. One thing you notice is that the sum with the primes is much harder to estimate than the sum without; there's an imbalance of difficulty, and you can try to exploit that by transferring some of the difficulty of the hard expression over to the easy one. Here we expand a k-fold sum and then sum 1, which is a very easy sum, against a messy thing. But actually you can play a different game: you can sieve in only k − 1 of the variables and put some sort of divisor sum in the remaining one. You can move some of these terms over (you have to split the sums into pieces in order to do that), and rearrange the sum a little, so that rather than summing 1, which is in some sense too easy, you sum something a bit more difficult, a divisor function, at the cost of removing one variable on the other side. Then that expression begins to look a lot more like the type of expression you had to control for the prime sum, and it turns out that if you shift things over this way, you can relax the support condition a little: basically, rather than requiring t_1 + ⋯ + t_k < 1/2, you only need each sum of k − 1 of the coordinates to be less than 1/2 (all the permutations of that), so you get an enlarged support region. That's one of the tricks we used.
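As an aside, this variational problem is concrete enough to play with numerically. Here's a toy sketch of my own (not from the lecture), using Maynard's equivalent I/J normalization rather than the derivative form above, and the simple test function F = 1 − t_1 − ⋯ − t_k, whose integrals are exactly computable; the threshold interpretation in the comments is my recollection of Maynard's paper:

```python
# A toy numerical look at the variational problem (my own sketch).  In
# Maynard's normalization: for F supported on the simplex t_1 + ... + t_k < 1,
#   I(F)   = integral of F^2 over the simplex,
#   J_m(F) = integral over the other variables of (integral of F dt_m)^2,
# and one wants M_k(F) = sum_m J_m(F) / I(F) large.  (If I recall Maynard's
# paper correctly, M_k > 2m/theta gives DHL(k, m+1) at level of distribution
# theta, e.g. M_k > 4 for two primes via Bombieri--Vinogradov.)
import random

def monte_carlo(k, trials=200_000, seed=1):
    """Estimate I(F) and J_m(F) for the test function F = 1 - t_1 - ... - t_k."""
    random.seed(seed)
    I = J = 0.0
    for _ in range(trials):
        # I(F): sample t uniformly in [0,1]^k and restrict to the simplex.
        t = [random.random() for _ in range(k)]
        s = sum(t)
        if s < 1.0:
            I += (1.0 - s) ** 2
        # J_m(F): the t_m-integral is exact: int_0^{1-s}(1-s-u) du = (1-s)^2/2;
        # sample the remaining k-1 coordinates and square it.
        u = [random.random() for _ in range(k - 1)]
        s = sum(u)
        if s < 1.0:
            J += ((1.0 - s) ** 2 / 2.0) ** 2
    return I / trials, J / trials

k = 3
I, J = monte_carlo(k)
print(f"I ~ {I:.5f}   (exact 1/60  = {1/60:.5f})")
print(f"J ~ {J:.5f}   (exact 1/120 = {1/120:.5f})")
print(f"M_3(F) ~ {k * J / I:.2f}  (exact 3/2)")
```

With this naive linear F the ratio is only 3/2; the point of the optimization described above is that better F (tensor products of one-dimensional pieces, cut off to the simplex) pushes the ratio up like log k.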
Another trick we use: for the sum with the primes, we don't need an asymptotic; that's too strong. We just need a lower bound on the prime density in order to prove our theorem, and if you just want a lower bound, you don't need to estimate every single term in this big messy expression with a good error bound. What we found is that an elementary inequality is very useful here. We have a square, and the enemy is those d's which are very big, near the edge of the support of F; those terms give you the worst error terms, the ones that are hardest to control. So you split the big sum inside the square into two pieces, one coming from all the d's which are a little bit smaller, and then the boundary terms, the ones close to the edge: you split into two terms, a + b. The worst terms in (a + b)² are the b-times-b terms; they give you moduli that are just too big for whatever Elliott–Halberstam-type hypothesis you have to obey. But b² is non-negative, so you just drop it and use a lower bound like (a + b)² ≥ a² + 2ab. Conveniently, in the expressions a² and ab the moduli are still low enough that you can still use your distribution hypotheses, so you still get a useful lower bound. It's not quite as good as β_k; you have to give up a little bit in the lower bound, but you still get a non-trivial lower bound for these expressions, even when your function F pokes outside the support that you previously had. So there's a trade-off between enlarging the support of F and shrinking this lower bound, and the trade-off turns out to be slightly advantageous. Using those tricks is how we could get k all the way down to 3, and get the optimal bound DHL(3, 2).

Now, I wanted to talk about the parity problem, about why you can't beat that, but I think that will have to wait until the next lecture. Any questions or comments?

[Question, largely inaudible: these expressions all have obvious congruence constraints; are there obstructions of some other kind, things you can't undo?] Not over the integers; I don't know of any examples. Over function fields I think there are some examples like this: there are some polynomials which can't be prime even though there's no obvious obstruction to their being prime. Conrad and Conrad and some other people have examples over function fields where there are obstructions which are not congruence obstructions. [Can you give an example?] Not off the top of my head, but there are some polynomials over function fields which are essentially never prime without there being an obvious reason. [A reference?] It was pointed out by the Conrads, by Brian or Keith Conrad; at least by one of them, but I think by both. So over function fields there's a lot more structure, and there's a lot more going on somehow. But over the integers there should be no extra structure in the primes beyond the local obstructions. That's the common belief; it's been held for a very long time, and the evidence has accumulated. Everything that we can prove is consistent with randomness; we can't prove randomness directly, and of course the primes are not actually random. So, thank you again.