So, I'd like to thank the organisers first of all for giving me the opportunity to speak. I've been really enjoying these Number Theory Web Seminars, and I personally would hope that they might continue even after coronavirus, because I think they've been a really good thing and I've enjoyed lots of the talks. Today I'd like to talk about primes in arithmetic progressions to large moduli. The basic question I want to think about is one of the most important questions in prime number theory: how many primes are there which are less than some number x and lie in some residue class a modulo q? So you give me your three favourite positive integers x, a and q, and I have to tell you how many primes are less than x and congruent to a modulo q. I'm going to write pi(x; q, a) for this number of primes. As I'm sure everyone here is very familiar with, provided a and q are coprime we know that there are infinitely many primes in any given reduced residue class a modulo q, and so as x becomes larger and larger the number of primes in any of these residue classes gradually tends to infinity; this is Dirichlet's famous theorem. Of course, if a and q aren't coprime then any such prime would share a common factor with q, so there can be at most one prime in that residue class. So given Dirichlet's theorem, the focus of my talk is really on the "how many" part, on quantitative questions about how large x needs to be before I start getting good estimates for the number of primes in these different arithmetic progressions. In particular, a very natural guess would be that the only important constraint is the one Dirichlet noticed, that a and q have to be coprime, and that after a certain point it shouldn't just be that we have several primes in each of these residue classes modulo q: we should actually have roughly the same number in each. So maybe the refined question after Dirichlet's theorem is: how big does x need to be for there to be about the same number of primes in each of the different reduced residue classes modulo q? Just to make sure we're all on the same page, here's a numerical example which typifies the sorts of cases I'm interested in. You could have decided that your favourite three numbers in the world are 1,000, 1,000,000 and 33, and then the challenge for me would be to say how many primes there are which are less than a million and have their final three digits equal to 033, that is, lie in the residue class 33 mod 1000. You can check this on a computer (a small script along these lines is sketched below), and you find that there are 172 primes less than a million in the residue class 33 mod 1000. In fact, if you check this for lots of choices of a, you find that the number of primes less than a million in any given reduced residue class mod 1000 is always some number between 172 and 280. So in this case we're looking at a modulus q which is about the square root of x, and we're seeing not necessarily perfect equidistribution, but a pretty similar number of primes in each of the different residue classes. It's particularly the case when q is around the square root of x that I'm interested in here: we're not getting perfect equidistribution, but the primes are almost equidistributed among the different residue classes, with a decent number of primes in each possible class.
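As a quick concrete check of this example, here is a minimal Python sketch. This is my addition rather than anything from the talk, and the helper names primes_below and pi_x_q_a are just illustrative; it sieves the primes below a million and counts how many fall in a given residue class.

```python
from math import gcd, isqrt

def primes_below(n):
    """All primes p < n, via a simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * n
    sieve[0] = sieve[1] = 0
    for p in range(2, isqrt(n) + 1):
        if sieve[p]:
            # Cross out every multiple of p, starting from p*p.
            sieve[p * p::p] = bytearray(len(range(p * p, n, p)))
    return [p for p in range(n) if sieve[p]]

def pi_x_q_a(primes, q, a):
    """pi(x; q, a): how many of the given primes lie in the class a mod q."""
    return sum(1 for p in primes if p % q == a % q)

ps = primes_below(10**6)
print(pi_x_q_a(ps, 1000, 33))   # the talk quotes 172 primes in the class 33 mod 1000
counts = [pi_x_q_a(ps, 1000, a) for a in range(1000) if gcd(a, 1000) == 1]
print(min(counts), max(counts))  # spread over the reduced classes (talk quotes 172 to 280)
```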
So what can we actually prove theoretically about this question? Well, unfortunately, very little. The only unconditional result we really have in this setting is the Siegel-Walfisz theorem, which says that if q is quite a lot smaller than x then you do indeed get equidistribution. But for the Siegel-Walfisz theorem, the threshold for equidistribution is when q is at most some fixed power of a logarithm. You can take an arbitrarily large fixed power of the logarithm provided you look at big enough numbers, so you could think of it as saying: provided q is less than (log x)^100 and x is large enough, you get equidistribution among the different residue classes modulo q. Unfortunately, for lots of applications in number theory and elsewhere we would really like to take q quite a lot larger than an arbitrary power of a logarithm, but this has been an open problem and we've been completely stuck on it for a very long period of time. To make any progress at all, to allow q to be even just a little bit larger than a power of log x, would require new results on so-called Landau-Siegel exceptional zeros, and that would have all kinds of other amazing consequences, for bounds on class numbers and many other problems in number theory. We really don't know how to disprove the potential existence of these few bad zeros, which can really mess up this count, and without understanding more about Siegel zeros we have no hope of doing better than what's implied by the Siegel-Walfisz theorem. So this is maybe a bit disappointing: we quite quickly get one nice quantitative result, but to make any progress beyond it we need to overcome a famous huge obstacle that we really have no idea how to approach. We do believe that there shouldn't be any of these bad exceptional zeros, and in particular we believe that the Generalised Riemann Hypothesis should be true. If you assume the Generalised Riemann Hypothesis, since that excludes these bad zeros, it does enable you to make progress, and in fact it enables you to prove a pretty strong result: assuming GRH you get this same equidistribution, roughly the same number of primes in each residue class modulo q, and this works provided q is smaller than about the square root of x. When x is large, the square root of x is much, much larger than any fixed power of the logarithm of x, and the fact that q can be as large as a power of x is really useful for various applications.
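For orientation, the two statements just described take roughly the following shape; these are the standard formulations, with the precise error terms varying a little between references.

```latex
% Siegel-Walfisz: for any fixed A > 0, uniformly for (a,q) = 1 and q \le (\log x)^A,
\pi(x;q,a) \;=\; \bigl(1+o(1)\bigr)\,\frac{\pi(x)}{\varphi(q)} .

% Under GRH for Dirichlet L-functions the error term is a power saving,
\pi(x;q,a) \;=\; \frac{\mathrm{Li}(x)}{\varphi(q)} \;+\; O\!\bigl(x^{1/2}\log x\bigr),
% so the main term dominates essentially whenever q is a little below x^{1/2}.
```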
So if we make this big assumption about the zeros of the Riemann zeta function and Dirichlet L-functions, then we get a much more powerful and much stronger equidistribution statement. But even this GRH bound is, we think, way short of what should really be the truth. Montgomery conjectured that you don't need anything like as strong a restriction as making q smaller than the square root of x: provided q is just a little bit smaller than x, say at most x to the power 0.99, he conjectured that you should still have this equidistribution of primes in any given arithmetic progression modulo q. So even GRH is really not giving us anything quantitatively like the truth, although it's a really noticeable improvement on the original Siegel-Walfisz bound. Maybe this is a bit depressing: unconditionally we can only prove the, by comparison, very weak Siegel-Walfisz theorem, and for lots of applications we'd like results much closer to the GRH bound, or indeed potentially beyond the GRH bound and closer to Montgomery's conjecture. For an individual q we are completely stuck and we don't know how to do this. But for many of these applications we don't actually need perfect equidistribution for every individual choice of q; it's enough to show it's true for most values of q. The famous theorem in this direction is the Bombieri-Vinogradov theorem, which roughly says that the Generalised Riemann Hypothesis bound is true on average. This is often written, the way I've put it as a theorem here, by saying that when you sum over all moduli q a little bit smaller than the square root of x and look at the worst residue class for each, the total difference between the actual count and what you'd expect from perfect equidistribution is small. If you're not very familiar with these things that might look like an awful mess, but the key consequence is that for the vast majority of q in this GRH range, going up to about the square root of x, we do have this equidistribution, and it's only a small number of potential bad moduli that could possibly cause problems; they're suitably rare that for lots of applications this is actually good enough. There's a huge number of results where people originally proved something assuming the Generalised Riemann Hypothesis, and then the Bombieri-Vinogradov theorem came along and enabled an unconditional proof of the same result, because it works for q in the same range and it doesn't matter that there could be a few bad exceptions: the fact that it's true for most q is the important thing. In particular, for sieve methods this is an absolutely vital result, and it's exactly as good as the Riemann Hypothesis; for lots of results in sieve methods you gain absolutely nothing over the Bombieri-Vinogradov theorem by assuming GRH, because you only ever care about these average results for typical values of the modulus q. So just to summarise: for an individual value of q we're really stuck at the Siegel-Walfisz theorem and have no idea how to make progress; if you assume a big conjecture like the Generalised Riemann Hypothesis then you can get up to this threshold of around the square root of x; and for many applications an adequate substitute is the Bombieri-Vinogradov theorem, which says the GRH bound is true on average over q in the same sort of range.
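Written out, the statement being described has the following schematic shape (the standard formulation, with B depending on A):

```latex
% Bombieri-Vinogradov: for any fixed A > 0 there is B = B(A) such that
\sum_{q \le x^{1/2}/(\log x)^{B}} \;\max_{(a,q)=1}\;
  \Bigl|\,\pi(x;q,a) - \frac{\pi(x)}{\varphi(q)}\,\Bigr|
  \;\ll_{A}\; \frac{x}{(\log x)^{A}} .
```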
And the key thing I want to highlight here is that this is exactly the same range as the Generalised Riemann Hypothesis, with q going up to about the square root of x, and this square root of x really seems to be a fairly fundamental barrier. We believe one should be able to go beyond the square root of x; indeed, I mentioned Montgomery's conjecture, which says that even pointwise, for an individual q, you should be able to go up to x to the power 0.99. But we don't have any terribly plausible way of proving such a result, even assuming big fancy conjectures like the Riemann Hypothesis or the pair correlation conjecture, once you're talking about q beyond this square root of x. So the square root of x has become a big barrier, and one challenge that's existed for a while in analytic number theory is just to get any improvement on it: to go from the Bombieri-Vinogradov range, where you can handle most moduli which are a little bit smaller than the square root of x, to being able to handle most moduli that are a little bit bigger than the square root of x. This is something we don't know how to do, but it would be really nice if we could, and this square root of x barrier is not just an artificial limitation of our techniques; for lots of applications there's something special that happens around the square root of x, and there are a few different ways in which you can think of it as an important barrier. One reason it's very difficult to go beyond the square root of x is that, morally, the Generalised Riemann Hypothesis says that all the zeros are on the one-half line, and the Bombieri-Vinogradov theorem says that most of the zeros are on the one-half line, with suitably few exceptions that there are only a few bad moduli q that could cause problems. But to handle q beyond the square root of x you need to show interactions between different L-functions, rather than just controlling each L-function by itself, and we have no plausible technique that would show cancellation between the ordinates of zeros of different L-functions, even assuming things like the pair correlation conjecture about their distribution along the half line. So it really seems a very difficult question that we have no idea how to tackle. The other point is that there are several problems where it's the behaviour of primes in arithmetic progressions right around this square root of x barrier that's the critical threshold for whether you can understand them or not. In the work of Goldston, Pintz and Yıldırım on bounded gaps between primes, they could prove bounded gaps between primes if you could get a suitable version of the Bombieri-Vinogradov theorem that went just a little bit, just an epsilon, beyond this one-half barrier. Similarly, there are other problems, such as the Titchmarsh divisor problem, or Artin's conjecture on primitive roots, or counting primes which can be represented as a sum of two squares plus one, where all the natural proofs run into problems to do with primes in arithmetic progressions whose moduli are very close to the square root of x. For some of these problems you can get around this by other, slightly ad hoc, methods, but it's a fundamental threshold that comes up in lots of different problems: the behaviour around the square root of x is a critical point where you either need another argument, or you need a result in the direction of the challenge that I've put up here.
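To fix ideas, here is a schematic formulation of the sort of challenge being described (my paraphrase of its shape, not the exact slide): find some fixed delta > 0 such that

```latex
\sum_{q \le x^{1/2+\delta}} \;\max_{(a,q)=1}\;
  \Bigl|\,\pi(x;q,a) - \frac{\pi(x)}{\varphi(q)}\,\Bigr|
  \;\ll_{A}\; \frac{x}{(\log x)^{A}}
  \qquad \text{for every fixed } A > 0 ,
```

that is, the Bombieri-Vinogradov range pushed just past the square root of x.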
So this challenge has been around for quite some time, we believe it's hard, and it is still very much open; I would love it if someone could prove it even for some pathetically small constant delta. In the direction of this challenge there was some really pioneering work, over several different papers, of Bombieri, Friedlander and Iwaniec, which did successfully go beyond this square root of x barrier, at least in special cases. They couldn't quite prove my challenge, but they could prove various weakened forms of it, for different notions of weakening, and I just want to highlight two results here. The first is a result of Bombieri, Friedlander and Iwaniec where you fix the residue class a as some constant, so maybe we're thinking of a as being 1 or 2, and we look at the number of primes which are congruent to, say, 1 mod q, on average over q. They were able to show a very weak form of equidistribution, in the sense that for most q you have roughly the expected count. In the Bombieri-Vinogradov theorem the trivial bound for the right-hand side would be pi(x), and there you can make the right-hand side smaller by an arbitrary power of a logarithm; here they only beat the trivial bound by a factor delta squared, where delta measures how far beyond one half you're going. But one consequence is that, for say 99 percent of moduli q of size x to the one half plus one over a million, you have almost the expected number: you're within a factor of one percent of what you'd guess for the number of primes which are less than x and congruent to 1 modulo q. So this is only non-trivial when you're looking at moduli very close to x to the one half, but it does say something non-trivial beyond this Riemann Hypothesis range, and it's completely unconditional: it doesn't require the Generalised Riemann Hypothesis, and there's no clear way in which the Generalised Riemann Hypothesis would even help with these sorts of questions at all.
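As I understand the result being described, its shape is roughly the following (a schematic paraphrase of the verbal description above, not the precise theorem): for a fixed nonzero integer a and small delta > 0,

```latex
\sum_{\substack{q \le x^{1/2+\delta} \\ (q,a)=1}}
  \Bigl|\,\pi(x;q,a) - \frac{\pi(x)}{\varphi(q)}\,\Bigr|
  \;\ll\; \delta^{2}\,\pi(x) ,
```

which beats the trivial bound of order pi(x) only by the small factor delta squared, but is already non-trivial for moduli just beyond the square root of x.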
A second result is that you can weaken the question by not looking at the difference between the number of primes in the residue class and its expectation with absolute values, but instead summing against certain weights that are nice. The way their result is stated is for certain well-factorable weights, which means that whenever you have q which can be written as q1 times q2, the weight lambda(q) factorises very nicely, and moreover it's only supported on integers q which factorise well. The key point of these well-factorable weights is that this is precisely the sort of sum that comes up in sieve methods. Again this only works if you fix the residue class a, so you're thinking of a as being 1 or 2, but from the point of view of sieve methods this second theorem, BFI2, is often a completely adequate unconditional substitute for the Bombieri-Vinogradov theorem, and it allows you to move from moduli q going up to x to the one half to moduli q going up to x to the four sevenths. Correspondingly this gives numerical improvements to lots of different sieve-theoretic quantities, and using it you can get, for example, a fairly simple proof of Chen's theorem that there are infinitely many primes p such that p plus 2 has at most two prime factors. So this is a weakening with these well-factorable weights, but under this weakening you can go concretely beyond one half, in fact as far as x to the four sevenths, and from the point of view of sieve methods, in lots of set-ups, this is just as good as having the Bombieri-Vinogradov statement itself with four sevenths in place of one half.
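Schematically, and following the description above, a weight lambda supported on integers up to level Q is called well-factorable if for every factorisation Q = Q_1 Q_2 into quantities at least 1 one can decompose

```latex
\lambda \;=\; \lambda_1 \ast \lambda_2 , \qquad |\lambda_1| \le 1,\ |\lambda_2| \le 1,
% with \lambda_1 supported on integers \le Q_1 and \lambda_2 on integers \le Q_2.
```

The relevance to sieves, as far as I'm aware, is that upper-bound linear sieve weights can (up to small errors) be decomposed into sums of such well-factorable weights, which is why this is the natural hypothesis for sieve applications.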
These are two explicit examples going beyond x to the one half. The final example going beyond this GRH range that I'd like to talk about is the work of Zhang, and then its refinements by the Polymath 8 project. Zhang was able to look at this difference with absolute values, but one key restriction was that he could only handle moduli q bigger than x to the one half if he restricted to q with no large prime factors. So here all the prime factors of q are forced to be very small, but if you make this restriction then, over these special moduli, he was able to get a certain extension of the Bombieri-Vinogradov theorem. This again doesn't quite take the worst residue class for every modulus q, but at least with the refinements of the Polymath project it is quite uniform in the residue class, in the sense that you can choose whatever integer a you like to begin with, and it can even be large compared to x, and then for that integer the vast majority of these special q have roughly the expected number of primes in the residue class a modulo q. The fact that this was somewhat uniform in the residue class was very important for applications, and in particular for Zhang's application to bounded gaps between primes: the work of Goldston, Pintz and Yıldırım said that if you could just cross this threshold of the square root of x then you would get bounded gaps between primes, and Zhang proved this weak version, where you don't have full uniformity in the residue class but some uniformity, and where you can only take moduli q which have small prime factors, but this was a good enough unconditional substitute to combine with the earlier work of Goldston, Pintz and Yıldırım to give bounded gaps between primes. There have actually been several different consequences of these ideas, because, as I said, they go past a critical threshold. Some related estimates, which aren't the three theorems I've put up here but are essentially estimates fundamentally due to Fouvry, enabled Adleman, Fouvry and Heath-Brown to show that the first case of Fermat's Last Theorem is true for infinitely many prime exponents. Of course we now know that Fermat's Last Theorem is true in general thanks to Andrew Wiles's work, but this predated Wiles's work by about ten years, and it was the first time we knew that there were infinitely many prime exponents for which the first case of Fermat's Last Theorem holds, all based on results about primes in arithmetic progressions to large moduli, in particular going beyond this square root of x. Secondly, there's the very important work of Zhang which I've already mentioned: when combined with the ideas of Goldston, Pintz and Yıldırım, Zhang's result about primes in arithmetic progressions to large moduli was the key input in proving that there are infinitely many bounded gaps between primes, and he showed there are pairs of primes differing by no more than 70 million infinitely often. And there are other results: the work of BFI with well-factorable weights gives improved bounds for all kinds of results from sieve methods, going beyond this critical x to the one half range gives good error terms in the Titchmarsh divisor problem, and there's been rather more recent work which, for example, gives improved ranges for one-level density estimates for zeros of L-functions, or good asymptotics for the number of primes which are represented as a sum of two squares plus one. These recent results are based on the same set of ideas about primes in arithmetic progressions to moduli bigger than the square root of x. So today I'd like to talk about some new results about primes in arithmetic progressions which are in the same spirit and give different weakened forms of the challenge, showing that you get some sort of equidistribution, or the expected count, of primes in arithmetic progressions beyond x to the one half in various different settings. The first theorem is like the original results of Bombieri, Friedlander and Iwaniec that I mentioned, in that we're dealing with a fixed residue class, so we're thinking of the residue class a as just being 1 or 2, and I have a result for moduli a bit beyond x to the one half, of size x to the one half plus delta, where you get the expected count for most of these moduli, provided you're looking at moduli that have a conveniently sized factor. The theorem as I've written it up here maybe looks a bit of a mess, but the key point is that I'm looking at moduli q1 times q2 of size about x to the one half plus delta, I'm ensuring that q1 is neither too big nor too small, and provided I'm looking at such moduli, with a conveniently sized factor, I'm getting equidistribution for this fixed residue class: I'm now able to get a good error term and go concretely beyond x to the one half. In slightly easier language, I can look at moduli of size up to x to the one half plus, say, one over two thousand, and I can now prove that at least 99 percent of these moduli have the expected count for the number of primes up to x in the residue class a modulo q, for any fixed value of a (we're still thinking of a as being 1 or 2 or something like that). Another corollary is that if I'm looking at moduli that factor very nicely, with one factor of size x to the power one over 21 and one
factor of size about x to the power 10 over 21, so that the modulus itself is of size about x to the power 11 over 21, then I correspondingly get an equidistribution result for such moduli. The point here is that we can go a concrete, reasonable distance beyond x to the one half: we're getting x to the 11 over 21, and this compares quite favourably with various previous estimates. All of these results measure the error with absolute values, rather than with well-factorable weights or anything like that, and there are only relatively weak constraints on the moduli I'm looking at: they need to have one factor in a convenient range, rather than all their factors being very small, so they can be factorised in multiple different ways. So maybe that's the first set of results I'd like to mention, and the key point is that it's again a good error term, holding for most moduli. The second set of results concerns working with weights. I mentioned that for applications one of the most important results, particularly from the point of view of sieve methods, is this result of Bombieri, Friedlander and Iwaniec where they can go up to moduli of size x to the four sevenths, and going beyond one half to four sevenths explicitly improves lots of constants and estimates coming from sieve methods. By refining some of the ideas going into that, I have a result that enables you to estimate primes in arithmetic progressions with moduli now of size x to the three fifths rather than x to the four sevenths, provided I have some well-factorable weights. I don't quite have exactly the same definition of well-factorable as the original BFI result, it's something I call triply well-factorable, but I expect this should correspondingly give various improvements to results coming out of sieve methods. To give one proof of concept in that direction, you can think about the linear sieve weights, which are the ones used in most sieve theory applications: before, this result of Bombieri, Friedlander and Iwaniec was the state of the art and allowed you to sieve using moduli going up to x to the power four sevenths; now we can go up to x to the seven twelfths. If you're not an expert on the linear sieve or sieve weights or any of these things, the key point is that before we had quantitative results going up to four sevenths, and this can now be improved to either seven twelfths or three fifths depending on the technical situation, and this should correspondingly improve any result from sieve methods that was previously relying on the BFI results, of which there are several. So these are different weakened versions of the challenge problem about primes in arithmetic progressions to large moduli, where now we can go up to moduli as large as x to the three fifths, which appears to be a hard limit of any of the techniques that I know for these problems.
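To keep the various exponents straight, here are the levels of distribution mentioned so far, written as decimals:

```latex
x^{1/2} = x^{0.5}
  \;<\; x^{11/21} \approx x^{0.524}
  \;<\; x^{4/7} \approx x^{0.571}
  \;<\; x^{7/12} \approx x^{0.583}
  \;<\; x^{3/5} = x^{0.6},
```

all of which are still far short of the exponent 0.99 in Montgomery's conjecture.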
The final set of results I'd like to talk about tries to emulate the amount of uniformity in the residue classes present in the original Bombieri-Vinogradov theorem. In the original Bombieri-Vinogradov theorem, for every modulus that appeared you were taking genuinely the worst possible residue class, and this meant that if you look at the collection of all possible pairs of residue class and modulus, you were looking at an absolutely huge number of pairs, rather more than appear in any of the results going beyond x to the one half. So these are uniform results, which allow you to take the worst residue class for all the moduli you're considering, and allow you to go to moduli which are now bigger than x to the one half, but again I need a constraint that the moduli have some sort of convenient factorisation: I'm looking at moduli which are q1 times q2, where q1 and q2 have somewhat constrained sizes, so I can't quite deal with all moduli. The first result is like a uniform version of the weak BFI result, where the trivial bound is pi(x) and I'm only winning a small factor delta over the trivial bound, with delta measuring how far I go beyond x to the one half, and I also restrict to moduli which are q1 times q2 with q1 of size about x to the one tenth; but at least in this regime I do get a completely uniform estimate, uniform in the same way that the Bombieri-Vinogradov theorem is uniform, and I can get it non-trivially for moduli which are bigger than x to the one half. If you allow a slight weakening, then a more down-to-earth way of saying this is that for almost all moduli of size at most x to the one half plus delta, provided you only look at moduli that have a factor between about x to the two fifths and x to the three sevenths, almost all of them have roughly the expected number of primes in every possible arithmetic progression modulo q. The key point of these results is that they let you deal with the worst possible residue class for every modulus considered. I think there's a problem that maybe Ben Green mentioned: if for every prime q of size x to the one half plus delta you pick a residue class a_q, can you show that there is at least one prime of size at most x in at least one of those residue classes? (A schematic version of this is written just after this paragraph.) I can't do that for prime moduli, but if I'm looking at, say, products of two primes, where one of the prime factors in the modulus has size about x to the three sevenths, then not only can I show that there is at least one prime in this collection; in fact, for almost all the moduli you're looking at, every residue class has roughly the expected number of primes.
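Schematically (my paraphrase of the question as described), the problem is: for each prime q of size about x^{1/2+delta}, an adversary chooses a residue class a_q, and one wants to show that

```latex
\sum_{\substack{q \ \text{prime} \\ q \sim x^{1/2+\delta}}} \pi\bigl(x;\, q,\, a_q\bigr) \;\ge\; 1,
% i.e. at least one prime p \le x lies in one of the chosen classes a_q (mod q).
```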
So some of these results are slightly technical when stated precisely, but the key point is that we have various new ways of going beyond this threshold of the square root of x, with different weakenings of the challenge problem: in some results I require the residue class to be fixed, in some I'm summing with well-factorable weights, and in some I'm being completely uniform, but in each of these different cases I'm able to get non-trivial results showing some sort of weak equidistribution, that the primes have roughly the expected number of elements in a given residue class a modulo q, when these q are larger than this square root of x threshold. Okay, so hopefully you now have a reasonable feeling for the results I'd like to talk about. I definitely don't want to get too involved in the proofs (in total, the three papers I've finished are a bit over 200 pages), but I would like to talk at a slightly high level about some of the ideas that go into this. On a very high level, the style of the argument is very similar to lots of arguments to do with primes, and in particular to all the previous works about primes in arithmetic progressions. If you want to count the number of primes in these residue classes, you typically start by applying some combinatorial decomposition to the counting function of the primes, to express it in terms of counts of products of two or three variables of particular sizes; for the experts here, I'm thinking about something like Vaughan's identity, or the Heath-Brown identity, or something coming from Harman's sieve (one standard such identity is written out below).
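For readers who haven't seen it, here is the standard form of Vaughan's identity, which the talk mentions only by name: for parameters U, V at least 1 and every integer n > V,

```latex
\Lambda(n) \;=\; \sum_{\substack{dm = n \\ d \le U}} \mu(d)\log m
 \;-\; \sum_{\substack{cdm = n \\ c \le V,\; d \le U}} \Lambda(c)\,\mu(d)
 \;-\; \sum_{\substack{mk = n \\ m > V,\; k > U}} \Lambda(m) \sum_{\substack{d \mid k \\ d \le U}} \mu(d).
```

The first two sums lead to "Type I" averages and the last to a bilinear "Type II" sum, which is what reduces counting primes in progressions to the averaged estimates discussed next.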
This reduces you to asking various questions about such products of particular sizes in different arithmetic progressions on average, and you can use Fourier analysis to reduce the problem to estimating various complicated-looking exponential sums in lots of different ranges. You then have these various exponential sum estimates, all of which look pretty horrendous, in lots of different ranges, and depending on the precise set-up you use different techniques to get different exponential sum bounds. In some ranges you use bounds that fundamentally come from the spectral theory of automorphic forms; here I'm particularly thinking of the work of Deshouillers and Iwaniec based on consequences of the Kuznetsov trace formula. We also have completely different ways of understanding exponential sums, based on complete exponential sums reinterpreted through the lens of algebraic geometry: for various set-ups you can coerce your messy-looking exponential sums into things that can be properly estimated using techniques from algebraic geometry, and here I'm thinking of the Weil bounds, or bounds coming from Deligne's proof of the Riemann Hypothesis for varieties over finite fields. So there are two fundamental ways of bounding these exponential sums, and fortunately they combine quite nicely: in some ranges where you can't use the spectral automorphic form techniques you can use bounds coming from algebraic geometry, and vice versa. There's then a very messy optimisation, where you can bound some of these sums but you have to hope that your bounds are good enough to cover all the different ranges that are thrown at you by the combinatorial decomposition and by the Fourier analysis turning the problem into exponential sums. Maybe the tagline is that you follow this standard overview to reduce things to exponential sums, and then try to combine different ways of bounding exponential sums, some from algebraic geometry and some from automorphic forms, which often go under the tagline "Kloostermania" because they concern sums of Kloosterman sums. I'd like to very briefly mention how these different things arise. The use of spectral theory and the bounds coming from automorphic forms was an absolutely key feature in the original work of Bombieri, Friedlander and Iwaniec. In particular, the Kuznetsov trace formula allows you to take certain nice-looking sums of Kloosterman sums, a very special type of exponential sum that fortunately, and somewhat miraculously, is very common in analytic number theory and in particular in these problems about primes in arithmetic progressions, and to reinterpret these sums of Kloosterman sums as averages of Fourier coefficients of automorphic forms. So if you can understand averages of Fourier coefficients of automorphic forms reasonably well, you get very good bounds on sums of Kloosterman sums, and there's a whole variety of techniques for twisting and manipulating the sums that come out of these problems about primes in arithmetic progressions into things that are amenable to being reinterpreted as sums of Kloosterman sums. Typically these give exceptionally strong bounds whenever your exponential sums are of the right form, and another very nice feature is that these methods often work well without any particular constraints on the sorts of moduli you're looking at. But one big downside of the spectral theory techniques is that, when you're looking at questions about primes in arithmetic progressions, they typically give no uniformity, or very poor uniformity, with respect to the residue class, so if you're relying on spectral-type techniques you typically can't get very uniform results as the final answer. The other way of bounding exponential sums that I mentioned uses techniques rooted in algebraic geometry, and this was a very important feature of Zhang's work on primes in arithmetic progressions. Here there's maybe a wider variety of exponential sums the method can handle: provided the exponential sums look like algebraic exponential sums, you can reinterpret them in a very algebraic manner, in particular relating their size to cohomology groups associated to certain curves or varieties, and then you can use the deep work of Deligne to get a very good understanding of these cohomology groups and, correspondingly, very good bounds on these algebraic exponential sums. Again there's a whole set of techniques for turning the sorts of exponential sums that are spat out by the method into the right kind of exponential sums, amenable to these estimates from algebraic geometry. One big benefit of the algebraic geometry estimates is that they are typically very uniform with respect to lots of the parameters involved, and in particular they tend not to care too much about precisely which residue classes you're looking at; but one downside is that the exponential sums which come out are hard to handle this way in general, and typically the estimates only work provided the moduli factor conveniently. In Zhang's work he assumed the moduli factored as well as he could possibly like, because they only had small prime factors, so you could always split the modulus into factors of whatever size you wanted; this is why his work was exceptionally amenable to the algebraic geometry exponential sum estimates, and the uniformity of those estimates is why he was able to get a moderately uniform result, which was very important for proving bounded gaps between primes.
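To make the objects in this discussion slightly more concrete, here are the standard definitions and bounds being referred to; these are textbook statements rather than anything specific to the new work.

```latex
% Kloosterman sum to modulus c, where x runs over residues coprime to c and x \bar{x} \equiv 1 (mod c):
S(m,n;c) \;=\; \sum_{\substack{x \,(\mathrm{mod}\ c) \\ (x,c)=1}}
  e\!\left(\frac{m x + n \bar{x}}{c}\right),
  \qquad e(t) := e^{2\pi i t}.

% Weil/Deligne bound: square-root cancellation in each individual complete sum,
|S(m,n;c)| \;\le\; \tau(c)\,(m,n,c)^{1/2}\, c^{1/2},
% while the Kuznetsov formula controls averages of S(m,n;c) over the modulus c,
% often far beyond what this pointwise bound gives.
```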
So on a very basic level, my work is about trying to come up with different ways of combining these two approaches, one based on the spectral theory of automorphic forms and one on estimates coming from algebraic geometry. Unfortunately, if you try to do this naively the two techniques really don't combine well at all: it turns out that there's a single worst-case scenario, essentially for both techniques, which is when the decomposition produces products of five prime factors all of size about x to the one fifth, and in order to handle this one special case my whole method would almost degenerate entirely down to Zhang's argument, so you're almost winning nothing if you just try to combine the two arguments. To make progress, the key idea was to introduce refinements to both methods, refinements to the arguments based on the spectral theory of automorphic forms and refinements to the algebraic geometry style estimates, to handle what were previously the worst-case scenarios. There are then new worst-case scenarios, but fortunately, after handling the old ones, what is now the worst case for, say, the spectral estimates is actually one of the best cases for the algebraic geometry estimates, and so the methods combine very nicely. Since I have to use some of the spectral methods, that's why the first few theorems I stated still require the residue class to be fixed; since I still need to use some of the algebraic geometry estimates, that's why there are still some requirements, at least in the first few results, that the moduli have a conveniently sized factor. But these are both fairly weak requirements, because I'm only using each technique in the regions where it's most effective, and they're very complementary to one another once these worst cases have been handled. I really don't want to dive too much into the technicalities of what I'm modifying in either the spectral theory approach or the algebraic geometry approach, but I'd like to give at least a few hints of the flavour of what I'm doing. For the new ideas modifying the spectral theory approach to these estimates, my modification is inspired by a technique known as the amplification method of Friedlander and Iwaniec, and in some ways it's the opposite of the amplification method. The amplification method tends to involve a trivial-looking step at the beginning where you appear to lose an awful lot and end up with a different, rather more complicated, sum, but you can then use greater knowledge about estimates in families to nonetheless win a little bit; it's a huge gambit. Typically in the amplification method you want to increase the contribution from certain diagonal terms, whereas in my case you want to do almost the opposite and decrease the contribution from the diagonal terms, but the approach is similar: I take a gambit where I do something that looks exceptionally artificial at the beginning, and looks like it can only possibly hurt me, but it turns out that once you filter it through the argument, this technical modification at a critical stage allows you to win a very small amount, even if it makes everything else much more complicated. If that critical stage is the one bottleneck in your argument, and you can adequately handle all the other, more complicated, estimates, then you can balance these things so that you come out ahead. What I do here is introduce a completely artificial congruence constraint at a very early stage of the argument; once you go through the technical manipulations, this congruence constraint translates into forcing a factorisation of one variable, and factorisations are very convenient from the point of view of lots of analytic techniques. This factorisation allows you to win a tiny bit in the most important part of the argument, even though you lose quite a lot in many of the other parts, but fortunately you can balance things so that you win a bit in the
important case and don't lose too much in any of the unimportant cases, and this allows you to win overall; it handles the completely critical worst-case scenario of the original Bombieri-Friedlander-Iwaniec arguments. On the algebraic geometry side, I introduce some rather different new ideas, which are instead inspired by transference ideas coming from additive combinatorics. Transference principles in additive combinatorics say that if you want certain estimates for a sparse set, you often don't need to know very much about the sparse set itself, provided you can understand certain more technical estimates for a dense set in which it lies. The philosophy for how you do this is to repeatedly apply Cauchy-Schwarz to replace certain unknown coefficients by smooth ones, and to accept that you're building up lots and lots of technical conditions: every time you apply Cauchy-Schwarz you maybe double the number of technical conditions in play, but you're winning something because you're smoothing some of these complicated coefficients, and you hope to arrange things so that you eventually arrive at a sum which involves lots of coefficients and a huge number of technical conditions, but where all the coefficients are completely smooth, so that you can estimate it despite all these conditions. My set-up is a little different because I only have sparse variables rather than dense variables, which restricts what you're able to do, but following this philosophy of simply accepting that you're piling up lots of technical conditions, which you only deal with right at the end, I am able to come up with arguments that give non-trivial estimates in at least some very particular ranges. One of the key features of this algebraic geometry approach, inspired by the transference ideas, is that the resulting estimates are now completely uniform with respect to the residue classes, which is totally vital for the final results I was talking about, the ones that are uniform in the same way the Bombieri-Vinogradov theorem is uniform. Okay, so I'm almost out of time, but just to summarise what I said: there's then a very technical game of hoping that all your refinements are quantitatively good enough that you can just about cover all these different ranges. You have lots of different estimates that work well in different ranges, and you hope that, combined, these estimates cover all possible ranges; because we could deal with the previous worst-case scenarios, the worst-case scenarios change to ones where the spectral estimates combine very well with the algebraic geometry estimates, and that allows you to cover all the different ranges, although lots of additional technical work is needed to get a good quantitative balance. So, very briefly, we use various different styles of estimation for the exponential sums, from algebraic geometry and from spectral theory, building on various previous works, and with different refinements each of these can handle some important range; this is just enough to cover all the different ranges that come out of a combinatorial decomposition for the primes, and putting everything together gives the results that I mentioned about primes in arithmetic progressions. Okay, so thanks a lot for listening, I hope you enjoyed the talk, and I'll hand over for questions.
Thank you, James, for your beautiful talk. I'd like everyone to unmute their microphone and clap for our speaker, please.