So I want to speak today about intersective sets. An intersective set is a set S of natural numbers with the following property: if you take any set A of positive upper density, and I'll explain what that means in just a second, then the difference set of A, the set of all differences of two elements of A, meets S. So there are two distinct elements of A that differ by an element of S. Positive upper density means that for arbitrarily large initial segments {1, ..., N} of the integers, the proportion of elements of that segment lying in A is at least some fixed delta > 0. Equivalently, the limsup of the number of elements of A up to N, divided by N, is positive. I just want to make a slightly historical remark about this notion of an intersective set. It is really motivated by dynamical systems, or rather by ergodic theory: it turns out to be equivalent to being what's called a set of recurrence in that context. So take a measure-preserving system, that is, a space X with a measure on it and a measure-preserving self-map T. If you take a set A of positive measure in X, then there is some n in your set S, your set of recurrence, for which the set A comes back to itself under the map T in time n: the measure of T^{-n}A intersect A is positive. The equivalence can be proved by what's called the Furstenberg correspondence principle, which is essentially the technique that Furstenberg invented to prove Szemerédi's theorem using ergodic theory, but I'm not going to discuss that further here. So I'm going to talk about intersective sets; they're the same thing as sets of recurrence, and I won't mention sets of recurrence again. Okay. So which kinds of sets are intersective?
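In symbols, writing d-bar for upper density, the two equivalent notions just described are:

```latex
% S is intersective: for every A \subseteq \mathbb{N},
\bar{d}(A) := \limsup_{N \to \infty} \frac{|A \cap \{1,\dots,N\}|}{N} > 0
\quad \Longrightarrow \quad \exists\, a \neq a' \in A \ \text{with}\ a - a' \in S.

% S is a set of recurrence: for every measure-preserving system (X, \mu, T)
% and every A \subseteq X with \mu(A) > 0, there is some n \in S with
\mu(T^{-n} A \cap A) > 0.
```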
Well, there was a conjecture of Lovász in the 70s that the set of squares is intersective. That was proven in the late 70s by Furstenberg, using methods of ergodic theory, in particular the correspondence principle I just mentioned, and independently by Sárközy, using on the face of it very different ideas related to the Hardy–Littlewood circle method. Let me just remind you what it means that the squares are intersective: if you take a set A of natural numbers of positive upper density, there will be some two of them that differ by a nonzero square. Erdős asked, around the same time, a related question: are the shifted primes p − 1, as p ranges over primes, intersective? The primes themselves are not intersective. For example, you could take the set of multiples of four, which certainly has positive upper density, but no two elements of that set differ by a prime, for the fairly obvious reason that every difference is a multiple of four. Erdős also asked about the primes shifted the other way, p + 1 for p prime. That was resolved by Sárközy as well, again in the late 70s, using techniques somewhat similar to those he used for the Lovász conjecture, so again related to the circle method of Hardy and Littlewood. There are various generalizations of this. For example, the set of squares can be replaced by the set of values of any integer polynomial, provided that it takes the value zero somewhere. Now, I am interested in quantitative aspects of this problem. To get quantitative, let's denote by δ_squares(N) the density of the largest subset of the first N integers which does not have two elements differing by a square. And similarly, let's denote by δ_primes(N) the density of the largest subset of {1, ..., N} which doesn't have two elements differing by a shifted prime p − 1.
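In symbols, the two quantities just introduced are:

```latex
\delta_{\mathrm{squares}}(N) := \frac{1}{N}\,\max\bigl\{|A| : A \subseteq \{1,\dots,N\},\ (A-A)\ \text{contains no nonzero square}\bigr\},

\delta_{\mathrm{primes}}(N) := \frac{1}{N}\,\max\bigl\{|A| : A \subseteq \{1,\dots,N\},\ (A-A)\ \text{contains no } p-1 \text{ with } p \text{ prime}\bigr\}.
```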
Now, the fact that I stated on the last slide, the conjecture of Lovász proven by Furstenberg and by Sárközy, namely that the squares are intersective, is equivalent to δ_squares(N) tending to zero, if you unpack the definitions and the limsup in the definition of upper density. The shifted primes being intersective is equivalent to δ_primes(N) tending to zero. But you can ask how quickly those quantities tend to zero, and in large part that's actually an unsolved question. Furstenberg's work, which uses ergodic theory, doesn't give you any quantitative information; in fact it uses the axiom of choice at some point. Sárközy's work does give quantitative bounds, but they're relatively weak. For example, δ_squares(N) is at most, essentially, (log N)^{-1/3}. This means that if you take a subset of {1, ..., N} of size about N/(log N)^{1/3}, there will be two elements in it that differ by a square. For the shifted primes it was even worse: with Sárközy's techniques, δ_primes(N) is less than basically one over a log log term squared. So it tends to zero, but only just. What's the state of the art on these? Sárközy's bound for squares has been improved: (log N)^{-1/3} was improved to (log N)^{-ω(N)}, for a function ω(N) tending to infinity, by Pintz, Steiger and Szemerédi. Their paper gave a function that really only just tended to infinity, a quadruple logarithm. But more recently Bloom and Maynard showed an improvement, so that the function ω(N) can be basically just a triple logarithm, and that's the best bound currently known for that problem. But in the other direction it's conjectured that, in fact, there's a power saving, so what could be true is remarkably stronger than what's actually known.
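To summarise the shapes of the bounds just mentioned (constants suppressed; these are the forms as stated in the talk):

```latex
\delta_{\mathrm{squares}}(N) \ll (\log N)^{-1/3} \quad \text{(Sárközy)}

\delta_{\mathrm{squares}}(N) \ll (\log N)^{-\omega(N)},\ \omega(N)\to\infty
\quad \text{(Pintz--Steiger--Szemerédi; Bloom--Maynard get } \omega(N) \approx c \log\log\log N\text{)}

\delta_{\mathrm{primes}}(N) \ll (\log\log N)^{-2} \quad \text{(Sárközy)}
```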
It's conjectured that if you take a subset of {1, ..., N} of size N^{1−c}, for c an appropriately small positive constant, then there should be two elements in it differing by a square. In the other direction, an example of Ruzsa shows that you can't hope to take c too large: he gives an example of a set of size about N^{3/4} with no square difference. Now, what about the shifted primes? Sárközy's bound there was even weaker. It was improved by Lucier, and then by Ruzsa and Sanders, and then, most recently, by Zoe Wang, who was a DPhil student here at Oxford. Wang obtained the bound written there: δ_primes(N), the density of the largest subset of {1, ..., N} with no two elements differing by p − 1, is bounded above by basically e^{−(log N)^{1/3}}. Okay. All of the works I've just mentioned use some variant of what's called the density increment method. That method has its origins in the work of Klaus Roth from the fifties on three-term arithmetic progressions, that is, on Szemerédi's theorem for progressions of length three. Let me very briefly sketch how that works, just in overview, without any details. You suppose you have a subset A of {1, ..., N} with some density α, which doesn't contain two elements differing by a shifted prime p − 1. Then you use some Fourier analysis, a.k.a. the circle method; it's somewhat complicated. Using that, you locate a subprogression P which is very big: its common difference is quite small, at most α^{−C} for some constant C, and its length is at least α^C times N, and on P the density of A is a bit bigger. So it's no longer α: it's α times a factor a little bigger than one. And then you iterate that argument, and keep increasing the density.
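Schematically, with c and C placeholder constants, the iteration produces densities α_j on progressions of lengths N_j:

```latex
\alpha_0 = \alpha, \qquad \alpha_{j+1} \ge (1 + c)\,\alpha_j, \qquad N_{j+1} \ge \alpha_j^{C}\, N_j .
```

Since a density can never exceed 1, the iteration must stop within O(log(1/α)) steps, and unwinding what that forces on the lengths N_j is what yields the upper bound on α.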
And after a certain number of iterations, about log(1/α) of them, the density gets all the way up to one, and it can't pass one, otherwise you'd have a contradiction. Now, that's an oversimplification in many, many ways. I haven't said anything about what "using Fourier analysis" means. But also, when you pass to a subprogression, you no longer have the shifted primes: you have the shifted primes restricted to that subprogression, and you're going to need to understand something about how those behave. Understanding how the primes behave on progressions is a very famous classical problem; it has to do with zeros of Dirichlet L-functions. Even if you allow yourself to assume GRH, you're not able to understand what happens once the common difference goes much beyond the square root of N, so that's a sort of hard barrier to the method. Let's look at what this gives. Even using the generalized Riemann hypothesis, this limits the density increment argument to proving something like δ_primes(N), the biggest density of a subset of {1, ..., N} with no two elements differing by a shifted prime, being at most something like e^{−√(log N)}. That's a bit better than Wang's bound of e^{−(log N)^{1/3}}, but actually not all that much. So the main thing I want to talk about today is a result I proved last year, which gives a substantially stronger, power-saving bound for this δ_primes(N). Another way of saying this is that there exists a constant c > 0 such that if I take N^{1−c} elements up to N, some two of them will differ by a shifted prime. If you allow yourself to assume GRH, you can actually take that constant c to be somewhat reasonable: anything less than 1/12. So let me make a couple of remarks about that theorem. Well, first of all, what should probably be true.
It should probably really be true that this δ_primes(N) behaves almost like N^{−1}. In other words, as soon as you take a subset of {1, ..., N} even of size N^{0.01}, which is tiny, there probably should be two elements in it that differ by p − 1. But that does seem to be a completely hopeless problem. Now, the most naive random heuristics might suggest that even a set of size something like (log N)^2 should contain two elements that differ by a shifted prime. But Ruzsa actually constructed a much bigger set, of size almost a power of N but not quite, e^{log N / log log N}, with no difference of the form p − 1. Now, another remark: suppose you have proven, or want to prove, the theorem I stated here. Just consider the particular set A of all a ≤ N which are divisible by some number q. If q ≤ N^c, this set has size at least N^{1−c}, so the theorem says A contains two elements differing by a prime minus one; that difference is a multiple of q, so there is a prime congruent to one mod q of size at most N, which is at most q^{1/c}. Now that's basically Linnik's theorem: the theorem that I stated at the top implies Linnik's theorem, that the least prime in the progression one mod q is at most q^L for some fixed power L. So those are some remarks about the theorem. We can't hope to get the value of little c better than 1/L, where L is the best known constant in Linnik's theorem, and you probably shouldn't even hope to get anywhere near that either, because this theorem is a more general thing than Linnik's theorem. Parenthetically, the best values for that Linnik constant L currently known are a little bit bigger than five, due to Xylouris, building on some celebrated work of Heath-Brown; and even on GRH it's only two, two plus a bit. Okay. So what are you going to have to do to prove that main theorem? Well, from the discussion so far.
First of all, you'd have to do something other than the density increment argument, because, as I sketched, the density increment argument that has been used in all of the previous works won't get you such a power saving, even on GRH. Secondly, because you've got to prove Linnik's theorem along the way, you either have to use Linnik's theorem as a black box, or you have to use somewhere the ingredients that go into proving it, which are basically classical: a lot of zero-density estimates, counting how many zeros of L-functions there are near the line Re s = 1 rather than on the half line, and something called exceptional zero repulsion, which basically says that if there's a zero very, very close to one, it sort of repels all the other potential zeros a little bit away from it. So you kind of know that you're going to have to do these two things to prove the main theorem. For the rest of this talk I'm going to assume the generalized Riemann hypothesis. That easily implies Linnik's theorem: all of the zero-density estimates, exceptional zeros and so on basically become trivial if you assume GRH. So for the most part I'm going to ignore that second point. But I still can't ignore the first point, even if I am assuming GRH. So what is the plan, if we're not going to use a density increment strategy? Well, what I do is prove that the shifted primes actually have a stronger property than being intersective, which is called the van der Corput property. And rather than stating that in the abstract, let me tell you the theorem that I shall prove. So assume GRH. Then there's a function ψ from the integers to R which has the following properties. First of all, it's supported on the set of shifted primes less than N: it's only nonzero on that set. Secondly, its Fourier transform, its exponential sum, or rather the real part of its exponential sum, is almost non-negative everywhere.
Namely, it's lower bounded by essentially −N^{11/12}. If you think about it, if ψ is a function that has average value one, then that exponential sum could in principle be as small as about −N, which is a lot smaller than what I've written there. So this is saying that the exponential sum is close to being non-negative everywhere. Thirdly, there is a normalization condition: I'll be a bit vague about what the tilde means, but I'm going to assume that ψ does in fact have average value one. So there is a function supported on the shifted primes which has an almost positive Fourier transform, and that's what's called the van der Corput property. It's not too hard to show that if you have a function ψ like this, it implies a good bound for the Sárközy-type theorem, for the intersectivity property; in other words, that this implies the main theorem I stated on the previous slide. So I won't actually prove that implication in this talk, but I do want to show you a model statement of the same type, which gives the rough idea. Just before going on to that, let me remark that this van der Corput property, having a function supported on your set with an essentially positive Fourier transform, implies being intersective, but it's actually strictly stronger, because Bourgain constructed an example of a set that is intersective but which doesn't have this van der Corput property. So we're gaining something here by aiming for a strictly stronger result. Okay, so now, first of all let me just check the time. One moment. Okay, time seems to be going fine. Yeah, so I want to give you a kind of model version of the implication that having positive Fourier transform implies being intersective. I'm not going to talk about this setting exactly; I'm going to talk about something else. And I'm going to show you an argument due to Lovász, which proves the following theorem.
So suppose that q is a product of distinct primes that are all one mod four, and consider essentially Sárközy's problem for squares mod q: suppose that A, contained in Z/qZ, doesn't have two distinct elements differing by a square mod q. Then the conclusion is that the size of A is at most √q. This is a very good bound, but it's for a weaker problem, because being a square mod q is somehow a lot easier than being a genuine square. You can see that most clearly when q is a prime: half the nonzero residues mod q are then squares. So it's much easier to be a square mod q than it is to be a genuine square, but nonetheless this is a really interesting theorem. It's a theorem which I believe is due to Lovász, and it uses something called the Lovász theta function, though I'm not going to phrase it that way. So let me go through the argument a little. First of all, let's consider a fixed prime p that's one mod four, and define the following function mod p: f is one at zero mod p, and at the squares mod p other than zero it takes the value 2/(1 + √p); elsewhere it is zero. So the function is supported on the squares mod p; it's one at zero and 2/(1 + √p) at the squares other than zero. Now you can compute the discrete Fourier transform of this function. What I've written on the left here is basically the discrete Fourier transform of the set of nonzero squares; e(t) means e^{2πit} as usual, and this is basically just a Gauss sum evaluation. What you can compute is that the value at a nonzero frequency r is (1/2)((r|p)√p − 1), where (r|p) is the Legendre symbol. And what you can observe about that is that it's real, first of all, and this is basically because the squares are symmetric: if x is a square then −x is a square, because p is one mod four. And so it's real.
And the smallest it can be is −(1 + √p)/2, at the non-residues. Therefore the discrete Fourier transform of f, defined as above, is a real and non-negative function: the weight 2/(1 + √p) was chosen exactly so that the transform vanishes at the non-residues rather than going negative. So we've constructed a function supported on the squares mod p whose discrete Fourier transform is real and non-negative. This is kind of a nice discrete analogue of the van der Corput property that I mentioned before, albeit for a different problem: for the problem of squares mod q, rather than anything to do with shifted primes. And we can also note the normalization property: the Fourier transform at zero is 1/√p, with the normalization I'm using. Okay. So that was just for one prime p that's one mod four. Now if I consider a q that's a product of primes that are one mod four, I can take a product of f's like the ones that I just constructed, and with a little bit of work with the Chinese remainder theorem, what you obtain is a function F on Z/qZ which has the following properties: it's supported on the squares mod q; it's one at zero; its discrete Fourier transform at zero is 1/√q; and its discrete Fourier transform everywhere is real and non-negative. So that's just an explicit construction, the product of the functions that I showed you on the previous slide. So how do we use a function like that to get an upper bound on the size of a set A for which A − A contains no nonzero squares mod q? Suppose I have such a set A in the residues mod q, so that the only square in A − A is zero. Well, then I can write down the following. The expression on the left here, this (1_A ∘ 1_A)(x), is basically the convolution of 1_A with the characteristic function of −A: it's the number of ways of writing x as a difference of two elements of A. And I multiply it by F(x); now F is supported on the squares.
And so the only place at which these two functions are both nonzero is x = 0. That explains this equality here, and then, by the normalizations on the previous slide, the left-hand side works out to be |A|/q. As I said, this f ∘ g is like a convolution, but with one of the functions reflected. Okay, so we have that equation. Now you can take the Fourier transform of that expression and use Parseval's identity: the sum becomes an inner product of Fourier transforms, of the transform of the difference-convolution with the transform of F. The Fourier transform of this difference-convolution is the squared modulus of the Fourier transform of 1_A. The right-hand side is then a sum of non-negative terms: the squared modulus of the Fourier transform of 1_A is of course non-negative, but I constructed the weight function F precisely so that its transform is also real and non-negative. So I can just throw away all of the terms except the r = 0 term, and working through the normalizations again, what I get is |A|²/q^{3/2}. So if you compare those two expressions, the first an equality and the second an inequality, you get |A|/q ≥ |A|²/q^{3/2}, so the size of A is at most √q. As I said, I believe that argument is due to László Lovász; he originally formulated it in the context of what's called the Lovász theta function, but you can decode it into an essentially elementary discrete Fourier transform argument. Now, as I said, this is a model question that's quite different from what I've been talking about in the main talk, but nonetheless I do want to make a couple of remarks about it. The first remark is that actually no bound like that is known, at least in certain cases, when all of the primes p_i dividing q are three mod four.
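The construction is concrete enough to check numerically. Here is a small sketch using a brute-force DFT; the choices p = 13 and q = 5 · 13 are mine, just for illustration:

```python
import cmath

def f_p(p):
    # Lovász-type weight mod p (p ≡ 1 mod 4): value 1 at 0, 2/(1+sqrt(p))
    # at the nonzero squares mod p, and 0 elsewhere.
    squares = {(x * x) % p for x in range(1, p)}
    w = 2 / (1 + p ** 0.5)
    return [1.0 if x == 0 else (w if x in squares else 0.0) for x in range(p)]

def dft(vals):
    # Normalized DFT: F_hat(r) = (1/q) * sum_x F(x) e(-x r / q).
    q = len(vals)
    return [sum(vals[x] * cmath.exp(-2j * cmath.pi * x * r / q)
                for x in range(q)) / q
            for r in range(q)]

# Single prime p ≡ 1 (mod 4): the transform should be real, non-negative,
# and equal to 1/sqrt(p) at the zero frequency.
p = 13
Fp = dft(f_p(p))
assert all(abs(v.imag) < 1e-9 and v.real > -1e-9 for v in Fp)

# Product of two primes ≡ 1 (mod 4), glued via the Chinese remainder theorem.
p1, p2 = 5, 13
q = p1 * p2
f1, f2 = f_p(p1), f_p(p2)
F = [f1[x % p1] * f2[x % p2] for x in range(q)]
Fq = dft(F)
print(min(v.real for v in Fq), Fq[0].real)
```

The transform of the product function factors through the Chinese remainder theorem into a product of the single-prime transforms (up to a permutation of frequencies), which is why real non-negativity survives the gluing.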
We really used the fact that when p is a prime that's one mod four, the squares mod p are invariant under x ↦ −x, and therefore their Fourier transform is automatically real. And of course when p is three mod four that's not the case, and things get a lot more difficult. As far as I'm aware, no bound of the type |A| ≤ q^{1−c} for a constant c is known, even for this mod q problem. So there's a serious obstruction to proving power-type savings for Sárközy's problem for squares: basically, you can't even do it mod q, at least if you choose q to be a product of primes that are three mod four. And then another remark: this general scheme of argument, using Fourier transform positivity, is actually remarkably similar to the Cohn–Elkies scheme for getting bounds for sphere packings and various other things, so in retrospect it's quite a common mode of argument. So let me go back to the main thing that I was talking about, which is the shifted primes, and let me state again this van der Corput property for shifted primes. Let's assume the generalized Riemann hypothesis, so that all Dirichlet L-functions have all their nontrivial zeros on the line Re s = 1/2. Then I claim that there is a weight function ψ with the following properties: it's supported on the shifted primes less than N; the real part of its Fourier transform is almost non-negative; and it's normalized to have average value roughly one. Maybe I should say, to avoid confusion, that this ψ will depend on N, so there's not one function ψ that works for every N. So that's the van der Corput property. As I remarked, that implies a good bound for the Sárközy-type property for shifted primes, for the intersectivity property, and I just sketched a model argument for how that sort of implication goes. So how are we going to construct this ψ? Well, I'll build up to that in various failed attempts.
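In symbols, the property being claimed is the existence, for each N, of a function ψ = ψ_N with:

```latex
\operatorname{supp}(\psi) \subseteq \{\, p - 1 : p \ \text{prime},\ p - 1 < N \,\},
\qquad
\Re \sum_{n < N} \psi(n)\, e(n\theta) \;\ge\; -O\!\left(N^{11/12}\right) \ \text{for all } \theta,
\qquad
\frac{1}{N} \sum_{n < N} \psi(n) \sim 1 .
```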
Right, so what's the most naive thing you can try in order to construct such a function? Well, you could just take the function that's one on the shifted primes and zero everywhere else. But if you have any experience at all in analytic number theory, you're tempted not to do exactly that, but rather to take the von Mangoldt function, weighting the primes with a logarithmic factor so that their average value is one. So, as the naive first attempt, why don't we just take ψ_1(n) = Λ(n + 1), basically the weighted characteristic function of the shifted primes? Does that work? Well, it satisfies the first property, obviously. Actually, you might object that it doesn't quite satisfy the first property, because the von Mangoldt function is also supported on prime powers, not just primes, but let me ignore that. The third property, that's the prime number theorem. But it does not satisfy the second property. In fact, it seems the best way to try to obtain the second property is first of all to ensure at least that the Fourier transform is almost real, rather than trying to get a real part positive at the end, and that's definitely not the case for this von Mangoldt weight. For example, if I take its Fourier transform just at the rational point 1/3, it will already be a non-real number, roughly (N/2)(1 + e(2/3)). Okay, so what about a second attempt? This looks a bit strange at first sight: for ψ_2(n) I basically take the von Mangoldt function of n + 1 times the von Mangoldt function of n − 1. So it's basically a weighted characteristic function of the numbers n for which both n + 1 and n − 1 are prime. If you bear with me for a moment: you do expect this to have a real Fourier transform, at least at rational numbers, and that's because its support is symmetric modulo every p. For instance, modulo five, we would expect that function to be supported on the residues zero, two and three, roughly equally.
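That support claim, and the approximate realness of the exponential sum at 1/5, are easy to test numerically. In this sketch the unweighted indicator of n with both n ± 1 prime stands in for ψ_2, and the cutoff 10^6 is my choice:

```python
import cmath
from collections import Counter

N = 10 ** 6
# Simple sieve of Eratosthenes.
is_prime = bytearray([1]) * (N + 1)
is_prime[0] = is_prime[1] = 0
for i in range(2, int(N ** 0.5) + 1):
    if is_prime[i]:
        is_prime[i * i :: i] = bytearray(len(is_prime[i * i :: i]))

# Midpoints n with both n - 1 and n + 1 prime (twin-prime midpoints).
twins = [n for n in range(3, N) if is_prime[n - 1] and is_prime[n + 1]]

# Residues mod 5 in the support: should be 0, 2, 3, roughly equally
# (the only exceptions come from the prime 5 itself, i.e. n = 4 and n = 6).
counts = Counter(n % 5 for n in twins)

# Exponential sum at theta = 1/5: essentially real, since {0,2,3} is
# symmetric under x -> -x mod 5, and negative, since
# 1 + e(2/5) + e(3/5) = 1 + 2*cos(4*pi/5) < 0.
S = sum(cmath.exp(2j * cmath.pi * n / 5) for n in twins)
print(counts, S)
```

So the sum comes out real to good accuracy but with negative real part, illustrating the positivity failure discussed next.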
And so its Fourier transform at 1/5 should be something like the sum over the frequencies zero, two and three, which is a real number, because the set of residues {0, 2, 3} is invariant under x ↦ −x mod five. So, at least at rationals, you expect it to have a real Fourier transform. But there are some problems. First of all, there's no reason to expect this to be real and positive, and in fact, in this example mod five, at a suitable frequency r it is real but negative. And then there's the problem that's staring us in the face: we don't have any hope whatsoever of saying anything rigorous about this ψ_2, because basically even the fact that it is nonzero, certainly the fact that its support has more than finitely many elements, is the twin prime conjecture. So it's an interesting thought experiment, but a bit hopeless. Nonetheless, I'm going to carry on with the thought experiment and try to address the positivity property. The function I just defined, Λ(n + 1)Λ(n − 1): if I assume standard Hardy–Littlewood-type conjectures on twin primes, I expect it to have a real Fourier transform at rationals, but not necessarily a positive one. To make something that ought to have a positive Fourier transform as well, we're going to put in a further weight which is highly concentrated near zero modulo every p, which has the effect of pushing the Fourier transform at rationals towards being positive. So let's write down the following: ψ_3(n) is what we had before, Λ(n + 1)Λ(n − 1), where Λ is von Mangoldt, times basically the square of a divisor function: the number of divisors of n, normalized by its average value, so τ(n)/log n. Okay. So the divisor function tends to have extra weight at zero modulo p, and the reason for that is roughly the following.
The average of this normalized divisor function is one, but the average of the divisor function over integers divisible by p is roughly two, essentially because, being divisible by p, you get all of the divisors of n/p as well as the divisors coprime to p. So you get about twice as many divisors as a typical integer. So this divisor function has some extra weight at zero mod p. If you square it, you get a weight of around four at zero mod p, and that is just enough, at least conjecturally, to make the Fourier transform of this ψ_3 both real and positive, at least at rationals. So now we have a function supported on the shifted primes, and if we assume all sorts of high-powered Hardy–Littlewood correlation conjectures, we can convince ourselves that we expect it to have a real and basically positive Fourier transform, well, at rationals with small denominator. That's a third attempt, but unfortunately we don't expect it to have a positive Fourier transform near zero. That's basically because, near zero, we expect its Fourier transform to behave just like the Fourier transform of the unweighted integers, and that's a Dirichlet kernel, which is something like sin(Nθ)/sin(θ), which is not positive everywhere. There's a standard technique in harmonic analysis to get around that, which is to introduce what's called a Fejér kernel instead of the Dirichlet kernel. So instead of just taking a sharp cutoff to {1, ..., N}, you instead introduce a tent function: ψ_4 is what I had before, but times this tent function (1 − |n|/N)_+, whose exponential sum is basically what's called the Fejér kernel. And that is a function with non-negative Fourier transform.
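The tent weight's exponential sum is the classical Fejér kernel, which is non-negative; a direct numerical check (I use the symmetric tent over |n| < N for clarity):

```python
import cmath
import random

def tent_exp_sum(N, theta):
    # sum over |n| < N of (1 - |n|/N) e(n * theta): the Fejér kernel,
    # which equals (1/N) |sum_{n=0}^{N-1} e(n * theta)|^2 >= 0.
    return sum((1 - abs(n) / N) * cmath.exp(2j * cmath.pi * n * theta)
               for n in range(-N + 1, N))

random.seed(0)
N = 200
vals = [tent_exp_sum(N, random.random()) for _ in range(20)]
print(min(v.real for v in vals))
```

The point of the identity in the comment is exactly the positivity being used: the tent is a (normalized) autocorrelation of a sharp cutoff, so its transform is a squared modulus.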
So I think this function ψ_4 probably ought to have a basically real and positive Fourier transform everywhere, so it should satisfy the van der Corput property that I stated before. However, there remains the issue that we have absolutely no hope whatsoever of proving anything about this ψ_4, because even showing that it's nonzero infinitely often is the twin prime conjecture; introducing these arithmetic weights τ(n)² or this Fejér-type weight hasn't made that better. So I've still got the very fundamental issue that I can't say anything about this without proving the twin prime conjecture. Regardless of what I might believe on heuristic grounds, proving anything about it is impossible. Okay, so what are we going to do about that? So that was our fourth attempt at constructing a function ψ with the van der Corput property. Here's another key idea: replace some of those factors by what I call Fourier-truncated proxies for these functions. This is an idea that originates in sieve theory. For example, the factor Λ(n − 1), the weighted characteristic function of the primes shifted by one: let's replace it by something that's basically a kind of characteristic function of almost-primes, but which has nice properties on the Fourier side. So here is an example of a Fourier-truncated proxy for the von Mangoldt function, and this is something that I learned from work of Heath-Brown. It's a function Λ_Q: the sum over q less than Q of μ(q)/φ(q) times basically a Ramanujan sum, the discrete Fourier transform of the residues coprime to q. This is a function that's been cooked up to have two properties. First of all, it's been deliberately arranged so that its Fourier behaviour at rationals mimics that of the von Mangoldt function.
But secondly, it's entirely supported, on the Fourier side, on rational frequencies with fairly small denominator. So this is what I'd call a Fourier-truncated proxy for the von Mangoldt function, curated especially to mimic the behaviour of von Mangoldt on progressions to small moduli, or equivalently when tested against rational frequencies with small denominator. And there are corresponding Fourier-truncated proxies for the divisor function as well that you can introduce. Because those functions are going to be more tractable when you try to multiply them by the term Λ(n + 1), let's just replace those two factors by their Fourier-truncated proxies. As an example of how these are easier to work with: if I don't want to prove the twin prime conjecture, but I'm happy to replace one of the instances of the von Mangoldt function with its Fourier-truncated proxy, then, at least assuming GRH, I can hope to get an asymptotic for that sort of expression with moderately large values of Q, even up to Q being about N^{1/2}. That doesn't prove twin primes, because this Λ_Q is not supported on shifted primes any more, but that's not important for what we're doing. Okay, so here is a fifth attempt at defining a function that's going to work for us, taking all of the features that I've introduced so far: I keep the factor Λ(n + 1), but I replace the von Mangoldt factor at n − 1 and the divisor function by Fourier-truncated proxies for them, and I keep the Fejér kernel in there. And that does turn out to work. To actually show that this is something whose Fourier transform is basically real and positive, you need quite a lot of machinery: the Hardy–Littlewood circle method, and also some bespoke estimates relating to correlations of these Fourier-truncated functions, which I haven't seen anywhere else in the literature. So that turns out to work, all of this assuming GRH.
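A small check of the Heath-Brown-style approximant just described, with μ, φ and the Ramanujan sum c_q(n) computed from scratch: over a full period, a multiple of lcm(1, ..., Q), its average is exactly 1, matching the average value of the von Mangoldt function. The particular cutoff Q = 6 is mine:

```python
from math import gcd

def mobius(n):
    # Möbius function by trial division.
    res, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0  # squarefull
            res = -res
        p += 1
    if m > 1:
        res = -res
    return res

def phi(n):
    # Euler's totient by trial division.
    res, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            res -= res // p
        p += 1
    if m > 1:
        res -= res // m
    return res

def ramanujan_c(q, n):
    # Hölder's formula: c_q(n) = sum over d | gcd(q, n) of d * mu(q / d).
    g = gcd(q, n)
    return sum(d * mobius(q // d) for d in range(1, g + 1) if g % d == 0)

def lambda_Q(n, Q):
    # Heath-Brown-style truncated approximant to the von Mangoldt function.
    return sum(mobius(q) / phi(q) * ramanujan_c(q, n) for q in range(1, Q + 1))

Q, L = 6, 60  # L = lcm(1, ..., 6): a full period for every modulus q <= Q
avg = sum(lambda_Q(n, Q) for n in range(1, L + 1)) / L
print(avg)
```

The average works out exactly because each Ramanujan sum c_q with q > 1 averages to zero over a full period, leaving only the q = 1 term.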
And that completes my discussion of the main theorem assuming the Generalized Riemann Hypothesis. So just to conclude, I want to say a few words about what happens if you don't assume GRH. This makes matters very significantly more complicated. The only paper I've made public is the one that doesn't assume GRH. I have a manuscript that does it with GRH, and that's 33 pages long, but without GRH you need 104 pages. So why is it so much more complicated? Basically, without GRH this Heath-Brown approximant λ_Q cannot be shown to approximate the von Mangoldt function nearly so well: there might be zeros of ζ, or of other Dirichlet L-functions, that are skewing the behaviour of the von Mangoldt function, and λ_Q can't see any of those. So what one needs to do is introduce a more complicated approximant that basically sees all of those zeros. Here's roughly what it looks like; we call it λ♯(n). It's the approximant I had before, λ_Q(n), but with a correction term corresponding to a sum over all of the zeros, not just of ζ but of L-functions of small conductor. So I sum over all of those zeros (I'll say exactly which ones in a moment) a term n^{ρ−1} times a factor F, which I won't describe. Here ρ ranges over the nontrivial zeros of all Dirichlet L-functions of conductor up to a small power of n which have real part greater than or equal to 0.99. On GRH there are no such zeros, so this degenerates back to λ_Q. But of course unconditionally we cannot show that there are no such zeros; all we can do is get a bound on how many there are. So, as I say, unconditionally you do have these zero-density estimates, which basically tell you that there aren't too many of these zeros ρ, and that they get even less frequent as the real part of ρ tends to one.
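The talk does not display λ♯; schematically it has the following shape, where the sign convention is my guess and F(ρ) stands for the unspecified factors mentioned above:

```latex
\lambda^{\sharp}(n) \;=\; \lambda_{Q}(n) \;+\; \sum_{\rho} n^{\rho-1}\, F(\rho),
```

with ρ running over the nontrivial zeros, with real part at least 0.99, of Dirichlet L-functions of conductor up to a small power of n. On GRH the sum is empty and λ♯ degenerates back to λ_Q.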
So actually that sum over the zeros ρ is convergent, which is nice; it converges nicely. So that is a kind of corrected truncated approximant to the von Mangoldt function, which also sees potential zeros of Dirichlet L-functions and hence sees more of the potential irregularities of the primes. I'm not going to describe what those F terms are; they're not unduly complicated, but I'm just going to omit a discussion of them. Okay. So you can show that λ♯ is very close to Λ in Fourier space, which is a suggestion that it's going to be a helpful thing. So let's go back to trying to construct this weight function ψ on the shifted primes, now not assuming GRH. What I have here is the same as what I had on GRH, except that I've replaced Heath-Brown's λ_Q function by this new approximant that also sees some of the zeros. That turns out not quite to work, and you've got to include a couple of extra terms. First, there is what I call a damping term, which is needed to deal with the contributions from those correction terms that I introduced. That's a very complicated thing: the construction of it is iterative and, as I say, pretty complicated, involving summing various things over zeros. One also has to deal with a possible exceptional zero, that is, a really bad real zero of a Dirichlet L-function very, very close to one. If there is such a thing, we basically mod it out by restricting, right at the start, to the progression 0 mod q₁, where q₁ is the modulus of such a zero. It also turns out, and this is really annoying, that this Fejér kernel isn't quite right. The reason is that you need to control not just the exponential sum of the kernel, but that exponential sum twisted by these n^{ρ−1} terms, and it's quite possible for some of those twisted sums to be large when the exponential sum of the Fejér kernel itself is not so large, which is a real issue. But it can be fixed after some messing about.
There's a pretty elaborate construction where you replace the Fejér kernel with a much more singular kernel based on the function x^{-1/2} e^{-x}, and essentially some stationary-phase-type estimates involving the gamma function allow you to show that you do have the relevant control there. So, as I said, this results in the length of the paper expanding by a factor of three, to include all of these extra complexities that you need if there might be nontrivial zeros of ζ and of L-functions with real part close to one. Okay, so that concludes everything that I wanted to say, so thank you for listening.
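As a tiny illustration of the gamma-function connection (the talk gives no formulas here, and the function name, cutoff, and tolerances below are mine): the kernel x^{-1/2} e^{-x} is tied to Γ through the Mellin transform ∫₀^∞ x^{s−1} e^{−x} dx = Γ(s), whose value at the kernel's exponent s = 1/2 is Γ(1/2) = √π, reflecting the integrable singularity at 0. A sketch verifying this numerically, after the substitution x = u² which removes the singularity:

```python
import math

def mellin_of_exp(s, upper=40.0, steps=400_000):
    """Approximate Gamma(s) = integral_0^infty x^(s-1) e^(-x) dx.
    Substituting x = u^2 (dx = 2u du) turns the integrand into
    2 u^(2s-1) e^(-u^2), which is bounded at u = 0 for s >= 1/2."""
    h = math.sqrt(upper) / steps      # integrate u over [0, sqrt(upper)]
    total = 0.0
    for i in range(steps):
        u = (i + 0.5) * h             # midpoint rule
        total += 2 * u ** (2 * s - 1) * math.exp(-u * u) * h
    return total
```

For s = 1/2 this returns approximately √π ≈ 1.7725, matching Γ(1/2); the tail beyond x = 40 is negligible at this accuracy.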