Thank you very much for inviting me to talk here again; it's a great pleasure to talk about detecting primes in multiplicatively structured sequences. This is joint work with Jori Merikoski and Joni Teräväinen. I will start with a gentle introduction to sieves and to detecting primes by sieves, and then I will go on to talk about our recent work. So the basic question in a sieve problem is that we have some sequence A of natural numbers and we want to know whether there are primes in A, or whether there are many primes in A. Of course we can take the set A to be anything and then ask this question, and I have some examples here. First, we can take A to be the set of integers n up to x such that n + 2 is prime; then we are asking whether there are twin primes, that is, n such that n and n + 2 are both prime. Second, we can take coprime integers a and q and let A be the set of integers n up to x such that n ≡ a mod q, and then we can ask whether there are primes that are a mod q. It is an old and famous theorem of Dirichlet that there are indeed infinitely many primes p ≡ a mod q, but if we take x not too big compared with q then the problem becomes more difficult; in particular, if x = q^L for some constant L, this is called Linnik's problem, because Linnik was the first to show that there exists a constant L such that we actually have a prime p ≡ a mod q of size at most q^L. Third, we could take A to be an interval which is short compared to where it sits on the real line, so we could look at the interval from x − x^α to x, with α between zero and one; detecting primes in this set is the problem of primes in short intervals. And then the fourth and final example, which is a bit different from the others: we
can take A to be the set of all integers which have an even number of prime factors. Obviously this set contains no primes, because a prime has exactly one prime factor, but for reasons I will explain this is actually an important example of a sequence that we might want to try to sieve. In general, what sieve methods can do is give us upper bounds of the correct order of magnitude for quite a wide range of parameters: for instance, we get correct-order upper bounds for the number of twin primes, for the number of primes in arithmetic progressions, and so on; I will say more about what is known about these examples on the next slide. As for lower bounds, classical sieve methods give us lower bounds for the quantity S(A, z), the number of elements of A coprime to P(z), where P(z) denotes the product of all primes up to z; if n is coprime to P(z), that means that all the prime factors of n are at least z. If A is a subset of {1, ..., x} and we take z = x^{1/2}, then all the numbers counted by S(A, z) are primes (apart from 1), so this is indeed the relevant quantity. Okay, so this is the basic thing that sieve methods aim for: we are given some set and we want to know whether it contains primes. So what is known about these examples? For twin primes we have an upper bound, due to Brun using Brun's sieve, of the expected order of magnitude: the number of twin primes up to x is at most a constant times x / log² x, and this is what we expect. On the other hand, we do have a lower bound for almost twin primes, that is, primes p such that p + 2 has at most two prime factors (Ω denotes the total number of prime factors); their count is at least of size x / log² x, and this is due to Chen using a classical sieve plus a switching trick. As for example 2, which was Linnik's problem, asking for the
least L such that for any coprime a and q there exists a prime p ≡ a mod q of size at most q^L: this is known for L = 5 due to the work of Xylouris, following and refining the work of Heath-Brown, who had 5.5, and Linnik, who showed that some admissible L exists; and there are lots and lots of results in between Linnik and Heath-Brown giving different, improving values of L. In order to prove Linnik's theorem one needs to use deep information about zeros of L-functions: one needs the zero-free region, one needs log-free zero-density results, and one needs the repulsion of zeros. For the third example, primes in short intervals, we again get an upper bound of the correct order of magnitude for intervals of length x^ε, but a lower bound for primes we only get for intervals of length x^{0.525}, by the work of Baker, Harman and Pintz, and this uses Harman's prime-detecting sieve. And for example 4, the numbers that have an even number of prime factors, we of course have matching upper and lower bounds for the number of primes in this set, namely zero, so that is not very interesting. Okay, I mentioned a classical sieve, and what I mean by a classical sieve is a sieve like Brun's sieve, the Rosser–Iwaniec sieve, the beta sieve, or whatever: something that takes type I information for the set A as an input (I will shortly explain what type I information is) and returns as output upper and lower bounds for S(A, z), the number of integers in A coprime to P(z), that is, with all prime factors at least z. So we have some type I information for A, we feed it into a classical sieve, and we get upper and lower bounds for S(A, z); and remember that S(A, x^{1/2}) counts basically the primes in A if A is contained in {1, ..., x}, so this is related to the primes, and in particular we get upper bounds for the number of primes. Type I information is information about the size of the sets A_d, which is
the set of integers n such that dn belongs to A. So in order to have good type I information we need to understand the number of elements of A that are divisible by d. More precisely: we write A_d for the set of n such that dn ∈ A, so that in particular |A_d| is the number of elements of A divisible by d, and we say that the set A, a subset of {1, ..., x}, has level of distribution θ if there is a main term X, which is basically the size of the set A, and a multiplicative function h, such that the number of elements of A divisible by d is about (X/d) h(d). Quite often in applications h(d) is basically one for all d, and the number of elements of A divisible by d is about X/d, where X is about the size of A. If we know this, on average, for d going up to x^θ, then we say that A has level of distribution θ, and this is what we call type I information: we understand the distribution of A in arithmetic progressions with moduli up to x^θ. If h(p) is one on average, then we have a linear sieve problem; this can be put more precisely, I mean there is a more precise definition of what being one on average means, namely that the product over primes of (1 − h(p)/p) has the bounds one would have if h(p) were identically one, but it is not very important here, since in our examples h(d) will basically just be one. Okay, so this is what we need for the level of distribution: we need to understand the number of elements of A divisible by d. In our examples: in the first example we had the twin primes, so A is the set of n up to x such that n + 2 is prime, and if we look at the elements divisible by d, it
means that the prime n + 2 lies in the residue class 2 mod d, so this is about primes in arithmetic progressions on average, and the Bombieri–Vinogradov theorem gives us level of distribution 1/2 − ε in this case. In the case of our second example, the Linnik setting, we are looking at the set A of n up to x with n ≡ a mod q, and here, if x = q^L, a trivial estimate, which I will discuss on the next slide, gives us level of distribution 1 − 1/L − ε. In the short-interval case a similar trivial estimate gives the level of distribution α − ε for any ε. And in the fourth case, where A consists of the numbers that have an even number of prime factors, we actually have level of distribution 1 − ε, which is basically the best possible level of distribution: we get it from the prime number theorem's cousin for the Liouville function, that is, from the fact that the sum of the Liouville function up to x is o(x). So in this case, where we don't have any primes, we have a very good level of distribution. This tells us something about the restrictions we face when trying to detect primes with a classical sieve method: the idea was that the sieve takes type I information and then gives us information about S(A, z), but even if we have very, very good type I information, it might be that there are no primes, and this is why this is an important example. Okay, so I promised to tell you how to get this sort of trivial estimate for the level of distribution. In this case we had A being the integers up to x that are ≡ a mod q; I choose my function h to be the characteristic function of the numbers that are coprime to q, and I take my main term X to be x/q. Then in order to study the level of distribution I need to study the difference |A_d| − (h(d)/d) X, and plugging in the definition of A we see that |A_d| is precisely the number of n up to x
over d (that is, n ≤ x/d) such that nd ≡ a mod q, while h(d) was the characteristic function of gcd(d, q) = 1 and the main term contribution is X/d = x/(qd). Now if d has a common factor with q, then there are no solutions at all, because a is coprime to q; so in that case we get zero for the count, as we should, matching the characteristic function here. In case d is coprime to q, we can just count the numbers n up to x/d such that n ≡ a d^{-1} mod q, and we get x/(qd) plus an error O(1). Summing these O(1) errors over d ≤ x^θ gives a total error of size x^θ, and in order to get level of distribution θ we need this to be at most, say, (x/q) (log x)^{-100}; this holds whenever θ is at most 1 − 1/L − ε in the case x = q^L. So in this case it is quite easy to calculate and get level of distribution 1 − 1/L − ε. I should say that in all these three cases it is possible to get a better level of distribution than the one stated, by using bilinear forms of the error term, but I just wanted to give some indication of what sort of level of distribution we can get. So let us now return to example 4, which was the set of n with an even number of prime factors: it had essentially the best possible level of distribution 1 − ε but no primes, so we can't hope that this type I information alone detects primes. We still get upper and lower bounds from the linear sieve for S(A, z), but we can't just take z = x^{1/2} when A is a subset of {1, ..., x}; in particular, the lower bound that the linear sieve gives is non-trivial only once we take z to be x to the power θ
over 2 minus ε (that is, z ≤ x^{θ/2 − ε}) or something smaller, and θ is always at most one, so we can't really hope to detect primes this way. But we can hope to detect, say, numbers that are a prime or a product of two primes, as in Chen's theorem: for the twin primes one gets that p + 2 is infinitely often either a prime or a product of two primes. In particular, as I said, we can't get from the linear sieve alone a non-trivial lower bound for the number of primes in our set A. But it turns out that this example is basically the only obstacle. There is Bombieri's asymptotic sieve; I am afraid it has a bit technical formulation, but on the next slide I will give a very clean consequence of it. What Bombieri's asymptotic sieve tells us is that if we have a linear sieve problem and we have level of distribution 1 − ε for every ε, then there exists δ_A between 0 and 2 such that the following holds: write D_r for the set of r-tuples (t_1, ..., t_r) where the sum is 1 and everything is between 0 and 1, and take any smooth function c from D_r to R; we want to count the number of products of exactly r primes in the set A, weighting each prime according to its logarithmic size compared to x, so if all the primes have size about x^{1/r}, the weight is c(1/r, ..., 1/r), and so on. Then, for any smooth function c, this count equals an expected main term times something which depends only on δ_A and on the parity of r: we obtain the main term times δ_A if r is odd, and the main term times 2 − δ_A if r is even. In particular, the number of products of r primes in the set A, with an appropriate normalization, depends only on whether r is odd or even. That is why we can detect numbers that have either one or two prime factors: at least one of δ_A and 2 − δ_A must be non-zero. But it might be
that δ_A is zero, and then 2 − δ_A is two, and this is the case in our fourth example, where the set consists of the numbers that have an even number of prime factors. In that case δ_A = 0: we don't have anything if we are looking for numbers with an odd number of prime factors, but we have a lot of them if we are looking for numbers with an even number of prime factors. A special case of this is Selberg's famous symmetry formula, which he used in his elementary proof of the prime number theorem, and which says that log x · Σ_{p ≤ x} log p + Σ_{pq ≤ x} log p · log q = 2x log x + O(x); this corresponds to summing the formula over r = 1 and r = 2, at which point δ_A disappears and we get the factor 2. Anyway, this is slightly technical, but I have a nice consequence on the following slide, which says: if we have a linear sieve problem where the level of distribution is very good, 1 − ε for every ε, and if we know that there are products of three primes in the set, of the expected order of magnitude (the expected count being the natural main term for products of three primes), then Bombieri's sieve tells us that δ_A must be positive, and then we can apply the same formula with r = 1, so we get that there are also primes in the set A, of the correct order of magnitude. And actually this holds with products of three primes replaced by products of r primes for any fixed odd r. So basically, if we have this very good level of distribution and we can find the correct order of magnitude of products of three primes in our set, then we immediately get that we also have primes in the set, so
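Selberg's symmetry formula above lends itself to a quick numerical sanity check. The sketch below is my own illustration, not part of the talk: it sieves the primes and compares the left-hand side with 2x log x. Note that the error term O(x) means the convergence is slow, of order 1/log x, so at x = 10^5 the ratio is only within roughly 20 percent of 1.

```python
import math
from bisect import bisect_right

def primes_up_to(n):
    """Simple sieve of Eratosthenes returning the list of primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            # Strike out the multiples of p starting from p*p.
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(2, n + 1) if sieve[i]]

def selberg_ratio(x):
    """Ratio of  log x * theta(x) + sum_{pq <= x} log p log q  to  2 x log x.

    Selberg's symmetry formula says this tends to 1 (with error O(1/log x)).
    Here theta(y) = sum of log p over primes p <= y.
    """
    ps = primes_up_to(x)
    logs = [math.log(p) for p in ps]
    prefix = [0.0]                     # prefix[i] = sum of the first i values log p
    for lp in logs:
        prefix.append(prefix[-1] + lp)

    def theta(y):
        return prefix[bisect_right(ps, y)]

    first = math.log(x) * theta(x)
    # sum_{pq <= x} log p log q  =  sum_p log p * theta(x // p)
    second = sum(lp * theta(x // p) for p, lp in zip(ps, logs))
    return (first + second) / (2 * x * math.log(x))
```

For example, `selberg_ratio(10**5)` lands in the rough vicinity of 1, and the ratio creeps upward as x grows, consistent with the 1/log x error.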
this is pretty interesting, and this shows that my example 4, having only numbers with an even number of prime factors, is essentially the only obstruction, in case we have this very good level of distribution 1 − ε. Of course, in real life we often have a worse level of distribution, like for the twin primes where we only have 1/2, and then nothing quite like this happens. Also note that in the fourth example all the relevant sets are empty, so this doesn't give a contradiction with our problematic scenario. Okay, so let us return to Linnik's problem and see how this is helpful there. Recall example 2, where A is the set of integers n up to x such that n ≡ a mod q, with x = q^L, and remember that the level of distribution was 1 − 1/L − ε; for large L this is very close to one, so we don't have 1 − ε for every ε, but we have something very close to it.

Question (Igor Shparlinski): Sorry, I just wanted to check: you had strict inequalities in your condition, so don't you forbid r = 1?

Answer: Yes, I think you're right that this is not immediate; it should be less than or equal. Though I think you get the same also in the case where you just have t_1 = 1 and nothing else. Yes, you're right.

Comment from the audience: The case r = 1 is just encoded in δ_A; it is a separate matter somehow, so I don't see a problem here. δ_A is simply the distribution of the sequence over the primes, which is notation, not something subject to verification. The second line is an assumption only for r ≥ 2; for r = 1 it is not an
assumption, essentially: it is built into δ_A, so it would have to be phrased differently, but anyway there is no problem.

Speaker: Yes, there is no problem, but it could have been written out better; I just wanted to write it down in a short way. Thank you, and thanks for the question. So, back to Linnik's problem: we have level of distribution 1 − 1/L, which is close to one when L is large. And we also have a recent theorem of myself and Joni Teräväinen, where we proved that if we take a and q coprime, then there exist three primes p_1, p_2, p_3, each of size at most q^{1+ε}, such that p_1 p_2 p_3 ≡ a mod q (and in a suitable sense one can even take ε = 0). Well, let me just say a few words about the proof; it is not important here, except that we are using some of the same methods: in that proof we introduced the so-called multiplicative transference principle, we used a popular version of Kneser's theorem, and we studied subgroups of (Z/qZ)^* with small index, so we used some additive-combinatorial tools. But the point is that this theorem shows that the set A contains some products of three primes, and earlier I said that if we have a very good level of distribution and we have products of three primes, then we can find primes too. So a natural question to ask is whether the combination of these two things could give us a new L-function-free proof of Linnik's theorem: we now have this Bombieri-type result which tells us that it suffices to find products of three primes, and we have this theorem which tells us that there are products of three primes. There are some issues: in Bombieri's theorem we want the level of distribution to be 1 − ε for every ε, and also, if there are exceptional characters, then the count of products of three primes necessarily falls short of the expected main term, as there are few primes ≡ a mod q for certain a's and
q's. Okay, so this is the question: can we get a new L-function-free proof of Linnik's theorem? And the answer is yes, we can. Together with Jori Merikoski and Joni Teräväinen we show that there exists a prime p of size at most q^L with p ≡ a mod q, by a new L-function-free proof: the only thing we need about L-functions is that L(1, χ) is at least of size q^{-1/2}. Actually, if we were happy to increase the value of L, any polynomial lower bound for L(1, χ) would suffice, but we can just use this classical bound; we don't need the q^{-ε} bound here. At the moment we have L = 350, but this might still change; if you heard me talking about this in July, the exponent has changed since then, but for the better, and we are still checking some details, so it might change a little either to the better or to the worse. But definitely we have not optimized the exponent: we are aiming for simplicity rather than optimality. We know several ways in which we could make the exponent better, but each would complicate the proof, and we want to keep it basically as simple as possible. This can be compared with some previous results. Friedlander and Iwaniec also gave an almost L-function-free proof of Linnik's theorem, and in recent papers the constant was calculated, with the result L = 75,744,000, which is quite a bit bigger than 350. Also, it was not quite L-function-free: they used the zero-free region for the L-functions, so they avoided the zero-density results and the zero repulsion, but they still used the zero-free region, and we don't use it; and they also used a lower bound for L(1, χ), which is something that everybody needs. There is also a relatively recent L-function-free proof of Granville, Harper and Soundararajan; well, they did not calculate the L, but according to them and some other people the L for that proof would be quite big as
well. Of course, the fully optimized L-function proof yields L = 5, so this is still of a different order of magnitude; but on the other hand, there are previous works, before Heath-Brown, which used quite deep facts about L-functions and got something bigger than this 350. So it depends on what you compare with: if you compare with the 75 million, then 350 makes you happy, but if you compare with the 5, then it doesn't make you too happy. Anyway, this is what we get from this approach. Okay, so let me explain how we actually implement this strategy. Let us take A to be the familiar set of n up to x with n ≡ a mod q, with level of distribution D = x^{1 − 1/L − ε}, and we want to find primes in the set A. So we study S(A, x^{1/2}), which counts the primes in A between x^{1/2} and x, and as a first step we use Buchstab's identity: the numbers whose prime factors are all at least x^{1/2} are the same as the numbers whose prime factors are all at least D^{1/2}, except that we have to subtract the numbers which have a prime factor between D^{1/2} and x^{1/2}. That is, S(A, x^{1/2}) = S(A, D^{1/2}) − Σ_{D^{1/2} ≤ p < x^{1/2}} S(A_p, p), and having S(A_p, p) here makes sure that we remove each number only once. Now, as I told you previously, we can use the linear sieve to get a lower bound for the first term and an upper bound for the second term; the only problem, really, is that the lower bound we get from the linear sieve is basically negative. For the first term the linear sieve gives a lower bound involving the linear-sieve function f evaluated at 2 times the expected main term, and the sad thing is that f(2) = 0. So the first line, which is what one gets applying the linear sieve in the traditional way, says that the number of integers in A whose prime factors are all at least D^{1/2} is at least
zero, and this is not very helpful, because we have a trivial lower bound of zero in any case. But here is the point: one knows very precisely what the linear sieve leaves behind, that is, which numbers are responsible for the lower bound being f(2) = 0 rather than something nicer. So we can add back the terms that the linear sieve did not count; I will have a much nicer formula on the next slide, here we just have the general expression including everything that was given away in this lower bound, which are products of some primes times a number m whose prime factors are all at least a certain point. And it is worth noticing that if A were, as I said, the set from example 4, the numbers with an even number of prime factors, then all these extra terms would be zero and we would not gain anything; but here we do gain something. We can do a similar thing for the second term in our decomposition, applying the linear sieve upper bound, which involves the upper-bound function F, and this F is positive; so at first the lower bound we get for the number of primes is actually negative, but then we again get these extra terms, and again they come in with a positive sign. Combining these things and throwing out some positive terms, we get something which turns out to be useful. For technical reasons, I first point out that we will actually want to study the numbers between x^{1/2} and x with logarithmic weights, that is, weights of the form 1/n. Then, using this linear sieve and taking into account what it leaves behind, we get a lower bound for the sum of 1/p over primes p between x^{1/2} and x with p ≡ a mod q. This lower bound includes first a negative term, which comes from applying the upper-bound sieve to the sum of S(A_p, p); so the first term here is negative, and it looks a bit ugly, but it's
something that you can compute: if you have some given value of L, you can just compute what the first term is. And the second term is the part with n = 2 in the previous expression, combined with some other pieces; in any case, one of the things that the linear sieve leaves behind is the products of three primes, each of size between x^{1/6} and x^{1/3}. So we get a lower bound of this shape, and we write −S_2 for the first, negative term and +S_3 for the second term, and then our aim is to show that S_3 is larger than S_2; if we can do that, then we get primes that are ≡ a mod q and of size at most x. And here it is worth noticing that as our L gets bigger and bigger, the inner integral in the term S_2 gets more and more narrow, so the negative term gets very, very small when L is large; whereas for the second term the size of L doesn't make such a huge difference, it doesn't shrink like the first term, although it does get easier to deal with as L grows. So this gives us some hope: we just have to get a positive lower bound, of the correct order of magnitude, for S_3. This is what we have reduced the problem to, and if we want the explicit constant like 350, then of course we have to calculate the first term and then get something bigger from the second term than what we lose to the negative term. But anyway, we have reduced to studying products of three primes, and now comes the new innovation in our proof: we study these products of three primes using additive combinatorics. Friedlander and Iwaniec instead go to products of five primes, which already makes the bounds they can get worse; and then they use, I think, an expansion in terms of Dirichlet characters, and even so they use a trivial bound for one of the primes and a sort of second moment
for two of the primes in order to get something out of it and find the products of five primes. Instead, we study products of three primes and use some additive-combinatorial tools to do it. So, to get rid of the primes and turn this into a problem in additive combinatorics, I define c(b) to be a normalized count of the primes p from the interval (x^{1/6}, x^{1/3}] that are ≡ b mod q, weighted by 1/p; the normalization factor is φ(q)/log 2. Why? By Mertens' theorem we know that the sum of 1/p over primes p between x^{1/6} and x^{1/3} is about log 2, and we expect that a proportion 1/φ(q) of the primes is ≡ b mod q, so c(b) is normalized so that we expect it to be about one. Now, if in S_3 we split each prime into residue classes b_i, summing according to p_i ≡ b_i mod q, we get a lower bound for S_3 in terms of this function c: we have the normalization factor cubed times the sum over b_1 b_2 b_3 ≡ a mod q of c(b_1) c(b_2) c(b_3), so this is a triple convolution of the function c. Now we know that the average value of c is one, because if we sum over all the residue classes, then this is just the sum of 1/p over primes between x^{1/6} and x^{1/3}, which by Mertens' theorem is about log 2; so on average c(b) is one. On the other hand, by Brun–Titchmarsh, or just by the linear sieve with level of distribution 1 − 1/L, we get a pointwise upper bound for c(b); the exact expression, of the shape 2 plus a quantity that tends to 0 as L grows, is not too important, but notice that when L is large this bound is quite close to 2. Okay, so we need a lower bound for this triple convolution, knowing that c is one on average and has a pointwise upper bound close to two, and there are two ways to proceed from here: we can either use Fourier analysis, or
we can use a popular version of Kneser's theorem, and either way we can show that the required lower bound holds unless there exists a quadratic character ψ such that ψ(p) ≠ ψ(a) very often. Okay, so I will briefly sketch both approaches. The Fourier approach turns out to give a better constant, so that is what we use in the end. By the orthogonality of characters we can write the triple convolution as a sum over characters χ of χ̄(a) ĉ(χ)³ divided by φ(q), where ĉ(χ) is the character sum of c. Now the principal character contributes the main term, about φ(q)²; the real characters χ for which χ̄(a) ĉ(χ)³ is positive make a positive contribution; and for all the other characters we can get a genuine saving for ĉ(χ), unless there exists a real character ψ such that ψ(p) ≠ ψ(a) very often. I won't go into any details, this is just a sketch, but I hope you believe this can be done, and for this an ℓ^∞ bound is sufficient. On the other hand, the other way to handle the triple convolution is via Kneser's theorem. In this case we want to dispose of the function c and work with sets instead: we write A for the set of residues b for which c(b) has size at least ε, and then it suffices to show that a belongs to A·A·A "popularly", basically because the triple convolution is at least ε³ times the number of representations of a in A·A·A. Now from the average value of c and the pointwise upper bound for c we can deduce that A has size at least φ(q)/κ_L, where this κ_L is quite close to 2. On the other hand, if we know that A·A has size at least (1 − 1/κ_L + ε) φ(q), then by pigeonhole we can see that A·A·A is actually everything. And we do have a lower bound for A·A from Kneser's theorem, which is that |A·A| is at least 2|A·H| − |H|, where H is the
stabilizer of A·A. In particular, if the stabilizer has index larger than 2, then this lower bound from Kneser's theorem is better than what we required, and the only case where we are in trouble is when H has index 2; moreover, the only troublesome case turns out to be when a is not in H. Translating to characters, this is the same as saying that there exists a real character ψ mod q such that ψ(p) ≠ ψ(a) very often. So either way we get a lower bound for the triple convolution, and then we are left with the exceptional case: there does exist a real character ψ such that very often ψ(p) ≠ ψ(a). Here "very often" means something like 95 percent of the time; to optimize the numerics we use a precise proportion, something like 90 percent, so it is not "almost always" or anything like that. This is closely related to having an exceptional zero for the corresponding L-function, but we don't have to talk about exceptional zeros of L-functions at all: we can just discuss whether we have this relation or not. First we notice that it is not really possible that ψ(p) = +1 very, very often, because we can use a similar method to show that there are either quite many primes p with ψ(p) = −1, or quite many products of two primes with ψ(p_1 p_2) = −1 (there is a typo on the slide here which I forgot to correct). But in this second case we must have either ψ(p_1) = −1 or ψ(p_2) = −1, because the product is −1; so we can't really have ψ(p) = +1 very often, which handles the case ψ(a) = −1. In the case ψ(a) = +1, we can follow Friedlander and Iwaniec and sieve the sequence 1 ∗ ψ, the convolution of 1 with ψ. Now if ψ(p) is −1 very
often, then this sieve problem is no longer a one-dimensional sieve problem, it is a very low-dimensional sieve problem, and actually we can detect primes from this sequence. We can do it precisely: say in the case that this happens 95 percent of the time, we can get a lower bound for the number of primes. So in either case we can also deal with the exceptional case, and this sort of finishes the proof of Linnik's theorem. Then I want to briefly discuss some more general things that we can do. There is nothing very specific about Linnik's problem; we can try the same argument, for instance, in the case where A is the set of n such that f(n) = c for some suitable function f and some c from a finite group. In the case of Linnik's theorem we had c equal to a in (ℤ/qℤ)* and f(n) just the reduction of n mod q. Actually it suffices to have some more general structure, we don't even have to have a group, but we need to be able to say something about products of three primes, and we can probably say at least something concerning the Chebotarev analogue. But this is very much in progress, so don't ask questions about it, because I don't yet know what we get there or what the final result will be. Also, instead of finite groups we can get a variant for infinite groups, and this for instance implies that we can do primes in short intervals: there exists a δ > 0 such that intervals of length x^(1−δ) contain primes, with the correct order of magnitude of them. We have not yet calculated this δ; we expect it to be reasonable, though not matching the record exponent 0.525. In this case we again obtain a proof without utilizing zeroes of the zeta function; [inaudible] also had such a proof, but they did not calculate the value of δ either. We expect to get something pretty reasonable, but I don't yet know what. The basic idea here is
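The sequence "1 convolved with ψ" just mentioned can be written down explicitly. A minimal sketch (my own toy parameters, again with ψ the Legendre symbol mod 7): on a prime p we have (1 ∗ ψ)(p) = 1 + ψ(p), so the weight vanishes at every prime with ψ(p) = −1; this is why, when ψ(p) = −1 very often, the sieve problem attached to 1 ∗ ψ becomes low-dimensional.

```python
# Toy illustration (my parameters): the divisor-type weight
#   (1 * psi)(n) = sum over d | n of psi(d),
# with psi the Legendre symbol mod 7. On primes it equals 1 + psi(p).

def psi(n, q=7):
    if n % q == 0:
        return 0
    r = pow(n % q, (q - 1) // 2, q)
    return -1 if r == q - 1 else r

def one_star_psi(n):
    return sum(psi(d) for d in range(1, n + 1) if n % d == 0)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# On every prime p the weight is 1 + psi(p): 2, 0, or 1 (the last only at p = 7).
for p in [p for p in range(2, 50) if is_prime(p)]:
    assert one_star_psi(p) == 1 + psi(p)
print([one_star_psi(p) for p in range(2, 30) if is_prime(p)])
```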
also that the level of distribution is 1 − δ − ε for any ε, so it suffices to find products of three primes. In order to do this (I won't go into details here, I wrote a bit too many formulas on this slide), we want to have these products of three primes, and we can again define an appropriate function c which has average value one and a pointwise bound of two plus something, and we get into studying a triple convolution of c. Basically the idea is that if we multiply three things from short intervals, then we end up with a thing in a short interval: we want the product to lie in an interval of length x^(1−δ), so we take our primes from intervals of the shape [x^(1/3), x^(1/3) + x^(1/3 − δ/100)], or something like that, and then we end up with a product in the wanted interval. I won't go into details, but in any case we get a similar problem, except that now the Fourier analysis is a bit different, and also the combinatorics is a bit different because we have an infinite group instead of a finite group: instead of Kneser's theorem we apply a suitable form of Freiman's 3k − 4 theorem. On the other hand, there are no obstructions from subgroups of index 2 here, so that helps a little bit. Okay, so finally let me give a summary of what I have talked about here. We saw that if we have a sequence which has level of distribution sufficiently close to one and has a multiplicative structure, and we can deal with obstructions coming from subgroups of index 2, then we can detect primes. The way to do this is that we use an exact form of the linear sieve to show that it suffices to find products of three primes, and then we use some additive-combinatorial tools to find products of three primes in the set. In particular we obtain a new L-function-free proof of Linnik's theorem with a reasonable constant, and a new zeta-function-free
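The point that products of three primes from short windows near x^(1/3) land in a short interval near x is easy to check numerically. A toy sketch (the values x = 10^9 and the relative window width eta = 0.05 are my own stand-ins for the talk's x^(−δ/100)-type parameter):

```python
# Toy check (my parameters): if each of three primes lies in [y, y*(1+eta)]
# with y ~ x^(1/3), then every product p1*p2*p3 lies in [y^3, (y*(1+eta))^3],
# an interval of relative length at most (1+eta)^3 - 1, i.e. about 3*eta.

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

x = 10 ** 9
y = round(x ** (1 / 3))          # ~ 1000
eta = 0.05                       # stand-in for the small negative power of x
window = [p for p in range(y, int(y * (1 + eta)) + 1) if is_prime(p)]

products = [p1 * p2 * p3 for p1 in window for p2 in window for p3 in window]
lo, hi = min(products), max(products)
print(lo, hi, (hi - lo) / lo)    # relative length stays below (1+eta)^3 - 1
```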
proof of Hoheisel's theorem for primes in short intervals, except that we get a lower bound rather than an asymptotic formula for the number of primes. Okay, so that's it, thank you.