 Thank you very much. It's a pleasure to be here virtually. So I'll talk about some recent joint work with Tamar Ziegler from the Hebrew University of Jerusalem, which we worked out while we were together at the Institute this year. Okay, so the general question here is that we're trying to find sum sets inside the primes. Just to remind you, if you take two sets A and B of integers, we define the sum set A + B to be the set of all sums a + b of one number from A and one number from B. This is sometimes also called the Minkowski sum of A and B. And the general question is: inside the prime numbers, the rational primes 2, 3, 5 and so forth, what kind of sum sets can we find? There are many results of this flavor. You all know Vinogradov's three primes theorem, that every large odd number is a sum of three primes, proven by the circle method. A very similar technique shows, for instance, that you can find infinitely many pairs of primes n1 and n2 such that n1 + n2 + 1 is also prime. We need the plus one here because, of course, primes tend to be odd, and if you add two odd numbers you don't usually get a prime; but if you add one, then it's fine. In fact, you can generalize this. In 1992 there was a result essentially due to Antal Balog — his wording is slightly different, but his method applies here. Using the circle method, for any fixed k, you can find infinitely many k-tuples of primes n1 up to nk such that all the pairwise sums, shifted by say minus one, are also prime. You can also shift by plus one if you wish. So you can find k primes such that the k choose two different pairwise sums, plus or minus one, are also prime. So you can find finite sum sets in the primes, just from the circle method. 
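Balog-type patterns are easy to explore numerically. Here is a small sketch (the helper names and the search range are my own, not from the talk) that brute-forces a triple of primes whose pairwise sums of distinct elements, shifted by plus one, are again prime.

```python
from itertools import combinations

def is_prime(n):
    """Trial-division primality test; fine for small n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def restricted_sumset(B):
    """All sums of two *distinct* elements of B (the 'hatted' sum set)."""
    return {a + b for a, b in combinations(sorted(B), 2)}

primes = [p for p in range(3, 200) if is_prime(p)]
# first triple of primes whose pairwise distinct sums, plus one, are all prime
found = next(
    t for t in combinations(primes, 3)
    if all(is_prime(s + 1) for s in restricted_sumset(t))
)
# e.g. (5, 7, 11): 5+7+1 = 13, 5+11+1 = 17, 7+11+1 = 19, all prime
```

Note that no triple containing 3 can work: one pairwise sum plus one is always divisible by 3, which is the kind of local obstruction the admissibility condition later in the talk is designed to rule out.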
And you can do a bit more, actually, using some more recent results on linear equations in primes due to myself, Ben Green, and Tamar. But okay, another way of saying what I just said is that you can find arbitrarily large finite sets B whose restricted sum set, shifted by one, lies in the primes. The restricted sum set of B with B — written with a little hat — is the same as the sum set, but we exclude the diagonal sums: you have to add distinct elements of B rather than the same element twice. So that's a slightly smaller sum set. And this result of Balog that I just mentioned is equivalent to saying that you can find arbitrarily large sets of primes — I should have said that the elements of B are also primes — whose restricted sum set, shifted by one, is also prime. The restriction is kind of necessary at our current state of knowledge, because if you don't restrict to distinct sums and you want the elements of B to be prime, then in order to have the full sum set plus one be prime, you need infinitely many primes p such that 2p + 1 is also prime. That is the Sophie Germain prime conjecture, which is believed to be as hard as the twin prime conjecture, and it is open. So we need the restriction to distinct sums; but as long as you're only adding distinct primes, we're fine. Okay. One corollary of this result is that you can remove the restriction to distinct sums: you can divide your set B into two equal halves, and the restricted sum set of the whole set will contain the full sum set of the two halves. So as a corollary, there exist arbitrarily large finite sets A and B whose full sum set A + B lies in the primes. So the primes contain large finite sum sets. 
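The halving trick in that corollary is purely set-theoretic: if you split B into two halves, every sum of one element from each half is automatically a sum of two distinct elements of B. A minimal sketch (the particular primes are just an illustration):

```python
from itertools import combinations

# a four-element set of primes, split into two halves
B = [5, 7, 11, 13]
B1, B2 = B[:2], B[2:]

full_sumset = {a + b for a in B1 for b in B2}
restricted = {a + b for a, b in combinations(B, 2)}

# the full sum set B1 + B2 sits inside the restricted sum set of B,
# since an element of B1 and an element of B2 are always distinct
assert full_sumset <= restricted
```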
And as I said, these results were proven using the circle method. Several years ago, myself, Tamar, and Ben Green developed the theory of what we now call higher order Fourier analysis — a sort of higher order circle method — which is a whole story in itself. But as a consequence, we can actually solve other linear equations in primes. Just to give you one example of what these results can prove: you can now find arbitrarily large sets A of natural numbers where not only do the pairwise sums lie in, say, the primes minus one, but in fact all the non-empty sums — sums of two numbers, or three numbers, or four numbers, or any non-empty subset — can be made one less than a prime. Another way of saying this is that the primes minus one form what I call an IP0 set. So that's a variant of these Balog results. But these are all still finite patterns. Okay, so all these techniques only allow us to create finite sum sets inside the primes, and we would like to also say something about infinite sum sets. For a long time, such results were considered out of reach to a large extent, but in recent years there have been some breakthroughs in finding infinite sum sets — not inside the primes, but inside sets of integers of positive upper density. So if you have a set of integers, you restrict it to a finite interval, say [-N, N], look at the density of the set inside that interval, and take the lim sup: that's called the upper density. Sets of positive upper density are kind of large; for example, the square-free numbers have upper density six over pi squared. Okay, so let me mention some recent results. If you have a set A of positive upper density, firstly, it is known that it contains an infinite sum set: it contains the sum set of two infinite sets. 
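The upper density can be approximated empirically: compute the density in longer and longer initial intervals and take the lim sup. A quick sketch for the squarefree numbers, whose density is 6/π² ≈ 0.6079 (the interval length here is arbitrary):

```python
import math

def is_squarefree(n):
    """True if no square of an integer > 1 divides n."""
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        d += 1
    return True

def density_up_to(indicator, N):
    """Density of the set {n : indicator(n)} inside [1, N];
    the upper density is the lim sup of these as N grows."""
    return sum(1 for n in range(1, N + 1) if indicator(n)) / N

est = density_up_to(is_squarefree, 10**4)
# est is close to 6 / pi^2
```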
The statement that a set of positive upper density contains the sum of two infinite sets was an old conjecture of Erdős, and it was only proven in 2019 by Moreira, Richter, and Richardson. And this was very recently extended, by a slightly different method: in fact, A contains sum sets of k infinite sets, not just two. So for any k, any set of positive upper density contains the sum set of k infinite sets. That generalizes the previous result; it was proven by Kra, Moreira, Richter, and Richardson last year. And there's a variant, also conjectured by Erdős: A contains — not quite a full sum set; that's actually not possible, there are examples of sets of positive density that don't contain the full sum set of any infinite set — but A contains the restricted sum set of an infinite set B, which also lies inside A, after shifting the restricted sum set by some t. For example, if A consists of the odd numbers, then you need to shift by an odd shift in order for this to work. So every set of positive density contains an infinite subset B whose restricted sums also lie in A after a shift. Okay, so this is similar to the result of Balog I mentioned before, but now we have an infinite set, and the set of primes has been replaced with a set of positive density. These questions were asked by Erdős, and the results are proven by methods from ergodic theory and topological dynamics. They're quite nice proofs — there's actually a Quanta article about them if you're interested in reading more — but I will not discuss them further here, because we will not use these ergodic theory methods. These results inspired Tamar and me to look at whether we could say something similar for the primes, even though the primes of course do not have positive upper density. So the first observation is that all these results continue to hold for the primes if you assume a really powerful conjecture: the Dickson–Hardy–Littlewood prime tuples conjecture. This is a well-known conjecture; to state it, I need a bit of notation. 
So we call a subset of the integers admissible if it avoids at least one residue class mod p for every prime p. So it has to avoid one of the two residue classes mod two, one of the three residue classes mod three, and similarly mod five, mod seven, and so forth. For instance, the small set {0, 2} is admissible: it avoids the odd numbers, and it also avoids, say, 1 mod 3, 1 mod 5, and so forth. But {0, 1} is not admissible, because it doesn't avoid any residue class mod 2. And {0, 2, 4} is also not admissible, because it doesn't avoid any residue class mod 3. So some sets are admissible, some sets are not. And you can have quite large admissible sets as well — again, they all have to have density zero, but you can have quite large ones. For example, one can check that the set of odd squares is an infinite set which is admissible, basically because only about half of the residue classes mod p are quadratic residues. All right, so just to remind you what the prime tuples conjecture is. If you have an admissible k-tuple — k numbers which avoid a residue class mod p for every p — then this tuple is what we call prime producing: you can find infinitely many n such that when you shift the tuple by n, the numbers n + h1 up to n + hk are all prime. And in fact, the conjecture is more quantitative: not only are there infinitely many prime k-tuples, but if we sum the von Mangoldt function over these k-tuples, we should get an asymptotic — the k-point correlation sum of the von Mangoldt function up to x should be x times a certain explicit constant depending on h1 through hk, called the singular series (which I won't define because I don't need it), plus a lower order term. Okay, so this is the prime tuples conjecture; it's the standard conjecture, it's very powerful, and it's wide open. For k equals one, it's just the prime number theorem. 
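Admissibility is finitely checkable: for a k-tuple, a prime p > k can never have all of its residue classes occupied by the k elements, so only primes p ≤ k need testing. A small sketch of that check (function names are my own):

```python
def small_primes(limit):
    """Primes up to limit via the sieve of Eratosthenes."""
    if limit < 2:
        return []
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, flag in enumerate(sieve) if flag]

def is_admissible(h):
    """True if the tuple h misses at least one residue class mod p for every prime p.
    Only p <= len(h) can possibly cover every class, so we only test those."""
    for p in small_primes(len(h)):
        if len({x % p for x in h}) == p:
            return False
    return True

# the examples from the talk
assert is_admissible((0, 2))         # misses 1 mod 2
assert not is_admissible((0, 1))     # hits both classes mod 2
assert not is_admissible((0, 2, 4))  # hits all three classes mod 3
```

The odd squares pass this test too, as claimed: squares only occupy the quadratic residue classes mod each odd prime.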
But for k bigger than one — in fact, there is no known k-tuple with two or more elements for which this conjecture is known. For example, if you could prove it for {0, 2}, you would have proven the twin prime conjecture. There are various partial results — it is true in some averaged sense — but as stated, it is not known for any fixed k-tuple. In function fields we're beginning to get some progress, but that's another story. All right, so if you have this conjecture, then you can do all kinds of things. It was observed by Andrew Granville — who may even be in the audience, I think — some years ago that if you assume this conjecture, then the primes contain sum sets of k infinite sets for any k. So for any k, you can find k infinite sets B1 through Bk such that all the sums in B1 + B2 + ... + Bk are prime. And Tamar and I observed that you can modify the argument to get the other type of result we mentioned: in fact, you can find an infinite set B of primes whose restricted sum set, shifted by one, also consists of primes. Okay, so for finite B this is the result of Balog, and you can also do it for infinite B if you assume this conjecture. These results are not difficult — I think Granville's paper is about two pages long, and this proof is also about half a page. You basically construct these sets by a greedy algorithm: you construct finite sets with these properties, and because of the prime tuples conjecture, you can always keep adding elements to each of these sets while keeping all the sums that you need prime. You have to be slightly careful to make sure that you always keep your tuples admissible, so that you don't get blocked and can always add another prime. But that's not too hard to ensure. 
Okay, so these are relatively easy results, and they're easy because the prime tuples conjecture is really, really strong. So of course, we're more interested in what happens unconditionally, if you don't assume this conjecture. And that's a much harder question. In fact, there were no results at all in this direction until about ten years ago. The first result is well known, but it's not usually stated in terms of infinite sum sets, so I will reformulate it in those terms. In 2014 there was the big breakthrough on the question of bounded gaps between primes: Yitang Zhang famously showed that there is some H such that there are bounded gaps between primes occurring infinitely often — that is, there are infinitely many intervals [n, n + H] which contain two or more primes. And H was initially, I think, 70 million, but right now the best value of H has been reduced to 246. Shortly afterwards, Maynard gave a simpler proof of Zhang's result and generalized it: not only can you find intervals containing two primes, you can find intervals containing k primes. So for any k, there is some H — this should be H sub k, sorry — such that the interval contains k primes for infinitely many n. The Hk in fact grows about exponentially in k. It shouldn't — it should grow like k log k — but this is a limitation of the method. Maynard uses what we now call the Maynard sieve, which is just barely able to capture multiple primes, but not super efficiently. So there is this unwanted exponential, which we would love to remove, but unfortunately so far that's not been possible. Okay, so these are well-known results, and we can reformulate them in terms of infinite sum sets. After one application of the pigeonhole principle, these results imply the following statements. 
Okay, so Zhang's result tells us that there is at least one two-element set {0, H} and an infinite set B such that the sum set {0, H} + B lies entirely in the primes. This is a very opaque way of saying that there are bounded gaps between primes: if you think about what {0, H} + B is, it's the union over n in B of the pairs n, n + H. So saying that there are infinitely many pairs of primes with gap H is exactly the same as saying that the primes contain the sum set of an infinite set and a two-element set. And because of the subsequent bounds by Polymath, we know that this H is somewhere between 2 and 246. Conjecturally, H is 2 — if you have the twin prime conjecture you can take H equal to 2 — but I cannot tell you effectively what H is currently. Okay. And Maynard's result is a similar statement: for any k, you can find a k-element set A and an infinite set B such that A + B lies entirely in the primes. So Maynard's result about k-tuples of small diameter implies — in fact, is equivalent to — the statement that there is a sum set of an arbitrarily large but finite set A and an infinite set B lying inside the primes. So this is the state of the art: we can get sum sets where one of the two sets is infinite, but the other set is finite. That is what was known. We were initially hoping to get something like a full infinite sum set — the sum set of two infinite sets A and B such that all sums a + b are prime. We can't do that. We can do it assuming the Hardy–Littlewood prime tuples conjecture — that was the Granville-type argument — but unconditionally we can't do that yet, and I think it's beyond the reach of our methods. But we can get half of an infinite sum set in the primes. 
So I can find one infinite sequence a1, a2, a3, ... and another infinite sequence b1, b2, b3, ... such that not all sums ai + bj are prime, but all those ai + bj with i less than j are. So if you think of the sums as being arranged in an infinite grid, only the upper triangular (or lower triangular) portion of those sums is prime. So we can make half of the sums prime — we have a sort of infinite partial sum set in the primes. Yeah, so this result is stronger than Maynard's result, because if you restrict the a-sequence to just its first k elements, then we get infinitely many shifts of the k-tuple a1, ..., ak being simultaneously prime. So this result generalizes Maynard's result, and we use the same sieve that Maynard used to prove his. Okay, so this is our main theorem. There's another way of phrasing it, in terms of what we call prime producing tuples. A tuple of k numbers a1, ..., ak is prime producing if there are infinitely many shifts of the tuple, n + a1 up to n + ak, which are all prime. So, for example, the twin prime conjecture asserts that {0, 2} is a prime producing tuple. We don't know that. The only prime producing tuples we can actually exhibit unconditionally are singletons; there is no explicit set larger than a singleton which is known to be prime producing, which is kind of embarrassing, but that's the state of the art. At least, there is no explicit one: because of the result of Zhang, we know that there exists a prime producing pair — a pair of two numbers a1 and a2 which is prime producing. I can't tell you which pair it is. I know that I can make the first element 0 and the second element somewhere between 2 and 246, but I can't tell you exactly what it is. Maynard's theorem asserts that you can find arbitrarily large finite prime producing tuples: for any k, there is some prime producing k-tuple a1, ..., ak. Again, I can't tell you exactly what the tuple is. 
I can say that its diameter is at most exponential in k, and I can normalize one of the elements to be zero, but that's about all I can do. And of course it has to be admissible. So our result, phrased in this language, is a refinement of Maynard's result. Maynard's result produces a prime producing k-tuple for every k, but for different k you get different k-tuples, and they are not related to each other. What our result gives — an equivalent statement, if you think about it — is an infinite sequence a1, a2, a3, and so forth. We don't claim that the infinite sequence itself is prime producing; that would be the same as getting a full infinite sum set inside the primes. But we can make every initial segment of this infinite sequence prime producing. So there is an infinite sequence a1, a2, a3, ... such that the first k elements form a prime producing tuple for every k. So we are producing prime producing k-tuples that are nested — each k-tuple is contained in the (k+1)-tuple — whereas Maynard created k-tuples that were not nested. So that is our contribution. And there's an elementary argument showing that this version of the theorem is equivalent to the previous version I stated before — basically, expand all the definitions and you'll see it. Okay, so it's basically saying a little bit more about the structure of prime producing tuples. All right, so now we'll talk about the proofs. All these results — of Yitang Zhang, Polymath, Maynard, and ourselves — rely on the strategy of Goldston, Pintz, and Yıldırım. So Goldston, Pintz, and Yıldırım — or first Goldston and Yıldırım, and then Goldston, Pintz, and Yıldırım — developed this strategy over many years to try to produce small gaps between primes, and they were partially successful. The ultimate strategy basically relies on the pigeonhole principle. 
And so there's the following elementary fact. Suppose you have a tuple — sorry, this should be h1, ..., hk, that's a typo. So you have some admissible tuple h1, ..., hk, and you want to find an n such that all the shifts n + h1 up to n + hk are prime — or maybe all is too much, but at least some, say at least L + 1 of them, where L is something less than k. One way to do it is to find some sieve weight ν: a non-negative function on the integers, let's say finitely supported so that all these sums are finite. If, for your shifts, you sum ν weighted by the indicator function of n + hj being prime, and this sum (over all j) can be made larger than L times the plain sum of ν, then by the pigeonhole principle there is at least one n in the support of ν for which the number of prime shifts exceeds L — which is precisely saying that at least L + 1 of these shifts are prime. So, for instance, if you can do this for L equals one — if you can make this expression bigger than one times the plain sum — then you know that at least two of these numbers are prime, and this was Yitang Zhang's approach to getting bounded gaps between primes. But if you can make it bigger than a larger number, then you get many primes in a small interval, and this was Maynard's strategy. So basically the strategy is to come up with a clever choice of sieve weight ν such that the ratio between these two sums is as large as possible: larger than one gives you pairs of primes, larger than two gives you triples of primes, and so forth. Okay, so this is the basic strategy. 
Okay, I like to view this strategy probabilistically. So these weights are usually defined by some sieve — usually some variant of the Selberg sieve, actually — but you can also think of the weight as a probability distribution. Well, you have to normalize it: you divide the weight by the total sum, and that gives you a probability density. So you can think of ν as defining some random natural number n with this density function. And another way of saying the previous strategy is: if you can find a random natural number n with the property that all the shifts n + h1 up to n + hk have a reasonably large chance of being prime — if each of the events "n + hj is prime" has probability strictly bigger than L over k, so that summing over j gives something strictly bigger than L — then the pigeonhole principle forces, for at least one value of n, that L + 1 of these k numbers are prime. All right, so that's just a reformulation of the pigeonhole argument I gave before. So basically you want to find a sieve, or a random variable n, such that all these shifts n + hj have a reasonably large likelihood of being prime: probability bigger than 1/k gives you pairs of primes, bigger than 2/k gives you triples of primes, and so forth. All right, now there's an obvious choice: you could just force all the shifts to be prime by fiat. You could define ν to be the indicator function of the event that n + h1 up to n + hk are all prime, and then maybe restrict n to some dyadic range to make it finitely supported. If you could choose this, then trivially all these probabilities would be one, because by construction every single n + hj is prime. 
Unfortunately, this doesn't work, because we can't even prove that this function doesn't vanish identically. In fact, showing that this function doesn't vanish is basically equivalent to the Dickson–Hardy–Littlewood conjecture. Of course, if we had the prime tuples conjecture, we would have asymptotics for this sum, and the sum of ν would be non-zero; but unconditionally the sum of ν could vanish, so we can't choose this — we can't actually restrict to the primes. What we do instead is replace the restriction to the primes by some upper bound sieve, like a Selberg sieve, because those things we can sum — and we lose something by doing so. What we lose is that these probabilities drop below one; but if you can keep them above 1/k, you can still get something non-trivial. Okay, so the problem now converts to that of finding a really good sieve approximation to an indicator like this. You can apply a standard Selberg sieve to this problem, and if you do it naively you get something, but not something great. If you don't optimize anything, you get a sieve for which the probabilities of n + h1, ..., n + hk being prime are something like exponentially small in k — I think if you're really careful you can make them polynomially small in k — but remember, we need at least 1/k in order for this to be useful. So this by itself doesn't really help. Now, the breakthrough of Goldston, Pintz, and Yıldırım was to choose a certain Selberg-sieve-type expression here — well, okay, I won't go through it; it looks like what you would see in a standard Selberg sieve, with just a few tweaks. And combining it with the Bombieri–Vinogradov theorem, they could almost get bounded gaps: they needed to exceed 1/k, and they just missed — they got 1/k minus something that goes to zero. So they couldn't quite get bounded gaps between primes. 
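To see numerically why a sieve weight of this flavor helps, here is a toy GPY-style experiment — my own simplification, not the actual weight from the papers, which uses powers log(R/d)^(k+ℓ), careful normalization, and the Bombieri–Vinogradov theorem. We weight n by the square of a truncated divisor sum over d dividing n(n+2), and compare the ν-weighted chance that each shift is prime against the unweighted prime density.

```python
import math

def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def mobius(n):
    """Möbius function by trial factorization."""
    if n == 1:
        return 1
    m, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # square factor
            m = -m
        p += 1
    if n > 1:
        m = -m
    return m

def gpy_weight(n, shifts, R):
    """Toy GPY-style weight: (sum_{d <= R, d | prod(n+h)} mu(d) * log(R/d)^2)^2."""
    P = 1
    for h in shifts:
        P *= n + h
    s = 0.0
    for d in range(1, R + 1):
        if P % d == 0:
            mu = mobius(d)
            if mu:
                s += mu * math.log(R / d) ** 2
    return s * s

shifts = (0, 2)
ns = range(1000, 6000)
w = {n: gpy_weight(n, shifts, 40) for n in ns}
total = sum(w.values())
# nu-weighted "probability" that n + h is prime, for each shift h
probs = [sum(w[n] for n in ns if is_prime(n + h)) / total for h in shifts]
# unweighted density of primes in the same range, for comparison
plain = sum(1 for n in ns if is_prime(n)) / len(ns)
```

Running this, each entry of `probs` comes out well above `plain`: the weight concentrates on n for which n(n+2) has no small prime factors, which makes each shift far likelier to be prime — exactly the boost the pigeonhole strategy needs.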
Their bound was good enough to get small gaps between primes — gaps smaller than the average gap — but not actually bounded gaps. And if you assumed anything better than the Bombieri–Vinogradov theorem, for example the Elliott–Halberstam conjecture, you could get bounded gaps. So this is what Goldston, Pintz, and Yıldırım did. Yitang Zhang made a slight modification to the Goldston–Pintz–Yıldırım sieve — he basically restricted the moduli to be smooth — but the main work he did was to improve the Bombieri–Vinogradov inequality. The Bombieri–Vinogradov inequality is a statement about primes in arithmetic progressions; usually it is only useful for moduli up to about x to the one half, but Zhang was able to get a little bit above x to the one half for smooth moduli. And the upshot of that rather complicated work was that he was able to improve the Goldston–Pintz–Yıldırım bound from 1/k to (1 plus a small constant)/k, and this was just good enough to get bounded gaps between primes. All right, so this was basically how Yitang Zhang's work proceeded — oversimplifying a lot. And the Polymath project sort of optimized that, as did Maynard's work. Okay, so Maynard then introduced what we now call the Maynard sieve, and it's a similar sieve. In the Goldston–Pintz–Yıldırım sieve, you take all these numbers n + h1, ..., n + hk, multiply them together, and sieve over the divisors of the product, with some sieve weight. Maynard observed that you can do better by being more flexible: rather than sieving over divisors of the product, you factor the product into its individual pieces, sum over divisors of the pieces, and choose a more flexible weight to sieve over. Technically, this is what we would call the smooth Maynard sieve. 
He actually used what we might call the rough Maynard sieve, where F is not necessarily a smooth function but has a bit more arithmetic structure — but it doesn't really matter; both sieves give pretty much the same results. Anyway, Maynard worked with sieves like this, and this is a function of many variables d1 up to dk — there's another typo over here; this should be dk. You can optimize in F, and it turns out that because of the multi-variable nature of this function, you can squeeze an extra log k out of this sieve compared to the GPY sieve. And so the upshot was that by a careful choice of F, and using just the vanilla Bombieri–Vinogradov theorem — you did not need Zhang's fancy Bombieri–Vinogradov theorem — you could show that with this weight, the probability that each of the shifts is prime is now about log k over k, a bit bigger than 1/k. And so you can force about log k of the k numbers to be prime. This is how he was able to create arbitrarily many primes in bounded tuples. All right, so this Maynard sieve actually has other properties. I mean, the basic property that makes the Maynard sieve useful is that each of these shifts is somewhat likely to be prime. But you would like more. All these events have somewhat large probability; it would be nice to have them all independent — then you could do a lot more — but proving independence of these events is about as hard as the Dickson–Hardy–Littlewood prime tuples conjecture. So that we can't do. But it was observed — by Banks, Freiberg, and Maynard, I believe — that you can at least get correlation upper bounds. For example, if you look at the event that two of these numbers are both prime, you'd expect its probability to be about the square of the probability that one of them is prime, if you believe that these events are independent. 
So we can't prove that, but using just a bit more sieve theory, you can show that you can at least get the expected upper bound up to a constant — you lose a factor like four or something. The probability that both of them are prime is bounded above by a constant times the square of the single-event probability. So we do get some second moment control, but only upper bounds. You can also control third moments and fourth moments by a similar method, but they're not super useful; the bounds just get worse and worse. But this extra information is quite handy. So Banks, Freiberg, and Maynard used this to get new results about the limit points of normalized gaps between primes: you take the gap between consecutive primes, p_{n+1} minus p_n, divide by log p_n, and they show that the set of limit points of these ratios is quite large — I think it has measure at least one ninth or something. So these are quite useful additional estimates that this sieve enjoys. The way Tamar and I prove our theorem is by modifying this method. Instead of having just one k-tuple and designing a sieve attached to that one k-tuple, we take an admissible tuple which we arrange into buckets: a number of sub-tuples, one of size J1, one of size J2, and so forth. So we have a big tuple that we split up into a certain number of buckets. And for each bucket — say the i-th bucket, which contains J_i shifts — we can make each of the shifts in that bucket prime with probability about 1 over J_i. And also, within each bucket, we can make the probability that a given pair of them are both prime about 1 over J_i squared. Okay, so we have J_i events, each of probability about 1 over J_i, and they don't overlap too much — they're somewhat disjoint. 
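These two bounds — each event having probability about 1/J_i from below, and pairs about 1/J_i² from above — are exactly what a second-moment argument needs. For reference, here is the standard computation, sketched with placeholder constants c and C (not the constants from the paper):

```latex
\textbf{Claim.} Suppose events $A_1,\dots,A_J$ satisfy
$c/J \le \mathbf{P}(A_j) \le C/J$ for all $j$, and
$\mathbf{P}(A_j \cap A_l) \le C^2/J^2$ for all $j \ne l$.
Let $X := \sum_{j=1}^{J} \mathbf{1}_{A_j}$. Then
$\mathbf{E} X \ge c$ and
\[
  \mathbf{E} X^2
  \;=\; \sum_{j} \mathbf{P}(A_j) + \sum_{j \ne l} \mathbf{P}(A_j \cap A_l)
  \;\le\; C + C^2,
\]
so by Cauchy--Schwarz, $\mathbf{E} X = \mathbf{E}[X \,\mathbf{1}_{X>0}]
\le (\mathbf{E} X^2)^{1/2}\,\mathbf{P}(X>0)^{1/2}$, hence
\[
  \mathbf{P}\Bigl(\,\bigcup_{j=1}^{J} A_j\Bigr)
  \;=\; \mathbf{P}(X > 0)
  \;\ge\; \frac{(\mathbf{E} X)^2}{\mathbf{E} X^2}
  \;\ge\; \frac{c^2}{C + C^2},
\]
an absolute constant independent of $J$.
```

This is why the union of the events in a bucket has probability bounded away from zero, even though each individual event is small.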
So each event is still pretty small, but the advantage of getting bounds like this is that you can combine them, using either inclusion-exclusion or Cauchy–Schwarz. And if you combine all the events in a single bucket into one big event — if you define E_i to be the event that at least one of the shifts in the i-th bucket is prime — then, because each of the events has probability about 1 over J_i, there are J_i of them, and they don't overlap too much, you can show that the total probability is bounded below by an absolute constant. Okay, so to summarize: we have found a random variable n and a finite number of events E_1 up to E_I, each of which occurs somewhat often — the probability of each of these prime producing events is bounded away from zero by an absolute constant. Now, here we are dealing with a finite number of events, but it turns out that there's a standard trick in ergodic theory: once you have a situation involving a finite number of events E_1 up to E_I, where I can be taken arbitrarily large, there's a way to pass to the limit as I goes to infinity, and you can work in an abstract probability space where instead of a finite number of events you have an infinite sequence of events E_1, E_2, E_3, E_4, and so forth, which all occur with probability bounded from below. And if you can prove some combinatorial statement about the infinite family of events, you can deduce various things about these finite configurations of events. The fancy name for this is the Furstenberg correspondence principle, but let me just wave my hands and say that you can send I to infinity by a trick — basically the idea of the Arzelà–Ascoli theorem; it's just some tricks from analysis. Okay, so now we have an infinite sequence of events in a probability space, all of probability bounded from below; but they could still be somewhat disjoint, and we want them to intersect each other in some nested way. 
And it turns out that there was a tool already ready-made for this, proven by Vitaly Bergelson almost 40 years ago. So it's an abstract lemma in probability theory that is very convenient, which is this: suppose you have an infinite sequence of events E_1, E_2, E_3 in a probability space, and they are all large, so they all occur with probability at least, say, one percent. Now, they could still be somewhat disjoint: maybe E_2 and E_4 don't overlap, or maybe E_2 and E_4 overlap but the intersection doesn't intersect E_6. But among this infinite sequence of events, you can show that there exists a subsequence of events E_{i_1}, E_{i_2}, E_{i_3}, and so forth, which intersect each other a lot, in the sense that any finite number of events from the subsequence has an intersection of positive probability. So while the events of the whole sequence don't all need to intersect each other, there is a subsequence, in fact even a subsequence of positive density, such that any finite number of these events overlap; any finite number has an intersection of positive measure. So it's kind of a fancy version of the pigeonhole principle. I mean, intuitively, if you're trying to cram an infinite number of events of large probability into a probability space of measure one, the pigeonhole principle already tells us that at least two of them have to overlap, but in fact you can get a whole infinite subsequence which all overlap each other. The proof is actually not hard: you take the average of the indicator functions, you take a limit, and you apply Fatou's lemma; it's like a five-line argument. I won't give the proof here, it's elementary, but if you apply this lemma to the previous situation, you can find a subsequence of buckets.
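For reference, here is the shape of Bergelson's intersectivity lemma as described above (my phrasing of the statement, not a quotation from the talk):

```latex
% Bergelson's intersectivity lemma, as used in the argument
\textbf{Lemma.}\ Let $(X,\mathcal{B},\mu)$ be a probability space, and let
$E_1, E_2, E_3, \dots$ be events with $\mu(E_n) \ge \delta$ for all $n$ and
some fixed $\delta > 0$. Then there exists a subsequence
$E_{i_1}, E_{i_2}, E_{i_3}, \dots$ (which may be taken to have positive
density in the indices) such that
\[
  \mu\bigl( E_{i_1} \cap E_{i_2} \cap \dots \cap E_{i_k} \bigr) > 0
  \qquad \text{for every finite } k.
\]
```

The proof sketched in the talk averages the indicator functions 1_{E_n} and extracts a limit via Fatou's lemma.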
So you have this infinite sequence of buckets, and each one corresponds to an event that something is prime, and applying this lemma, you can find a subsequence of buckets where the events of any finite number of these buckets intersect. So for any k, the probability that the first k buckets of the subsequence all produce a prime is positive. And that pretty much closes the argument. Yeah, you need one more application of the pigeonhole principle, but that's basically how we prove our main theorem, which I'm restating here: you can find two infinite sequences such that the pairwise partial sums are prime. We can actually prove a little bit more than this. So it's not only the case that there exist two such sequences; we can say a bit more about the sequence a_1, a_2, and so forth. In fact, the method of proof shows that inside any infinite admissible set, like say the odd squares, there is an infinite subset that works: we can place the a_i's inside any admissible set we wish. So, for example, the odd squares are admissible, so I can make a_1, a_2, a_3 odd squares if I wish, or use any other admissible set. So it's a slightly stronger statement than what I said here. That type of claim had actually previously been proven by McGrath, a student of James Maynard, with primes replaced by sums of two squares; there's a similar statement there. Now, of course, the theorem I stated is trivial for sums of two squares, because I can just take A to be the squares and B to be the squares, and then A plus B gives all the sums of two squares. But McGrath observed that you can actually place one of these sets inside any admissible set you wish, and that's not a trivial statement. So McGrath actually used a similar method; he also used the Maynard sieve, adapted to sums of two squares. The difference between McGrath's method and our method is this: for sums of two squares the situation differs because, as I said, for primes we don't get asymptotics, so pair correlations of primes are hard.
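As a quick sanity check of the claim that the odd squares form an admissible set, here is a small script (my own illustration; the helper names are mine, not from the talk). A finite tuple of shifts is admissible if, for every prime p, the shifts avoid at least one residue class mod p; only primes up to the tuple's length can possibly be fully covered, since covering all p classes requires at least p shifts:

```python
def small_primes(n):
    """All primes up to n, via a simple sieve of Eratosthenes."""
    if n < 2:
        return []
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return [p for p in range(2, n + 1) if sieve[p]]

def is_admissible(shifts):
    """Admissible iff for every prime p the shifts miss some residue mod p.
    Primes p > len(shifts) can never be fully covered, so only small
    primes need checking."""
    for p in small_primes(len(shifts)):
        if len({h % p for h in shifts}) == p:
            return False
    return True

odd_squares = [(2 * k + 1) ** 2 for k in range(100)]
print(is_admissible(odd_squares))  # squares occupy at most (p+1)/2 classes mod p
print(is_admissible([0, 1, 2]))    # hits both classes mod 2, so inadmissible
```

The first check succeeds because mod an odd prime p the squares occupy only the quadratic residues, fewer than p classes, and mod 2 the odd squares are all congruent to 1.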
This is at least as hard as the twin prime conjecture. So we can only get upper bounds for these pair correlations, and not asymptotics. But for sums of two squares the situation is better, and asymptotics for the pair correlations of sums of two squares are actually known. The problem is roughly comparable in strength to counting representations of a number as a sum of four squares, or rather as a difference of two squares plus two more squares, and Kloosterman-type methods work in this case. So McGrath was able to take advantage of asymptotics for the pair correlations, and so he could bypass the use of this intersectivity lemma. Our innovation, basically, is that we only have upper bounds on pair correlations, but we can still proceed by using this intersectivity lemma. There are some amusing corollaries to our result which, okay, come from dynamics, and so maybe they're not of as much interest to number theorists, but I'll state them anyway. So you can think of the primes as an element of the power set of the integers, 2^Z. Topologically, this is a Cantor space: it's the product of infinitely many copies of {0, 1}, so it has the topology of a Cantor space, and the primes are just one point in this Cantor space. And if you shift the primes by some shift h, you get another point in the Cantor space, and so you get what's called the orbit of the primes, which is some countable set inside this Cantor space. And then you can take the orbit closure: you take the closure in the product topology, and that gives you what's called the orbit closure of the primes, which is some compact subset of the Cantor space. And in principle, knowing this orbit closure tells you a lot about the primes. So the reason why this is a nice object is that it's compact, and it's also shift-invariant.
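In symbols (my notation, not the speaker's): identifying the primes P with their indicator function, the orbit closure just described is

```latex
% Orbit closure of the primes in the Cantor space \{0,1\}^{\mathbb{Z}}
X_{\mathcal{P}}
  \;=\; \overline{\bigl\{\, T^h 1_{\mathcal{P}} \;:\; h \in \mathbb{Z} \,\bigr\}}
  \;\subseteq\; \{0,1\}^{\mathbb{Z}},
\qquad
(T^h x)(n) := x(n+h),
```

where the closure is taken in the product topology; X_P is then compact, being a closed subset of the compact space {0,1}^Z, and invariant under the shift T.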
So it has a nice continuous shift map, and then you can apply methods of topological dynamics to analyze this space. We think we know what this orbit closure is. If we assume the Dickson-Hardy-Littlewood prime tuples conjecture, the orbit closure is just the shifts of the primes together with all the admissible subsets of the integers. So every admissible set should be in the orbit closure. Conversely, we can show that every non-trivial limit point of this set must actually be admissible; that's not hard. But it's conjectured that all the admissible sets are in this closure. So we can't show that, but we can at least show that this orbit closure is large. So in the set-theoretic sense, we can show that the cardinality of this orbit closure is the cardinality of the continuum. In fact, it's basically the union of a countable set and a non-empty perfect set, and non-empty perfect sets have the cardinality of the continuum. Yeah, so that we could prove by a nice little argument, actually supplied to us by Joel Hamkins, the set theorist. So that's one cute application. There are also some consequences in the abstract-nonsense direction. So sets that contain infinite restricted sum sets actually have a name, which I think is IP sets. And one function-theoretic consequence of our results is that you can find a bounded function on the primes which is not weakly almost periodic, which means that if you look at all the shifts of this function on the primes, you do not get a precompact set in the weak topology of l^infinity. Yeah, so this is a standard notion of almost periodicity. And it has to do with the fact that you can shift the primes so that they contain a k-tuple, and then shift the primes so that they contain a (k+1)-tuple containing that k-tuple, and so on and so forth. And if you take a suitably random function on the primes, you can use that to exhibit the failure of weak almost periodicity.
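For completeness, the notion being negated is the following standard one (my phrasing): a bounded function f from the integers to the complex numbers is weakly almost periodic if its set of translates

```latex
% Weak almost periodicity: precompactness of the translates
\bigl\{\, f(\cdot + h) \;:\; h \in \mathbb{Z} \,\bigr\}
  \;\subseteq\; \ell^{\infty}(\mathbb{Z})
```

is precompact in the weak topology of l^infinity; the corollary produces a bounded function supported on the primes for which this precompactness fails.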
Yeah, so those are just some amusing consequences of our main theorem. I think that's all I have. So thank you very much for your attention.