Thank you very much. I'd like to first thank the organizers for putting together this very nice seminar; it's a great opportunity to speak today. Today I'd like to talk about a proof of the Erdős primitive set conjecture. To begin, a basic definition: we say that a set of positive integers greater than one is primitive if no member of the set divides another. Some easy examples to get us started. First, consider the set of consecutive integers n+1, n+2, and so on up to 2n, a dyadic interval. This set is primitive because for any number j in the set, all of its multiples 2j, 3j, 4j, etc. exceed 2n and therefore lie outside the interval, so no two numbers in the interval divide each other. Another example of a primitive set is the primes: no prime divides another. More generally, for any number k, the set of integers with exactly k prime factors, counted with repetition, also forms a primitive set. In this talk we'll denote the set of primes by curly P, and the set of k-almost primes, the numbers with exactly k prime factors, by P^k. Indeed the k-almost primes are primitive, because if a number has exactly k prime factors then all of its proper divisors have fewer than k prime factors, so no two such numbers divide each other. Another example of historical significance is the set of perfect numbers. We say a number is perfect if it equals the sum of its proper divisors; for example, 6 has proper divisors 1, 2, and 3, and 1 + 2 + 3 = 6 again. The notion of perfect numbers dates all the way back to ancient Greece: the Greeks were fascinated by these numbers and classified all positive integers as perfect, deficient, or abundant, according to whether the sum of proper divisors of a number was equal to, less than, or greater than the number, respectively.
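As a quick sanity check on these examples (my own illustration, not part of the talk), primitivity of small finite sets like the dyadic interval or an initial segment of primes can be verified in a few lines of Python:

```python
from itertools import combinations

def is_primitive(s):
    """True if no element of s divides another (elements assumed > 1)."""
    # After sorting, only need to test whether the larger divides by the smaller.
    return all(b % a != 0 for a, b in combinations(sorted(set(s)), 2))

# The dyadic interval {n+1, ..., 2n} is primitive (here n = 50):
print(is_primitive(range(51, 101)))        # True
# An initial segment of primes is primitive, but adding a prime power breaks it:
print(is_primitive([2, 3, 5, 7, 11]))      # True
print(is_primitive([2, 3, 5, 7, 11, 49]))  # False, since 7 divides 49
```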
It's a bit less trivial to show that the set of perfect numbers forms a primitive set, and if anyone out there is interested, I can give a quick sketch of this result afterwards. As a kind of origin story, primitive sets really begin in the 1930s, where they arose as the generalization of one special problem. By a classical theorem of Davenport, the set of abundant numbers (which we saw on the previous slide: the numbers whose sum of proper divisors exceeds the number itself) has a well-defined natural density, and in fact we know today this density is a little less than a quarter. So if you sample numbers at random on the number line, roughly a quarter of the time you'll get an abundant number. Davenport's result was originally proven using sophisticated analytic methods. However, in 1934 Erdős found an elementary proof of Davenport's theorem using so-called primitive abundant numbers, and his elementary proof led him to introduce the fully abstract definition of primitive sets that we saw on the first slide. In characteristic fashion, this spurred Erdős to study primitive sets for their own sake. So the story really begins in the 1930s, after which it took on a life of its own. Around that time there were a number of interesting and sometimes unexpected results about primitive sets. For example, early on many people, including Chowla, Davenport, and Erdős, believed that any primitive set should have natural density zero. Take the set of primes: by the prime number theorem there are about x/log x primes up to x, so in particular this density tends to zero as x goes to infinity, and people believed that this held more generally for all primitive sets.
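Davenport's density of the abundant numbers can be glimpsed numerically. Here is a small sketch (again my own illustration, not from the talk) that counts abundant numbers up to a bound using a divisor-sum sieve; the limiting density is known to be about 0.2476:

```python
def abundant_density(N):
    """Fraction of n <= N whose proper divisors sum to more than n."""
    sigma = [0] * (N + 1)          # sigma[n] = sum of proper divisors of n
    for d in range(1, N // 2 + 1):
        # d divides 2d, 3d, ... as a *proper* divisor
        for m in range(2 * d, N + 1, d):
            sigma[m] += d
    return sum(sigma[n] > n for n in range(1, N + 1)) / N

print(abundant_density(10**5))  # close to the limiting density ~0.2476
```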
However, in 1934 Besicovitch constructed a counterexample: a sequence of primitive sets whose upper density becomes arbitrarily close to one half, which came as a great surprise to the mathematical community at the time. By contrast, in the case of lower density, Behrend and Erdős showed that any primitive set must have lower density zero. In other words, a primitive set may or may not have a well-defined natural density (the limit may not exist), but you can always consider the liminf and the limsup, and it was shown that the liminf must always be zero, while the limsup can be close to one half, and one can show this one half is actually sharp. To prove the result on lower density, in 1935 Erdős actually established the stronger result that the series of 1/(n log n), for n ranging over a primitive set, converges, and indeed he showed that the series converges uniformly over all choices of primitive sets A. In this talk we'll be quite interested in this series, the sum of 1/(n log n) for n in a set A, and we'll give it a name: f(A). So Erdős showed that f(A) is uniformly bounded over all primitive sets, and later, in 1988, Erdős famously asked whether this maximum over all primitive sets is actually attained by the primes. In other words, the Erdős primitive set conjecture asserts that for any primitive set A, the sum f(A) of 1/(n log n) is at most f of the primes. Morally, this conjecture is saying that, using the series f as a statistic, the primes are maximal among all primitive sets. To make this a little more concrete: the series f of the primes is nothing more than 1/(2 log 2) + 1/(3 log 3) + 1/(5 log 5) + 1/(7 log 7) + ..., one term for every prime, and one can show numerically, using Mathematica for example, that this series comes out to about 1.6366. So in other words the conjecture is asking whether for
every primitive set A, the series f(A) is at most about 1.6. And throughout the talk, if there are any questions, feel free to jump in; don't hesitate. Some early work on this conjecture: in 1993, Erdős and Zhang (a different Zhang, mind you) showed that f(A) is at most 1.84 for all primitive sets A, and more recently Pomerance and I improved this, showing that f(A) is at most e^gamma, which is about 1.781..., where gamma is the Euler-Mascheroni constant, ubiquitous in mathematics. Now, in approaching this conjecture it is perhaps natural, when given an infinite series, to truncate and consider partial sums of the series up to x. However, we know first that the series for the primes converges quite slowly: the sum over primes greater than x of 1/(p log p) contributes about 1/log x, which decays quite slowly. Moreover, for each x one can construct primitive sets A_x consisting of numbers all larger than x such that f(A_x) tends to 1 as x tends to infinity. In other words, for any potential uniform truncation threshold x, there will always exist primitive sets whose mass falls entirely in the tail, and so this initial approach turns out to be inadequate for the problem. On the other hand, a more fruitful approach towards the Erdős conjecture is to split up our set A according to the smallest prime factor of its elements. More specifically, for each prime p we define the subset A_p consisting of all members n of our set A for which n has smallest prime factor equal to p. So we've now split up our set A into a
bunch of A_p's, ranging over primes p. Now, as a definition, we say that a prime p is Erdős strong if the inequality f(A_p) ≤ f(p) holds for all primitive sets A, where f(p) is just the singleton contribution 1/(p log p). In other words, this definition says that the singleton set containing just the prime p is maximal among all the A_p's. So one may think of breaking up the full problem into a bunch of local problems, one for each prime, and indeed the full conjecture would follow if every prime were known to be Erdős strong: in that case our series f(A) splits as the sum of f(A_p) over primes p, and if every prime were Erdős strong, we'd have a pointwise bound of each f(A_p) by f(p), giving the full series f of the primes. So this is a natural approach to the problem, and in recent work Pomerance and I obtained a sufficient condition for a prime to be Erdős strong. Unfortunately, that condition already fails at the first prime p = 2, which we found quite disappointing. However, it did hold for the first 10^8 odd primes, giving some numerical evidence in favor of this approach, and moreover the condition holds for over 99.999973 percent of primes under a strong form of the Riemann hypothesis, giving further theoretical, though conditional, support. On the other hand, one might view this result more pessimistically: even assuming just the Riemann hypothesis, it says that a tiny but positive proportion of primes fail the condition. Specifically, there is a so-called prime number race here, in this case involving the Mertens prime product, and even conditionally one can show that infinitely many primes p fail the condition. This perhaps suggests that the Erdős conjecture could be false, or at least beyond the reach of unconditional tools. And in addition, to contribute
more cautionary evidence, it turns out that a translated analog of the Erdős conjecture is false. Consider the translated sum f(A, h), which we define to be the sum of 1/(n log(n + h)) for n ranging over our set; the original series f(A) corresponds to h = 0. It would be natural to extend the original conjecture to these translated series, but it turns out there exists a primitive set A for which f(A, h) exceeds f(P, h), and this already occurs for quite small translates h, a little bigger than one. Hence the original conjecture at h = 0, if true, would only barely be so. Jared? Yes. We have a question in the chat. Sure. Would you want to unmute and ask directly? Or maybe it's about: what is the strong form of the Riemann hypothesis? Good question. For time I've been suppressing it, but this involves the Riemann hypothesis as well as the linear independence hypothesis. The Riemann hypothesis says the real parts of all non-trivial zeros lie on the line one half, and the linear independence hypothesis says that the imaginary parts are linearly independent over the rationals. Okay, so that's infinitely beyond the Riemann hypothesis.
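(An aside on the numerics mentioned a few slides back: the constant f(P) ≈ 1.6366 can be approximated directly, keeping in mind the talk's caveat that the tail beyond x contributes about 1/log x, so naive partial sums converge slowly. A minimal sketch of my own, not from the talk:)

```python
import math

def f_primes_partial(x):
    """Partial sum of 1/(p log p) over primes p <= x, via a simple sieve."""
    sieve = bytearray([1]) * (x + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(x**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(range(i*i, x + 1, i)))
    return sum(1 / (p * math.log(p)) for p in range(2, x + 1) if sieve[p])

s = f_primes_partial(10**5)
# The partial sum, and the partial sum plus the rough ~1/log x tail estimate:
print(s, s + 1 / math.log(10**5))
```

Adding the crude 1/log x tail estimate to the partial sum lands near the quoted value 1.6366, illustrating both the constant and the slow convergence.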
Yeah, but for brevity, I agree, as a kind of caricature I just said strong form; thank you for the question. Great. So perhaps, when presented with these results, one might have a kind of negative impression of the problem. However, as the main result, I'd like nevertheless to answer Erdős in the affirmative. That is to say, for any primitive set A we have that f(A) is at most f of the primes. In other words, using this series f as some sort of measure of size, some measure of magnitude, the primes are maximal among all primitive sets. Moreover, using these proof techniques, we were able to show that every odd prime is Erdős strong: namely, for any primitive set A and any prime p at least 3, we have that f(A_p) is at most f(p), establishing these local problems for the odd primes. However, the proof techniques break down at the first prime p = 2, and it remains an open question whether or not p = 2 is Erdős strong. So this is quite a concrete question, still open: does there exist a primitive set of even numbers whose series f(A) exceeds 1/(2 log 2) = f(2)? As often happens in number theory, one can sometimes handle the odd primes separately from the even prime p = 2, and I guess for that reason some people call 2 the oddest prime of them all; this is certainly one of those cases. In this talk I'd also like to mention a related question due to Erdős, Sárközy, and Szemerédi from the 1960s. In 1968 they were interested in the related question of primitive sets of large numbers: supposing all numbers in your primitive set are at least x, they asked how large the series f(A) can be in the limit as x tends to infinity. Partly motivated by probabilistic heuristics, they conjectured that this limsup, ranging over primitive sets larger
than x, of f(A), is at most one. Using the proof techniques from the original conjecture, we're able to make the following progress towards this problem: we show that the limsup of f(A), ranging over primitive sets of large numbers, is at most e^gamma times pi/4, which numerically is about 1.399, roughly 1.4, while the conjecture asserts that the limsup is actually at most one. Now I'd like to give a sketch of some of the ideas that go into the proofs of these results, and to do so I'd like to begin with Erdős's original argument from 1935, in which he showed that f(A) is uniformly bounded over all primitive sets. I'll present it in a bit more modern notation than Erdős did, which will be helpful for us going forward. In this sketch we'll show that f(A) is at most e^gamma for all primitive sets, and for simplicity we'll assume that all the elements of A are sufficiently large, so in particular all the largest prime factors are large. As some notation, we'll let P(a) denote the largest prime factor of a number a. Our series f(A), the sum of 1/(a log a), is easily at most the sum of 1/(a log P(a)). Once we've done this, we can appeal to Mertens' product theorem, a classic result already predating the prime number theorem, which says that 1/log P(a) is roughly e^gamma times the so-called Mertens product, over primes p up to P(a), of (1 - 1/p); this rough equality really works for us under the assumption that all our numbers are large. At first glance this step might seem a bit arbitrary, but the advantage is that we can now recognize this Mertens product as part of an expression representing the natural density of a specific set: namely, if we introduce the definition L_a, for a number a, to be the set of multiples b times a where all the primes dividing b are at least
the largest prime factor P(a). Indeed, the natural density of this set L_a contributes a factor 1/a, because all the numbers are multiples of a, and in addition no small primes p divide b, which corresponds to the Mertens product of (1 - 1/p) over the small primes. So what we've shown is that, at least when all the numbers are large, f(A) is at most e^gamma times the sum of the densities of these sets L_a, for a ranging over our set A. Up to now I haven't used any properties of the set A at all, but clearly, in order to say something about primitive sets, we need to make use of this condition, and in fact the key property we use is that, for a primitive set, the sets of multiples L_a, for a ranging over A, are pairwise disjoint. This is the key property that Erdős needs, and we'll in fact show it in a few slides, but the important consequence for us is that, once we know disjointness, the sum of the densities of the L_a equals the density of the union of the L_a's over our set A. We can give this union a name: L_A, the union of the L_a's. So we've shown that essentially our series f(A) is at most e^gamma times the density of the union L_A, and finally, since the density of any set is at most one, we conclude that f(A) is at most e^gamma, which completes the sketch of the proof. Again, to make this fully rigorous one needs to consider the case where small numbers are involved and use explicit estimates. So this was the overall strategy in Erdős's proof to get a bound on the series f(A), but in order to make further progress, we identify two crude steps in the argument on the previous slide. The first step is the very first
one, in which we bounded the largest prime factor P(a) by a itself. This step can clearly be wasteful, for example if many elements have small prime factors. In other words, suppose we were in the special case where we knew ahead of time that P(a)^2 is at most a for all elements of our set; then just running the argument again would automatically give us a savings factor of two, improving the bound to e^gamma/2. Numerically this is already less than 0.9, in particular less than f of the primes, which is about 1.6 as we recall. So in this special case, with this additional knowledge, we get the full result. The other step is the final step, where we bounded the density of the union L_A by one. This step is a bit more subtle, but one can show it can be wasteful if the elements of our set satisfy known multiplicative constraints. One of the easy examples of such a constraint: if our set A contains only odd numbers, then one can show that the density of the union L_A is less than one half. So in this second special case, where there are additional constraints on our set, we also get a savings factor of two, and hence an upper bound of e^gamma/2, which is less than f of the primes, giving the result in the second case as well. So these two steps are potentially crude, but we caution that when the set A is the full set of primes itself, both steps are actually perfectly efficient. Indeed, if a is prime then it equals its largest prime factor, which handles the first step, and one can show that the density of the union of the L_p's over the primes is equal to one. This density statement about the second step can be viewed as a kind of analytic version of the basic fact that every positive integer greater than one has a unique smallest
prime factor, and this basic fact, just by unpacking the definitions of density and of the L_p's, gives the result. So you can't get any additional savings when A is the primes, but the hope would be that if our set contains some composite numbers, we can get savings from one of these two cases. Moreover, morally, I've come to feel that these two potential sources of savings actually explain why the primes should maximize the series f(A) in the conjecture: at the primes, the two steps admit no additional savings. Now I'd like to restrict our attention to sets of composite numbers. In order to refine this argument of Erdős, we showed that if the first step is efficient for a set of composite numbers, then the second step must be wasteful, in which case we get savings. That is to say, if all numbers a in our set A satisfy that P(a) is approximately the same size as a, then we should be able to get a smaller bound on the density of the union L_A. To make this quantitatively precise, we have the following proposition: for any primitive set A of composite numbers and any parameter v satisfying the inequality P(a)^(1+v) greater than a, uniformly for all elements a of our set (this says, in a quantitative sense, that the largest prime factor is quite large), we have a bound on the density of the union L_A by the square root of v. This bound of sqrt(v) refines the trivial bound of one that we used in the final step of the sketch, at least in the range where our parameter v is between zero and one. Morally, the proposition says that a primitive set cannot contain too many composite elements a which have a very large prime factor, nearly the size of a itself. With this key density proposition, one can incorporate it into the original
argument of Erdős and get additional savings, leading to a better bound. Indeed, we've seen in the first step the comparison of 1/log a to 1/log P(a); under the assumptions of the proposition this corresponds to a savings factor of 1/(1+v) from the exponent. So we have the savings factor, by assumption, coming from the first case, and we note that this savings factor improves as v grows; therefore the worst-case scenario for us is when the key density bound of sqrt(v) holds with equality for all v in the range. Thus one can show that, for each v, the subset of a's for which we have approximate equality, P(a)^(1+v) approximately a, contributes about the derivative of sqrt(v), that is 1/(2 sqrt(v)), to the density of L_A. One should view this morally as with a probability distribution: if we have information about the CDF, the cumulative distribution function, then we get information about the probability density function just by taking the derivative. Putting the two pieces together, the savings factor 1/(1+v) along with the corresponding density 1/(2 sqrt(v)) at each v, and integrating over v in the range between zero and one, we get our savings, which in this case comes out to a nice integral equal to pi/4. Combining these results all together, the refinement of Erdős's argument leads to the final bound: f(A) is at most e^gamma, as we had before, but now times the savings factor pi/4, which is approximately 1.399 numerically, and in particular less than f of the primes, which is about 1.6. I stress again that for the sake of the sketch we assumed all the numbers involved were large, and in the latter part we assumed we were dealing with composite
numbers. So to make this rigorous, one needs explicit estimates for small numbers, and one also needs to take care of a mixed regime with some primes and some composites, but this sketch really gives the flavor of the main ingredients that go into the argument and into this bound. We've seen in the sketch that the set of multiples L_a, the numbers of the form b times a where all the primes dividing b are at least P(a), arose naturally in the proof. Here L alludes to the lexicographic order on numbers given by their prime factorizations, and as a definition, if a number n lies in L_a, we say n is an L-multiple of a (a special kind of multiple), and correspondingly a is an L-divisor of n. For example, the L-divisors of the number 300, which factors as 2^2 times 3 times 5^2, are given by taking partial products of these primes in ascending order: the empty product 1, the first prime 2, the first two primes 4, the first three primes 12, the first four primes 60, and finally the full product 300 itself. So to list the L-divisors, all you have to do is take the ascending list of partial products. Once we have this notion of L-multiples and L-divisors, we observe a very attractive property that they satisfy, namely the following trichotomy: for any numbers a and a', any integers at all, either the corresponding sets of multiples L_a and L_a' are disjoint, or a is in L_a', or a' is in L_a. In other words, either one is an L-multiple of the other, or their sets are disjoint. In particular, as we recall from the sketch, in the case where A is primitive you can't have one element dividing another, and therefore the sets L_a must be pairwise disjoint, and this was the key property we needed. Jared? Yes, we have a question in the chat, but I see you; maybe you
just want to unmute and ask. Right, so I just wanted to know: if A is equal to the set of primes, and if I take v in the key lemma... So, in this sketch I was restricting my attention to composite numbers, just for simplicity, but yeah, thank you for pointing out that fact: this would not hold for the primes. Thank you. So we have this trichotomy, and it gives the key property that we need for Erdős's argument. The proof is quite simple. The idea is that if you have two numbers a and a' and their sets of multiples L_a and L_a' are disjoint, then we're happy and we're done. Otherwise, we can assume there's an intersection; in other words, there are some numbers b and b' for which b times a equals b' times a', and we can factor this common value into primes p_1 up to p_r in ascending order. By the definitions of L_a and L_a', this means that a and a' are both partial products of the small primes: say a = p_1 up to p_i for some index i at most r, and a' is similarly some partial product p_1 up to p_j. But then, if i is at most j, a is a partial product of a', in other words a is an L-divisor of a', and similarly if j is at most i, then a' is an L-divisor of a. So in either case we conclude the result, and hence we get the trichotomy. Once we have this notion of L-divisibility, we can introduce the following key definition: we say that a set of positive integers greater than one is L-primitive if no member of the set is an L-divisor of another. In particular, since we're looking only at L-divisors, not arbitrary divisors, this definition is weaker than primitive, leading to a broader class of sets, and by the trichotomy that we've seen on
the previous slide, this means that A is L-primitive if and only if the sets of multiples L_a are pairwise disjoint, which gives us a nice characterization of L-primitivity. It turns out that this trichotomy was the only real property we needed in Erdős's argument, so one can extend the bound to all L-primitive sets and show that f(A) is less than e^gamma for all L-primitive sets A. Moreover, this notion of L-primitive sets plays a key role in our proof of the Erdős conjecture, and it's therefore natural perhaps to conjecture an analog for L-primitive sets in which the primes are again the maximizer. However, it turns out that the bound of e^gamma is best possible in this broader class: there is a sequence of L-primitive sets, specifically non-primitive L-primitive sets, whose series f(A) gets arbitrarily close to e^gamma, which in particular is larger than f of the primes, about 1.6. Hence this potential L-primitive analog of the conjecture is false, and this highlights some of the additional subtlety in the original problem, where the extra information of genuine primitivity is really crucial. In the remaining part of the talk, I'd like to give a feeling for some of the ingredients that go into the proof of the key density bound, and again we'll restrict our attention to sets of composite numbers. Jared, we have another question in the chat. Sure. It's by Victor Miller; go ahead, Victor. Hi Jared, is there a description of the maximal set for L-primitive sets? Since you say that it's false, I mean, just like you've described the primes, or is it something really complicated? Right, I mean, that's certainly a good question. At least in the counterexample I provide, it was non-constructive; I just proved the existence of these sets. I really don't have a good feeling for what they should look like, but yeah, that is
certainly a good question. So for this key proposition, I'd like to give a feeling for some of the ingredients that go into it, really the core of the proof. To recall, the statement says that if we have a primitive set of composite numbers with P(a)^(1+v) exceeding a for all elements, then we get the stronger bound sqrt(v) on the density of the union L_A. The core of the proof is to construct a larger L-primitive set C containing A. Here C is implicitly dependent on both our given set A and our given parameter v, and it's important to stress that this set C will not be genuinely primitive, although it will be L-primitive. The key part of the proof is to show that the set C is L-primitive, and doing so requires the full assumption that our original set A is primitive, not just L-primitive; there's a key assumption being used. For the sake of concreteness, the set C is given by numbers of the form a times c, where a is a number in our set A and all the primes dividing c lie in the following explicit range: between P_2(a) and P_2(a)^(1/sqrt(v)). Here P_2(a) is just the second largest prime factor of a, and again this necessarily uses the assumption that the numbers a are composite. So this is an explicit construction, and the key difficulty is to show that the set C is L-primitive. But there is one easy property we can check just by hand: considering c = 1, we recover that the original set A is indeed a subset of this larger set C. Now, assuming this result, that this new set C is in fact L-primitive though not necessarily primitive, we can conclude and obtain the density bound. Indeed, recalling that the density of each set L_a is given by 1/a times the Mertens product, one can easily show that the sets L_a and L_C are
self-similar in the following precise sense. The density of L_C (using that our set C is L-primitive, which allows us to rewrite the density of the union as a sum of densities, the key property) can be computed term by term: by construction of C, self-similarity tells us that the density of L_{ac}, summed over the admissible c, equals the density of L_a times the product, over primes p in the explicit interval defining our set C, of (1 + 1/p + 1/p^2 + ...). From this, one can show, by Mertens' theorem again, that this product is roughly the log of the larger endpoint divided by the log of the smaller endpoint, which at the end of the day comes out to about a factor of 1/sqrt(v). So when the dust settles after performing this computation, we obtain the relationship that the density of L_C is approximately the density of L_A divided by sqrt(v). If we multiply both sides by sqrt(v) and use the simple bound that the density of L_C is at most one, we deduce the key bound that the density of L_A is at most sqrt(v), giving the result. In other words, the moral of this result is that we can beat the trivial bound on the density by exhibiting extra multiplicative structure, lurking explicitly in the form of our set C. In the remainder of the talk, I'll give some additional applications of the ideas that come from these results. From an abstract point of view, a primitive set (recall the definition: no number in the set divides another) is nothing but an antichain in the partial ordering of numbers by divisibility: no two elements are comparable. Taking the dual notion of a chain, where every pair of elements is comparable, gives us the dual object in our context: a divisibility chain. To be explicit, we have a sequence of numbers d_1, d_2, d_3, and so on, where each number
divides the next in the chain. So these are really the natural dual objects to primitive sets, and a classic theorem of Davenport and Erdős from 1937 says that if a set A has positive density, or more precisely positive upper logarithmic density, then A must contain an infinite divisibility chain. So this result exemplifies a general theme from combinatorics that runs through combinatorial number theory, which says that if a set is large enough then it must contain structure. Many people might be familiar with Szemerédi's theorem, which says that if a set has positive density then it must contain arbitrarily long arithmetic progressions. So Szemerédi's result says that if a set is large enough it must contain additive structure, and analogously Davenport and Erdős tell us that if a set is large enough it must contain multiplicative structure. So in this vein we can analogously introduce the notion of an ℓ-divisibility chain: a sequence of numbers d1, d2, and so on, in which each number is an ℓ-divisor of the next in the chain. In particular this is a stronger condition than for a usual divisibility chain, and we can upgrade the Davenport–Erdős theorem to ℓ-divisibility chains under the same assumptions: if a set has positive density, specifically positive upper logarithmic density, then it must contain an infinite ℓ-divisibility chain. One can similarly consider quantitative refinements of Davenport–Erdős, quantifying the rate of growth of these divisibility chains, and in the 1960s Erdős, Sárközy, and Szemerédi showed that such a divisibility chain, which must exist by Davenport–Erdős, infinitely often grows to about size log log y times the density delta divided by e to the gamma. For this problem they were working with the log-logarithmic density, which, just to recall, is the limsup of 1 over log log x times this
series of 1 over n log n. And here we already start to see some potential connections to our series of the day. So analogously, using our results, we can extend Erdős, Sárközy, and Szemerédi and upgrade their theorem, under the same assumptions, to ℓ-divisibility chains, obtaining the stronger structure. For the sake of time I only mention in passing that it is an open question what the optimal rate of growth should be for these problems. Finally, to conclude, I'd like to pose an open question: what is the maximum of our series f(A) over all primitive sets of composite numbers? This is a natural question: in the full case of all numbers we now know the maximum is attained by the primes, so what happens if we throw out the primes and consider only composites? Banks and Martin conjectured, and this is still open, that this maximum is attained by the numbers with exactly two prime factors, perhaps suggesting the following vast generalization, which they conjectured for any integer k and any set of odd primes Q. Here we recall that the numbers with exactly k prime factors, all in Q, these k-almost primes, form an example of a primitive set, and moreover they conjecture that any primitive set of numbers with at least k prime factors, all in Q, so this subclass of primitive sets, should be maximized by the k-almost primes Q^k. So this really gives a much broader scope to the kind of phenomenon hinted at by Erdős's question. Taking a step back, to put Banks and Martin's question in context, we recall that the definition of a primitive set is quite simple, no number divides another, and hence this definition admits a very broad class of sets with potential pathologies, as we recall from Besicovitch's counterexample. Nevertheless, Banks and Martin are proposing here that the class of primitive sets has structure, namely this hierarchy of maximal elements Q^k, the k-almost primes. So note that Banks and
Martin's conjecture holds in the special case k equals 1, since we now know that the odd primes are Erdős strong. Moreover, assuming their conjecture, one can show that the Erdős–Sárközy–Szemerédi conjecture, involving the limsup of f(A) over primitive sets A of large numbers, actually equals one. So from a broad viewpoint we can see that the conjecture of Banks and Martin offers a potential framework to unify a lot of the results we've seen in this talk. And with that, I thank you for your time.
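Since the transcript describes its formulas only in words, here is a hedged LaTeX reconstruction of the main displayed objects from this part of the talk. The interval endpoints x and y defining C, the symbol d(·) for density, and the exact normalizations are assumptions inferred from context, not taken from the slides themselves.

```latex
% Hedged reconstruction of the formulas described in words in the talk.
% d(.) denotes density; x < y are the (assumed) endpoints of the interval
% of primes defining the set C; A is a set of integers greater than one.

% Mertens-type product used in the self-similarity computation:
\[
  \prod_{x < p \le y} \Bigl(1 + \frac{1}{p}\Bigr) \;\asymp\; \frac{\log y}{\log x},
  \qquad\text{leading to}\qquad
  d(L_C) \;\approx\; \frac{d(L_a)}{\sqrt{v}}
  \;\Longrightarrow\;
  d(L_a) \;\approx\; \sqrt{v}\, d(L_C) \;\le\; \sqrt{v},
\]
% since trivially d(L_C) <= 1.

% Log-logarithmic density used by Erdos--Sarkozy--Szemeredi:
\[
  \delta(A) \;=\; \limsup_{x \to \infty} \frac{1}{\log\log x}
    \sum_{\substack{n \in A \\ n \le x}} \frac{1}{n \log n}.
\]

% The Erdos sum of a primitive set A, the "series of the day":
\[
  f(A) \;=\; \sum_{a \in A} \frac{1}{a \log a}.
\]

% Banks--Martin conjecture (as stated in the talk): for every k >= 1 and
% every set of odd primes Q, any primitive set A of numbers with at least
% k prime factors, all lying in Q, satisfies
\[
  f(A) \;\le\; f(Q^k),
\]
% where Q^k denotes the k-almost primes with all prime factors in Q.
```

The first display packages the density computation from the proof sketch; the last two displays are the objects behind the closing open question.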