So hi everybody, we're just going to wait a minute or two; we have people coming in from other seminars, I understand. All right. It gives me great pleasure to welcome you to the, depending on your viewpoint, Monday or Tuesday edition, part number three, of the web seminar. Wherever you're coming in from, I hope you enjoy this. We are recording, so you can check back on the website; hopefully it won't be as fuzzy as the last one, and we should have a good recording up by sometime tomorrow in case you need to miss this or want to go back. As usual, we request that you keep your audio muted throughout the talk. If you want to ask questions, please fire away in the chat or raise your hand, and either I or Lena or Philip will relay the question. So without further ado, it gives me great pleasure to introduce Soundararajan, coming to us from Stanford in the Bay Area, who will speak on equidistribution from the Chinese remainder theorem, joint work with Emmanuel Kowalski. "Thanks very much, Mike."
Thanks, everyone, for coming. I want to discuss some work that Emmanuel and I did while I was on sabbatical this past year in Zurich. It concerns a very elementary problem: you pick residue classes modulo prime numbers, you form residue classes modulo composite numbers using the Chinese remainder theorem, and you ask whether these residue classes become equidistributed modulo a typical composite number.

Let me first state the problem that one would really like to study, the problem for prime moduli. There is essentially only one theorem in this context, but it's a beautiful one, due to Duke, Friedlander and Iwaniec. Look at a quadratic congruence like n^2 + 1 ≡ 0 (mod p). If p ≡ 3 (mod 4) there are no solutions, and if p ≡ 1 (mod 4) there are two roots. Take the fractions u/p, where u^2 ≡ −1 (mod p), and study these fractions as you vary over all primes p up to x. There are about π(x), so roughly x/log x, such fractions, because every prime that is 1 mod 4 contributes two of them. The beautiful theorem of Duke, Friedlander and Iwaniec, one of the great theorems of the 1990s, shows that this set is equidistributed modulo one: any interval inside [0,1] gets its fair share of the fractions landing inside it. The theorem applies to the roots of any irreducible quadratic polynomial; in the definite case it's due to Duke, Friedlander and Iwaniec, and in the indefinite case to Tóth. That's basically all we know about roots of polynomials to prime moduli.
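As a quick numerical illustration (not part of the proof, with the cutoff 5000 and the ten bins as arbitrary illustrative choices), one can compute the fractions u/p with u^2 ≡ −1 (mod p) for all primes up to some bound and see that they spread out over [0,1):

```python
from collections import Counter

def primes_upto(x):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0] = sieve[1] = False
    for n in range(2, int(x ** 0.5) + 1):
        if sieve[n]:
            sieve[n * n::n] = [False] * len(sieve[n * n::n])
    return [n for n in range(2, x + 1) if sieve[n]]

# fractions u/p with u^2 + 1 ≡ 0 (mod p); two roots whenever p ≡ 1 (mod 4)
fracs = [u / p
         for p in primes_upto(5000)
         for u in range(p)
         if (u * u + 1) % p == 0]

# crude equidistribution check: split [0,1) into 10 bins,
# each should receive roughly 10% of the fractions
counts = Counter(int(10 * t) for t in fracs)
props = [counts[k] / len(fracs) for k in range(10)]
```

Of course such a computation only gives an empirical picture; the content of the Duke–Friedlander–Iwaniec theorem is that the uniformity persists as x grows.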
We would very much like to have such a result for, say, cubic congruences, but that is entirely open.

Now, there is a much older result of Hooley, which considers the roots of a polynomial modulo q, where q is no longer prime but composite. The modulus q will typically have a lot of prime factors, and if q has a prime factor for which there is no solution, then of course you're out of luck: you get no residue classes mod q. But you can imagine that if the modulus q is divisible only by primes for which the congruence mod p has a solution, you may get a lot of solutions mod q, and that these solutions get equidistributed. What Hooley does is this: he takes any irreducible polynomial f of degree d at least two, solves the congruence f(ν) ≡ 0 (mod q), collects the fractions ν/q over all moduli q up to x and over all the roots for each modulus, and shows that this set is equidistributed modulo one. This set might typically have about x times some power of log x elements. The key thing that makes the argument work, and which I want to abstract today, is that the roots modulo q arise from the roots modulo prime powers, combined using the Chinese remainder theorem. So in some ways what I want to tell you is that this result is really about the Chinese remainder theorem rather than about roots of a polynomial.

So here is the problem I want to consider. Suppose for every prime power p^v,
I give you a set A_{p^v} of residue classes modulo p^v, and I write ρ(p^v) for the cardinality of this set. For some prime powers the set could be empty, and then ρ(p^v) is just zero. I then use the Chinese remainder theorem to construct a set A_q of residue classes mod q: for every prime power exactly dividing q, I take an element of the given set, and I form a residue class mod q by combining one such element for each of the prime powers dividing q. So I get some number of residue classes modulo q, and that number, ρ(q), is a multiplicative function of q: ρ(q) is the product of ρ(p^v) over the prime powers exactly dividing q. The question is: what can we say about the distribution of the fractions a/q as a ranges over the elements of this set?

Now, in order to say anything at all, I should actually give you residue classes modulo p for enough primes p. If I give you no residue classes for every prime p, then I give you none for every q, and there's nothing to prove. So I'm going to focus on the set Q of all moduli q for which I'm giving you at least one residue class, and I want to be assured of a reasonable supply of such moduli. One way to ensure that is to require that the primes for which I give you at least one residue class have positive density: for every x,
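To make the construction concrete, here is a small sketch of building A_q from prescribed sets at the primes via the Chinese remainder theorem, for a squarefree modulus; the particular sets A[p] below are hypothetical choices, purely for illustration:

```python
def crt_pair(a1, q1, a2, q2):
    """The unique x mod q1*q2 with x ≡ a1 (mod q1) and x ≡ a2 (mod q2)."""
    t = (a2 - a1) * pow(q1, -1, q2) % q2
    return (a1 + q1 * t) % (q1 * q2)

def build_A_q(primes, A):
    """Combine the sets A[p] for p | q into residue classes mod q = prod(primes)."""
    classes, mod = [0], 1
    for p in primes:
        classes = [crt_pair(c, mod, a, p) for c in classes for a in A[p]]
        mod *= p
    return sorted(classes), mod

# hypothetical data at the primes: rho(3) = 2, rho(5) = 1, rho(7) = 3
A = {3: [1, 2], 5: [2], 7: [3, 4, 6]}
classes, q = build_A_q([3, 5, 7], A)
# rho is multiplicative: |A_105| = 2 * 1 * 3 = 6
```

Every element of `classes` reduces, modulo each prime p dividing q, to one of the prescribed classes in A[p], which is exactly the multiplicativity of ρ described above.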
there is at least some density α of the primes up to x for which I give you at least one residue class modulo p. If I make this assumption, then the set of moduli for which I have at least one residue class is basically as large as a sieve would predict: its size is governed by the product of (1 − 1/p) over the primes p for which I give you no residue class, so you always start out with a set of moduli of size at least on the order of x over a power of log x, roughly x/(log x)^{1−α}, as a lower bound.

What else can go wrong? Suppose for every prime p I give you only one residue class mod p. Then for every modulus q I also give you only one residue class mod q, and I cannot hope for any kind of equidistribution. For example, suppose I only give you the residue class 1 mod p for every prime p; then for every composite number I'm giving you the residue class 1 mod q, and there is no equidistribution theorem to be proved. So I should at least have that for some substantial proportion of primes I give you more than one residue class, say two residue classes for a large number of primes. What I want to show is that if I give you at least two residue classes for lots of primes, I can prove some kind of equidistribution theorem. So let me set that up. For every modulus q for which I'm giving you at least one residue class,
I associate a probability measure: put a delta mass at each of the fractions a/q, and normalize by the total number of points. If I want to understand whether these points are close to equidistributed, I can consider the discrepancy: take the worst interval in R/Z, the one for which the difference between the actual proportion of points lying inside the interval and the length of the interval is as large as possible. This is the interval with the greatest deviation from equidistribution, where I get either too many or too few points. The measure of equidistribution would be to show that this discrepancy is small, and I would like to show it is small for many, or almost all, moduli in my set Q(x).

As I've already said, I can only hope for such a result if I'm assured of an adequate supply of primes p for which I'm giving you at least two residue classes. This is necessary in order to prove such a theorem, and the main theorem says that it is in fact sufficient. So assume I have a positive density of primes for which I give you at least one residue class, and suppose the primes for which I give you at least two residue classes are plentiful in a very weak sense: all I need to know is that the sum of the reciprocals of those primes, call it P, is large; let's think of P as going to infinity. If that's true, then for a typical modulus q with at least one residue class, the discrepancy goes to zero, exponentially in this quantity P. That's the statement of the theorem, more or less.
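In one dimension, the discrepancy against intervals of the form [0, t), the star discrepancy, which controls the interval discrepancy up to a factor of 2, can be computed exactly from the sorted points; a minimal sketch:

```python
def star_discrepancy(points):
    """D*_N = sup_t |#{x_i < t}/N - t| over t in [0,1], via the classic formula."""
    xs = sorted(points)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

# well-spread points have small discrepancy ...
spread = [a / 101 for a in range(101)]
# ... while a single repeated point (one residue class) is as bad as possible:
# here the star discrepancy of a point mass at 1/2 is 1/2
clustered = [0.5] * 10
```

This matches the remark in the talk: a set carrying only one residue class mod q contributes one point mass, which no interval-counting criterion will call equidistributed.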
So what it says is that, with a small number of exceptions, the discrepancy tends to zero in this parameter P, for all but a number of moduli that is the size of the set times something exponentially small in P. And this is basically the best possible result you could hope to get. If all you know about the set of primes p for which I give you at least two residue classes is this sum of reciprocals, then the probability that a modulus is not divisible by any prime in that set is about e^{−P}. So there will be a set of moduli about that big on which I'm only giving you one residue class, and if there's only one residue class mod q, the discrepancy is just one: there's only one point, so I can take an interval of length essentially one not containing that point, and you have a big deviation from equidistribution. One point doesn't get equidistributed. So apart from the constant 1/6 in the exponent, this result is the best you could hope to get.

Now, this is a generalization of Hooley's result, because in Hooley's case what I'm giving you is the roots of a polynomial mod p for every prime p, and then building up the roots mod q by the Chinese remainder theorem. By the Chebotarev density theorem, an irreducible polynomial has roots modulo a positive density of primes, so you can just apply the theorem I described and get equidistribution for almost all moduli q. But there is a small difference between how we formulated this problem and how Hooley formulates it.
We consider, for every modulus q, the point masses at a/q, and normalize by the total number of points, so each modulus gives a probability measure; then we average those probability measures over a typical modulus q having at least one point. What Hooley does is slightly different: he takes all the roots of the polynomial modulo q, over all the possible moduli q, puts all those points together into one set, and divides once by the total number of points. So these are two slightly different formulations of equidistribution. And, funnily enough, the general theorem I stated holds for our measure but not for Hooley's measure.

Somewhat paradoxically, you can construct counterexamples to Hooley's version in the general setup by considering very big sets: for every prime p I could give you a huge number of residue classes mod p, something like p/log p of them. The problem with giving you so many residue classes mod p is that, when you put all the residue classes together by the Chinese remainder theorem and look for equidistribution, the primes dominate: the primes contribute a positive proportion of all the points, against all the composite moduli as well. So in Hooley's measure you don't get equidistribution, because at the primes I can just choose the first p/log p integers, and those are not equidistributed; they all concentrate near zero. On the other hand, in the formulation Emmanuel and I use, you do get equidistribution in this setting, because every modulus contributes only one probability measure, and the primes are still only a small proportion of all the moduli.
Okay, so that is the initial result we have, developing Hooley's idea, and I want to show you a couple of ways in which we generalized it. One is to generalize the problem to higher dimensions. The other is equidistribution when we restrict the moduli, not to primes, but to almost primes, numbers with a given number of prime factors. It turns out that under reasonably general assumptions you can even prove equidistribution for numbers with just two prime factors, although there's no hope of an equidistribution theorem for numbers with just one prime factor: at the primes I can choose my sets arbitrarily, say just the first 100 residue classes mod p, and there will be no equidistribution.

So let me say how to think about the generalization to higher dimensions. The situation is like what we had before: for every prime power I give you a set of n-tuples of residue classes, a subset of (Z/p^v Z)^n, and I construct n-tuples of residue classes mod q using the Chinese remainder theorem in each coordinate.

I notice a question in the chat: do I need some kind of compatibility between different powers of a fixed prime? Actually, the prime powers are not relevant at all; the whole thing depends only on what you do modulo primes. If you look at the statement of the theorem, the assumption was only that for a substantial number of primes I should give you two residue classes mod p; I need no assumptions on prime powers at all. If you like, you can focus only on squarefree moduli, and the theorem would be true for just squarefree moduli; you can choose anything you want at the prime powers, it doesn't matter. Any other questions before I discuss higher dimensions? Okay. So in n dimensions,
I'm assuming that we are given, for each prime power p^v, some set of n-tuples of residue classes mod p^v, which are extended to n-tuples mod q using the Chinese remainder theorem in each coordinate, for every modulus q. The notation is the same as before: Q is the set of moduli for which I give you at least one residue class, and I should assume a plentiful supply of prime moduli, a positive density of primes with at least one residue class. I'm then interested in the equidistribution of the points a/q over all the n-tuples a in A_q, and once again this is quantified using a discrepancy. Here we use the box discrepancy: boxes in R^n projected down to (R/Z)^n, and you pick the worst possible box, the one with the biggest deviation between the actual count and the expected count, which is just the volume of the box.

Of course, for many prime moduli I must give you at least two residue classes; otherwise, once again, I won't have enough residue classes mod q to formulate any kind of equidistribution. But here there is one more obstruction. Suppose for every prime p the points in A_p concentrate on a hyperplane, for example the hyperplane where the coordinates sum to 1 mod p. I put 1 to emphasize that this is an affine hyperplane, not one through the origin. If the points mod p land inside this hyperplane, then when I apply the Chinese remainder theorem, all the points mod q also land inside this hyperplane, and I'm not going to get equidistribution in a general box in (R/Z)^n.
In the one-dimensional case, a hyperplane is just a point, and then I'm just saying that if my sets mod p concentrate on one point, I don't get equidistribution; I need at least two points, as before. So the best thing you could hope for in the n-dimensional case is for sets that escape every affine hyperplane: given any affine hyperplane, enough points should lie outside it, and then I should hope for some kind of equidistribution. And that's basically the theorem we prove.

So let me define a parameter λ(p). Recall ρ(p) is the total number of points I give you in (Z/pZ)^n. Now λ(p) is the maximum number of those points lying inside any non-degenerate affine hyperplane; non-degenerate means the defining coefficients are not all zero, which would of course be boring. Take the worst case over all such hyperplanes. In dimension one this quantity is just one, since an affine hyperplane is a single point. The criterion we need is that the number of points I give you always exceeds the maximum number of points in a hyperplane, and sometimes I want this inequality to be strict, strictly larger, in order to get equidistribution. In n dimensions, any n points can be put inside some hyperplane, so basically you should think of having at least n+1 points in general position. In dimension one that means at least two points, and that's enough; in dimension n, roughly, I want at least n+1 points, but these n+1 points should be generic and not all land inside some hyperplane.
And in fact that's the theorem we can prove: the average discrepancy goes to zero in terms of the quantity written here, which is essentially the sum over primes p of (1/p)(1 − λ(p)/ρ(p)). What is this quantity? Imagine that the total number of points mod p is, say, twice as large as the maximum number of points inside any hyperplane; then 1 − λ(p)/ρ(p) is bounded away from zero. I would like that to happen for a large number of primes p, in the sense that the sum of the reciprocals of those primes tends to infinity. If this quantity is large, then I get equidistribution for a typical modulus q. Going back to the one-dimensional case: there λ(p) is just one, so if ρ(p) is at least two, 1 − λ(p)/ρ(p) is at least one half, and this is exactly the result I stated before, whose constant, if you recall, was one sixth times the sum of reciprocals of the primes with at least two residue classes. So, basically, in higher dimensions the necessary condition for equidistribution is also sufficient: if I escape every hyperplane, I get equidistribution for almost all composite moduli.

Now there are some other variants. You can restrict the number of prime factors of the modulus q and ask for equidistribution over moduli with exactly k prime factors; under very mild conditions you can guarantee that in wide ranges of k. One result is: if the quantity from before is a positive proportion δ of the full sum of reciprocals of primes, namely at least δ log log x, then I get equidistribution for moduli with k prime factors,
so long as k times this density δ goes off to infinity; you get a bound on the discrepancy that tends to zero. This is a pretty mild condition: if you think of δ as a fixed number, all it says is that as soon as the number of prime factors goes to infinity, no matter how slowly, almost all such moduli exhibit equidistribution. This very much uses the fact that a number with a small number of prime factors typically has k−1 prime factors that are very small and one prime factor that's very large. So these are not the numbers you get out of a sieve; I'm not looking at P_k numbers from a sieve, but honestly at numbers with exactly k prime factors.

If you make a slightly stronger assumption, that λ(p) is substantially smaller than ρ(p) for basically every prime, let's just say the ratio λ(p)/ρ(p) is small for every prime, then you can actually guarantee equidistribution even for numbers with two prime factors. To give a special case of that in the one-dimensional situation, which is the Hooley setting we started with: if I give you something very, very mild, like log log log log p residue classes mod p for every prime p, and construct the sets by the Chinese remainder theorem, then already for a typical number with two prime factors, the residue classes you construct get equidistributed. So that's a very mild hypothesis.

So let me give you some applications of this theorem, and then I'll give a quick sketch of how the proof goes.
The most interesting applications are when the Chinese remainder theorem is somehow in the background: I give you an object defined globally, without telling you up front what the Chinese remainder theorem is doing. Roots of polynomials are a good example, because I can speak of the roots of a polynomial mod q without telling you that they're constructed by the Chinese remainder theorem. A generalization of Hooley's setup would be: I give you an irreducible polynomial of degree d, and instead of just looking at the roots a in one dimension, I look at the simultaneous equidistribution of the tuples (a, a^2, ..., a^{d−1})/q in (d−1)-dimensional space. Given any hyperplane, how many of these points can land inside it? The points lie on a curve of degree d−1, so a hyperplane can contain at most d−1 of them; that bounds λ(p) by d−1 in this case. And by the Chebotarev density theorem there is a positive proportion of primes p at which the congruence mod p has a full set of solutions. So the theorem applies: the number of residue classes I'm given is a bit more than the number that can land inside any affine hyperplane, and therefore I get equidistribution of these tuples in (d−1)-dimensional space for almost all moduli q. In the case of prime moduli this was a recent conjecture of Hrushovski; apparently it comes up naturally in logic, and I have no idea how. He conjectured it for prime moduli, and there is no case in which it is known, because, as I told you, the only thing we know about primes is quadratic polynomials, where d is 2 and d−1 is 1.
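For small primes one can verify the hyperplane bound λ(p) ≤ d − 1 by brute force; this sketch checks it for d = 3, i.e. for the points (a, a^2) mod p, with p = 7 as a toy choice:

```python
from itertools import product

def lam(points, p, n):
    """Max number of the given points on any affine hyperplane
    h·x ≡ c (mod p) with h ≠ 0, by exhausting all h."""
    best = 0
    for h in product(range(p), repeat=n):
        if any(h):  # skip the degenerate hyperplane h = 0
            hits = {}
            for x in points:
                c = sum(hi * xi for hi, xi in zip(h, x)) % p
                hits[c] = hits.get(c, 0) + 1
            best = max(best, max(hits.values()))
    return best

p = 7
curve = [(a, a * a % p) for a in range(p)]  # rho(p) = p = 7 points
# h1*a + h2*a^2 ≡ c (mod p) is a polynomial of degree ≤ 2 in a,
# so any hyperplane catches at most 2 of the points: lambda(p) = 2 here
```

Since ρ(p) = 7 comfortably exceeds λ(p) = 2, this toy example escapes every hyperplane in the sense the theorem requires.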
So the Duke–Friedlander–Iwaniec theorem is all we have; the next case would be cubic congruences, and there we don't even know the equidistribution of a mod p, let alone the joint equidistribution of a and a^2. But more or less for free you get the result for composite moduli. You can play various other games with roots of polynomials: you can restrict the prime factors of the moduli to given subsets; you can restrict the roots to be quadratic residues mod p; you can restrict the primes dividing the modulus to lie in certain progressions, if you like. Another result, proven in the special case of roots mod q in a recent paper of Pollack and a coauthor, concerns the smallest root: the smallest a with f(a) ≡ 0 mod q. They showed that you can save a power of log q for many moduli q, and you get that again for free from the equidistribution theorem: since the discrepancy is pretty small, an interval very close to zero must contain enough points. One more application was mentioned to us by Roger Heath-Brown. He noted that you could take an irreducible binary form f(x, y) of degree at least two and look at its roots, and he thinks this will have applications to counting points on certain varieties; there's one case, maybe forms of degree four, where this could be used to actually count the number of solutions to such a form. You could also take some strange examples: take two curves, take their points of intersection in two dimensions, and look for the equidistribution of those. Again, what you would need to know is that the intersection of the two curves has enough points to escape any given hyperplane.
And that can be guaranteed by typical applications of Bézout's theorem.

There is one very weird example, but it's kind of fun, so let me mention it; this is really strange stuff, these are called pseudo-polynomials. A polynomial with integer coefficients has the property that if you reduce the variable mod p, you reduce the value mod p; in other words, the values of the polynomial are periodic with period p. A pseudo-polynomial satisfies the same property: it is some function f from the natural numbers to the integers with the property that a − b divides f(a) − f(b) for all a and b. So f(n) depends only on n modulo q, for every modulus q. It turns out that there are uncountably many pseudo-polynomials, so in particular there are lots of pseudo-polynomials that are not polynomials. There are some very bizarre examples: the floor of e times n! has the strange property that it's a pseudo-polynomial, and closely related to it is basically the floor of n!/e, which up to sign is the number of derangements in S_n. Both Hall and Ruzsa gave characterizations of pseudo-polynomials: you can write down a fairly explicit condition in terms of a binomial expansion. You write your pseudo-polynomial as c_0 plus c_1 times (n choose 1) plus c_2 times (n choose 2) plus c_3 times (n choose 3) and so on; the series terminates for each n, since the binomial coefficients (n choose k) vanish once k exceeds n. And if you prescribe suitable conditions on the constants c_1, c_2, and so on, you can guarantee that the pseudo-polynomial property is satisfied.
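One can check the pseudo-polynomial property of floor(e·n!) directly, using the identity floor(e·n!) = sum over k from 0 to n of n!/k! for n ≥ 1 (the tail of the series e·n! = Σ_k n!/k! beyond k = n lies strictly between 0 and 1):

```python
from math import factorial

def f(n):
    """floor(e * n!) = sum_{k=0}^{n} n!/k!, valid for n >= 1."""
    return sum(factorial(n) // factorial(k) for k in range(n + 1))

# the defining pseudo-polynomial property: (a - b) divides f(a) - f(b)
ok = all((f(a) - f(b)) % (a - b) == 0
         for a in range(2, 25) for b in range(1, a))
```

A finite verification like this is of course no proof, but it illustrates how rigid the divisibility property is despite f not being a polynomial.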
So you can ask a really strange question: is there an analogue of Hooley's theorem, do the roots of pseudo-polynomials mod q get equidistributed? It turns out that this is actually not true in general, because pseudo-polynomials are just crazy objects. I gave a version of this talk before, and one of my students, Vivian Kuperberg, noted afterwards that you can construct pseudo-polynomials that have roots only at primes in as sparse a set of primes as you like, so there's no hope of understanding anything about roots of a general pseudo-polynomial mod q. But if you take any nice example, like floor(e·n!) or the derangement numbers, it looks like there is a plentiful supply of roots mod p, and if you can verify that, our theorem applies and gives another instance of equidistribution. One funny case where we can prove it is the count of permutations in S_n with exactly one fixed point, a small cousin of the derangements. We can't prove the same thing for derangements, but for permutations with one fixed point we can show that this count has exactly two roots n mod p for every prime p. Therefore, for a typical composite number q, you can say something about the n for which this count is a multiple of q. So that's a strange application of the theorem, purely for amusement.

Let me sketch very quickly how the proofs go; the proof ideas go back to Hooley. To prove equidistribution, I should look at the associated Weyl sums. Given the set A_q, the Weyl sum at a given n-tuple h is the natural thing: the average, over the points a in A_q, of the exponential e(h·a/q). These Weyl sums satisfy a twisted multiplicativity relation, which is what the Chinese remainder theorem buys you.
So if you factor a modulus q as q1·q2, with q1 and q2 coprime, then the Weyl sum mod q1·q2 factors as a Weyl sum mod q1 times a Weyl sum mod q2. It's not exactly multiplicative but twisted multiplicative: the parameter at which you evaluate the Weyl sum mod q2 gets multiplied by the inverse of q1 mod q2, and the one mod q1 by the inverse of q2 mod q1. Here's where the Chinese remainder theorem comes in: it is simply the statement that the fraction 1/(q1·q2) equals q1bar/q2 + q2bar/q1 modulo one, where q1bar is the inverse of q1 mod q2 and q2bar is the inverse of q2 mod q1. If I cross-multiply and evaluate this fraction, the numerator is q2·q2bar + q1·q1bar, which is just 1 mod q1·q2. That's the Chinese remainder theorem.

So how does the twisted multiplicativity help us? Here's a very quick description of the proof. Factor a typical modulus q into its rough part and its smooth part: q = r·s, where s is the z-smooth part and r is the z-rough part. I'm not going to tell you exactly what the parameter z is; imagine it's some small power, maybe like x to the 1/log log x. If I want to understand the average of the Weyl sums over all my moduli, the twisted multiplicativity reduces it to understanding this factorized Weyl sum. The rough part basically doesn't have too many prime factors; if you like, think of the rough part as just being a prime number. About the rough Weyl sum I can't say anything, but it's a normalized Weyl sum, so it's at most one in absolute value, and I'll just use that trivial estimate and take it out. The smooth part has all its prime factors less than z, so it's going to have a lot of prime factors.
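The twisted multiplicativity is easy to check numerically; here is a sketch with hypothetical sets A_3 and A_5 (chosen arbitrarily for illustration), glued into A_15 by the Chinese remainder theorem:

```python
import cmath

def weyl(A, q, h):
    """Normalized Weyl sum S_q(h) = (1/|A|) * sum_{a in A} e(h*a/q)."""
    return sum(cmath.exp(2j * cmath.pi * h * a / q) for a in A) / len(A)

# hypothetical sets at the primes, q1 = 3 and q2 = 5, glued by CRT:
# a ≡ a3 (mod 3) and a ≡ a5 (mod 5)
A3, A5 = [1, 2], [2, 3]
A15 = [(a3 * 5 * pow(5, -1, 3) + a5 * 3 * pow(3, -1, 5)) % 15
       for a3 in A3 for a5 in A5]

h = 7
lhs = weyl(A15, 15, h)
# twisted multiplicativity: S_{q1 q2}(h) = S_{q1}(h * q2bar) * S_{q2}(h * q1bar),
# where q2bar = q2^{-1} mod q1 and q1bar = q1^{-1} mod q2
rhs = weyl(A3, 3, h * pow(5, -1, 3)) * weyl(A5, 5, h * pow(3, -1, 5))
```

Up to floating-point error, `lhs` and `rhs` agree for every h, which is exactly the identity 1/(q1·q2) ≡ q1bar/q2 + q2bar/q1 (mod 1) applied inside each exponential.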
And so I'm hoping that on the smooth part, for each modulus, maybe I get some cancellation in the Weyl sum, and when I put all these cancellations together, because there are lots of prime factors, the savings build up and contribute on the smooth part. So how do I make that precise? Well, there's nothing we can extract from the rough part of the Weyl sum, but we can say something about this r̄, the inverse of the rough part. If I range over all the moduli r in progressions modulo s, with s, say, smaller than x^(1/3) or something like that, the smooth part will usually be reasonably small. So then I can use the sieve to say that in every progression modulo s, I can't count exactly how many such numbers there are, but the sieve will always give me an upper bound of the right order of magnitude. If I plug in this upper bound, I can split this into progressions modulo s, and each progression gets basically its fair share. So this r̄ modulo s is basically getting equidistributed. So instead of having just one Weyl sum modulo s, I get to play with an average over Weyl sums modulo s, averaged over this tuple of values of h, over all these residue classes. This is where we are winning, because even though I may not know anything about these individual Weyl sums, I only have to understand what they are doing on average, and I can try to understand that by using Cauchy–Schwarz. I have to understand the L2 norm of these Weyl sums on average over s. So what is the L2 norm?
If I expand it out, I get two variables x1 and x2, ranging over the set modulo s that I give you, A_s. And then what is the average over a giving me? I would want h·x1 to be the same as h·x2 modulo s: the dot product of this tuple h with x1 should be the same as the dot product of h with x2 modulo s. So if I fix x1 and think of what this is saying about x2: the choices that I have for x2 are all the x2 that land inside an affine hyperplane, because the dot product of h with x2 must be whatever the dot product of h with x1 was. So here the condition that we impose comes in: we impose that the maximum number of points of A_s on any affine hyperplane is this quantity that we defined to be λ(s). A priori this sum was bounded by one, and now I extract a tiny bit more. For x1, the number of choices is the total number of residue classes that I give you modulo s, which is ρ(s); but once I fix x1, the number of choices for x2 is just this new parameter λ(s). So in the L2 average of the Weyl sums I have saved this quantity λ(s)/ρ(s), which a priori is at most one, but I can hope that if s has more and more prime factors, then I save a tiny bit on every prime factor, and so I can build up the savings for a typical modulus. Okay. And that's basically a sketch of the proof of the main theorems that I stated before. I have a few more minutes, and I'm going to end by giving you one last application of this circle of ideas, which is to averages of exponential sums.
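Schematically, the Cauchy–Schwarz step just described can be written as follows (my rendering, with ρ(s) = |A_s| and λ(s) the maximal number of points of A_s on an affine hyperplane; the talk's actual average carries sieve weights on the residue classes):

```latex
\frac{1}{s}\sum_{a \bmod s} \bigl|W_s(a h)\bigr|^{2}
  \;=\; \frac{1}{|A_s|^{2}}\,
  \#\bigl\{(x_1, x_2) \in A_s^{2} : h\cdot x_1 \equiv h\cdot x_2 \ (\mathrm{mod}\ s)\bigr\}
  \;\le\; \frac{\rho(s)\,\lambda(s)}{\rho(s)^{2}}
  \;=\; \frac{\lambda(s)}{\rho(s)}.
```

The first equality uses the orthogonality of additive characters: averaging e(a·h·(x_1 − x_2)/s) over a mod s detects exactly the pairs with h·x_1 ≡ h·x_2 (mod s).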
And this is one of the ideas that originally motivated Hooley. Hooley noticed that the idea he had on this equidistribution of roots of a polynomial modulo q also gave you some small cancellation; the example he gave was the size of Kloosterman sums modulo composite moduli q. For Kloosterman sums modulo q you have a connection with automorphic forms as well, so you can say a lot more in that context. But what he did was save a small power of log x, and you can do that in general. And sometimes that kind of input even has qualitative applications in the context of Weyl sums. This was also developed further by Fouvry and Michel in a paper about 15 years back. So the kind of quantity that you can consider is this: fix a polynomial f of degree d with integer coefficients, and then consider the exponential sum. For a modulus q, I've normalized this sum by dividing by √q, so that for prime moduli a typical such sum would be bounded by d − 1, if d is the degree of the polynomial. For quadratic polynomials you get a Gauss sum and you can never get anything better than one, but for cubic ones you get a bound of two. So in fact these sums satisfy some other kind of distribution, some kind of Sato–Tate law or some other distribution in general. Now, Fouvry and Michel considered averages of these normalized sums over all moduli q, and they noted that for certain classes of polynomials you can actually save something over the trivial bound that you would get from the Weil bound. So this theorem adds a little bit to that literature. And the main thing over what Fouvry and Michel did is that you can give a pretty simple criterion for when you can prove a result like this: the L2 norm of these sums, fixing a and only varying the composite modulus q, is bounded by x times some power of log log x.
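A small numerical check of the Weil bound just invoked (my own illustration; the polynomial n³ + n is an arbitrary cubic example): the normalized complete sum (1/√p) Σ_{n mod p} e(a f(n)/p) should be at most d − 1 = 2 in absolute value for every prime p > d and every a not divisible by p.

```python
import cmath

def normalized_exp_sum(a, p, f):
    """(1/sqrt(p)) * sum over n mod p of e(a*f(n)/p)."""
    s = sum(cmath.exp(2j * cmath.pi * ((a * f(n)) % p) / p) for n in range(p))
    return s / cmath.sqrt(p)

f = lambda n: n**3 + n          # a cubic, so degree d = 3
for p in [7, 11, 13, 17, 19, 23]:
    for a in range(1, p):
        # Weil bound: the normalized sum is at most d - 1 = 2 in absolute value.
        assert abs(normalized_exp_sum(a, p, f)) <= 2 + 1e-9
print("Weil bound |S| <= d - 1 = 2 holds for all tested (a, p)")
```

The trivial bound mentioned in the talk comes from applying d − 1 at every prime factor of q, which multiplies up to a power of log; the theorem being described saves that power on average over q.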
So the trivial bound would have been x times some power of log x, and we're basically saying that that power of log x goes away. You might expect that this result is close to best possible; maybe the answer here is actually bigger than x for a typical polynomial. This can be guaranteed so long as the polynomial f is indecomposable, and indecomposable simply means that f is not the composition of two polynomials g and h both of degree bigger than 1. Okay, so in particular, if you take any polynomial of prime degree and look at the exponential sums formed from it, then on average over all moduli q you can bound the mean square of those sums by just x times some power of log log x. The difference between this and what Fouvry and Michel did is that they used the work of Katz, which is basically something that's going to guarantee that for these kinds of sums to prime moduli, as you vary a, you can bound the L1 norm in a of such sums. Katz's theorem is a monodromy calculation showing that these sums get equidistributed in some group, but to guarantee that equidistribution is quite hard in general: you would need to assume something about the roots of the derivative of the polynomial f, and try to show that those don't satisfy certain conditions. So the condition in Fouvry and Michel is a little bit complicated, and you can now replace that condition by a much easier-to-check one; for example, for prime degree there's no condition at all. And we also don't need Katz's work in this proof. It's an idea that goes back to Shao, at least I learned it from him when he was a student of mine, on polynomials in two variables over the rationals, together with a theorem of Mike Fried, which I've stated here, which I find quite beautiful, and I didn't know about until fairly recently.
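To see the easy direction of why decomposability matters for the theorem about to be stated (my own remark): if f = g∘h with deg g and deg h both at least 2, then since u − v divides g(u) − g(v) for any polynomial g,

```latex
f(x) - f(y) \;=\; g\bigl(h(x)\bigr) - g\bigl(h(y)\bigr)
  \;=\; \bigl(h(x) - h(y)\bigr)\, G\bigl(h(x), h(y)\bigr)
```

for some polynomial G, and h(x) − h(y) is in turn divisible by x − y, so

```latex
\frac{f(x) - f(y)}{x - y}
  \;=\; \frac{h(x) - h(y)}{x - y}\cdot G\bigl(h(x), h(y)\bigr)
```

splits into two factors of positive degree. So for decomposable f the two-variable ratio is always reducible; the content of the theorem is that, apart from powers and Chebyshev polynomials, this is essentially the only obstruction.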
This builds on a condition that goes back to Schur. Take a polynomial f, and look at the polynomial in two variables f(x) − f(y), which is obviously divisible by x − y. So you factor out x − y; then Fried's theorem says that this ratio is absolutely irreducible, irreducible as a polynomial in x and y over the complex numbers, unless f is decomposable, or f is basically a power x^d, or basically a Chebyshev polynomial. And "basically" means that you compose the power, or the Chebyshev polynomial, with a linear polynomial; so a·x^d + b would be an example of what "basically" means. So this is a very beautiful theorem, I think, and along with Hooley's idea of how to think about these Weyl sums to composite moduli, it gives you the result on exponential sums that I've stated here. Well, that's all I wanted to say, so thank you very much for your attention.