Hello everyone. First I would like to thank the organizers for inviting me, and also for having the energy to run these online seminars over the last few years, especially during the first year of COVID. This seminar has helped me immensely to stay in contact with the math community, so it has been super helpful and I hope it continues. Okay, so today I would like to talk about the second moment method for rational points. This is a method that people in analysis or analytic number theory have been using for, I don't know, almost two centuries, if we can say that, but in the last two or three years people who do geometry and rational points have been using it more and more, and it's almost a small industry now, so I thought I would give an introductory talk around these topics. Okay, so let me — okay, it works, right? I'm on the next page. One important question about rational points is Hilbert's 10th problem. Broadly, this question says: give me any polynomial equation with integer coefficients, and say I want to solve it in the integers or in the rationals. Is there a finite-time algorithm to decide this? This was asked famously by Hilbert in 1900, and Matiyasevich in 1970, building on the work of many, many others, amazingly proved that there are some strange equations for which there is no finite-time algorithm. If you have seen these equations and you are an analytic number theorist, what you might think is that these counterexamples are perhaps not so typical — they take half a page to write and they are rather complicated. So you could ask: what happens if I pick a totally random, a typical, diophantine equation? Is there a finite-time algorithm? That's the main question I'm going to talk about today. Okay, so here is one of the very central conjectures in my area; it is due to Jean-Louis Colliot-Thélène and it says the following.
If you give me an equation which defines a variety over the rationals, and if the geometry is simple — technically we say smooth, projective and rationally connected — then there should be a finite-time algorithm. Why? There is the Brauer group: this is a finite group you can compute for every such diophantine equation, and just by looking at this group — whether it is trivial or not — you can decide, if you believe the conjecture, whether the Hasse principle holds. And the Hasse principle itself gives a finite-time algorithm. Why? Because if you have a smooth equation, you only have to check solubility at a finite list of primes, and for every prime there is a finite-time algorithm, basically Hensel lifting, to check whether there is a p-adic solution. Okay, so today I'm going to start by giving a few examples, then I'm going to describe, very philosophically, what the second moment method for rational points is, and in the last section I'm going to go into a bit more detail about one specific infinite family of equations. These are called Châtelet surfaces, and together with Tim Browning and Joni Teräväinen we proved that 100% of these Châtelet surfaces satisfy the Hasse principle. Okay, so if you don't come from analytic number theory, one question you could ask is: what exactly is a typical diophantine equation? How do you define this? Well, if you ask somebody in their first year, what they would probably tell you is: take all cubic equations, or all quadratic equations in, I don't know, 20 variables, and make a list — let all the coefficients run from one up to a million — and then check whether every one of these equations that is everywhere locally soluble actually has a solution.
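The finite local checks mentioned above can be illustrated with a toy brute-force script: for the conic x² + y² = n, solubility modulo p stabilizes under Hensel lifting at odd primes away from n, while p = 2 can carry a genuine obstruction. This is only my own minimal sketch of the idea, not anyone's actual algorithm.

```python
# Brute-force local solubility check for x^2 + y^2 = n (a toy illustration
# of the finite p-adic checks mentioned above; all names are my own).

def soluble_mod(n, m):
    """Does x^2 + y^2 == n (mod m) have a solution?"""
    squares = {x * x % m for x in range(m)}
    return any((n - s) % m in squares for s in squares)

# At odd primes p not dividing n, a solution mod p lifts to mod p^2
# (Hensel: the gradient (2x, 2y) is nonzero there), so the check is finite.
for p in (3, 5, 7, 11):
    for n in range(1, 30):
        if n % p != 0:
            assert soluble_mod(n, p) == soluble_mod(n, p * p)

# A genuine local obstruction at p = 2: squares mod 4 are {0, 1},
# so x^2 + y^2 can never be congruent to 3 mod 4.
print(soluble_mod(3, 4), soluble_mod(5, 4))  # False True
```

The point of the demo is that "is there a p-adic solution?" reduces to finitely many congruence checks once Hensel's lemma applies.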
So what I'm trying to say is that the obvious way to define "typical" is to order these polynomial equations by the size of their coefficients; this is the most natural way to do it. One example: in, I guess, 2014, Manjul Bhargava proved that if you take a homogeneous cubic equation in three variables — so there are ten coefficients — and you let these ten coefficients vary in a big interval, say from minus a billion up to plus a billion, and you let the length of the interval go to infinity, then a positive proportion of these equations satisfy the Hasse principle. And, interestingly, he also proved that a positive proportion fail the Hasse principle. The technology he used built on a lot of his earlier work on averages of Selmer groups. So you see, even for cubic equations in three variables these questions are not as trivial as you might think when you first see them. Okay, a very recent result, from a couple of years ago: Ghosh and Sarnak proved that the Hasse principle holds for 100% of certain cubic affine surfaces. The equation is: an integer k equals a sum of three squares minus the product of the three variables. These are very difficult — you cannot treat them with, say, the circle method, because the number of variables is very small. They used a lot of ideas from the circle of the Cohen–Lenstra heuristics, like averages of class numbers of quadratic fields, to prove that the Hasse principle holds for 100% of coefficients k. So you let k be a random integer in a big interval and then you check the local-to-global principle. Okay, another example, more related to what I'm going to talk about today: together with Alexei Skorobogatov, in 2020, we proved that the Hasse principle holds for a positive proportion of Châtelet surfaces.
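For concreteness, the Ghosh–Sarnak surfaces k = x² + y² + z² − xyz can at least be experimented with by a naive box search. This is purely my own illustration of the shape of the equation — their actual method is the class-number machinery just described, not a search.

```python
# Toy search for integer points on x^2 + y^2 + z^2 - x*y*z = k
# (the affine cubic surfaces from the Ghosh-Sarnak result; this brute
# force is only an illustration, not their class-number method).

def find_point(k, box=12):
    rng = range(-box, box + 1)
    for x in rng:
        for y in rng:
            for z in rng:
                if x * x + y * y + z * z - x * y * z == k:
                    return (x, y, z)
    return None

# Small k often have small solutions, e.g. k = 2 via (1, 1, 0).
print(find_point(2), find_point(5))
```

Of course a search in a box can never certify insolubility; that is exactly why the 100% statements need real arithmetic input.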
I'm going to define Châtelet surfaces a bit later. But let me mention another result that I really like; it was published, I believe, last year in the Annals, by Tim Browning, Pierre Le Boudec and Will Sawin. They proved that some very difficult varieties in a very small number of variables satisfy the Hasse principle with probability 100%. One of their examples is homogeneous equations of degree four in five variables; another is homogeneous equations of degree five in six variables. These are really difficult: for the circle method to work you need, at the very, very least, the number of variables to be about two times the degree. And this result proves a conjecture of Bjorn Poonen, actually. Okay, so let me try to pose a well-defined question — what are we trying to do here? We fix an infinite family of equations, and we want to follow the conjecture of Colliot-Thélène. So we fix an infinite family of smooth projective rationally connected varieties with rational coefficients, and we assume that the generic Brauer group is trivial. Why? Because you could take a single equation for which the Hasse principle fails — there are such counterexamples — and then take some infinite family of twists of this equation; but then the generic Brauer group would not be trivial. So first we clear out any possible algebraic obstructions. Then we order the integer coefficients of these equations by absolute value, and the main question is: does the Hasse principle hold for typical varieties in this infinite family? Okay, that is, philosophically, the main question. Okay, so what is the second moment method? For the Hasse principle, the first example I could find — possibly there is an older one — is from the 1920s: Hardy and Littlewood used GRH to show that the Goldbach conjecture holds for 100% of even numbers.
So they proved that if you look at a random even number between one and X, then with 100% probability it is a sum of two primes. In my mind this is a Hasse principle question, although you can argue about that. Okay, so what are the philosophical steps of this second moment method for proving existence? Say you fix a subset of Z^n, and you ask: is some polynomial equation soluble in this subset? So you want to solve the polynomial equation with every x_i in this subset — maybe it's the primes, maybe it's the cubes, it could be anything. The first thing you do is try to make a conjecture for the counting function, for the number of these solutions. So you ask how many integer vectors are solutions of this fixed polynomial equation, and typically the answer will be some simple function g of X — like X, or X over log X, or X squared — multiplied by an Euler product of p-adic densities. If the family is difficult it is not so easy to predict a conjecture, but sometimes you can just work out the major arcs — set the minor arcs equal to zero — and see what the major arcs give you; this is what Hardy and Littlewood always did. Okay, and then the main object in the second moment method is the following expression. You sum over the integer polynomials f whose coefficients have size up to H — H will go to infinity; it is the main parameter — and for every polynomial f you add the square of something: basically, the square of the error term, which is the number of solutions minus the conjectured value from the previous step. The substantial step in this method is to prove that, on average over f, this square is much smaller than g(X) squared — by much smaller I mean little-o. Why is this useful?
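These steps can be seen numerically in the simplest possible setting, primes in arithmetic progressions: the square of the error term is small on average over the residue classes (a Barban–Davenport–Halberstam flavour), so by Chebyshev almost every class has close to the predicted count. The parameters and thresholds below are my own ad hoc choices for a quick sanity check.

```python
# Numerical sketch of the second-moment philosophy in the simplest case:
# count primes up to X in each progression a mod q and compare with the
# prediction pi(X)/phi(q).  The mean-square error is small, so by
# Chebyshev almost every class has close to the predicted count.

def primes_up_to(x):
    sieve = bytearray([1]) * (x + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(x ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i in range(x + 1) if sieve[i]]

X, q = 100_000, 101
primes = primes_up_to(X)
counts = [0] * q
for p in primes:
    counts[p % q] += 1
classes = list(range(1, q))              # residues coprime to the prime q
mean = sum(counts[a] for a in classes) / len(classes)
msq = sum((counts[a] - mean) ** 2 for a in classes) / len(classes)
print(round(mean, 1), round(msq ** 0.5, 1))
# root-mean-square error is far below the main term
assert msq ** 0.5 < mean / 2
```

The same shape of statement — mean-square error little-o of the main term squared — is what the method asks for over a family of polynomials.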
Because typically, for a random polynomial, this product of p-adic densities behaves like a constant on average. Then, from Chebyshev's inequality, you can show, just using this statement, that for almost all f the number of solutions is lower bounded by this Euler product times g(X). And the last step — which is very easy in more analytic problems but becomes harder and harder when you go to geometric problems — is to show that if your polynomial f has p-adic solutions, then each p-adic factor c_p is positive; and actually you have to prove that this product is not close to zero as a function of X. Okay, so if you do these three steps, the conclusion is that for 100% of polynomials f in your favorite family, if there are p-adic solutions, then there are integer solutions. Okay, so let me give you one very simple example of this second moment method. Let's recall Schinzel's hypothesis. I'm only going to state the very old version, already known to Bunyakovsky. It states that if you give me any irreducible polynomial in one variable with positive leading coefficient, and you assume there is no p-adic obstruction — in particular, no fixed prime divisor — then the conclusion is that it represents infinitely many primes; equivalently, it represents at least one prime. As in many problems in analytic number theory, infinitely many is equivalent to mere existence. Okay, so this is a very hard problem. Only one case is known, when you have a polynomial of degree one, and this is Dirichlet's theorem on primes in progressions. And here, what is the second moment expression you would like to work with? Well, you average over integer polynomials of a prefixed degree, say degree 10 — looking only at irreducible polynomials of degree 10, with all coefficients going up to H — and you add the square of the error term. Now, luckily, for this error term you don't have to make a conjecture.
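The "no fixed prime divisor" condition in the Bunyakovsky statement is easy to test in practice: the fixed divisor is just the gcd of a few polynomial values. A tiny sketch (the `fixed_divisor` helper name is my own):

```python
from math import gcd

# The Bunyakovsky local condition: an irreducible polynomial can still
# fail to represent primes if all its values share a fixed prime divisor.
# The fixed divisor is the gcd of finitely many values; a handful of
# consecutive values is enough in practice.

def fixed_divisor(f, samples=20):
    g = 0
    for n in range(samples):
        g = gcd(g, f(n))
    return g

print(fixed_divisor(lambda n: n * n + n + 2))  # 2: always even, so no primes > 2
print(fixed_divisor(lambda n: n * n + 1))      # 1: no such local obstruction
```

So n² + n + 2 is irreducible yet represents at most one prime, while n² + 1 passes the local test — which is exactly why it is the classic open test case for the hypothesis.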
This is the Bateman–Horn conjecture: the error term is the one in the Bateman–Horn prediction, so we know what these p-adic factors are. I have included p = infinity here, because c_infinity is 1/d. Okay, and then, together with Alexei Skorobogatov at Imperial College London, we proved that on average this error term has a logarithmic saving, when the counting parameter x goes up to a fixed power of log H. And this is already enough if you only care about existence: if you follow the steps on the previous slide, you can already prove that the Schinzel hypothesis — in my mind, the Hasse principle for this problem — holds for 100% of polynomials. Okay, and using ideas of Colliot-Thélène from the 70s, you can derive directly from this corollary results about diophantine equations. One very, very simple example: say you are looking at this equation — a sum of two squares equals a polynomial. This is the simplest example of a Châtelet equation, and you want to solve it with an integer t. Directly from this corollary, and using ideas of Colliot-Thélène, Sansuc and Swinnerton-Dyer, we proved that the Hasse principle holds with positive probability for this surface. The problem is that there is always some loss of information when you use prime-value problems to solve Hasse principle problems; I'm going to talk about this a bit later. So let me very briefly talk about the idea of the proof. Say you only want to do the first moment — we shouldn't go into too many details. You are averaging over all integer polynomials of fixed degree d, and you are adding the counting function of the number of prime values of the polynomial. So we freeze the value of the polynomial P(n): we sum over all integers k that could be the value P(n), and then, naively, we add the indicator function of the event that this integer k is equal to the value of the polynomial.
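For P(n) = n² + 1 the Bateman–Horn prediction can be computed numerically: count roots of P mod p to form the truncated Euler product of p-adic factors, and compare with the actual count of prime values, with the 1/d = 1/2 archimedean factor included. The cut-offs below are my own choices; this is a back-of-envelope check, not a statement of the conjecture's constants.

```python
import math

# Truncated Bateman-Horn singular series for P(n) = n^2 + 1, compared
# with the actual count of prime values.  The factor 1/2 is the
# c_infinity = 1/d mentioned above (here d = 2).

def primes_up_to(x):
    s = bytearray([1]) * (x + 1)
    s[0:2] = b"\x00\x00"
    for i in range(2, int(x ** 0.5) + 1):
        if s[i]:
            s[i * i :: i] = bytearray(len(s[i * i :: i]))
    return [i for i in range(x + 1) if s[i]]

def singular_series(limit=10_000):
    c = 1.0
    for p in primes_up_to(limit):
        roots = sum(1 for n in range(p) if (n * n + 1) % p == 0)
        c *= (1 - roots / p) / (1 - 1 / p)
    return c

def is_prime(m):
    if m < 2:
        return False
    return all(m % p for p in primes_up_to(int(m ** 0.5)))

N = 1500
actual = sum(1 for n in range(1, N + 1) if is_prime(n * n + 1))
predicted = singular_series() * sum(1 / (2 * math.log(n)) for n in range(2, N + 1))
print(actual, round(predicted))  # same order of magnitude
```

The product converges slowly (the terms oscillate around 1), but the truncation already puts prediction and count within a modest factor of each other.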
Okay, and then what we do is use the standard circle method identity to detect whether an integer is equal to zero. If you do a few simple manipulations, you get an integral over basically the unit circle of some exponential sums. Now these are very easy exponential sums, because the coefficients vary in an interval, so they give you linear exponential sums — to be precise, they give you the Dirichlet kernel — and the primes get separated from the polynomials; that's why you do the circle method. What is left from the primes is the classical Vinogradov exponential sum: you are summing an exponential whose argument k runs over the primes in a big interval. Okay, and roughly what goes on is: if alpha is very close to a rational, you use the Siegel–Walfisz theorem to control the sum, and if alpha is not close to a rational, already the estimates of Vinogradov are enough for this problem; you can upper bound this sum. Okay, and then we uploaded this paper, and after a few months there was an online meeting at Mittag-Leffler with a lot of talented PhD students — it was a good place for me to feel very old. Okay, so there was an open problem session and I asked a little problem: can we prove better error terms and better results if we don't care about the primes, if we only want to control the Möbius function? Here there is a big conjecture, the Chowla conjecture, and it says that if the polynomial P(n) is not a constant times the square of a polynomial, then the Möbius function evaluated at this polynomial should take random values plus or minus one — so this sum should be little-o of x. Okay, and then Joni Teräväinen very quickly came up with a very general method which applies not just to Möbius, not just to primes, but to any function with some general properties. Okay, so what are the properties?
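The detector behind the circle method is the orthogonality identity: the integral of e(alpha k) over the unit circle is 1 if k = 0 and 0 otherwise. Its discrete analogue is trivial to verify in code, which is all this sketch does:

```python
import cmath

# The circle-method detector: the discrete orthogonality relation
# (1/Q) * sum_{q<Q} e(2*pi*i*q*k/Q) equals 1 if k == 0 (mod Q) and 0
# otherwise -- a finite stand-in for the integral over the unit circle.

def detector(k, Q=64):
    return sum(cmath.exp(2j * cmath.pi * q * k / Q) for q in range(Q)) / Q

for k in (0, 1, 5, 64, -64, 37):
    expected = 1.0 if k % 64 == 0 else 0.0
    assert abs(detector(k) - expected) < 1e-9
print("orthogonality verified")
```

Inserting this indicator is what separates the prime variable from the polynomial coefficients in the first-moment computation described above.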
So: you have a function defined on the integers that has zero average in arithmetic progressions and in short intervals. I will explain in a few minutes what exactly that means, but let's just leave it at that for now. Then fix any number D; this will be the degree of the random polynomial. Then for 100% of degree-D integer polynomials, this sum — one over x times the sum of f over the polynomial values — converges to zero. So in particular you could say the Chowla conjecture holds on average, for generic, for typical polynomials. Now, as stated, the assumptions and the conclusion of this theorem are not well defined at all, so let me tell you what they really mean. So the actual assumption in the theorem is that there exist two constants A and delta such that, for every modulus q going up to a power of x — this is much stronger than Siegel–Walfisz — the function has zero average when you look along a progression, and even in short intervals: the intervals go from x up to x plus a small power of x, specifically x to the one minus delta. And okay, so x to the one minus delta is just the length of the interval, and you normalize by the number of terms. So you are asking that your function f has zero average in this very strong sense, and you want a logarithmic saving; and you only ask it for almost all q — by almost all q I am excluding, basically, Siegel zeros. This kind of assumption is difficult to prove, but it is doable, using Huxley's zero density theorems and other deep ideas. So it is difficult to verify this assumption, but it can be done in some cases. Okay, and what is the actual conclusion? Well, if you take x, the length of the sum, to be a small power of H — namely the exponent one over two times the degree — then the second moment of these sums will be small, so you get a logarithmic saving. Okay. So let me give you a few comments.
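The Chowla-type behaviour along a fixed polynomial can at least be sampled numerically: compute μ(P(n)) for a polynomial that is not a constant times a square and watch the average sit near zero. This is a toy experiment of mine with a trial-division Möbius function, not evidence for the conjecture.

```python
# Toy check of the Chowla-type behaviour discussed above: the Mobius
# function along values of a polynomial that is not a square times a
# constant should average out.  Trial-division Mobius, small range only.

def mobius(m):
    res, p = 1, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # square factor kills the value
            res = -res
        p += 1
    if m > 1:
        res = -res
    return res

X = 2000
vals = [mobius(n * n + n + 1) for n in range(1, X + 1)]
avg = sum(vals) / X
print(round(avg, 3))
assert abs(avg) < 0.3             # no visible bias; nothing stronger is claimed
```

Of course a numerical average proves nothing about the conjecture — the theorem above is about averaging over the polynomial as well, which is what makes the statement tractable.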
Actually, this exact theorem, if you look at the paper, is not there, because we have a more general version where you don't have to randomize every coefficient: you can fix almost every coefficient except two, like what you would do, say, in Barban–Davenport–Halberstam theorems. So we have a more general version of this. And of course you can ask: why short intervals, where do they come from? The reason is very simple, actually. Say you are looking at a random degree-d polynomial: write the value P(n) as a random coefficient c_d times n to the d, and so on, down to a constant coefficient c_0. Okay. Now say you fix the value of the input n, the integer n, and you freeze every coefficient except c_0, the constant coefficient, and take random values for c_0 from one up to H. Then what happens to this integer P(n)? It moves around some fixed integer — namely the part of the polynomial that does not involve c_0 — and the interval around this center has size H, because of course that is the typical size of c_0. And now if x is a power of H, then this becomes a very short interval: the length is much smaller than the center. Okay. And then you could also ask: why do we care about arithmetic progressions? With the previous approach, the circle method approach, they naturally come up when you look at the major arcs, but here they come up in another way. When you actually do this, the second moment, you will have a system of two polynomial equations. Now say you fix two different integers n and m, and you also fix every coefficient except the constant c_0 and c_1 — so now everything is fixed except c_0 and c_1.
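The short-interval phenomenon just described is elementary enough to see in a few lines: freezing everything but c₀ translates P(n) through an interval of length 2H+1 around a huge fixed center. The specific numbers below are my own illustrative choices.

```python
# Why short intervals appear: with n and the top coefficients frozen,
# varying the constant term c0 in [-H, H] sweeps P(n) through an interval
# of length 2H + 1 around a center of size about c_d * n^d -- much larger
# than the interval itself once n is a power of H.

H, n = 100, 50
top = 7 * n ** 3 + 2 * n          # frozen part of a cubic polynomial
values = {top + c0 for c0 in range(-H, H + 1)}

center, length = top, 2 * H + 1
assert values == set(range(center - H, center + H + 1))
print(center, length)             # the interval is tiny compared to its center
assert length < center / 100
```

So the zero-average hypothesis in short intervals is exactly what lets you average over the constant coefficient.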
These are the two variables, each between minus H and H. Okay. Then if you take the difference of the two polynomial values, they are congruent modulo n minus m. This is trivial because c_0 cancels, and what remains is a multiple of n minus m. And so n minus m will be the modulus of a congruence in a sum of length basically a power of x — you start to fall into the dangerous zone of Bombieri–Vinogradov. Okay. And if you follow the previous approach I had with Alexei Skorobogatov, you would only be able to do this with x a logarithmic power of H. So one of the improvements is that we prove these prime number conjectures and the Chowla conjecture on average, but with the size of the sums much bigger. Okay, so let me be more precise. One of the theorems we have — oh, sorry, is somebody speaking? Okay. All right. So let me give the actual theorem for the Möbius function. You fix an integer d, and you look at random integer polynomials of degree d, letting all d plus 1 coefficients be between minus H and plus H. Then this estimate here, which proves the Chowla conjecture, holds when x goes up to a somewhat large power of H. And as you may know from other second moment problems, on GRH you cannot improve much: if you use GRH, you just increase the admissible power of x a little bit, by a fixed constant — you would be able to go up to H to the one over d. Okay. And this was the question I had in mind when I asked Joni Teräväinen about this problem. And soon after this, Tim Browning from IST Austria near Vienna joined the project, because he found a way to apply this general tool — about general functions with zero average — to Hasse principle problems. Okay. And now there is a lot of notation: I have to define these diophantine equations, the Châtelet equations. Okay.
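The congruence invoked above — P(n) ≡ P(m) mod (n − m) for any integer polynomial — can be checked directly; it follows from n^k ≡ m^k mod (n − m) for every k. A quick randomized verification:

```python
import random

# The elementary fact behind the progressions above: for any integer
# polynomial P and integers n, m, the difference n - m divides P(n) - P(m)
# (since n^k == m^k mod (n - m) for every k >= 0).

def poly_eval(coeffs, n):
    return sum(c * n ** k for k, c in enumerate(coeffs))

random.seed(0)
for _ in range(200):
    coeffs = [random.randint(-50, 50) for _ in range(6)]   # random degree-5 P
    n, m = random.randint(-100, 100), random.randint(-100, 100)
    if n != m:
        assert (poly_eval(coeffs, n) - poly_eval(coeffs, m)) % (n - m) == 0
print("congruence holds")
```

In the second moment this is what forces congruence conditions modulo n − m, with n − m as large as a power of x — hence the Bombieri–Vinogradov danger zone.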
So fix any number field K, and denote the degree of K over Q by e; denote the ring of integers, as usual, by O_K; and denote by N_{K/Q} the norm of the number field. Okay. Now fix any integral basis of the ring of integers — it doesn't matter which — omega_1 up to omega_e. If you don't like any of this, just think of K as Q adjoined i, Q adjoined the square root of minus one, and take omega_1 to be 1 and omega_2 to be the square root of minus one; it's the same proof and the same ideas throughout. Okay. And then the norm form is defined in the following way: you take N(z_1 omega_1 + ... + z_e omega_e) — you just define it to be the norm of this element. So this generalizes z_1 squared plus z_2 squared, which is what you get when you start from Q adjoined the square root of minus one. Okay. And what people call a Châtelet equation is the following thing: you take a norm form in e variables — this will be an integer polynomial, like z_1 squared plus z_2 squared; it has degree e, and the variables are the integer variables z_1 up to z_e — and you set it equal to an integer polynomial in t. That's what people call a Châtelet equation. Okay. So if K is a quadratic number field, this is z_1 squared minus a constant times z_2 squared equals a polynomial, and this is basically an infinite family of conics; in geometry this is professionally called a conic bundle surface. Okay. And there is quite a lot of work from the 80s on the Hasse principle for these equations. The reason is that the first nice counterexamples to the Hasse principle were made for equations of this kind, by Iskovskikh in the 70s: his counterexample was a sum of two squares equal to a quadratic polynomial times another quadratic polynomial. Some of these polynomials do not satisfy the Hasse principle.
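For a quadratic field the norm form can be computed as the determinant of the multiplication-by-α matrix on the integral basis; a minimal sketch for K = Q(√d) with basis (1, √d), which is my own illustration of the definition just given:

```python
# Norm form of a quadratic field K = Q(sqrt(d)) with basis (1, sqrt(d)):
# multiplication by z1 + z2*sqrt(d) acts on the basis by the matrix
# [[z1, d*z2], [z2, z1]], and the field norm is its determinant.

def norm_form(d, z1, z2):
    return z1 * z1 - d * z2 * z2       # det of the multiplication matrix

# d = -1 recovers the sum of two squares, the norm form of Q(i).
assert norm_form(-1, 3, 4) == 25

# Norms are multiplicative, as field norms must be:
# (z1 + z2*sqrt(d)) * (w1 + w2*sqrt(d))
#   = (z1*w1 + d*z2*w2) + (z1*w2 + z2*w1) * sqrt(d)
d, (z1, z2), (w1, w2) = 2, (3, 1), (5, 2)
prod = (z1 * w1 + d * z2 * w2, z1 * w2 + z2 * w1)
assert norm_form(d, z1, z2) * norm_form(d, w1, w2) == norm_form(d, *prod)
print("norm form checks out")
```

For a general degree-e field the same recipe applies: write multiplication by z_1ω_1 + ... + z_eω_e as an e×e integer matrix in the basis and take its determinant.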
And Manin, in his ICM talk in 1970, tried to give an algebraic explanation: he invented the Brauer–Manin obstruction — and then he never worked on it again. The subject was taken up by Colliot-Thélène and — sorry, this is a typo — by Harari and Skorobogatov, and they proved a lot of results. Yeah. So the Châtelet equation is important because you basically have a surface that you can break into an infinite family of curves, and usually for these curves you can prove the Hasse principle. So this is an effort to prove the Hasse principle basically by induction on the number of variables; that's why people in geometry care about conic bundles and these Châtelet equations. But still, many things are basically open. Okay. So one of the important examples is the following. If you start with a cyclic number field extension, and if you give me an irreducible polynomial — so forget these weird counterexamples to the Hasse principle, just take an irreducible P — then it was proved by Colliot-Thélène, Harari and Skorobogatov that there is no Brauer–Manin obstruction for rational points. What this means is that the algebra predicts that the Hasse principle should always hold for these equations, if you have a nice number field and an irreducible polynomial. Okay. So let's talk about a few of these results — there are actually quite a lot of them, so I'm going to focus on the irreducible polynomial equations. The Hasse principle has been known to hold in the following cases — actually, that's another typo: I should have said the Brauer group controls the Hasse principle in the following cases. It is when you have a quadratic norm equal to a low-degree polynomial: the polynomial can have degree only up to four. And this was a gigantic effort, using a lot of algebraic methods, in two papers in Crelle in the 80s, by Colliot-Thélène, Sansuc and Swinnerton-Dyer. Okay.
So you can ask what happens if you have other norms, coming from, say, cubic or quartic number fields. Salberger and Colliot-Thélène, again in the 80s, proved that if you have a cubic norm and an irreducible cubic polynomial, then the Brauer group controls the Hasse principle. Yeah. And then if you ask what happens for completely general norms, the Hasse principle is known to be controlled by the Brauer group only if the degree of the polynomial goes up to two. So you see, these questions are difficult: you cannot take arbitrary degrees — somehow it becomes much, much harder when the degree gets bigger. And these results were proved in a paper by Browning and Heath-Brown in GAFA, and then, a few years later, by descent methods, proved again by Derenthal, Smeets and Wei. Okay. And there is a very general result, proved in '94 by Colliot-Thélène and Swinnerton-Dyer; this is basically one of the guiding results in this area. It assumes a heavy conjecture — it assumes Schinzel's hypothesis — and it shows that all Châtelet equations satisfy the Hasse principle whenever there is no Brauer obstruction, for all number fields and all polynomials — actually, no, all polynomials, not just irreducible ones. Yeah. Okay. So let me give you the last result, from this paper with Tim Browning and Joni Teräväinen. It says the following: fix any number field, and fix a positive integer D — this will be the degree of a random integer polynomial. Then you look at the random Châtelet equation, and basically we prove the Hasse principle for 100% of these equations. Okay. To be more precise: ordering polynomials of degree D by the size of their coefficients, and focusing only on polynomials with positive leading coefficient, the Châtelet equation from the previous slide satisfies the integral Hasse principle. Okay. So there are a few comments here. All the previous results I talked about concern the Hasse principle for rational points.
In this paper we prove things about the integral Hasse principle, and this is a more recent area. What is the Brauer group for the integral Hasse principle, for example? This is not so well understood. Colliot-Thélène and his collaborators have studied this, and he has shown, for example, that if you pick a completely random polynomial, then this Châtelet equation should have no Brauer obstruction for the integral Hasse principle. And actually it is a bit more subtle. Jennifer Berg had a very interesting result: she proved that there are some simple equations — a sum of two squares equal to some degree-four polynomial, so equations in low degree — where the integral Hasse principle fails, but the Brauer obstruction does not explain the failure. So, yeah, you cannot really expect a 100% Hasse principle for all equations. And somehow restricting to positive leading coefficient kills all these problems that Jennifer Berg found — her counterexamples had to do with solubility in the real numbers. Okay. And actually I should have mentioned there are a few more results: there are papers by Mitankin which prove that if you assume Schinzel's hypothesis, then you can prove some version of the Hasse principle for integer points as well. And there are other papers. But in general the integral Hasse principle is not so well studied, I would say. And as always, okay, you can complain: we only look at random polynomial equations; we don't prove anything for any given equation. But the advantage is that nothing is known for these equations for irreducible polynomials when the degree is six or seven or anything bigger than that. Okay. So let me talk a bit, in the remaining, I guess, ten minutes, about the proof in the paper with Tim Browning and Joni Teräväinen. All right. So the idea is to try and prove a second moment estimate.
Apply the second moment method directly to the counting function for integer solutions — don't go through prime values, as I did in my previous paper with Alexei Skorobogatov. Okay. So let's define the main arithmetic function. Again, if you don't like the notation, just think of this function r(n) as counting the number of representations of n as a sum of two squares. For a general number field, you define it as the number of integer solutions of the equation "norm form equals n"; and to make things simple, we restrict the summation to a box, just to avoid cusps, because the counting is a bit awkward at the cusps. Okay. And the main sum we want to study is the following: you sum this r function at the polynomial values P(n), averaging over all integers n up to x. So if you want to prove that P(n) takes a value which is a norm from this number field, all you need to prove is that this sum — this C_P(x) — is positive. All right. And we will prove slightly more: we will prove that it goes to infinity polynomially fast. This is important if you want to prove statements stronger than the existence of solutions. In the paper we actually don't just prove the Hasse principle; we prove a kind of Zariski density result: for 100% of these Châtelet equations a weak form of Zariski density holds, which is slightly more than the Hasse principle and weak approximation. Okay. There is one little problem. You want to use the earlier tool about functions with zero average, but these r functions have positive average, so it is not obvious how to use it. Well, you have to use a model: a function, let's call it r-hat, that has the same average as r, even when you look in small intervals and in arithmetic progressions.
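For K = Q(i) both the function r(n) and the sum C_P(x) can be computed directly by brute force, and positivity of the sum certifies an integer point on some fiber. This is only my own small illustration of the definitions:

```python
# r(n): number of representations n = x^2 + y^2 (all signs counted),
# and C_P(x) = sum over n <= x of r(P(n)).  If C_P(x) > 0, then some
# fiber z1^2 + z2^2 = P(t) has an integer point.

def r(n):
    if n < 0:
        return 0
    b = int(n ** 0.5) + 1
    return sum(1 for x in range(-b, b + 1) for y in range(-b, b + 1)
               if x * x + y * y == n)

assert r(25) == 12                 # (+-3,+-4), (+-4,+-3), (+-5,0), (0,+-5)

def C_P(P, x):
    return sum(r(P(n)) for n in range(1, x + 1))

P = lambda t: t * t + 1            # toy Chatelet fiber polynomial
total = C_P(P, 30)
print(total)
assert total > 0                   # t = 1 already gives P(1) = 2 = 1^2 + 1^2
```

The analytic work in the paper is of course about lower-bounding this sum for 100% of polynomials P, not computing it for one.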
And it turns out that many people in analytic number theory have worked on this kind of idea: there is the Cramér–Granville model. Basically what it does — maybe you have seen the Cramér model for the primes — is: given any multiplicative function, you try to replace it by looking at what happens at the small prime powers. All right. So, to be more precise, I will define r-hat of n to be a function gamma that takes into account p-adic solutions, times some function omega that has to do with the real density. Okay. So how is gamma defined? Here we do the W-trick. We look at this polynomial equation, "norm form equals n", modulo W, and we count solutions. And what is W? It is the product of all primes up to something that goes to infinity pretty fast — exponential of square root of log x — and you also have to allow the exponents to go to infinity, a bit like log log x. If you do the primes — the standard, old Cramér model — the exponents are always one; but if you want to prove Hasse principle theorems, you have to allow the exponents to go to infinity. This is so you can capture solutions coming from, say, modulo p squared, because Hensel lifting might only start from p cubed or p to the fourth. Yeah, and I'm not going to say much about the omega function; it is a real, archimedean analogue of this gamma — gamma is a local counting function. Okay, so let's see how one can use the tool from the previous slides, the tool about functions with zero average. You can show that the average of r-hat is equal to the average of r by using machinery coming from the associated Dedekind zeta function. I mean, already for the sum of two squares there is quite old work of Selberg and Hooley which gives a positive level of distribution.
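The local factor gamma can be computed by brute force for a tiny W: count solutions of z₁² + z₂² ≡ P(t) mod W and normalize by W², so that a "random" equation has gamma near 1. The choice W = 2·3·5 below is purely illustrative (in the actual argument W is a product of growing prime powers, as just described).

```python
# Brute-force version of the local density gamma in the W-trick: count
# solutions of z1^2 + z2^2 == P(t) (mod W) and normalize by W^2, so a
# "random" equation has gamma near 1.  Real proofs take W a product of
# prime powers up to exp(sqrt(log x)); here W = 2 * 3 * 5 for illustration.

W = 2 * 3 * 5

def gamma(P):
    count = sum(1 for z1 in range(W) for z2 in range(W) for t in range(W)
                if (z1 * z1 + z2 * z2 - P(t)) % W == 0)
    return count / W ** 2

g = gamma(lambda t: t * t + 1)
print(round(g, 3))
assert 0.3 < g < 3                 # bounded away from 0: no obstruction mod 30
```

By the Chinese remainder theorem this gamma factors as a product over the primes dividing W, which is how one actually analyzes it.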
And if you want to work with completely general number fields, there are a lot of algebraic issues. And luckily Tim had a paper with Heath-Brown where they proved the Hasse principle for some Châtelet-type equations with general norms, so we could use some of that machinery. Okay, so if you use all of that, then what you get is that this Cramér–Granville model r-hat, minus the function r, has zero average, even in progressions and in short intervals.

Okay, what does it actually mean for us? It means that this second moment will be small. So what is this moment? You are summing over all degree-d polynomials whose coefficients go up to A, and you take the square of C_P minus Ĉ_P. And what is Ĉ_P? It is defined like C_P(x), but with r replaced by r-hat. So what is this sum? You are summing this fake approximation r-hat at polynomial values for n from 1 up to x. It is a bit like summing the Cramér model, or like the sum you would form if you tried to make a Bateman–Horn prediction using the Cramér model.

Okay, so for our theorem it suffices to prove a lower bound for this average of the r-hat function. And so the goal for the remainder of the talk is to give an indication of how you can prove that for 100% of admissible polynomials. These are polynomials whose associated equation has p-adic solutions for every prime p. So you have to prove that for 100% of these integer polynomials, the Châtelet equation is such that this sum grows like a function of x, roughly like x over some power of log. Okay, and what is nice about this r-hat function is that it only involves things that happen locally, things that happen modulo some small number W.
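In symbols (my own notation, reconstructing the displayed formula from the description above), the second moment statement is that

```latex
\frac{1}{A^{d+1}}
\sum_{\substack{P = a_d t^d + \dots + a_1 t + a_0 \\ |a_i| \le A}}
\bigl|\, C_P(x) - \widehat{C}_P(x) \,\bigr|^2
\;=\; o\bigl(x^2\bigr),
\qquad
\widehat{C}_P(x) := \sum_{n \le x} \widehat{r}\bigl(P(n)\bigr),
```

so that for almost all admissible P the true count C_P(x) is within o(x) of the model count Ĉ_P(x), and a lower bound for Ĉ_P(x) of order x transfers to C_P(x).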
So you can study its average, even over polynomial values, in the same way that you would use the Rosser–Iwaniec sieve to work with the Cramér model over polynomial values. So this step is not very difficult. You can prove that if the polynomial P is everywhere locally soluble, then this sum Ĉ_P(x) is bounded below by a nice constant times x. And this constant could be zero if there are no p-adic solutions. How is it defined? It is defined by the density of solutions of the original Châtelet equation modulo W. So you're looking at all vectors z_1, ..., z_d and t modulo this number W that goes to infinity with x, and you want to prove that this density is bounded away from zero.

So by Markov's inequality, you can do the following thing. You want to bound the probability that this density of solutions, call it sigma, is close to zero. So how would you do that? Well, the number of degree-d polynomials modulo W such that this p-adic density is at most, say, 1/log x can, by Markov's inequality, be bounded above by a power of 1/log x times the following sum: for every polynomial modulo W, so d+1 elements modulo W forming the polynomial P(t), you add 1 over this density sigma. Of course, this density could be zero if there are no p-adic solutions, but you only do this for everywhere locally soluble polynomials, and for those it is easy to prove that the density is non-zero.

Okay. So now the goal is to try and bound this weird expression. I mean, in some areas of analytic number theory you average singular series; it's not uncommon to see this kind of expression, averages of p-adic densities. However, in those areas you average the density itself, so you would sum sigma, not 1 over sigma. So this is a bit unusual. Okay.
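The counting step above is just Markov's inequality applied to the random variable 1/sigma. Stripped of the arithmetic, it looks like this (a generic sketch with made-up densities, not the actual sigma of the talk):

```python
def count_small(densities, eps):
    # how many of the densities sigma are at most eps
    return sum(1 for s in densities if s <= eps)

def markov_bound(densities, eps):
    # Markov's inequality: #{sigma <= eps} = #{1/sigma >= 1/eps}
    #                      <= eps * sum over P of 1/sigma_P
    return eps * sum(1.0 / s for s in densities)
```

So an upper bound on the average of 1/sigma (the "weird expression" of the talk) immediately bounds the number of polynomials whose local density is dangerously small.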
So in any case, you can give some kind of Hensel-lifting argument, and you can prove that this 1/sigma is large precisely when the first solutions modulo primes come from very high exponents. And this is reasonable to expect: if there are no solutions modulo p, no solutions modulo p^2, no solutions modulo p^3, and the first solutions you can lift come from modulo p^10, then this sigma will be small. And you can prove that this is an if and only if. Okay. And this means in particular that the polynomial P(t) and its derivative have a common root modulo a large power of the prime p.

And the last tool we use is the Igusa zeta function. So if you have a random polynomial with d+1 coefficients, then you can look at the discriminant. Now this is a fixed polynomial, right? Once you randomize the coefficients of P, the discriminant is not random anymore, because its variables are the coefficients of the random polynomial. So you can use estimates on the solutions of polynomial congruences, applied not to the random polynomial but to the discriminant of P. Okay. And this discriminant of P is a polynomial in the coefficients of P that is in general very complicated, and there's not much you can do with it directly. So you have to use some general theory which tells you that, for any integer polynomial in any number of variables, homogeneous or not, irreducible or not, the number of solutions modulo high prime powers is not so big. And this comes, basically, from the first pole of the Igusa zeta function. And actually in the paper we use an explicit version of this result from a paper of Lillian Pierce and Damaris Schindler, I think. Yeah. And then this means that this blue sum will not explode.
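Here is a toy illustration of the kind of count the Igusa zeta function controls: the number of roots of a fixed polynomial congruence modulo a high prime power p^k (a one-variable stand-in of my own; the actual discriminant in the talk is a polynomial in all d+1 coefficients of P):

```python
def roots_mod(f, m):
    # number of x mod m with f(x) ≡ 0 (mod m), by brute force
    return sum(1 for x in range(m) if f(x) % m == 0)

# For f(x) = x^2 and m = p^k the roots are the multiples of
# p^ceil(k/2), so the density of roots, roots_mod(f, p^k) / p^k,
# decays as k grows -- the qualitative behaviour the first pole
# of the Igusa zeta function quantifies in general.
```

For instance with f(x) = x^2 and p = 3: modulo 3^2 there are 3 roots (density 1/3), while modulo 3^4 there are 9 roots (density only 1/9).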
It will have a very small value, because 1/sigma is big only when the discriminant has many roots modulo high prime powers. And therefore this probability, the probability that these densities are small, will be close to zero. So this proves the main theorem: the Hasse principle holds for 100% of these Châtelet equations.

So just very quickly to close, let me summarize today's talk. The first thing I'd like to say is that the second moment method is a new tool you can use, via this result about arithmetic functions with zero average. The second point is that using this tool is complicated: you have to choose some smart model, like the Cramér–Granville model here, or maybe some other additive model in other problems. And the last point, the main theorem, is that the Hasse principle for integer points holds for 100% of these Châtelet equations. Yeah, and that's all I have to say.