Thank you very much for the invitation; it's a pleasure to give this talk. I will speak about joint work with Daniel Fiorilli, and I will focus on higher moments of primes in arithmetic progressions. As many people here know, primes in arithmetic progressions are counted using the function ψ(x; q, a): the sum of the von Mangoldt function Λ(n) over n ≤ x with n ≡ a (mod q), where of course we take a coprime to q. What we expect is that this quantity is asymptotic to ψ(x)/φ(q), that is, the sum without the congruence condition divided by φ(q). This is known uniformly in the range where q is at most a power of log x. Under the Generalized Riemann Hypothesis — I mean the Riemann Hypothesis not only for the zeta function but also for the L-functions associated to Dirichlet characters — we can go up to q a little less than the square root of x. And following Montgomery, we can expect that the error term is at most a small power of x times the square root of the expected value, that is, of x/φ(q). Of course, we cannot prove such a result; we will see what we can do. What we will do is study the equidistribution via moments, summing over a coprime to q. So I define M_n(x; q) as the mean value, over a coprime to q, of the nth power of the difference between ψ(x; q, a) and the expected value. For n = 2 this is a variance, and there are results of Barban, Davenport and Halberstam, and then of Montgomery and Hooley, which provide an asymptotic relation once you average over q: the sum (1/Q) Σ_{q ≤ Q} φ(q) M_2(x; q) is asymptotic to x log Q for large Q, say Q between x/(log x)^A and x. And you see, in analytic number theory it is easier when you do an average over q.
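As a concrete illustration of these definitions (the code and function names below are mine, not from the talk), here is a small Python sketch that computes ψ(x; q, a) from the von Mangoldt function and compares each residue class to the expected value ψ(x)/φ(q):

```python
from math import gcd, log

def von_mangoldt(n):
    """Λ(n) = log p if n = p^k for a prime p, and 0 otherwise."""
    if n < 2:
        return 0.0
    for p in range(2, n + 1):
        if n % p == 0:          # p is the smallest prime factor of n
            m = n
            while m % p == 0:
                m //= p
            return log(p) if m == 1 else 0.0
    return 0.0

def psi(x):
    """Chebyshev function ψ(x) = Σ_{n ≤ x} Λ(n)."""
    return sum(von_mangoldt(n) for n in range(2, int(x) + 1))

def psi_ap(x, q, a):
    """ψ(x; q, a) = Σ_{n ≤ x, n ≡ a (mod q)} Λ(n)."""
    return sum(von_mangoldt(n) for n in range(2, int(x) + 1) if n % q == a % q)

def phi(q):
    """Euler totient φ(q)."""
    return sum(1 for a in range(1, q + 1) if gcd(a, q) == 1)

# Each residue class a coprime to q should receive about ψ(x)/φ(q):
x, q = 1000, 7
expected = psi(x) / phi(q)
for a in range(1, q):
    print(a, round(psi_ap(x, q, a) - expected, 2))
```

The printed deviations ψ(x; q, a) − ψ(x)/φ(q) are exactly the quantities whose nth powers define M_n(x; q).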
But if you want smaller values of Q, there is a very nice recent result of Harper and Soundararajan: they prove a lower bound of size (1 − ε) · x · (log(Q²/x) − C log log x), uniformly in a large range, namely Q between √x times a power of log x and x. Their method is based on a second-moment argument — a very nice result. Very few results had been obtained for n ≥ 3. Hooley, in 1977, conjectured the following estimate: the mean value over q ≤ Q of φ(q)^{n/2} M_n(x; q) should be μ_n (x log Q)^{n/2}, with a little-o error term, not a big-O — so a genuine asymptotic relation. Here μ_n is the nth moment of the standard Gaussian law. With Daniel, we will try to convince you that in a large range of small q — say q between (log log x)^{1+ε} and x^{1−ε} — one can conjecture the asymptotic formula M_n(x; q) ~ μ_n (x log q / φ(q))^{n/2}. To support this conjecture, we will establish lower bounds for weighted moments, and I will present a probabilistic model based on the linear independence of the zeros of the Dirichlet L-functions. But I will underline the fact that we do not have to assume linear independence: the linear independence hypothesis I will present is just a guide to support the probabilistic model. OK, so let me explain which sum I will study. I take a function η and put a weight on the sum we studied previously: a factor (1/√n) η(log(n/x)). If you take η with compact support, the sum over n is restricted to n of size comparable to x, and the weight 1/√n is very important.
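The Gaussian moments μ_n appearing in Hooley's conjecture have the standard closed form μ_{2k} = (2k − 1)!! and μ_{2k+1} = 0; the quick cross-check below is my own illustration, not from the talk:

```python
from math import exp, pi, sqrt

def gaussian_moment(n):
    """μ_n = E[N(0,1)^n]: 0 for odd n, the double factorial (n-1)!! for even n."""
    if n % 2 == 1:
        return 0
    m = 1
    for k in range(1, n, 2):   # 1 * 3 * 5 * ... * (n-1)
        m *= k
    return m

# Cross-check μ_4 = 3 against a Riemann sum of t^4 times the Gaussian density.
h, T = 1e-3, 10.0
grid = [k * h for k in range(int(-T / h), int(T / h) + 1)]
num = sum(t**4 * exp(-t * t / 2) / sqrt(2 * pi) * h for t in grid)
print(gaussian_moment(4), round(num, 4))
```

So for instance μ_2 = 1, μ_4 = 3, μ_6 = 15 are the values entering the conjectured asymptotics for n = 2, 4, 6.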
The value that we expect is the same sum with only the coprimality condition that n is coprime to q, divided by φ(q); here χ₀ = χ_{0,q} is the principal character modulo q. What we want to study is the moment M_n(x; q; η): the mean value over a of the nth power of the difference between ψ_η(x; q, a) and ψ_η(x; χ₀)/φ(q). So it is very close to what we introduced previously. We take n ≥ 3, since the first moment is nearly 0 and n = 2 is the variance discussed above. Using the orthogonality of Dirichlet characters, we get a nice formula — let me take some time to explain it. The moment is a sum, over n-tuples of characters χ₁, …, χ_n modulo q, none equal to the principal character but with product χ₁⋯χ_n equal to the principal character modulo q, of products of the twisted sums ψ_η(x; χ_j) — the same weighted sums with the factor χ(n). Of course it is an elementary identity, but it is very important: it links the nth moment to sums associated to non-principal characters. Now I will explain how to link such a character sum explicitly to a sum over the zeros of the corresponding Dirichlet L-function. I take η even and differentiable, satisfying a bound for η and also for its derivative of the shape exp(−(1/2 + δ)|t|), and a corresponding decay bound for the Fourier transform, a little smaller than 1/(1 + |t|). Then we have the following explicit formula. Here I assume GRH, but of course one can set up such a formula without assuming GRH. It is a sum over the non-trivial zeros of the L-function. What is a non-trivial zero? Since I assume GRH, these zeros lie on the critical line.
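The character orthogonality this identity rests on can be verified directly on a tiny modulus. Below is a toy construction of mine (not from the talk) of the four Dirichlet characters mod 5, built from the primitive root 2; it checks that the full sum Σ_a χ(a) vanishes exactly for the non-principal characters:

```python
import cmath
from math import gcd

q = 5          # prime modulus; 2 is a primitive root mod 5
g = 2
# discrete log table: a = g^dlog[a] (mod q) for a coprime to q
dlog = {pow(g, j, q): j for j in range(q - 1)}

def chi(k, a):
    """k-th Dirichlet character mod 5: χ_k(g^j) = e^{2πi jk/(q-1)}; χ_0 is principal."""
    if gcd(a, q) != 1:
        return 0
    return cmath.exp(2j * cmath.pi * k * dlog[a % q] / (q - 1))

# Σ_{a mod q} χ_k(a) equals φ(q) for the principal character and 0 otherwise:
for k in range(q - 1):
    s = sum(chi(k, a) for a in range(1, q))
    print(k, round(abs(s), 10))
```

Averaging this relation over shifted arguments is exactly what turns the mean over a mod q into a sum over tuples of characters whose product is principal.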
So the real part is 1/2; in general, the non-trivial zeros are those whose real part is between 0 and 1. OK? So the first step towards an asymptotic for my nth moment is to replace each character sum by the sum over the non-trivial zeros, and this is why we introduce a probabilistic model. We take random variables Z_γ, indexed by the ordinates γ, such that the expectation of a product Z_{γ₁}⋯Z_{γ_n} is 1 if γ₁ + ⋯ + γ_n = 0 and 0 otherwise — a kind of orthogonality relation. Then we define H_n, written here on my slide: up to the sign (−1)^n and a power of φ(q), it is a summation over characters as previously — non-principal, but with the product of the n characters principal — and over the ordinates of the zeros, of the product of the Fourier transforms η̂(γ_k/2π) and of the random variables Z_{γ_1}, …, Z_{γ_n}. The expectation of H_n is then the same sum with the added condition that the sum of the imaginary parts of the zeros is 0. And we can show that H_n describes the limiting distribution of M_n. So now I will explain what I will study: moments of moments. What does that mean? You remember that M_n is a moment over a mod q; now I will take moments of this quantity. I introduce a function Φ in L¹(ℝ), even, and I will assume later that the Fourier transform of this function is non-negative — a very important assumption. Taking this moment, and replacing M_n by the sum over the zeros of the L-functions, we get a nice formula: a summation with a sign (−1)^{sn} and a power of φ(q), over arrays of s·n characters, each one non-principal.
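The orthogonality relation for the Z_γ is modeled on time averages of exponentials: (1/T)∫₀ᵀ e^{iωt} dt equals 1 when ω = 0 and decays like 1/(|ω|T) otherwise, mirroring E[Z_{γ₁}⋯Z_{γ_n}] = 1 exactly when Σγ = 0. A small numerical sketch of mine:

```python
import cmath

def time_average(omega, T, steps=100000):
    """Midpoint Riemann sum for (1/T) * integral of e^{i*omega*t} over [0, T]."""
    h = T / steps
    return sum(cmath.exp(1j * omega * (k + 0.5) * h) for k in range(steps)) * h / T

# omega = 0 corresponds to a vanishing sum of ordinates: the average is 1.
# omega != 0 corresponds to a non-vanishing sum: the average tends to 0 as T grows.
print(abs(time_average(0.0, 500.0)))
print(abs(time_average(3.7, 500.0)))
```

Here ω plays the role of γ₁ + ⋯ + γ_n, and the two cases reproduce the 1/0 dichotomy defining the model.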
And for each index μ ≤ s, the product of the n characters χ_{μ,j}, over j ≤ n, is the principal character. Then I have a second sum over the associated arrays of non-trivial zeros, with the weight I write η̂(γ): the product, over the sn entries, of the Fourier transforms η̂(γ_{μ,j}/2π). And then I also have the Fourier transform of Φ, evaluated at t times the sum of all the imaginary parts over 2π. There is also an error term, but I want to focus my talk on the main term. OK, is that clear? Is there any question about this formula? You see, it's very simple: you use the explicit formula, you introduce moments of moments, and you obtain such a formula. OK. So now I will compare M_n to its mean value, which I denote m_n: the limit, as t tends to infinity, of the average of M_n. And this is exactly the expectation in the probabilistic model. What we really want to study is the moment of M_n minus this expected value. And for technical reasons, it is very important to center at m_n here, rather than at the empirical expectation — I mean what you have in the limit, (1/T) times the mean value of Φ against the integral of M_n. It is a really important point, because without this choice we cannot prove our result. So I denote by V_s(n) such a centered moment, and I use the previous formula with the moments. You remember the main term; we then get a formula with the function I wrote in red, Δ_s, together with the weight I wrote previously — this product of the Fourier transforms of η.
And we have the nice formula. Here σ_μ is (1/2π) times the sum, over j from 1 to n, of the ordinates γ associated to χ_{μ,j}. Expanding the sth power, I get this formula, and what I can prove is a nice inequality: Δ_s is bounded below by a quantity Δ*_s which does not depend on t, equal to 1 if the total sum of the γ's is 0 while, for each μ, the block sum σ_μ — the sum of the γ's for j between 1 and n — is non-zero. So the complete sum is 0, but each term is not; if these relations fail, Δ*_s is 0. You can prove this very easily, but it is a key point: it is the reason why we center at m_n, OK? So now I use this inequality to get a lower bound for my moment of moments: (−1)^{sn} V_s(n) is bounded below by a sum which is, in a certain sense, the sum associated to the probabilistic model, plus an error term. I keep the same notation as previously: an array of characters, an array of imaginary parts of the non-trivial zeros of the associated L-functions, and this function Δ*_s, which is 0 except when the total sum vanishes while no block sum does. OK, so now let me explain hypothesis LI, the hypothesis of linear independence. It says that there is no non-trivial linear relation between the zeros of the L-functions. What is a trivial relation? For example, if 1/2 + iγ is a zero of the L-function associated to χ, a character modulo q, then the conjugate 1/2 − iγ is a zero of the L-function associated to χ̄, the conjugate character.
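The condition defining this lower-bound indicator — grand total zero, no block sum zero — is easy to write down explicitly; here is a toy version of mine, with floats standing in for the ordinates γ_{μ,j}:

```python
def delta_star(blocks, eps=1e-12):
    """1 if the grand total of all γ's vanishes while no block sum σ_μ does, else 0.

    `blocks` is a list of s lists, each holding the n ordinates γ_{μ,1}, …, γ_{μ,n}.
    """
    sigmas = [sum(b) for b in blocks]
    total = sum(sigmas)
    if abs(total) < eps and all(abs(s) >= eps for s in sigmas):
        return 1
    return 0

# Blocks (3, -1) and (-5, 3): block sums 2 and -2, total 0 -> contributes 1.
print(delta_star([[3.0, -1.0], [-5.0, 3.0]]))
# A block summing to 0 is excluded even though the total vanishes:
print(delta_star([[3.0, -3.0], [2.0, -2.0]]))
```

The second example is exactly the kind of term that centering at m_n removes from the moment of moments.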
So this means that if γ₁, …, γ_n are ordinates of non-trivial zeros of the L-functions associated to non-principal characters χ₁, …, χ_n, then under this assumption, whenever the sum of the γ's equals 0, n must be even — say n = 2m — and we can split the index set into two parts i₁, …, i_m and j₁, …, j_m such that, for each k, the two ordinates associated to i_k and j_k sum to 0 and the two associated characters are conjugate. Of course, the choice of these indices is not always unique, and that is the difficulty: there will be a lot of combinatorics to handle, precisely because the pairings are not unique. OK, so an easy consequence: if you believe GRH and linear independence, and you take an odd power of H_n in the probabilistic model, the expectation equals 0. Why? Because if you replace H_n by the sum over the ordinates of the zeros, you get a sum over an odd number of γ's, and their total can never vanish if we believe in linear independence. So this means we restrict our study to sums over an even number of zeros. OK, is that clear? So let me continue. Now, to get a lower bound, hypothesis LI isn't necessary. Why? Because in the sum we have non-negative terms, so we can simply discard every term that does not satisfy the pairing structure: we restrict the sum to those zeros for which, when the total sum vanishes, the zeros can be gathered in pairs such that the two ordinates sum to 0 and the characters are conjugate. And this is possible thanks to the positivity of the Fourier transforms of η and of Φ. So now we will introduce more combinatorics.
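The pairing structure forced by linear independence can be explored with a brute-force matcher (my own toy code, not from the paper): it tries to split a list of ordinates into pairs (γ, −γ), and shows in particular why an odd number of them can never produce a vanishing sum under LI.

```python
def has_opposite_pairing(gammas, eps=1e-12):
    """Try to split the multiset of ordinates into pairs (γ, -γ)."""
    if not gammas:
        return True
    if len(gammas) % 2 == 1:
        return False                     # odd count: no pairing, as for odd moments
    first, rest = gammas[0], gammas[1:]
    for i, g in enumerate(rest):
        if abs(first + g) < eps:         # candidate partner -γ for the first entry
            if has_opposite_pairing(rest[:i] + rest[i + 1:]):
                return True
    return False

print(has_opposite_pairing([2.5, -2.5, 0.7, -0.7]))   # pairs up
print(has_opposite_pairing([2.5, -1.5, -1.0]))        # sums to 0, but odd: no pairing
```

With repeated ordinates like [1.0, −1.0, 1.0, −1.0] the pairing is not unique, which is exactly the combinatorial difficulty mentioned above.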
It is a very difficult part of the paper, but I will try to explain how it works on an example. I introduce a set of involutions π of the index set {1, …, s} × {1, …, n} — the Cartesian product — and I require that they have no fixed points: whenever we have a relation of the type γ_{(μ,j)} = −γ_{π(μ,j)}, then (μ, j) ≠ π(μ, j), and we also have the conjugacy relation between the corresponding characters. Then I introduce sets J_{μ,ν}: the part of row μ that π sends into row ν, that is, the indices (μ, j) whose image under π has first coordinate ν. As π is an involution, there is a link between the cardinalities of J_{μ,ν} and J_{ν,μ}; we can prove some relations on these cardinalities, which I will not detail. We then gather the characters into products over the entries of each J_{μ,ν}, and what we can prove is that the main contribution comes from the configurations where, for each μ, there is exactly one ν such that J_{μ,ν} is non-empty. So consider an example array of characters, with all the relations you need on the Dirichlet characters written out: I have 16 products, but as the products come in related pairs, only 8 of them, and each product with μ = ν is the principal character. The conditions I get are four conditions, one on each product. In my example I assume that I only form products over non-empty sets — say J_{1,2} and J_{3,4}, the other ones being just empty products — and with this additional assumption, there remain only two relations to be checked.
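The dominant configurations here are fixed-point-free involutions of the index set — perfect matchings — and their number is the double factorial (2m − 1)!!, the same count behind the Gaussian moments. A brute-force check of mine on small sets:

```python
from itertools import permutations

def count_fixed_point_free_involutions(k):
    """Count involutions π of {0, …, k-1} with π(x) ≠ x for all x."""
    count = 0
    for p in permutations(range(k)):
        if all(p[p[x]] == x and p[x] != x for x in range(k)):
            count += 1
    return count

# (2m-1)!! = 1, 3, 15 for k = 2m = 2, 4, 6 — and 0 when k is odd:
for k in (2, 3, 4, 6):
    print(k, count_fixed_point_free_involutions(k))
```

This is only a sanity check on the counting; in the paper the involutions must in addition respect the character conjugacies, which is where the hard combinatorics lives.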
So the main contribution comes from this combinatorial situation, and afterwards it is easy to count the number of such arrays of characters. I can then restrict my counting to these involutions, and some pleasant calculations remain. So let me explain the result for the moments of moments in the probabilistic model. I have to estimate a sum involving |η̂|², but in the sum I also have the square of a multiplicity. As I am only interested in a lower bound, I use that this square is at least the multiplicity itself, and I obtain the stated lower bound. It is an important point: we expect every multiplicity to be 1, so we expect an equality and not an inequality. Now we can estimate such a sum using results on the zeros of the L-functions: it is around the mean value of the square of the Fourier transform times log q. And then we can state our main theorem. For the even moments with respect to R, we get a lower bound in terms of the variance V_n — the constant written here, of size about (log q)^n / φ(q)^{n+1} up to the factor α. For the odd moments it is much more intricate to state; I have written the result here. What you see is V_n to the power (2R+1)/2, with an additional factor 1/√φ(q). This means that as q tends to infinity, the odd moment is o(V_n^{(2R+1)/2}), which is what we expect from a Gaussian distribution. And I need to underline it: this result is without the LI hypothesis. But if we believe the LI hypothesis, we can check that this is the right estimate. It is a very difficult result to prove, because the combinatorics is very hard to handle, but we are able to prove an asymptotic relation for the moments of the model.
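The Gaussian behaviour these moment bounds point to can be seen in a toy Monte Carlo simulation (mine, not from the paper): a sum of many cosines with independent uniform phases — a caricature of the sums over zeros in the Z_γ model — has normalized fourth moment close to the Gaussian value μ₄ = 3.

```python
import random
from math import cos, pi

random.seed(1)
K, N = 30, 50000       # K cosine terms per sample, N samples

samples = [sum(cos(2 * pi * random.random()) for _ in range(K)) for _ in range(N)]

m2 = sum(x * x for x in samples) / N      # empirical variance, close to K/2
m4 = sum(x ** 4 for x in samples) / N
ratio = m4 / m2 ** 2                      # normalized fourth moment
print(round(m2, 2), round(ratio, 3))      # ratio close to the Gaussian value 3
```

Exactly as in the theorem, the even moments match the Gaussian prediction while odd moments of such a symmetric sum average out to a lower order.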
Moreover, we have results that are uniform in R and n, and that is a good point, because it allows us to prove omega results for the difference between ψ(x; q, a) and its expected value. I also have a further nice result: if you take a = 1, we do not need a moment or a sum over a — we can look at this single quantity and still prove a lower bound. Thank you for your attention.