Thank you very much, and thank you for the invitation. I'm very pleased to speak in this Number Theory Web Seminar; it's really a nice initiative that you took. So, let me speak about recent progress on this method in analytic number theory, which is a method for finding asymptotic estimates for sums of arithmetic functions.

Let me start with some historical background. Consider the set of primes, and let ω(n) denote the number of distinct prime divisors of an integer n. We are first interested in π_k(x), the number of integers up to x which have exactly k distinct prime factors; this is the local law of the distribution of the function ω. The first case is k = 1, and it essentially amounts to the prime number theorem, proved in 1896 by Hadamard and de la Vallée Poussin: π_1(x) is asymptotic to x divided by log x. In 1909, as a consequence of the prime number theorem, Landau proved that if k is fixed and x tends to infinity, then the probability that an integer up to x has k distinct prime factors is close to the corresponding probability for a Poisson law with parameter log log x. Here I use log with index 2 for the iterated logarithm log log, index 3 for the triple logarithm, and so on. The next step was made by Hardy and Ramanujan in 1917, in a very famous paper, where they showed that the upper bound suggested by the Poisson law is valid uniformly in k and x: there is no condition on k and x in this upper bound, although it was seen later that for very large k the bound is actually too large. Then Erdős, in 1948, showed that if k is close to the mean value log log x, within about the square root of the variance, then the Poisson asymptotic that Landau found still holds. We will see later that the only cases in which the constant in front of the Poisson law is equal to one are when (k − 1)/log log x tends to zero or to one — essentially the Landau case and the Erdős case.

The next work was done by Sathe in 1953–1954, who proved that if k/log log x is strictly less than 2, then we have an asymptotic formula, but with a constant depending on the ratio r = (k − 1)/log log x and given by an explicit formula. Sathe did not state it exactly in this way, but this is what it amounts to. The proof Sathe gave was incredibly complicated. He used an induction, starting with the square-free integers — π_k*(x) has the same definition but restricted to square-free integers — and then used the representation of an integer as the product of a square-free integer and a square. Altogether he wrote four papers to prove this, probably over 150 pages in total; you can imagine that, with this type of induction formula, keeping track of the remainder terms as k grows becomes incredibly complicated.

Selberg, just after the publication of Sathe's papers, found a very short and very elegant way to re-prove all of Sathe's results, and actually a bit more. His idea was to use Cauchy's formula to identify π_k(x) as the coefficient of z^k in the sum of z^{ω(n)} over n ≤ x. Now, if you look at the Dirichlet series with coefficients z^{ω(n)}, then, for Re s > 1, you can write it as an Euler product.
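Since I keep referring to the formulas on the slides, let me record the three displays in question — Landau's asymptotic, Selberg's Cauchy-integral identity, and the Euler product just mentioned. These are the standard statements, written out here for reference; the radius r_0 > 0 of the circle of integration is arbitrary, since the inner sum is a polynomial in z:

\[
\pi_k(x)\sim\frac{x}{\log x}\,\frac{(\log_2 x)^{k-1}}{(k-1)!}\qquad(k\ \text{fixed},\ x\to\infty),
\]
\[
\pi_k(x)=\frac{1}{2\pi i}\oint_{|z|=r_0}\Bigl(\sum_{n\le x}z^{\omega(n)}\Bigr)\frac{\mathrm{d}z}{z^{k+1}},
\qquad
\sum_{n\ge 1}\frac{z^{\omega(n)}}{n^{s}}=\prod_{p}\Bigl(1+\frac{z}{p^{s}-1}\Bigr)\qquad(\Re s>1).
\]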
This Euler product turns out to be very close to a power of the zeta function: it equals ζ(s)^z multiplied by a second Euler product, which is itself the Dirichlet series of a multiplicative function and converges to the left of the line Re s = 1. Now, around s = 1, ζ(s) looks like 1/(s − 1), as everybody knows, so ζ(s)^z looks like (s − 1)^{−z}, and we can use Hankel's formula, with a Hankel contour around s = 1, to identify the main term of π_k(x). Truncating the contour and appealing to the holomorphic continuation of ζ(s) in a zero-free region — here the Hadamard–de la Vallée Poussin zero-free region suffices — yields a formula which is more precise, and valid in a larger range, than Sathe's. Again we have this constant λ(r), where r is still (k − 1)/log log x; in Selberg's result, for this function ω, the formula is valid whenever r is bounded, not just when r is less than 2 as in Sathe's result. You also have a remainder term, of size k/(log log x)^2, which always tends to zero, since k is at most a constant times log log x. This is obtained after applying the saddle-point method to estimate the Cauchy integral that I wrote on the previous slide. Of course this gives a natural explanation of the formula for λ(r), since it is closely linked to the shape of the Dirichlet series with coefficients z^{ω(n)}; I recall the precise formula below. And you can see that λ(r) equals one only for r = 0 or 1, so the Landau and the Erdős cases are the only ones in which the frequency is asymptotically equal to the Poisson law with mean log log x.

In 1959 and 1971, Delange wrote two papers where he studied systematically asymptotic formulae for the summatory functions of coefficients of Dirichlet series of the form ζ(s)^ρ — so I change the previous z to a complex number ρ for the rest of the talk — times another Dirichlet series G(s), which is supposed to be, in a sense, easier to handle. Delange found many applications of this method. He gave a new proof of the optimal form of the Erdős–Kac theorem, which provides the normal distribution of (ω(n) − log log x)/√(log log x); of course this is to be expected, because once you have the local laws you can sum them to obtain the global law. He found an explicit estimate for the second-order term in the normal approximation, which is quite accurate. He also gave estimates for moments and centred moments of ω(n), and for the distribution of ω(n) in arithmetic progressions — here I wrote the progression ω(n) ≡ a (mod q), but he also did the same when n itself runs through an arithmetic progression — as well as distributions along subsequences, asymptotic expansions, and so on.
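For reference, the two ingredients just mentioned are usually written as follows — Hankel's formula for 1/Γ and the Sathe–Selberg constant λ(r); this is the standard normalization, which I reproduce from memory:

\[
\frac{1}{\Gamma(z)}=\frac{1}{2\pi i}\int_{\mathcal H}w^{-z}\,\mathrm e^{w}\,\mathrm dw,
\qquad
\lambda(r):=\frac{1}{\Gamma(r+1)}\prod_{p}\Bigl(1+\frac{r}{p-1}\Bigr)\Bigl(1-\frac{1}{p}\Bigr)^{r},
\]

where ℋ is a Hankel contour surrounding the negative real axis, and Selberg's estimate reads, uniformly for 1 ≤ k ≤ C log_2 x,

\[
\pi_k(x)=\lambda\Bigl(\frac{k-1}{\log_2 x}\Bigr)\frac{x}{\log x}\,\frac{(\log_2 x)^{k-1}}{(k-1)!}\Bigl\{1+O\Bigl(\frac{k}{(\log_2 x)^{2}}\Bigr)\Bigr\}.
\]

One checks directly that λ(0) = λ(1) = 1, in accordance with the Landau and Erdős cases.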
When I wrote the first edition of my book on number theory, I studied this method again, I provided some effective forms of the previous results, and I named the method after Selberg and Delange. One of these results, perhaps the most easily stated, is the following. Suppose we have a Dirichlet series which can be written as a complex power ζ(s)^ρ of the zeta function times another Dirichlet series with coefficients g(n) — not necessarily multiplicative — and suppose that this series of the g(n), together with a number of its derivatives, converges absolutely at s = 1, so that a certain quantity H_K is finite. Then the mean value of f(n) — f being the coefficient sequence of the function F — is given by a general polynomial in 1/log x plus a remainder, and the remainder can be bounded in terms of H_K by an explicit formula. This result is completely uniform in H and K, so you may let K tend to infinity: for each x you just take the value of K that minimizes the remainder. The other hypothesis is the analytic continuation of the function G(s) in a zero-free region of ζ(s).

There is an alternative form of the result. If the real part of ρ is strictly less than one, then there is a function α_ρ, continuous on (0, 1] and with complex values, such that the mean value can be written in a closed form, with a remainder term which is essentially of the size of the remainder in the prime number theorem. This closed form of the main term is the analogue of the logarithmic integral in the prime number theorem. In some cases — especially if you want to perform a re-summation, or if the function f depends on a parameter over which you want to integrate or average — this closed form is very useful.

Altogether the method is very handy, and there are plenty of applications, because there are plenty of functions in number theory whose Dirichlet series is, in one way or another, close to a complex power of the zeta function. First of all, one can exploit the complete uniformity in x and K. For instance, look at the Dirichlet series of 1/φ(n)^s, where φ is Euler's function. You can write it as ζ(s)G(s), where G(s) is an explicit Euler product. Of course, it is not completely obvious to analyse this function, but you certainly have analytic continuation to the left of the line Re s = 1. Here the exponent ρ is 1, and you immediately get that the number of integers n with φ(n) ≤ x is asymptotic to a constant times x — the constant just stems from the value of G at s = 1 — together with a remainder term. This means that if you look at the number of integers with a given value of φ(n), you can subtract the main terms at x and x − 1, and you obtain a bound for this count which is actually quite good: even though the formula was not designed for that, it is not easy to find a good bound for this quantity. Analysing the function G(s) further, one can push the method a little and get a remainder which is exactly of the size of the remainder in the prime number theorem; this uses exponential sums over (p − 1)^{it}.
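As a minimal worked version of this example — with the classical constant, which I recall from memory — the Euler product for G and the resulting count are

\[
\sum_{n\ge1}\frac{1}{\varphi(n)^{s}}=\zeta(s)\,G(s),\qquad
G(s)=\prod_{p}\Bigl(1-\frac{1}{p^{s}}+\frac{1}{(p-1)^{s}}\Bigr),
\]

and, since ρ = 1 here, the leading term is G(1)x:

\[
\#\{n\ge1:\ \varphi(n)\le x\}\ \sim\ G(1)\,x=\frac{\zeta(2)\zeta(3)}{\zeta(6)}\,x\qquad(x\to\infty).
\]

Subtracting the values at x and x − 1 then bounds the number of n with a given value of φ(n), as just explained.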
Another application is the average distribution of divisors. This was done in a paper with Deshouillers and Dress a long time ago. Suppose you pick a divisor of an integer n at random, each with probability 1/τ(n), where τ(n) is the total number of divisors, and you look at the average over n ≤ x of the probability that this divisor is at most n^u, for u between 0 and 1. One uses the Selberg–Delange method, now with ρ = 1/2; the 1/2 comes from the fact that the weights are 1/τ(n), and since the number of divisors of a prime is 2, dividing by it produces the exponent 1/2. You get a formula which shows that, on average, the divisors of an integer are distributed according to the arcsine law.

Another application: suppose you want to estimate the mean value of 1/ω(n). Then you use the trivial formula that 1/ω(n) is obtained by integrating z^{ω(n)−1} over z between 0 and 1 — there is a remainder which I did not write; actually, one should rather integrate between 0 and infinity. The sums of z^{ω(n)} can then be handled by the Selberg–Delange method with ρ = z, and you find an asymptotic series with terms β_j(log log x) divided by powers of log x, where the β_j are given by an explicit formula for which you also have an asymptotic series. So there are plenty of applications; I just wrote three of them, but one can really find many.

Now, what about new hypotheses in this method? To summarize, the standard approach is to have a Dirichlet series which is written as the product of a complex power ζ(s)^ρ of the zeta function times another Dirichlet series associated to an arithmetic function g. The coefficients of ζ(s)^ρ are known as the generalized divisor function, which I write τ_ρ. To this product of Dirichlet series you can associate a Dirichlet convolution: f is the convolution of τ_ρ and the function g associated to the second series. Now, you can apply the Selberg–Delange method to ζ(s)^ρ itself: when there is no function G, the method gives something very good, which I write in this way, and there the remainder term is completely uniform in K, so you can take K equal to some power of log x, for instance. So you have a very good approximation of the summatory function of τ_ρ. Then you can write the convolution formula, and the mean value of f is given by the sum of g(n) times the summatory function of τ_ρ at x/n. If you apply this formula, you find — or at least you expect, in the general case — an approximation of this type, where the λ_j are linear combinations of the derivatives of the function G(s) at s = 1. Of course, depending on the regularity of the function G, the largest J which you can use here with a useful remainder depends on this function G. The hypotheses considered up to this point imply the convergence of the series defining these coefficients, the γ_H.
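To fix notation for the rest of the talk, here is the convolution set-up in the form I will use — a schematic summary, where γ_H is my shorthand for the derivatives of G at s = 1:

\[
\zeta(s)^{\rho}=\sum_{n\ge1}\frac{\tau_\rho(n)}{n^{s}},\qquad
\tau_\rho(p^{\nu})=\binom{\rho+\nu-1}{\nu},\qquad
F(s)=\zeta(s)^{\rho}G(s)\ \Longleftrightarrow\ f=\tau_\rho*g,
\]

so that

\[
\sum_{n\le x}f(n)=\sum_{m\le x}g(m)\sum_{n\le x/m}\tau_\rho(n),
\qquad
\gamma_H:=\sum_{n\ge1}\frac{g(n)(\log n)^{H}}{n}=(-1)^{H}G^{(H)}(1),
\]

and the λ_j in the expected expansion x·Σ_j λ_j (log x)^{ρ−1−j} are linear combinations of the γ_H with H ≤ j.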
If these series converge, then we know what the λ_j are supposed to be, and we can expect, using convolution and the Selberg–Delange estimate for τ_ρ, to obtain such an asymptotic formula.

Last year, Granville and Koukoulopoulos published a paper in which they investigate new hypotheses: they do not put hypotheses on the Dirichlet series G(s), but directly on the function g associated to it. They only consider the case where g is a multiplicative function, and they assume a condition which I call (GP) and will refer to as (GP) in the rest of this talk. The assumption is that g is essentially bounded on the primes and small on average: if g(p) were identically 1, you would get the sum of log p over primes p ≤ x, which is asymptotic to x, and you assume that you can spare a positive power of log x. Also, f is assumed to be at most τ_r in absolute value, which essentially means that |f(p)| ≤ r. As we shall see later — this is not stated in the paper by Granville and Koukoulopoulos — it turns out that the γ_H, the derivatives of the corresponding Dirichlet series at s = 1, may diverge, so we cannot use the previous formula: that series is not convergent in the general case. However, condition (GP) enables one to use right derivatives of the function G, and these γ_H* are defined as long as the order H of the derivative is less than A. So we expect to have the approximation with J equal to the largest integer strictly less than A, because we know that the γ_H* are not defined for larger H.

This is actually what Granville and Koukoulopoulos obtain. Assume |f| ≤ τ_r and (GP) with A strictly positive; then the remainder is at most x times a power of log log x — with a Kronecker delta in the exponent, essentially detecting whether A is an integer — divided by (log x)^{A+1−r}, so the exponent involves r and not Re ρ; I will comment on this. The λ_j are defined just as in the classical case, but with the γ_H* in place of the γ_H. But remember: if I go back to the expansion, the main term corresponds to j = 0; it is a constant times x (log x)^{ρ−1}, so the expected size of the mean value is x (log x)^{Re ρ−1}, while the remainder here is of size, say, x (log x)^{r−1−A}. So, contrary to a pure Selberg–Delange estimate, this formula does not necessarily provide an asymptotic formula: the main term could be smaller, actually much smaller, than the remainder. In that case the theorem only furnishes an upper bound, and no more, for the mean value of f.

There are results in the literature which provide upper bounds in actually more general situations. Under these hypotheses, one can at least prove that the mean value is at most x divided by log x to a certain exponent; this exponent involves a constant K which turns out to be optimal for f real — I did not state the other cases. And this turns out to be a better upper bound when A is fairly small. Of course, in the hypothesis (GP), the smaller A is, the more general the statement.
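To keep the hypotheses straight, here are condition (GP) and the shape of the Granville–Koukoulopoulos estimate as I have been describing them; this is a paraphrase in my notation, not the authors' exact formulation:

\[
(\mathrm{GP}):\qquad |g(p)|\ \text{bounded},\qquad
\sum_{p\le x}|g(p)|\log p\ \ll\ \frac{x}{(\log x)^{A}}\qquad(x\ge2),
\]

and, assuming moreover |f| ≤ τ_r,

\[
\sum_{n\le x}f(n)=x\sum_{0\le j\le J}\lambda_j\,(\log x)^{\rho-1-j}
+O\Bigl(\frac{x\,(\log_2 x)^{\delta}}{(\log x)^{A+1-r}}\Bigr),
\qquad J=\max\{j\in\mathbb{Z}:\ j<A\},
\]

where δ is the Kronecker-type factor mentioned above, which only matters when A is an integer, and the λ_j are built from the right derivatives γ_H*.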
So we are inclined to consider small values of A rather than large values of this parameter. If ρ = 0 and f is real, the constant K may actually be replaced by 1 − 2/π — we will come back to this 2/π — which also turns out to be optimal.

Now, with this result, we are left with a number of questions. The first is: can we modify the assumptions so that we are sure to have a genuine asymptotic expansion for the mean value of f, and not just an upper bound? This is of course the main question. The second: we have here a rather strong assumption on f, namely that |f| ≤ τ_r; can we relax the assumptions — aside from (GP), of course — and simplify the proof? The proof in Granville–Koukoulopoulos is rather complicated, the paper is over 30 pages long, and the argument is, let's say, not quite natural. The third question is: are the results optimal? If we use (GP) alone — this assumption on the values of g at primes — what are the limitations? I will try to answer these three questions. The results I shall describe are recent work with Régis de la Bretèche.

Regarding question one — which hypotheses give a genuine asymptotic expansion? — we propose the following answer. Recall that f is always the convolution of g with the divisor function τ_ρ, the coefficient sequence of ζ(s)^ρ. We assume conditions which are fairly standard in analytic number theory and come from a well-known paper of Shiu from 1980: |f(p^ν)| is at most some constant to the power ν, and, on average, |f(p)| is at most r — we do not ask that |f(p)| ≤ r always, but only on average. And we replace the assumption (GP) by the same condition, but in short intervals; the intervals are short in the sense that their length z can go down to a power of x strictly less than one, but this power is arbitrary — the exponent α can be arbitrarily small. Then we have a remainder whose exponent is now A + 1 minus the maximum of 0 and the real part of ρ. It follows that if the real part of ρ is non-negative, then we are sure to get an asymptotic formula.

It turns out that this limitation is necessary. We can construct a function f with ρ = −r, for any negative real value, whose mean value is actually larger than that: apart from a power of log log x, we cannot do better than this exponent. The function is rather easy to construct — see the display below. We write f as the convolution of τ_{−r} and g, and g(p) is just p^{i} divided by (log p)^{A}. Of course it satisfies the short-interval condition, even in absolute value, but the factor p^{i} modifies the behaviour, and we get the stated lower bound.
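Schematically — with the caveat that the exact ranges and exponents are those on the slides — the short-interval variant of (GP) and the counterexample just mentioned can be written as follows, where α ∈ (0, 1) is a fixed exponent that may be taken arbitrarily small:

\[
(\mathrm{GP}_{\mathrm{si}}):\qquad
\sum_{x<p\le x+z}|g(p)|\log p\ \ll\ \frac{z}{(\log x)^{A}}\qquad(x^{\alpha}\le z\le x),
\]

\[
f=\tau_{-r}*g,\qquad g(p)=\frac{p^{\,i}}{(\log p)^{A}},
\]

the point of the second display being that, up to powers of log_2 x, the corresponding remainder is genuinely of order x/(log x)^{A+1}, so the exponent A + 1 − max(0, Re ρ) cannot be replaced by A + 1 − Re ρ.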
What can we do to relax the assumptions, apart from (GP), and to simplify the proof? We use friable summation — I will say more about it in a moment. Using friable summation, we obtain the following. We now assume that |f(p)| ≤ r only in some average sense, the usual average sense used in sieve theory, together with a milder condition on the growth of the f(p^ν). In that case, we get that the remainder in the Selberg–Delange formula is at most x, now with a triply iterated logarithm log_3 x, and still divided by (log x)^{A+1−r}. These assumptions are not directly comparable to those described by Granville and Koukoulopoulos for an extension of their Theorem A, but they authorize occasional large values of f(p), which the latter do not. We still have the exponent r here instead of the expected Re ρ, and, unlike the analysis of Granville and Koukoulopoulos, ours does not ascribe any special role to the case where A is an integer.

What are the limitations under the assumption (GP) alone? For any c less than one, we can exhibit a function satisfying (GP) with ρ = 0 and such that the remainder term is at least that large. I do not have time to describe the counterexample in detail, but what it says is that, under (GP) alone, without the short-interval condition, the exponent r in the remainder is essentially unavoidable. So I will not insist on the counterexample.

Now, I would like to describe briefly the ideas of the proofs. Theorem 1 is the one which gives the result with the exponent involving max(0, Re ρ). We know that if we only have (GP), then the series Σ g(n)/n and its derivatives may diverge. If we use the short-interval condition, we can show the following — say when g is supported on square-free integers, a case to which one can reduce. If we look at the sum of g(n)/n over integers with k prime factors, we can split the prime factors into disjoint intervals with an acceptable remainder; then only the sizes of these very small intervals need to agree with the constraint that n is at most x, and it turns out that this is the general term of a convergent series. So, under the short-interval condition, the series Σ g(n)/n and the necessary derivatives actually do converge. We also use some standard bounds — obtained, by the way, with the Selberg–Delange method — for the integers whose number of prime factors ω(n) is too large, which show that their contribution is negligible. Eventually we get the convergence of these γ_H, and then we can finish the proof using the very good approximation for the summatory function of τ_ρ obtained from the classical Selberg–Delange theorem, together with the convergence of these coefficients.

Now, for Theorem 2 we use friable summation. Friable summation was, I think, first defined in 1957, and it was rediscovered in 1991. We say that a series has friable sum A if, when you sum over the integers whose largest prime factor is at most y, the sum converges and tends to A as y tends to infinity. Friable summation is a very useful and very interesting tool. A series may have a friable sum which is not equal to the usual sum; when the two are equal, we say that the series is regular for friable summation. Friable regularity is, in a way, central in analytic number theory: it is very easy to see that the friable sum of Σ μ(n)/n is zero, and to show that this series is regular — that the ordinary series actually converges and equals zero — is of course equivalent to the prime number theorem.
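In symbols, with P(n) denoting the largest prime factor of n and P(1) = 1, the definition and the example just discussed read as follows (the notation here is mine, for this talk):

\[
\sum_{n\ge1}a_n\ \text{has friable sum}\ A
\quad\Longleftrightarrow\quad
\lim_{y\to\infty}\ \sum_{P(n)\le y}a_n\ =\ A,
\]

\[
\sum_{P(n)\le y}\frac{\mu(n)}{n}=\prod_{p\le y}\Bigl(1-\frac{1}{p}\Bigr)\ \longrightarrow\ 0\qquad(y\to\infty),
\]

the second display being why the friable sum of Σ μ(n)/n is so easy to evaluate: it is just Mertens' product.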
Another very interesting fact is that the prime number theorem is actually equivalent — I wrote just one implication, but it goes both ways — to the fact that the series Σ n^{−1−iτ} has friable sum ζ(1 + iτ), although we know that this series does not converge. So the prime number theorem is equivalent to the statement that this series has a friable sum.

The friable summation of Fourier series satisfies the analogue of Jordan's theorem but avoids the Gibbs phenomenon; this is a result of de la Bretèche and myself from a few years ago. If you take a function of bounded variation, with Fourier coefficients c_n(f), then the friable partial sums converge to the function itself — which is quite interesting, because these are subsums of the original Fourier series — and the suprema converge to the supremum, so there is no Gibbs phenomenon. There are further statements of this type, and many applications, but I do not have time to describe them now.

So let us go back to the Selberg–Delange method. We know that under (GP) the series Σ g(n)/n may diverge, but it has a friable sum, which is in fact the derivative at σ = 1 of the friable sum taken with exponent σ. So we are in business: we can use friable summation to estimate the mean values under the assumption (GP). We use a simple convolution method. We introduce a function f_y: the function g_y is just g(p^ν) restricted to primes p ≤ y, and f_y is defined so that f_y(p^ν) equals f(p^ν) when p ≤ y and equals τ_ρ(p^ν) when p > y. With a small amount of sieving over the friable part, we have very good control of the mean value of f_y — we get exactly what is needed for f_y. Then we can handle the rest using essentially a sieve argument, and finally an inductive trick enables us to replace the powers of log log x by powers of the triple logarithm log_3 x.
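Schematically, the convolution device just described is the following — my notation, and a sketch of the construction rather than a precise statement:

\[
g_y(p^{\nu}):=\begin{cases}g(p^{\nu})&\text{if }p\le y,\\[2pt]0&\text{if }p>y,\end{cases}
\qquad
f_y:=\tau_\rho*g_y,
\]

so that f_y(p^ν) = f(p^ν) for p ≤ y and f_y(p^ν) = τ_ρ(p^ν) for p > y; the mean value of f_y is then under good control after a mild sieving over the friable part, as just explained.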
Let me finish. There is a constant concern in analytic number theory to evaluate the mean value of a multiplicative function knowing its values at the primes. There have been many theorems which fall in the class of comparison theorems — in the spirit of Halász, Montgomery–Vaughan, and so on — whose hypotheses are of the following type: suppose that f(n) is at most some multiplicative function r(n) in absolute value, or in modulus, suppose that the r(p) are sufficiently regular and the r(p^ν), for ν at least 2, are not too large, and suppose that the real part of f(p) is close to r(p) on average. The conclusion is then an asymptotic estimate for the ratio of the mean value of f to the mean value of r. In a recent paper of mine, I made this effective: there is an ε, and the condition is that r(p) minus the real part of f(p) should be small on average, with weight 1/p, essentially only over the primes p larger than x^ε; then you get the ε in the remainder, so you have a genuine comparison theorem. The general philosophy of Selberg–Delange estimates is different: you compare f with τ_ρ instead of with a majorant, and you look for an asymptotic expansion instead of an asymptotic formula. Finally, note that the hypotheses in this setting imply that f(p) is constant on average — this was the case in the first paper mentioned, but not in the later ones — while comparison theorems do not require f(p) to be constant on average. Thank you very much.