Thank you very much for the invitation; I hope everybody hears me well. I start with the first slide, with the remainder term of the prime number formula, and we look at the explicit formula. It is a standard fact, with a little heuristic meaning, that the oscillation caused by a zero ρ = β + iγ is x^β — β being the real part of the zero — divided by the modulus of ρ, which is asymptotically the same as x^β divided by the modulus of γ. When I say that the oscillation is "caused" by this zero, this is not an exact mathematical definition, but naturally we all understand its meaning. A problem posed by Littlewood more than 80 years ago was to prove an explicit oscillation result for the modulus of the remainder term, supposing the existence of a hypothetical zero. The question was to replace the ineffective statement in the theorem by an effective result: we knew already that this oscillation is at least as large as x^{β−ε}, but the ε was not made precise, and what Littlewood's problem emphasized was that the whole oscillation result was ineffective. We may also remark that Littlewood proved already in 1914 that the oscillation of Δ(x), the remainder term in the form ψ(x) − x, is in both directions a little larger than the square root of x, namely by a factor of the three times iterated logarithm, log log log x. (There is an error on the slide: for π(x) − li(x) this should be √x divided by log x.) The same assertion therefore holds for the difference π(x) − li(x), which answered a question of Riemann stated in his memoir — or even a conjecture: he said, or believed, that the function li(x) always gives an upper estimate for the true number of primes. Littlewood showed that this is not the case.
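For orientation, the heuristic just described comes from the standard explicit formula; in the usual notation (not specific to this talk) it reads:

```latex
% Riemann-von Mangoldt explicit formula for Delta(x) = psi(x) - x,
% summed over the nontrivial zeros rho = beta + i*gamma:
\Delta(x) \;=\; -\sum_{\rho} \frac{x^{\rho}}{\rho} \;+\; O(\log^2 x),
\qquad
\left|\frac{x^{\rho}}{\rho}\right| \;=\; \frac{x^{\beta}}{|\rho|}
\;\sim\; \frac{x^{\beta}}{|\gamma|} \quad (|\gamma| \to \infty).
```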
π(x) − li(x) has infinitely many positive and infinitely many negative values as x tends to infinity. And we see from this formula that if the Riemann Hypothesis is true, then we have an even larger oscillation than that caused by a single zero: many zeros reinforce each other, and this is how one gets the three times iterated log x. It is strange to remark that this result, after more than 100 years, is still not improved. We think that the true oscillation is somewhat larger, but we have no proof going beyond Littlewood's result. Concerning numerical results: π(x) − li(x) is probably really negative at least up to 10^20, as Riemann suggested or conjectured, but we know that there is a sign change below 10^320, so some positive value occurs already. This, however, will not be the actual content of our talk; we will talk only about the size of the oscillation, but we will consider both the average value and the maximal value of the oscillation. Now, Turán was the first to solve Littlewood's problem, in 1950, and he succeeded in giving an effective lower bound for the maximum of |Δ(x)|, that is, for this ψ-function. I will not emphasize it each time, but all the results I list later will be explicit: the constant c₁(ρ₀) is explicit, and c₂ is explicit as well. He gave a function that was a little better than just y^{β₀−ε}. Knapowski later proved the same kind of result for the average value of the modulus of the remainder term; both of them used Turán's power sum method. More specifically, they used the so-called second main theorem of Turán's power sum method, which concerns a power sum normalized so that the modulus of the largest of the numbers z_j is equal to 1.
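Littlewood's 1914 oscillation results, with the correction made above, can be written as:

```latex
\psi(x) - x \;=\; \Omega_{\pm}\!\left(x^{1/2}\,\log\log\log x\right),
\qquad
\pi(x) - \operatorname{li}(x) \;=\;
\Omega_{\pm}\!\left(\frac{x^{1/2}}{\log x}\,\log\log\log x\right).
```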
In this case we expect that, so to say, the power sum should be near 1, or near a constant, at least at some places. What Turán showed is that if we take an interval of length n, where n is the number of complex numbers z_j, then at least one of these values is as large as something a little smaller than e^{−n}, multiplied by the minimum of |b₁ + … + b_j|. In the special case where all the b_j are equal to 1 — and this special case was used in these results as well — we get that among n consecutive values of the so-called pure power sums, at least one is as large as this quantity, a little smaller than e^{−n}. We do not know how well this approximates the truth, but one remark: we need at least n consecutive values of these power sums, because if we take the n-th roots of unity, then we get n − 1 consecutive power sums equal to 0. So the interval length n is surely necessary; how near the right-hand side is to the truth is a big question. Now, I succeeded in showing — using essentially the same machinery, but not exactly any of Turán's theorems, rather a generalization of this theorem concerning Diophantine approximation — that the oscillation is really as large as expected, up to a factor 1 − ε, explicitly, if y is large enough depending on ρ₀ and ε; so to say, the starting point depends on these in an effective way. As I say, we use Turán's method, but not Turán's power sums — other features of the method. One remark: if we ask what the optimal oscillation result is — is it true with 2 − ε instead of 1 − ε? The question arises naturally because with any zero we also have its conjugate zero; so there is, let's say, a plausible conjecture that the constant should be as large as 2 − ε.
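The roots-of-unity example mentioned above is easy to verify numerically: with z_j the n-th roots of unity (so the normalization max |z_j| = 1 holds), the pure power sums s_ν = Σ_j z_j^ν vanish for ν = 1, …, n − 1, so n − 1 consecutive power sums are zero and an interval of length n in Turán's theorem is indeed necessary. A minimal numerical check:

```python
import cmath

def power_sum(zs, nu):
    """Pure power sum s_nu = sum_j z_j^nu."""
    return sum(z ** nu for z in zs)

n = 7
# The n-th roots of unity: all have modulus 1, so max |z_j| = 1.
roots = [cmath.exp(2j * cmath.pi * j / n) for j in range(n)]

# s_1, ..., s_{n-1} all vanish: n - 1 consecutive zero power sums.
for nu in range(1, n):
    assert abs(power_sum(roots, nu)) < 1e-9

# Only at nu = n does the power sum become large again: s_n = n.
assert abs(power_sum(roots, n) - n) < 1e-9
```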
And this would really be the case if the zero ρ₀ were, let's say, isolated; we do not define exactly what that means, but one can feel it to some extent. But somewhat later, Révész showed that the optimal constant is not 2 − ε: for a large class of functions it is π/2 − ε. This means that for some functions — for example, possibly for the zeta function — 2 − ε could still be valid; and if the Riemann Hypothesis is true, then this constant is even infinity: we have seen Littlewood's result, so instead of a constant we would have basically a three times iterated logarithm of x extra. So it would be very hard to answer the question for zeta. But definitely for zeta one can improve the constant to π/2 − ε, and for a large class of functions — that is, for many functions, or at least some functions from this large class — π/2 − ε is even optimal. Now, concerning the average and the supremum, or maximum, I proved several results improving the earlier results of Turán and Knapowski; Schlage-Puchta also obtained improvements. In general, different theorems have different features to compare: how large y should be compared to γ₀, the imaginary part of the zero; how good the lower bound is as a function of y and γ₀; the localization of the large values or large averages — not just that somewhere between 1 and y we have a large value, or large values on average, but whether we can localize these estimates better; and, in general, whether the estimate is effective or ineffective. I also had another method which gave a lower bound depending on ζ′(ρ₀) if ρ₀ is simple, or on the analogous quantity if it has larger multiplicity.
That method had the disadvantage that the bound depended, let's say, explicitly on the zero, through a quantity for which we could not give a lower estimate as a function of the size of the zero. As I mentioned already, the really difficult case is when ρ₀ is not on the critical line — that is, when the Riemann Hypothesis is supposed to be false. Let me mention a few of the earlier results. For example, I showed that the average is at least a constant depending on γ₀ times y^{β₀}; this constant, for example, is already ineffective, so I was not right that everything later on will be effective. But supposing the Riemann Hypothesis, we can really show that the average is an effective constant times √x, and Cramér showed already 100 years ago the opposite inequality as well. So our knowledge today is that under the Riemann Hypothesis, the average of the remainder term is determined up to a constant, and it is of order √x. The maximum of the function is naturally at least as large as the average, but it can be larger, by a factor log² y — again a quantity which, apart from the constant, has not been improved in more than a hundred years; this is, by the way, an easy consequence of the explicit formula together with the upper bound. What we will show exactly is this: if we have a given zero of Riemann's zeta function, and y is, roughly speaking, bigger than e^{√γ} — which means that γ should be less than, say, log² y — then we have a lower bound for the average. And here we also have a localized average, which is one advantage of the result; and we have a relatively good lower bound, something relatively near to y^{β₀}. You may remember that in the first result, Turán had something nearer to y^{β₀−ε}.
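The state of knowledge under the Riemann Hypothesis described above can be summarized as follows (the averaging interval [Y, 2Y] is my notational choice for "the average up to Y"; c, C > 0 are effective constants):

```latex
c\,\sqrt{Y} \;\le\; \frac{1}{Y}\int_{Y}^{2Y} |\Delta(x)|\,dx \;\le\; C\,\sqrt{Y},
\qquad
\max_{x \le Y} |\Delta(x)| \;\ll\; \sqrt{Y}\,\log^{2} Y .
```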
And here we have something which is, so to say, nearer to y^{β₀} on the logarithmic scale. This can be proved for the different forms of the error term as well, but we will concentrate just on the case of ψ(x) − x, because there is no big difference between the various forms. Now, the general strategy is still the same as Turán's: we consider a weighted mean value, in our case concentrated on some interval near y. "Near to y" means, more exactly, near on the logarithmic scale: the logarithm of Ay is near log y, so on the logarithmic scale Ay is, so to say, near y. This weighted mean value of the error term can be expressed as an integral of the zeta function, together with some auxiliary functions, on the line Re s = 2, for example, and this can be transformed into a sum of residues over the zeros. The weight is chosen so that only zeros not far from ρ₀ get non-negligible weight — this is the classical treatment of Turán as well, by the power sum method. By this method we construct weight functions with special properties: the weight is asymptotically the same for a small group of zeta zeros near ρ₀ — the group can consist, perhaps, of ρ₀ alone, and then it plays in some sense the role of an isolated zero; the residues at zeros at a constant distance from the given one vanish, apart from this small group of specially chosen zeta zeros; and if we are farther than a constant from the original zero, the weight is already negligible. The exact way we will prove it you can see here: we have a localization for the average; instead of Δ(x) we normalize by x^{1+β₀}; and the lower bound we get will be this. What we have to keep in mind is that L is (log log y)².
γ₀ is — that was the condition — at most about log² y; λ is log y and L is (log log y)². The first part of the procedure is relatively clear: if near our zero there is another one somewhat more to the right, then we jump to that other zero, and in this way, after at most about log y steps, we arrive at a zero whose real part is a little larger than, or the same as, the original one. The imaginary part is perhaps a little larger, but it has the same order of magnitude as the original one. So we replace the original zero ρ₀ by this ρ_k, and in this way we have the property that all zeros which are near in height to our new zero are, perhaps by a quantity 1/log y, to the right of it. Such zeros were called by Turán "extreme right-hand zeros", and it is by now a routine procedure to work with this type of zero. The next step is already different from Turán's. Since we cannot guarantee one isolated zero, we try instead to reach a small set, a small group of zeros which are very near to each other, while all the other zeros are farther from this group, by a large factor, than the members of the group are from each other. In order to do this, we introduce the definition that two zeros ρ and ρ* are ε-connected if there is a chain of zeros from ρ to ρ* such that consecutive zeros in the chain have distance at most ε. The procedure is then the following. We have one extreme right-hand zero, this ρ̃_k, and we consider the set of all zeros which are very near to it, defined as those which are ε₁-connected with this original zero.
And ε₁ is 1/log y divided by (log log y)³. In this way we already get some group of zeros which are near to each other, but we cannot yet, so to say, distinguish a group of zeros very near to each other such that all the other zeros are much farther from the group than the members of the group are from each other. So what we do: first we note that, due to a result of Backlund on the number of zeros between height T and T + 1, the size of this set is at most of order log γ₀, and log γ₀ is at most of order log log y by our supposition at the beginning. So the number of elements of this group is at most about log log y. Then we continue the procedure: inside this set S₁ we consider subsets of zeros which are very close to each other, where "very close" now means ε₂-connected, and ε₂ is smaller than ε₁ by a factor (log log y)³. In this way we may get several disjoint subsets of the original S₁, and we choose among these subsets the smallest one, the one with the smallest number of elements. Either we have just one subset, the same as the original set — then we are already content, and this set can be taken as the small group G of zeros — or, if there are several subsets, we choose the one with the smallest number of elements; in the worst case it still has at most half as many elements as the original set. Then we continue the procedure with ε₃-connectedness, where ε₃ is again smaller by a factor (log log y)³, and we go on until we get no new subset — that is, until the set remains the same in the next step.
And this must happen sooner or later: if at every step we get at least two new subsets, the size at least halves, so the whole thing must stop within log|S₁|/log 2 steps — certainly within about log log y steps the procedure is finished. Once it is finished, we have in some sense reached our goal. We are in the same situation as with an isolated zero, except that instead of one isolated zero we have a small group of zeros very near to each other, which behaves like a zero of large multiplicity, while all the other zeros are farther from the elements of this group, by a power of log log y, than the group members are from each other. We call this small set of zeros R. Within R the distances are at most 1/log y divided by (log log y)^{3n+2}, but all zeros outside R are farther from the elements of R by at least a factor (log log y)². In this way we finally get a suitable subdivision of the zeros which excludes interference between them. This also means that the real part of the new zero is still nearly as large as that of the original one; the height is only a constant factor larger. And to the right of this small group of zeros — "right" meaning at least 2/log y to the right — we already have a zero-free region. So we have arrived, in some sense, at a situation like Turán's; but, as we will see, our power sum will be in some sense trivial: the largest elements will be equal to each other, the other elements will be exactly zero, and a little farther away we will have only a negligible effect from all other zeros. In order to reach this goal we define a polynomial; perhaps this form is better to see.
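The refinement procedure just described is purely combinatorial, so it can be illustrated abstractly: given a finite set of points (standing in for the ordinates of the zeros near the chosen one) and connection thresholds ε₁ > ε₂ > … shrinking by a fixed factor, repeatedly split the current cluster into ε-connected components and keep a smallest one; the cluster size at least halves whenever it changes, so the loop stops after at most log₂|S₁| rounds. A schematic sketch — the point set, thresholds, and shrink factor are illustrative, not the actual zeta zeros:

```python
import math

def eps_components(points, eps):
    """Split a list of reals into maximal eps-connected runs:
    consecutive points within distance eps share a component."""
    points = sorted(points)
    comps, current = [], [points[0]]
    for p in points[1:]:
        if p - current[-1] <= eps:
            current.append(p)
        else:
            comps.append(current)
            current = [p]
    comps.append(current)
    return comps

def isolate_group(points, eps1, shrink):
    """At each step split the current cluster with a threshold a factor
    `shrink` smaller and keep a component with the fewest elements;
    stop as soon as the cluster no longer splits."""
    group, eps, rounds = sorted(points), eps1, 0
    while True:
        comps = eps_components(group, eps)
        if len(comps) == 1:          # no new subset: the group is isolated
            return group, eps, rounds
        group = min(comps, key=len)  # smallest component: size at least halves
        eps /= shrink
        rounds += 1

# Illustrative data: a clump of three, a clump of two, then refinement.
pts = [0.0, 0.001, 0.002, 0.5, 0.501]
group, eps, rounds = isolate_group(pts, eps1=0.3, shrink=8.0)
# Each split at least halves the cluster, bounding the number of rounds.
assert rounds <= math.log2(len(pts)) + 1
# Consecutive members of the final group are within eps of each other,
# while all other points are at least eps away from the group.
```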
Here the product runs over those zeros ρ′ which are at distance at most 5 — a constant — from the original zero. The 5 has no special meaning; it has to be at least some constant c₄, but any such constant will do. So for those zeros which are at distance at most a constant from the given zero ρ₁ and are not elements of the small group of zeros very near to each other, we define this function. The point is that in the final expression it is not ρ itself but the distance ρ − ρ₁ that plays the important role: P evaluated at ρ − ρ₁ will be equal to zero if ρ is not an element of the small group but is at distance at most 5 — those zeros will not count at all, as you will see later — while for all the zeros in the special group, very near to our ρ₁, the polynomial will be asymptotically equal to 1. This is what I meant by saying that the large terms of this power sum will all be equal to 1. Using the procedure and the estimates we gave for these zeros, we can easily estimate the size of the polynomial: roughly e^L for bounded |s|, and |s|^L when |s| is larger than 2. So we have control over the size of this function P(s). Then we define the weighted mean value of Δ(x): we take Δ(x) over x^{1+β₁} — with this ρ₁ in place of the original β — and attach to it a multiplicative weight defined by the Mellin transform here. The kernel function we choose is a well-known function raised to the power log y, and this takes care that the zeros at distance at least 5 have only a small effect.
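In a schematic form (my notation, reconstructing the slide): if E denotes the set of zeros ρ′ with 0 < |ρ′ − ρ₁| ≤ 5 that do not belong to the small group R, the stated properties of the polynomial can be written as

```latex
P(s) \;=\; \prod_{\rho' \in E} \left(1 - \frac{s}{\rho' - \rho_1}\right),
\qquad
P(\rho - \rho_1) = 0 \ \ \text{for } \rho \in E,
\qquad
P(\rho - \rho_1) \sim 1 \ \ \text{for } \rho \in R,
```

the second property holding because the members of R lie far closer to ρ₁ than any root of P, while the degree of P is at most of order log log y.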
The second function takes care that zeros whose distance from the original zero is as large as about √(log y), or a little more, are killed entirely. Then we have the factor P(s), the polynomial, which grows as s grows; but on the other hand these two factors balance the effect of P(s), because, as I said on the last slide, this formula guarantees that P(s) grows only moderately — you see |s|^L and e^L. The function G(s) will, first of all, be an entire function, so we can move the line of integration in order to evaluate this weight function; and G(s) is very small as the imaginary part of s tends to infinity. If we want to control the size of the weight function W, then by moving the line of integration it is relatively easy to show that if a is large, the first term in the minimum is small, and if a is small — much less than 1 — the last term is small. So, roughly speaking (and not exactly), e^L is the maximal size of this W(a); and if a is far from 0 on the logarithmic scale — if a is much smaller or much larger than 1 — the weight is negligible. Showing this is quite routine work: knowing the exact form of the function, the first two factors can be evaluated easily, and for the polynomial we have already given its size, so it is an easy task to compute these integrals. On the line σ = 0 this gives the second term here. On the other hand, we have the option, for small values of a — a is positive, since a equals y divided by x — to move the line of integration to the right.
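Schematically (again my notation, reconstructing the slide), the multiplicative weight is of Mellin type, and the size estimates just described can be summarized as:

```latex
W(a) \;=\; \frac{1}{2\pi i}\int_{(2)} G(s)\,a^{-s}\,ds,
\qquad
|W(a)| \;\ll\; e^{L}\,\min\!\left(a^{\lambda},\, a^{-\lambda}\right),
\qquad \lambda = \log y,\ \ L = (\log\log y)^{2},
```

so W(a) is of size at most roughly e^{L}, and is negligible as soon as a is far from 1 on the logarithmic scale.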
And if we evaluate there, we get a factor a^λ, and this dominates the whole expression once a is smaller than about e^{−L}. This is why in our original theorem the lower bound is of the shape e^{−L}, and the localization, as a distance from y on the logarithmic scale, is of size L as well: this is how the quantity L appears both in the lower estimate and in the localization. If we move the line to σ = −λ, then for large values of a — larger than about e^{L} — we get the other term in the minimum. So this evaluation already shows that we can localize the values of the weighted mean value of Δ(x), depending on the exact value of x, according to our choice of L, which is (log log y)². The rest is again similar to Turán's scheme for proving these oscillation theorems, until we arrive at the so-called power sum, which will be different from Turán's. If we take a look at this, our estimate of the weight function guarantees negligible weight for those values of x which are smaller than y·e^{−2L}, and similarly negligible weight for values of x larger than y by a factor e^{2L} — where again I emphasize that L is (log log y)². This shows two things: the average U(y) can essentially be estimated by an average of Δ(x) which is only sensitive to the values of x between y·e^{−2L} and y·e^{2L}, and the factor we lose by this procedure is of the size e^{L} with L = (log log y)². This factor will appear in the final result, in the final theorem.
So, in this way, if we can estimate U(y) from below as a function of y and ρ₀, then we get an estimate for the average of Δ(x), taking into account that x^{1+β₁} behaves, so to say, completely transparently in the given interval. Now we use the identity which connects the zeta function and the remainder term; this is standard, by partial integration. Then, writing in the definition of the weight function — the definition is exactly this — the second step is again standard: we interchange the two integrations. In this way the original weighted mean value of the remainder term is expressed as an integral of ζ′/ζ times a function G(s), which is formed from two well-known functions and, as a third factor, our polynomial. This is the G(s) appearing in our expression for the integral. So if we calculate this integral, the mean value of the remainder term, through this complex formula, then we get a power sum, where this is the only interesting part, naturally, and we have to check that this function is really easier to manage, and easier to estimate from below, than in Turán's case. Now, there are two remarks. If we are already at distance at least about log y from the dominant zero ρ₁, then the dominant factor here kills everything; so if we are at distance at least 2 log y from our dominant zero, the contribution is really negligible. The second step is to consider the zeros whose distance from the chosen zero ρ₁ is between 5 and 2 log y.
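The identity connecting ζ and the remainder term used here is the standard one (valid for Re s > 1, obtained by partial summation / partial integration):

```latex
-\frac{\zeta'}{\zeta}(s)
\;=\; s\int_{1}^{\infty}\frac{\psi(x)}{x^{s+1}}\,dx
\;=\; \frac{s}{s-1} \;+\; s\int_{1}^{\infty}\frac{\Delta(x)}{x^{s+1}}\,dx .
```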
In this case it is the kernel ((e^s + e^{−s})/(2s)) raised to the power λ that kills everything, and so we can again neglect this set of zeros. Naturally, we could increase the bound 5 here, as I said, but we could not decrease it below some constant c₄ — below e + 1/e, say. So we have a second set of zeros for which this infinite power sum is negligible. The third set consists of zeros which can be quite near to the original zero, but which do not belong to the very small chosen set R of zeros very near to ρ₁. For those zeros we actually have no residue at all: the G-function is 0, because we constructed our polynomial so that it vanishes if the zero is not in the set of zeros very near to ρ₁ but is at distance at most 5 from it. So this set of zeros, which in Turán's method would otherwise cause, so to say, quite a big loss, contributes zero residues for us. And then there remains the set of zeros very, very near to ρ₁ — the zeros ρ in R — which, as I said, can be considered the same as the original zero ρ₁, because the G-function is asymptotically 1 for these zeros, due to the definition of our polynomial and of the other weight factors: apart from P(s), those factors are also asymptotically equal to 1 so near to our chosen zero ρ₁. So the whole sum appearing in our power sum is essentially 1 for the few terms which are very near to our chosen zero ρ₁, in the special set R.
Otherwise, in three steps, we have shown that the terms are either zero or negligible. This means that we can not only estimate from below but even asymptotically evaluate our — I call it a power sum, though the expression is not so well chosen for this function. Every element of this sum is asymptotically the same, essentially 1/ρ₁; the number of elements is at least 1, and can be more. Therefore U(y) can even be evaluated asymptotically: it is |R| divided by ρ₁, and taking into account all the errors, it is definitely at least 1 over 2|ρ₁| — basically at least 1 over 2 log² y, a relatively negligible loss. And this means that for the quantity appearing in our detailed theorem we get the lower bound, basically e^{−L}, or, to be completely exact, the inequality stated here: this mean value is at least e^{−(5/4)L}. This finishes the proof of our theorem, which was stated not in such a symmetric way, but with x^{1+β₀} as given here. So altogether we have proved our theorem in this way. With that I finish; thank you for your attention, and I am ready to answer any questions.