Now to the topic. We say that f is multiplicative if f(mn) = f(m)f(n) whenever the greatest common divisor of m and n is 1, and in this talk I will concentrate on multiplicative functions taking values in [-1, 1], so they are bounded and real-valued for the purposes of this talk. Let me first give some examples of multiplicative functions. The first is the Möbius function, which is 0 if n has a square factor, and otherwise it is -1 if n has an odd number of distinct prime factors and +1 if n has an even number of distinct prime factors. Another example is the indicator function of the set N of numbers that can be written as a sum of two squares. At prime powers this is 1 if p ≡ 1 (mod 4) or p is the prime 2, while for primes p ≡ 3 (mod 4) it is 1 only if the power of the prime is even. So in particular no prime that is 3 mod 4 is representable as a sum of two squares, and the indicator is 0 whenever n contains an odd power of a prime that is 3 mod 4. A third example of a multiplicative function taking bounded values is the indicator function of the set of y-smooth numbers, where by y-smooth I mean a number all of whose prime factors are at most y. So these are three sorts of fundamental examples of multiplicative functions. If one also looks at unbounded functions, then for instance the divisor function is an example, but I won't talk about those today. I am interested in averages of multiplicative functions — what is the average value — and let's first consider long averages over the interval from 1 to x. These are well understood. It is known that the mean value of a multiplicative function is 0 if f does not pretend to be 1, that is, if the values at primes are not close to 1, and otherwise the mean value is non-zero and can be calculated. Let me give the details. We say that f pretends to be 1 if the sum over primes of (1 - f(p))/p converges; this means that for most primes f(p) is 1 or very close to 1.
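To make the three examples concrete, here is a small Python sketch (my own illustration, not part of the talk) of the Möbius function, the indicator of sums of two squares, and the indicator of y-smooth numbers, together with a check of multiplicativity at coprime arguments; the function names are mine.

```python
from math import gcd

def mobius(n):
    """Moebius function: 0 if n has a squared prime factor,
    otherwise (-1)^(number of distinct prime factors)."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # square factor
            result = -result
        p += 1
    return -result if n > 1 else result

def is_sum_of_two_squares(n):
    """Indicator of N: 1 iff every prime p = 3 (mod 4) divides n
    to an even power (equivalently n = a^2 + b^2)."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            if p % 4 == 3 and e % 2 == 1:
                return 0
        p += 1
    return 0 if (n > 1 and n % 4 == 3) else 1

def is_smooth(n, y):
    """Indicator of y-smooth numbers: all prime factors of n are <= y."""
    p = 2
    while p <= y and n > 1:
        while n % p == 0:
            n //= p
        p += 1
    return 1 if n == 1 else 0

# multiplicativity: f(mn) = f(m) f(n) when gcd(m, n) = 1
for f in (mobius, is_sum_of_two_squares):
    for m in range(1, 30):
        for n in range(1, 30):
            if gcd(m, n) == 1:
                assert f(m * n) == f(m) * f(n)
```

The loop at the end verifies the defining property on a small range of coprime pairs.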
In this case the mean value of f(n) can be calculated, and it is a certain Euler product, which converges thanks to this condition. On the other hand, if f is not close to 1 at most primes, that is, if this condition does not hold, then it is known that the mean value of f is 0. If we for instance look at the Möbius function, at primes the Möbius function takes the value -1, so 1 - (-1) = 2, and the series of 2/p over primes definitely diverges. So we are in the second case of Wirsing's theorem: the first condition does not hold, the series diverges, and from that we know that the average value of the Möbius function is 0. This is actually equivalent to the prime number theorem, saying that the number of primes up to x is about x/log x, and also equivalent to the Riemann zeta function not having zeros on the line Re s = 1. So this is a fundamental result in number theory. These are qualitative results, but there is a theorem of Halász that gives quantitative results: depending on how a certain sum over primes behaves, one gets a precise amount of cancellation, and so on. But my topic today is short averages. If one looks at the average of f(n) over a short interval, then previously Maksym Radziwiłł and I showed — the paper was published in 2016 — that if one looks at almost all very short intervals [x, x+h], with h tending to infinity with X as slowly as we wish, and compares the very short average, taking a very short segment of integers n and computing the average over that segment, then this average is very close to the average over the long segment. And the long average is what we know how to evaluate by Halász's theorem and so on. So essentially we get the same answer for the very short intervals for almost all x: as soon as h tends to infinity, the error term tends to zero, and the exceptional set is o(X). So we got a good result for almost all short intervals.
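As a quick numerical sanity check on the divergent case of Wirsing's theorem (my own illustration, with an illustrative cutoff): for the Möbius function the series of (1 - μ(p))/p = 2/p over primes diverges, so the mean value should be 0, and indeed the empirical average up to 10^4 is already tiny.

```python
def mobius(n):
    # trial-division Moebius function, enough for this range
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

X = 10**4
mean_mu = sum(mobius(n) for n in range(1, X + 1)) / X
# the prime number theorem is equivalent to mean_mu -> 0 as X -> infinity
```

Of course this only illustrates the qualitative statement; the rate of decay is exactly what Halász-type quantitative results control.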
And this has led to numerous applications and developments, such as Tao's resolution of the logarithmically averaged two-point Chowla conjecture, concerning correlations of the Möbius function, and also the resolution of the Erdős discrepancy problem. But there are still a few shortcomings of this theorem, and in this talk I will discuss our new work, which addresses these shortcomings; we get better results in several senses. The first shortcoming is that the quantitative bounds are pretty weak. Here we win only a small power of log h, and in the paper we get a somewhat better result than this, but still at best we win a small power of log x in the exceptional set. So the bounds are not very good. The second issue is that if we look at the example of the indicator function of sums of two squares, then this theorem becomes trivial. Sums of two squares have mean spacing about (log x)^(1/2), which means that the long average is already of size (log x)^(-1/2), much smaller than the error term that we get here. So the theorem does not help in determining how these sorts of vanishing multiplicative functions behave in short intervals. It tells us nothing about sums of two squares, because we already know that almost all intervals contain no sums of two squares when the interval length is very small, and for longer intervals it gives nothing non-trivial. That is a bad thing about the theorem. The third issue is that for many applications one needs a result for complex-valued f. The original result is only for real-valued f, and actually in this form it does not hold for complex f, because f might be, say, n^(it), and in that case the short average does not match the long average — there must be a twist. In our new work we also address the case of complex f, but in this talk I will concentrate on the case of real-valued f.
Okay, so let's go into the example of sums of two squares. Recall that N is the set of numbers that can be written as a sum of two squares. It is well known that the number of sums of two squares up to x is a constant times x/(log x)^(1/2). In particular this means that the average gap between numbers representable as a sum of two squares is of size (log x)^(1/2), and so if we look at intervals shorter than the average gap, then typically there are no elements of N in the interval: for typical x the short average is simply zero. But if we look at intervals longer than the average gap, then one would expect regular behaviour. If we look at a short interval whose length is somewhat longer than (log x)^(1/2), we expect an asymptotic formula for almost all short intervals of this kind, as soon as the length divided by the average gap (log x)^(1/2) tends to infinity. This is what one would expect to hold, and this is what we showed is actually true. We showed that for any delta and any h0, if we look at the number of integers that can be represented as a sum of two squares in an interval of length h0 times (log x)^(1/2), then it matches the long density — the density of integers representable as a sum of two squares in the long interval of length x — with an error smaller than delta times the main term, for all but at most X times h0 to the power minus c delta^12 integers x. The point is that the exceptional set is polynomial in h0. This holds for some constant c which we have explicitly, but it is quite small. Note here that the exceptional set saves a polynomial in h0, and that the error term now takes into account the size of the main term.
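A small experiment (mine, with illustrative parameters) matching Landau's count: marking all sums of two squares up to X = 10^5 shows a count of order X/(log X)^(1/2) and an average gap of order (log X)^(1/2).

```python
from math import isqrt, log, sqrt

X = 10**5
in_N = [False] * (X + 1)
for a in range(isqrt(X) + 1):
    b_max = isqrt(X - a * a)
    for b in range(a, b_max + 1):
        n = a * a + b * b
        if n >= 1:
            in_N[n] = True  # n is a sum of two squares

count = sum(in_N)               # roughly const * X / sqrt(log X)
avg_gap = X / count             # roughly const * sqrt(log X)
ratio = avg_gap / sqrt(log(X))  # should stay bounded as X grows
```

The ratio converges (slowly) to the reciprocal of the Landau–Ramanujan constant; here we only check it is of the right order.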
Previously we just had a small power of log h here, which was not helpful at all for such very short intervals, but now we get a non-trivial result as soon as the length of the interval is (log x)^(1/2) times something tending to infinity, and we get an exceptional set which is o(X), and it tends to zero quite rapidly. This improves on a result of Hooley, who showed that for almost all x one has a lower bound of the right order of magnitude, but he did not get an asymptotic; we improve this by getting an asymptotic formula for almost all x at the same interval lengths. Kaisa, we have a question from Igor Shparlinski. Igor, could you please ask the question by unmuting your microphone. Thank you. Kaisa, sorry, you say that c is a function of delta, but then what is the meaning of the delta to the twelfth next to it? Oh, sorry, it is not a function of delta; it is an absolute constant. Oh, it is an absolute constant. Yeah, it is an absolute constant. Okay, thank you. You just see c delta^12 here, but c is absolute; I think it is something tiny, like 10 to the minus 12. Thank you. No problem. Okay. So now consider a multiplicative function f whose values at primes have average alpha; then it is known that the mean value of f up to x is like (log x)^(alpha - 1). More precisely there is a product formula involving the values f(p), but if the average at primes is alpha, then the mean value is of this size (log x)^(alpha - 1). Now we write h1 for the inverse of this mean value. Because f(n) has mean value like this, if f takes values 0 and 1, then the average over intervals much shorter than h1 is typically zero. This is again similar to the sums of two squares: h1 is the mean spacing of the non-zero values, because the mean value is 1/h1. So if we look at intervals shorter than h1, then typically the average of f(n) is zero, and there is no interesting theorem to prove.
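The exponent alpha for sums of two squares can be seen numerically (my own sketch): at primes the indicator is 1 exactly for p = 2 and p ≡ 1 (mod 4), which by Dirichlet's theorem is half of all primes, so alpha ≈ 1/2 and the mean value decays like (log x)^(-1/2).

```python
def primes_up_to(n):
    # simple sieve of Eratosthenes
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return [p for p in range(2, n + 1) if sieve[p]]

ps = primes_up_to(10**5)
# f(p) = 1 exactly when p = 2 or p % 4 == 1, so the average at primes
# (the exponent alpha in the talk's notation) should be close to 1/2
alpha = sum(1 for p in ps if p == 2 or p % 4 == 1) / len(ps)
```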
But again, as soon as the length of the interval is longer than the mean spacing, we expect a meaningful result, namely that for almost all x the short interval average is of the same size as the long interval average, with an error term of smaller order of magnitude than the main term. This is what one expects, and this is what we prove. So let f be a multiplicative function whose average value at the primes is at least epsilon — we do not assume the average is exactly some fixed number, only that it is bounded below. Kaisa, sorry for interrupting you again. The audio is a bit uneven; perhaps speak a bit closer to the microphone. Okay, I will try to do that. Thanks a lot. We have the following theorem, which says that if the average value of f at primes is at least epsilon, so that f is not zero at almost all primes in long intervals, then we get the result that we wanted. Here h1 is the expected gap between non-zero values of f, and for any delta and any h0 the short average is the same as the long average apart from an error term of size at most delta times the long average, for all but a small exceptional set of x — again a polynomial exceptional set in h0, where now the constants c and kappa depend on epsilon, the proportion of primes at which f does not vanish. So we get the same story for all multiplicative functions that do not vanish too much: they must have a positive average at the primes, and then in short intervals of the expected length we get the same average for f as in long intervals. We also get a corresponding result for complex-valued f, but then the main term must contain a twist; it does not look very nice, but it is a natural twist that has to be there, and I will not talk about the complex case any more. Okay, so let's get back to sums of two squares.
Recall that Hooley was able to show that for almost all x one gets the correct order of magnitude for the number of sums of two squares in intervals of length h0 times (log x)^(1/2), for any h0 tending to infinity with x. Hooley did this in a different way than we do. His main arithmetic input was the solution of the shifted convolution problem for the coefficients of the Dedekind zeta function of Q(i), and these are essentially the numbers of representations of n as a sum of two squares. So what allowed Hooley to prove his result was that one knows an asymptotic formula for this shifted convolution problem. But it is worth pointing out that this input is completely out of reach if one looks at number fields of degree greater than two: for the coefficients of the Dedekind zeta function of any number field of degree greater than two, there is currently no way to do the shifted convolution problem. So if one wants to generalize Hooley's result, his approach completely fails, because we do not have this crucial arithmetic information. But in our case, we only use the multiplicativity of the function, so we have a chance to generalize. In particular we do not need the density of f at the primes to be one half, as it is for the sums of two squares; we can allow any positive density. So we are allowed to go to higher degree number fields for the generalization. And so we will talk about norm forms — the sums of two squares are the norm forms of Q(i) — and now I am going to talk about higher degree extensions. So let us take a number field K, and we say that n is a norm form if it is the norm of an algebraic integer of K. This is a stronger condition than just requiring that n is the norm of an ideal; we make the stronger requirement here. And we write c_K(n) for the indicator function of this set, so in particular c for Q(i) is the indicator function of sums of two squares.
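For a concrete example beyond Q(i) (my own illustration; the field choice and bounds are mine, not from the talk): in K = Q(sqrt(-2)) the norms of the algebraic integers a + b*sqrt(-2) are the numbers a^2 + 2b^2, so the norm-form indicator c_K can be tabulated just like the sums of two squares.

```python
from math import isqrt

X = 10**5
is_norm = [False] * (X + 1)   # c_K(n) for K = Q(sqrt(-2))
for b in range(isqrt(X // 2) + 1):
    rem = X - 2 * b * b
    for a in range(isqrt(rem) + 1):
        n = a * a + 2 * b * b
        if n >= 1:
            is_norm[n] = True  # n = a^2 + 2*b^2 is a norm from K

count = sum(is_norm)  # again of order X / sqrt(log X), consistent
                      # with the density described by Odoni's theorem
```

For instance 3 = 1 + 2 is a norm here while 5 is not, the opposite of what happens for sums of two squares.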
And there is work of Odoni on counting these norm forms. In particular he showed that the density of norm forms of K is a certain product, quite similar to the products we have seen before: the product over all primes p for which there does not exist an integral ideal of norm p of (1 - 1/p). In particular, if K is a normal extension of degree d, then this density is of size (log x)^(-1 + 1/d). Essentially this means, compared to the condition we had before, that the average of c_K at the primes is 1/d, and as d tends to infinity the average at primes tends to 0. But there is one issue: when the class number of K is greater than 1, this function c_K(n) is not multiplicative, so we are not able to directly apply our results to c_K(n). The density is not an issue; the issue is that it is not multiplicative. But for this, the same work of Odoni helped us to show that c_K(n) can be written as a linear combination of complex-valued multiplicative functions, and then we can apply our results to each function in the linear combination, in any number field. So let me state the theorem. We take K to be any number field over Q, and we take delta_K(x) to be the density of norm forms of K up to x. Then we get an asymptotic formula for the number of integers that can be represented as norm forms in short intervals as soon as it is possible — I mean, in intervals longer than the average gap — for almost all x, and again we get a polynomial exceptional set. This greatly extends Hooley's work, which only applied to Q(i) and did not give an asymptotic formula. And Hooley also studied another question concerning sums of two squares, namely the gaps between sums of two squares.
So remember that the average gap is (log x)^(1/2); the question is how often gaps of a given length appear. Writing s_i for the sequence of numbers that are sums of two squares, Hooley proved an asymptotic formula for the gamma-th moment of the gaps, the sum over gaps up to x of (s_{i+1} - s_i)^gamma. In other words, this implies that for any interval length h times (log x)^(1/2), the number of x such that the interval contains no sums of two squares is at most of size X times h^(-2/3). So one gets a good exceptional set here if one does not require an asymptotic formula but is happy with the mere existence of sums of two squares in short intervals. And again, Hooley's work is based on the shifted convolution problem for the representation numbers r_K(n), so it does not extend beyond quadratic number fields: if the degree of the number field is greater than 2, his approach has no chance of generalizing with the current knowledge of the shifted convolution problem. But for us, we again use only multiplicativity. So if we no longer ask for an asymptotic formula as before, we get an exceptional set for any K — indeed a better exceptional set than we had previously. This is the result Maksym and I have about gaps: if K is any number field and delta_K is the density of norm forms, then for any interval of length h times the average gap, the number of intervals that do not contain the correct order of magnitude of norm forms is at most X times h^(-1/2 + epsilon). In the previous result I showed, we had an asymptotic formula but the exceptional set saved only a very small power; now the exceptional set has size X times h^(-1/2 + epsilon). Kaisa, I am not hearing you at the moment. Okay, I am not sure what I can do about that. Okay, it is better now, I can hear you now.
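The gamma-th moment of the gaps can be explored empirically; this sketch (mine — the range and the choice gamma = 3/2 are illustrative) computes the quantity appearing in Hooley's theorem for sums of two squares.

```python
from math import isqrt

X = 10**4
# the sequence s_1 < s_2 < ... of sums of two squares up to X
s2s = sorted({a * a + b * b
              for a in range(isqrt(X) + 1)
              for b in range(isqrt(X) + 1)
              if 1 <= a * a + b * b <= X})
gaps = [t - s for s, t in zip(s2s, s2s[1:])]

gamma = 1.5
moment = sum(g ** gamma for g in gaps)
# the plain sum of the gaps telescopes to s_last - s_first, while the
# gamma-th moment weights the rare long gaps more heavily
```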
I will try to be closer to the microphone. Okay, thank you. Okay, so if we do not want an asymptotic formula but just the correct order of magnitude of norm forms, then we can get a good bound for the number of exceptional intervals: all but this many intervals contain the expected order of magnitude of numbers that are norm forms of K. And if we look at Hooley's problem of the gamma-th moment of the gaps, then we get the asymptotic formula for gamma up to 3/2. Remember that Hooley had this for gamma up to 5/3, which is a bit larger than our exponent, but his result worked only for the case K = Q(i), while our result extends to any number field. Also, in forthcoming work we are able to do the Q(i) case for all gamma up to 2, by using some properties of the Dedekind zeta function of Q(i). So this generalizes Hooley's result to any number field. And this is not restricted just to norm forms; the result works for other multiplicative functions, so we can also study gaps in other multiplicative sequences. For instance, if we look at the number of intervals from x to x + h which do not contain an element of the sequence in question, then the number of them is at most X times h^(-1/2 + eta) for any eta, and consequently we get a corresponding result for the gamma-th powers of the gaps, again for gamma up to 3/2. This improves on a recent result of Hitron, who for this gamma-th moment had an upper bound of size X^(1 + eta); the lower bound that it is at least X is trivial, but he got the upper bound X^(1 + eta), and now we get the correct order. So we can use this also for studying other multiplicative functions and gaps between multiplicative sequences. Let me now move on to some of the ideas in the proofs. I will concentrate on what is new in this work compared to our previous work, but I do not assume that you know how our previous work went. For simplicity, let us just concentrate on the case when the average of f is 0.
This means that we do not have to worry about the main term. The starting point for us is Perron's formula, which lets us express this short average as an integral in the complex plane, where the integrand contains the corresponding Dirichlet series, and the factors (x+h)^s and x^s come from the endpoints of the interval. This expression can, at least morally, be estimated by the mean value theorem so that the kernel is about h times x^(it), so morally we can write the short sum of f(n) as an integral of the Dirichlet polynomial against h x^(it). Now, if we just put absolute values inside, we are in trouble, because in any Dirichlet polynomial the best we can hope for is square-root cancellation, and if we had square-root cancellation here, the polynomial would be of size x^(-1/2); multiplying by h and by the length x/h of the integration range, the right-hand side with absolute values would be of size x^(1/2), whereas the trivial bound for the left-hand side is h, which is much smaller. So it is not a good idea to put absolute values here; instead, we have to somehow take advantage of the oscillation of x^(it). What one would normally go on to do is take a mean square: we want a result for almost all x, so we can take the mean square over x, and if we get a good bound for the mean square, then by Chebyshev we get a small exceptional set. If we take the mean square, square out and do some work, we find an upper bound of the form of the mean square of the same Dirichlet polynomial, and now we no longer have the problem that square-root cancellation would give trouble. So that is good; that is what one would normally do. And recall that we had chosen the length of the interval to be h = h0 times h1, where h1 is the average gap between the non-zero values.
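The classical mean value theorem for Dirichlet polynomials invoked here — the integral over [0, T] of |sum over n up to N of a_n n^(-it)|^2 is at most about (T + N) times the sum of |a_n|^2 — can be checked numerically. This sketch (parameters, coefficients, and the Riemann-sum discretization are my choices) uses Möbius coefficients; the factor 2*pi*N is one standard explicit form of the O(N) term, and a factor 2 of slack absorbs the discretization error.

```python
import cmath
from math import log, pi

def mobius(n):
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

N, T, dt = 50, 100.0, 0.05
a = [mobius(n) for n in range(1, N + 1)]
S = sum(x * x for x in a)          # sum of |a_n|^2

logs = [log(n) for n in range(1, N + 1)]
integral, t = 0.0, 0.0
while t < T:
    D = sum(a[i] * cmath.exp(-1j * t * logs[i]) for i in range(N))
    integral += abs(D) ** 2 * dt   # Riemann sum for the mean square
    t += dt

bound = (T + 2 * pi * N) * S       # mean value theorem benchmark
```

The diagonal terms alone contribute about T*S, so the theorem says the off-diagonal terms cannot blow the integral up beyond the (T + O(N)) scale.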
And so if we want to show our desired result, that the short average matches the long average up to delta times the density of f with a polynomial exceptional set, then for this mean square we would need a bound of size the main term squared times the wanted exceptional density: the mean square of the Dirichlet polynomial should be of size delta^2 times h0^(-c delta^kappa) times the square of the mean value of f. If we could show this, we would be done. So this is what we would need. Now, there is a mean value theorem for Dirichlet polynomials, which tells us that this mean square is at most of size the length of the integration range plus the length of the sum, times the mean square of the coefficients, and we can prove a variant of it that works well in the case where f(n) has sparse support, like the sums of two squares. In particular, we can show a mean value theorem variant saying that if we look at the mean square of a Dirichlet polynomial whose coefficients a_n are bounded by a multiplicative function f, then we get essentially the square of the density of f. So we get the same main factor here and there, but we would still need to win the factor delta^2 times h0^(-c delta^kappa) compared to this sort of trivial mean value theorem bound. And this is exactly the same situation we had in our previous work: we need to save something compared to the mean value theorem bound. The mean value theorem by itself gives a trivial result — with just that, the set of exceptions is of size O(X), which is completely trivial, because there are only X numbers; it is no good to have an exceptional set of size O(X). So we had to win something compared to this mean value theorem bound, and we can do it: we can repeat the same arguments, because we have this good sparse mean value theorem now.
We have to prove Halász- and Lipschitz-type estimates for multiplicative functions in the sparse setting, where the mean value is no longer of constant size, but we can repeat all those arguments; that is not the problem. The problem is that at best this gives us, instead of the power saving in h0, a saving of (log x)^(-kappa), because a Halász-type argument can never win more than that — and this is a genuine obstruction: there are multiplicative functions for which one only saves this much, for which the corresponding Dirichlet polynomial is occasionally large. And so actually showing what we wanted is in general not possible along these lines. There might be some points t where the Dirichlet polynomial has size (log x)^(-kappa), and these contribute about (log x)^(-kappa) to the left-hand side, whereas we wanted to win a power of h0. So in particular, if h0 is larger than a power of log x, we are in trouble here, and we cannot quite show what we wanted. But the method would work if we had a good pointwise bound for the Dirichlet polynomial over the primes — say, good savings for the prime Dirichlet polynomial in certain ranges. If we had good savings in those prime polynomials, then our previous method could work: we could factor out this prime polynomial and then use the mean value theorem, or we could do the similar thing we did in our previous work of factoring out primes of different sizes, and we would win thanks to this saving. So if we had this, we would be happy. For this to be useful, the bound must have the coefficients f(p) instead of 1, and another thing is that we will not have prime factors from some ranges, but that is something we can also handle. But anyway, the key idea is to handle the exceptional t — the t for which we do not have the desired bound — before taking the mean square.
So we handle those t for which we do not have this bound before taking the mean square, and then, once we take the mean square, we can assume that we are integrating only over those t for which we do have the bound. Okay, so let us get back: by Perron's formula we had written the short average as an integral, and now we split the integration range into two sets T and U, where t belongs to T if we have the desired cancellation for the Dirichlet polynomial over primes of suitable size. By the mean value theorem for Dirichlet polynomials we can show that for most points t this is actually true: the measure of the exceptional set is at most about (x/h)^(1/2). So for most points we have this, and by the previous discussion our older argument handles basically the set T, where we have good cancellation in the Dirichlet polynomial over the primes; for that we get the desired bound. Then we are left with handling the integral over U, where U is a sparse subset of [-x/h, x/h] of measure only about (x/h)^(1/2). And now we use a new trick. We note that most integers have at least two prime factors in a long range, say from x^(epsilon^2) to x^epsilon, so, at least morally, we can replace the Dirichlet polynomial of f(n)/n by a product of Dirichlet polynomials in which the primes from those ranges have been factored out. We can also use a Halász–Montgomery-type large values theorem, which is a sort of mean value theorem for Dirichlet polynomials over a subset — and now we do have a subset, the set U. For those t for which we have some cancellation in the prime polynomials we can again look at the mean square and use the pointwise bound, thanks to that cancellation, and the other parts can be taken care of in the mean square thanks to the sparseness of the set.
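The claim that most integers have at least two prime factors in a long range can be tested numerically; in this sketch (my own, with the toy range [2, 1000] standing in for [x^(epsilon^2), x^epsilon]) prime factors are counted with multiplicity using a smallest-prime-factor sieve.

```python
X = 10**5
LO, HI = 2, 1000   # toy stand-in for the range [x^(eps^2), x^eps]

# smallest-prime-factor sieve up to X
spf = list(range(X + 1))
for p in range(2, int(X ** 0.5) + 1):
    if spf[p] == p:
        for q in range(p * p, X + 1, p):
            if spf[q] == q:
                spf[q] = p

def factors_in_range(n):
    """Number of prime factors of n (with multiplicity) lying in [LO, HI]."""
    c = 0
    while n > 1:
        p = spf[n]
        if LO <= p <= HI:
            c += 1
        n //= p
    return c

good = sum(1 for n in range(2, X + 1) if factors_in_range(n) >= 2)
fraction = good / (X - 1)   # the proportion with >= 2 such factors
```

In the actual argument the range is much longer relative to x, so the exceptional proportion is far smaller than in this toy computation.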
And on the other hand, it is possible to show that the pointwise bound holds for most t — there are only about x^epsilon exceptions to it. So now we have handled almost all t, and the remaining exceptional set of size about x^epsilon is so tiny that we get that the mean here is little-o of h, by using Halász's mean value theorem for multiplicative functions together with a Halász–Montgomery-type large values theorem — a mean value theorem for Dirichlet polynomials over a subset, and since the subset is now very, very sparse, we are able to prove a very sparse version that works here. So we get what we wanted. This is how we prove the theorem: by moving back and forth between Perron's formula and mean values of Dirichlet polynomials, working with both L1 and L2 bounds — we alternate between taking and not taking the mean square over x in order to handle the problematic set. So let me get to one last thing, which is the results about positive proportion lower bounds, giving the improvement of Hooley's results for gaps. If one only wants, for an f, a good exceptional set — if one only wants to show that in almost all short intervals the average of f(n) is at least delta times the expected average — then we no longer ask for an asymptotic formula but just want a positive proportion lower bound, and this is an easier task, because we do not have to count everything; we can undercount in whatever way we wish and only consider some convenient n. In this case it is a good idea, instead of looking at all possible n, to sum over products of many numbers: we take k to be of size about a power of 1/epsilon, and we look at products of k factors — we take k - 1 very small primes, and we take m which is also fairly small — and then we study the products of all of these. And because f is multiplicative, we can also split f as a product over the k factors. And now what we want
to show is that this has size at least a constant times the right-hand side, and the sum over these products is certainly of the right order of magnitude: we get a lower bound of this shape because, although we might double-count some things, each n is counted at most a bounded number of times — a constant depending on epsilon — so we lose only a constant depending on epsilon. If counting all n gives a lower bound, then counting just products of k factors of this type certainly does too. And now things get easier, because the Dirichlet polynomial corresponding to this sum can be factored into a product of very short factors — we now have about k very short factors — and this gives us a lot more flexibility when we apply all these ideas to the Dirichlet polynomials, and this allows us to show what we want. And that is all I wanted to say, thank you. Thank you, Kaisa, for this excellent talk. You can unmute your microphones so we can thank the speaker all together by clapping. Okay, so now we have time for some questions, so I will mute all for just a moment so there is time for questions. So maybe I will start with a question: you mentioned that — let me stop