Thank you, Marco. Thank you all for stopping by this analysis seminar today — analysis slash number theory. It's a pleasure to have Mithun Das with us this afternoon; he will speak about the variance of a general class of multiplicative functions in short intervals. A pleasure to have you here.

Thank you, Manel, and it's my pleasure to be here — I have enjoyed it a lot, it's a nice, collegial environment. This is also my first time giving a talk here in India. So let's see. As you have seen from the title, my talk is about the variance of a general class of multiplicative functions. This is joint work with Pranendu Darbar from NTNU, Norway, where he is a postdoc.

What is the goal here? Given a large class A of multiplicative functions — we define the class later — and f in this class, we want to estimate the short average
  (1/H) Σ_{x < n ≤ x+H} f(n),   where H = x^θ with θ ≤ 1.
Let me start by recalling the long average, or global average as one also says:
  (1/x) Σ_{n ≤ x} f(n),
or the same average over [x, 2x]. The long average is easy to handle and easy to understand: given a multiplicative function, one usually attacks it with Perron's formula, the nice device from complex analysis which connects counting problems in number theory to analytic information. But once you consider the short average, it is really difficult. So we study the difference of the two — this is what is called the variance — and our aim is to obtain an asymptotic formula for it: a main term plus an error term.

Let me motivate why we study this problem, starting with multiplicative functions. A multiplicative function is an arithmetic function, taking the natural numbers to the complex numbers.
It is called multiplicative if f(mn) = f(m) f(n) whenever m and n are coprime. If we drop the coprimality restriction and require f(mn) = f(m) f(n) for all m and n, the function is called completely multiplicative; we will use this notion later as well. A beautiful and very useful example for number theory is the Möbius function μ, defined by μ(n) = (−1)^k when n is a product of k distinct primes, μ(1) = 1, and μ(n) = 0 otherwise. For example, μ(2²·3) = 0 by the definition, but μ(2·3·5) = (−1)³ = −1. The Möbius function is connected to many things; here we note that the prime number theorem — which counts the primes up to x — is directly connected with the average of the Möbius function. The prime number theorem answers: how many primes are there up to x? There is a nice asymptotic formula, π(x) ~ x / log x. Let me fix the notation for asymptotics. We say f(x) ~ g(x) as x → ∞ if the ratio f(x)/g(x) tends to 1; we write f(x) = o(g(x)) if the ratio tends to 0. We also use big-O notation: f(x) = O(g(x)), for a positive function g, means |f(x)| ≤ C g(x) for some absolute positive constant C and all x ≥ x₀. So beyond a certain point the inequality holds with a fixed constant — it records a growth rate.
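To make the definition concrete, here is a small sketch — my own illustration, not part of the talk — computing μ(n) by trial division (the function name is mine):

```python
def mobius(n: int) -> int:
    """Compute the Moebius function mu(n) by trial factorization.

    mu(n) = (-1)^k if n is a product of k distinct primes,
    mu(1) = 1, and mu(n) = 0 if any prime square divides n.
    """
    if n < 1:
        raise ValueError("n must be a positive integer")
    k = 0  # number of distinct prime factors found so far
    p = 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:  # p^2 divides the original n, so mu = 0
                return 0
            k += 1
        p += 1
    if n > 1:  # a leftover prime factor, appearing exactly once
        k += 1
    return -1 if k % 2 else 1
```

For instance, mobius(12) is 0 since 4 | 12, while mobius(30) = (−1)³ = −1.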
With this notation, the prime number theorem is equivalent to the statement Σ_{n ≤ x} μ(n) = o(x). There is a similar equivalence related to the Riemann hypothesis. The Riemann hypothesis says that the Riemann zeta function has all of its non-trivial zeros on the critical line Re(s) = 1/2 in the complex plane, and it is equivalent to square-root cancellation in the Möbius sum: Σ_{n ≤ x} μ(n) = O(x^{1/2+ε}). So this strong, essentially square-root cancellation is equivalent to the Riemann hypothesis, while mere o(x) cancellation is equivalent to the prime number theorem.

Now, the two most fundamental questions in the direction of short intervals. First: what is the shortest interval that contains a prime? This has been studied since around the beginning of the twentieth century, but there is still a lot of scope, because the problem is still open. Counting primes in short intervals is equivalent to an asymptotic formula for the short-interval sum of the von Mangoldt function Λ, where Λ(n) = log p when n is a power of the prime p and Λ(n) = 0 otherwise. The second question: what is the shortest interval that contains a sign change of the Möbius function, or some cancellation of the Möbius sum? Both fundamental questions remain open, so there is still scope for research. For the first, the best unconditional result is that one can take the interval [x, x + h] with h = x^{0.525}: Baker, Harman and Pintz proved around 2001 that such a short interval contains a prime.
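The prime number theorem just quoted can be checked numerically; a quick sketch of mine (assuming only the sieve of Eratosthenes, with names of my choosing):

```python
import math

def prime_pi(x: int) -> int:
    """pi(x): count the primes <= x via the sieve of Eratosthenes."""
    if x < 2:
        return 0
    is_prime = [True] * (x + 1)
    is_prime[0] = is_prime[1] = False
    for p in range(2, math.isqrt(x) + 1):
        if is_prime[p]:
            for m in range(p * p, x + 1, p):
                is_prime[m] = False
    return sum(is_prime)

# The prime number theorem: pi(x) ~ x / log x, i.e. the ratio tends to 1.
for x in (10**3, 10**5):
    print(x, prime_pi(x), round(prime_pi(x) / (x / math.log(x)), 3))
```

The convergence is slow (the ratio is still above 1.1 at x = 10^5), which is consistent with the logarithmic error in this form of the theorem.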
That was the first breakthrough after the averaged results, such as the result of Huxley: if h ≥ x^{1/6+ε}, then for almost all x ∈ [X, 2X], taking a(n) to be either Λ(n) − 1 or μ(n), the sum of a(n) over the short interval is o(h) — I am not writing the explicit error term, but o(h) is enough here. Equivalently: Σ_{x < n ≤ x+h} Λ(n) ~ h for almost all x ∈ [X, 2X], and Σ_{x < n ≤ x+h} μ(n) = o(h) for almost all x ∈ [X, 2X]. On the other hand, the construction of Cramér from 1936 — he used a probabilistic model — suggests that one should be able to take intervals as short as [x, x + C log² x]. So the current status of this question is still very far from the expected truth. Regarding the second question: in 2015 Matomäki and Radziwiłł gave a breakthrough result, not only for sign changes of the Möbius function in short intervals but for a general class of multiplicative functions bounded by 1. They related the short average to the long average and showed that the difference is very small — for almost all x, not uniformly — so one can study the short average in terms of the long average. A consequence is that the Möbius function has a lot of cancellation in the interval [x, x + ψ(x)], where ψ(x) may grow arbitrarily slowly: no matter how slow, as long as ψ(x) → ∞ with x; and one can also take any ψ(x) ≤ x.
So, writing it as ψ(x) ≤ h ≤ x, the statement holds for all x outside an exceptional set of small measure. To prove this — "almost all" means they work with L² norms — they consider a certain variance: the average of the square of this difference, but restricted to a certain subset S of the integers. This subset is the whole set of integers up to x except those with a prime factor in a prescribed interval [P, Q]; the neglected set is negligible. Drawing the picture: on [1, x] we mark P and Q, and we restrict to n ∈ S, meaning that p | n implies p ∉ [P, Q]. Then there is a nice identity called Ramaré's identity — one of the modern tools in multiplicative number theory — and using it they estimate the variance of this quantity restricted to S, and then conclude.

Now we look at another family of multiplicative functions: the k-free numbers. A k-free number is a number not divisible by any k-th power of a prime. The characteristic function μ_k of the k-free numbers is multiplicative with μ_k(p^a) = 1 for a ≤ k − 1 and μ_k(p^a) = 0 for a ≥ k. For example, μ₂(2·3) = 1 but μ₂(2²·3) = 0, while μ₃(2²·3) = 1 and μ₃(2³·3) = 0: powers up to k − 1 are allowed, power k kills it. That is how we detect the k-free numbers. For the k-free numbers a limiting average, that is a limiting density, is known: it is 1/ζ(k).
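As an aside (my own sketch, not the speaker's), the k-free indicator and its density 1/ζ(k) are easy to check numerically:

```python
def is_kfree(n: int, k: int) -> bool:
    """True iff no prime power p^k divides n (n is k-free); assumes k >= 2."""
    p = 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:
                n //= p
                e += 1
            if e >= k:
                return False
        p += 1
    return True  # any leftover prime factor divides n exactly once

# Density check: the proportion of k-free n <= X approaches 1/zeta(k).
X, k = 10**5, 2
count = sum(is_kfree(n, k) for n in range(1, X + 1))
print(count / X)  # close to 1/zeta(2) = 6/pi^2 = 0.6079...
```

At X = 10^5 the empirical density of squarefree numbers already matches 6/π² to three decimal places.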
Here k ≥ 2, obviously. Writing Σ_{n ≤ x} μ_k(n) = x/ζ(k) + E(x), Perron's formula naturally gives the error term E(x) = O(x^{1/k}). There is a slight improvement due to Walfisz, saving a small power, of the shape x^{1/k − δ}; and it is widely conjectured that in fact one has E(x) = O(x^{1/(2k)+ε}).

So far we have been talking about the global average; in the direction of short intervals an asymptotic is known: for k = 2, once h/x^{3/13+ε} → ∞ we have the asymptotic formula, and similarly for k ≥ 3 under an analogous condition with an exponent depending on k. So there is a restriction: one cannot take h very small — for large h it is well understood, for small h there is a problem. Similarly to the global case, for the k-free numbers in short intervals one expects the estimate
  Σ_{x < n ≤ x+h} μ_k(n) = h/ζ(k) + O(h^{1/2+ε}),
uniformly in the range x^ε ≤ h ≤ x. That is the conjecture one can expect in short intervals.

Next let us see what is known on the variance side — for the squarefree case only. We take the discrete variance
  (1/X) Σ_{x ≤ X} | Σ_{x < n ≤ x+h} μ₂(n) − h/ζ(2) |² ;
initially I showed an integral with the averaging variable real, but here we take a discrete sum.
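To see this object concretely, here is a numerical sketch of the discrete variance in the squarefree case k = 2; everything below is my own illustration under the definitions above, with names of my choosing:

```python
import math

def squarefree_flags(limit: int) -> list:
    """flags[n] is True iff n is squarefree, found by sieving out squares."""
    flags = [True] * (limit + 1)
    flags[0] = False
    for p in range(2, math.isqrt(limit) + 1):
        for m in range(p * p, limit + 1, p * p):
            flags[m] = False
    return flags

def discrete_variance(X: int, h: int) -> float:
    """(1/X) * sum_{x<=X} | #{x < n <= x+h squarefree} - h/zeta(2) |^2."""
    flags = squarefree_flags(X + h)
    mean = h * 6 / math.pi**2  # h / zeta(2)
    prefix = [0]
    for f in flags[1:]:
        prefix.append(prefix[-1] + f)  # prefix[m] = #{squarefree n <= m}
    total = 0.0
    for x in range(1, X + 1):
        total += (prefix[x + h] - prefix[x] - mean) ** 2
    return total / X

print(discrete_variance(10**4, 50))
```

The output is a small number compared with h, reflecting the strong concentration of squarefree counts around h/ζ(2) that the variance asymptotics quantify.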
So this is what we call the discrete variance. If the interval length satisfies h = o(x^{2/9} (log x)^{−4/9}), then this discrete variance has a nice asymptotic formula, with a constant that is complicated but explicit. One can also see that writing it in integral form, over a real variable, does not change much: in the discrete case we sample at the integer points, in the continuous case we integrate, and over each unit interval the two agree up to boundary effects — only at the endpoints can a term be counted or not counted. So we may allow a reasonable error and pass from the discrete to the continuous version.

The continuous version was considered in 2020 by Gorodetsky, Matomäki, Radziwiłł and Rodgers. They stretched the exponent of the short-interval length from the previous 2/9 − ε up to 6/11, that is about 0.5454. This is the case k = 2, the squarefree case. Under the Riemann hypothesis they also get a better range: one can stretch the short-interval length up to x^{2/3−ε}. From the variance estimate, simply by Chebyshev's inequality, one deduces: for almost all x ∈ [X, 2X], if some η(x) → ∞, then in the short interval (x, x+h], with h up to x^{6/11−ε}, the asymptotic formula holds with a good error term here.
What is the method? They write the squarefree indicator through its convolution structure, μ₂(n) = Σ_{d²|n} μ(d), and separate the small-divisor part from the large-divisor part. For the small divisors they estimate the variance using Fourier analysis together with Diophantine approximation theory; for the large divisors they use other machinery of analytic number theory — large-value estimates for Dirichlet polynomials as well as some theory of the zeta function.

Now the question is: what can we say when k ≥ 3 — the k-free case instead of the squarefree case — and can we consider a wide class of multiplicative functions and still get good results?

So we consider a large class of multiplicative functions. The way we define it is complicated, but that's okay — you don't need to remember the definition, only that we consider a large class of multiplicative functions with some restrictions. Let M be the set of multiplicative functions and G the set of completely multiplicative functions. For a given α > 0 we define the class M_α as the collection of functions of the form d ↦ μ²(d) g(d)/d^α with g ∈ M, and, when g is completely multiplicative, the class G_α of functions d ↦ g(d)/d^α. When g is completely multiplicative we do not need to control the multiplicative structure, but when we consider a general multiplicative g we insert the factor μ²(d) to restrict to squarefree support — that is why we write μ²(d) times this function.
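A concrete function that will serve as one of the examples of this class below is the generalized divisor function σ_α(n) = Σ_{d|n} d^α. It can be tabulated directly; a minimal sketch of mine (the function name is my own):

```python
def sigma_alpha(n: int, alpha: float) -> float:
    """sigma_alpha(n) = sum of d^alpha over the positive divisors d of n."""
    total = 0.0
    d = 1
    while d * d <= n:  # enumerate divisors in pairs (d, n // d)
        if n % d == 0:
            total += d ** alpha
            if d != n // d:
                total += (n // d) ** alpha
        d += 1
    return total
```

Special cases recover familiar functions: α = 0 gives the divisor-counting function τ(n), and α = 1 gives the sum-of-divisors function σ(n).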
Then we say that h, belonging to the union of these two classes, satisfies Property A if we fix its values at the primes: g(p) equals a fixed constant ±β (or a fixed complex constant) for all primes p outside a finite exceptional set, and g(p) = η_p, some complex numbers, at the finitely many exceptional primes. So we define our class with these restrictions: h belongs to the union of the two classes already defined, satisfies Property A, and f has the nice convolution structure
  f(n) = Σ_{d^k | n} h(d).
One can see from the definition that g(d) is allowed up to size d^ε. Our purpose is to include the generalized divisor functions, which is why we set it up this way; and this convolution structure is a generalization of the structure of μ_k, with h an essentially arbitrary multiplicative function.

Now the examples to keep in mind. First, the k-free numbers: μ_k belongs to the subclass F_{0,1,k}, so the k-free indicators form a subclass of the whole class. Second, the generalized divisor function σ_α(n) = Σ_{d|n} d^α, which is the n-th Dirichlet coefficient of ζ(s) ζ(s − α); if α < −1/2 then σ_α belongs to the subclass F_{α,1,1}, again a subclass of the total class. Third, a generalization of the Euler totient function, which again defines a subclass: consider an Euler product f(s) = Π_p (…) for Re(s) > 1, and fix the coefficients α_j(p) so that for each j the value is the same for all
primes — it varies with j but not with p. Then one obtains a totient-type function, in the style introduced by Kaczorowski: Φ_f(n) has a nice convolution structure of exactly our shape, so this is again a subclass of the whole class. The special case is the Euler totient: fixing f to be the Riemann zeta function, Φ_ζ(n) is the classical Euler function φ(n), which belongs to the subclass F_{1,1,1}. Similarly one can take a real non-principal character — real in order to get integer values; complex values are also allowed, but there we cannot get as good a result, so I do not mention it — and this again gives a subclass, of the shape F_{±1,…}.

Now, for this class, what is the mean value? The mean value can be derived simply by Perron's formula: Σ_{n ≤ x} f(n) = c̃(h,k)·x + (error), where c̃(h,k) is the limiting average, given by the Dirichlet series Σ_d h(d)/d^k. Since our f has the structure f(n) = Σ_{d^k | n} h(d), this series Σ_d h(d)/d^k comes into the picture exactly as the limiting average.

So what is our object of study? We consider the variance
  (1/X) ∫_X^{2X} | Σ_{x < n ≤ x+h} f(n) − c̃(h,k)·h |² dx.
After rearranging, and allowing some negligible effects — two things we are permitted to do — it is enough to consider this quantity, with the limiting mean value c̃(h,k)·h subtracted.

Theorem 1: take 0 < α ≤ 1/2 and β an integer; we restrict ε, depending on α and β; and we let h go up to a certain power x^E, where the exponent E will be described later — it is complicated, and
all the exponents, described later, are different. Up to this exponent we get an asymptotic formula: a constant c(h,k) — again a long expression that I am not writing here — times h^{1 − 1/(2k)}, times a polynomial of degree β² − 1 in log h, plus an error. The main point is that the error has a power saving that is not attainable by Perron's formula, and the Selberg–Delange technique is also difficult to apply here to get a power saving. The saving depends on k: we get δ(f,k,ε) = ε/8 when k = 2 and ε/(3(k+1)) when k ≥ 3. We also stretch the exponent of the short interval under the Lindelöf hypothesis. As a consequence, by Chebyshev's inequality again, we get the mean value estimate for almost all x, in a form which allows η(x) → ∞ as x → ∞.

The previous result was for α ∈ (0, 1/2]; the next result is for α ∈ (1/2, 2], with β an integer as before. Here we get another exponent of the short interval, Ê(k,α): for h up to this exponent we get the asymptotic formula, with some constant c(h,H,k). The interesting point here is that this constant cannot be separated from H — it genuinely depends on H — plus a reasonable error. Under the Lindelöf hypothesis we again stretch the range, to Ĝ(k,α).

So let me now describe these exponents. There are four in total: for α ∈ (0, 1/2], the unconditional exponent E(k,α) and, under the Lindelöf hypothesis, G(k,α); for α ∈ (1/2, 2], the unconditional Ê(k,α) and, under Lindelöf, Ĝ(k,α). Now we will look at how they behave. To obtain these exponents we need
a complicated expression, which we denote ν(k,α); it is the optimal solution of a certain optimization problem, which is why it looks complicated — but that's okay, you do not need to remember it; we will look at plots instead. For α ∈ (0, 1/2], in the case k = 2 the exponent exactly matches the exponent of Gorodetsky, Matomäki, Radziwiłł and Rodgers at α = 0; for k ≥ 3 we have the general expression, which depends on ν. Similarly for G(k,α), under the Lindelöf hypothesis, with α ∈ (0, 1/2]: there the short-interval exponent does not depend on α — it is independent of α, and has its own shape. Likewise for the case α ∈ (1/2, 1], and under Lindelöf we have Ĝ(k,α) there; and for α ∈ (1, 2] we have the corresponding expressions.

Now let us plot them. The first plot is of the function ν, which appears in the exponents, but you can forget that one and look at the third. There you see that E(k, 0) lies below the green curves — it is a bit hard to see, but the lower curve is α = 0 and the top one is α = 0.45, plotted against k. After a few points the curve looks like a horizontal line: that line is at about 0.77. So when k is fairly large, the short-interval exponent we can reach is at most about 0.77. Similarly for Ê, when α ∈ (1/2, 1]: Ê(k,α) starts around 0.51 — that is the first graph of the second row — and stretching α gives better bounds. Depending on
α — if you increase α you get a better short-interval exponent. That is one purpose of including α here: one could take α = 0, which already allows a large class, but including α also stretches the short-interval length. Conditionally we do a bit better; it is not difficult to compare in these graphs. In the second graph of the second row, G(k,α) also sits at about 0.77 after a certain point, and the conditional curve is always above the unconditional one.

Now let us see the applications: so far we have a general class of functions, so let us see how it plays out in special cases. If we consider the totient ratio φ(n)/n, we have a variance estimate of the following form, valid throughout the whole range 2 ≤ h ≤ x^{1−ε} — that is the main interesting point here: it is true for any such h — with a constant c_ζ(H) depending on H, and some error terms. The constant is complicated, but the point is that it depends on the fractional parts of H/d and H/d², so it depends on H. Now, there is a prediction, coming from the function-field setting, that this constant should converge uniformly to a fixed value, numerically about 0.03: in the function-field model this type of constant arises, and using a hypothesis of that flavour one obtains the uniform limit. But you see from our result: for h up to x^{1−ε} — they go up to x, we go up to x^{1−ε} — the constant genuinely depends on H in this range. That seems to be in tension with their prediction. Indeed, for H a real number the constant goes up to about 0.833 as an
upper bound, and if we take H ranging over the sequence of prime numbers, along that subsequence the constant stays between about 0.13 and 0.36 — quite far from the predicted uniform constant. Here are the plots of this constant. The first one treats H as a real variable, varying from 10,000 to 15,000. The second graph is drawn over an interval of length ten after 10,000, and it shows the same type of behaviour. But if we take H as an integer sequence, we find four layers in the plot. Numerically we checked what the layers are: the first layer is H ≡ 0 (mod 6), the second is H ≡ 2, 4 (mod 6), the third is H ≡ 3 (mod 6), and the last is H ≡ 1, 5 (mod 6). I don't know why this behaviour happens, but numerically that is what we observe.

Another corollary, or application, concerns the k-free numbers. Taking ε ∈ (0, 3/10) — in the squarefree case the earlier range was (0, 1/100), so we also generalize that — and 2 ≤ h ≤ x^{E(k,0)}, we have the asymptotic formula for the variance of the k-free numbers, and it comes with a power saving ε/(3(k+1)) in the error. The constant here is a bit complicated, but just remember it is some constant depending on k; and under the Lindelöf hypothesis we stretch the exponent, as I said. Looking at the short-interval exponent: for k = 2, the squarefree case, the known exponent was 0.5454; for k = 3 our exponent is 0.60; for k = 4 it increases further; and if you take k = 10^5 it is about 0.77773, so
that is a five-digit approximation of the exponent for that k. So there is a barrier: around 0.7774 or 0.7775 — one cannot get more in the k-free case by this method. The discrete variance was recently considered by Gorodetsky, Mangerel and Rodgers (2023); their exponents are about 0.3 for k = 3, 0.34 for k = 4, and 0.5 for k = 10^5, while the case k = 2 is the result I recalled earlier — so you can compare with the exponents we get. Also, their range is ε ∈ (0, 1/100), whereas we allow ε up to 1/3 of the stated range; and in the error term, their saving of the shape h^{1/2 − ε/16} is replaced in our result by an ε/9 saving.

The third application is the generalized divisor function. Chowla proved in 1932, for α ∈ (−1, −1/2), a global mean value estimate: some constant times a main term, plus an error term. In 1998 Kiuchi and Tanigawa considered the short average and showed that the variance in short intervals is ≪ x^ε for h up to x^{1/2}. What we get here is an asymptotic formula in a certain range of h, up to x^{29/(113 + 84α)}, when α ∈ (−1, −1/2); and when α ∈ (−2, −1) we can allow the whole range of h. In both cases we have the asymptotic formula with some constant c(H), defined appropriately; and under the Lindelöf hypothesis we stretch the short interval and again get the asymptotic formula. The other interesting point is the second one: we actually extend the range of the interval beyond x^{1/2} when α is below about −0.65. For example, at α =
−3/4 the exponent is 0.58, and similarly near α = −1 it is about 0.97. One can also apply this to several other interesting multiplicative functions; we just mention two. The first is the Schemmel-type totient, counting the runs of m consecutive integers below n that are relatively prime to n; the second is n ↦ (−1)^{#{p : p^t | n}}, built from the number of primes whose t-th power divides n. These two have nice multiplicative structure, so I just mention them here.

Now the sketch of the proof — how much time do I have? We use Fourier analysis, and we separate two cases: h somewhat large, say h ≥ x^ε, and h small. In the large case, we separate according to the divisors in the convolution structure of the multiplicative function: given a parameter z, one term collects the small divisors, with d ≤ z, and the other the large divisors, with d > z. This is the standard device used by Hall and by Gorodetsky, Matomäki, Radziwiłł and Rodgers: they break the sum this way and study the parts separately. Then, by the Cauchy–Schwarz inequality, the variance over [X, 2X] splits as
  V ≪ I_{k,1} + I_{k,2} + √(I_{k,1} I_{k,2}),
where I_{k,1} is the variance of the small-divisor part (d ≤ z) and I_{k,2} is the variance of the large-divisor part. Next, starting from the characteristic function of [1, 2], one constructs smooth majorants and minorants, as good as we wish, whose Fourier transforms have nicely controlled support; we choose them so that the difference of the corresponding integrals is small, which is what we need. This has a nice
explicit construction due to Selberg, using Beurling's extremal functions; for a reference, see the nice survey by Montgomery (2001), "Harmonic analysis as found in analytic number theory". Using this, in Step 3 we can write the small-divisor variance in terms of an integral over the whole line, allowing a reasonable error — of size a small negative power of X — which we have to balance later; for the time being we keep it.

Now we look at the structure of the short average: we can write it as the main term h Σ_{d^k ≤ z} h(d)/d^k plus terms involving the sawtooth function ψ, where ψ(y) = y − ⌊y⌋ − 1/2. We expand ψ in its Fourier series, keep the first N terms, and estimate the tail; N is at our disposal and will be chosen later depending on X. Using this, the smoothed small-divisor variance is expressed in terms of the Fourier transform of the smooth weight σ. Inside, as usual, we split into diagonal and off-diagonal terms: the diagonal contribution comes from the solutions where the relevant difference equals zero, and the off-diagonal from those where it does not; in the off-diagonal case we use the support of the Fourier transform of σ. The diagonal contribution can then be written in a form involving W(x)² — originally (sin(πx)/(πx))², which we replace by a nice smooth approximation; this is fine, and one can take a nice complex-analytic approximation as well. Now we define a function w by writing W(x)² times e(x), and then we observe that the Fourier transform of w is related to a Mellin transform by an explicit relation:
To spell out that relation: the Fourier transform of ω is essentially a Mellin transform of Ω, up to a shift by −1 which is absorbed into the definition of ω; that balance is exactly why ω is defined the way it is. Why do we pass through this relation? Because in the diagonal term we have the square of Ω, a positive quantity, and we want to express that positive quantity through the Fourier transform of ω; that is the reason for the change. From the definition of ω, differentiating the last factor, one checks that its Fourier transform is a nicely decaying function, and in fact uniformly so as long as the imaginary part of the variable ξ satisfies |Im ξ| < 1/(2π), so that the modulus |y − 2πiξ| stays bounded away from zero. Under this restriction the Fourier transform of ω has the growth and decay we require; so we choose Ω smooth enough that ω̃ has all the decay needed for our result.

Then, by Fourier inversion, which is here again a form of Mellin inversion, we recover Ω(r)² for r > 0 as a contour integral, valid on any vertical line Re s = c with −1 < c < 1; this is just the standard passage between the Fourier and the Mellin transform. The diagonal contribution can then be rewritten in terms of a zeta-function expression. Now we simply shift the contour, and in shifting the contour one must estimate the tails, which is where the decay of the Fourier transform of W is needed; one needs quite good decay there. In the off-diagonal case what remains is a counting problem. To attack this, by now standard, problem we use the following lemma.
It is due to Mahler, with a refinement by Stewart and Xiao (2019). Take any binary form F of degree 3 with nonzero discriminant; then the associated counting problem, the number of pairs (x, y) in Z² with |F(x, y)| ≤ Z, has an asymptotic formula. The second result, refining the first, says that if one restricts to solutions that are somewhat large, the count becomes correspondingly smaller. Combining the two in a suitable way, by writing the expression in dyadic ranges and applying each where appropriate, the whole off-diagonal contribution goes into the error term.

In the large-divisor case we apply, as I mentioned before, one of the modern tools of analytic number theory, the Matomäki–Radziwiłł theorem, which is also used by Gorodetsky, Matomäki, Radziwiłł and Rodgers. Its main message is that the short average of a bounded multiplicative function cannot be large too often: it has to be small quite frequently. Using this, together with some standard mean-value results for multiplicative functions, we estimate the large-divisor variance, and it behaves asymptotically like the expected main term.

When h is small, say less than x^ε, we look instead at the discrete version of the variance. As was said, the continuous and discrete versions are not very different, and in this regime the discrete one is easier to study. When α lies between 0 and 1/2 we get the same asymptotic formula as in the continuous case, with an admissible exponent that is, as expected, very small. Similarly in the next range of α we again get the same asymptotic formula with a small exponent, the first estimate covering α up to 1/2 and the second the rest; these exponents are of course much smaller than in the continuous case. So, combining: for small h one works with the discrete variance directly, and otherwise one replaces it by the continuous version at the cost of an error term, and for the continuous version with large h we have the estimates above.

Questions or comments?

[Audience] In the statement of the theorem, I did not follow where the 0.5 comes from, and whether the statement about primes is for x large or for all x.

Sorry, yes: it is for x large. For small h there are many computational results, by Olivier Ramaré, Yannick Saouter and others; roughly, they show computationally that for an explicit constant and an explicit range of x, every such short interval contains a prime, that type of result. It is computational, but it is a reasonable benchmark; from their work the statement holds once x is suitably large. In our setting the exponent can be made small in this regime, but again only for sufficiently large x.

[Audience] Considering the difference between the short average and the global average, is there some interpretation, at least if one looks at that distance or difference?

Yes. We already have a long-average result, so if we can compare the short average with the long average, we in effect obtain a short-average result; that was the point of the two statements I mentioned. For instance, there is the short-interval result saying that for almost all x the Möbius function cancels in short intervals: the sum of μ(n) over x < n ≤ x + h is o(h), and this is equivalent to the prime number theorem; that is how one deduces the short-interval statement. Similarly, replacing μ(n) by λ(n), cancellation in the corresponding sum lets the prime number theorem guarantee a sign change.

[Audience] What is the best estimate here conditionally? If you assume the Riemann Hypothesis, given a point x and the interval from x to x plus something, what is the smallest "something" that guarantees a sign change for the Möbius function, say for x large?

Okay, so for the Möbius function you are asking for a uniform statement, but the behaviour in n is not monotone at all. I mentioned something for almost all x, where "almost all" is in the usual sense of density: positive density versus zero density. Under the Riemann Hypothesis there is some exponent, I do not remember it exactly, but it is certainly bigger than x to the one half, because matching the prime number theorem error term would require x^{1/2+ε}, and perhaps this ε can be improved; there is a result replacing it by a power of log x, and I think the same kind of bound holds here, though I am not sure. In the almost-all case one can do better: for any function ψ tending to infinity, almost all intervals of length ψ(x), with x ranging over [1, X], contain a sign change; the catch is precisely that one needs "almost all". So that is the best result in that sense. If one wants something holding for all x, then it is x to the half times some function, but that function is expected to be far from the truth.

[Audience] And this is for every interval of that form, under RH? And the exact power in the extra factor does not matter?

Yes, whether that extra factor carries a power 1.5 or 2.5 makes no real difference.
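As a numerical companion to this sign-change discussion, here is a small sketch (my own illustration; a finite computation of course proves nothing asymptotic). It sieves μ(n), checks that the Mertens sums M(x) = Σ_{n≤x} μ(n) stay below √x in this tiny range, in the direction of the RH-type bound M(x) = O(x^{1/2+ε}), and verifies that a single short interval already shows strong cancellation and both signs of μ:

```python
import math

def mobius_sieve(n):
    """Compute mu(1), ..., mu(n) with a linear sieve."""
    mu = [1] * (n + 1)
    is_comp = [False] * (n + 1)
    primes = []
    for i in range(2, n + 1):
        if not is_comp[i]:
            primes.append(i)
            mu[i] = -1
        for p in primes:
            if i * p > n:
                break
            is_comp[i * p] = True
            if i % p == 0:
                mu[i * p] = 0      # p^2 divides i*p, so mu vanishes
                break
            mu[i * p] = -mu[i]     # one extra distinct prime factor
    return mu

N = 100_000
mu = mobius_sieve(N)

# Mertens sums M(x) = sum_{n <= x} mu(n); track max |M(n)| / sqrt(n).
M, ratio = 0, 0.0
for n in range(1, N + 1):
    M += mu[n]
    if n >= 2:
        ratio = max(ratio, abs(M) / math.sqrt(n))

# Cancellation and sign changes in one short interval (x, x + h].
x, h = 50_000, 1_000
short_sum = sum(mu[n] for n in range(x + 1, x + h + 1))
signs = {mu[n] for n in range(x + 1, x + h + 1)}
```

In this range the short sum is tiny compared with h, and both values ±1 occur, which is the elementary face of the sign-change statements above.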
[Audience] My question was about these sorts of averages. I came across a similar problem a few months ago. People always think the easiest way to average is to use the characteristic function of the interval: you sum over the interval and divide by the number of terms. That is one average, but there are infinitely many ways of averaging: multiply the function by a weight and divide by the total weight. At some point in your proof you said, I am going to replace the characteristic function of the interval by a majorant, because I want it for the Fourier analysis. What happens if you set up the average from the start not with the characteristic function but with a triangle, say: a weighted sum against the triangle, minus the average value. One could consider the same problem with the triangle, or with a Gaussian, in place of the characteristic function of the interval. People do care about these weighted versions. Sometimes such things look more complicated up front but are actually simpler: if you put in the triangle or the Gaussian from the beginning, the Fourier analysis is already built in, and you do not need any majorants; the Fejér kernel is already there.

Yes, true, thank you. Here, though, the counting problem plays a crucial role; without that counting problem we cannot get through. If one takes other weight functions, the counting problem may not come out so nicely.

[Moderator] Any further questions? Then let us thank the speaker again. I think our next seminar, by Diego, is one week from now.
Okay, next Tuesday at two, and we will send the usual reminder emails. You can stop the recording now.
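The triangle-versus-sharp-cutoff point raised in the discussion can be made concrete. Below is a minimal numerical sketch (my own illustration; the choice of the Liouville function, the interval, and the triangle weight are arbitrary, not taken from the talk): both the sharp short average and the triangle-weighted (Fejér-type) average of λ(n) exhibit the expected cancellation, and the triangle weight has a nonnegative Fourier transform, which is why no majorant is needed in that setup.

```python
def liouville_sieve(n):
    """lambda(k) = (-1)^Omega(k) via a smallest-prime-factor sieve."""
    spf = list(range(n + 1))          # smallest prime factor
    for p in range(2, int(n ** 0.5) + 1):
        if spf[p] == p:               # p is prime
            for m in range(p * p, n + 1, p):
                if spf[m] == m:
                    spf[m] = p
    lam = [0, 1] + [0] * (n - 1)
    for k in range(2, n + 1):
        lam[k] = -lam[k // spf[k]]    # peel off one prime factor
    return lam

N, x, h = 200_000, 100_000, 2_000
lam = liouville_sieve(N)

# Sharp cutoff: (1/h) * sum_{x < n <= x+h} lambda(n).
sharp = sum(lam[n] for n in range(x + 1, x + h + 1)) / h

# Triangle weight centred at x + h with support of length 2h; its
# Fourier transform is the (nonnegative) Fejer kernel.
def tri(n):
    return max(0.0, 1 - abs(n - (x + h)) / h)

smooth = (sum(lam[n] * tri(n) for n in range(x + 1, x + 2 * h)) /
          sum(tri(n) for n in range(x + 1, x + 2 * h)))
```

Both averages come out small in this range, consistent with the cancellation heuristics discussed in the talk.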