Okay, thanks. Thanks for the invitation to speak at this nice seminar; I hope it continues in the future. Before I start, let me say that pretty much everything I want to talk about today is joint work with Ian Petrow. I should also say that I've given talks on some of these works at other venues, so I know some of the specialists have seen this material before, and I've tried to pitch this talk at a broader audience. You can judge for yourself whether I succeeded, but I've made some attempt at it. Okay, so let's start. The theme motivating this talk, at least the beginning part, is to compare and contrast the Riemann zeta function with Dirichlet L-functions. For example, going way back in history, it's well known that properties of the Riemann zeta function can be used to prove the prime number theorem. There's a pretty standard zero-free region for the zeta function of the following form: at distance about 1/log T to the left of the line Re(s) = 1, we know there are no zeros. For zeta, we actually know a larger zero-free region, the Vinogradov-Korobov type region, and larger zero-free regions lead to better error terms in the prime number theorem; that's one reason for being interested in this. Likewise, we also have Dirichlet L-functions, and we know that properties of them, especially zero-free regions, can be used to prove the prime number theorem in arithmetic progressions. Just as for zeta, there is a standard zero-free region of width about 1/log to the left of Re(s) = 1, with the difficult exception of a possible real zero when chi is quadratic. I don't really have anything to say about that problem here; I just want to compare and contrast these things. So some things are similar between zeta and Dirichlet L-functions, and some things are different.
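To fix notation, the classical zero-free region and its Vinogradov-Korobov improvement mentioned above can be written as follows (standard formulations; the constants c are effective but unspecified):

```latex
% Classical zero-free region: there is c > 0 such that zeta(s) has no
% zeros with s = sigma + it in the region
\[
  \sigma \ge 1 - \frac{c}{\log(|t|+2)} .
\]
% The Vinogradov--Korobov region widens this to
\[
  \sigma \ge 1 - \frac{c}{(\log |t|)^{2/3} (\log\log |t|)^{1/3}}, \qquad |t| \ge 3 .
\]
```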
So one difference here is this possible real zero, which doesn't have an analog for the zeta function. But there's also the improved zero-free region of Vinogradov-Korobov type, which I didn't write down explicitly because it's not really going to come up later in the talk. So this is a case where we have a better zero-free region for zeta than for Dirichlet L-functions. Okay, another theme for comparing and contrasting zeta and Dirichlet L-functions is moments. There are all sorts of moment problems we could discuss, and I chose the fourth moment of zeta as a representative example. The state of the art for the fourth moment of zeta is an asymptotic formula: T times a degree-four polynomial in log T, and then the best known error term is T^{2/3+epsilon}. I'm not exactly sure where this was first proved; certainly a proof can be found in Motohashi's book on the spectral theory of the Riemann zeta function. I know some people cite Zavorotnyi for this, but I wasn't able to find that paper; I believe it's in Russian. Anyway, this is just a representative result for this discussion. Backing up a little: the first asymptotic formula with some kind of power saving, meaning an exponent strictly less than one (here it's 2/3), was due to Heath-Brown, twenty years before Motohashi's book, with an error term of size T^{7/8}. There's an analogous question for Dirichlet L-functions: we can look at the fourth moment of Dirichlet L-functions. Again it's a similar problem and we have an asymptotic formula; let's just say q is prime. It's q times some degree-four polynomial in log q, plus an error term of size q^{19/20}, which is the state of the art.
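Written out, the two fourth-moment asymptotics being compared have the following shape (P4 and Q4 are certain degree-four polynomials; I am transcribing the shapes as stated in the talk):

```latex
\[
  \int_0^T \bigl|\zeta(\tfrac12+it)\bigr|^4\,dt
    = T\,P_4(\log T) + O\!\bigl(T^{2/3+\varepsilon}\bigr),
\]
\[
  \sum_{\chi \bmod q} \bigl|L(\tfrac12,\chi)\bigr|^4
    = q\,Q_4(\log q) + O\!\bigl(q^{19/20+\varepsilon}\bigr)
  \qquad (q \text{ prime}).
\]
```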
And this is much more recent, due to Blomer, Fouvry, Kowalski, Michel, and Milićević from 2017. And the first power-saving error term took much longer than Heath-Brown's result did for zeta; it first appeared in 2011. I don't even remember exactly what the exponent was; it had a rather large denominator. It's an interesting phenomenon: even if you don't know about the techniques that might be used, we can see that there's a long delay in time in going from zeta to Dirichlet. And also the exponents are not the same, so there's a deficit in the quality of the error term. So we're seeing some difference between zeta and Dirichlet here, and I think it's good to think about what the underlying reason for this difference is. One answer is that we can often produce results (we collectively as a community, not me, the royal we) where Dirichlet L-functions are on more of an equal footing with the zeta function, but only for moduli q that factor into many small prime factors, like when q is smooth. That factorization is the thing that lets us adapt some of the tools from the zeta case to the Dirichlet L-function case. When q is prime, there's no factorization possible, so some of those tools are missing, and that's maybe some explanation for this lag. [There's a question in the chat; let me read it out loud.] What is the significance and motivation behind studying moments of L-functions? Well, that's a good question. One answer is that we're interested in statistical properties, like how big the zeta function or L-functions can be.
But moments are also used for other problems, like if we want to get nontrivial bounds, and they're also used for proving non-vanishing results: if you have an asymptotic formula for some moment of a family of L-functions, that means they can't all vanish. So moments are one of the only ways we can really get our hands on L-functions, by putting them in families. I don't know if that's a good enough answer, but there it is. So, for these fourth moment problems, a little more specifically, I thought I could include a bit about why they're different. For the fourth moment of the zeta function, what we have to study are shifted divisor problems, sums of d(n) d(n+h), where for zeta the shift variable h is rather small. So we're looking at pairs of integers n and n+h that are close by, meaning close by as real numbers. If you study the same problem for the fourth moment of Dirichlet L-functions, you also get a kind of shifted divisor problem, but now the two integers are no longer close as real numbers: they're congruent modulo q, and they might be quite far apart as real numbers. And this causes all sorts of problems when we try to prove things about the fourth moment of Dirichlet L-functions; it's one of the main issues. Somehow, close in the archimedean sense, as real numbers, versus close q-adically are different things with different behaviors. Okay, so next, for my third warm-up comparing zeta and Dirichlet L-functions: individual bounds for L-functions. I want to discuss the Weyl bound for the Riemann zeta function. This was, I think, proved by Hardy and Littlewood, but using Weyl's differencing method as the key input.
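As a toy illustration of the shifted divisor sums just mentioned, here is a numerical sketch (not part of the actual proofs; the function names are my own):

```python
# Toy computation of the shifted divisor correlation sum
#   D(N, h) = sum_{n <= N} d(n) * d(n + h),
# the kind of quantity that arises in the fourth moment of zeta.

def d(n: int) -> int:
    """Number of divisors of n (naive count)."""
    return sum(1 for k in range(1, n + 1) if n % k == 0)

def shifted_divisor_sum(N: int, h: int) -> int:
    """Correlate the divisor function against its shift by h."""
    return sum(d(n) * d(n + h) for n in range(1, N + 1))

if __name__ == "__main__":
    for h in (1, 2, 3):
        print(h, shifted_divisor_sum(100, h))
```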
So they proved a bound saying that zeta on the half-line grows like T^{1/6}. To judge this, we have to compare it to what you might call a trivial bound. One well-known benchmark is the convexity bound, the bound that results from the Phragmén-Lindelöf convexity principle. It only uses the functional equation and the behavior of the zeta function where its Dirichlet series converges absolutely, so it's really only using weak information about zeta. Weyl's method is useful for more than just bounding the zeta function; it's used to bound pretty general exponential sums, sums of e^{2 pi i f(n)} where f is some reasonable function. The case relevant to zeta is when the summand is n^{it}. It has other applications too, to lattice point counting problems. Now, for Dirichlet L-functions, it took much longer before anyone obtained a bound better than convexity. This goes back to Burgess in 1963: he proved the bound q^{3/16}. In this case the convexity bound is still the same exponent 1/4, so q^{1/4}. And 3/16 is less than 1/4, but it's also bigger than 1/6, somewhere in between. So the Burgess bound is nontrivial, better than convexity, but also not as good as the Weyl bound, which predated it by several decades. Burgess's method has also been useful for other things; one neat application is estimating the least quadratic non-residue modulo a prime. One thing to mention here is that one of the inputs into Burgess's bound is certain exponential or character sums modulo a prime, in which he needed square-root cancellation.
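Just to make the exponent comparison concrete, here is some simple arithmetic (nothing deep; the variable names are my own):

```python
# Compare the three benchmarks for a Dirichlet L-function at the
# central point: convexity q^{1/4}, Burgess q^{3/16}, Weyl q^{1/6}.

from fractions import Fraction

CONVEXITY = Fraction(1, 4)
BURGESS = Fraction(3, 16)
WEYL = Fraction(1, 6)

def bound(q: float, exponent: Fraction) -> float:
    """Size of q^exponent, ignoring epsilon powers and constants."""
    return q ** float(exponent)

if __name__ == "__main__":
    q = 10**6
    for name, e in [("convexity", CONVEXITY), ("Burgess", BURGESS), ("Weyl", WEYL)]:
        print(f"{name:10s} q^{e} ~ {bound(q, e):.1f}")
```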
And this was provided by the Riemann hypothesis for curves over a finite field, Weil's theorem; so there's some deep algebraic input. Okay, so Burgess's result was the state of the art for a very long time. And in 2000, Conrey and Iwaniec improved this for quadratic characters: they proved a Weyl-quality bound, with the same exponent 1/6 that Weyl got for the zeta function. Okay, so this was a big deal. They proved this for chi quadratic; what I'll be discussing throughout this talk is generalizing that to more general characters. The method is also very interesting: it's completely different from these earlier results of Weyl and Burgess, and what it does is study moments. This possibly answers the earlier question of why we're interested in moments: one reason is that they help us understand even individual values. Their approach to this bound is quite interesting. This degree-one L-function, they think of it as a degree-two L-function: it's squared. Or maybe, to put it in better terms, they cook up a family, a GL(2) family of automorphic forms, and that family contains cusp forms and Eisenstein series that come together in a natural way. And the Eisenstein part of that family supplies this Dirichlet L-function, absolute value squared. Now, on an upcoming slide I'll be more precise about what moment they look at, what family, and so on; here I'm just saying that this is an interesting method because it passes to this GL(2) family. Another thing to mention is that there's an important input in their proof.
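As a toy illustration of square-root cancellation in character sums, here is the quadratic Gauss sum, whose magnitude is classically exactly sqrt(p); this is a much simpler statement than the Weil and Deligne inputs the talk refers to, but it shows the phenomenon:

```python
# Quadratic Gauss sum mod an odd prime p:
#   g(p) = sum_{n=0}^{p-1} (n/p) * exp(2*pi*i*n/p),
# where (n/p) is the Legendre symbol.  Classically |g(p)| = sqrt(p):
# square-root cancellation in a sum of ~p unimodular terms.

import cmath
import math

def legendre(n: int, p: int) -> int:
    """Legendre symbol (n/p) for an odd prime p, via Euler's criterion."""
    n %= p
    if n == 0:
        return 0
    r = pow(n, (p - 1) // 2, p)
    return 1 if r == 1 else -1

def gauss_sum(p: int) -> complex:
    return sum(legendre(n, p) * cmath.exp(2j * math.pi * n / p)
               for n in range(p))

if __name__ == "__main__":
    for p in (7, 11, 13):
        print(p, abs(gauss_sum(p)), math.sqrt(p))
```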
It goes beyond Burgess in that they have a certain character sum, but in two variables instead of one, and they need square-root cancellation in that character sum; this is crucially provided by the Riemann hypothesis for varieties over a finite field, due to Deligne. So it's harder in that sense: at the time Burgess was working, we didn't have this; Deligne's proof came in the 1970s. Another thing to say about their work: it doesn't just bound Dirichlet L-functions, it also bounds quadratic twists of cusp forms. As an example, if you take a level-one cusp form, like the Ramanujan delta function, and twist it by quadratic characters, then they get the bound q^{1/3+epsilon}. We should think of the 1/3 as really being 1/6: it's (q^2)^{1/6}, because the conductor of this twisted L-function is q^2. So this is again the same exponent 1/6. Another reason this is interesting is that, by the work of Waldspurger, as refined by Kohnen and Zagier, these families of quadratic twists of GL(2) L-functions are related to Fourier coefficients of half-integral weight cusp forms. So if we can bound quadratic twists of L-functions, we can bound these half-integral weight Fourier coefficients. I also wrote a paper some years back that studied some other variants of this problem, for shrinking sets and other things like that. And these subconvexity bounds have applications to equidistribution problems for Heegner points, and for points on ellipsoids, like integral points on the sphere, and other things. So the main result I want to mention today is this theorem with Ian Petrow, which says that for cube-free moduli q we have the Weyl-quality bound q^{1/6}.
And this holds for any character chi. I think the hardest case to keep in mind is just the case when q is prime. When q has some factorization, there are some other techniques that might be useful, but when q is prime those are certainly off the table. Just like Conrey and Iwaniec, the main idea to get started is to put this Dirichlet L-function, absolute value squared, into a family, so that the corresponding Eisenstein series gives you this L-function squared, and then to work with this family. One input will be Deligne's bound, although we have to use some more advanced tools to be able to apply Deligne's bound. We actually wrote two papers: the first one came out on the arXiv in 2018 and was published last year, and then there's a follow-up paper which basically just lets us delete "cube-free" and say that the bound holds for all q. So one part of the talk is to explain why we had this cube-free condition and how we removed it to cover all q. And, just for the sake of interest: I've mainly been looking at these L-functions at the central point, just for simplicity, but the method really treats t and q on an equal footing, and we get a hybrid bound of ((1 + |t|) q)^{1/6}. So it's completely uniform in how it treats t and q, at least in terms of the bound; in the method itself, we have to work a lot harder for the q-aspect than for the t-aspect. So that was the main theorem. Now I want to set up some notation for a bit, and then circle back around and say more about how the result is proved. Alright, so let L(pi, s) be an automorphic L-function, and I want to use lambda_pi(n) for its Dirichlet series coefficients. This will have an Euler product.
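Stated symbolically, the main theorem as I've transcribed it (the epsilon convention is the usual one):

```latex
% Weyl-quality hybrid bound (Petrow--Young), chi a Dirichlet character mod q:
\[
  L(\tfrac12 + it, \chi) \;\ll_{\varepsilon}\;
    \bigl((1+|t|)\,q\bigr)^{1/6+\varepsilon},
\]
% first proved for cube-free q, then extended to all q in the follow-up paper.
```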
The representation pi has some degree d, although I think for this talk the only degrees of interest are one and two, so you don't lose much by thinking of degree one or degree two. The L-function will have a gamma factor, a product of d gamma factors of the form Gamma_R(s + mu_j). And I want to normalize things so that the functional equation relates the L-function at s and at 1 - s. Then there is an integer q = q_pi which appears in this functional equation. Just as an example, the most fundamental one would be to take a primitive Dirichlet character mod q. Then this L-function is the same Dirichlet L-function I've been talking about this whole talk, the gamma factor is Gamma_R of either s or s + 1 depending on whether the character is even or odd, and the integer q is just the usual conductor of the character. There's a notion of an analytic conductor, which is the product of this integer q that appeared in the functional equation and a factor built from the gamma factor data. The analytic conductor incorporates information from both q, the conductor, and the archimedean factor, the gamma factors. The analytic conductor is a good measure of the complexity of an L-function, and the way to explain that is by way of the approximate functional equation. The approximate functional equation writes an L-function essentially as two finite sums; really, there should be a weight function here that decays quite rapidly past the square root of the conductor. The point is that to find a finite representation of the L-function inside the critical strip takes about the square root of the analytic conductor many terms of the Dirichlet series. The Dirichlet series doesn't converge absolutely inside the critical strip, but we do have this representation.
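Schematically, with C the analytic conductor, the approximate functional equation described above reads (suppressing the smooth weights and the root number):

```latex
\[
  L(\tfrac12, \pi) \;\approx\;
  \sum_{n \lesssim \sqrt{C}} \frac{\lambda_\pi(n)}{\sqrt{n}}
  \;+\; (\text{root number}) \sum_{n \lesssim \sqrt{C}}
        \frac{\overline{\lambda_\pi(n)}}{\sqrt{n}},
  \qquad
  C = q_\pi \prod_{j=1}^{d} \bigl(1 + |\mu_j|\bigr).
\]
```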
If you imagine that these lambda_pi(n) are bounded by some kind of divisor function, then you could just plug that in, putting the trivial bound inside the approximate functional equation, and you would get a bound of the analytic conductor to the 1/4 power, which corresponds to the convexity bound. We don't actually know those pointwise bounds (that would be part of the Ramanujan conjectures), but we have good bounds on average over n, and you don't really need the bound for all n, just on average; so we do have this convexity bound. One of the motivating goals in the subject is to beat the convexity bound: to prove some bound that's the analytic conductor to a power strictly less than 1/4. In light of this representation by the Dirichlet series, what that amounts to is proving cancellation in these L-function coefficients, because if you just bound everything in absolute value you get the convexity bound; you have to prove there's some cancellation. Okay, so now let's talk more about GL(2). I said earlier that we're going to view these Dirichlet L-functions as corresponding to the Eisenstein part of some GL(2) family, so we'll have to talk about GL(2) families here. I want to let H_k(q, psi) be the set of holomorphic newforms of weight k, level q, and central character psi; and there's an analogous set of Maass newforms, with spectral parameter t_j. I'll mostly be stating the results for Maass forms, because it's the Maass forms and the Eisenstein series that go together, and it's the Eisenstein part that produces the Dirichlet L-function. But if you don't like Maass forms as much and you want to think about holomorphic newforms, you won't lose a whole lot.
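The convexity heuristic in the paragraph above, written out (assuming, for the sake of the heuristic, that lambda_pi(n) is bounded by d(n) n^epsilon):

```latex
\[
  L(\tfrac12,\pi) \;\ll\; \sum_{n \lesssim \sqrt{C}} \frac{|\lambda_\pi(n)|}{\sqrt{n}}
  \;\ll\; \sum_{n \lesssim \sqrt{C}} \frac{n^{\varepsilon}}{\sqrt{n}}
  \;\ll\; C^{1/4+\varepsilon} .
\]
% Subconvexity means an exponent strictly below 1/4:
\[
  L(\tfrac12,\pi) \;\ll\; C^{1/4-\delta} \quad \text{for some fixed } \delta > 0 .
\]
```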
So this subconvexity problem for GL(2) was really kicked off, I would say, by a series of papers by Duke, Friedlander, and Iwaniec. And then there are many, many other authors who have contributed to this problem, far too many for me to even begin naming. One of the big results in this direction is due to Michel and Venkatesh, and in a certain sense they completely solved the subconvexity problem for GL(2): no matter how you want to vary your GL(2) cusp form, whether you vary the weight or the spectral parameter or the level or anything, they proved a subconvexity bound. So in that sense it's completely solved. For this talk, I'm more focused on the actual exponents, and then there are still all sorts of things you can do to compare and contrast the strengths of the results. So, sort of the state of the art: there's work of Blomer and Harcos from 2008, and there's also a paper of Wu from 2014, which is more in the style of Michel and Venkatesh, for twisted L-functions. They prove a Burgess-quality bound: among the family of twists, they get the exponent 3/16, the same exponent that Burgess got, using a different method. It's kind of interesting that the exponents recur with different techniques. So for general twists, this Burgess-quality result is the best that we have. Again, there are applications of these problems. One reason you can say the subconvexity problem is a good problem is that for some applications the convexity bound is completely useless and gives you no information, but any subconvexity bound does give you some information. For example, this question of the distribution of Heegner points, or nontrivial bounds on Fourier coefficients of ternary theta functions, which can tell you about representation numbers. So this is a sign that it's a good problem.
Okay, so the Burgess bound is the best we know for general twisted L-functions. There are some cases in GL(2) where we have a Weyl-quality bound, this 1/6 exponent. There aren't a whole lot of cases, but there are a few. The earliest I know of is due to Anton Good from 1982; he was really ahead of his time, I think. Say you take the Ramanujan delta function, look at the L-function associated to that, and consider the t-aspect: he proved a Weyl-quality bound there. Ivić has a paper from 2001, which I'll say more about on an upcoming slide, where he proves a Weyl-quality bound for Maass form L-functions of level one, where the spectral parameter becomes large. And there's also the Conrey-Iwaniec bound we mentioned earlier, for quadratic twists of GL(2) cusp forms. So here's what Ivić does: he looks at a cubic moment. It's probably a lot of notation to take in, so let me say it in words. You look at the spectral parameters t_j in a small window between T and T + 1. We kind of know how many there are: around a constant times T of them in a window of that size, so there are about T terms. That's what the sum on the left-hand side runs over. You take these L-functions cubed, and the bound is about T^{1+epsilon}. Since there are about T terms, this is like a T^epsilon bound on average. The conductor for these L-functions is around T^2. These L-functions are known to be non-negative, so you can drop all but one term and bound an individual L-function, and the bound you get by doing this is the conductor to the 1/6 power: a Weyl-quality exponent. Conrey and Iwaniec also studied a cubic moment. So what's their family? You might just think of q being a prime.
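The drop-all-but-one-term step just described, in symbols (u_j denotes a level-one Maass cusp form with spectral parameter t_j; this is the shape of the bound, with weights suppressed):

```latex
\[
  \sum_{T \le t_j \le T+1} L(\tfrac12, u_j)^3 \;\ll\; T^{1+\varepsilon},
  \qquad L(\tfrac12, u_j) \ge 0 .
\]
% Dropping all terms but one:
\[
  L(\tfrac12, u_j) \;\ll\; T^{(1+\varepsilon)/3}
  \;=\; \bigl(T^2\bigr)^{1/6 + \varepsilon'},
\]
% i.e. the analytic conductor (of size about T^2) to the Weyl exponent 1/6.
```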
You get m = 1 and m = q, so you really get the family of cusp forms of level dividing q. If you want to handle, say, twists of the Ramanujan delta function, what you have to do is think of it, sort of trivially, as a cusp form of level q, as an oldform, and put it in that family. And they studied this cubic moment of these quadratic twists over this family. The term here with the integral, this is the Eisenstein part, and there they actually get this Dirichlet L-function, absolute value to the sixth. And the spectral parameter going up to 10 here is just a number; they get it for general intervals as well. So now the overall idea is the same: if you have this bound, then you can drop all but one term, and you'll get a conductor-to-the-1/6 bound. That's how they got their Weyl exponent for the Dirichlet L-functions with quadratic characters. Ian and I also studied this problem with a cubic moment, and our result is in this direction. Somehow I think the key thing was figuring out what family to use, and in retrospect it's almost embarrassingly simple. But anyway, this is the family. Let me just remind you in words again: f runs over cusp forms with spectral parameters t_j up to 10 (this is fixed; think of it as fixed), and then we have level m dividing q (just think of q prime and m = q), and we're looking at forms with central character chi-bar squared. In the Conrey-Iwaniec case their character was quadratic, so in a sense they didn't see this: they took the forms with trivial central character. But for more general characters chi, like complex characters, chi-bar squared will not be trivial, so the family genuinely has a nontrivial central character.
And then we cook up this family: it's the same sort of thing, you twist it by chi. And this term here with the integral represents the Eisenstein series component, and it gives us this L-function, absolute value to the sixth. And we end up with the same quality bound, the sort of Lindelöf-on-average bound for the family. Oh, I'm going really fast. Okay, so some remarks about this. The first is: what's the deal with this family? Why take f of central character chi-bar squared and then twist by chi? The reason is this: if f has central character, let's say, chi-bar squared, and you twist f by a character psi, then it's a simple fact about modular forms that the twisted form has central character chi-bar squared times psi squared. So when you twist by a Dirichlet character, the central character gets multiplied by the square of that character. And so this is cooked up so that when you take psi equal to chi, the twisted form has trivial central character again. Okay, so this is how it was engineered to work. And the second line here is just saying that f twisted by chi has level q^2 and trivial central character. Luckily, we know all these central values are non-negative. This is due to work of Guo, extending work of Waldspurger; specifically a 1985 paper of Waldspurger, not his earlier one that dealt with quadratic twists, but a different and, I think, lesser-known paper. Anyway, we know these central values are non-negative, and so we can drop everything except for one term, and that's what gives this Weyl bound. Again, if you don't like Maass forms and prefer to think about holomorphic modular forms, a similar bound is true: instead of t_j going up to 10, you might have the weight k going up to 10, or anything you like, really.
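The central character bookkeeping just described, in symbols (a standard fact about twists; psi denotes a Dirichlet character mod q and omega_f the central character of f):

```latex
% If f has central character omega_f, the twist f x psi has central character
\[
  \omega_{f \otimes \psi} \;=\; \omega_f\,\psi^2 .
\]
% With omega_f = \bar\chi^2 and psi = chi:
\[
  \omega_{f \otimes \chi} \;=\; \bar\chi^2 \chi^2 \;=\; \chi_0
  \quad (\text{trivial}),
\]
% so f x chi has trivial central character (and level q^2).
```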
You'd have an analogous L-function bound, but for holomorphic modular forms there's no Eisenstein series component, so you'd have to delete that; we get the same type of theorem. So, I really don't want to get into the technical details of the proof, but let me just say what the overall idea is. The idea we're going to follow is to connect this family, this cubic moment family of twisted L-functions, to a completely different family of L-functions. The dual family we get led to is a certain modified fourth moment of Dirichlet L-functions. This theme of connecting different families of L-functions has been used by other people. I think the first instances are due to Kuznetsov and Motohashi, and then other authors, Michel and Venkatesh and Blomer and Khan among many others I'm forgetting, have also developed moment identities connecting different families of L-functions. These are just beautiful formulas, and I really like them; but they're also very useful, very powerful tools in the subject. To make a very, very long story short: what happens in our case is that by using various summation formulas, like the Petersson formula or the Bruggeman-Kuznetsov formula, various Poisson summation formulas, all sorts of things that people who work in this area are familiar with (if you're not, I don't know how to summarize them reasonably, so I'm just avoiding it), after you do all these things, you get led to a dual family. The dual family is this fourth moment of Dirichlet L-functions modulo q, but modified by multiplication by a character sum, which we call g(chi, psi). And the character sum is this: it depends on these two characters. Chi, remember, was our original character that we started with.
And then we're summing over psi, so we get a collection of these character sums as you vary psi. Inside the sum, psi is evaluated at a linear fractional expression in t and u, and then chi at a polynomial, roughly u(t - 1). So it's just this explicit character sum; this is the thing that comes up. To say a little more about these moment identities: there's a famous result of Motohashi where he started with the fourth moment of the Riemann zeta function and, after various transformations, got led to a cubic moment of Maass form L-functions. So this is kind of the opposite direction from where I started in this talk; compare with Ivić's case, which begins with the cubic moment. So there are moment identities going in both directions, or with more general weights. For the Dirichlet L-function case: in a paper that I wrote, I noticed that there was (this is very hand-wavy) some kind of moment identity involving these cubic moments, but multiplied by lambda_f(p); looking at Dirichlet characters mod p is what introduces this lambda_f(p). This is grossly oversimplified, but it's just to give you a flavor of these different kinds of moment identities. I think Ian Petrow was the first to notice this kind of structure in the Conrey-Iwaniec cubic moment problem: he noticed that this kind of identity holds for the cubic moment in the case where chi is quadratic. So the main input into our proof of the theorem is a bound on this character sum. The bound is that if q is cube-free, then this character sum is bounded by q^{1+epsilon}. By the Chinese remainder theorem the sum factors over prime powers, so really we want to understand this character sum when q is a prime power, and the hardest case is when q is prime.
And when q is prime, we need Deligne's proof of the Riemann hypothesis for varieties over a finite field. We were lucky that there was some recent work of Fouvry, Kowalski, Michel, and Sawin that was really helpful in proving these results. It's difficult to bridge the gap from Deligne's work and Katz's work to the applications we need in analytic number theory, and these authors have been working at making these things more accessible. So if you plug in this pointwise bound on g (just to remind you: an individual bound on g of size about q will cancel with the 1/q normalization), then what we need is a bound on the fourth moment of Dirichlet L-functions. Getting an asymptotic for that is more difficult, but just getting an upper bound is not hard at all, and we can get q^{1+epsilon} as the bound. So that's the overall structure of how the proof goes. But what's going on with this cube-free condition I mentioned? We had Theorem 1, and then a second theorem that came in a follow-up paper a year or two later. So what's going on with that? The problem is that for higher prime powers, there's not always square-root cancellation in those character sums. To be a little more specific: if q = p^k with k at least 3, then in general, typically, there will exist characters such that this character sum is bigger than q by some power, say p^alpha for some alpha > 0, and this exponent alpha can be a half-integer. Okay, so that's the bad news, but maybe we can get around it. The idea for getting around this problem is, first of all, to understand how big this alpha can be; and then, once we have a handle on that, for a given value of alpha, to understand which characters attain this large value.
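Schematically, the bookkeeping just described looks like this (weights suppressed; the 1/q normalization is as described in the talk):

```latex
\[
  \text{(cubic moment)} \;\rightsquigarrow\;
  \frac{1}{q} \sum_{\psi \bmod q} g(\chi, \psi)\,
    \bigl|L(\tfrac12,\psi)\bigr|^4 .
\]
% The pointwise bound |g(chi,psi)| << q^{1+eps} (q cube-free, via Deligne),
% together with the fourth-moment upper bound
\[
  \sum_{\psi \bmod q} \bigl|L(\tfrac12,\psi)\bigr|^4 \;\ll\; q^{1+\varepsilon},
\]
% gives the q^{1+eps} estimate needed for the Weyl-quality result.
```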
So can we understand the structure of this set? We call them the singular characters — the ones where we get an exceptionally large value of this character sum. If we can get our hands on what those singular characters are, then what we can try to do is bound the fourth moment over only the subset of singular characters. Our character sum g is larger by a factor of about p^α, and we hope to save this factor p^α when we restrict the sum to the singular characters. So that is the overall strategy. Now, the singular characters are not just some random set of characters: it turns out they form a coset of the group of characters modulo d, for some d between p and p^{k−1}. This leads us to want to understand the fourth moment of Dirichlet L-functions along a coset. To be a little more specific, take q to be a prime cubed. It is only when p ≡ 1 (mod 4) that there are singular characters; if p ≡ 3 (mod 4) there aren't any, and we always get full square-root cancellation for those primes, or rather those prime powers. But if p ≡ 1 (mod 4), then there are exactly 2(p − 1) singular characters, where this sum g is in absolute value exactly q times the square root of p — definitely a factor of √p too big. So the singular characters are a union of two cosets, where this capital Ψ is a coset representative and the subgroup is the group of characters mod p. As a sample result in this direction, we have a bound. Say χ has conductor p³, and sum over all the characters modulo p²: χ times this η, where χ is the coset representative and η runs over the elements of the subgroup, so the product runs over the coset.
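In symbols, the sample coset bound has roughly the following shape — this is my schematic rendering, and the precise statement in the follow-up paper is more general and carries extra parameters:

```latex
% chi of conductor p^3; eta runs over the subgroup of characters mod p^2,
% so chi.eta runs over a coset of size about p^2 inside the full family
% of roughly p^3 characters.
\sum_{\eta \bmod p^2} \bigl| L(\tfrac12, \chi\eta) \bigr|^4 \;\ll\; p^{2+\varepsilon}.
% Lindelof on average over the coset: ~p^2 terms, each of size p^epsilon
% on average.
```

The point is that the family being averaged is much smaller than the full group of characters mod p³, which is what makes the bound delicate.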
So this subgroup is a small subfamily inside the whole group of characters, but we still get a Lindelöf-on-average bound. We have a more general version that is just more complicated to state, and that version is sufficient for the other result. Just as one little remark: this fourth moment along a coset is similar in spirit to a result of Iwaniec, from a 1980 paper where he proved an upper bound on the fourth moment of zeta in a short interval. Basically he can take the interval of size t^{2/3} — take Δ = t^{2/3} — and he gets a good bound. This connects back to the beginning of the talk and the fourth-moment questions there: it is an upper bound in a short interval, and I'd like to say it is very much analogous to this Dirichlet L-function problem. Why is that? We should think of the interval [T − Δ, T + Δ] as a small neighborhood, of radius Δ, say, around the character n ↦ n^{it}: those are the characters that are close by. We want to do the same thing with Dirichlet characters. So take a Dirichlet character χ of conductor q, and define a small neighborhood around it to be all the characters ψ such that χψ̄ has conductor at most Δ. For example, take q = p³ and Δ = p². Then the set of characters where this conductor is at most p² is exactly this coset, and that is exactly like the short interval. So I think this is a similar type of result.
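To make the analogy explicit — with notation I am introducing here just for this dictionary — the two notions of "neighborhood" line up as follows:

```latex
% Archimedean side: neighborhood of the character n -> n^{it}
\{\, t' : |t' - T| \le \Delta \,\} \;=\; [T-\Delta,\; T+\Delta].
% Dirichlet side: neighborhood of chi of conductor q
B(\chi, \Delta) \;=\; \{\, \psi \bmod q : \operatorname{cond}(\chi\bar{\psi}) \le \Delta \,\}.
% Example: q = p^3 and Delta = p^2 give
B(\chi, p^2) \;=\; \{\, \chi\eta : \eta \bmod p^2 \,\},
% which is exactly the coset appearing in the fourth-moment bound.
```

So a short interval around T and a coset around χ play the same structural role in the two fourth-moment problems.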
The other thing I find interesting about all this is that it looks almost circular: we start with this cubic moment family and end up with a fourth moment problem, twisted by the sum g. If you put in absolute values and bound g, then you want to understand a fourth moment; and to solve that problem we go back to a different cubic moment. It almost seems circular, but it is not, because along the way we are making estimates — we are putting in absolute values. So I think I should stop here. I had a few last slides, which I'll just skip, and say thanks to everyone for listening.