Okay, so again I've written down on the blackboards the Riemann hypothesis and the basic examples that I've used. Today I want to begin by saying a bit more about the conductor of trace functions, or of the associated representations. I'll give, in some sense, the precise definition without going too much into the details, and explain why we need something a bit more than the two invariants we already know. So suppose you have a trace function K mod p; you don't need to assume that it's geometrically irreducible or anything like that. It's associated to a representation rho, some l-adic representation. The complexity is really an invariant of the representation; for the trace function itself you would just take the infimum over all possible rho with trace function K. We already know two components of this. We want to measure the complexity, and I said that we'd take first, at least, the dimension of V. Then there is the singular set: a finite subset of P^1(F_p bar), which is an invariant of rho, and you take its size. And this is not enough. So what I'm going to do is tell you the definition and then give an example showing that these two components, which are easy to understand and are the obvious invariants, are not sufficient to get good properties of the conductor. You need something else, which enters the way we define it: a sum of contributions, one for each singular point, of an invariant called the Swan conductor at x of the representation. Here Swan_x(rho) is a non-negative integer measuring what's called wild ramification at a singular point. It is sometimes zero, in which case one speaks of tame ramification, but not always.
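Putting the pieces together, the definition just described can be summarized in a formula (my notation; the lecture states it in words):

```latex
c(\rho) \;=\; \dim V \;+\; \#\,\mathrm{Sing}(\rho) \;+\; \sum_{x \,\in\, \mathrm{Sing}(\rho)} \mathrm{Swan}_x(\rho),
\qquad \mathrm{Swan}_x(\rho) \in \mathbb{Z}_{\ge 0}.
```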
So I'm going to give an example that shows that if you want a statement like the Riemann hypothesis to be true with a complexity invariant, it would not suffice to define the complexity as just the dimension of V plus the number of singular points. That would not be enough. Then I'll give a few examples, but not much more, because this is an invariant that's quite subtle and delicate. In many applications, if you don't know the conductor and need to estimate it, this will be the hard part to control, because it's really a subtle algebraic invariant of the representation, not so accessible from purely diophantine or concrete properties of the trace function. So why do we need this? There are two reasons. The first is that we want the complexity to be some kind of height. What does that mean? It means we want the set of rho such that the complexity of rho is at most x and, let's say, rho is geometrically irreducible, taken modulo geometric isomorphism, to be finite for every x. This is the more precise form of what I described in the first lecture when I said this complexity has a height property; at that time I had not said what it means to be geometrically irreducible. This is the right thing to do: these are the building blocks of all trace functions, or all representations. And you have to work modulo geometric isomorphism, because otherwise multiplying by a scalar of absolute value 1 gives you infinitely many representations geometrically isomorphic to a given one. But if you do it this way, then it is indeed a finite set. This would not be true if you just took the dimension of V plus the size of the singular set. So take the set of all rho such that rho is geometrically irreducible but the dimension of V plus the number of singular points is at most x, again modulo geometric isomorphism as above.
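In symbols, the set in question is the following (my notation):

```latex
\Bigl\{\, \rho \ \text{geometrically irreducible} \;:\; \dim V \,+\, \#\,\mathrm{Sing}(\rho) \,\le\, x \,\Bigr\} \Big/ \ \text{geometric isomorphism}.
```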
Then this is not finite, simply because it contains all rho associated to trace functions of the type e_p(f(x)), where f is a polynomial of arbitrary degree. When one sets up this type of correspondence, it's possible to do it in such a way that it's essentially injective from F_p[x] to the set of representations; I won't go into details. The point is that the dimension of V for all of these will always be 1, independently of f, and the singular set will always be just {infinity}, independently of f. A polynomial only behaves strangely at infinity; once we take the exponential, every value which is not infinity behaves the same. So intuitively it's somewhat clear that this is the only point that could be singular, and in fact it is, at least if f is non-constant; so this last part is for degree of f at least 1. So we have infinitely many polynomials, because the degree is unbounded, and all of them have the same dimension and the same singular set. This set is infinite, but we want the height property to be true, so we need something else, which in this case has to take into account the degree of f. What's missing here is very clear: if we bound the degree of f, then we have a finite set, growing with p, roughly p^d, but nevertheless finite. So what we are missing is a bound on the degree of f, and it is a fact, once the Swan conductor is defined rigorously, that the Swan conductor at infinity of such a rho, let me call it rho_f, will be the degree of f, at least if the degree is prime to p; in any case it is always at most the degree of f. More generally, the Swan conductor of an additive character of a rational function is always at most the order of the pole of the function at the singular point; in this case the pole is at infinity, so it's the degree. And there is equality if the order is coprime to the characteristic.
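As a concrete numerical aside (my own sketch, not from the lecture; the helper name is mine): for non-constant f with deg f coprime to p, the Riemann hypothesis gives square-root cancellation in the complete sum, with a constant governed exactly by the conductor data above, namely Weil's bound |sum over x of e_p(f(x))| <= (deg f - 1) sqrt(p). A quick check:

```python
import cmath
import math

def additive_char_sum(coeffs, p):
    """Compute S = sum over x in F_p of e_p(f(x)), where f is given by its
    coefficient list [a_0, a_1, ..., a_d], so f(x) = sum_j a_j x^j."""
    total = 0j
    for x in range(p):
        fx = sum(a * pow(x, j, p) for j, a in enumerate(coeffs)) % p
        total += cmath.exp(2j * math.pi * fx / p)
    return total

p = 101
for coeffs in ([3, 1, 0, 1], [0, 2, 5, 0, 1]):  # f = x^3 + x + 3 and f = x^4 + 5x^2 + 2x
    d = len(coeffs) - 1
    S = additive_char_sum(coeffs, p)
    # Weil's bound: |S| <= (deg f - 1) * sqrt(p), since gcd(deg f, p) = 1 here
    print(d, abs(S), (d - 1) * math.sqrt(p))
```

The point of the illustration is that the bound depends only on deg f, matching the statement that Swan at infinity equals deg f and controls the constant in the Riemann hypothesis.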
The second example is in some sense similar, in that the issue is again additive characters of polynomials with unbounded degree, but it is more concrete and closer to what we want to do in analytic number theory. It shows that the conductor defined just as the sum of the dimension and the number of singular points would fail, essentially for the same type of reasons. So consider an integer k at least 1, look at primes p congruent to 1 mod k, and define K(x) = e_p(x^k). Then look at the sum S(k, p), summed over x in F_p, which is just the Gauss-type sum built with e_p(x^k); it's a generalization of the quadratic Gauss sum, generalizing the fact that the quadratic Gauss sum equals the Gauss sum of the real character. Under this congruence condition, you detect the property of being a k-th power using the characters of the quotient group F_p^* modulo (F_p^*)^k; there are k such characters, one of which gives a trivial contribution, and the other ones give k - 1 Gauss sums. So under this condition the sum is a sum of k - 1 Gauss sums, where by Gauss sums I mean things like the sum of e_p(x) chi(x) with chi of order dividing k and non-trivial; under this condition we know there are k such characters, k - 1 of them non-trivial. So it's not hard to deduce that if you take the inner product with the constant function 1, then, as p goes to infinity with p congruent to 1 mod k, this will be of size k - 1 normalized by square root of p. On the other hand, the dimension of the associated space, which I call V_k, plus the number of singular points, is again 1 + 1, for the same reason as before: it's an instance of the previous construction with f a power, so the dimension is 1 and there's just one singularity, at infinity. So you see that an estimate like the first
part of the Riemann hypothesis does not hold with c_1 and c_2 replaced by just the dimension plus the number of singular points; the Riemann hypothesis would fail with c(K_i) replaced by dimension plus number of singular points. It is clear that this K is geometrically non-trivial: it is not constant as x varies. Well, maybe it's constant when p equals 2? Not even that, I think, because K(1) = e(1/p) and K(0) = 1, so it is not proportional to a constant; it is geometrically non-constant, geometrically not isomorphic to the trace function 1. So we can apply the first part of the Riemann hypothesis, and this would not be true without that assumption. In both of these cases the dimension of V is 1. Suppose then that we restrict the set to representations of bounded dimension; that doesn't work either: it would fail, for instance, with hyper-Kloosterman sums when r goes to infinity. But these are the simplest examples. I don't want to go into more details about this, because in some sense this is the trickiest part, and analytically it's the most painful one, so one tries to avoid it as much as possible. Again, if you're working with well-known trace functions, meaning those you would find in a table or a list of examples, that means these tables are only good if they include information on the Swan conductor; for those it is known. So let's go back to the examples. For additive characters, as I said, say of a rational function f: if x is a pole of f, then the Swan conductor at x is at most the order of the pole, with equality if the order is coprime to p; that's a generalization of what I did. We see here the feature that the Swan conductor is local in some sense: it only depends on the order of the pole of f at this point. It is local in a more precise technical sense that is sometimes useful. For multiplicative characters chi(f(x)), the singularities are the zeros and poles of f, but all the Swan conductors are zero; so multiplicative characters
in this sense are easier to handle in some cases, because they are tamely ramified everywhere: all the Swan conductors are zero. Kloosterman sums: the singular set was {0, infinity}; they are tamely ramified at 0, so the Swan conductor at 0 is zero, and at infinity it is 1. For point-counting functions, let's say that p does not divide the degree of f; then the Swan conductor at every singular point will be zero, so point-counting functions are also tamely ramified. For the Legendre family, the Swan conductors at 0, at 1 and at infinity are all zero, let's say for p at least 5. This is actually related, if you know a bit of the arithmetic theory of elliptic curves, to the fact that the exponent of the conductor of an elliptic curve defined over Q is always at most 2, except for the primes p equal to 2 or 3. This is the same type of behavior: the fact that the conductor has exponent bounded by 2 is also a feature of tame ramification, and in the case of trace functions it translates into this type of property. This is going to be typically true for any family of algebraic varieties: if the equations are defined over Z and fixed, and p goes to infinity, then for p large enough everything will be tame if you just count the number of solutions. So this is all I want to say about the conductor, at least for today; maybe tomorrow I'll have a little bit more in one of the applications I want to discuss. What I'm going to do now, for the remainder of today and tomorrow, is give some kind of survey of some of the applications, trying to highlight some of the features, how one uses the Riemann hypothesis and how it meshes with the analytic number theory. So, some applications. In the first part I will discuss results from a series of papers with Fouvry and Michel, which is where we started working with these objects, trying to understand their correlations with automorphic data. The motivating problem, or one motivating problem, it's not
quite exactly where things completely started, but it's something I'm going to take as motivation: we are given a modular form on a congruence subgroup, let's say Gamma_0(N), so classical modular forms for a subgroup of SL_2(Z); but it could be a Maass form, or automorphic in general (we have applications in all cases), and it could in fact be Eisenstein or cuspidal (again there are applications in both cases, and I'll mention some of them). It's not so important for us whether we use Fourier coefficients or Hecke eigenvalues, but let's assume it's a Hecke eigenform with Hecke eigenvalues lambda_f(n), n at least 1, normalized so that the mean square is about 1: the sum of lambda_f(n)^2 for n up to x is equivalent to a constant depending on f times x. We fix a prime p, which will then tend to infinity, and a trace function K mod p, and we consider sums like S(f, K, p): smooth sums obtained by summing lambda_f(n) against this trace function, which we view as a function defined on all integers by periodicity. The critical length is about p, which is natural because K is periodic modulo p, so we take a smooth cutoff V(n/p). This is a sum you can think of as being over n between 1 and p with a cutoff, and the estimates will be such that the sum could be a little shorter, p to the power 1 minus some small constant. If the length were extremely large, say p cubed or p to the fourth, then you would just use periodicity, replace this by a sum over a of K(a) times the sum over integers congruent to a, and apply known results for coefficients in arithmetic progressions. Here V is a smooth cutoff, which in fact can oscillate a little, but the first picture to have in mind is the usual one, which I think Henryk Iwaniec has drawn on his blackboard every single time I've spoken with him, and which is the right way of doing these things to avoid analytic issues until you really need to handle them. So we start with
that, and we want to obtain an estimate. The philosophy is the usual one: unless these objects are correlated in a strong sense, there should be some cancellation in the sum. lambda_f(n) is roughly bounded on average, at least in mean square, and K(n) is bounded in terms of the conductor, so if the length is about p, then in good cases you can expect square-root-of-p cancellation, and we try to get at least some cancellation, which would show that automorphic data cannot be modelled in an easy way by this kind of trace function. So the goal is cancellation. This can be seen as a challenge: we think we know something about automorphic forms, we think we know something about trace functions, can we actually prove something about them together? But it also has applications, so I'll start by discussing it as a challenge and then we'll see the applications. There are two basic examples you can have in mind, and the first one immediately shows that this is not a purely academic game. A simple example is a multiplicative character: K(n) = chi(n), where chi is mod p and non-trivial. The length p is the right length because the conductor of the twisted L-function is p squared times N, at least when p is coprime to N, and the sum S(f, K, p) in that case controls subconvexity for the twisted L-function of f times chi in the level aspect, meaning that if we get power cancellation, then we get some kind of subconvex estimate for these L-functions, which are very important in analytic number theory and in applications. So that was one of the guiding examples; at first we had no idea what would come out. I don't mention the example of additive characters: that one is very well known, due to an old estimate of Wilton. But the first example we used to test ourselves, our challenge, was to try and
do it with e_p(n bar), that is, e_p of the inverse of n modulo p, which is kind of a building block of Kloosterman sums. Can we show that the sign changes of the inverse modulo p are sufficiently random, sufficiently independent of, let's say, the Fourier coefficients of Ramanujan's Delta function? A basic question, which is not easy even in this very special case. So that was one challenging example. Let me in fact give another example immediately. The first setting is where we vary K, but there are also special cases of automorphic forms: take for f an Eisenstein series, say the Eisenstein series at 1/2 + it with t a real number. Then lambda_f(n) is the sum of (a/b)^(it) over ab = n. Strictly speaking, for t = 0 we have to take a derivative because this is literally 0, but I'm ignoring this; for t = 0 the Fourier coefficients are the divisor function. Therefore the sum we're looking at, S(E(1/2 + it), K, p), is a sum over integers m, n at least 1 of K(mn) V(mn/p); that would be for t = 0, and in general you have (m/n)^(it). These are very interesting because they give you special bilinear forms when you're trying to apply combinatorial identities to detect primes, and the presence of the it, together with the fact that we're going to get uniform estimates with respect to t, to some extent allows you to balance the sizes of m and n in certain intervals. So you get special bilinear forms, in great generality, where the coefficient is K(mn) with K a trace function. This will be the application later. So now I'll state the results and then say a few words about the proofs, and in particular what I like about how trace functions are used in these proofs: it will depend on the Riemann hypothesis, and it's interesting to see analytically how it comes up. So: there exists a positive absolute constant A, I think we checked it can be taken to be 9, but
I'm not sure, such that for every prime p, for every trace function K mod p, let's say geometrically irreducible (this is not strictly speaking necessary, but it simplifies things), with one exception in the Eisenstein case. Suppose t = 0, the Eisenstein case: you have the sum of d(n) K(n) for n up to essentially p. If K is a constant you're not going to get cancellation, since the sum of d(n) has none, and if K is an additive character it also doesn't quite work. So we assume that K is not an additive character e_p(an) in the special case where f is an Eisenstein series; in the cuspidal case there is no condition, and in the Eisenstein case this is the only exception. You can phrase it as saying that the representation associated to K is not the representation associated to e_p(an), or, a weaker condition, that K is not proportional to one of these. Then we get cancellation: for every positive epsilon, the sum is bounded with an implied constant depending on the test function V (the dependency is completely explicit, so we can actually play games with oscillating V's or slightly shorter lengths than p, but I don't write this explicitly), times the conductor of K to the power A, so polynomial in the conductor, with an exponent that is not terrible but which I don't remember exactly, 9 or slightly bigger, times p to the power 1 minus 1/8 plus epsilon. So with respect to p we get cancellation, and the cancellation is completely uniform. One of the striking things is this complete uniformity of the exponent: we always gain 1/8 independently of the complexity of K, where by complexity I mean how complicated it looks when you write it on paper. For this to be an actual saving, the actual conductor C(K) has to be bounded or grow relatively slowly, so that C(K)^A is less than p to the 1/8, because the trivial bound is roughly p. But for instance if you take any of these examples with fixed polynomials or rational functions
f, here or there, or fixed r, then you can apply this theorem to every one of them and the exponent gained will be the same. The fact that this gain is uniform ultimately comes from the fact that the Riemann hypothesis is just as uniform: the Riemann hypothesis gives a square root of p independently of which shape of K you take. Of course, the subconvex bound was known before, and the exponent is the same as the best known in that case, due I think first to Bykovskii and then generalized; that is for prime moduli, and the exponent is known for subconvex bounds even for general moduli, in particular in the paper of Blomer and Harcos. It was also known, as I said, for additive characters in the cuspidal case, e_p(an): that is due to Wilton, an old estimate. And I guess one can interpret various statements in the literature as extremely special cases, maybe with a slightly different K than e_p, but apart from that essentially nothing was known or had been tried; I guess Philippe Michel's paper with Venkatesh can be interpreted as giving some of these for a transform of a multiplicative character. Okay, so this is the first theorem. Now, as I said, because we can handle the Eisenstein series (here also one doesn't see the dependency on t, but it is under control, polynomially bounded), we can use this estimate for Eisenstein series, interpreting it as special bilinear forms, together with extra ingredients coming from the Polya-Vinogradov method and combinatorial tricks and identities like Heath-Brown's, to handle sums over primes. So again: there exists a positive A such that for all p, for all trace functions K mod p, again assumed for simplicity geometrically irreducible, with, in this case of sums over primes, a bit more to exclude: K cannot be a multiplicative character times an additive character. It's only clear why we need that for multiplicative characters: the estimate I'm going to
write would be a quasi-Riemann hypothesis if we could handle multiplicative characters. Just like Vinogradov's old ideas involving bilinear forms and so on, this breaks down when the function is multiplicative, and a multiplicative character is roughly the only trace function which is actually multiplicative. The additive-character exclusion is less clear, but it is also needed. Then we can do sums over primes: the sum of Lambda(n) K(n) V(n/p) is bounded, with an implied constant depending on epsilon and on V in a well-controlled way and a polynomial bound in terms of the conductor, by something times p to the power 1 minus 1/24 plus epsilon. So again the exponent gained is completely uniform, independently of what the trace function looks like; it only needs to be a trace function. Note that there is no lambda_f here: we pass from automorphic data through the spectrum, and the fact that we have Eisenstein series is what allows us to control divisor sums like that. Actually putting in an additional lambda_f would be another problem, which we don't know how to do. And there is no dependency at all on Ramanujan-Petersson type approximations: this is all valid for Maass forms and holomorphic forms uniformly. Yes? Question: if you replace the smooth weight by a sharp cutoff, do you get any saving?
Yeah, so we just divide the saving by two: we get 1/16 for Theorem 1 and 1/48 for Theorem 2. Okay, so let me give some highlights of the proof, not really even the ideas, because it's really quite long altogether, focusing on where the trace functions are used; there is a lot of analytic number theory involved, but I focus on the trace functions. I begin with Theorem 1. On the analytic side we use amplification: f was of level N, and we view it as a cusp form or modular form of level pN, where p is the prime corresponding to the trace function, and we amplify over the whole space of forms of level pN. This is an idea that has been used in particular in these papers of Bykovskii and Blomer-Harcos, and has also been used very effectively by Iwaniec to amplify certain results. The idea of putting f in level pN is very natural a posteriori, but a priori one has to think of it. That's the analytic step. Then, once we amplify, we use the Kuznetsov formula; we could use the Petersson formula in the holomorphic case, but we would then lose quite a lot in certain estimates. In particular, even if you're only interested in the divisor function, which is level 1 with no special features of its coefficients, we go to the full spectrum of level p, including the cuspidal spectrum: we treat the divisor function fully as an automorphic object, and in particular we use cusp forms of level p to obtain, in the end, an estimate for the divisor function which is non-trivial in the ranges where we apply it. Analytically, this is the feature. Once you do this, after many complicated steps, you end up with certain exponential sums mod p, or generalized exponential sums, which we call correlation sums of K with respect to gamma, where gamma is an element of PGL_2(F_p). These try to capture the correlations between, not K, but its
Fourier transform and the transform of the Fourier transform when you replace x by gamma x: the sum over x of K hat(x) times the conjugate of K hat(gamma x), where K hat is the Fourier transform, the sum over y of K(y) e_p(xy), normalized by square root of p, at least when x is in F_p. Here you could exclude the pole of gamma x if you want, but it actually works out fine if you include it, since trace functions, as I said, extend naturally to infinity. Then we play the following game. K was irreducible, that's my assumption; the Fourier transform, as we saw last week, is then also geometrically irreducible. A change of variable is also allowed: K hat(gamma x) is also a trace function, and it's also geometrically irreducible, because the change of variable is bijective and K hat is geometrically irreducible. So we are exactly in the situation where we can apply the Riemann hypothesis with K_1 = K hat and K_2 = K hat(gamma x): are we going to get conclusion (a) or conclusion (b)? What we need is square-root cancellation, but not always; if we needed square-root cancellation always, there would be an obvious problem: you can take gamma equal to the identity, and then there is no cancellation. It is just a feature of this type of method that it allows for a small number of cases with no cancellation, a few exceptions, which one calls the diagonal. So then what happens is that we balance two facts. Which matrices gamma come into the argument? On the analytic side we don't get a sum over all matrices in PGL_2(F_p), but over some specific matrices of relatively complicated shape, which have one feature: they are not algebraically structured, in some sense; they do not form a subgroup. So from the analytic argument, the gammas to control are not algebraically structured; I mean, they actually form algebraic families, but not subgroups, and we can check that typically the
multiplication will not leave them stable. On the other hand, from the Riemann hypothesis, let's try to understand those gamma for which there is no square-root cancellation, meaning those for which we do not get conclusion (a) but only (b). If M is a constant, large enough depending only on the conductor of K (so that it also controls the conductor of the Fourier transform; as I said last week, the conductor of the Fourier transform is controlled in terms of the conductor of K), then the only way to fail square-root cancellation with constant M is that condition (a) fails, so we are in case (b), and case (b) means that K_1 is proportional to K_2. So if gamma is such that the correlation sum is larger than M square root of p, then K hat is proportional to K hat(gamma x): there exists some alpha, depending on gamma, such that this holds for every x. I should say that I actually changed notation here; we sometimes change notation between our various papers, so one has to be careful whether this is attached to K or to K hat; here I use the same definition as in the first paper. Anyway, we have this conclusion. Roughly speaking, M must be larger than 3 times C_1 squared C_2 squared, where C_1 = C_2 is the conductor of the Fourier transform of K; then (a) fails, so (b) must be true, and that means in particular that there has to be this correlation. Now the point is that this proportionality condition defines a subgroup of PGL_2(F_p), so the set of bad gamma, meaning those for which the square-root cancellation (star) fails, is a subgroup. That's quite a strong feature. If you look in analytic number theory at the sets of parameters where some correlation estimate fails, you might intuitively have the idea that they would look
like something structured, but you would usually never dream of getting something that is actually a subgroup. And then what we can do is use the classification of subgroups of PGL_2(F_p): they have been very well understood for a very long time, and using this we can show that they do not mess up the outcome of step 1. So we get a good estimate from step 1 out of this kind of fight between the non-algebraic structure coming from the analysis and the algebraic structure of the correlation sums. We use the structure of subgroups of PGL_2(F_p), which is almost the same as SL_2(F_p), to get a handle on the gammas of step 1, meaning that really extremely few of them can possibly create trouble; the diagonal cases are very few, and it is a feature of the situation that very few diagonal cases do not affect the argument. One nice thing about working in great generality is that we can now have an idea of what we would need to get beyond the 1/8. The 1/8, if you think of the subconvexity case, is the Burgess-type exponent, which has never been beaten, as far as I know, for subconvex bounds, at least in the level aspect. (For prime levels?) For prime levels, yes. And we can see clearly in this argument where we would need something better: we would need to understand the variation of the correlation sums as gamma varies among the matrices coming from step 1, that is, to get extra cancellation from these families of correlation sums, which we can expect to hold but which sounds extremely difficult; for the moment we haven't made any real progress. (It has been beaten for quadratic characters.) Yeah, but that is very different. Anyway, those are the highlights of the part of the ideas that comes from the Riemann hypothesis for Theorem 1. Theorem 2 builds on this, meaning on the Eisenstein case of Theorem 1, to understand special bilinear forms with smooth coefficients. On the analytic side we need to handle sums
over primes. We use the Heath-Brown identity, which is one of the reasons we can get very uniform estimates, because we use it in a quite complicated way, with many summands: if you remember Terry's lecture, he was taking 10 summands, so the parameter in the Heath-Brown identity was 10, for the gaps between primes; here we need to take a number of summands that can tend to infinity with the conductor, so it's quite tricky. We also need a bit more about trace functions, namely more about the subgroup of bad gammas: where I just said subgroup, we can in fact show that it is the set of F_p-points of an algebraic subgroup, a linear algebraic subgroup of PGL_2, and this turns out to be important in one of the steps here. So: algebraicity. This comes into the Polya-Vinogradov method: when we handle general bilinear forms, we are led to special cases where gamma is upper-triangular, and we need to be sure that there are not too many bad gammas which are upper-triangular, and this turns out to require knowing that the bad set is an algebraic subgroup. Okay, I don't have a lot of time, so I'll mention a few additional applications, but first an example, kind of a nice example. We have this set of bad gammas, and you might ask: does it really exist, or could you just assume that the set of bad gammas is trivial? It can exist, and it can be quite hidden and not obvious from the definition. So let me give an example that we actually found purely experimentally; a posteriori, I have no idea who first found it. It's one of the classical examples that has been studied by Katz: take a Kloosterman sum at n squared, then take its square minus 1, so it's like the symmetric square of the Kloosterman sum with argument n squared. In the standard notation S(a, b; p) for Kloosterman sums, this would be S(n, n; p) squared over p, minus 1; the point of taking (n, n) is that if you just have one factor for the Kloosterman
sums, the argument is n squared. OK, so if you do this, then one shows that for this K, if you look at the correlation sum with a matrix gamma, it will be of size p, so no cancellation, meaning gamma is one of the bad matrices, for all gamma in PGL2(Fp) which permute 0, infinity, 4 and -4; that is, for gamma in the subgroup H of elements of PGL2 which permute these four points. So the singular points are 0, infinity, 4 and -4; I mean, this is not obvious. These are the singular points of the Fourier transform of this Kloosterman sum function, and this is not obvious but follows for instance from the work of Katz and Laumon, which allows you to determine the singularities of Fourier transforms. And this, I guess, is an exercise that can be used if you teach algebra 1: this group H is a dihedral group of order 8. So it has to do with the cross-ratio taking one of the special values: we have 4 points in the projective line, and usually there are not so many matrices which permute 4 given points, except when their cross-ratio is of special type, and this is one of those cases. Because the singular points of the Fourier transform are these 4 points, it's clear that the bad matrices must be in this subgroup, because they have to respect the singular points; but the converse, that if gamma permutes the 4 points then the correlation sum of the Fourier transform of this Kl2(n^2)^2 - 1 is actually big, is not obvious. We have 2 or 3 proofs, but the best proof follows from a result of Flicker; we also have a computer algebra proof, and David Zywina has sent us an automorphic proof. So that's kind of a nice example.

OK, and in the last few minutes I'll mention some further applications of this, or basically of the results in these two theorems. Some of these applications are new things: one is the equidistribution of some kind of twisted horocycles. I'll present this quite quickly. So this is the
application that we actually used as motivation in the Pisa survey paper that I mentioned in the first talk; I was giving that talk in Pisa and I used this as something geometrically accessible even for students who are not number theorists, so you can find more details by reading the first few pages of that survey. The idea is the following. In the upper half-plane we look at the horocycle at height 1/p, namely at the points w_{j,p} = (j + i)/p where j is between 0 and p - 1. All these points correspond to points in the fundamental domain; let's call them tau_{j,p} in F, the usual fundamental domain. And the question is: do these points equidistribute? The answer is yes; this follows from bounds on eigenvalues of Hecke operators. But what we do is twist the sampling measure with a trace function K(n). What we prove is this: for any K_p with complexity bounded uniformly in p, and any f, let's say continuous with compact support on the fundamental domain, meaning on the modular surface, the average of K(j) f(tau_{j,p}) over j goes to 0, provided K is not constant and not an additive character, and I should say geometrically irreducible as usual.

A consequence of this: if you use for K the example 3, the function counting how many times a polynomial takes a given value, this says that instead of taking these points w_{j,p} for all j, you can restrict to the subset where j is, let's say, a quadratic residue or a cubic residue, of the form n^2 or n^3 + 2 where n runs modulo p, and you will still get this equidistribution. So you cannot force these points to lie, let's say, below the ordinate 2 for most j just by taking those j which are values of a polynomial; this is the type of thing it means. So it's geometrically quite nice, and it can have applications: in the paper of Michel and Venkatesh on subconvexity, they explain that in some cases this is also
equivalent to subconvexity for twisted L-functions. So that's application number one. Interestingly, to prove this theorem we apply theorem 1 to the Fourier transform of K, which means that if you just want to prove this theorem, you don't need to know the existence of the Fourier transform of trace functions, because you end up taking the Fourier transform twice, which gives you back your original function. So that's one application.

Theorem 2 has some applications, which we haven't quite written down in detail (so I won't give any more information on that), to low-lying zeros of certain L-functions, symmetric square L-functions essentially, in the level aspect. This is because when you apply the explicit formula you get sums over primes of Kloosterman sums, that is, sums with prime arguments, which is exactly the type of sum that appeared in example 2 of theorem 2.

What we also did is d3, the ternary divisor function, in arithmetic progressions, which was used, at least some of the ideas, in the Polymath8 paper. Here the point was that we have a much more streamlined proof of the theorem of Friedlander and Iwaniec on getting an exponent of distribution larger than 1/2 for d3 in arithmetic progressions, with a much better exponent than even Heath-Brown's, which had improved Friedlander-Iwaniec. In fact we don't even need the Friedlander-Iwaniec sums: it's all incorporated in the general statements, and we can almost use just generic properties of trace functions. So it's streamlined and stronger compared with Friedlander-Iwaniec and Heath-Brown, but it's only for prime moduli; it would be a bit more complicated to do for general moduli.

Example 4: if you're really nasty, you would say, well, you're just either proving uninteresting new results or improving known good results, so let's prove something that's not known. This is something which is still work in progress with Blomer and Milićević: we can handle, with power saving, a twisted moment of, I mean not quite, Dirichlet
character L-values at 1/2: the moment of |L(chi, 1/2)|^2 times the twisted L-function L(f ⊗ chi, 1/2) for a fixed modular form f, with power saving. So why is that interesting? Well, if you replace the cusp form by an Eisenstein series, you get the fourth moment |L(chi, 1/2)|^4, which was done by Matthew Young a few years ago. On the other hand, we can at least reduce the case where you would have |L(f ⊗ chi, 1/2)|^2, so without the first factor, just the pure cusp form second moment, which is a well-known open problem with power saving; we reduce that to estimates for trace functions which we hope one day to obtain. And again, even in the case covered by Young, our proof is much more streamlined and the exponent is much better. So altogether we have some quite nice applications, I think. OK, so I'll stop for today. Questions, comments?

(Question: what if the trace functions are not geometrically irreducible?) No, I mean, I just put that hypothesis in because it's useful in the proof: you see, these correlation sums are not linear, but the final estimates are linear, so you would just decompose and apply the result to every component.
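To make the Kloosterman example from the lecture concrete, here is a small numerical sketch (my addition, not from the lecture; the prime p = 13 is an arbitrary choice). It computes S(a,b;p) by direct summation, forms the trace function K(n) = S(n,n;p)^2/p - 1, and checks the range -1 <= K(n) <= 3 forced by Weil's bound |S(n,n;p)| <= 2*sqrt(p):

```python
import math

def kloosterman(a, b, p):
    # S(a, b; p) = sum over x in (Z/pZ)* of e((a*x + b*x^{-1}) / p); real-valued
    total = 0.0
    for x in range(1, p):
        xinv = pow(x, p - 2, p)  # modular inverse of x, valid since p is prime
        total += math.cos(2 * math.pi * ((a * x + b * xinv) % p) / p)
    return total

def K(n, p):
    # the lecture's trace function: Kl2(n^2)^2 - 1 = S(n, n; p)^2 / p - 1
    return kloosterman(n, n, p) ** 2 / p - 1.0

p = 13
vals = [K(n, p) for n in range(1, p)]
# Weil's bound |S(n, n; p)| <= 2*sqrt(p) forces K(n) to lie in [-1, 3]
assert all(-1 - 1e-9 <= v <= 3 + 1e-9 for v in vals)
```

The same loop can be used to observe square-root cancellation in sums of K(n) experimentally, which is the kind of experiment through which the bad-gamma phenomenon above was found.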
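And for the twisted-horocycle application, here is a quick sketch (again my addition, with p = 101 an arbitrary choice) of how one would compute the points tau_{j,p}: it reduces the horocycle points (j + i)/p to the standard fundamental domain of SL2(Z), which is the first step if you want to plot them and watch the equidistribution.

```python
def reduce_to_F(z):
    # reduce a point of the upper half-plane to the standard SL2(Z)
    # fundamental domain F: |Re z| <= 1/2 and |z| >= 1
    while True:
        z = complex(z.real - round(z.real), z.imag)  # translate by an integer
        if abs(z) < 1 - 1e-12:
            z = -1 / z  # apply the inversion S: z -> -1/z
        else:
            return z

p = 101
pts = [reduce_to_F(complex(j, 1) / p) for j in range(p)]  # the tau_{j,p}
assert all(abs(z.real) <= 0.5 + 1e-9 and abs(z) >= 1 - 1e-9 for z in pts)
assert all(z.imag > 0 for z in pts)
```

The loop terminates because each inversion strictly increases the imaginary part, which is bounded on the orbit; restricting the list comprehension to j of the form n^2 mod p gives a numerical illustration of the twisted statement.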