Okay, so I wrote down again here the main theorem, the Riemann hypothesis in the way I phrased it yesterday, and I recall the four examples I had written; I wrote down that, with the normalization, the way I defined Kloosterman sums, this is indeed the right value. What I want to do as a first step now is to try to give another example of applying this really as a pure black box, see where we get into a little bit of difficulty, and then introduce further invariants which will solve these difficulties. Before I do this I want to make one or two more comments on this condition of geometric irreducibility which necessarily comes into the statement of the theorem. So first, what happens among the examples? Examples of type one are always geometrically irreducible — I mentioned this yesterday — because the underlying vector space has dimension one, and a one-dimensional representation is always irreducible. Two, one can show, is geometrically irreducible; this follows either from the construction or by the diophantine criterion which I will state as the second part of this remark. Example three is never geometrically irreducible except if f has degree one — f here is a non-constant polynomial — so three is never geometrically irreducible if the degree of f is at least two, simply because the average value of this counting function is always one, and that corresponds to the fact that there is a trivial component which one can split off; and k(n) − 1, when it's defined in this way, is also a trace function and is very often geometrically irreducible. If you remember, the simplest example was when the polynomial is x squared: then k(n) − 1 is just the Legendre symbol of n mod p, and this is an example of case one, so it is geometrically irreducible. I should say, maybe, that in that case the dimension of the underlying V for k is the degree of the polynomial.
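This last claim — that for f(x) = x² the counting function satisfies k(n) − 1 = (n/p), the Legendre symbol, an instance of case one — can be checked directly. A minimal numerical sketch; the small prime p = 13 is my own choice for illustration:

```python
# Check example three for f(x) = x^2 mod p: the counting function
# k(n) = #{x mod p : x^2 = n} satisfies k(n) - 1 = (n/p), the Legendre symbol.
p = 13

def count_solutions(n):
    """Number of x mod p with x^2 = n mod p (the counting trace function k)."""
    return sum(1 for x in range(p) if (x * x - n) % p == 0)

def legendre(n):
    """Legendre symbol (n/p), computed via Euler's criterion."""
    if n % p == 0:
        return 0
    return 1 if pow(n, (p - 1) // 2, p) == 1 else -1

for n in range(p):
    assert count_solutions(n) - 1 == legendre(n)
```

The same computation works for any odd prime p; the trivial component split off is exactly the constant average value 1.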
And case four, like most families — at least of elliptic curves — is geometrically irreducible. So that was going through the examples and trying to see which ones are geometrically irreducible. You can then try to apply the theorem to many exponential sums; just on these examples you already get many statements, most of which you would not be able to prove directly without applying the Riemann hypothesis. This square-root cancellation — if you take, say, a Kloosterman sum times the conjugate of example four — is not something you can hope to prove directly. Okay, so the second remark is about the meaning: there is often a very convenient diophantine interpretation of geometric irreducibility that can be used at least to guess that things are geometrically irreducible, and quite often actually to prove it. It is also a consequence of the Riemann hypothesis, and it was, I guess, first stated explicitly by Katz. It states the following — I'm going to state it formally as a proposition, in a way which is convenient for analytic number theory; there is a more general version which is more convenient for certain more algebraic questions. Assume we have, for every prime p, a trace function kp, with complexity uniformly bounded as p varies. In analytic number theory this is often the situation: for every prime p there is a different trace function, and the complexity, on the other hand, does not grow with p. Then — one can phrase this as an if and only if, but for the application I want to point out it is more interesting in the direction of proving geometric irreducibility from something, so I'll just take the if component.
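For the criterion that follows, example two is a good test case: with the normalization of the lecture, the L2 norm of Kl2 over Fp can be computed exactly by orthogonality — the sum of |Kl2(n)|² over all n mod p equals p − 1, which is indeed equivalent to p. A minimal numerical sketch (p = 13 chosen arbitrarily):

```python
# L2 norm of the normalized Kloosterman sum Kl2(n; p) over F_p.
# Orthogonality gives sum_{n mod p} |Kl2(n)|^2 = p - 1 exactly.
import cmath

p = 13

def e_p(x):
    """Additive character e(x/p)."""
    return cmath.exp(2j * cmath.pi * (x % p) / p)

def kl2(n):
    """Normalized Kloosterman sum: (1/sqrt p) * sum over x != 0 of e((x + n/x)/p)."""
    s = sum(e_p(x + n * pow(x, p - 2, p)) for x in range(1, p))
    return s / p ** 0.5

total = sum(abs(kl2(n)) ** 2 for n in range(p))
assert abs(total - (p - 1)) < 1e-6
```

The Weil bound |Kl2(n)| ≤ 2 for n ≠ 0 can be checked on the same values, consistent with the summands being bounded and averaging to one.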
So: if you compute the L2 norm of kp over Fp — the sum of |kp(n)|² over n in Fp — and if this is equivalent to p when p goes to infinity (this is something you might hope, in a number of cases, to compute analytically; intuitively it means that on average these things are one, because they are bounded and the number of terms is p), then kp is geometrically irreducible for p large enough. One can be more precise, but I think this gives a rough indication, and in many analytic applications this is already enough at least to guess that something is geometrically irreducible: you might not be able to prove the asymptotic formula, but you might have good heuristics to guess that the leading term is p. — Yes, that's a good remark; let me first address it and then come to my example. What Philippe says is that — something I have not said — the underlying proof gives more information, and one piece of this information is the following: if you can find some exponent delta positive, independent of p, such that this L2 norm is at least p to the 1/2 plus delta, then in fact it has to be of size at least p. It won't tell you that it's equivalent to p, because the constant could be more than one, but one gets for free that the sum is of size at least p, under these conditions — I'm still working under the assumption that the complexity is bounded. This follows from the underlying results of Deligne, which I specialized to state this proposition. The one case I know where this is used is a paper of Bombieri and Bourgain, on applications to a problem of harmonic analysis, Kahane's ultraflat polynomials — it's not really necessary there, because they first give a proof, by Katz, of the estimate they need, and then they reprove the same estimate more directly, using the fact that if you get even a tiny bit of cancellation then you get at least square root of p cancellation; so it's more in the opposite direction. Here I could actually say — this process goes both ways, but let me not try to give both directions — the principle of this dichotomy, the fact that the exponent tends to jump between one half and one with no gap, is used in this paper. — Is there an inequality between the complexity and the dimension of the underlying V? — Yes, I think I said that yesterday: the complexity is, among other things, at least the L-infinity norm, but in fact it is also at least the dimension of V. By the end of this lecture we'll have almost all the components of the actual definition of complexity, and next week I will introduce the last one, which is necessary in some cases. Okay, so as an application, you can now certainly check geometric irreducibility of the examples. Case one we already discussed — but even if you didn't know that there is an underlying vector space: the nice thing about this criterion is that in some sense you don't need to know there is an underlying vector space; it's a purely diophantine statement, and you might be able to verify it without knowing anything about any underlying algebraic geometry. So you regain case one, if f is a fixed function defined over Q, because then the modulus is one except at boundedly many points where it is zero. You can check case two easily: for r equal to two, the mean square of the Kloosterman sum is very easy — it is a Fourier transform, so it's the Plancherel formula; for r at least three it's a slightly more complicated computation, but still elementary, and you get that it is equivalent to p. For case three you can do some cases at least; because I didn't state an if and only if, this would not allow you to deduce that something is not geometrically irreducible — but that is not the
most interesting application. And for case four, I guess it would work — let me say it's an exercise; I didn't verify whether the mean square of the number of points on the Legendre family is actually easy to compute, but I would guess that it is: it should be of about the same difficulty as for Kloosterman sums. Okay. So now we have a black box, and at least some of the terms in the black box we can interpret very concretely, even if we don't care at all about the algebraic geometry: geometric irreducibility you can replace, to some extent, by this criterion, and geometric isomorphism you can just try to replace by proportionality of trace functions. So let's try, with these, to prove an important exponential sum estimate, and see whether there are difficulties. This example comes from the work of Friedlander and Iwaniec: in studying the ternary divisor function in arithmetic progressions to large moduli — as explained by Terry Tao last week, or early this week, I guess — they found that in the end everything could be reduced to a good estimate for the following exponential sum, which I will call Phi. The way they write it is a bit different; I'll write it with two parameters a and b, where a is nonzero mod p and b is anything in Fp. It's not exactly the way they state it, but that's the way it comes up in the proof, the natural way of writing it, the way Terry wrote it last time: Phi(a, b; p) is the sum over x of Kl3(x; p) — the Kloosterman sum with three variables, or two variables of summation — times one multiplicatively shifted, the conjugate of Kl3(ax; p), and then, because they have to use completion techniques to bound sums over shorter intervals, you need an additive character, e_p(bx). Estimating this sum was the crucial step for them, and they basically need square-root cancellation. The way they actually write it down — or one way to write it down — is by expanding the two Kloosterman sums, which is not actually the best way to think about it.
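Before transforming the sum, one can compute Phi(a, b; p) directly for a small prime to see the dichotomy between the diagonal case a = 1, b = 0 and the rest. A minimal numerical sketch; p = 13, the normalization of Kl3 by p, and the summation over nonzero x are my own choices for illustration:

```python
# Direct computation of the Friedlander-Iwaniec sum Phi(a, b; p) for a small prime.
import cmath

p = 13
inv = lambda x: pow(x, p - 2, p)  # inverse mod p

def e_p(x):
    return cmath.exp(2j * cmath.pi * (x % p) / p)

def kl3(n):
    """Normalized hyper-Kloosterman sum: (1/p) * sum over xyz = n of e((x+y+z)/p)."""
    s = sum(e_p(x + y + n * inv(x * y))
            for x in range(1, p) for y in range(1, p))
    return s / p

def phi(a, b):
    return sum(kl3(x) * kl3(a * x % p).conjugate() * e_p(b * x)
               for x in range(1, p))

# Diagonal case a = 1, b = 0: a sum of |Kl3(x)|^2, hence real and nonnegative,
# with no cancellation possible; by Cauchy-Schwarz it dominates |phi(a, b)|.
assert phi(1, 0).real > 0 and abs(phi(1, 0).imag) < 1e-6
assert abs(phi(2, 1)) < phi(1, 0).real
```

The strict inequality in the last line reflects the expected non-proportionality of x ↦ Kl3(x) and x ↦ e_p(−bx) Kl3(ax) for (a, b) ≠ (1, 0), which is exactly the question discussed below.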
Well — I mean, they do expand, but they expand later; they never write it this way, that's true. I think this form only appears explicitly in the paper of Heath-Brown; he writes it this way. Okay, so if one expands the hyper-Kloosterman sums naively, this is a five-variable additive character sum, and the bound they need is square-root cancellation. If you wrote it as a five-variable character sum and shifted the coefficients a tiny bit in a random way, you would get a genuinely irreducible five-variable character sum, and in most cases you would be rather stuck even now. At that time, this was a little before the ideas behind such very general versions were completely available in the literature, so the proof of the necessary estimate was given by Birch and Bombieri in an appendix to the paper of Friedlander and Iwaniec: they prove the required square-root cancellation, in the cases where it makes sense. So what are the obvious obstructions? (Mind the complex conjugate on the second factor.) The only completely obvious, naive problem occurs when a is equal to 1 — then there is a square here — and b is equal to 0: in that case it's really the modulus squared of Kl3, and there cannot be cancellation. If a is different from 1 or b is nonzero — so a equal to 1 is permitted if b is nonzero — then you have a uniform square-root cancellation estimate, for all primes and all a and b: uniform in terms of a, b and p. Why do I say it's square-root cancellation? Remember that the Kl3's have been normalized by dividing by p, since the number of variables is 3; the fact that these are bounded in L-infinity norm already tells you that the unnormalized Kl3 is bounded by about p, which is already square-root cancellation — it is already encoded in the Kl3 itself. What we want, then, is square-root cancellation in this sum of bounded terms; when you expand everything into a five-variable sum, this corresponds, for the five-variable additive character sum, to p to the 5/2 — full square-root cancellation for the five-variable sum. I don't remember exactly how much they can afford to lose in the argument — a little tiny bit, but not very much; p squared, say, would not work, if only because that would essentially be the trivial bound from this side: p here corresponds to p squared there. So let's try to give a proof of this using what we know, and see whether there's a problem. We want to write this as k1 times the conjugate of k2: define, for instance, k1(x) to be Kl3(x; p), and k2(x) to be Kl3(ax; p) e_p(−bx), so that the conjugate of k2 is the second factor. The first one is a trace function; for the second one, it is part of the relatively easy part of the formalism — which I will state in more general versions later — that this is also a trace function; it's not very difficult to prove. And in both cases the complexity is uniformly bounded with respect to p, a and b, basically because here we're taking a product of two trace functions, each of bounded complexity as p varies, either by things I've already stated or by the easy formalism I will state later. For the complexity it's very easy — this complexity you have to think of as a naive height; it would be like a more naive version of the actual conductor that occurs in the exact formula. For analytic purposes, for bounds, it's enough to have a rough quantity; when you want to do really algebraic things you might want a more precise definition, but here it doesn't really matter. Okay. So k1 is geometrically irreducible — this is fine, we just checked it, using the diophantine criterion, and it can really be checked without knowing anything; k2 also: we are just multiplying by something of modulus one,
so it doesn't change the geometric situation, again by the diophantine criterion; and x goes to ax is a bijection on Fp, so it also doesn't change anything. Now the question is whether they are geometrically isomorphic or not. If the underlying rho1 and rho2 — for k1 and k2 — are not geometrically isomorphic, then we are done: RH gives Birch–Bombieri, because in this estimate the factor c1 squared times c2 squared is bounded by an absolute constant. This is exactly what I said. Now, this can fail: an obvious case where it fails, if we think of the proportionality criterion, is a equal to 1 and b equal to 0 — then it fails, of course — and we want to show these are the only cases. So is that the only case? In other words, we reduce to an a priori simple-looking, concrete problem: does there exist alpha, of modulus one, such that Kl3(x) is always alpha times e_p(−bx) times Kl3(ax)? It looks very concrete — this is something you can certainly check by computer for small primes, and it is unlikely to be true — but it is not at all obvious a priori how to prove it. Last time, when we recovered the qualitative form of the Weil bound, we used the fact that the summand from the Kloosterman sum was 0 at a point where the other summand, the constant 1, was not. Here, the obvious point where things are strange is 0, but it is the same on both sides. You can try to take the modulus, of course: |Kl3(x; p)| should not coincide with |Kl3(ax; p)| for all x if a is not 1 — but how do you actually prove that? It's not so simple, not entirely clear. I think you can probably extract some kind of proof by some means, but I want to use this as a pretext to introduce more of the underlying data. There are actually different ways of doing this; there is one which starts with a little trick which I think is also good to see: nothing says that we cannot try to use analytic techniques to study this. So if you actually use a little bit of analysis — you can try a Fourier transform and apply Plancherel, or just expand the Kloosterman sums, being a little more careful than writing everything as a five-variable sum — then you reduce to a three-variable character sum by using orthogonality. Expanding Kl3, applying orthogonality, and regrouping, you obtain a different formula, which is actually the way it's written explicitly, at least in the appendix of Birch and Bombieri: you sum over t, and you now have a two-variable sum, a Kloosterman sum Kl2 at 1/t times a Kl2 at a/(t + b) — I don't write the p, it's implicit. Here you have to think of t as nonzero and not equal to −b, to avoid infinity; or you can simply sum over all of Fp and use the definition, Kl2 at infinity being equal to 0 — and actually this is a reasonable thing to do as well. What we have gained here: for Birch and Bombieri this was important because now, if you expand, it's a three-variable character sum, and they actually succeeded in applying Deligne's work on higher-dimensional sums to estimate it and get square-root cancellation. But you can also use this form to get a quicker approach via the black-box version of the Riemann hypothesis. The point is that there is a special point, t equal to 0, on one side, but it corresponds to t equal to −b on the other side, so you can immediately guess that if b is not 0, this will have square-root cancellation. So, applying RH: we do another change of variable, which is again permitted by the formalism I recalled, and which — because it's just a bijection, at least of the projective line — always keeps the complexity bounded, so k2-tilde, t goes to Kl2(a/(t + b)), is also of bounded complexity. We see that k1-tilde and k2-tilde are geometrically irreducible, for instance by the diophantine criterion, and that k1-tilde is not proportional to k2-tilde, at least if b is nonzero, because the
values at t equal to 0 do not coincide: one value at t equal to 0 is 0, and the other is not — it is Kl2(a/b) — and as p varies there is no reason for Kl2(a/b) to always be 0. Actually, if you think about it, this is not obvious: you need to actually prove something. That's the point — this is only very suggestive, because we don't have a situation as simple as the Weil case, where everything is of modulus one or zero; it is easy to believe that Kl2 should not always vanish at a/b, but actually proving it might require some argument. Proving that Kloosterman sums are nonzero — for Kl2 — you can do by hand, but in more complicated cases, this way of trying to use just proportionality will give you completely concrete-looking, very innocent-looking questions, but not situations where you can actually do something. So we need a little bit of extra formalism to handle that case, and I will now go in this direction. I had wanted to present another application where just the black-box version is enough, but I don't think I will have time, so I'll do that next week if I have the opportunity. So: we need some more formalism, and then we will come back to this exponential sum estimate. And there will always remain the case where b is 0: there we have to prove that when a is not 1, it is not proportional; that is also not obvious. Okay. So, formalism. We already saw some of it coming naturally in handling even these very special cases. The first item was mentioned already: the decomposition into geometrically irreducible pieces. To be precise, this does not quite work in the generality I'm going to state; there is a slight technical issue, in that a representation could be irreducible as a representation of the group I called pi1, whereas geometric irreducibility is irreducibility as a representation of a slightly smaller subgroup, so there could be a discrepancy. It's not hard to deal with, it's just technical, so I'm going to state something that is a little bit false; in the papers you can see the actual version stated correctly. Any k which is a trace function can be written as a sum of at most c(k) — the complexity of k — geometrically irreducible summands ki, with the complexity of each ki at most the complexity of k. If I didn't say "geometrically irreducible" this would be literally correct; but geometric irreducibility is what you need to have this form of the Riemann hypothesis. This is a technical issue which in applications has never actually come up concretely, so it's worth not thinking about it too much at the moment. Now, more elementary things. If you have two representations, you can build the direct sum — the most obvious way of constructing new representations out of old ones — and the corresponding operation is just the sum of the trace functions: the sum of two trace functions is a trace function. Typically, if v1 and v2 are nonzero, the direct sum can never be geometrically irreducible: it has pieces which are invariant. And in any such operation, the first thing is to know that you can do the operation on trace functions, but you should always immediately ask how the complexity changes: without control of the complexity, these operations are useless for applications. What happens in this case is very simple: whatever the definition, the complexity of k1 + k2 is basically at most the sum of the complexities of the summands. Also very easy is a twisting operation, which on V would be denoted alpha^deg tensor V, where alpha has modulus 1; whatever that is, it is an operation which maps k to alpha·k. Because alpha has modulus 1, it sends geometrically irreducible to geometrically irreducible, and the complexity doesn't change. Then, if you go to the dual representation, one shows that the corresponding operation is just complex conjugation. This is actually quite deep: at the missing points, checking that the complex conjugate is the right thing is a deep fact due to Ofer Gabber. But this works, and the complexity doesn't change. I should say that one could work without the missing points and just add them afterwards; so the fact that this deep result holds is very convenient — one doesn't have to make changes when taking conjugates — but it would not be a massive problem if it were not the case. Okay, change of variable. Suppose you have gamma in PGL2(Fp), which acts on P1(Fp) — let's say on P1 of Fp-bar. Then x goes to k(gamma·x) is a trace function. This uses the fact that there is a natural extension of the trace function from Fp to P1(Fp): one can always speak, for any given trace function, of k at infinity, in a completely canonical way — the same way that here I said that Kl2 at infinity is 0; this is not just a convention, it is the right value, and in other cases it might be something else. So: a natural extension of k, to define k at infinity; and the complexity of x goes to k(gamma·x) is the same as the complexity of k. In particular, k(ax + b), for any fixed a and b with a nonzero, is also a trace function. One can actually do other operations — you can also change variables by polynomials, and not just by homographies — but for the moment I won't go into this. — Yes: and you can see it with the diophantine criterion: when you change the variable, the sum is basically the same, except that you might have switched infinity and the preimage of infinity, so the sum changes by a bounded amount between the two. It is also completely obvious once one knows the formalism — it's just precomposition by an automorphism of the group. I should say here also that it preserves irreducibility — when I put a star, this means the operation preserves geometric irreducibility — and it also preserves non-irreducibility, if you want to think about it that way. Okay, now something
that's just a little bit more complicated — I guess it's number six. What about products? Products are a bit more complicated, because of course what you want to do is map two representations to their tensor product, so that the trace functions multiply. However, just as the product of two primitive Dirichlet characters might not be primitive — think of the Legendre symbol times the Legendre symbol, which is the trivial character — here you get something which should be k1·k2 but is not quite. This is again a technical issue which has to do with the choice of formalism: either you want analogues of Dirichlet characters, or analogues of primitive Dirichlet characters; both have their advantages, I prefer the primitive ones, and so I sometimes have to change the product a little. So: there exists a set S in Fp, of size bounded by the sum of the complexities of the two factors, and a trace function k3, such that k3(x) equals k1(x)·k2(x) outside this bounded set, and the complexity of k3 is bounded — I have to remember, something like 10·c1²·c2²; I don't remember if that's exactly correct, but it's almost certainly of this shape — bounded in terms of the complexities of k1 and k2. Because the L-infinity norm of k3 is bounded by its complexity, hence by this quantity, and |k1·k2| is bounded by c1·c2 — the L-infinity norm is bounded by the complexity — at these boundedly many points where the two functions don't coincide, the difference is also bounded: for x in S, |k3(x) − k1(x)·k2(x)| is bounded in terms of c1 and c2. You can write down the inequalities that come out; it's not very important. For all purposes, the fact that the product is not quite a trace function is just a technical nuisance, exactly like products of Dirichlet characters not always being primitive. Okay. Now something which is not quite formalism, which goes a little bit deeper in explaining what the complexity is: the issue of ramification
points. A Dirichlet character has a conductor, and I said the complexity is an analytic conductor; it also has features similar to the fact that the primes dividing the conductor of a Dirichlet character are the places where something strange is happening. Here we have something similar. One defines a finite subset — this is an invariant, I should say, of the representation itself, so we start with rho: given a representation rho, there is a finite subset of so-called ramification points, which I will call S(rho), the singularities. This is most naturally seen as a subset of P1 of Fp-bar — so a point of it is not necessarily an element of Fp: it could be something which lies in an extension, or it could be infinity. One way of phrasing the content is this. First, the size of S(rho) is again bounded by the complexity; that is the second ingredient in the complexity: there is the L-infinity norm — or, if you want, the dimension of V — and there is this set of singular points. If the sum of these two invariants were enough to bound the complexity, that would be great; it turns out not to be the case, but we'll see that next week. And what is the meaning of these singular points? One way of saying it is that for x not a singular point — unramified, I'm going to phrase it this way — k(x) can be written as the trace of a unitary matrix of size the dimension of V. So if x is in Fp and unramified, then k(x) is the trace of a unitary matrix of size dim V. In particular, of course, this recovers the fact that the L-infinity norm is bounded by the dimension, which is bounded by the complexity — at least at the unramified points: it implies |k(x)| is at most dim V. This alone would not be so useful; but if x is in S(rho), this fails. More precisely, k(x) is then the trace of a matrix of strictly smaller size — really strictly smaller — at least one eigenvalue of which has modulus at most one over the square root of p. (Not all the eigenvalues need drop — but at least one eigenvalue drops in modulus.) If you know a little bit about modular forms, this is very analogous to the fact that at the ramified primes the Satake parameters drop in modulus: the Ramanujan–Petersson conjecture says the modulus is one for all the local parameters at unramified primes, and at the ramified primes it is known that it actually drops in modulus. In our case, because the representation is not necessarily irreducible, it could be that some eigenvalues remain of modulus one; but the main point is this drop. From this, or from other facts, it is then easy to check that S(rho) is an invariant of rho up to geometric isomorphism, and therefore this sometimes gives us a way to say that two things are not geometrically isomorphic: simply because we know these finite sets, and if the two sets are not the same, the representations are not geometrically isomorphic. So where does the dimension in the complexity come from? This is something I said: in the end, we actually define the complexity as the sum of three terms — the dimension of the representation, the number of singular points, and an invariant of wild ramification. — So it's not just the analytic conductor? What problems would you have if you took just the conductor? — If you took just the Artin conductor, it would not be a height in the right sense; typically, for example, all hypergeometric sheaves in the sense of Katz would have the same conductor, and some of these have unbounded rank, and so on. It depends what you mean by the conductor, but it is not the Artin conductor, for instance; one needs a little bit more information. — For Kloosterman sums the breaks are at one over the rank of the Kloosterman sheaf, so these have unbounded rank? — Yes. But actually we have
been told by Will Sawin that there is a better invariant to take — which I'll actually state later, because I need something else first. So there are more algebraic versions of the definition of the conductor; but we cannot just rewrite all our papers, so for the moment we are keeping the old one, up until the point where it is really not possible to ignore a better definition. For analytic number theory this is not the issue. Okay. So, to recall and repeat a little bit: we have one extra invariant, this set of singular points. Let's go through the examples and say what we can about the singular points. Case one: for additive characters e_p(f(n)), the set is contained in — there could be some issues for small primes, but it is contained in — the set of poles of f; for multiplicative characters, it is the zeros and poles of f, and one can be more precise and say exactly which ones are and which ones are not singular. To see an example where there is no equality here: if you have a double pole and a character of order 2 — a double pole meaning a factor, let's say, 1/(x − alpha)² — then this is not ramified for the quadratic character. So you can be more precise, but this is enough for the moment. For case two, Kloosterman sums: it is 0 and infinity. For case three, k(n) the counting function of the number of solutions: again, I think there might be some subtleties which I don't remember exactly, but the set is contained in the set of critical values of f — critical values meaning you look at the zeros of the derivative and take the image under f, the set f of the zeros of f prime. When I say there could be issues: typically, if the polynomial has degree divisible by p or something, you could expect strange things to happen; one can be more precise, but this is just to begin with. And for the Legendre curve — the k(n) which counts the number of points on the Legendre family of elliptic curves — it is zero, one, infinity. Here you see the situation where things behave a bit strangely: if you take n equal to zero or one, you have a double root; and infinity always seems to be a little bit stranger. Infinity is not always singular, though — I should say: if you have a rational function which is regular at infinity, like 1/x, then infinity is not a singularity of e_p(1/x), for instance. Okay, so with this done we can go back to what we were trying to do with the Friedlander–Iwaniec sum, once it's transformed into a sum of Kl2's. Back to Phi(a, b; p): this was, after transformation, the sum over t in Fp of Kl2(1/t) times Kl2(a/(t + b)). Now we can use this extra information to see that if b is nonzero, then k1-tilde — this was Kl2(1/t) — is ramified at 0, but k2-tilde is not: for t equal to zero, k1-tilde is like Kl2 at infinity, and Kl2 at infinity is ramified — this is example two above — while for t equal to zero in k2-tilde we get a/b, which in that case is just an element which is neither zero nor infinity. So they are not geometrically isomorphic. There remains the case b equal to zero and a different from one, where we expect that they are not geometrically isomorphic, but it is not obvious: in that case both are ramified at zero and infinity. Actually, I think in the paper they don't need that case — if I remember right, for some reason I will have to check, it is enough for the Friedlander–Iwaniec argument to deal with the case where b is nonzero; I don't know, I have to check that. Anyway: next week we'll see further invariants which allow you to distinguish Kl2(1/t) from Kl2(a/t), and show that when a is not one they are indeed not isomorphic — this is still not clear with the amount of information we have. Okay, now, in the last few minutes, I want to introduce one last piece of formalism, which is much deeper than what I've already described, and indicate another natural
It's number seven, I guess: the Fourier transform. Now, this is extremely deep, if only because there is no analogue of this at all for classical modular forms, or for modular forms over number fields. This is an operation that makes sense for trace functions, and which has an incarnation at the level of the underlying representations, but for which there are no analogues over number fields. So it is really a deep, deep construction, which is again due to Deligne, and which was studied extensively by Laumon and by Katz and a few others.

Okay, so here's the thing I want to say. Let K be a trace function... yes? (Question from the audience: I think once you have eliminated the case b non-zero you are done, because then you change variables, t goes to 1/t, and then you can evaluate.) Ah yes, that's the way to do it, I remember now, you're right. So if b is actually zero, you can actually compute this, essentially by the Plancherel formula. Yes, thanks. It will be clear: if you replace t by 1/t, these become Fourier transforms of one-dimensional characters, and you can execute the sum when b is equal to zero; that's actually the way Friedlander and Iwaniec do it in the paper. Which again exemplifies the fact that one should not just blindly try to apply the formalism: your analytic instincts and your existing knowledge can often still be used, even in situations where algebraically it looks a bit iffy and not obvious. One can of course also prove it algebraically, but that requires a little bit more work. Okay, thanks.

So let K be a trace function mod p. We need an assumption: we need that either K is geometrically irreducible and not an additive character, or, let me say it this way, that K has no factor of additive type, where by a factor I mean a summand of the type alpha times e_p(ax) in its decomposition. If K is geometrically irreducible, this just means that K is not proportional to such a character.
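The Plancherel evaluation of the b = 0 case mentioned above can be checked numerically. Opening the two Kloosterman sums and summing the inner additive character (this little computation is mine, a sketch under the normalization used here), one finds sum over u in F_p^* of Kl_2(u) Kl_2(au) = -1 - 1/p for a different from 0 and 1, versus p - 1 - 1/p in the diagonal case a = 1:

```python
import cmath

def kl2(a, p):
    # normalized Kloosterman sum (real-valued)
    return (sum(cmath.exp(2j * cmath.pi * ((x + a * pow(x, -1, p)) % p) / p)
                for x in range(1, p)) / p ** 0.5).real

p = 101
K = [0.0] + [kl2(a, p) for a in range(1, p)]  # K[a] = Kl_2(a; p); K[0] unused

# off-diagonal cases: the sum evaluates exactly to -1 - 1/p
for a in (2, 3, 50):
    s = sum(K[u] * K[a * u % p] for u in range(1, p))
    assert abs(s - (-1 - 1 / p)) < 1e-6

# diagonal case a = 1: the sum is p - 1 - 1/p, consistent with Plancherel
s1 = sum(K[u] ** 2 for u in range(1, p))
assert abs(s1 - (p - 1 - 1 / p)) < 1e-6
```

So for b = 0 the sum is completely elementary, which is the point of the audience's remark: no geometric machinery is needed in that case.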
More precisely, the condition is that K is not geometrically isomorphic to an additive character. Properly speaking, this has to be a statement at the level of the underlying representation, but for small complexity it is the same as saying that if you decompose K as a sum of trace functions which are geometrically irreducible, none of them is proportional to an additive character. The reason we need this is simply that the Fourier transform of an additive character is a delta function, and delta functions are not well handled by the formalism I am describing; there is another formalism which can do it, but it is even more abstract and I don't want to go into it.

So, for instance, any of the examples we have seen has this property, including case three, which is not entirely obvious, but it is true if the degree of f is less than p, provided f in that case is not of degree 1. So all the examples here work, except case one when f is of degree 1, and sometimes case three when the degree of f is greater than or equal to p. So it is a very generic assumption.

Then I define the Fourier transform of K as the function on F_p which sends h to

  FT(K)(h) = p^(-1/2) * sum over n in F_p of K(n) e(nh/p).

So we normalize by 1/sqrt(p), which is the most natural in this case; I think at first we were using just the naive sum, and then Paul Nelson pointed out that it was much better to be unitary. So it is just the usual discrete Fourier transform with this normalization. We have taken the bad habit of writing e(nh/p) instead of e(-nh/p); that's from Katz, because that's what he uses in his books. (Yes, that's an h, thank you: nh over p.) The meaning of the normalization is that when there is square-root cancellation, this will be bounded, so it has a chance of being a trace function of something with bounded complexity. And this is what happens: there is a theorem, which is really deep and also uses the Riemann hypothesis in a fairly strong form, that there exists a representation FT(rho) whose trace function is exactly this Fourier transform of K.
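Here is the unitary discrete Fourier transform in code, with the e(+nh/p) sign convention just mentioned, applied to the Legendre symbol (example one with the quadratic character); the function name fourier is my own. The classical Gauss sum evaluation shows the transform has absolute value exactly 1 away from 0, which illustrates why the 1/sqrt(p) normalization keeps things bounded.

```python
import cmath

def fourier(K, p):
    """Unitary discrete Fourier transform: FT(K)(h) = p^(-1/2) * sum_n K[n] e(nh/p)."""
    return [sum(K[n] * cmath.exp(2j * cmath.pi * n * h / p) for n in range(p))
            / p ** 0.5 for h in range(p)]

p = 103
# Legendre symbol mod p via Euler's criterion
leg = [0] + [1 if pow(n, (p - 1) // 2, p) == 1 else -1 for n in range(1, p)]

F = fourier(leg, p)
# Gauss sums: |FT(leg)(h)| = 1 for h != 0, and FT(leg)(0) = 0,
# so the transform is bounded independently of p
assert abs(F[0]) < 1e-9
assert all(abs(abs(v) - 1) < 1e-9 for v in F[1:])
```

This is the simplest instance of the phenomenon the theorem describes: the Fourier transform of a trace function of bounded complexity is again bounded, uniformly in p.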
This holds including at all the bad points; no change has to be made. So this is due to Deligne. A minor addition, which was not in the literature but was not particularly hard to check based in particular on the work of Laumon, is the following: as I said, this would be nice theoretically but not very useful for analytic number theory unless we control the complexity, and from the work of Laumon, which says a lot about the invariants of the Fourier transform, it is not hard to check that the complexity is controlled. What we got was

  c(FT(K)) <= 10 c(K)^2,

which is probably not best possible, but certainly is enough for applications.

(No, I don't want to be pedantic, because I didn't do it for the Kloosterman sums; the most proper way of defining this would have a minus sign. Why is that? Well, it is a trace on an H^1, so it has to be an alternating sum. Yes, I know, I have not mentioned the word cohomology a single time, except like two seconds ago.)

Oops, I'm already over time. Do I want to say anything more? Right, let me state an immediate corollary of this, which in principle goes back more or less to the thesis of Philippe: you get Polya-Vinogradov bounds for any trace function. So let K be a trace function which has a Fourier transform in the sense above, and let I modulo p be any interval, meaning the projection modulo p of an interval of integers of length at most p. Then, what you deduce from this by the standard completion method, which was explained by Terry for instance, is that

  | sum over n in I of K(n) | <= 10 c^2 sqrt(p) log(3p),

where c is the complexity of K, so that 10 c^2 bounds the conductor of the Fourier transform. We actually had a paper where there was a big-O symbol, and then Serre sent us the remark that you could replace the big-O symbol by "<= log(3p)"; we had "<= log p" up to a constant, but he doesn't like implicit constants. Okay, and the proof is
completely straightforward: you apply the Plancherel formula, writing the sum as the sum of the Fourier transform of the characteristic function of the interval times the Fourier transform of K. You bound the Fourier transform of K by its sup norm, which is bounded by its complexity, so by at most 10 c^2; and then you sum the Fourier transform of the interval, which with our normalization is at most sqrt(p) log(3p). And that's the proof. Okay, so I'll stop here for today. Any questions?

(Question about the Fourier transform preserving geometric irreducibility.) Yes, in this case it is completely obvious from the diophantine criterion: the Fourier transform, the way I defined it, is unitary, and therefore the L^2 quantity that you want to control is the same. So if K is geometrically irreducible, so is its Fourier transform. Of course this can also be proved differently, but from the analytic point of view there is no mystery in this fact. And it is a very useful fact, because the Fourier transform of something of rank 1 can be quite complicated and can have arbitrary rank; if you have an arbitrary-rank object, it is sometimes hard to check that it is irreducible, but if it is the Fourier transform of something of rank 1, it is obvious on the other side.

(Question about example three: is it a theorem that the set of singular points is included in the set of critical values?) Yes, at least if the degree is coprime to p; if the degree of f is, let's say, strictly less than p, that is the case.
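The completion argument in the proof above can be carried out numerically. This is a sketch, with the Legendre symbol standing in for K and an interval of length 40 (choices mine, as is the function name fourier); it checks the Parseval identity behind the argument and the L^1 bound sqrt(p) log(3p) on the transform of the interval.

```python
import cmath
import math

def fourier(K, p):
    # unitary DFT: FT(K)(h) = p^(-1/2) * sum_n K[n] e(nh/p)
    return [sum(K[n] * cmath.exp(2j * cmath.pi * n * h / p) for n in range(p))
            / p ** 0.5 for h in range(p)]

p = 103
leg = [0] + [1 if pow(n, (p - 1) // 2, p) == 1 else -1 for n in range(1, p)]
interval = [1 if n < 40 else 0 for n in range(p)]  # I = {0, ..., 39} mod p

S = sum(leg[n] * interval[n] for n in range(p))    # the incomplete sum
FK, FI = fourier(leg, p), fourier(interval, p)

# Parseval: S = sum_h FT(K)(h) * conj(FT(1_I)(h)), hence
# |S| <= max_h |FT(K)(h)| * sum_h |FT(1_I)(h)|
parseval = sum(a * b.conjugate() for a, b in zip(FK, FI))
assert abs(parseval - S) < 1e-8
assert abs(S) <= max(abs(v) for v in FK) * sum(abs(v) for v in FI) + 1e-8

# the L^1 norm of FT(1_I) is what contributes the sqrt(p) log(3p) factor
assert sum(abs(v) for v in FI) <= math.sqrt(p) * math.log(3 * p)
```

The sup norm of FT(K) is where the deep input enters (it is where the complexity bound 10 c^2 is used); the rest is the elementary geometric-series estimate on the interval.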