Okay, so I'll start my series of lectures. This is about so-called trace functions over a finite field. It will cover a lot of ground, but most of it is joint work with, especially, Étienne Fouvry and Philippe Michel. There are pieces which are joint work also with Satadal Ganguly and Guillaume Ricotta, and ongoing joint work with Paul Nelson, Florent Jouve and Will Sawin. And some of it is in the Polymath8 project; so, a whole bunch of people. What I'm going to try to describe is, to some extent, a piece of applied mathematics. The mathematics we apply is the theory developed, essentially around here, between 50 and 30 years ago, by Grothendieck and especially Deligne, Katz, Laumon and a few others, culminating in the Riemann hypothesis over finite fields due to Deligne, and its applications. What we are interested in is the applications to analytic number theory, or to number theory more generally, but most of the applications that I'm going to discuss are really analytic number theory. And what I'm trying to do during these lectures is to present this theory, hopefully for the largest possible audience, so that you can also apply it to your own problems in an efficient way. So we're going to look especially at the Riemann hypothesis over finite fields, and at first we'll see it as a black box. Then, fairly naturally, we'll see that to apply this black box efficiently one needs to know a little bit of what's inside. And over the course of the four lectures, the goal will be to introduce more and more of what's inside the black box, so that the applications become more and more refined.
And in the end, it turns out that one reaches, quickly or not so quickly, points where you cannot just take, let's say, especially the works of Katz and apply them immediately: we reach open questions, very interesting open questions even from the point of view of algebraic geometry, where hopefully the insights of analytic number theory are useful. This will come towards the end, but today I'm going to begin by presenting a version of the Riemann hypothesis over finite fields in an extremely general, but hopefully very applicable, black-box form, and we'll learn a bit of what's inside the black box in order to be able to apply it from this point of view. As a first reference for what I'm describing, there is the trace function survey from Pisa, the survey that Philippe, Fouvry and I wrote, which is available on my web page, for instance. Okay, so the main objects will be these trace functions. We'll be interested in a prime number p. I say prime, over finite fields: really, what I'm going to describe certainly applies to any finite field, but because our applications are mostly over Q, the finite fields will be, for the most part, just prime fields. So Fp is Z/pZ, the prime field. And we are looking at certain functions: trace functions will be certain complex-valued functions defined on Fp which have a very algebraic flavor. So: trace functions, algebraic structure. The actual precise definition is quite involved. I think that by the end I should be able to give a full definition, but I want to try to show that it's possible to begin a little bit like when you study automorphic forms. At the beginning, the first time I saw the Petersson formula, at least myself, well, there was a Bessel function; there were Bessel functions. But I just started: I knew that in such and such a place I could find some bounds, and I found some identities in Gradshteyn-Ryzhik.
And you can build a lot just by knowing that. At some point you want to know a bit better what a Bessel function is, but certainly you don't need to start by studying differential equations abstractly in order to study automorphic forms using the Petersson formula. So I'm going to treat these special functions a little bit this way at the beginning: I'll give examples, and at some point we'll see that we need to know a bit more about where these functions come from, otherwise we're kind of stuck when applying the Riemann hypothesis. And then I'll open the box further and further; towards the end, hopefully, there'll be a full, strict, rigorous definition of trace functions. Okay, so what are the examples? The basic examples, the first examples we'll see, are additive and multiplicative characters. So first just additive characters, e_p(n), where e_p(n) is exp(2iπn/p), and χ(n), where χ is a multiplicative character of Fp. These are the more elementary ones that come in, but then in number theory you very quickly reach things where you replace the argument by a polynomial or a rational function. In both cases, f is f1/f2, where f1 and f2 are, let's say, polynomials with integer coefficients, f2 is non-zero and they are coprime. And f2 is non-zero mod p, so at least for most n we can compute f2(n) and its inverse, and so compute f(n) modulo p, and so on. These are the simplest examples, and then one very quickly learns that it's useful to be able to mix them, taking a multiplicative character times an exponential. And these objects, in analytic number theory, don't usually arise just like that, where we define the function and then want to say something about it. They come up in a natural way, and what one is interested in, very often, maybe most often, is not so much one value but average values, so sums of these things, meaning exponential sums. So these come up in applications usually in the form of sums.
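As a quick numerical illustration (not part of the lecture itself), these characters and their complete sums are easy to experiment with for a small prime. The following Python sketch checks two classical facts that the lecture relies on: a complete sum of a non-trivial additive character vanishes, and the quadratic Gauss sums have modulus exactly √p.

```python
import cmath
from math import pi, sqrt, isclose

def e_p(n, p):
    """Additive character e_p(n) = exp(2*pi*i*n/p)."""
    return cmath.exp(2j * pi * (n % p) / p)

def legendre(n, p):
    """Legendre symbol (n/p): the quadratic multiplicative character of F_p."""
    t = pow(n % p, (p - 1) // 2, p)
    return 0 if t == 0 else (1 if t == 1 else -1)

p = 103  # any odd prime will do

# A complete sum of a non-trivial additive character vanishes:
for a in (1, 2, 5):
    assert abs(sum(e_p(a * n, p) for n in range(p))) < 1e-8

# The quadratic Gauss sum sum_n e_p(n^2) has modulus exactly sqrt(p),
# and so does the Gauss sum attached to the quadratic character:
S1 = sum(e_p(n * n, p) for n in range(p))
S2 = sum(legendre(n, p) * e_p(n, p) for n in range(p))
assert isclose(abs(S1), sqrt(p), rel_tol=1e-9)
assert isclose(abs(S2), sqrt(p), rel_tol=1e-9)
```

These are exactly the sums S1 and S2 of the next paragraph, in the special case where χ is the Legendre symbol.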
The first one that one encounters, following Gauss, is something like the sum of e_p(n²) as n ranges over Fp: this is a quadratic Gauss sum. Quite quickly one also meets the Gauss sums attached to characters, the sum of χ(n)e_p(n). As long as one is interested just in upper bounds for these, you're fine. It might be that you want something more refined: for quadratic Gauss sums, the most important application doesn't just want the size, it wants the sign, the argument, of the Gauss sum. But in many applications it's really the upper bounds, and as long as you just have these two and you only want upper bounds, you're in very good shape. These are among the few cases where the modulus of the sum is known exactly; here everything is nice, meaning that if I call the first one S1 and the second one S2, then S1 and S2 have modulus √p (for S2, this is when χ is non-trivial). And then you might want to tweak these things: sometimes extra parameters come in, say a in Fp in the argument, and this would still be true for a non-zero. But one also quickly realizes that these are essentially almost the only ones where you can actually compute the modulus exactly. I guess maybe an even more trivial case is just the full complete sum of a character over the group, which is zero when the character is non-trivial. Almost any other sum built from the special functions I already wrote down has no simple formula, either for the sum or for its modulus. I guess maybe the other nice exception is Jacobi sums; even Salié sums do have some formula, but it's not actually such a nice simple formula. And here the basic case is the Kloosterman sum: the classical Kloosterman sum in one variable, which I will denote Kl2 (I think this was the notation implicitly used by Tao also), and which for us it is convenient to normalize by √p immediately.
So this is (1/√p) times a sum over the invertible elements x mod p of e_p(nx + x̄), where x̄ is the inverse of x computed mod p. Kloosterman sums were introduced somewhat implicitly by Poincaré, when computing the Fourier expansion of Poincaré series, and explicitly, and studied already quite deeply, by Kloosterman, in order to study representations of integers by quaternary integral quadratic forms. They have many, many, many applications in analytic number theory, and there is no explicit simple formula to tell you what this sum is, or what its modulus is; you need estimates. So here one can only hope for estimates, and this is the generic situation. Now, one of the important features of this example, which already throws you way beyond the theory of André Weil (the best estimates for the individual Kloosterman sums are due to André Weil): if you start thinking of this as a function of n, an important feature is that these sums themselves, as functions of n, are kind of nice. And so it's important to define Kl2 also at zero, which I will do by sending it to minus one over the square root of p, and that's the right definition; actually, that's the definition that comes out naturally, I don't touch anything. So Kl2 is also such a special function, modulo a prime. And more generally, many families of exponential sums parameterized by some element of Fp will themselves be nice functions in this sense, which immediately gives us many more applications of the Riemann hypothesis than just playing around with the first examples. From the point of view of more classical things, you should think of it like this: the Kl2's are analogues of Bessel functions, in a fairly precise sense, and saying that we want to see these as nice functions is like saying that Bessel functions are just as nice as the exponential function, essentially, or as power functions.
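The normalized Kloosterman sums just defined can also be computed directly. The following Python sketch (again an illustration, not from the lecture) implements Kl2 with the convention Kl2(0; p) = -1/√p, and checks numerically Weil's bound, which in this normalization reads |Kl2(n; p)| ≤ 2.

```python
import cmath
from math import pi, sqrt

def e_p(n, p):
    """Additive character e_p(n) = exp(2*pi*i*n/p)."""
    return cmath.exp(2j * pi * (n % p) / p)

def kl2(n, p):
    """Normalized Kloosterman sum Kl_2(n; p) = p^{-1/2} * sum over x in F_p^*
    of e_p(n*x + x^{-1}), extended by Kl_2(0; p) = -1/sqrt(p) as in the lecture."""
    if n % p == 0:
        return -1 / sqrt(p)
    # pow(x, p-2, p) is x^{-1} mod p, by Fermat's little theorem
    s = sum(e_p(n * x + pow(x, p - 2, p), p) for x in range(1, p))
    return s / sqrt(p)

p = 101
values = [kl2(n, p) for n in range(p)]

# Weil's bound in normalized form: |Kl_2(n; p)| <= 2 for every n.
assert all(abs(v) <= 2 + 1e-9 for v in values)

# The substitution x -> -x shows Kl_2(n; p) is real-valued for n != 0:
assert all(abs(v.imag) < 1e-9 for v in values[1:])
```

Here the prime p = 101 is just a small test value; the same code works for any prime small enough that the double loop is fast.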
So in classical analysis the first interesting functions are exponential functions and power functions, and when you start going to Bessel functions, these are just as nice once you start learning about them, and so on; and this step of going from characters to Kloosterman sums, or such exponential sums, is also of this kind. The crucial feature that the Riemann hypothesis tells you is that if you look at these trace functions (whatever they are; we already have a few examples, and I will give some more in a second), then, provided you restrict to a certain subset defined by some kind of irreducibility and normalization condition, they behave like quasi-orthogonal vectors in a finite-dimensional Hilbert space, and this quasi-orthogonality gives you square-root cancellation in a generality which is way, way beyond what the theory of André Weil can give you for just one-variable character sums. Okay, so our RH will be a statement; now I have to say a little bit about where the functions come from, but then this almost-definition, not quite a definition, will be supplemented by again a series of examples, and then the statement of the Riemann hypothesis.
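The quasi-orthogonality phenomenon can be previewed numerically before any of the formalism: correlating Kl2 against the additive characters n → e_p(an) (which come from different underlying objects) exhibits square-root cancellation. The Python sketch below is illustrative only; it also shows that the choice Kl2(0) = -1/√p is exactly what makes the complete sum of Kl2 behave well.

```python
import cmath
from math import pi, sqrt

def e_p(n, p):
    return cmath.exp(2j * pi * (n % p) / p)

def kl2(n, p):
    """Normalized Kloosterman sum, with Kl_2(0; p) = -1/sqrt(p)."""
    if n % p == 0:
        return -1 / sqrt(p)
    s = sum(e_p(n * x + pow(x, p - 2, p), p) for x in range(1, p))
    return s / sqrt(p)

p = 101
K = [kl2(n, p) for n in range(p)]

# Correlate Kl_2 against the additive characters n -> e_p(a*n).
# Quasi-orthogonality predicts an inner product of size O(sqrt(p)),
# far below the trivial bound of about 2p.
corr = [abs(sum(K[n] * e_p(a * n, p).conjugate() for n in range(p)))
        for a in range(1, p)]
assert max(corr) <= 2 * sqrt(p)

# With the value Kl_2(0) = -1/sqrt(p), the full sum of Kl_2 vanishes (a = 0):
assert abs(sum(K)) < 1e-6
```

The constant 2 in the assertion is generous; the point is only the order of magnitude √p rather than p.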
So, roughly speaking, what happens is that you have certain algebraic-geometric data, which can be defined in different ways. It could be Galois representations of the rational function field in one variable, which is not quite the same; you could also think of these as automorphic forms for this field, but automorphic forms of arbitrary rank, not just classical GL2 things; or you can take the geometric point of view, something called an ℓ-adic sheaf of a certain type. And it doesn't really matter whether you understand any of these words for most of these lectures; we'll see that there are questions at some point that you can only answer if you really have an idea of what one of these means. The most convenient, I find, is the ℓ-adic sheaf point of view, but the easiest to define would be the Galois representations, so that's probably what will emerge as the definition as time passes. Now, these are algebraic-geometric data, and you can think of them really as analogues of automorphic forms: an automorphic form over a number field is a relatively complicated object, but with many nice properties, and our Galois representation is similarly a nice, highly algebraic object with many invariants. And we will send these to the space of functions, let me call it C(Fp), the space of functions k defined on Fp with values in C. So we have some highly complicated objects (with proper normalization, these typically form countably infinite sets), and we send them to this finite-dimensional vector space, of dimension p over the complex numbers. So the trace function is an invariant which, again, if you think of analogies with automorphic forms, or even with Dirichlet characters to a large modulus q, you should think of maybe as being the data of the first few values of the character, or the first few coefficients of a modular form. So it's not full information about the modular form; if you think of the classical case of GL2 forms, you just take, let's
say, the first q^10 Hecke eigenvalues, or something like this, but this is enough to get a good handle on the algebraic object as well. Now there's one very important fact that is analytically completely crucial. From the point of view that we have some complicated algebraic object and we just take this invariant, the trace function: the trace function a priori contains only some of the information, and the map is not injective. In particular, there are literally infinitely many objects whose trace function is identically zero modulo p. So the trace function by itself, as a function defined modulo p, doesn't tell you that much, because it comes from infinitely many sheaves or representations, and some of these are highly complicated and some are a little bit less complicated. So we need a way of measuring the complexity of a trace function, so that we can work with objects of small complexity, as a way of saying that we have control on the function we get and that the function gives information. It's like saying that if you just look at the value at 2 of a Dirichlet character, it's not telling you that much; but if you know that the conductor of the character is extremely small, then it's giving you a little piece of information. So there is an invariant, which is quite a close analogue of the analytic conductor of L-functions, for each of these objects: sheaf, representation, automorphic form. And we will usually work as p varies: for every prime p you would have a trace function k_p, and as p varies, they really vary with p, but they come from sheaves, or whatever, which have complexity bounded uniformly in terms of p, from objects with uniformly bounded complexity. Okay. And so, by abuse of notation, we will write c(k) for, say, the smallest complexity of an object which gives rise to the trace function k. So let me say that I will fix... these objects, you don't necessarily need to
know what they are at the moment; let me call them representations, let me call them ρ, and t_ρ will be the trace function; assume that trace function is k, and I will use c(ρ) for the complexity of the representation ρ. Okay. And as you will see, just as you can say a lot about Bessel functions, or use them quite efficiently, just by knowing that they occur in certain formulas and satisfy certain bounds at infinity and certain integral identities, one can say quite a lot about trace functions, one can apply them quite successfully, just knowing many examples, bounds on the complexity, and the Riemann hypothesis, which will come in a second. So first, more examples, together with examples of the complexities. And whatever I'm going to say later, if it seems abstract, you can always go back to these examples; usually, even with just these examples and a few basic operations, you already have lots of interesting features and phenomena, and what you need for many applications. So the abstract theory is useful, but it is not necessary in order to already say quite a lot. Okay, so first there are the examples I already gave; I have to tell you something about the complexity of these objects. We take a rational function f = f1/f2 as before, so now I take the polynomials in Fp[X], coprime, with f2 non-zero. And in this theory, because we are looking at objects which are a little bit like primitive Dirichlet characters, in contrast with just all characters, the primitivity means we have to be very careful with the values everywhere, at what would be the prime numbers, or the integers, in the classical case. So we cannot just leave the points of indeterminacy undefined: where f2 has a zero, so where f has a pole, we have to say precisely what the right value of the function is; this is important for some statements to actually work. So, precisely, we define k(n) to be e_p(f(n)) when f2(n) is non-zero mod p; in this case f(n) makes sense, and if f2(n) is zero then f1(n) is non-zero, because the two
polynomials are coprime. So this makes sense, and we set k(n) = 0 otherwise: k(n) = e_p(f(n)) if f2(n) is non-zero, and 0 if not. For multiplicative characters we also have to say what happens when we evaluate the multiplicative character at zero, and there are two further cases. If χ is not the trivial character and f1(n) is zero, we take zero; the classical extension of a Dirichlet character at zero is χ(0) = 0 if χ is non-trivial, but if χ is trivial it should be one, so it's one if χ = 1 and f1(n) or f2(n) is zero. To a certain extent this is technical: in analytic number theory, if you were to take a different convention at these points, you would just switch the value there and do trivial estimates for the switch, but for the algebraic statements it's crucial to have the right conventions and normalizations. So that's the first example; we have to say what the complexity is. In both cases the complexity is bounded (I didn't define it, but I can tell you this) by the maximum, or let's say the sum, of the degree of f1 and the degree of f2, with an absolute implied constant; one could be more precise with the correct definition of the complexity, but for basic applications this is all you need to know. How is this useful? It's useful as follows: you take f to be a ratio of polynomials with integer coefficients which are fixed, and then you take p to be a variable prime going to infinity; if f1 and f2 are fixed, the conductor of whatever you get as a trace function modulo p will be bounded independently of p. So that's the first example. The second examples are hyper-Kloosterman sums. Let's pick an integer r at least 2, and define Kl_r(n; p) to be 1/p^{(r-1)/2} times the sum over x_1, ..., x_r in Fp, invertible, such that the product x_1 ⋯ x_r is equal to n, of e_p(x_1 + ⋯ + x_r). That's when n is non-zero, and when n is equal to 0, again one has to be
careful. If I remember right, it's (-1)^{r-1}/√p; Philippe, do you remember? I think it's that. Okay, anyway, the important thing is that there is a definite value when n is equal to 0, and there is a way of computing these values. So it doesn't agree with... what? For r = 2 I had written 1/√p, yes; so is it (r-1)/2 in the exponent? I always forget this one. Okay, I'll write it this way for the moment, so that it coincides with the case r = 2, and I'll correct it next lecture, because this is something I always forget. Okay, so for r = 3 these are the three-variable, or two-variable, Kloosterman sums that Tao spoke about when explaining how one handles the so-called Type III sums in Zhang's work; the ternary divisor function in arithmetic progressions can be written this way. If n is 0, I mean if n is a multiple of p, that means n equals 0, and then you have to use the second formula anyway; and if n is non-zero, then the x_i have to be invertible. Yes, everything is over Fp; all my functions will be defined just over Fp, so you could actually write Fp star. But I always forget how to... if you write it correctly, then the n = 0 case is given by the same formula; I'm not sure the way I wrote it down will actually give the right formula for n = 0. The exponent needs to be r - 1, (-1)^{r-1}? Well, yes and no, because for r = 2 I switched the sign: the right definition has a factor of -1/√p in front of the sum for r = 2, which I didn't include. Anyway, it's not an important point, I'll clarify this next time, but for r = 2... yes, you're right, it's probably (-1)^{r-1}. Okay. So, a deep fact is that these are trace functions, and this is due to Deligne. And I can show you that this is indeed a deep fact by the following; but first, again, another fact about the complexity. So I didn't tell you
about the complexity too much, but there are two properties of the complexity which are important. Maybe I should say this after the next example, but I might forget, so let me say it now. It's a bit of a semi-digression, but it comes at a good point, because the strength of these remarks will then be clear. So, two facts about the complexity are important. First, it's a little bit like a height, in the sense that when you look at the set of trace functions, with a suitable normalization which I will discuss later, with complexity bounded by a constant, this is a finite set. The whole set of trace functions is infinite, just like the set of Dirichlet characters is infinite, but when you bound the conductor of a Dirichlet character you get a finite set, and this is a similar feature. This is a rough way of saying it, because we need a little more information before we can state it precisely, but roughly speaking, the set of k with complexity at most X is finite, up to trivial modifications of the trace function. This is a relatively rough statement, but it already gives an idea of the content. And, as you maybe know, if you take the analytic conductor as defined by Iwaniec and Sarnak, the same statement is true for automorphic forms over a number field. The second remark explains why all this is really deep; it is a basic fact about the complexity. One already has a first way of seeing that the complexity, the way it's defined, is interesting: it has the property that the L-infinity norm of k is bounded by the complexity. Whatever your measure of how complicated a function is, one way a function can be complicated is by taking large or very oscillating values, and here we're saying that the L-infinity norm is bounded by the complexity. And here is what I didn't say, which I should have said: the complexity of Kl_r mod p is bounded by a constant depending only on r. So let's apply this fact, that the Kl_r are functions with
bounded complexity, together with the fact that the L-infinity norm is bounded by the complexity, whatever it is. An immediate corollary is that there exist constants A_r ≥ 0 such that the hyper-Kloosterman sum satisfies |Kl_r(n; p)| ≤ A_r for all n mod p and all primes p. Now, because for r = 2 I divided by √p, this is a weak form of the Weil bound for Kloosterman sums: if A_2 were equal to 2, this would be exactly the Weil bound. And it comes directly from knowing that this is a trace function with these properties; so it's not surprising to learn that proving that these are trace functions, in the sense I'm describing, already uses the Riemann hypothesis over finite fields in a non-trivial way. How complicated is the zero function? Bounded complexity; it depends how you define it, but I think with our precise definition it would be 1. Oh no, okay... you think it's possible to have a trace function which is identically 0? Yes, with large complexity, but the smallest complexity would be 1; and actually it's an interesting question to find the smallest complexity of a non-zero object with trace function 0, and this I actually don't know. The upper bound we know is p: we know objects, representations, with complexity roughly p whose trace function is 0; and what the actual threshold is, I don't know. So already, you see, just knowing that an object like the hyper-Kloosterman sums is a trace function, without knowing anything else: this is the type of information that, maybe in 10 years, when there is an analogue of Gradshteyn-Ryzhik for trace functions, you would find in the first two pages of this hypothetical motivic Gradshteyn-Ryzhik. And you wouldn't need to know anything about the Weil bound or hyper-Kloosterman sum bounds; you would just say: this is a trace function, so it's bounded by the complexity, the complexity is bounded, and then
you apply that. Okay, so this is somewhat similar to the fact that Bessel functions decay like 1/√x at infinity, in a very rough analogy; it's a much deeper fact, but okay. So now, after this small digression, let me go back to more examples, to show you the variety of things that can arise. Another fairly general class of trace functions are those obtained by counting solutions of certain polynomial equations. A first example: you fix a non-constant polynomial F mod p (it could be a rational function also, but I'll do it for a polynomial), and for a given n you count how many x there are in Fp with F(x) = n. So k(n) is integer-valued; it's bounded by the degree of F, because F is not constant; it could be 0, even fairly frequently, but it is something. This is a trace function, and its complexity is bounded in terms of the degree of F only. So k is a trace function with c(k), again up to an absolute constant, bounded by the degree of F; and therefore, if your polynomial is, let's say, X², a polynomial with integer coefficients, you get something with complexity bounded independently of p. In this case, of course, it's not hard to compute the number of solutions of x² = n: it's 1 + (n/p), one plus the Legendre symbol. And the last example, which is again just a very special case of something more general, is when you try to count not the solutions of this type of one-variable equation, but the number of points on a family of curves, and the simplest really interesting example is probably the Legendre family of elliptic curves. You take k(n) to be p minus the number of solutions (x, y) in Fp² of y² = x(x - 1)(x - n); in this case one has to normalize in this way, and this is done so that the formula is correct also for n equal to 1 or 0, so it works for every n. So this is a trace function; I have to divide by √p with my definition: it's the a_p of this elliptic curve
divided by √p. So it's a trace function with complexity bounded as p varies. And you can basically replace this family of elliptic curves by almost any family of algebraic varieties depending on one parameter that you can think of, up to technical issues. What exactly are those? Well, first there are the bad points, where you might have to extract the right value; and also, if you take a family of surfaces instead, this would only be the right thing if you take just the middle-dimensional cohomology; but for families of curves, except for doing the right thing at the bad points, this would almost always be a trace function. Okay, so this already gives you some insight, and one of the things you can see is that, once you have this, already just with these examples, any statement which is valid for all trace functions (and there will be such statements) will immediately give you many, many different types of statements, which very often, in analytic number theory, would otherwise be treated one by one, with maybe different ad hoc methods. And we'll see examples of some of the things we did with Fouvry and Michel, where we have statements valid for every trace function; that looks a bit abstract at first, but once you have a list of examples, you see that we actually get lots and lots of statements. Okay, now I can state the Riemann hypothesis. I guess I need a bit more room for such a big theorem; I think I'll keep the examples visible, at least their definitions, if I do it this way. You will see that I cannot quite state the Riemann hypothesis using only the words I've already introduced: I will need to introduce at least one extra property of trace functions, which I will then interpret in a concrete way, to see that the statement at least implies, almost immediately, the qualitative version of Weil's estimates for exponential sums in one variable. So I keep these definitions, because I want
to be able to refer to them. The extra condition I'm going to introduce, a condition on trace functions, will always be satisfied in Example 1, the classical case of exponential sums in one variable. So: this theorem is essentially Deligne, but it also uses work of Katz and of other people; those are the main inputs, and the really essential work is that of Deligne. Okay: p is a prime, and we take k1 and k2 trace functions modulo p, with complexities at most c1 and c2 respectively. I could say equal to c1, equal to c2, but I want to emphasize that usually you don't really know what the complexity is; it's not such a meaningful number. What is meaningful is its being big or small, and you should think of, let's say, c1 = 10,000 and c2 = 10,000; there are already lots and lots of such things, as the examples above show. Okay, now we need an assumption on k1 and k2. The statement will be a quasi-orthogonality statement, but for that we need some assumption: we assume k1 and k2 are what is called geometrically irreducible. I will have to explain what this means, and I will do it afterwards; you can already take as information that in Example 1 this is always true. So in Example 1, if you have k1 and k2 both of this type (one additive and one multiplicative, or both multiplicative, or both additive, whichever way), then it is always automatic. Then there are two cases. They are associated to representations; let me call these underlying objects ρ1 and ρ2. First: if ρ1 and ρ2 are not geometrically isomorphic (this is another piece of information I will have to explain, and I will do it afterwards; in Example 1, the case of one-variable character sums, it will again be quite clear what this means), then we have square-root cancellation when we correlate k1 with k2: we sum over Fp of k1 times k2-bar, so it's an inner product, and we get square-root cancellation, with implied constant depending only on the complexities. And one
can compute that the implied constant can be taken to be 3·c1²·c2², with the definition of the complexity we've taken. So this is uniform square-root cancellation, as long as you have the assumption and as long as c1 and c2 are bounded; in fact it's uniform in such a strong way that you don't even need c1 and c2 to be bounded in terms of p, and you still get extremely good bounds. So that's the first statement; let me put the second part here, so that I still keep the examples on the blackboard. Is this condition independent of the choice of ρ1 and ρ2, or are they unique? Right; the condition is a condition about ρ1 and ρ2. But what this will reveal, once we have the second part of the statement, is that there can only be one ρ1 with small complexity and a given trace function: there's a gap between the complexities. That's a little bit what I was actually asking, okay. So the second statement is what happens in the other case: if ρ1 is geometrically isomorphic to ρ2. The first thing is, we don't know yet what that means, but there is a consequence which we might be able to falsify, meaning that if we falsify this consequence, then of course we're in the first case. So the first piece of information is that k1 is proportional to k2: k1 = α·k2 for some α of modulus one. It's really proportional everywhere, including the possible bad points; that has to do with the fact that we're implicitly using the correct definitions of the values at every point. So it's not orthogonal, but quasi-normed, meaning that the sum over x of k1(x) times the conjugate of k2(x) is, well, of course, α times the sum of |k1(x)|²; so we cannot expect cancellation, and in fact what we have is more like an asymptotic formula: this is α times p, so in particular, if k1 = k2, this is exactly p, up to an error which is again square-root cancellation, with the same bound as in the first case. So, I still have about ten minutes. Once we understand
a bit better what these conditions mean, and have tools to handle them — to check them and so on — then this really is an extremely general black-box version of the most general form of the Riemann hypothesis over finite fields. It's not quite equivalent, because we're still working only with prime fields and with one-variable sums, but it's basically the whole thing, at least from the analytic number theory point of view. I mean, there are other consequences of Deligne's work which are more algebro-geometric, and this statement is not really trying to capture those.

Question: Is the product of two trace functions again a trace function?

Yes — well, maybe not quite, because of the bad points. It's like the product of two primitive Dirichlet characters, which might not be primitive: you might have to change the values at the bad points to get something primitive. But there is a unique underlying primitive object that arises from the product.

Question (Emmanuel): Is twice a character also a trace function?

Yes, but then it's not geometrically irreducible: it would be chi plus chi, two different components. If you multiply chi by a complex number of modulus one, you can say that it's still a trace function which is geometrically irreducible; but if you multiply by two — there are different ways of doing it, but the allowed way is to take chi direct sum chi and then the trace, and that is not irreducible anymore, because otherwise it wouldn't work.

Question: How about K1 equal to K2 — if K1 = K2, must rho1 and rho2 be geometrically isomorphic?

If they are geometrically irreducible, then yes, this is true. Otherwise it can fail, but only in situations where the complexity is very large: it can happen that rho1 and rho2 are not geometrically isomorphic while K1 is equal to K2, but then c1 or c2 will be extremely large, so that the bound becomes trivial. So you see, this
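The remark about "multiplying a character by two" can be written out in one line; this is a sketch in standard representation-theoretic notation, not taken verbatim from the lecture:

```latex
% "2*chi" must be realized as the trace of a direct sum:
\[
  (\chi \oplus \chi)(g) =
  \begin{pmatrix} \chi(g) & 0 \\ 0 & \chi(g) \end{pmatrix},
  \qquad
  \operatorname{tr}\bigl((\chi \oplus \chi)(g)\bigr) = 2\chi(g).
\]
% Every line in this 2-dimensional space is a proper nonzero invariant
% subspace, so chi + chi is not geometrically irreducible, whereas a
% unimodular multiple alpha*chi acts on a 1-dimensional space and
% remains irreducible.
```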
bound is very powerful when c1 and c2 are small enough that, essentially, their product is less than sqrt(p). Because what is the trivial bound? Maybe I should have said that. The trivial bound for this sum of K1(x) times the conjugate of K2(x) is p times the product of the L-infinity norms, and we know this is at most p times c1 c2. So you see that if the complexity is too large, the statement is meaningless — it gives no information — so there is no contradiction.

So what does it mean to be geometrically irreducible? Geometric irreducibility and geometric isomorphism.

Question: In the second statement, is c1 equal to c2?

Yes. The complexity is an invariant: these objects rho1 and rho2 are really meaningful up to isomorphism class, and in particular c1 equals c2 in that case.

So what does this condition mean? Here we have to enter a little into where these functions come from. If K is the trace function of a certain object, this means there exists a homomorphism rho from a certain group pi_1 to the automorphisms of a certain finite-dimensional vector space — so a finite-dimensional representation. You can think of V as a complex vector space; algebraically this is not the right thing to do, it should be an l-adic vector space, but analytically it's fine to think of it this way. It is such that K(n) is the trace of rho of certain elements — Frobenius elements depending on n — acting on a certain subspace V_n, where V_n is the subspace of invariants under inertia, which I don't want to go into right now; I'm going to ignore this second piece of information. So there is a certain object rho, which is really a map from a certain fixed group — depending only on p — into GL(V) for a finite-dimensional vector space V. Now, if you have a representation, you can ask that the representation be irreducible, and this is almost what we want. Precisely: there exists a normal subgroup pi_1^geom of pi_1, and to say that rho is geometrically irreducible
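Schematically, the structure just described can be summarized as follows (the notation Fr_n for the Frobenius element at n and I_n for the inertia group is mine, not fixed in the lecture):

```latex
\[
  \rho \colon \pi_1 \longrightarrow \mathrm{GL}(V),
  \qquad
  K(n) = \operatorname{tr}\bigl(\rho(\mathrm{Fr}_n) \mid V_n\bigr),
  \qquad
  V_n = V^{I_n},
\]
% where V is a finite-dimensional (properly speaking, l-adic) vector
% space and V_n is the subspace of invariants under inertia at n.
```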
means, first, that V is nonzero, and second, that there does not exist W inside V, not equal to V and nonzero, stable under the action of the subgroup. Okay, I'm still trying to keep all my examples on the board, but it's becoming difficult.

Question: Does such a normal subgroup exist?

Yes, there exists such a normal subgroup. Okay, more precisely, I can tell you what pi_1 is, so that there is no ambiguity. So pi_1 is just the Galois group of a separable closure of fp(t) over fp(t) — a group that depends only on p, up to isomorphism — and the normal subgroup is the Galois group of the same big field, an algebraic closure of this function field, over the larger field of rational functions with coefficients in an algebraic closure of fp; so it is the smaller group. In some sense these are not such complicated objects; they are quite mysterious from the group-theoretic point of view, but often one doesn't need to know what they really are. Okay, so we have this condition. Suppose in particular that the dimension of V is 1. This happens in case 1, the first example — meaning character values of a rational function, additive or multiplicative; one has to know that this is the case, but it is. Then in this definition you don't need to know anything: V is of dimension 1, you're looking for a proper nonzero subspace, so rho is automatically geometrically irreducible. So that's an example.

Okay, next: geometrically isomorphic. So rho1 into GL(V1) and rho2 into GL(V2) are geometrically isomorphic if and only if there exists a linear isomorphism u from V1 to V2 — so in particular they have the same dimension — such that, okay, I have to write this diagram correctly: for every element g in the group — sorry, in the normal subgroup, which I should call pi_1^geom — this diagram commutes; it just means that u intertwines the two actions for every g in the subgroup. To begin with, what you should think of at the moment is this: geometrically isomorphic means the trace functions are essentially the same. That's what you should think of, and we'll see more refined versions of this in a second.
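The two definitions just given can be displayed compactly (this is my transcription of the blackboard diagram, in standard notation):

```latex
\[
  \text{geometrically irreducible:}\quad
  V \neq 0 \ \text{ and } \
  \nexists\, W \ \text{with}\ 0 \neq W \subsetneq V
  \ \text{and}\ \rho(\gamma)W \subseteq W
  \ \text{for all}\ \gamma \in \pi_1^{\mathrm{geom}};
\]
\[
  \text{geometrically isomorphic:}\quad
  \exists\, u \colon V_1 \xrightarrow{\ \sim\ } V_2
  \ \text{with}\
  u \circ \rho_1(\gamma) = \rho_2(\gamma) \circ u
  \ \text{for all}\ \gamma \in \pi_1^{\mathrm{geom}}.
\]
```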
Okay, now it's becoming harder and harder to find board space. I'm starting to run a bit over time, but since I'm an organizer, I'm allowed. That's not the right board. So I want to show how this is a qualitative form of Weil's estimates: just by knowing this, you can already apply it well enough to recover Weil's estimates for one-variable character sums. I guess I have to erase that. For example — and I'll just do the Kloosterman sum case for simplicity, that's the most interesting one for many applications — so r equals 2 in that case. Take a nonzero element which is fixed — let's call it a, to avoid confusion — and let K1(x) = e_p(ax + x^{-1}) for x nonzero modulo p, with the usual convention that K1(0) = 0. For K2 you take the constant function 1, which you can see as e_p(0) if you want, so it is a trace function. By example 1 and by what I just said, the complexity of K1 is bounded independently of p, and the complexity of K2 also, and they are both geometrically irreducible, because they have rank 1, as I said. And therefore — now I have to erase that; I'll have to restart these examples tomorrow — we deduce from RH, in the form I stated, that the Kloosterman sum with parameter a modulo p is bounded in modulus by sqrt(p), with an absolute implied constant, provided these two things are not geometrically isomorphic — that is, provided the underlying rho1 and rho2 are not geometrically isomorphic. So can it be that they are geometrically isomorphic? Suppose they are. Then, in particular by case 2, K1 would be proportional to K2; K2 is constant, so K1 would be constant. But this function cannot be constant: since K1(0) = 0, the constant would have to be 0, and then it doesn't work, because K1 is nonzero at any nonzero element. So as long as you have a field with at least two elements you are fine, and K1 is not constant because p is at least 2. So we
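The Kloosterman sum just defined is easy to compute numerically; here is a minimal sketch (the choice of prime and parameters is mine) checking it against Weil's classical bound |Kl(a; p)| <= 2*sqrt(p), which is the quantitative form of the qualitative O(sqrt(p)) statement in the lecture:

```python
import cmath
import math

def kloosterman(a, p):
    """Kl(a; p) = sum over x != 0 mod p of e_p(a*x + 1/x)."""
    total = 0.0 + 0.0j
    for x in range(1, p):
        x_inv = pow(x, -1, p)  # modular inverse (Python >= 3.8)
        total += cmath.exp(2j * cmath.pi * ((a * x + x_inv) % p) / p)
    return total

p = 101
for a in (1, 2, 57):
    S = kloosterman(a, p)
    # Weil's bound: |Kl(a; p)| <= 2*sqrt(p) for a nonzero mod p.
    assert abs(S) <= 2 * math.sqrt(p) + 1e-9
    print(a, abs(S), 2 * math.sqrt(p))
```

For a = 0 the sum degenerates: as x runs over nonzero residues so does 1/x, giving Kl(0; p) = -1, with no square-root saving needed.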
get a qualitative form of Weil's estimate without needing anything at all beyond the black box. So I'll stop here, and tomorrow I'll start to try and do more complicated examples: I'll try to start doing the sum that comes up in Zhang's work, and see where we get stuck and why we need a bit more information than this version of RH.

Question: Could we take this inner product of K1 and K2, scale the x in K2 by some n, and then normalize by one of them — would that also work?

What do you mean?

Question: I'm trying to generalize the construction of Kloosterman sums here. There are sums of K1 times K2 where one of them has a scaled argument — I want the sum of K1(x) times K2(nx).

Right, so this will be no problem. We'll see that trace functions not only include these first examples, but they also have a rich formalism: in particular, if you replace x by nx, for any nonzero element n of the field, then you get another trace function with the same complexity — or you could replace x by 1/x, which works exactly the same way. But now this sum is a function of n, and if you want it as a function of n, then it will often be a trace function. There have to be conditions, but for standard, generic examples it will be a trace function, and its complexity will be bounded solely in terms of the complexity of K1 and the complexity of K2. This is highly non-trivial — it requires very recent work of Katz on multiplicative convolutions — but it exists, yes.

In that earlier example, where f is x squared and K(n) is 1 plus something, it doesn't cancel on average, because it's not geometrically irreducible, as you can see. So one of the consequences of what I said is already that, for small complexity, the two estimates I wrote are complementary: the cancellation you get by taking K2 equal to 1, and the fact that, taking K2 equal to K1, the sum of |K1(x)|^2 is essentially p. In fact, the mean square being essentially p is equivalent to geometric irreducibility, for small complexity. So one quick way — I'll say it
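The scaling formalism in the answer can be illustrated numerically. In this sketch (my own example, not from the lecture) K(x) = e_p(x + 1/x) is correlated against its multiplicative translate x -> nx; for n != 1 the summand e_p((1-n)x + (1-n^{-1})/x) is itself a Kloosterman-type sum, so Weil's bound applies, while at n = 1 there is no cancellation:

```python
import cmath
import math

p = 101

def K(x):
    """Rank-1 trace function K(x) = e_p(x + 1/x), with K(0) = 0."""
    if x % p == 0:
        return 0.0
    val = (x + pow(x, -1, p)) % p
    return cmath.exp(2j * cmath.pi * val / p)

def C(n):
    """Correlation of K against its multiplicative translate x -> n*x."""
    return sum(K(x) * K(n * x % p).conjugate() for x in range(p))

# For n != 1 mod p, Weil's bound for sums e_p(b*x + c/x) with b, c != 0
# gives |C(n)| <= 2*sqrt(p).
for n in (2, 3, 57):
    assert abs(C(n)) <= 2 * math.sqrt(p) + 1e-9
# At n = 1 the summand is |K(x)|^2, so C(1) = p - 1 exactly.
assert abs(C(1) - (p - 1)) < 1e-6
```

Viewing n -> C(n) as a single function of n is precisely where Katz's work on multiplicative convolutions enters; the code only checks the pointwise bounds.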
tomorrow in more detail — is the following: if you are given a function, you know it's a trace function, and you want to know whether it's geometrically irreducible or not, compute the mean square. If the mean square is close to p, then it is geometrically irreducible, and we'll see examples. So, finally, you can check that Kloosterman sums are geometrically irreducible this way: the mean square of the Kloosterman sums is easy to compute, and it's essentially p, by a direct computation. I'll say this in more detail tomorrow.

Question: If it's not irreducible, is there some natural way to break it up into pieces?

So we know there exists a decomposition into a sum of geometrically irreducible pieces — there's a little technical tweak, but essentially it's true. It might not be easy to find them, though. In principle you find them by taking inner products, a little bit like with an orthonormal basis of a Hilbert space — except that there are infinitely many candidate basis vectors: we have a finite-dimensional decomposition, but drawn from infinitely many possible components, and we cannot compute inner products against all of them to determine the decomposition. So if you cannot compute it, because you have no extra information, it can be very hard. Indeed, a number of the open problems we are still fighting with in various applications arise because we have something, we don't know whether it's geometrically irreducible, and we don't know exactly what the components are. It's one of the fundamental problems that can arise in applications.
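The mean-square criterion for the Kloosterman sums can be checked exactly: orthogonality of additive characters gives the identity sum over a in F_p of |Kl(a; p)|^2 = p(p-1), so for the normalized sums Kl(a; p)/sqrt(p) the mean square is exactly p - 1 — "essentially p", the numerical signature of geometric irreducibility. A minimal sketch (prime chosen small for speed; my own illustration):

```python
import cmath
import math

p = 31

def kloosterman(a, p):
    """Kl(a; p) = sum over x != 0 mod p of e_p(a*x + 1/x)."""
    return sum(
        cmath.exp(2j * cmath.pi * ((a * x + pow(x, -1, p)) % p) / p)
        for x in range(1, p)
    )

# Sum of |Kl(a; p)/sqrt(p)|^2 over all a in F_p; the exact value is p - 1.
total = sum(abs(kloosterman(a, p)) ** 2 for a in range(p)) / p
assert abs(total - (p - 1)) < 1e-6
print(total)  # essentially p, as the criterion requires
```

The identity follows by expanding the square and summing the geometric series over a first, so the check is exact up to floating-point error.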