Welcome, all of you, to the ICTP colloquium, which will be given by Nicholas Katz. I've known him since 1990. He works in algebraic geometry and number theory, particularly on p-adic methods. He received his PhD in 1966 from Princeton University under the direction of Bernard Dwork, and has been at Princeton ever since, now as a professor. He was a Guggenheim Fellow in 1975 and 1987. He has been a member of the American Academy of Arts and Sciences since 2003, a member of the National Academy of Sciences since 2004, and received, jointly with Peter Sarnak, the Levi L. Conant Prize of the AMS in 2003. He is the author of over 100 papers, totaling about 2,500 citations in MathSciNet, and also wrote a remarkable string of seven books in the prestigious Annals of Mathematics Studies series published by Princeton University Press. He played an important role as a sounding board for Andrew Wiles when Wiles was working on his proof of Fermat's Last Theorem, and with Peter Sarnak and others contributed significantly to the study of the connection between the eigenvalue distribution of random matrices and the zeros of L-functions. This was summarized in the book they wrote jointly, Random Matrices, Frobenius Eigenvalues, and Monodromy, of 1998. And it's fair to say that this book changed, among other things, the way people talk about L-functions. If you hear an analytic number theorist mention the words GUE or orthogonal symmetries, most likely that is due to the work of Katz and Sarnak. So without further ado, let's hear "Life Over Finite Fields." Okay, it's a great honor to be here. So the title is "Life Over Finite Fields Before and After Deligne," and we'll start before. Oops, nothing happened. There. Okay, so the basic finite field we have in mind is Z mod p, p prime, denoted F_p. If the field has cardinality q, it's F_q. And, okay, so to the very early history. Fermat understands when minus one is, or is not, a square mod p.
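As an aside from the transcript: Fermat's observation, that minus one is a square mod p exactly when p is congruent to 1 mod 4, is easy to check by brute force. A minimal sketch (my own illustration, not part of the talk; the function names are mine):

```python
# Fermat's observation: -1 is a square mod p exactly when p is 1 mod 4.
# Brute-force check for all odd primes up to 200 (illustration only).

def is_square_mod(a, p):
    """Return True if a is congruent to x*x modulo p for some x."""
    return any((x * x - a) % p == 0 for x in range(p))

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, ok in enumerate(sieve) if ok]

for p in primes_up_to(200):
    if p == 2:
        continue
    assert is_square_mod(-1, p) == (p % 4 == 1)
```

For example, -1 is a square mod 13 (since 5² = 25 ≡ -1) but not mod 7.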
But it's almost 100 years later before there's an analysis of when some number a is a square mod p or not. And this is Euler. Then we have Legendre, who formulates quadratic reciprocity, and at various times in his life is convinced he's proven it, but it's never quite right. Gauss does prove it. What's relevant here in terms of the finite field business is that Gauss writes down the formula for how many mod p points there are on the so-called lemniscate curve, which was also studied by other people, including Abel. If you want to see what these people look like: here's Fermat, Euler. I forgot where I'm up to. Who's this? This is Legendre, I guess. Gauss and Abel. Okay, now Galois in 1830 is the first person to understand that Z mod p, the finite field F_p, has an extension field of every degree. So q could be any power of p. And at least in the English-language literature, these elements of bigger finite fields were called Galois imaginaries. You can still find that terminology even through the early 20th century, strangely enough. Okay, so that's Galois. And in 1835 we have Libri, who counts the number of mod p points on both of these equations. Libri, as his name suggests, had an interest in books. This interest had a criminal aspect. He somehow got himself appointed Inspector General of French Libraries, stole literally thousands of valuable books from these libraries, and fled to England with them. Meanwhile, he was having bitter fights with Liouville. Libri had gotten himself appointed a professor at the Collège de France, and, for instance, it was a big political problem at the Collège de France whether to declare his chair vacant after he had fled to England and been indicted for theft. Okay, so here's Galois, and here's an alleged picture of Libri. All right, now we come to the sort of more modern era, where — what? There's a question. Well, so a field is a commutative ring where every non-zero element has a multiplicative inverse.
Okay, like the real numbers or the rational numbers, but also Z mod p if p is a prime, okay? But there are other fields, in other words things satisfying these axioms, that also have finitely many elements. If a field has finitely many elements, then when you add one to itself a bunch of times, you're eventually going to get zero, because otherwise it would have infinitely many elements. And the first time that happens is going to be a prime number p. And then this finite field contains the field Z mod p, and since it's finite, it's going to be a finite-degree extension. And in fact, there's a finite field of any cardinality which is a power of p. There's one and only one extension field of degree n — an n-dimensional vector space over F_p that's a field. And that's what we're talking about. Okay, but I'm happy to be interrupted. Okay, so it already starts before the 1920s: there's this awareness of some analogy between the study of number fields and the study of curves over finite fields. But it's only first really made explicit by Emil Artin, who defines the notion of the zeta function of a curve, which he saw as analogous to the Riemann zeta function for Q or the Dedekind zeta function of a number field, and formulates the Riemann hypothesis for it — we'll explain in a minute more precisely. And barely seven years later, F.K. Schmidt analyzes exactly what the shape of this zeta function is in general. Here's a picture of Artin. Here's a picture of Schmidt. And now let me tell you about this zeta function notion. So let me first start in general, if I have... So if you like it better, you can take q to be p. So this is supposed to be some algebraic variety. It's defined by finitely many equations with coefficients in this field, in finitely many variables. Maybe they're just affine equations, or maybe you think of it in projective space, defined by homogeneous equations. And as I was saying a minute ago, here's this finite field F_q, but for every n, there's a field extension.
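To make the "one and only one field of each cardinality p^n" concrete, here is a sketch (my own illustration, not from the talk) of the field with nine elements, built as F_3[x]/(x² + 1), which is a field because x² + 1 is irreducible mod 3 (minus one is not a square mod 3):

```python
# The finite field of cardinality 9 = 3^2, built concretely as F_3[x]/(x^2 + 1).
# Elements are pairs (a, b) representing a + b*x, with x^2 = -1.
# We verify the field axiom: every nonzero element has a multiplicative inverse.

P = 3

def add(u, v):
    return ((u[0] + v[0]) % P, (u[1] + v[1]) % P)

def mul(u, v):
    # (a + b x)(c + d x) = (ac - bd) + (ad + bc) x, using x^2 = -1.
    a, b = u
    c, d = v
    return ((a * c - b * d) % P, (a * d + b * c) % P)

elements = [(a, b) for a in range(P) for b in range(P)]
nonzero = [e for e in elements if e != (0, 0)]

for e in nonzero:
    assert any(mul(e, f) == (1, 0) for f in nonzero)  # e has an inverse
```

The same construction with any irreducible polynomial of degree n over F_p gives the (unique) field of cardinality p^n.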
So this extension, if you think of it as a vector space, is an n-dimensional vector space over F_q, and there's one and only one of each degree. This is not the way Artin originally did it, but what we want to look at is the number of solutions of these equations in these extension fields, for every n. And you might think that you should just make a generating series where this is the nth coefficient, but it turns out that the better way to do it is this: zeta of X is going to be a function of a variable t, and it's initially going to be a power series — you take the sum, over n greater than or equal to 1, of the number of points over the extension of degree n, multiplied by t to the n over n, and then exponentiate. So it's not obvious a priori, but this turns out to be the good way to package this data. Okay, so a priori this is a power series with rational coefficients and constant term 1. What's not at all obvious, unless you rewrite this as some sort of Euler product, is that in fact, as a power series, it actually has integer coefficients. Okay, now, if this X over F_q is a, say, projective, non-singular, geometrically connected — I'm not going to keep saying these fancy words — curve, then to fix ideas an example might be x to the d plus y to the d plus z to the d equals 0, thought of projectively, in any characteristic p that doesn't divide d. That's a wonderful example of what we're talking about. So you take this and its zeta function — this is what Artin defines — and what F.K. Schmidt proves is that this zeta function, in this curve case, has a very simple shape. It's a rational function. The bottom is 1 minus t times 1 minus qt. The top is a polynomial of degree 2g, where g is the genus of this curve, and certainly if I write down a nice equation like this, the genus is in fact the genus of the complex curve defined by the same equation. So in this example, the genus would be d minus 1 times d minus 2 over 2.
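Here's a small sketch of that packaging in code (my own illustration, not from the talk): for the projective line over F_5, the count over the degree-n extension is 5^n + 1, and exp(Σ N_n t^n / n) should come out to 1/((1-t)(1-5t)), with integer coefficients.

```python
from fractions import Fraction

# Package point counts N_n into Z(t) = exp(sum_n N_n t^n / n) and watch
# integrality appear. Example: the projective line over F_5, where N_n = 5^n + 1,
# so Z(t) should equal 1/((1-t)(1-5t)).

q = 5
num_coeffs = 10
N = [q ** n + 1 for n in range(1, num_coeffs + 1)]  # N[k-1] = #X(F_{q^k})

# exp of the series S = sum N_n t^n / n, computed via Z' = S' * Z, i.e.
#   n * z_n = sum_{k=1}^{n} N_k * z_{n-k}
z = [Fraction(1)]
for n in range(1, num_coeffs + 1):
    z.append(sum(Fraction(N[k - 1]) * z[n - k] for k in range(1, n + 1)) / n)

# A priori rational, but in fact every coefficient is an integer...
assert all(c.denominator == 1 for c in z)
# ...and matches the expansion of 1/((1-t)(1-qt)) = sum_n ((q^{n+1}-1)/(q-1)) t^n:
assert all(z[n] == (q ** (n + 1) - 1) // (q - 1) for n in range(num_coeffs))
```

The same recurrence works for any sequence of point counts; only for a variety do the miraculous rationality and integrality appear.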
Anyway, so the top is a polynomial of degree 2g, and F.K. Schmidt proves that it looks like this. This polynomial has integer coefficients and constant term 1, and if we write it in terms of what are called the reciprocal roots, these alpha i, then F.K. Schmidt also proves that alpha goes to q over alpha is an involution of these roots. So it's a kind of functional equation — it is a functional equation for this zeta function: an involution of the reciprocal roots of this polynomial of degree 2g. Okay, now the Riemann hypothesis is the statement that all these alpha i have absolute value the square root of q. Right, now in Artin's thesis, he defines the zeta function quite generally, but he only computes it in a few examples. He formulates the Riemann hypothesis pretty generally, computes a few examples, and verifies it there. Now, the question of why we care, or why anybody cared at the time: for a long time, all through the 20s, this was kind of abstract stuff, and nobody had applications for it. But meanwhile the English school, independently, was interested in, so to speak, the input data to this zeta function — they were interested in the number of mod p points on some X. They would write down, say, this kind of equation and want to know how many mod p points it had. Now Mordell, as far as I can tell, was interested in this question just because he thought it was a nice question. Davenport was his student, and Davenport got interested — I'll explain how in a second — in the following question. Suppose I'm looking at integers mod p, and I ask myself: if I know, for instance, that 7 is a square mod p, what are the chances that 8 is also? Or if I know that 7 and 8 are both squares, what are the chances that 9 is?
So the general question is: if I take a bunch of offsets, which might be the numbers 1, 2, 3, up to k, how often will it be the case that a number and all its offsets are simultaneously squares? What you would expect: how likely is it that n plus a1, n plus a2, up to n plus ak are all squares? Well, you would think that the chance for each is a half, so if k is, let's say, 3 — n plus a1, n plus a2, n plus a3 — you would think that about 1 out of 8 values of n would have that property. And in general, if you deal with k things, you would expect that 1 out of 2 to the k values of n in your possible range would have this property. And you ask yourself: what is the actual difference between the fraction you get if you do this with a large prime and the answer you expect, 1 over 2 to the k? Now, a strange fact, which I only learned by maybe wasting too much time reading history articles, is that this problem was considered, I think for k equals 3, by the topologist Heinz Hopf, and he got an error term of p divided by the square root of 6 — a little bit less than p. This got Davenport's attention, and right away he could do better than that. What the general game with these amounted to was that for these curves, if you go through the analysis, you want to know that the number of points on the curve is p plus an error term, and ideally you would like the error term to be of size the square root of p. What people like Davenport and Mordell did, with particular values of k, or particular degrees n and m in this equation, is get estimates like p plus big O of p to the 3/4, and various results of this type. And Hasse — that's a young Davenport, that's a not-so-young Mordell — Davenport was actually sent to Germany: Hasse was pals with Mordell, and he asked Mordell to send him a young English student who would help him, Hasse, improve his English. So Hasse sent him Davenport.
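Davenport's heuristic is easy to test numerically. A minimal sketch (my own, assuming the prime 10007; not code from the talk): count how often n, n+1, n+2 are simultaneously nonzero squares mod p, and compare with the expected density 1/8.

```python
# Davenport's question in miniature: among n mod p, how often are
# n, n+1, n+2 all nonzero squares?  Heuristically each condition holds
# with probability 1/2, so the fraction should approach 1/8 = 0.125.

def quadratic_residues(p):
    """The set of nonzero squares modulo the prime p."""
    return {(x * x) % p for x in range(1, p)}

p = 10007  # a prime, chosen for illustration
qr = quadratic_residues(p)
count = sum(
    1 for n in range(p)
    if n in qr and (n + 1) % p in qr and (n + 2) % p in qr
)
fraction = count / p
# The Weil bound for the relevant character sums makes the error O(1/sqrt(p)).
assert abs(fraction - 0.125) < 0.05
```

Running this for larger and larger primes, the fraction visibly tightens around 1/8, exactly as the Riemann hypothesis for the associated curves predicts.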
And for a few years there was this ongoing teasing, where Mordell and Davenport kept trying to reduce this exponent, and Hasse would tease them and ask whether they had reduced any exponents slightly. Finally, as far as I can tell, Davenport got sort of tired of this and said: well, if you're such a big-time conceptual guy, why don't you do something about it? And six months after that, Hasse proved the Riemann hypothesis for elliptic curves. Now let me explain what it has to do with point counts. Look at this formula for the definition of zeta, and look at this concrete expression. What this concrete expression says is that in this curve case, the number of F_q points is q plus 1 minus the sum of these alphas, the sum of these reciprocal roots. And not only that: over the extension field of degree n, the answer is q to the n plus 1 minus the sum of the nth powers of these same numbers. In particular, let's just do the simplest thing: the number of points of X over F_p is p plus 1 minus the sum of these alphas, from 1 to 2g. So if you knew that these alphas have size root p, then you certainly get that the number of points is p plus 1 plus an error term, and this error in absolute value is at most 2g root p, because that's how many numbers of this size you're adding up. So the Riemann hypothesis is what gives you this control. Now again, historically it's interesting, because this seems part of our early education, but it actually took a while: it wasn't until the early 30s that it was realized that this Riemann hypothesis for the Artin zeta function of a curve actually gave you this kind of control over the number of points. So here's Artin, with Hasse over his shoulder. The next thing that happens: in 1948, André Weil proves the Riemann hypothesis for curves of any genus — here's a picture of a very young André Weil — and at least as important, the very next year he formulates what everyone agrees are called the Weil conjectures, unlike some other things that had a
certain life of being called the Weil conjectures but then changed names big time. Anyway, I shouldn't have erased the zeta function. A minute ago we talked about what the zeta function of a curve looked like — F.K. Schmidt proved that it had a certain form. Now, for the general case, we take an X over F_q which is projective and non-singular, of some dimension n now, and geometrically connected, okay. So you form its zeta function, and it's again supposed to be a rational function, but now there are a lot of terms. The numerator is a product of the polynomials with odd indices, P1, P3, up to P 2n minus 1; the denominator has P0, P2, up to P 2n. So in the curve case, downstairs there's only a P0 and a P2, and upstairs a P1. So it's supposed to look like this: P0 is going to be 1 minus t, P 2n is going to be 1 minus q to the n times t, and that exhausts our knowledge of a priori formulas for what these P i are. Each P i is supposed to be an integer polynomial with constant term 1, and you're supposed to write it as a product of factors 1 minus alpha j t, the product over some j's, j running from 1 to the degree of P i. And it's a precise form of rationality that the zeta function has this particular form with these integer polynomials. Of course, I haven't said very much yet. The duality statement is that alpha sent to q to the n over alpha interchanges the roots of P i and the roots of P 2n minus i. So in the curve case it's just P1 with itself, and P0 against P2. In general, this already says something that's not obvious from what's written before, namely that these two polynomials P i and P 2n minus i had better have the same degree. And the compatibility with the complex situation means the following. Suppose I start with an X, so to speak, over the integers — in other words, it's defined by polynomials with integer coefficients — and when I reduce mod p everything is nice, and I get my nice variety X that I was looking at. So in this situation, if I think of the same
equations but with complex coefficients, and I take the set of complex points, this I can now think of as a complex manifold, in particular as a usual topological space: it has the usual Betti numbers. And this compatibility is the statement that the degree of P sub i is the ith Betti number of the complex analytic manifold that you get by looking over the complex numbers. And the final part of this, the Riemann hypothesis part, is the statement that these reciprocal roots of P i are supposed to have absolute value root q to the power i. So certainly, for the period from when these conjectures were formulated in 1949, for the 24 years until they were completely proven by Deligne, for a significant portion of the people on earth who were interested in algebraic geometry, this was sort of it, in terms of what people dreamed of making some contribution to. And I've lost the clicker — this is very serious. And the situation is kind of stable until the early 1960s, although already in 1958, at the Edinburgh International Congress, Grothendieck gives a talk where, basically in one page which is just part of a long talk, he has these ideas about developing a cohomology theory that's a combination of topological cohomology and Galois cohomology, and he thinks it's going to solve the Weil conjectures. It's just quite remarkable. Anyway, in this period of the early 1960s, Grothendieck and a whole group of people, primary among them Mike Artin, developed a lot of cohomology. Here's a young Grothendieck and a young Mike Artin. And this cohomology business: it has ordinary cohomology, it has compactly supported cohomology; any statement you know from a topology course about the behavior of these things works in this theory. And it's defined for, say, algebraic varieties over, say, algebraically closed fields of any characteristic. Actually, it's a little bit tricky, because if you're in characteristic p you have to use what's called the ℓ-adic theory: you take a prime ℓ different from p, and the coefficients in this ℓ-adic
theory are something like Q_ℓ, the ℓ-adic completion of Q. So let's leave that to the side. Now, the compatibility statement here is the following: if you look at H^i in this ℓ-adic theory of an X that came this way, there's this wonderful compatibility that you could also look at the, so to speak, topological cohomology of this thing that I called the complex manifold, where you would take, say, Q coefficients and tensor over Q with Q_ℓ — you extend the scalars from Q to Q_ℓ, but it's the usual cohomology. That's a nice check that this theory is constructing something that, when you're over the complex numbers, is basically what you already had, except you've extended scalars from Q to the ℓ-adic numbers. Now let me say a few words about what all these things up on the blackboard are. So the new feature is this: when I have my X over F_q and I look at H^i, or H^i compact, the Galois group of F_q bar over F_q acts on this, and in this Galois group there's a canonical generator, which for technical reasons is the inverse of the one that you're taught in number theory. But anyway, there's this canonical generator, and this thing that I'm calling the Lefschetz trace formula — because it's inspired by that, although sadly I didn't put a picture of Lefschetz in here — is the following statement. For any X over F_q — it doesn't have to be projective, doesn't have to be smooth, anything — if I want to count the points, that's the alternating sum of the traces of Frobenius sub q on these H^i compact. And if I want to put F_{q^n} here instead, I have to take the Frobenius with respect to q to the n, which is the nth power of this guy. So that's the Lefschetz trace formula. And I have to explain, just for a second, this notion of a local system. In usual topology there's this notion, which was invented by the 1940s, of a local coefficient system. So imagine you have a complex manifold — I mean, the usual example that we might know about is you have a complex manifold and some nice system of differential equations
and at every point it has an N-dimensional solution space, and you can do analytic continuation to move solutions near here to solutions near there. So suppose we have a morphism — and let's say, because for the application it's all I need, that this is a scheme on which the prime ℓ is invertible, and we use ℓ-adic cohomology — and suppose this map, call it f, is proper and smooth, projective and smooth. Then we have these compact cohomologies along the fibers, with these Q_ℓ coefficients, and this thing is a local system in a generalized sense, which I'm not going to define, on my S. But just like with a traditional local system, if you take two different points of the space that carries this local system — the space should be connected — then, as in the classical case, if you take a path from this point to that point, you can take guys here and move them and get an isomorphism with guys there. And that's going to be true here also: what this is at one point and what this is at some other point agree in some non-canonical way. So I can take this base to be something like Z, or Z with a whole lot of primes inverted, but maybe p is still good, so I can go from here to F_p, or I can embed this into the complex numbers. So if I had started with the thing I called bold X before, when I look at the fiber here, I'm talking about my X over the finite field, and when I'm looking here, I'm talking about my X(C), and we're talking about the H^i of these guys. And so the H^i of this complex variety is the same as the H^i of this variety over a finite field. It's that local system business which is how you see the compatibility of the theory with what's happening over the complex numbers. Okay, so, right, there's also a Lefschetz trace formula for these local systems, which I won't go into. Okay, now let me explain why, with just what I've told you so far, it almost proves three quarters of the Weil conjectures. So I told you there was a Lefschetz trace formula, that's
here, okay. And what that means is that you get a description of zeta: you put P sub i to be the determinant of 1 minus t times Frobenius on H^i — and because I'm talking about projective smooth things, I can put H^i or H^i compact, it doesn't matter. If you do this, then by the miracle of the formula for the logarithm you've recreated this zeta function. And by this compatibility about the Betti numbers being the same — and these Frobenii are automorphisms — the degree of each polynomial is the same number as if there were a complex guy floating around. Okay, so it's rational, and the polynomials have the right degrees. And the statement, which has briefly disappeared, about alpha goes to q to the n over alpha: in this theory that just results from Poincaré duality, which relates H^i and H^{2n minus i}. So now, I said "almost" — so what's the matter, what's wrong? What's wrong is this. Suppose we're in characteristic 691, okay. So I could use 2-adic cohomology, I could use 3-adic cohomology, I could use 1789-adic cohomology, all right. And every time I do, I get a factorization of the zeta function. Let's even suppose that my variety, so to speak, started nicely over the complex numbers. So I get a factorization of the zeta function into these polynomials of the degrees that I like, but they're ℓ-adic polynomials. The whole zeta function, so to speak, has integer coefficients, but maybe when I use different ℓ's I get different factorizations. I mean, if, when I used the 2-adic theory, I somehow knew that these polynomials had integer coefficients, and I knew that in the 3-adic theory they also had integer coefficients, I could at least meaningfully ask: are they the same polynomials? But if I don't even know that, boy does it not make sense. And then, forget that: suppose I stick to, say, the 2-adic theory. Then these reciprocal roots, these alphas in the Riemann hypothesis, which has now disappeared, these
reciprocal roots are supposed to have a certain absolute value — but they're not complex numbers, they're 2-adic numbers, so what sense does it even make? Okay, so Deligne cuts the Gordian knot — here's a picture — and let me just say very briefly what he does, how he cuts the Gordian knot. And I think it's really an apt analogy, because people had been trying to untie the Gordian knot for hundreds of years and weren't getting anywhere, and Alexander just comes along, and whoop. So what does Deligne say? Deligne says: we have this ℓ-adic theory, yeah, and what's your problem? Your problem is that you have the ℓ-adic numbers, and these alphas, these reciprocal roots, live in an algebraic closure Q_ℓ bar, and we were somehow supposed to talk about complex numbers. So Deligne just says: okay, pick a field embedding, he calls it iota, of Q_ℓ bar into the complex numbers. If you believe the axiom of choice, you can do it — they're fields of the same cardinality and the same characteristic. Okay, just do it. All right, and then, going back to our local systems, he's going to say that a local system script F on this scheme S that we were looking at before is iota-pure of some weight w if the following thing happens: every time you take a finite field, and every time you take a point of S with values in that finite field, you have a Frobenius sub s comma F_q, and it acts on this local system, and you ask that its eigenvalues, via iota, have absolute value the square root of q to the power w. So it looks like, okay, you can make this definition, but my god, how could you ever prove anything about it? Well, you can make an observation. You could say: if my local system is the constant sheaf, then Frobenius just acts trivially; there's one eigenvalue, and it's 1, and the number 1 has absolute value 1, which is root q to the zero. So it at least applies to the constant sheaf: the constant sheaf is iota-pure of weight zero, whatever
iota you pick, in fact. Okay. And the amazing theorem that Deligne proves — and this I'm just going to state without even an attempted explanation — is the following. I have a local system F on S, S over F_q, which is supposed to be iota-pure of some weight w, and I look at the compactly supported cohomology groups of S with coefficients in F. Frobenius operates on these, and I look at the absolute value, via iota, of any eigenvalue of this action. What Deligne proves is the inequality: this is at most root q to the power w plus i. Now, when you apply this to the constant sheaf, it says that the eigenvalues on H^i compact have weight less than or equal to i. But if that's true for projective smooth guys — if it's true for the eigenvalues on H^i and also on H^{2n minus i} — you have these two inequalities, and they both have to be equalities, because of this alpha goes to q to the n over alpha. So that's how he proves the Riemann hypothesis for projective smooth varieties — as a consequence of something much stronger, which turns out to have zillions and zillions of applications. So, I mention that, as Deligne says, it's inspired by reading Rankin's paper, where Rankin gets, in the late 1930s, what were at the time the best known estimates for the size of Ramanujan's tau function. Here's a picture of Rankin, kind of blurry, and that's a picture of Sophus Lie, because Lie groups also come into this in a big way. And so this is the theorem I was just stating, and, as I say, there are zillions of applications — a few are written down here, lots of applications in number theory. So in my remaining — I don't know if I have three minutes or eight minutes — okay, in my remaining eight minutes, I'll show you a baby application of the classical, I mean the 1948-type, Weil bound, just coming from the Riemann hypothesis for curves. The fact that the bound I'm going to tell you about results from the Riemann hypothesis for curves was already pointed out in an article by Davenport and Hasse in the
1930s: from the Riemann hypothesis for certain cleverly chosen curves you get the kind of estimate I'm going to state. So here it is. It's about an additive character sum. You take some polynomial f of some degree d, and you look at the sum over t of e to the 2 pi i f(t) over p. So it's a sum of p-th roots of unity, okay, and there are p terms in the sum. Each term is a p-th root of unity, so each term has absolute value 1, and the trivial bound for this sum is p. But the Weil bound is that it's in fact at most d minus 1 — the degree minus 1 — times root p. Okay, now if you apply this in a clever way, which I'm not going to go into, you get the following kind of statement. Take a polynomial function of degree 2 or more and plot its graph as a function from F_p to F_p. You do this for each p, so the graph is a little picture, a bunch of dots. You scale it so that instead of being in a box of size p, you shrink it down to a box of size 1. Then the statement is that as p gets larger and larger, these collections of points in the box are equidistributed for the usual measure on the box, which you should think of as a product of two circles. So let me show you. This is a graph of the fourth-power function, the Archimedean fourth-power function, and the next one — I was a bad boy and didn't write down the prime, which is not humongously large — is the graph of the fourth-power function as a function from F_p to F_p. If you look at it, it kind of looks like some sort of ornamental art; there are sort of patterns in there. And if we take p a little bit bigger, then it looks a little more like that, and then it's kind of neat looking. So you can make these pretty pictures, and you get, in the unit square, as p grows, sets of p points that become equidistributed for Haar measure. So that's kind of a cute thing. Now let me end — because I promised to talk about things we don't know as well — let me end with another example, where right now
what I'm going to say is only empirical; there's no actual theorem behind it. In the Weil bound you have the polynomial degree d, and the bound is d minus 1 times root p; the trivial bound is p. So it's not a very good bound unless d itself is less than root p — otherwise you should use the trivial bound instead. Okay. Because of some work of Zannier, I looked at the following polynomial: take x times x minus 1 times x minus t — I call it t there, but secretly it's lambda — raise it to the p minus 1 over 2 power, and look at the coefficient of x to the p minus 2. That's some polynomial in t whose degree is p minus 1 over 2. It's a humongous polynomial, so the Weil bound is a complete disaster, because it would be something like p over 2 times root p, which is way worse than just the trivial bound of p. Okay. Nonetheless, it seems that this character sum is bounded — not quite by a constant times root p, but maybe with a factor of the square root of 2 log p in front. Now I have to confess that I have some heuristic, based on a model which of course I have no idea if it actually applies — as I say, there are no theorems here — which leads to this square root of 2 log p. Okay, and when I've randomly picked some large primes, the biggest around 100,000, in fact it always seems to work with 4 in front of root p instead of this funny factor. But I don't believe that's really right; I believe that if I computed more, I would get cases where we would start to see that something like this is the right answer. So I leave it as a challenge to the high-precision guys in the audience to do some experiments, because when I do it, it takes a long time to compute these things. And in fact, when I do it, I say to myself: well, if I think of this sum as a number in the cyclotomic field Q adjoin zeta p, then taking a different additive character means that in front of the f of t I would put a non-zero number a mod p — raising each term in the sum to the power a, where a is a number between 1 and p minus 1. And I mean,
this is supposed to be true for all such characters. So I compute: the biggest experiment I did was with p around 104,000 or something. I computed all the hundred-some-thousand numbers and looked at the biggest absolute value, but sadly it was only three point something, so it's not going to tell me if it's really 4 or if it's really this thing I think. And it took all night to do it. So let me just end now, because time is up. Right, there are other polynomials: if instead of the x to the p minus 2 coefficient you took the x to the p minus 1 coefficient, the thing would completely fail — the sum would really be basically of size p. So it's not just any old polynomial; some polynomials are better than others, and we don't understand why. All right, a little more. At the very end, let me just say: early in his life, before he proved the Riemann hypothesis, Deligne had proved that the Weil conjectures for projective smooth varieties implied the Ramanujan conjecture. And I'll end with — well, I guess there's an e to the i pi equals minus 1 on a stamp, and here's Ramanujan's statement on a stamp. Thank you. Are there questions for our speaker? I have a question — just a historical question: when the Riemann hypothesis for curves was proved, I presume the connection with the actual Riemann hypothesis was clearly understood, or not? I mean, already when Artin formulates the Riemann hypothesis in 1923, he understands that it's an analog of the — so that was already there, yeah. I mean, it's guided by that analogy; they completely had in their heads that the function field of a curve over a finite field was like a number field, so they do say the functions had a similar behavior. Any questions? Maybe some of the students are brave enough — please feel free, this is your time to ask questions. They're voting with their feet. No? Okay, well, let's thank Nick again, and there's a reception outside.