Good afternoon, everybody. Shall we start by acknowledging the great musical presentation by our violinist today? His name is Uruj Bupnik, so please join me in congratulating him. This is a special day today: we have combined several things together, which I think is very interesting for ICTP's activities. The main event today is the celebration of the Ramanujan Prize. This is an award that we give every year to a mathematician who is younger than 45 years, and it is organized by ICTP together with the International Mathematical Union (IMU) and the Department of Science and Technology (DST) of India. We have been working together to recognize the work of young, active mathematicians from developing countries, and we have a great selection today. For that, I would like to thank IMU for their support, and the government of India in general for their support. India is represented here by Mr. Jain, on behalf of the ambassador of India, who kindly sent her regards, and he will say some words later on. This year's prize is awarded to Ritabrata Munshi of the Indian Statistical Institute in Calcutta, who is also at the Tata Institute of Fundamental Research in Mumbai, in recognition of his work in number theory. Fernando Villegas will describe in more detail what his work is about. The winner was selected by a committee composed of several mathematicians, representing all parties, who have expertise in the field. I will mention the names: Rajendra Bhatia, Alicia Dickenstein, Stefano Luzzatto from ICTP, who was the chair and unfortunately cannot be with us today, Philibert Nang, and Van Vu. Before I continue, I want to have some words from each of the three parties that award the prize, so I would like to ask Dr. Jain to say some words.

Distinguished guests, ladies and gentlemen, it is indeed a pleasure and honor for me to be present here today.
On behalf of the Embassy of India in Rome and the Department of Science and Technology of India, I congratulate Professor Ritabrata Munshi for having been awarded this year's Ramanujan Prize for his outstanding work in number theory. Professor Munshi is associated with two leading academic and research institutions in India, the Tata Institute of Fundamental Research and the Indian Statistical Institute. I also felicitate ICTP for establishing this prize in recognition of the extraordinary contribution of Srinivasa Ramanujan to mathematics. The prize not only recognizes the significant contributions made by the awardees, but also their capability to overcome challenges in achieving distinction in mathematics. Moreover, such awards and prizes attract scholars to carry out fundamental research in the basic sciences, especially in developing countries. I wish you all a very memorable evening. Thank you so much.

Thank you very much for the kind words. The representative of IMU cannot be here, but the president of IMU sent a message that I will now read.

Dear laureate, distinguished guests, ladies and gentlemen. It is my great pleasure, as president of the International Mathematical Union (IMU), to participate in the ceremony of the Ramanujan Prize 2018 through this message. IMU is an international, non-governmental, and nonprofit scientific organization with the purpose of promoting international cooperation in mathematics. IMU is therefore extremely proud to have been a part of the Ramanujan Prize since its inception in 2005. The prize is awarded annually to a young mathematician from a developing country who has done outstanding research in a developing country, because it is critical for IMU to foster the researchers of the next generation, especially those from developing countries.
IMU is grateful to the International Centre for Theoretical Physics (ICTP) and the Abel Fund from Norway for starting the prize with IMU, and also to the Ministry of Science and Technology of India and ICTP for currently supporting the prize with IMU. The Ramanujan Prize winners have been brilliant mathematicians, and I am very happy to see that Ritabrata Munshi from India joined them this year. I believe that the Prize Selection Committee has been very successful in their choices, and I am very grateful to them. Munshi is an excellent researcher in number theory with tremendous potential and with great achievements, particularly on analytic properties of L-functions and automorphic forms. I hope that this will encourage more researchers in developing countries, and that it will further contribute to the development of mathematics worldwide. Congratulations, and I wish you all the best. Shigefumi Mori, President of IMU.

OK. Very good. So now I will proceed with the rest of the ceremony. You have the program there, and you will see that I will award the prize to Professor Munshi, and then he will give a presentation. After that, Professor Brian Conrey, who is here, will give a special lecture, also on L-functions. And at the end of the day, when we will all have enough energy, I'm sure, we'll have a special event: a screening of the film that was made on the life of Abdus Salam, the founder of ICTP. I understand the people who made the film are present here, at least one of them; I haven't seen him, but I think he was supposed to arrive. It will be a very good opportunity for us, because the film is not yet out officially, to experience this film, which is an important part of ICTP, because it is about the life of the founder of ICTP. So we have a very busy afternoon, and I think we can move on and start with awarding the prize. Since the prize is awarded jointly by IMU, DST, and ICTP, I will also ask Dr.
Jain to join me in giving the award to Professor Munshi. And I think we will do it here, so that we can have a nice photo. Very good. So now Fernando Villegas will take over and introduce the awardee and the next seminars. Thank you, Fernando.

I'll say a few words about Ramanujan and the context of the prize, as well as about our winner this year. The Ramanujan story, I think, is very well known. There was a Hollywood movie made a few years back which, in fact, was shown in this very room on the occasion of another edition of the Ramanujan Prize. But I thought I would still say a few things quickly. Ramanujan was born in 1887. He lived during the British rule in India, and in 1913 his mentors from India tried to present his work to British mathematicians. One of them was M. J. M. Hill of University College London. He said that Ramanujan's papers were riddled with holes, and that though he had a taste for mathematics and some ability, he lacked the educational background and foundation needed to be accepted by mathematicians. Hill did give him thorough and serious professional advice, but did not offer to take him on as his student. With the help of others, Ramanujan wrote letters to Cambridge professors. Two of these letters were returned with no comment. But G. H. Hardy, famously, was intrigued. He first thought the letters were probably a fraud. Later he said that some of the formulae seemed scarcely possible to believe; and later still, that the results must be true, because if they were not true, no one would have the imagination to invent them. And I think it's a moment to think about ICTP's role. In my mind, one of its roles is to help people make the jump from "taste for mathematics and some ability" to being recognized by mathematicians. The Ramanujan Prize rewards achievement in mathematics done in the developing world in the spirit of Ramanujan. This year's winner, Ritabrata Munshi, certainly embodies that spirit.
He got his PhD in 2006 at Princeton University under the supervision of Andrew Wiles, who, as you probably know, famously did the work that ended in the proof of Fermat's Last Theorem. Ritabrata's undergraduate degree was from India. After completing his PhD, he was a Hill Assistant Professor at Rutgers University from 2006 to 2009, a member of the Institute for Advanced Study from 2009 to 2010, and returned to India in 2010. As mentioned, he is currently on the faculty of the Indian Statistical Institute in Calcutta. He has already been awarded several prizes: in 1999, the alumni gold medal of the Indian Statistical Institute; in 2017, the B. M. Birla Science Prize in Mathematics; in 2015, the Shanti Swarup Bhatnagar Prize for Science and Technology. I always like to try to get a sense of the person and transmit some of that in the introduction, though I didn't actually know Ritabrata personally before. So I will mention just one story, told by one of his collaborators, that I thought was interesting. The story, to me, illustrates an approach to science, and it's an approach I think many of us share. The collaborator was visiting Ritabrata in Bangalore, and they were going off to work, with Ritabrata driving his very nice car through the streets of Bangalore. I don't know if you've ever been to Bangalore and know what it means to drive on those streets, but it's an incredible chaos of vehicles, pedestrians, you name it. And through all of this, Ritabrata kept very calm and focused. His collaborator said he worried about the car; if it were his own car, he would be very worried that it would be ruined. And I suspect he was also worried about his own life, although he didn't say that. Through it all, Ritabrata kept calm and focused, and in fact he said: yes, it does look like chaos, but it's actually very efficient.
Everybody is solving a minimization problem locally, trying to achieve their goal of getting to wherever it is they want to go. I'm not completely sure that's true. But calm and focused amid the chaos, both of life and of the ideas swirling around you, is, I think, a very good picture of what doing successful science looks like. So with no further ado, I'd like to introduce Ritabrata Munshi, who will tell us about the sub-convexity problem for L-functions.

Thank you, Fernando, for the nice introduction. Let me begin by saying that it's a great honor for me to receive this prize. I'd like to thank ICTP, IMU, and DST, and the selection committee for finding my work worthy of this prize. I also take this opportunity to thank my colleagues back home, my collaborators, and especially my family, for their help. So I'm going to talk about the sub-convexity problem for L-functions, and that, as you can guess, is kind of technical. The next speaker, Brian, will tell you about L-functions, and he is, I can assure you, one of the best speakers; you will not get a better teacher for L-functions than him. What I'll do is try to introduce L-functions through some examples. The first example that comes to mind is something that we have all seen at some point: the Riemann zeta function. It is given by this series over here, the sum of 1/n^s with n going from 1 to infinity, and it is also equal to this product over all prime numbers, (1 - 1/p^s)^(-1). This equality is just the fundamental theorem of arithmetic: every integer can be written as a product of primes in a unique fashion. The easy thing you can say about this Riemann zeta function concerns s as a complex variable, which we will write as sigma + it: sigma is the real part, t the imaginary part.
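As a quick sanity check (my own sketch, not part of the talk), the equality of the Dirichlet series and the Euler product can be tested numerically. Here both sides are truncated at s = 2, where the true value is zeta(2) = pi^2/6:

```python
import math

def zeta_series(s, terms=100000):
    # Partial sum of the Dirichlet series  sum_{n >= 1} 1 / n^s
    return sum(1.0 / n**s for n in range(1, terms + 1))

def primes_up_to(limit):
    # Simple sieve of Eratosthenes
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit**0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def zeta_euler(s, limit=100000):
    # Truncated Euler product  prod_p (1 - p^{-s})^{-1}
    prod = 1.0
    for p in primes_up_to(limit):
        prod *= 1.0 / (1.0 - p ** (-s))
    return prod

# All three numbers agree to several decimal places
print(zeta_series(2), zeta_euler(2), math.pi**2 / 6)
```

The truncation point 100000 is an arbitrary choice; the two truncations converge to the same limit precisely because of unique factorization.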
And if sigma is larger than 1, so if you take s in this right half plane, then you can take absolute values and show that this series and also this product are absolutely convergent. You can then differentiate term by term and show that in this region the zeta function is complex differentiable, and as a consequence infinitely differentiable: you can differentiate the Riemann zeta function as many times as you like. The more difficult thing is to show that the Riemann zeta function actually makes sense for every other complex number s except s = 1, where there is a pole. That is called the analytic continuation, and I am going to tell you how to do it. This is one way of obtaining the analytic continuation of the zeta function. You start by defining a theta series: Theta(x) is the sum of e^(-pi n^2 x) with n running over all integers, where x is a positive real number. Then, if you know the definition of the gamma function, you can quickly check that if sigma is larger than 1, this integral is actually equal to this product of the gamma function and the Riemann zeta function. So the series and the product that define the Riemann zeta function now equal this integral, and this is called the integral representation of the zeta function. Now you can use this integral representation to analytically continue the zeta function beyond the region of absolute convergence. This is how you do it. Look at the function Theta(x) - 1: the term with n = 0 contributes a 1 to the sum, and the minus 1 takes care of it. So Theta(x) - 1 is rapidly decaying as x goes to infinity, which means that near infinity there is no problem: you can actually integrate this. The problem lies near 0.
So I want to say that this makes sense for all complex values of s except 1 and 0, and for that I have to say something about what happens to this integral near x = 0. For that there is a nice property of the theta function, which comes from what is called the Poisson summation formula. Here f-hat is the Fourier transform of f; this is something very basic. If you apply this to the function I had before, you get that Theta of 1/x equals root x times Theta(x). So there is a relation between the value of Theta at 1/x and the value of Theta at x, and you can use that to split the integral. The integral a priori is from 0 to infinity, and the problem was at 0. So you split it into one integral from 0 to 1 and another from 1 to infinity. The integral from 1 to infinity is nice, so you have to look at the integral from 0 to 1, and in that region you just use this transformation to flip to 1/x. When you do that, you get two integrals of this type, and this one contributes those two functions over there. Now, the first integral and the second integral actually make sense for any complex value of s, and they are nice: you can differentiate them as many times as you wish, so they are analytic functions. This means we can make sense of this product, zeta(s) times the gamma factor, for all complex numbers except s = 1 and s = 0. The problem at s = 0 comes from the gamma function, which has a pole at s = 0, so zeta(s) itself actually makes sense at s = 0. But at s = 1 the zeta function has a genuine pole: it is going to be infinite there. So if you throw out the point s = 1 from the complex plane, this representation gives a value of zeta(s) for all other complex numbers. That is the analytic continuation.
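The theta transformation used to split the integral is easy to verify numerically. A small Python sketch (mine, not from the talk; the test point x = 0.37 is an arbitrary choice):

```python
import math

def theta(x, nmax=50):
    # Theta(x) = sum over all integers n of exp(-pi * n^2 * x);
    # the terms decay so fast that |n| <= 50 is far more than enough here.
    return sum(math.exp(-math.pi * n * n * x) for n in range(-nmax, nmax + 1))

x = 0.37  # any positive real works
lhs = theta(1.0 / x)            # Theta(1/x)
rhs = math.sqrt(x) * theta(x)   # sqrt(x) * Theta(x)
print(lhs, rhs)                 # the two sides agree
```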
If you look more closely at the integral representation, you will see that if you change s to 1 - s, this integral becomes that integral and vice versa. So you have the relation xi(s) = xi(1 - s), and that is called the functional equation. So this is the summary of what we know, the basic properties of the zeta function. First, it is given by a Dirichlet series and an Euler product, which are absolutely convergent in the half plane sigma larger than 1. There is a way to analytically continue it, except for a pole at s = 1. And there is a functional equation which relates the value at s to the value at 1 - s. A better way of writing it is to say that zeta(s) is related to the complex conjugate of zeta(1 minus s-bar), where the bar is complex conjugation. Now we will be looking at functions which have similar properties, and that is the basic idea of L-functions. So here again is a picture of the complex plane and the properties of the Riemann zeta function. This is the real axis and this is the imaginary axis over here. On the right of the line through 1, the zeta function is given by this absolutely convergent series and product. And now we have continued it to the whole plane, except for the pole at s = 1. There is a relation: if you take a point s over here, you reflect it at one half, which gives you 1 - s, and there is a relation between the values at s and at 1 - s. The Dirichlet series and the Euler product are good enough to say a lot of things about the Riemann zeta function on the half plane sigma larger than 1, and using the functional equation you can flip this half plane to sigma less than 0, so you can also say a lot about what happens to the zeta function at sigma less than 0 over here. Then you are left with a region which looks like an infinite strip, bounded by the line passing through 0 and the line passing through 1, and this is called the critical strip.
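For the record, in standard notation the completed zeta function produced by the theta integral just described, and its symmetry, read:

```latex
\xi(s) \;=\; \pi^{-s/2}\,\Gamma\!\left(\tfrac{s}{2}\right)\zeta(s),
\qquad
\xi(s) \;=\; \xi(1-s).
```

Here the gamma factor is exactly the one that came out of the integral representation above.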
And what happens to the Riemann zeta function inside this strip is a mystery. The central line over here, the line passing through one half, is called the critical line. So this is the picture to keep in mind when you are thinking about L-functions. For L-functions, I will just write down the properties which more or less define what those objects are. A priori they will be given by some Dirichlet series, which looks like the sum of a_n / n^s. For the zeta function we had just 1/n^s; now we throw in some sequence a_n. And we want this to also be equal to some product, which will mean that the a_n should satisfy some recursion property. So instead of just (1 - 1/p^s)^(-1), I still have an inverse over there, but now we have p^s, p^(2s), up to p^(ds). Here d is called the degree of the L-function, so I am allowing higher degrees; the higher the degree, the more complicated the L-function. This product can also be written in this way: you are just collecting the roots of that polynomial here. Now we want this to be absolutely convergent in a certain half plane; we want an analytic continuation, except possibly for a pole at s = 1; and we also want a functional equation relating the value of L(s) to the conjugate of L(1 minus s-bar). So these are the properties you are looking at: you are trying to find functions which satisfy this type of properties. Given a sequence, you can very easily form the Dirichlet series, and if the a_n are bounded by, say, 1, then it is absolutely convergent in the half plane sigma larger than 1. For that to be equal to the Euler product, you need some multiplicative property of the sequence a_n.
For example, in particular you want that if m and n are co-prime, then a_{mn} = a_m times a_n, that type of thing, and also some kind of recursion property for powers of primes. You can find a lot of sequences which satisfy those two properties, so the first two requirements are quite easy to arrange for many sequences a_n. But the other two are quite difficult. If you start with some arbitrary sequence satisfying those combinatorial properties, it may be very hard to show that there is an analytic continuation, that the function makes sense for all values of s. And it may be much harder to show that there is a relation between the value at s and the value at 1 minus s-bar. So the question is whether there are such L-functions apart from the Riemann zeta function, and it turns out that we have millions of examples of such things. One basic example is the Dirichlet L-function. We start with a character of the group (Z/mZ)*: the integers between 0 and m which are co-prime with m form a multiplicative group, and you start with a homomorphism from this group to C*, the non-zero complex numbers under multiplication. Then you can extend this to define a function on all integers, from Z to C, by setting chi(n) = 0 if n is not co-prime with m, so it is not defined by the homomorphism there. If n and m are co-prime, then n has an image in (Z/mZ)*, and chi(n) is defined as chi of n modulo m. If you define the function chi in this way, it turns out to be a multiplicative function, vanishing exactly at those integers which are not co-prime with m. Now with this chi(n) you can form a Dirichlet series, and since chi is multiplicative, this turns out to be equal to an Euler product.
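As an illustration (my own sketch, not from the talk), here is one way to build such a character in Python when the unit group is cyclic; the choices m = 5, generator 2, and the fourth root of unity i are assumptions for the example:

```python
import math

def dirichlet_character(m, g, root):
    # Build a character mod m, assuming (Z/mZ)* is cyclic with generator g
    # and `root` is a complex root of unity whose order divides |(Z/mZ)*|.
    order = sum(1 for a in range(1, m) if math.gcd(a, m) == 1)
    dlog = {}                      # discrete-log table: g^k mod m -> k
    x = 1
    for k in range(order):
        dlog[x] = k
        x = (x * g) % m
    def chi(n):
        n %= m
        if math.gcd(n, m) != 1:
            return 0               # chi vanishes off the units, as described
        return root ** dlog[n]
    return chi

chi = dirichlet_character(5, 2, 1j)  # an order-4 character mod 5
# Multiplicativity: chi(2) * chi(3) equals chi(6)
print(chi(2) * chi(3), chi(6))
```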
Then you can adapt the proof of the analytic continuation and functional equation of the Riemann zeta function to show that this series also has an analytic continuation and a functional equation. Actually, if chi is non-trivial, meaning it is not identically one, then you can show that this is an entire function: there are no poles at all. So this is one set of examples, and these are all of degree one, because the d here goes up to one. Now let me give you an example of higher degree. We start with something Ramanujan was very curious about. You take z in the upper half plane, so z = x + iy with y greater than zero, and you define this product: with q = e^(2 pi i z), Delta(z) is defined as q times the product of (1 - q^n)^24. If you expand this product, you can define these coefficients tau(n): expand, collect equal powers of q, and you get the coefficients tau(n), which are integers. Ramanujan was very interested in finding the properties of these coefficients; they are called the Fourier coefficients of Delta. The nice thing about Delta is that it has a nice transformation property: Delta((az + b)/(cz + d)) equals (cz + d)^12 times Delta(z), for any matrix (a, b; c, d) in SL(2, Z). Together with the fact that tau(0) = 0 and this expansion, this makes Delta what is called a modular form, actually a cusp form, of weight 12 and level 1. So this is an example of a modular form. And Ramanujan made a few conjectures about it. He had his list of tau values, and from that list he conjectured that they are multiplicative: tau(mn) = tau(m) tau(n) if m and n are co-prime.
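The q-expansion of Delta is easy to compute with exact integer arithmetic, and the first few tau values already exhibit the multiplicativity Ramanujan conjectured. A short Python sketch (mine, not from the talk):

```python
def tau_coeffs(N):
    # Coefficients of Delta = q * prod_{n >= 1} (1 - q^n)^24, truncated at q^N.
    # poly[k] holds the (exact integer) coefficient of q^k.
    poly = [0] * (N + 1)
    poly[1] = 1                      # the leading factor q
    for n in range(1, N + 1):        # factors with n > N cannot affect q^0 .. q^N
        for _ in range(24):          # multiply by (1 - q^n), 24 times
            for k in range(N, n - 1, -1):
                poly[k] -= poly[k - n]
    return poly

tau = tau_coeffs(12)
print(tau[1:7])                      # [1, -24, 252, -1472, 4830, -6048]
print(tau[6] == tau[2] * tau[3])     # multiplicativity at the coprime pair 2, 3
```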
There is also a recursion formula relating tau(p^(j+1)) to tau(p^j) and tau(p^(j-1)), and the tau(p) are bounded by 2 times p^(11/2). You can try very hard, or you can now look on the internet and get the first thousand values of tau(n), and you can try to see how on earth someone could come up with such a list of conjectures. That was the genius of Ramanujan. The first two statements were proved by Mordell the very next year; once you know what to prove, it is sometimes easy to prove it. The last one took several years: it was proved by Deligne in 1974, more than fifty years after Ramanujan made the conjecture, and for this you need to use a lot of machinery from algebraic geometry. Anyway, Hecke looked at the following. Since the tau(p) are of size p^(11/2), you divide tau(n) by n^(11/2), so that you get numbers which are roughly of size 1, and you define tau_0(n) in this way. Then you look at the Dirichlet series of tau_0(n)/n^s, and the first two properties of tau imply that it is given by an Euler product like this. And now you see that the d goes up to 2: this degree 2 Euler product is an example of an L-function of degree 2. There is an integral representation of this in terms of Delta(z), and you can use the transformation property of Delta(z) that I listed over here to show that it has an analytic continuation and a functional equation. So this gives you another set of examples of L-functions, of higher degree, but with properties almost identical to those of the Riemann zeta function. So here is the actual class of L-functions that we will be looking at. These are called automorphic L-functions. You start with an automorphic form of the general linear group GL(d) over the adeles of Q.
Until now we have been looking at d equal to 1 and 2; the modular forms are associated to automorphic representations of GL(2) over the adeles of Q, and now we will be looking at higher degree analogues of that. They are given by restricted tensor products: the representation pi can be written as a restricted tensor product of local representations pi_p, and then the L-function is defined as the Euler product of the L(s, pi_p), where each of these is the local L-function associated with the local representation pi_p. That comes out the same as given here: the local L-function is just a nice simple product like this. Then there is also this product of gamma functions, and one can show, by the work of Godement and Jacquet, that these L-functions have analytic continuation and satisfy a functional equation like this. I would just like you to remember at this point that we have this number q_pi and these numbers mu_i: the mu_i appear in the gamma factors in the functional equation, and q_pi is a certain positive integer appearing in the functional equation. Later I will use these numbers q_pi and mu_i to define what is called the conductor of the L-function, which will be used to tell you what the sub-convexity problem is. Anyway, once you have a bunch of L-functions, you can actually play with them and construct new L-functions. One way to do it is this: I have defined L(s, Delta) and L(s, chi), and you can put them together and look at the Dirichlet series of tau_0(n) chi(n) / n^s. This also has all the nice properties: analytic continuation, functional equation, all that. Actually, Delta twisted by chi is itself a modular form. The other example: instead of looking at the sequence tau_0(n), you look at tau_0(n^2), which is like a sparser sequence.
But still, if you form this Dirichlet series, it has all those nice properties. The only thing is that for the Euler product you need to multiply by zeta(2s). But that is okay, because if you take s near one half, which is the central line, then 2s is near 1, and we know a lot about the zeta function near 1; so this is not adding to the complexity. Once you multiply by it, the product has the natural Euler product structure, and that is called the symmetric square L-function of Delta. Then you can also do something called the Rankin-Selberg convolution. For example, here I am taking the symmetric square of Delta and taking the Rankin-Selberg convolution with Delta itself: that is given by tau_0(n) and tau_0(n^2); you multiply the coefficients and form a Dirichlet series like that, and it also has all the nice properties of the Riemann zeta function. As for the degrees: the twist, as I said, is still a modular form of weight 12, but now of level like m^2, where chi is modulo m, so it is another example of a degree 2 L-function. If you write down the Euler product for the symmetric square, it turns out to be of degree 3, so the symmetric square L-function is an example of an L-function of degree 3. And the Rankin-Selberg convolution of the symmetric square, which is degree 3, with Delta, which is degree 2, gives you an example of an L-function which is actually of degree 6. So you can make more and more complicated L-functions this way. Okay, so what is the main conjecture in this field? That is of course the grand Riemann hypothesis, which says that all the non-trivial zeros of L(s, pi) (and we know there are infinitely many of them inside the critical strip) are supposed to lie on the central line, the line passing through one half. Riemann came up with this hypothesis in a study of prime numbers.
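To record the symmetric square construction from a moment ago in a formula: with tau_0 the normalized coefficients, the classical Shimura relation (a standard identity, stated here for reference rather than proved) reads

```latex
L\!\left(s,\operatorname{sym}^2\Delta\right) \;=\; \zeta(2s)\,\sum_{n\ge 1}\frac{\tau_0(n^2)}{n^{s}},
```

and the right-hand side, after multiplying by zeta(2s), carries a degree 3 Euler product.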
Riemann had an explicit formula where on one side you have a sum over primes, and on the other side a sum over the zeros of the Riemann zeta function. This is called Riemann's explicit formula, which relates a sum over primes to a sum over zeros. So if you have some information about the zeros of the zeta function, you can feed it in over here and get information about the prime numbers; that is the advantage of the explicit formula. But that is only the Riemann hypothesis for the zeta function. What about other L-functions: does the Riemann hypothesis for them say something? For example, Hardy and Littlewood proved back in 1923 that if you assume the generalized Riemann hypothesis for Dirichlet L-functions, then you can show that every sufficiently large odd integer is a sum of three prime numbers. We of course all know that it is an open problem to show that every sufficiently large even integer is a sum of two primes. But for odd integers, the statement that they are a sum of three primes was proved under GRH back in 1923, and only recently has the GRH assumption been completely removed, so we do not really need GRH to say this now. Still, it is a nice application of GRH, although it is saying something about prime numbers. So can we have an application of GRH which apparently has nothing to say about prime numbers? Here is an example, something that, again, Ramanujan was very interested in. Look at the quadratic form x^2 + y^2 + 10 z^2. You can show very easily, by working modulo 16, that if an integer leaves remainder 6 modulo 16 then it cannot be written in this form. For example, the number 6 cannot be written as a square plus a square plus 10 times a square. But every other residue class modulo 16 does contain integers that can be written in this way.
And so it follows quite easily that numbers of the form 4^j (16k + 6) cannot be represented by this quadratic form. Then Ramanujan writes down a list of 16 numbers (but here you have 18), and he says that probably all other numbers are represented by this quadratic form. And it is now a theorem, due to Ken Ono and Kannan Soundararajan from 1997, that if you assume GRH, then these are the only exceptions: every integer which is not of that form and not in this list can be represented as x^2 + y^2 + 10 z^2. Notice that in this statement I have not written the word "prime" anywhere, so this has nothing directly to do with prime numbers. A general consequence of GRH is what is called the Lindelöf hypothesis. Here I need to define the conductor, built from q_pi and the mu_i, the numbers that appeared in the functional equation; and I am looking at L(1/2 + it), so that is the t over here. With this definition, to any L-function you can attach a conductor, which tells you about the complexity of the L-function. For example, for the Dirichlet L-function the conductor is given by m, where chi is modulo m: it is m times (1 + |t|). If you look at the twist of Delta by chi, it becomes the square of that. If you take f to be a modular form of weight k and level m, then this is the conductor, and for the symmetric square of that form it is m^2 times this factor. So you can very easily write down the conductor: if you give me an L-function, I know how to write it down. And the generalized Lindelöf hypothesis says that the value of the L-function cannot be too large compared to the size of the conductor.
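Before moving on, the small cases of the representation question can be checked by brute force. The sketch below (my own, not from the talk) searches for representations n = x^2 + y^2 + 10 z^2 over nonnegative integers:

```python
import math

def represented(n):
    # Is n = x^2 + y^2 + 10*z^2 for some nonnegative integers x, y, z?
    z = 0
    while 10 * z * z <= n:
        rest = n - 10 * z * z
        x = 0
        while x * x <= rest:
            y = math.isqrt(rest - x * x)   # is rest - x^2 a perfect square?
            if y * y == rest - x * x:
                return True
            x += 1
        z += 1
    return False

# 6, 22, 24, 38 lie in the excluded family 4^j (16k + 6), while
# 3, 7, 21, 31, 33 are among the sporadic exceptions on Ramanujan's list.
print([n for n in range(1, 40) if not represented(n)])
# -> [3, 6, 7, 21, 22, 24, 31, 33, 38]
```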
So given any epsilon greater than 0, however small (say you choose it to be 1 over a billion), I can give you a constant C_ε such that the L function is bounded by that constant times the conductor to the power 1 over a billion. So that's a consequence: if you assume the Riemann hypothesis, you can actually prove this. And this is another way of writing that statement. So what are the applications of the Lindelöf hypothesis? One is Hilbert's 11th problem, which asks: given a quadratic form over a number field, what are the algebraic integers represented by that quadratic form? Usually the Siegel mass formula is the key. But there is one hard case, where Q is a positive definite ternary quadratic form, and then you need a bound like this. So there is an L function, and you want to show that it is bounded by the conductor to the power one quarter minus delta; again, I'm suppressing the dependence of the implied constant over here, that's the notation. Here π is a fixed GL(2) form over F, where F is the number field you're looking at, and χ is a Hecke character over that number field whose modulus is going to infinity. So if you have such a bound for those L functions, then you can settle Hilbert's 11th problem, and that, in the generality of F totally real, was done by Cogdell, Piatetski-Shapiro and Sarnak. So notice that you need here the conductor to the power one quarter minus delta, and you really don't need the full strength of the Lindelöf hypothesis, which says conductor to the power epsilon; one quarter is much larger than epsilon. Okay, another consequence of the Lindelöf hypothesis is in what's called arithmetic quantum chaos. So here you're looking at the upper half plane modulo SL(2, Z), and you take f to be a cusp form of weight k, level one.
And then using that you can define a measure on this space X₀(1). So μ_f(z) is a measure on X₀(1); it's actually a probability measure. If you know a little bit of physics, it's the probability of a particle being at location z in the state k. And then the quantum unique ergodicity conjecture says that when k tends to infinity, this measure μ_f will tend to the uniform measure. And we know that this follows if we have subconvexity, if we have this bound for the symmetric square and the symmetric square twisted by g, where f and g are GL(2) forms. So how do we bound an L function? This is back to the picture. We know that it has absolute convergence over here and a functional equation here. So we know how to bound the L function on this side, and we can use the functional equation to bound it over here. And using complex analysis, we know how to bound the function inside the critical strip; that's called the convexity principle. But using the convexity principle, you can get only one quarter plus epsilon, and in the previous two applications I said that you need something like one quarter minus delta. So this easy bound coming from complex analysis is called the convexity bound, and the subconvexity problem is to get something like one quarter minus delta. And if you look at it, it's actually related (so this is the L function, and these are the coefficients in the L function) to this partial sum of λ_π(n) n^(it), where the sum is going up to x. The trivial bound for this is like x to the power 1 plus epsilon. But if you assume the generalized Lindelöf hypothesis, then it would mean that you get a better bound as soon as x is larger than the conductor to the power delta, where delta is any positive number. And from the functional equation, we know that we can get a better bound once the length of the sum is larger than the square root of the conductor.
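In symbols, the three bounds being contrasted in this discussion are, in a standard formulation with analytic conductor C:

```latex
\text{Lindel\"of (under GRH):} \quad
  \bigl|L(\tfrac12 + it, \pi)\bigr| \;\ll_\varepsilon\; C^{\varepsilon}

\text{convexity (unconditional):} \quad
  \bigl|L(\tfrac12 + it, \pi)\bigr| \;\ll_\varepsilon\; C^{1/4 + \varepsilon}

\text{subconvexity:} \quad
  \bigl|L(\tfrac12 + it, \pi)\bigr| \;\ll\; C^{1/4 - \delta}
  \quad \text{for some fixed } \delta > 0
```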
So for subconvexity, what you need to show is that there is cancellation in this sum even when x is slightly smaller than the conductor to the power one half. So this is the panorama of subconvex bounds. These are the classical results of Weyl and of Hardy and Littlewood back from around 1920, about the Riemann zeta function. And in the last almost 100 years we have improved the exponent: t to the power 1/6 was the subconvex bound, and now we have 1/6 minus 1/84, due to Bourgain using the Bombieri–Iwaniec method. And for the Dirichlet L function, if you look at L(1/2, χ), then it's m to the power one quarter minus 1/16 (the Burgess bound), where one quarter is the trivial bound, and this still stands as the record in general. But for χ quadratic, one can improve it and get one quarter minus 1/12; that's due to Conrey and Iwaniec. And usually the subconvex bounds are obtained using what's called the moment method. Suppose you want to get a subconvex bound for L(1/2, π₀). You embed π₀ in some family F where all the π in F are behaving like π₀, and you compute the second moment. And then from this it follows that L(1/2, π₀) is bounded by M to the power one half. And at best you can show that M_F is like the size of the family. So if you want to get a subconvex bound over here, aiming for Q_{π₀} to the power one quarter minus something, then you need the size of the family to be at most the square root of Q_{π₀}. But on the other hand, you need the size of the family to be big enough that you have some sort of orthogonality. And so that's the problem: if you are doing the moment method, you want to find the right family, which is on one hand small enough and on the other hand big enough. And usually it's difficult to get hold of such a family.
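The moment-method step just described fits in one line, since positivity lets you drop every term of the family average except π₀:

```latex
\bigl|L(\tfrac12, \pi_0)\bigr|^2
  \;\le\; \sum_{\pi \in \mathcal{F}} \bigl|L(\tfrac12, \pi)\bigr|^2
  \;=\; M_{\mathcal{F}}
\quad\Longrightarrow\quad
\bigl|L(\tfrac12, \pi_0)\bigr| \;\le\; M_{\mathcal{F}}^{1/2}
```

So if at best M_F is about the size of the family, subconvexity C^{1/4 minus delta} forces the family to be a bit smaller than C^{1/2}, while orthogonality wants the family large; that is exactly the tension described above.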
There is a nice technique for the case when the family size is exactly equal to the square root of the conductor, which is called the amplification technique, due to Friedlander and Iwaniec, where you throw in an extra weight over here, called the amplifier, to amplify the contribution of the particular π₀ that you were interested in. Anyway, using the moment method or the amplification technique: for degree one, at least, you can recover the Weyl and Burgess bounds. For degree two, for modular form or Maass form L functions, there was a series of papers by Duke, Friedlander and Iwaniec using the amplification technique, settling the subconvexity bound in each aspect, the weight and the level. And you can use that to deal with degree two cross two, which is the Rankin–Selberg of GL(2) × GL(2); that was done by Sarnak, by Michel, and by Michel and Venkatesh. And so you see here we have degree one, degree two and degree four, but degree three is somehow missing. And there is a problem with degree three. There is this result of Conrey and Iwaniec that I mentioned, where they look at the third moment. But the third moment only makes sense when you have non-negativity, and you have non-negativity only for certain L functions. In that case you can get this result for symmetric squares, which are degree three, but you have to assume, for example here, that χ is quadratic. So you need to know something about non-negativity. All right. So the problem was that if you have a general GL(3) form, which is not a symmetric square, which is not self-dual, then you don't have non-negativity. And then how do you tackle this problem? For that we need a new approach. And the approach is that you separate the oscillation.
So, if you go back to the subconvexity problem, I mentioned that it is related to getting cancellation in certain sums, which now look like a(n) times b(n). So you separate the oscillation of a(n) from b(m) using δ(n − m), which is defined this way. And then you plug in some expansion of δ(n − m), which is called the delta method or circle method. You arrive at this sum, and then you use some summation formula for this and a summation formula for that to reach something like this. So it is like dualizing. And if a and b are two different sequences, the sizes are completely different, so you are not back to the diagonal; the diagonal is completely killed. And then you use the Cauchy–Schwarz inequality to get something like this, and apply the Poisson summation, or some summation formula, again. So that is the recipe, and that is different from doing the moment method. And if you do that, then you get a series of new results. For degree 3, you can get the t aspect and π twisted by χ. For the symmetric square, you can also get the weight aspect. For degree 3 cross 2, where π is a GL(3) form and f is a GL(2) form, so it is a degree 6 L function, you can do the t aspect. And the good thing about this method is that it can also give you the powerful results for degree 1 and degree 2 in one go. So it more or less covers all types of L functions. And it also gives a lot of new results which are not yet written down. If you look at L(s, π × f × χ), where π is GL(3), f is degree 2, and χ is degree 1, then you can do subconvexity in the modulus of χ, or in the spectral parameter of f, or in the spectral parameter of π, as long as the spectral parameters are in generic position. So you have a new method, a new approach to subconvexity, and it seems to be quite fruitful: you get new results which are quite interesting.
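The separation-of-oscillation recipe above can be sketched schematically as:

```latex
S \;=\; \sum_{n \sim x} a(n)\, b(n)
  \;=\; \sum_{n}\sum_{m} a(n)\, b(m)\, \delta(n - m),
\qquad
\delta(k) \;=\;
\begin{cases}
  1, & k = 0,\\
  0, & k \neq 0,
\end{cases}
```

after which one expands δ(n − m) into additive characters (the delta or circle method), applies a summation formula (Poisson or Voronoi) separately in n and in m, and finishes with Cauchy–Schwarz followed by one more application of Poisson summation.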
And so these are the things coming up. And with that, let me thank you. Thank you very much. Maybe we have a few minutes for a couple of questions. Are any students brave enough to ask something? Yeah, there you go. In the first part of the talk, you talked about constructing higher degree L functions using symmetric squares, but is there anything special about squares here? Couldn't we take cubes? Do they satisfy the properties of L functions, or does it not work? For any symmetric power, you can get an L function of higher degree. If you take a modular form, its symmetric square is degree 3, the symmetric cube will be degree 4, and so on; the symmetric n-th power is degree n + 1. It just becomes harder and harder, and there are no results so far for the moment. Any other question or remark? If not, then let's thank Ritabrata one more time. We'll ask our musician to have a little interlude while we set up for Brian Conrey's talk. Okay, it's my turn to introduce our next speaker. We would like to think of the Ramanujan Prize ceremony as a celebration of mathematics here at ICTP. So it's my pleasure to introduce Brian, who has very kindly agreed to come and give us a talk on the general topic of L functions, expanding on the more detailed talk of the prize winner. I've known Brian for over 15 years, and I really thank him profusely for agreeing to speak. Brian got his PhD from the University of Michigan in 1980. He was awarded the Levi L. Conant Prize for expository writing in 2008 for a piece on the Riemann hypothesis that was already mentioned, and he was elected a Fellow of the AMS in 2015. He is the Executive Director of the American Institute of Mathematics, currently in San Jose. He was instrumental in the creation of this institute in 1994, with funding from the successful businessman John Fry, who has a keen interest in mathematics.
Since 2002, AIM has been part of the NSF institutes program, where they hold week-long focused workshops. In fact, one of them was actually held here at ICTP in 2012, a joint workshop. Brian was very much at the center of the development of the study of the zeros of the Riemann zeta function and its connections with random matrix theory, which is something that appears naturally in the world of statistical physics, and I'm sure he will tell us how that goes. I thought it was an appropriate topic to discuss at an institute which is mostly physics. So without further ado, I'll ask Brian to come talk about the world of L functions. Thank you, Fernando. It's a pleasure to be here. And I want to offer my congratulations to Ritabrata on his fantastic work. And Fernando, actually, did you say 15 years we've known each other? With a factor of two in there, maybe. What's that? 1990. Right, let's see: 28 years. Okay, so does this sound okay? Can you hear me okay? All right. The world of L functions. Well, all right, so some of this you might be a little familiar with. Oops. I will start with the Riemann zeta function. It's the sort of prototype of all L functions and kind of the starting point, the tip of the iceberg. Let's see, away from that thing maybe. Right. So it has this Dirichlet series and also an Euler product; as Ritabrata said, it's really the fundamental theorem of arithmetic, right? Every integer has a unique factorization into a product of primes, and so you get this kind of formula. And here's zeta for real values of x between, say, minus one and five. You have this pole at one. And actually, if you look a little bit further to the left, you see the trivial zeros that Ritabrata was talking about, at the negative even integers. It looks like that. Some special values: maybe you know that zeta of two is pi squared over six, and zeta of four, some beautiful formulas.
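Those special values are easy to confirm numerically. A quick sketch using only the standard library (the Dirichlet series converges fast enough here for direct partial sums; the tail beyond N is below 1/N for zeta(2)):

```python
import math

# Partial sums of the Dirichlet series sum 1/n^s.
N = 10**6
zeta2 = sum(1.0 / (n * n) for n in range(1, N + 1))   # should be pi^2 / 6
zeta4 = sum(1.0 / n**4 for n in range(1, 2001))       # should be pi^4 / 90
```

Comparing against pi² / 6 and pi⁴ / 90 shows agreement to within the truncation error.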
And at the negative odd integers, you get rational numbers that are related to the Bernoulli numbers. There's Riemann, 1826 to 1866. He died just before he turned 40. He wrote just one paper on number theory, eight pages, that kind of turned the subject on its head, and he had a number of great discoveries in that one short paper. The first is a functional equation: π^(−s/2) Γ(s/2) ζ(s) gives you the exact same thing when you replace s by 1 − s. Kind of remarkable. Now, because of this functional equation and because of the Euler product, you might ask if zeta has any complex zeros and, if so, where they are. Well, there are none out here, and that's basically a consequence of that Euler product. There are no zeros to the right of one, because it's an infinite product: you'd have to have a factor that was zero, which you don't. And then because of the functional equation, you don't have any over here except for those trivial ones that we already talked about. So the other zeros are in the critical strip, between zero and one. And Riemann's great discovery number two is about how many zeros there are as you go up that critical strip. If you go up to a height T, you get around (T/2π) log(T/2πe), which you can just think of as a constant times T log T. So as you go higher and higher, you get more and more zeros; they get denser. And his great discovery number three is this explicit formula for this weighted sum over primes in terms of the zeros. These ρ = β + iγ are the zeros. You need to take a limit of this, and that's an exact formula. Quite nice. Now, this is the right hand side of that formula using 100 zeros, just up to height 20, and it makes this nice staircase thing, where at every prime or prime power you jump up by the log of that prime. And that's just for 100 zeros. If you take all of them, then you get a perfect staircase.
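The staircase can be reproduced in a few lines. A sketch with just the first ten zeros hardcoded (their ordinates are well known); the more zeros you add, the sharper the jumps at prime powers become:

```python
import math

# Ordinates gamma of the first ten nontrivial zeros 1/2 + i*gamma.
GAMMAS = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
          37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

def psi_approx(x):
    """Truncated explicit formula for psi(x) = sum of log p over p^k <= x."""
    s = x - math.log(2 * math.pi) - 0.5 * math.log(1 - x ** -2)
    for g in GAMMAS:
        rho = complex(0.5, g)               # zero on the critical line
        s -= 2 * (x ** rho / rho).real      # rho paired with its conjugate
    return s
```

With only ten zeros the staircase is still wobbly, but psi_approx(10) already lands within a unit or two of the true value psi(10) = 3 log 2 + 2 log 3 + log 5 + log 7, about 7.83.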
Now, here's an attempt at a graph of the zeta function up the critical strip. On the red line, we went up the 0.4 line (this is the critical strip between zero and one) and we wound around like that. On the blue line, we went up the 0.6 line and wound around like that. And the green is the one-half line, and it wound around like that. So that helps give a little bit of an idea; it's really a four dimensional graph, obviously, since you're graphing a complex function. Oops. Okay. So this is that same picture. You see, you go up the 0.4 line, and the curve kind of wraps around the origin but never goes through it, right? And you go up the 0.6 line and it stays off to the right. But if you go up the one-half line, it seems to go exactly right through the origin each time. And in fact, that's what Riemann's hypothesis was in his 1859 paper: that all the complex zeros will be on the half line. And that's still unproven today, despite lots of work. This is Riemann's manuscript from 1859. He had been accepted into the Berlin Academy of Sciences, and so he wrote a paper on the occasion of that honor, and that's when he wrote this paper. These are sort of his handwritten notes from it. Here's the part about the Riemann hypothesis. He had rotated things by 90 degrees, so in his language it was that all the zeros of his Ξ function of one half plus it were real. And that's what he says: one finds indeed approximately this number of real roots (the number, you know, that T log T number) within these limits, and it is very probable that all the roots are real. Now, he had done quite a lot of calculations. He actually had the first maybe three zeros explicitly calculated to a few decimal places. But this part, one finds indeed approximately this number of real roots between these bounds.
That's not even proven yet today. The way we would interpret that would be that 100% of the zeros, almost all the zeros, are on the critical line, whereas around 41% is the most we can prove so far. So this is not so clear. He had lots of notes. So his paper is not actually a preprint, because it was just part of the proceedings of the Berlin Academy, and Riemann didn't get to read it. In fact, it was read by Encke, who was, I don't know, Gauss's student. I don't know if you remember the Gauss story: Gauss and Encke had exchanged letters about how many primes there are. Anyway, it's the same Encke. Okay. So these are the first few zeros. They really are on the half line. Their values seem to be some strange transcendental numbers. So that green line going around and seeming to go through the origin: well, how are you sure that it went right through and didn't miss by a little bit? Well, in fact, because of the functional equation, you can make a function that's real for real t, and then to check whether you have a zero or not, you just need to find sign changes. So this is Hardy's Z(t) function. Basically you take ζ(1/2 + it), multiply by the factors from the functional equation, and then divide out their absolute value so that you don't change the scaling. This is called Hardy's function. It has the same absolute value as zeta on the half line, but it's real, and so it has the exact same zeros. So these are the zeros up to 50: 14, 21, 25, and so on. That's a stretch of 50 there, starting at zero. Now, that's a stretch of 50 also along the real axis, but starting at 1000, and you can see there are a lot more zeros up there, right? Okay. So the Lindelöf hypothesis is a consequence of the Riemann hypothesis, and that would be the fact that zeta on the half line is bounded by t to the epsilon.
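Incidentally, the zero counts just mentioned (ten zeros below height 50, many more per unit height near 1000) already match the main term of the Riemann–von Mangoldt counting formula. A small sketch with the first ten ordinates hardcoded:

```python
import math

# Ordinates of the first ten zeros of zeta (well-known values).
ZEROS = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062,
         37.586178, 40.918719, 43.327073, 48.005151, 49.773832]

def n_main_term(T):
    """Main term of N(T) = number of zeros with 0 < gamma <= T:
    (T / 2 pi) * log(T / (2 pi e)) + 7/8."""
    return T / (2 * math.pi) * math.log(T / (2 * math.pi * math.e)) + 7 / 8

count_to_50 = sum(1 for g in ZEROS if g <= 50)    # actual count: 10
estimate_50 = n_main_term(50)                     # about 9.4
```

The density (T/2π) log(T/2π) per unit height is why the stretch starting at 1000 is so much more crowded than the one starting at zero.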
The convexity bound, which you get from the functional equation plus Phragmén–Lindelöf, is that zeta is bounded by t to the one quarter. The Weyl bound gives you t to the one sixth, which was quite a remarkable improvement. And the current record, due to Bourgain, is t to the 13/84. These were mentioned in Ritabrata's talk. Here's just a little graph of zeta near, a little bit above, a million. And so you see, it's going along quite nicely, and all of a sudden it just does this kind of thing, right? And that gives you a clue, or at least some kind of indication, that you might have trouble proving the Riemann hypothesis, that all the zeros are there, because the behavior is kind of erratic. You know, you think about what functions we know that have all real zeros: sine and cosine, or Bessel functions, or whatever. The zeros of sine are exactly uniformly spaced; the zeros of the Bessel function are approximately uniformly spaced. And so the methods that we have for those sorts of things aren't really going to apply to the zeta function, because it has this strange kind of random behavior that is difficult to get hold of. And if you thought that one was big, look at this one. This was found by Bober and Ghaith Hiary. I don't even know what that number is, but this is a scale of points near it, and it goes up to 14,000 there. So this is the function that you're dealing with, trying to prove that all the zeros are on the half line. What? Well, there's that. Well, you could have a whole bunch of them nearby that are really close together, and then a big gap all of a sudden, and it shoots way up. Yes, but I don't have any pictures for you. Right, so small peaks. Sure, there are going to be those, and you're interested because they're near misses, but there aren't going to be any misses, so.
But no, this is a better clue that it's really, really hard, right: the fact that it goes up so much. Not that, okay, whatever. Maybe you're right. You could be. Okay, just some more pictures. Here's the absolute value of zeta between zero and one; of course, the half line would be sort of up here. There's one over zeta at some points; you see, the peaks are from the zeros, right? And I specifically chose zeros that were spaced basically the same distance apart, so all the little peaks are the same height there. Here's the absolute value of the real part, where you begin to see this behavior where you've got this one little hump here kind of hiding behind this other one. This is sort of the first spot where that actually happens, and it's governed by the argument of zeta, which is known to get arbitrarily large the higher up you go, and so what it looks like is going to be super complicated. Well, this is a sum of the x to the ρ's, adding up over the zeros, and what you see here is that these things register at the primes: two, three, smaller at a prime power four, five, seven, eight, nine, the powers, eleven. And so that kind of indicates in some way how the zeros know about the primes. They're sort of a dual set. And then a complementary picture: this is a Fourier transform of the error term from the prime number theorem, and here you have some very sharp spikes, but at the zeros. Here's a history of how people compute how many zeros are on the half line, to verify the Riemann hypothesis, right? And actually Riemann should be first here, with three zeros, I think. But anyway, you can see Turing is in there. His experiment didn't quite go exactly the way he wanted, but he did get a little bit of improvement.
That was the first use of a computer to do this. By now it goes all the way down to something like 10 trillion: the first 10 trillion zeros are proved to be on the half line. Yeah, yeah. Hilbert, at the International Congress in 1900, had 23 problems for mathematicians to work on, and the Riemann hypothesis was one of them. Most of the problems are solved by now, but the zero points having real part one half is not done yet. There is an idea about how to prove the Riemann hypothesis, sort of attributed to Hilbert and Pólya, that maybe you can interpret the zeros as eigenvalues of some operator. And Odlyzko wrote to Pólya to ask him about this. Pólya was getting up there in years, and so he wanted to have some record of this. And Pólya wrote back to Odlyzko in 1982: thanks for your letter; I spent two years in Göttingen around 1914 and tried to learn analytic number theory from Landau. And Landau asked him: you know some physics; do you know a physical reason why the Riemann hypothesis should be true? And that's when Pólya said that would be the case if the non-trivial zeros of the ξ function were connected with a physical problem, so that the Riemann hypothesis would be equivalent to all the eigenvalues of the physical problem being real. He never published this, but somehow it became known. Well, in the 1950s, physicists started working on random matrix theory, because they realized that energy levels are distributed not randomly, but like random eigenvalues. So I think this is one of the first pictures of that. Oriol Bohigas, I believe, is responsible for this: looking at spacings between energy levels of various resonances of various nuclei. If they were random, you would expect just a Poisson, e to the minus x, kind of distribution, right? But in fact, this plot shows what you get, and it really looks like what's called the Gaussian orthogonal ensemble, the nearest neighbor spacings from that.
And I think this led to quite a lot of activity in the world of random matrix theory. And in 1972, a connection was made with zeros of the Riemann zeta function. Hugh Montgomery, who was a graduate student at the time (actually, he was later my advisor at Michigan), was trying to solve the problem of Gauss about showing effectively that the class number of imaginary quadratic fields goes to infinity. So: finding an effective lower bound for the class number of imaginary quadratic fields, or, the same thing, for positive definite quadratic forms of discriminant D; can you find all of those things? And that led him to a sum over zeros of the Riemann zeta function. He talked about this at a meeting in St. Louis. He was a graduate student at Cambridge, England, and on the way back to Cambridge he stopped at the Institute for Advanced Study, where, at tea, Chowla persuaded him to talk to the great physicist Freeman Dyson and explain what he'd done. And Dyson said that Montgomery's function was just the pair correlation function of the eigenvalues of random GUE matrices. This is a note that Dyson wrote; well, he wrote it to Selberg to give to Montgomery. "The reference Montgomery wants is Mehta, Random Matrices," showing that the pair correlation function of the zeros of the zeta function is identical with that of the eigenvalues of a random complex Hermitian or unitary matrix of large order. So, 1972. And that was the beginning of the connection between random matrix theory and the zeros of the zeta function. The pair correlation in question is 1 − (sin πx / πx)², which is the solid curve there. And then Odlyzko made a plot, took a lot of data. He figured out a quick way to calculate zeros of the zeta function when you do many of them in an interval at a time, and made this plot, which is obviously very striking.
Now, actually, there were large tables of zeros of the Riemann zeta function known way back into the 40s, large meaning in the thousands or whatever. So this is what the eigenvalues of, say, a random 96 by 96 unitary matrix look like. And if you look at it, you see that the spacings are very kind of pleasing in a way, right? In some ways, this might be what you picture in your mind, if you don't think too much about it, of what randomness looks like. But in fact, if it's really random, points just chosen like throwing a dart, this is actually what you get. This would be the Poisson case. So you've got clumps and spaces like that, right? It's qualitatively very different. The eigenvalues, which I would say are very pleasing, have a repulsion: they don't get too close together, and consequently they're not really too far apart either. Well, suppose you take 96 zeros of zeta and just wrap them once around a circle, say starting at 1200. Well, this is what you get. And I claim that you can see instantly that this is not the Poisson random thing, but it's more like the random matrix eigenvalues, right? So somebody could have observed that a long time ago. Okay, well, not all that much happened, actually, after Montgomery's discovery in 1972. It took a really long time before number theorists were able to read the physics literature, understand it, and know what to do with it. I mean, there was some other work, but not very much until 1998, when John Keating and Nina Snaith at Bristol University made the observation that not only are the zeros of zeta like the eigenvalues of matrices, but the value distribution of zeta is like the value distribution of the characteristic polynomials. I guess that sounds like a pretty simple observation, but it's not obvious that that should work. But in fact, it does.
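The eigenvalue-versus-Poisson comparison in those pictures is easy to redo numerically. A sketch assuming numpy is available: sample a Haar-random 96 × 96 unitary matrix (QR of a complex Gaussian, with the phases of R's diagonal absorbed so the distribution really is Haar) and look at its eigenvalue spacings:

```python
import numpy as np

rng = np.random.default_rng(0)

def haar_unitary(n):
    """Haar-distributed n x n unitary matrix: QR decomposition of a
    complex Ginibre matrix, with R's diagonal phases pushed into Q."""
    z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

u = haar_unitary(96)
eig = np.linalg.eigvals(u)
angles = np.sort(np.angle(eig))
# Normalize so the mean spacing is 1, as for unfolded zeta zeros.
spacings = np.diff(angles) * 96 / (2 * np.pi)
```

The eigenvalues all sit on the unit circle, and the variance of the normalized spacings comes out far below the value 1 of a Poisson process; that is the repulsion visible in the picture.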
And, well, there's a graph comparing, for matrices of the appropriate size, having the same kind of average spacing of eigenvalues as the zeros at a certain height, and you get this remarkable agreement. And I'm not going to say much more about this, but this was 20 years ago, and it basically started an explosion in the subject: evaluating moments exactly, giving very precise results about the fluctuations of zero spacings that come from primes, and really for pretty much any kind of family of L functions. So there's been, like I said, kind of an explosion of work that came about because of that 1998 discovery. Basically what they did: we had this sequence of numbers 1, 2, 42 that were the constants in the moments, the second moment, fourth moment, and sixth moment of zeta, and their goal was to explain that 42. And at the same time they did it, there was another data point that came about, which was 24,024, the fourth number in that sequence. They came up with that independently, and the number theorists came up with that, and when that happened, everybody was sort of an instant believer. Okay, so now I want to move on to other L functions. We can't just spend all day talking about the Riemann zeta function, even if that is our favorite function, or some of our favorite function. Okay, so there's Dirichlet, and actually he's the first: his work dates back to 1837, which is before Riemann in 1859, right? And I think he's the one who used L for L function. It might be because his first name was, well, maybe that's not his first name; maybe because his fourth name was Lejeune. Anyway, so yeah, as Dr. Munshi said, these are characters of the multiplicative group of residues. And you can form a linear combination of them to detect one residue class, the residue class a mod q, like this.
And so from a linear combination of L functions, which look like this (they're multiplicative), you can select out that residue class. See, Dirichlet was trying to prove that there are infinitely many primes in any primitive arithmetic progression. There are sort of elementary ways to show there are infinitely many primes of the form 4n + 1, or various things like that, but how about 13n + 7? Are there infinitely many primes of that form? Well, Dirichlet was trying to wipe out that problem altogether, and so he invented these characters and studied these L functions. But he was really just studying them as functions of a real variable; it was Riemann who introduced the idea of having a complex variable. So here's an example, L(s, χ₃). There's one primitive character mod 3. It goes 1, −1, 0, 1, −1, 0, 1, −1, 0; it just repeats like that. Okay? And that has an Euler product. And amazingly enough, it has a functional equation: square root of 3, well, (3/π) to the s over 2, times this gamma times that L gives you the same thing when you replace s by 1 − s. There are four characters mod 5. The first one's the trivial one. At multiples of 5, they're all 0. And these are the characters. And if you take the Dirichlet L function for that third one on the list, it looks like this, and then it just repeats: you get a 0, then it repeats. It's multiplicative, so you have an Euler product. And lo and behold, you have a functional equation that reflects; oops, sorry, I didn't change those, those are supposed to be 3/5 and 2/5, I'm sorry about that. Okay, anyway. And the epsilon turns out to be a Gauss sum, basically. In this case, you can say exactly what it is, but it has absolute value 1, and that's the functional equation for that.
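The character linear combination that picks out a residue class is easy to demonstrate. A small sketch for q = 5, building the four characters from the fact that (Z/5)* is cyclic with generator 2 (the indexing chi(0), ..., chi(3) is my own labeling, not from the slides):

```python
import cmath

q = 5
# Discrete logarithms base 2: 2^0=1, 2^1=2, 2^2=4, 2^3=3 (mod 5).
dlog = {pow(2, k, q): k for k in range(q - 1)}

def chi(j, n):
    """The j-th Dirichlet character mod 5 (j = 0 is the trivial one)."""
    if n % q == 0:
        return 0
    return cmath.exp(2j * cmath.pi * j * dlog[n % q] / (q - 1))

def indicator(a, n):
    """(1/phi(q)) * sum_j conj(chi_j(a)) * chi_j(n): equals 1 iff n = a mod q."""
    return sum(chi(j, a).conjugate() * chi(j, n) for j in range(q - 1)) / (q - 1)
```

So `indicator(3, n)` is numerically 1 when n ≡ 3 (mod 5) and 0 otherwise, which is exactly the linear combination Dirichlet feeds his L functions into.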
So Dirichlet is actually looking at L prime over L, because that's the Dirichlet series that's supported on prime powers. And then you want to isolate a residue class a mod q, and so you have this linear combination of logarithmic derivatives of the L functions. And you need to draw your conclusion. Well, what he needed was that L(1, chi) was not 0 for these things. And he could do that easily when chi was a complex character. But when chi was a real character, he ran into serious problems and couldn't quite figure out how to do it. But he succeeded in showing that it was not 0 by writing down a formula for this class number that I've already mentioned. h(d) is the class number of the field Q(root d), or it's the number of inequivalent binary quadratic forms of discriminant d. If h(d) is 1, that means you have unique factorization of the integers in that field. Anyway, what Dirichlet proved is this, his class number formula: for negative d, you get essentially the square root of |d| times L(1, chi_d). And for positive d, you have the square root of d, but then divided by the log of the fundamental unit. So in the real quadratic fields you have infinitely many units, these epsilons that are m plus n times the square root of d, for some m and n that satisfy Pell's equation. And since you're over on the 1-line, the L shouldn't ever get too small, really. So in principle, for the imaginary quadratic fields, this should go off to infinity quite nicely. The w here is just 2, or 4 or 6 for two special fields. But for the real quadratic fields, you have this log epsilon, which quite often can be basically almost as big as the square root of d. So it's known, and this is actually due to Harold Stark, or maybe Heegner originally, and to Baker with transcendence methods, when h(d) is 1: there are only 9 imaginary quadratic fields that have unique factorization, and the largest one is when you adjoin the square root of minus 163.
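A sketch, not from the talk: the class number h(d) for negative discriminants can be computed directly by counting reduced primitive binary quadratic forms, which makes the nine class-number-one fields just mentioned easy to check.

```python
# Sketch (not from the talk): h(d) for d < 0 by counting reduced
# primitive forms a x^2 + b xy + c y^2 with b^2 - 4ac = d,
# |b| <= a <= c, and b >= 0 whenever |b| = a or a = c.
from math import gcd, isqrt

def class_number(d):
    assert d < 0 and d % 4 in (0, 1), "not a discriminant"
    h = 0
    for a in range(1, isqrt(-d // 3) + 1):       # reduction forces 3a^2 <= |d|
        for b in range(-a + 1, a + 1):           # b = -a is never reduced
            if (b * b - d) % (4 * a):
                continue
            c = (b * b - d) // (4 * a)
            if c < a or (b < 0 and a == c):      # reduction conditions
                continue
            if gcd(gcd(a, abs(b)), c) == 1:      # primitive forms only
                h += 1
    return h

print({d: class_number(d) for d in (-3, -4, -15, -23, -163)})
```

Running this over all fundamental discriminants down to, say, minus 200 recovers exactly the list minus 3, minus 4, minus 7, minus 8, minus 11, minus 19, minus 43, minus 67, minus 163 with h(d) equal to 1.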
Well, okay, so there are two problems of Gauss. Are there infinitely many d such that Q(root d) has class number one, that is, are there infinitely many real quadratic fields that have unique factorization? Okay, we don't know the answer to that. And the other problem is to find all the finitely many imaginary quadratic fields for which the class number is a given number h. Okay, that's the second one. That's the problem Montgomery was working on when the famous exchange with Freeman Dyson happened and the random matrix theory connection with zeros of zeta came up. So these two Gauss problems are basically related to L functions. This is just in passing. So what we've been talking about is degree one L functions, the Riemann zeta function and Dirichlet L functions. And they have a functional equation that kind of looks like root q to the s times this gamma, and then times the L is invariant under s goes to 1 minus s. And the functional equation itself is a little tricky to check, if you have something and you want to check whether its functional equation is correct, because you don't have very good convergence of these Dirichlet series. But here's a way you can do it. You take the Mellin transform of this thing, which means integrating it against y to the minus s. And then you get a really nicely convergent series. Explicitly, this is what it is, with an arbitrary a in there. And the functional equation translates to this, which you can check on your computer quite easily. You don't need very many terms to check whether or not you actually have something with a functional equation. And for higher degree L functions there are variations of this that work quite well. Okay. So here's Dedekind. And he studied the zeta functions of number fields. So you have any kind of number field, a finite extension of the rationals, with its ring of integers O_K in there.
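The rapidly convergent functional-equation test described a moment ago can be seen in its simplest case. A sketch, not from the talk: for the Riemann zeta function the functional equation is equivalent to the Jacobi theta identity theta(1/y) = sqrt(y) * theta(y), where theta(y) = sum over all integers n of exp(-pi n^2 y), and both sides converge so fast that a few dozen terms settle the question.

```python
# Sketch (not from the talk): numerically checking the theta identity
# that is equivalent to zeta's functional equation.
from math import exp, pi, sqrt

def theta(y, terms=50):
    # 1 for n = 0, then each n > 0 contributes twice (n and -n)
    return 1 + 2 * sum(exp(-pi * n * n * y) for n in range(1, terms))

for y in (0.7, 1.3, 2.9):
    assert abs(theta(1 / y) - sqrt(y) * theta(y)) < 1e-12
print("theta identity verified numerically")
```

This is the same mechanism as the test on the slide: the Mellin transform trades a slowly convergent Dirichlet series for an exponentially convergent sum, so the check needs very few terms.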
And you no longer have unique factorization, but your ideals have unique factorization: you can uniquely factor ideals into prime ideals and powers of prime ideals. And ideals are just, well, they're closed under multiplication by anything in the ring. So in the integers, the ideals are just all the multiples of three or four or five or whatever. And if you don't have unique factorization, it gets restored by this. And the norm is just, like, how many residue classes, well, it's the size of the ring of integers modulo that ideal. And so this is the zeta function of the number field K. These p's are the prime ideals. So here's an example. Let's take the Gaussian integers, Q adjoin i. This does have unique factorization, and all your ideals are principal, which means, so like (m + ni), the brackets around it mean all the multiples of the thing inside: take anything and multiply it by your given m + ni. Now the primes are just 1 + i; the p's where p is congruent to 3 mod 4; and m + ni where m squared plus n squared gives you your p, which only happens for primes that are 1 mod 4. Only primes that are 1 mod 4 are sums of two squares. And your Dedekind zeta function then, well, you divide by four because of the four units 1, minus 1, i and minus i, so there are four ways to generate any ideal. And you just wind up with this: for the p's that are 3 mod 4, the norm is p squared; for the p's that are 1 mod 4, the norm is p, but there are two of them, m + ni and m minus ni, so you get the exponent 2 here. And so this is an L function, it's the Dedekind zeta function of that field. And in fact it factors into zeta of s times the Dirichlet L function with the real character mod 4. Okay.
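A sketch, not from the talk: the factorization of the Dedekind zeta function of Q(i) as zeta(s) times L(s, chi_4) can be checked at the level of Dirichlet-series coefficients. The n-th coefficient of zeta_{Q(i)} is the number of ideals of norm n, which is r2(n)/4, where r2(n) counts lattice points with m^2 + k^2 = n; the n-th coefficient of the product is the divisor sum of chi_4.

```python
# Sketch (not from the talk): coefficient-level check of
# zeta_{Q(i)}(s) = zeta(s) * L(s, chi_4).
from math import isqrt

def chi4(d):                       # the real character mod 4
    return (0, 1, 0, -1)[d % 4]

def r2(n):                         # representations n = m^2 + k^2
    count = 0
    for m in range(-isqrt(n), isqrt(n) + 1):
        k = isqrt(n - m * m)
        if k * k == n - m * m:
            count += 2 if k > 0 else 1
    return count

for n in range(1, 201):
    assert r2(n) == 4 * sum(chi4(d) for d in range(1, n + 1) if n % d == 0)
print("coefficient identity holds for n up to 200")
```

The factor of 4 is exactly the division by the four units mentioned above.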
In general, you have a functional equation that looks like this: the square root of the discriminant of the field, and then a bunch of gamma of s over 2's and a bunch of gamma of s's. r1 is the number of real roots and r2 the number of pairs of complex roots of a polynomial that defines K, and that gives you the Dedekind zeta function. And that's the starting point for investigating the distribution of prime ideals in a number field, kind of like the analog of Dirichlet's theorem for primes in arithmetic progressions: you want to know how the prime ideals are distributed in a number field. Well, so here's a question, I don't know, maybe a silly question, I'm not sure, but okay, let's look back at those m + ni, those are Gaussian integers. Can you walk to infinity? So you have a giant that can take steps no larger than its height. If you have a sufficiently tall giant, could it walk off to infinity stepping only on Gaussian primes? Okay, so here's a picture, these are Gaussian primes, and actually, have I done that right? Yeah. So the idea would be, can you walk off to infinity, stepping only on the Gaussian primes? I'm not sure I did this picture right. Okay. Actually, what I really wanted was a video, I mean, a cartoon of a giant taking those steps, but I couldn't do that. All right, Hecke, the very instrumental figure in the development of L functions, both for number fields and for modular forms. In 1917 he generalized the Dirichlet characters to characters of number fields, modulo an ideal. So, for example, again with Q(i), if you look at the ideal (3 + 2i), this has norm 13. So you have 13 residue classes in there, and you can take representatives of those residue classes to be the numbers 0 through 12. Now, you have to make sure the character is 1 on the units. So on i, which is 5, and minus i, which is 8, and 1 and minus 1, you put ones.
And then, well, 12 divided by 4 is 3, so you can take cube roots of unity for the other values and put those on the cosets. And so this would be a Hecke character for that number field. Oops. And it has a functional equation, let's see, yeah, okay, with the square root of 52, like this. But he also realized you could make characters that depend on the units, so they're no longer finite-order characters. So in this case, you take m + ni to the power 4k for any integer k, and that also gives you an L function that has a functional equation. So here it is for k equals 4. In general, this will turn out to be a modular form of weight 2k, I guess. And if you do it for a real field, then you have another extra complication, because you have these infinitely many units for the real quadratic fields, and so you need something that's 1 on the units. And the Grossencharacters he came up with here look kind of strange. At an ideal, say (m + n root 2) in Q(root 2), you can take any k, but the values are no longer the sort of algebraic integers that we're used to. We've sort of gone out of the kind of situation of L functions with nice numbers that we're used to. And that's got a functional equation where now we see some i's in the shifts of the gamma factors. Now, I want to back up for just a second to Eisenstein series. If you take this sum, E_2k of z equals the sum over coprime integers c and d of 1 over (cz + d) to the 2k, then you can check quite easily that this thing at minus 1 over z, you put 1 over z in here, but then you multiply by z to the 2k, and c and d are symmetric, and so you recover E_2k of z. And also you can replace z by z + 1, and that doesn't change the coprimality.
And so you have these two things, which means this is a modular form of weight 2k: it transforms like this for any a, b, c, d integers with determinant 1. And you can work out the Fourier series. It's periodic with period 1, so it has a Fourier series, and its Fourier series involves the Bernoulli numbers and the divisor functions, sigma sub a of n is the sum over d dividing n of d to the a. So you have very explicit Fourier expansions for these. Here's E4 and here's E6; those just depend on those divisor sums. Also, if you multiply two together, if you multiply a modular form of weight j and a modular form of weight k, you get a modular form of weight j plus k. So for example, E4 times E6 is exactly equal to E10, but E4 cubed is not E12. And if you take the difference between E4 cubed and E6 squared, and then scale it, you get this delta function, which starts off e(z) minus 24 e(2z) plus so on, and that's what we call a cusp form, because it vanishes as z goes up to i infinity: a cusp form of weight 12. And this was the thing that Ramanujan did in 1916. He studied this delta of z, which, as Ritabrata showed us, amazingly enough also has this infinite product that gives you this tau. So it's either that difference of Eisenstein series, or else it's this product, and here is the first bunch of coefficients tau of n, down to n equals 10. And if you look, you see tau of 2 is minus 24 and tau of 3 is 252. And what's minus 24 times 252? Well, it's minus 6048, as I'm sure you spotted instantly. Anyway, tau of 2 times tau of 3 is tau of 6, and tau of 2 times tau of 5 is tau of 10. And Ramanujan spotted that, that it's multiplicative. And this is actually quite remarkable, because then you make the L series with this, and you have an Euler product, and all of a sudden you have an L function that nobody had ever really thought about before.
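A sketch, not from the talk: Ramanujan's tau can be computed from the infinite product just mentioned, Delta = q times the product over n of (1 - q^n)^24, truncated as a power series, and the multiplicativity he spotted checked directly.

```python
# Sketch (not from the talk): tau(n) from the eta product
# Delta = q * prod_{n >= 1} (1 - q^n)^24, truncated mod q^32.
N = 32

def mult(a, b):                           # truncated power-series product
    c = [0] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < N:
                    c[i + j] += ai * bj
    return c

series = [1] + [0] * (N - 1)              # prod (1 - q^n)^24, factor by factor
for n in range(1, N):
    factor = [0] * N
    factor[0], factor[n] = 1, -1
    for _ in range(24):
        series = mult(series, factor)

tau = [0] + series[:N - 1]                # multiply by q: shift by one

print(tau[1:11])
```

The checks tau(6) = tau(2) tau(3) and tau(10) = tau(2) tau(5) come out exactly as in the table on the slide.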
So, in my mind, I think this was his best contribution. I don't know what other people think, and in the movie the partitions get quite a lot of attention, but this I definitely like. Anyway, so you get the L function, and it has a functional equation that looks like this. It's a modular form, a cusp form of weight 12. He conjectured that the absolute value of tau of p is less than 2 p to the 11 halves; actually strictly less than, since it's an integer. And Deligne proved that. It took until 1974, as a consequence of his Riemann hypothesis for varieties, and he won the Fields Medal for doing that. That's his coat of arms. Here's a conjecture. It used to be discussed quite a bit, I haven't heard much about it lately, but there's a conjecture that tau of n is never zero, and nobody's been able to prove that. So try to prove that tau of n is not zero. Okay, so, all right, I think I'm going to move along here. Here's another example of a cusp form L function. You take E4 times delta, and that gives you a cusp form that has multiplicative coefficients. So c16, let's call it: you just take this series for E4 and this series for delta, multiply them together, and you get another cusp form. Artin said: if you have a Galois extension with Galois group G, then you can factor the Dedekind zeta function into L functions attached to representations of that Galois group. So that's pretty interesting. So for example, if you take x to the fourth minus x plus 1, its Dedekind zeta function looks like this, and that factors as zeta of s times L(s, rho), and this is your L(s, rho), and it has a functional equation like this. The thing that we don't know: we know that it's meromorphic, and it has a functional equation, an Euler product and all that, but we don't know that it's entire. It might have poles, it might have singularities, like at the zeros of zeta of s or something like that. So we don't know that; that's an open question.
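A sketch, not from the talk, of the cusp form c16 = E4 times Delta just mentioned: building E4 and E6 from their divisor-sum Fourier expansions, getting Delta as (E4 cubed minus E6 squared)/1728, and checking both the multiplicativity of the c16 coefficients and Deligne's bound for the first few tau(p).

```python
# Sketch (not from the talk): c16 = E4 * Delta has multiplicative
# coefficients, and |tau(p)| < 2 p^(11/2) for small primes.
N = 32

def sigma(k, n):
    return sum(d ** k for d in range(1, n + 1) if n % d == 0)

def mult(a, b):                                   # truncated series product
    c = [0] * N
    for i in range(N):
        for j in range(N - i):
            c[i + j] += a[i] * b[j]
    return c

E4 = [1] + [240 * sigma(3, n) for n in range(1, N)]
E6 = [1] + [-504 * sigma(5, n) for n in range(1, N)]

E4cubed = mult(mult(E4, E4), E4)
delta = [(x - y) // 1728 for x, y in zip(E4cubed, mult(E6, E6))]

c16 = mult(E4, delta)                             # weight 16 cusp form

for p in (2, 3, 5, 7, 11, 13):                    # Deligne's bound for tau
    assert abs(delta[p]) < 2 * p ** 5.5
print(c16[1:7])
```

Note that c16(6) equals c16(2) times c16(3), which is the multiplicativity the slide points out; the division by 1728 is exact, reflecting the integrality of the tau coefficients.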
But here's an example of what you can do with these L functions. Take that polynomial, we're just looking at x to the fourth minus x plus 1, and ask: how does that polynomial factor mod p? Well, if you factor it mod 3, it factors into a linear times a cubic. Mod 7 it just stays irreducible. Mod 23 it factors into two linears and a quadratic. Mod 193 you get four linear factors, and mod 173 you get two quadratic factors. Well, say you look at the first 10,000 primes, and you do statistics about how often it factors into each type: four linears; a linear and a cubic; linear, linear, quadratic; quadratic, quadratic; and irreducible. And these are the number of times over those 10,000 primes that each happens. And you see that these are almost exactly these fractions. And these fractions are the frequencies of cycle types of permutations in S4. Right? You've got 24 elements in S4, and you look at how the permutations are written as products of cycles. Like 1, 1, 1, 1, that's just the identity; there's just one of those, so that's 1 over 24. Anyway, this is actually a consequence of what's called the Chebotarev density theorem. Oops, what did I do? Oh, there it is. Yeah, the Chebotarev density theorem says that this actually works. Here's Artin. So Dedekind conjectured that zeta K divided by zeta is always entire. And Artin conjectured that in the factorization, the L(s, rho)'s are all entire. Okay, so, elliptic curve L functions. So y squared equals, say, a cubic in x. These are elliptic curves. You can make a group out of the points on an elliptic curve, and it's a finitely generated group, and the number of generators is the rank. And basically there's a way to make an L function out of these by counting points over finite fields. The first rank 2 curve has conductor 389, rank 3 is 5077, and rank 4 is around 234,000.
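A sketch, not from the talk, of the mod-p factorization statistics just described: count the roots of x^4 - x + 1 mod p, and when there are none, distinguish two quadratics from an irreducible quartic by testing whether x^(p^2) = x in F_p[x]/(f). The Chebotarev density theorem predicts the S4 cycle-type frequencies: 1/24 for four linears, 1/3 for linear times cubic, 1/4 for two linears and a quadratic, 1/8 for two quadratics, 1/4 for irreducible.

```python
# Sketch (not from the talk): factorization types of x^4 - x + 1 mod p.
def primes(count):
    ps, n = [], 2
    while len(ps) < count:
        if all(n % p for p in ps if p * p <= n):
            ps.append(n)
        n += 1
    return ps

def mulmod(a, b, p):
    """Product of two cubic-or-less polys, reduced mod (x^4 - x + 1, p)."""
    c = [0] * 7
    for i in range(4):
        for j in range(4):
            c[i + j] = (c[i + j] + a[i] * b[j]) % p
    for d in (6, 5, 4):                    # use x^4 = x - 1 repeatedly
        c[d - 3] = (c[d - 3] + c[d]) % p
        c[d - 4] = (c[d - 4] - c[d]) % p
        c[d] = 0
    return c[:4]

def x_pow(e, p):
    """x^e in F_p[x]/(x^4 - x + 1), by square-and-multiply."""
    result, base = [1, 0, 0, 0], [0, 1, 0, 0]
    while e:
        if e & 1:
            result = mulmod(result, base, p)
        base = mulmod(base, base, p)
        e >>= 1
    return result

def factor_type(p):
    roots = sum(1 for x in range(p) if (pow(x, 4, p) - x + 1) % p == 0)
    if roots == 4: return "1+1+1+1"
    if roots == 2: return "1+1+2"
    if roots == 1: return "1+3"
    # no roots: all factors have degree dividing 2 iff x^(p^2) = x mod f
    return "2+2" if x_pow(p * p, p) == [0, 1, 0, 0] else "4"

tally = {}
for p in primes(400):
    if p == 229:                  # 229 divides the discriminant: skip it
        continue
    tally[factor_type(p)] = tally.get(factor_type(p), 0) + 1
print(tally)
```

Even with only a few hundred primes, the counts sit close to the S4 proportions, just as the 10,000-prime table on the slide does.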
Okay, so you count points modulo p, make an Euler product, and an L function. So for example, here is the one with conductor 11: y squared plus y equals x cubed minus x squared. So you count the number of points mod p for every p, and you define a_p to be p plus 1 minus N_p, and you make your L function like this. Okay, and that turns out to be an L function and to have a functional equation. Now in this case there's actually, alternatively, an infinite product. That doesn't always happen. But anyway, you have this functional equation. And the fact that elliptic curves lead to entire L functions, or that they're related to modular forms, was proven by Andrew Wiles. And that was how he proved Fermat's last theorem. So that's another application of L functions, to Fermat's last theorem. There's the Birch-Swinnerton-Dyer conjecture: if the elliptic curve has rank r, then the r-th derivative of the L function divided by r factorial should be expressible in terms of a bunch of invariants of the elliptic curve, a regulator, the period, the Tate-Shafarevich group, the number of torsion points, and so on. And this is one of the million dollar Clay millennium problems. So you've got the Riemann hypothesis and Birch-Swinnerton-Dyer: two out of the seven are L function problems, basically. And, okay. And Gauss's problem two, the effective determination of all imaginary quadratic fields of a given class number, actually was solved by Goldfeld, Gross and Zagier. Goldfeld in 1974 said, well, if you had an L function with a triple zero, you could do this and that and then get an effective lower bound. And then Gross and Zagier in an amazing paper actually proved that for the elliptic curve of conductor 5077, the L function really does have a triple zero at s equals 1. So, for example, now we know all the imaginary quadratic fields with a given class number up to 100. So this is the curve they used. So, that Hecke character, oh my goodness, okay.
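A sketch, not from the talk, of the point count just described for the conductor-11 curve: tally the solutions of y^2 + y = x^3 - x^2 mod p by brute force, add the point at infinity, and form a_p = p + 1 - N_p.

```python
# Sketch (not from the talk): a_p for the conductor-11 curve
# y^2 + y = x^3 - x^2.
def a_p(p):
    affine = sum(1 for x in range(p) for y in range(p)
                 if (y * y + y - x ** 3 + x * x) % p == 0)
    return p + 1 - (affine + 1)            # +1 for the point at infinity

print({p: a_p(p) for p in (2, 3, 5, 7, 13)})
```

These values match the coefficients of the weight-2 cusp form given by the infinite product q times the product of (1 - q^n)^2 (1 - q^(11n))^2, which is the "alternatively, an infinite product" remark above.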
The Hecke character with the real quadratic field had these weird imaginary parts in the gammas. And, well, actually the test for the functional equation would be just like that. But what happens is, if you make this function, okay, so the Mellin transform involves this K-Bessel function here. And if you make this f of x, y like this, so it's now a function of two real variables instead of a complex variable z, it satisfies this differential equation. Okay, now this is the non-Euclidean Laplacian, the Laplacian on the upper half plane that's compatible with the SL(2, Z) action. Okay, and so that Hecke character gives an example of that. Now for that one we actually know specifically what these a_n's are. They're those weird logs, right? But Maass in 1949 realized that there could be others of these things. And so he sort of opened the door to a whole new set of L functions, coming from eigenfunctions of that Laplacian, and they have multiplicative coefficients, amazingly enough. But very, very strange: nobody has any idea what these numbers are. The eigenvalue is just this, like the 9.5 here, and who knows, it could be absolutely anything. And you can do this for GL(3). So these are degree three Maass forms now, satisfying this, and here the R1 and R2 are parameters for what we call level one. And this shows where they are. They're just, yeah. These were only calculated in 2008, the very first example of this, but now we've got lots of them. Rankin-Selberg: if you take tau of n squared, that's an L function. Its gamma factors look like that. There's Rankin, there's Selberg. Here you can take delta and that c16 one, and it has an L function with that kind of functional equation. The symmetric square that Ritabrata mentioned: that's a degree three L function. And basically it's the Rankin-Selberg of delta times delta, but divided out by zeta. So here's an example where you can divide out the zeta and the thing you're left with is entire.
So Shimura proved that, in 1975. Now these elliptic curve L functions: if you look at the p-th coefficient, you scale out that square root of p and you're left with a cosine of theta p. Well, the theta p's are distributed like that, the sine squared law. Or so they seemed; that was a conjecture for a long time. But Richard Taylor proved it, using properties of the symmetric powers of the elliptic curve L functions. All right, I need to stop, right? Okay. So, genus two curve, a hyperelliptic curve. Now you're getting into a degree four L function. Okay. Well, let's see, that's its Z function; it has rank four, so a zero of order four. And this big peak here is right at the first zero of the Riemann zeta function, which is kind of crazy. Okay. All right. The Sato-Tate pictures for genus two curves can be sort of anything. There's a bunch of other L functions. Langlands has a way of describing L functions in terms of automorphic representations. And Selberg has a way to define L functions very classically, very concrete and down to earth, and he conjectures that anything that satisfies this satisfies the Riemann hypothesis. And there's a giant database of L functions, which I invite you to look at, LMFDB.org. It has millions of pages with kind of all the L functions all over the place, and everything you might possibly want to find, all the sources of the L functions. It's quite a large project: about 90 people have contributed to it, and it's ongoing. And okay. So thank you very much. Thanks, Brian, for a tour of the world of L functions. We have time for a few quick questions. Yes. It would be good to also have a question from a student. Regarding Hasse-Weil L functions: we defined them for elliptic curves, but isn't it possible to define them for general abelian varieties? And if this is the case, is it of any kind of importance to do this for general abelian varieties, or just? Oh yes, absolutely.
That's a big, big open question: the Hasse-Weil zeta function. So for whatever your variety is, you count points and make zeta functions that way, for each mod p, and then you multiply them all together, your Euler factors, and then, is that an L function? Yes, but is there any analog of the conjectures about these L functions, like the Birch-Swinnerton-Dyer conjecture? Yeah. So yes, there are certainly special values: like for the Siegel modular forms, the degree four ones, the central value also has a, yeah. So in general, I think the answer is yes. Maybe Fernando can answer that better. Maybe not now. No? Okay, so the answer is yes: Deligne, Beilinson. Okay, there we go. There are all kinds of things about values of L functions of any kind that are generally described by Beilinson and Deligne. Yeah, there certainly is something, but it would take long to make precise. Any other question? Yeah. Thank you. It's a very nice, very interesting talk. How much are computers being used? Because I can see the plots, of course; you're using computers for getting those peaks and so on. But I saw that the record for the number of zeros was just from 2004, and I guess in the last 15 years we've had little. I think when they got to 10 trillion, people just said, okay, that's enough for that particular one. But yeah, so on this website you can find lots of things: certainly the first few zeros of basically every L function that's listed there. But I have to say, for the Siegel cusp forms of level one, the L functions have not actually even been calculated. I mean, this LMFDB website is really very intensively computer-based, with people finding algorithms. And that pushes a lot of the research: how do you actually calculate all these things, you know? But yeah. And are people using machine learning? So many of these conjectures you can...
I don't know of anyone using machine learning on L functions, but that would be good. Though if they take over that business, and we're all out of work, then we're in trouble, right? Okay, thank you. Any questions? I have a quick question, Brian, from the historical point of view. I was always curious about which methods people use to actually pinpoint the zeros of the zeta function. For example, you said that Riemann already knew the first three zeros up to certain decimal places. And nowadays, with computers, we know this huge number of zeros. So those zeros are not identified exactly, but rather verified to be there? Right. If all you're trying to do is verify that the first 100 zeros are on the line, then you find 100 sign changes of the Z function. And then with the argument principle you can give an exact formula for the number of zeros up to any height, calculate exactly how many there are, and just make sure they're all accounted for. You find the sign changes of Z(t), and the intermediate value theorem gives you a zero between each pair, right? Yeah, by sign changes. So, just to say, if there were ever a double zero someplace, then you couldn't get it that way. You can't detect that on a computer, and you would be stuck: you would not be able to verify, by what we know right now, that the Riemann hypothesis holds there. You can't tell the difference between a double zero and two zeros very close together on the line, or off the line. And so it's conjectured there aren't any multiple zeros, except for the ones down at the central point. Yeah. Yeah, but if you want to calculate the zeros precisely, then you have to zoom in, right, on that sign change. Other comments or questions? We have now some refreshments outside. At 5:15, I remind everybody, there is the showing of the movie that was mentioned, about the life of Abdus Salam.
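A sketch, not from the talk, of the sign-change verification discussed above, in miniature. Here zeta(1/2 + it) is computed from the alternating (eta) series with Borwein's acceleration, theta(t) from its standard asymptotic expansion, and Z(t) = exp(i theta(t)) zeta(1/2 + it) is real, so counting its sign changes counts zeros on the critical line.

```python
# Sketch (not from the talk): counting zeros of zeta on the critical
# line by sign changes of the Riemann-Siegel Z function.
import cmath
from math import log, pi

def zeta_half(t, n=60):
    """zeta(1/2 + it) via Borwein's alternating-series acceleration."""
    s = complex(0.5, t)
    terms = [1.0]                           # t_j, with d_k = sum_{j<=k} t_j
    for j in range(1, n + 1):
        terms.append(terms[-1] * 4 * (n + j - 1) * (n - j + 1)
                     / ((2 * j) * (2 * j - 1)))
    d_n = sum(terms)
    c = [0.0] * n                           # c[k] = d_n - d_k, summed from
    tail = 0.0                              # the top to avoid cancellation
    for k in range(n - 1, -1, -1):
        tail += terms[k + 1]
        c[k] = tail
    eta = sum((-1) ** k * c[k] * cmath.exp(-s * log(k + 1)) for k in range(n))
    return eta / d_n / (1 - 2 ** (1 - s))

def theta(t):                               # asymptotic series; t not small
    return (t / 2 * log(t / (2 * pi)) - t / 2 - pi / 8
            + 1 / (48 * t) + 7 / (5760 * t ** 3))

def Z(t):
    return (cmath.exp(1j * theta(t)) * zeta_half(t)).real

ts = [10 + 0.05 * k for k in range(401)]    # grid on [10, 30]
vals = [Z(t) for t in ts]
changes = sum(1 for u, v in zip(vals, vals[1:]) if u * v < 0)
print("sign changes of Z on [10, 30]:", changes)  # zeros near 14.13, 21.02, 25.01
```

As the discussion above notes, this only certifies simple zeros: a double zero would touch the axis without a sign change and this method could never confirm it.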
Otherwise, we finish the mathematics part, and thank Brian and Ritabrata for their talks.