Okay, hi everyone. Thank you for being here with us today. This is the first Basic Notions seminar of 2021. It's a pleasure for me to present our speaker, Professor Emanuel Carneiro from ICTP. Today he's going to talk about Fourier optimization and number theory. If you have any questions, you can write them in the question-and-answer section on Zoom and I will read them out, or at the end you can raise your hand and we can allow you to talk. So thank you, Emanuel, and I leave you with him. Thank you. Thank you, Andrea. Thank you everyone for stopping by this afternoon. Can you all hear me fine? I suppose so. Let me share my screen with you. All right. Welcome everyone, my big audience here in the Budinich Lecture Hall and my nice audience following from wherever you are. It's a pleasure to be here. This is the second time I have the privilege of giving a Basic Notions seminar at ICTP. In my mind, this is supposed to be a talk targeted a little bit at the students and at people who are learning mathematics or are in their PhDs, so this conversation this afternoon is going to be a little bit light. It's also being recorded, so if you have seen parts of this talk before, I apologize in advance. The title is Fourier optimization and number theory, and I want to convey to you some ideas, without going into too much detail this afternoon, about how to combine tools from analysis and number theory to obtain some nice results and some nice insights. You see here the pictures of the two main actors of this talk: Fourier on the left and Riemann on the right. Let's have some fun. My vision for the talk today is that a mathematical lecture could be as interesting as going to the movies. It could have some fun. So let me know at the end of the talk whether I at least partially fulfilled my promise to you.
So it all starts with a trailer of what we are going to see. In the next minutes, here is what I promise you: we will see together a little bit of history; I will present to you some excellent business opportunities; we will understand how ballroom dancing can help in your mathematical career; we will have a special guest lecturer appearing; we will see how it's possible to make great friendships by accident; and most of all, I expect you to have some fun along the way. Part one of the lecture today is called zeros and primes. I want to refresh your memory about the Riemann hypothesis. The last Basic Notions seminar that I gave here, maybe three and a half or four years ago, was about the Riemann hypothesis, and I want to start today where I left off the other day. One of my favorite quotes by a mathematician is this one by David Hilbert. He said that if he woke up from a thousand-year sleep, the first thing he would ask is whether the Riemann hypothesis had been solved. As you know, the Riemann hypothesis has appeared in the famous lists of problems over the last century. More recently, it appeared in the list of seven millennium problems offered by the Clay Mathematics Institute, so it's worth a million dollars if you can solve it. Here's the first nice business opportunity that I promised you: if you want to get a million dollars, this is one way to do it. Here's one of our main actors, Riemann. He had a short life: he was only 39 years old when he died. He was a professor at Göttingen. He made several marvelous contributions to mathematics, notably the Riemann integral, Riemannian geometry, and many, many more. And in number theory, he wrote just one paper, but it turned out to be a very influential paper.
It was an eight-page paper published in November of 1859, in which he discusses some properties of what became known as the Riemann zeta function. So this function that's there on your screen is called zeta of s. It's just the sum of the inverse powers: 1 plus 1 over 2 to the s plus 1 over 3 to the s plus 1 over 4 to the s, and so on. You start with s being a complex variable with real part bigger than 1 to make this sum well-defined and absolutely convergent. By using the fundamental theorem of arithmetic, you can just factor: the sum over all the integers factors as the sum over the powers of 2 times the sum over the powers of 3 times the sum over the powers of 5, and so on. And each of these sums over prime powers is now a geometric progression, so you can actually evaluate the sum, and you get what's called the product formula for the Riemann zeta function, valid to the right of 1. From this product formula, you already see that this function has no zeros there. Before Riemann, Euler had already worked with similar sums, but mainly taking the variable s to be real and not complex. Well, in this paper, Riemann presented some of the most important properties of the Riemann zeta function. For example, he showed that this function, which you start to the right of 1, can somehow be manipulated and brought to be well-defined to the right of 0, having a pole at s equals 1. Then he went ahead and showed that in the strip between 0 and 1 this function satisfies a very nice functional equation. You define the function xi of s to be just zeta of s multiplied by some well-known factor, namely s times s minus 1 times pi to the minus s over 2 times gamma of s over 2, where gamma is the gamma function. Written in this form, this Riemann xi function satisfies a very neat functional equation, namely xi of 1 minus s is just xi of s. So we have a reflection here.
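To make the product formula concrete, here is a small numerical sketch in Python (the function names are illustrative): for real s greater than 1, a truncated Dirichlet series and a truncated Euler product over primes both approach zeta of s, which for s = 2 equals pi squared over 6.

```python
import math

def zeta_series(s, N=200000):
    """Partial sum of the Dirichlet series 1 + 1/2^s + 1/3^s + ... (real s > 1)."""
    return sum(n ** -s for n in range(1, N + 1))

def primes_up_to(N):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (N + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(N ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, N + 1, p):
                sieve[m] = False
    return [p for p in range(2, N + 1) if sieve[p]]

def zeta_euler(s, N=200000):
    """Truncated Euler product over primes: prod_p 1 / (1 - p^{-s})."""
    prod = 1.0
    for p in primes_up_to(N):
        prod /= (1.0 - p ** -s)
    return prod

# At s = 2 both truncations approach zeta(2) = pi^2 / 6 = 1.6449...
print(zeta_series(2.0), zeta_euler(2.0), math.pi ** 2 / 6)
```

The agreement of the two truncations is exactly the fundamental theorem of arithmetic at work.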
And since you have a function which is now defined to the right of 0, you can use this functional equation to define the function in the whole complex plane by analytic continuation. Okay, so now here's the catch. From the product formula, the Riemann zeta function has no zeros to the right of 1. From the functional equation, the Riemann zeta function has the trivial zeros at the negative even integers; these have to come to kill the poles introduced by the gamma function. All the other zeros of the Riemann zeta function must have real part between 0 and 1. And here is where the question lies. Riemann already knew how to count the zeros, and this function actually has a lot of zeros. He already knew that if you go up to a certain height t (the height here is just along the imaginary axis, from 0 to t), you have roughly t log t over 2 pi zeros. So that's a lot of zeros. And the Riemann hypothesis that he proposed in this paper is that all of these zeros are aligned on the so-called critical line, the line where the real part equals one half. So here, if you have never seen this before, is a nice image. This is the original manuscript of Riemann. If you're curious to see what Riemann's handwriting was like, here it is. There you can see the Riemann zeta function and the product formula written out. Here's the version he submitted to the journal, and here's the part of the manuscript where he proposes the Riemann hypothesis. You see it's written relatively innocently, in the sense that he says (and here's a rough translation to English from the German manuscript): one now finds indeed approximately this number of roots within these limits (he's referring to the counting of the zeros), and it is very probable that all the roots are real. Here he's working with a rotation of the axis.
So what he means to say is that all the roots are probably aligned. He continues: certainly one would wish for a stricter proof here; I have meanwhile temporarily put aside the search for this after some fleeting futile attempts, as it appears unnecessary for the next objective of my investigation. So he just mentions that all of these roots are probably aligned, but he does not prove it. And neither do we, 161 years later. Here's the published version of the manuscript, which appeared in the monthly reports of the Berlin Academy of Sciences in November of 1859. Okay. Here's one of my favorite papers in the theory of the Riemann zeta function. Lots of progress has been made, you know. We already know computationally that the first ten million zeros (and in fact many more) are on the critical line, where they should be. But this paper is an example of a paper that proves that 40% of the zeros are on the critical line. It's a paper by Brian Conrey: more than two fifths of the zeros of the Riemann zeta function are on the critical line. So he is able to prove that 40% are where they should be. Of course, this is an asymptotic result: if you go high enough, at least 40% of the zeros are where they should be. This was just to introduce a funny story that happened to me, as I promised you in the beginning of the talk: how ballroom dancing affected my life. I don't know if I have told this to many people, but I was a postdoc at the Institute for Advanced Study in 2009. They were having a thematic year in analytic number theory, especially in celebration of the 150 years of the Riemann hypothesis. Many people, about a hundred, spent the whole year there. Lots of the greatest mathematicians of our time who work in number theory were there. Of course, I was just a young guy; I never had much of a chance to talk to any of these big guys.
Until one day in the fall. And I was able to find this on the internet, to prove to you that I was not telling you lies: I found this newsletter that they had every three months. This was the one from the year I was there, where they invite people to the many activities they had for the families at the Institute. This particular one was inviting people to ballroom dancing lessons; they had an end-of-the-year ball, an end-of-the-year gala there, and people who wanted to do ballroom dancing could go. At the time, of course, my wife got this and got very excited: we should go, we should go. And I was kind of, I'm not so sure, I'm not so good at this; I was trying to give her some excuses. The first week, I could give some excuse. The second week, I could give some other excuse. The third week, I could give some other excuse. Until the fourth week, I guess, when I ran out of excuses. And this is when I went there. And among the other people in the ballroom dancing lessons, there was just one other mathematician, who is the author of that paper. He knew me from the corridors; we never had the chance to talk, since I was just a young postdoc. But in the ballroom dancing lessons, he looked at me and said, well, you're a mathematician, aren't you? I said, yes, I am, and I know who you are. He said, yeah. And he looked at me and just said, well, it seems that you could not give a proper excuse to your wife here today either, right? And I said, yes. So here we are. After that day, we ended up playing cards and discussing little bits of mathematics. So the moral here is: ballroom dancing makes friends. It happened for me. And some of the nice pictures that you see in this talk are provided by this good friend, Brian Conrey, from the American Institute of Mathematics. All right.
The second part of the talk today is to discuss a little bit about oscillations. And when you talk about oscillations, you want to talk about Fourier. Jean-Baptiste Joseph Fourier was a French mathematician. He has his name on the Eiffel Tower. This is a picture I took the first time I went to Paris. I was impressed by how the French pay such a nice tribute to their scientists, with 18 names on each side of the Eiffel Tower, so there are 72 names of scientists there. Just out of curiosity, there are 21 names of mathematicians on the Eiffel Tower, and Fourier is one of them. The whole idea of Fourier was to decompose complicated functions into simpler pieces. Today we know this as Fourier series, which is just an expansion of, say, a periodic function into its basic components: sine of x, sine of 2x, sine of 3x, sine of 4x, and the cosines. So here's an example. At a very basic level, this is actually what's behind all sorts of telecommunications that we have nowadays. Whenever you want to transmit a signal, what is done is the following: you have a certain signal, you express it as a Fourier series, and you take the first 30 or 40 or 50 coefficients, which is essentially what our ear can hear or our eye can see. And then, instead of sending an infinite number of coefficients, you just send this sample of 40 or 50 to the other side. The person on the receiving end gets these 40 coefficients and reconstructs the rest essentially arbitrarily, because we don't care about it. And this is how a signal is transmitted. Here's an example of how to write a function as a superposition of sines and cosines; you see, the more terms you take, the better precision you have. Now, the theory of Fourier series is developed in the periodic world: you have a periodic function, and you have its Fourier series.
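As a small illustration of this idea of truncating a Fourier series, here is a Python sketch (function names are illustrative) of the partial sums of the square wave sgn(sin x), whose Fourier series is (4/pi) times the sum of sin((2k+1)x)/(2k+1): away from the jump, a few dozen terms already give a decent approximation, and more terms give better precision.

```python
import math

def square_wave_partial(x, n_terms):
    """Partial Fourier series of the square wave sgn(sin x):
    (4/pi) * sum_{k=0}^{n_terms-1} sin((2k+1) x) / (2k+1)."""
    return (4.0 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms)
    )

# Away from the discontinuity (here x = pi/2, where the wave equals 1),
# taking more terms improves the approximation:
for n in (5, 50, 500):
    print(n, square_wave_partial(math.pi / 2, n))
```

Near the jump itself the partial sums overshoot (the Gibbs phenomenon), which is why one evaluates away from the discontinuity here.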
If you are working in the Euclidean space, and for now let's just work on the real line R, what you have is the Fourier transform. So start with a function f. Let me just mention the definition here. We start with a function f in L1, meaning an integrable function on R. Whenever you have this, you can define an object that we call the Fourier transform, f hat. And f hat of t is given by this integral: it's the integral over the whole real line of your function f of x multiplied by e to the minus 2 pi i t x. For those of you who have never seen this, e to the i theta is just cosine theta plus i sine theta. Okay? So this is one of the most famous objects in harmonic analysis. And harmonic analysis is essentially the subject that proposes to understand the nature of oscillatory phenomena. This is an example of an oscillatory operator, where you have this kernel e to the minus 2 pi i t x that oscillates rapidly. So you expect that in this integral you have lots and lots of cancellation: when your parameter t is very big, you have more and more oscillation, so the Fourier transform should be small. Okay? But anyway, this is a well-defined operator; you define it on L1. If the Fourier transform has sufficient decay, you can invert this operator, so you can just recover your function f of x by taking the inverse Fourier transform of f hat. We have a nice theorem of Plancherel that says: if you start with a function that is in L1 intersection L2, or, let's say, start with a Schwartz function, then the Fourier transform preserves the L2 norm. So the L2 norm of f is the L2 norm of f hat. This means that if you start in a dense subspace, say the Schwartz functions, you can extend this transform to the whole of L2. So this is really the right environment where the Fourier transform lives: it's an isometry on L2. Now, throughout this talk, I will talk a lot about band limited functions.
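A quick numerical sanity check of this definition, sketched in Python (names are illustrative): the Gaussian e^{-pi x^2} is its own Fourier transform, and one can verify this by discretizing the defining integral; since the Gaussian is real and even, only the cosine part of e^{-2 pi i t x} survives.

```python
import math

def fourier_transform(f, t, X=10.0, dx=1e-3):
    """Numerical Fourier transform hat{f}(t) = int f(x) e^{-2 pi i t x} dx,
    by the midpoint rule on [-X, X]. For a real even f the imaginary part
    cancels, so only the cosine part is kept."""
    n = int(2 * X / dx)
    total = 0.0
    for i in range(n):
        x = -X + (i + 0.5) * dx
        total += f(x) * math.cos(2 * math.pi * t * x)
    return total * dx

def gaussian(x):
    return math.exp(-math.pi * x * x)

# The Gaussian e^{-pi x^2} is its own Fourier transform:
for t in (0.0, 0.5, 1.0):
    print(t, fourier_transform(gaussian, t), gaussian(t))
```

The rapid decay of the Gaussian is what makes this naive truncation of the integral so accurate.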
In the language of telecommunications, a band limited function is nothing else than a function f such that f hat has compact support. Compact support is something which is particularly interesting for the transmission of signals. These are somehow the simplest functions: the functions whose Fourier transform has compact support. Okay, I told you that we were going to have a guest lecturer today, and here he comes. I wanted to make a point, especially to the students who are watching this video. This is a little bit of what I try to say to the students: when you're learning at this level, in your undergrad, in your master's, in your PhD, you're going to take a bunch of courses, and you should really pay attention and try to learn them very well. Because the most fruitful ideas in mathematics today occur when you actually combine tools from different fields. So when you have the chance, for example, to learn analysis, you will take a bunch of courses with an analytic flavor: real analysis, complex analysis, harmonic analysis, analytic number theory, functional analysis, geometric measure theory, PDEs, and so on and so forth. All of these courses are important, and you should really pay attention to the main results and get the philosophy behind each of them. You never know when you're going to need to use these results. You should focus on the fundamentals and on learning all of these things well. My analogy here, as I remember I told you in this very same auditorium (and Massimo was recording this as well, and he liked it very much), was with Mr. Miyagi teaching karate to Daniel-san. I don't know if you have seen this movie; you're probably younger than me, watching this lecture. But this was a nice movie in the 80s, when I was a kid.
I remember very vividly watching this movie, where the young boy Daniel-san is being bullied at school. He somehow gets to know Mr. Miyagi, who seems to know karate, and Daniel-san asks Mr. Miyagi: teach me karate so I can defend myself at school. And Mr. Miyagi says, sure, I will teach you karate. And then he takes Daniel-san to his house and starts to give him house chores. He asks Daniel-san to paint his fence, to wax his car, to sand the floor. And Daniel-san spends two, three weeks doing just that. After three weeks he gets pissed off, because he didn't learn any of the karate he was supposed to, and he confronts Mr. Miyagi: what the hell, man, you promised to teach me karate, and I'm just doing your house chores here. And this is how Mr. Miyagi teaches analysis. Let me show the video to you. Very nice. Okay, so now it's time to practice. You have a solid foundation; you have learned many things in all of these many courses. Let me show you just a little bit of how we can combine some tools from some of these courses to produce some nice results. And this is what I call arriving at Fourier optimization wonderland. This is the topic of the talk today: how to use some optimization problems that arise in Fourier analysis (these are going to be purely analytical problems) to draw conclusions about some problems in number theory. First, let me give you some motivation for what I'm going to do. This slide might look a little bit dense, but you don't have to pay too much attention to it; the philosophy is the following. I'm going to apply the residue theorem from complex analysis. You start with the function xi, Riemann's xi function, which is just zeta multiplied by some factors. And in blue there, in the upper right hand corner, you see the symmetries that this function xi satisfies.
You have the functional equation: xi of 1 minus s is xi of s. And you also have that xi is a real entire function, in the sense that it's real on the real axis, so xi of s bar is equal to the conjugate of xi of s. Now, remember, this function xi has the same non-trivial zeros as zeta. So there are two things you can do with that. You can take the logarithmic derivative of this function, xi prime over xi; you just expand, and you have a zeta prime over zeta appearing here, which is something that has a well-known Dirichlet expansion. Minus zeta prime over zeta can be written as a Dirichlet series: it's the sum of this function Lambda of n over n to the s, where Lambda of n is a certain function that encodes the information about the primes. Lambda of n is log p if n is a power of the prime p, and zero otherwise, okay? You get this expansion for zeta prime over zeta directly from the product formula: if you just take the product formula for zeta and take the logarithmic derivative, you arrive at this. Now, I told you I wanted to do some sort of residue theorem here in complex analysis. So I start with a function h. Let h be a good function, and by good here I mean a function that has the right decay properties, because I'm going to send some of these lines to infinity. If you start with a good function h, let's see: I want to sum h over the zeros of zeta, or rather over the ordinates of the zeros of zeta. Roughly speaking, I want to sum h of rho minus a half over i, where rho runs over the zeros. And I can write this as this contour integral, over the contour C highlighted on the right, of the function h of s minus a half over i times the logarithmic derivative of xi. This xi prime over xi there picks up the zeros of xi as its poles. So by an application of the residue theorem, you get that this integral over this contour is just the sum of h over these zeros.
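The von Mangoldt function Lambda is easy to play with numerically. Here is a Python sketch (illustrative, not optimized) that computes Lambda(n) and checks the classical identity that the sum of Lambda(d) over the divisors d of n equals log n, which is the arithmetic counterpart of the Dirichlet series for minus zeta prime over zeta.

```python
import math

def von_mangoldt(n):
    """Lambda(n) = log p if n = p^k for a prime p, else 0."""
    if n < 2:
        return 0.0
    # smallest prime factor of n
    p = next(d for d in range(2, n + 1) if n % d == 0)
    m = n
    while m % p == 0:
        m //= p
    # n is a prime power exactly when dividing out p leaves 1
    return math.log(p) if m == 1 else 0.0

# Check the classical identity sum_{d | n} Lambda(d) = log n:
for n in (12, 30, 64, 97):
    s = sum(von_mangoldt(d) for d in range(1, n + 1) if n % d == 0)
    print(n, s, math.log(n))
```

The identity holds because the prime factorization of n contributes log p once for every prime-power divisor p^j of n, and these add up to log n.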
Now, this is one way of evaluating the integral. Another way, of course, is to expand what's on the right hand side, and to do that, you have to understand the symmetries. This xi has a lot of symmetry. So if you start with the contour, which has two green sides and two red sides as indicated there, you can use the symmetry between s and 1 minus s to relate the two greens, and you can use the symmetry between s and s bar to relate the reds. So you can transform this integral over a rectangle into essentially an integral over just one side of the rectangle, half of it. And then we do what we usually do in complex analysis: once you have reduced to that, you send the parameter big T, which is the height, to infinity, and then you have just a green line to the right. You shift this green line a little bit towards the critical line. If you do these computations properly and patiently, what you get is what we call an explicit formula. So remember, on the left here you have the expression that was on the left hand side before: the sum of the function h over the zeros. And on the right is what you get when you do this process: some terms with h evaluated at two points, which come from the poles, some integral of the logarithmic derivative of the gamma function, and some expression that comes from the zeta prime over zeta, because it has the Lambda factors here, times the Fourier transform of h. You don't have to memorize this formula; I don't expect you to. I just want to highlight its good features. It contains the zeros of zeta on the left hand side, and on the right hand side it contains the primes, encoded by this Lambda function, together with the Fourier transform of h. So these are three things that we like: zeros of zeta, prime numbers, and the Fourier transform.
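To see why a band limited h is so convenient on the prime side, here is a schematic Python sketch (the normalization is illustrative, not the exact formula from the talk): with h hat supported in [-1, 1], only prime powers n with log n at most 2 pi, that is n up to about 535, contribute to the sum, so the infinite sum collapses to a finite one.

```python
import math

def von_mangoldt(n):
    """Lambda(n) = log p if n is a power of a prime p, else 0."""
    if n < 2:
        return 0.0
    p = next(d for d in range(2, n + 1) if n % d == 0)
    m = n
    while m % p == 0:
        m //= p
    return math.log(p) if m == 1 else 0.0

def h_hat(xi):
    """A toy band limited choice: the triangle function, supported on [-1, 1]."""
    return max(1.0 - abs(xi), 0.0)

# Schematic prime-side sum of an explicit formula (the 1/sqrt(n) weight and
# the argument log(n)/(2 pi) are illustrative):
cutoff = math.exp(2 * math.pi)          # ~ 535.49; beyond this h_hat vanishes
total = sum(
    von_mangoldt(n) / math.sqrt(n) * h_hat(math.log(n) / (2 * math.pi))
    for n in range(2, 2000)
)
nonzero = [n for n in range(2, 2000)
           if von_mangoldt(n) > 0 and h_hat(math.log(n) / (2 * math.pi)) > 0]
print(len(nonzero), max(nonzero), total)
```

Every term with n beyond the cutoff vanishes identically, which is the point: compact support of h hat turns the prime side into a finite, computable sum.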
Of course, if you are working under the Riemann hypothesis, if you assume that the Riemann hypothesis is true, then the zeros have the form one half plus i gamma, and your formula simplifies, because then this h of rho minus a half over i becomes just h of gamma. So I'm really summing a function h over certain real numbers gamma. This is the bottom line of the story: if you have a good function h and you want to sum h of gamma over the ordinates of the zeros of zeta, there is a way to do this by means of these explicit formulas. Now, for this to work, I mean, for this to be meaningful, for you to have a chance to estimate things, the function h that you plug in must be a good function. For example, consider the term in orange there that has the Fourier transform: if I have a function h that is band limited, such that the Fourier transform has compact support, this would be good, because the infinite sum that appears there would just become a finite sum. So in most of my applications, I want to plug in a function h which is band limited. For some other applications, you want to plug in a function h which is negative outside some interval. Anyway, let me move on. I want to mention to you now four problems at the interface of number theory and harmonic analysis. The first problem is to estimate the size and the argument of zeta. Here's the zero counting formula of Riemann: the number of zeros up to height t is just t log t over 2 pi, minus t over 2 pi, plus a constant, plus this function S of t that appears here. This function S of t is just the argument: it's 1 over pi times the argument of zeta at the point one half plus i t. Okay. If you have a complex number, or a complex valued function, there are two basic quantities that you want to understand.
The first is the modulus of the complex number, and the second one is the argument. Here's what I'm going to do now. This argument function: here's how you define it. Of course, the argument of a complex number is well-defined only modulo 2 pi. So what you do, if you want to define the argument at the point one half plus i t, is the following; here's the little graph in green on your screen. You start at the point 2. You know what zeta of 2 is: it's just a positive real number. So you baptize the argument there to be zero; you define the argument of zeta at 2 to be zero. And then you go along a line, vertically up and then horizontally to the left, and you let the argument vary continuously along this line until you arrive at the point one half plus i t. If you did not hit a zero, your argument varied continuously, and you baptize the argument at that point to be what you get. If by some chance this line contains a zero, you cannot pass through it, so what you do is take the average of the limits coming from above and from below. Anyway, unconditionally, that means without assuming the Riemann hypothesis, one can show that this argument function is big O of log t. It means that if t is very large, this is bounded by a certain universal constant times log t; the argument doesn't grow too fast. In fact, there's an old paper of Littlewood from 1924, so almost 100 years old, showing that, on the Riemann hypothesis, both the size of zeta and the argument are not too big on the critical line: log of the absolute value of zeta of one half plus i t is big O of log t over log log t, and the same holds for the argument. So on the Riemann hypothesis you can do a little bit better than just log t: you can prove big O of log t over log log t. These are the best up-to-date estimates for the modulus of zeta and for the argument of zeta on the critical line.
Best in the sense that nobody in almost 100 years has been able to improve the order of magnitude of these estimates. All the improvements have occurred in the implicit universal constant that you can put in front of these estimates. So let me mention a couple of these results. This is a result of Chandee and Soundararajan from 2011. They proved, under RH, that for the size of zeta, log of the modulus of zeta of one half plus i t is at most this particular constant, log 2 over 2, which is the current record, times the main term, log t over log log t. This improved upon a previous result of Soundararajan from 2009, which was published in the Annals, by the way. And the idea, as you will see, passes through those explicit formulas that we discussed a little while ago. The idea is to express the quantity that you're interested in, the log of the modulus of zeta of one half plus i t, the object that you have at hand, as a well-known function, plus a sum over zeros, plus an error term. And this is exactly what's done there. The object that I want, the log of the absolute value of zeta, is a well-known function of size roughly log t, minus a sum of a function f of t minus gamma over the ordinates gamma of the zeros, plus an error term, big O of 1. And the function f that appears naturally connected to this problem is this function here: log of 4 plus x squared, over x squared. So you find yourself with this problem: you want to estimate an object, and this boils down to estimating a sum over zeros of a certain function f, but this function f is not good enough. Not good enough in the sense that we discussed before; I didn't actually tell you exactly what good enough meant, but certainly this function is not good enough, because it's not even a continuous function.
Okay, so in the previous slide I needed a function which was actually analytic in a strip containing the real line, and that's certainly not the case here. But what's the strategy? The strategy, as we'll see, is this: what's written here is an identity. Log of the modulus of zeta is equal to a well-known term, minus a sum over zeros, plus big O of 1. If you want to generate an inequality, you can replace the function f that appears there by some function that lies below it; that way you generate an inequality. So this is what we're going to do: replace it by a function that lies below it, hopefully a good function, and then try to estimate the sum. The same philosophy was applied in a theorem we proved later. This is a theorem of myself with Chandee and Milinovich, two years later, where we studied the analogous problem, not for the modulus of zeta but for the argument of zeta on the critical line. Here the strategy is the same: write your object, the function S of t, as a sum over the zeros of a certain function f, plus an error term. And the function f that appears naturally connected to this problem is arctangent of 1 over x, minus x over 1 plus x squared. This is an odd function that has a jump discontinuity at the origin, so it's not good for our purposes of applying the explicit formula. We adopt the same strategy: if we want to generate an inequality, we can replace this function f by another function that lies above it, or one that lies below it, and then we can generate an inequality. And hopefully the function that you place above or below is a good function, in the sense that you can plug it into the explicit formula and evaluate. This is what's drawn in this picture. You have your function there in bold black; that's the original function. And you see a little function that is above it.
It's majorizing this function. And I told you it would be great if this function that we choose as a majorant were a band limited function, a function whose Fourier transform has compact support. Then we can just plug it into the explicit formula, and one of the sums that appears there becomes relatively easy to tackle. This is a picture of Vorrapan Chandee and Micah Milinovich, with whom I did this work a few years ago. And this is the Fourier analysis problem that is connected to that number theory problem; you will see that it's a purely analytical problem. I give you a function f from R to R, and I want you to find functions L and M such that L is less than or equal to f everywhere and M is greater than or equal to f everywhere. So L minorizes f and M majorizes f. The support of the Fourier transforms of L and M is contained in an interval, let's say from minus 1 to 1. Given these constraints, I want you to minimize the distance from f to L and from M to f. Minimize the distance in which norm? You have to pick a norm, and for me the most convenient norm here is the L1 norm. You want to minimize the integral of M minus f, or the integral of f minus L. It turns out that this problem, which is purely analysis (you can present it without ever talking about analytic number theory), is actually a very old problem in approximation theory. It was considered by Beurling in the late 1930s and then revisited by Selberg in the 1950s. So here's a construction of a majorant for the characteristic function of an interval. Lots of nice applications in analytic number theory come just from replacing the characteristic function of an interval by a band limited majorant; this was one of the insights of Selberg and Beurling in the past. Well, this problem is generally hard, meaning that there is no obvious recipe for generating the solution. Here is a theorem that we proved a while ago, some 10 years ago.
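The actual Beurling-Selberg construction is intricate, but a simple example of a nonnegative band limited function, in the same spirit, is the Fejér kernel K(x) = (sin(pi x)/(pi x))^2, whose Fourier transform is the triangle function 1 minus the absolute value of t on [-1, 1]. The Python sketch below (function names are illustrative) verifies this numerically by computing the inverse transform of the triangle.

```python
import math

def fejer(x):
    """Fejer kernel K(x) = (sin(pi x)/(pi x))^2: nonnegative and band limited,
    with Fourier transform equal to the triangle max(1 - |t|, 0)."""
    if x == 0.0:
        return 1.0
    return (math.sin(math.pi * x) / (math.pi * x)) ** 2

def inverse_transform_of_triangle(x, dt=1e-4):
    """int_{-1}^{1} (1 - |t|) e^{2 pi i t x} dt, real by symmetry,
    computed with the midpoint rule."""
    n = int(2.0 / dt)
    total = 0.0
    for i in range(n):
        t = -1.0 + (i + 0.5) * dt
        total += (1.0 - abs(t)) * math.cos(2 * math.pi * t * x)
    return total * dt

# The inverse transform of the triangle reproduces the Fejer kernel:
for x in (0.0, 0.3, 1.5):
    print(x, inverse_transform_of_triangle(x), fejer(x))
```

This toy example only illustrates what band limited with a sign condition can look like; the optimal majorants and minorants of the theorem are a different, much more delicate construction.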
Joint work with Friedrich Littmann and Jeffrey Vaaler. It essentially says that we have the most general framework for the solutions of this so-called Beurling–Selberg extremal problem, and it goes as follows: this framework establishes the solution of the problem whenever your function f is subordinated to a Gaussian. Gaussian subordination in which sense? Well, we have the solution whenever f is given by the integral of a Gaussian e^(−λπx²) — λ here is a parameter — integrated against a measure dμ(λ). So, essentially, f is a superposition of Gaussians. If f is of this form — this is the even case — or if f is the corresponding odd version, essentially the odd analogue of such an integral of Gaussians, then under some mild conditions on the measure we can generate the solution of the problem. For example, if the measure is a non-negative finite Borel measure, we get the solution — and by that I mean the problem has a unique solution, we can say what it is, and we can compute the value of the minimal integrals. Now, it's not obvious that the functions that appeared in connection with those two applications in number theory are of this form. For example, the function f(x) = arctan(1/x) − x/(1 + x²), which is an odd function — it's not obvious that it is the integral of a Gaussian against a certain non-negative finite measure. But it actually is: this crazy arctan function is just the integral of a Gaussian against the measure highlighted in orange there, which you can prove is a non-negative, finite Borel measure. Therefore this function falls into the framework that we had for the analysis problem.
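The arctan example needs the specific measure on the slide, but the flavor of Gaussian subordination can be checked numerically on a simpler classical instance (my choice here, not one from the talk): e^(−π|x|) is an integral of Gaussians e^(−λπx²) against a non-negative measure, via the standard identity e^(−b) = (1/√π) ∫₀^∞ u^(−1/2) e^(−u − b²/(4u)) du with b = π|x|.

```python
import numpy as np
from scipy.integrate import quad

def as_gaussian_superposition(x):
    """e^{-pi|x|} written as an integral of Gaussians e^{-lambda pi x^2}.

    Uses the classical identity
        e^{-b} = (1/sqrt(pi)) int_0^inf u^{-1/2} e^{-u - b^2/(4u)} du
    with b = pi|x|; substituting lambda = pi/(4u) exhibits the integrand as
    a Gaussian in x weighted by a nonnegative measure in lambda.
    """
    integrand = lambda u: np.exp(-u - (np.pi * x) ** 2 / (4 * u)) / np.sqrt(np.pi * u)
    val, _ = quad(integrand, 0, np.inf)
    return val

samples = [0.0, 0.3, 1.0, 2.5]
errors = [abs(as_gaussian_superposition(x) - np.exp(-np.pi * abs(x))) for x in samples]
```

The numerical integral agrees with e^(−π|x|) to high accuracy at every sample point, so this function sits inside the Gaussian subordination class.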
We can generate these optimal majorants and minorants with compactly supported Fourier transform; you go back to your problem, plug them in, and use the explicit formula together with a careful asymptotic analysis to actually compute what the main term is. That is the general strategy — of course the proof is full of technicalities that are not the point to present here; I just wanted to highlight the main strategy behind it. Moving on, this is a picture of Friedrich Littmann and Jeffrey Vaaler, who appeared in this work. A nice feature that appeared in these previous connections between bounding zeta and band-limited functions is one of my favorite theorems in harmonic analysis, the Paley–Wiener theorem. This is a bridge between harmonic analysis and complex analysis, and it says the following: for a function in L², the following two things are equivalent. One, the function is band-limited, meaning the support of its Fourier transform is contained in an interval [−δ, δ]. Two, f can be extended to an entire function of exponential type 2πδ, that is, an entire function satisfying the growth bound |f(z)| ≤ C e^(2πδ|z|) for some constant C. So you see, entire functions of exponential type are essentially the functions whose Fourier transform has compact support. This is a theorem of Paley and Wiener.
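As a quick sanity check of the Paley–Wiener growth rate (my own illustration): sinc(x) = sin(πx)/(πx) has Fourier transform supported in [−1/2, 1/2], so its entire extension should have exponential type 2π·(1/2) = π. Along the imaginary axis, sin(πiy)/(πiy) = sinh(πy)/(πy), which indeed grows like e^(πy)/(2πy):

```python
import cmath
import math

def sinc_entire(z):
    """Entire extension of sin(pi z)/(pi z)."""
    return cmath.sin(cmath.pi * z) / (cmath.pi * z) if z != 0 else 1.0

# Paley-Wiener predicts exponential type pi (Fourier support [-1/2, 1/2]):
# along the imaginary axis, |sinc(iy)| = sinh(pi y)/(pi y) ~ e^{pi y}/(2 pi y).
ratios = [
    abs(sinc_entire(1j * y)) / (math.exp(math.pi * y) / (2 * math.pi * y))
    for y in (5.0, 10.0, 20.0)
]
```

The ratios approach 1 (the exact value is 1 − e^(−2πy)), confirming that the growth along the imaginary axis matches the type predicted by the support of the Fourier transform.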
Wiener, whom you see in the picture, lived a long life; Raymond Paley lived only 26 years. If you are a harmonic analyst, you have probably used many of Paley's results in your research: Paley appears in Littlewood–Paley theory, in the Paley multipliers, in the Paley–Wiener theorem, and so on — and it's incredible that he did all of this before turning 26. As a matter of fact, he died in a ski accident: he was skiing in the Rocky Mountains in Canada, near Banff, and nowadays there is a conference center in Banff. When I went there for the first time, for a conference — this is a picture of me and my advisor, Jeffrey Vaaler; we are both huge fans of this theorem — I told him in the conference hall, "Hey, do you know that Paley is buried just ten minutes from here?" And he said, "No kidding." So we took a ten-minute walk from the conference center to the cemetery to take a picture, and you see the tomb of Paley there. Of course this is one of my favorite pictures. Every time I look at it — I put it here in black and white — I am reminded of another picture, from one of my favorite movies from my childhood, Back to the Future. My advisor was a great advisor to me, and here's an example of another great advisor and a somewhat rebellious student, with an object that is meaningful to both of them.
Third application: now we talk a little bit about prime gaps. This is a very simple problem to state. Let p_n be the n-th prime number. An old result of Cramér — a hundred years old, 1921 exactly — says that under the Riemann hypothesis the gap between two consecutive primes is not so big: he proved that p_(n+1) − p_n is O(√p_n · log p_n), meaning there is a universal constant that bounds p_(n+1) − p_n in terms of this quantity. Again, this is an estimate whose order of magnitude has not been improved in almost 100 years. The gap is actually believed to be much smaller than this, but nobody has been able to prove that, so all the improvements have concentrated on improving the explicit constant in front. In a theorem of ours from a few years ago, with Milinovich and Soundararajan, we proved that under the Riemann hypothesis the gap between consecutive primes satisfies p_(n+1) − p_n ≤ (22/25)·√p_n·log p_n, and this is the best to date, for all prime numbers bigger than 3. This improved upon the previous result of Grenié and Molteni, who had essentially 1 in place of 22/25, and the difficult part was to beat 1 by a little bit: some new ideas were necessary to beat the threshold of 1. These new ideas came, in this paper, from relating the problem of measuring the prime gaps to another problem purely in Fourier analysis. Here, in orange, is a problem which you could just assign to a person working purely in analysis — the following even Fourier optimization problem. You fix a number A ≥ 1, and I want to find a function f — real-valued, even, continuous, and integrable, say — that maximizes a certain functional: in the
bottom, you divide by the L1 norm of f — you normalize by the integral of |f| — and on top you take f(0) minus the penalty term: minus A times the integral, outside the interval [−1, 1], of the positive part of f̂. For any function you can play this game: plug the function into this functional and see what it gives. You take f(0) and you subtract A times the integral, outside [−1, 1], of the positive part of the Fourier transform, and I want to find the function that maximizes this quantity. This is a hard problem. The Beurling–Selberg problem was hard, but we could actually find the explicit solution; this one is also hard, but we are far from finding its solution — I don't think it's actually possible to find the exact solution for a given number A; it's difficult. What we can do in this paper is find good upper and lower bounds. For the application to prime gaps, the problem arises with a particular number A associated to it — you can take A, for example, to be 4 — and then every time you find a good function for this problem, you can generate a good estimate for your number theory problem too. This is what I meant by arriving in Fourier optimization wonderland: you arrive at a Fourier analysis problem, and every time you can prove a better bound for this problem, there is a better bound in the number theory problem that you are also allowed to make. Here's a picture of Soundararajan and Chirre — Chirre will appear in the next slides. I am getting close to the end of my conversation with you here today. The last application I want to mention is to the problem of pair correlation of zeros of zeta, and this is a very nice topic too. Riemann already knew how to compute the number of zeros up to height T: he knew that the number of zeros up to height T is asymptotically (T log T)/(2π).
And the goal in the theory of pair correlation is to study how the zeros are distributed — whether they are evenly distributed at a certain scale or not. So the goal is to study the following function: I give you a parameter β, and I want to compute the function N(T, β), which is just the sum over pairs of zeros γ and γ′ up to height T whose distance γ′ − γ is at most 2πβ/log T — that is, whose distance is at most β times the average spacing. So the question is: knowing that there are that many zeros up to height T, are they equally spaced at this scale, or do they behave somehow differently? And it seems that they are not equidistributed at this scale; there is some skew. This is the content of the conjecture made by Hugh Montgomery in 1972: that this number N(T, β) is essentially the number of zeros up to height T times the integral from 0 to β of the kernel 1 − (sin πx / πx)² — one minus the Fejér kernel. You should note that if the zeros were equally distributed at this scale, that kernel should not be there: N(T, β) would just be N(T) times β. The fact that the conjecture has a "one minus something" shows that the zeros behave a little differently from equidistribution. This has been a conjecture for almost 50 years now, called Montgomery's pair correlation conjecture. Here's a picture of Hugh Montgomery — perhaps one of the people alive who best understands this connection between harmonic analysis and number theory. He wrote one of my favorite books, called Ten Lectures on the Interface Between Analytic Number Theory and Harmonic Analysis. As a matter of fact, this conjecture of his generated a bunch of new research directions; new connections between different fields in mathematics and physics arose because of it. He was a young
postdoc — this was in 1972 — visiting the Institute for Advanced Study, and he was showing the number theorists there, Selberg and Chowla, what he had found, his conjecture and so on. They found it very interesting and told him, "Well, you should talk to Freeman Dyson," the famous physicist at the Institute at the time. So he talked to Dyson and told him, "Well, I believe the pair correlation of the zeros of the Riemann zeta function is given by this measure." And Dyson said, "Well, this is the same pair correlation function that appears for the eigenvalues of a random matrix." And so this connection between the Riemann zeta function and the theory of random matrices was born at that moment. This is a copy of the letter that Dyson sent to his colleague Selberg, saying: here is the reference that Dr. Montgomery could use — a book showing that the pair correlation function of zeros of the zeta function is identical with that of the eigenvalues of a random complex Hermitian or unitary matrix of large order. To date this connection has grown tremendously; it is a source of very interesting works at the interface of physics and number theory. So let me just mention one last application that we did in connection with this theory, which is the following. Montgomery's work is based on the evaluation of a function we now call Montgomery's F(α) function. You define it as follows: fix a big T, which is going to be the height, and set F(α, T) to be 2π/(T log T) — morally speaking, one over the number of zeros — times the sum, over pairs of ordinates γ and γ′ up to height T, of T^(iα(γ−γ′)) multiplied by the smoothing Poisson kernel 4/(4 + (γ − γ′)²).
Consider this function F(α). What Montgomery really showed was that if α is between 0 and 1 in absolute value, he could explicitly state what this function is: F(α, T) is given by the expression there — a delta spike at the origin, T^(−2|α|) log T times (1 + o(1)), plus |α|, plus an error term. And he conjectured that outside that range — for |α| bigger than 1 — this function F(α) should morally be just constant, equal to 1. This conjecture for his function F(α) is what led him to the pair correlation conjecture, so it is sometimes called the strong pair correlation conjecture; in fact, it implies the pair correlation conjecture. If you believe that F(α) should morally be 1 for |α| > 1, one can show that this is equivalent to showing it on average: for any fixed b and any length L, if you integrate the function F(α) from b to b + L, you should get L asymptotically. So if you can prove this for any interval, this is equivalent to proving what you want, and this average, or integral, version is exactly what we have been working on improving over the past months. The best result here was due to Goldston and Gonek in the 90s: they proved that if you fix b, and L is large, the integral of F(α) from b to b + L — which morally should equal L — is at most 2L and at least (1/3)L. So on average your function F(α), which should be 1, is at most 2 and at least 1/3. The most recent result that we have been working on relates this problem to some
other problems in Fourier optimization, and we are able to show the following bounds — this is work with Vorrapan Chandee, Andrés Chirre, who appeared in the previous picture, and Micah Milinovich. We can show that for any fixed b and large L, the average of the function F(α) from b to b + L — the integral from b to b + L — is at least 0.927·L and at most 1.33·L. So we are bringing the factor 1/3 in the lower bound up to almost 93 percent, and bringing the factor 2 in the upper bound down to less than 4/3. This says that F(α) on average is between 0.927 and 1.33; we don't know yet whether we can improve this a little further.

All right, I guess I've reached the end of my talk. I just wanted to mention a story that happened to me a few years ago, and then I will conclude. At the same conference in Canada in 2015 — where I showed you the picture with Jeff Vaaler and the tomb of Raymond Paley — I received the best prize in mathematics of my life. Has this ever happened to you? I mean, this is something that happened to me that I think is rather strange: I received a compliment, and the person who gave me the compliment doesn't even know that he did. So the story is the following. I was at this conference in Banff, and my advisor was there. My advisor is a genius, a brilliant mathematician, definitely one of my favorites. We were seated at this conference, and we both admire the work of Montgomery a lot. A few months before the conference — maybe a year before — well, we are all mathematicians here; we receive a lot of papers to referee, right? For those of you in the audience who are not in the world of mathematics: every time you submit a paper in mathematics, somebody else has to read your paper and give an evaluation. This
is called refereeing the paper, and it's anonymous — you don't know who it is; you just get a referee report saying "yes, this paper is very good, I recommend it to the journal," or sometimes you get a negative response with some feedback from the referee, and if it doesn't work out you submit to another journal. A year or two before this conference, I had received a paper to referee — a paper by my advisor, submitted to a prestigious journal. We are all busy; we receive lots of papers to referee all the time. Sometimes you can give more attention to a paper, sometimes you just give a quick opinion. But when I received this paper I thought, well, it's not often that you really have the chance to sit down and read a paper carefully — and while you referee, you actually learn from the paper. So I took the opportunity to sit down and study it. It took me a few weeks to read the whole thing, but I read it very carefully. I spotted some things that needed improvement, outlined what the improvements should be, and so on, and I wrote a very detailed referee report for this paper of my advisor's. It was a beautiful paper; of course I recommended it to the journal with high priority, but I wrote a six- or seven-page referee report. Of course I already knew about the paper — before submitting it to a journal, he had sent it to me by email: "I thought you might find this interesting," and so on. Anyway, he doesn't know I was the referee of this paper; he might just be learning this now, if he's watching this talk — I never told him this story. We met at this conference one year later, and on the first day I sat beside him and asked, "You know, Jeff, how did things end up with that paper of yours from last year? Very nice paper — what happened?" And he was happy, and started telling me the story.
He was delighted: "You know, I received a very nice referee report, very detailed, with lots of nice insights. The referee corrected some imperfections and gave us good feedback," and so on. I looked at him and said, "Good for you, man, that's very good." And then he looked away from me, looked at the board, and said, very innocently, "You know, Emanuel, I'm almost sure it was Montgomery who refereed my paper." And I said, "Wow, so cool, man!" So this was kind of the best prize in mathematics I ever received, the best compliment — and the person who gave it doesn't even know he gave me a compliment. Jeff, this is for you if you're listening: highly appreciated, man. Thank you, everyone — good vibes to you, and thank you very much for your attention. It was a pleasure to be with you here today.

Thank you. Can you hear me? Yes? Thank you for this lovely talk, it was very nice. Well, now it's time for questions, so if someone has a question or comment, you can write it in the questions-and-answers section, or just raise your hand in Zoom and we will allow you to talk. I can see the screen here — participants, Q&A. Yes, you can. All right, I'm glad it will be recorded. Well, I have a little question — maybe you already answered it during the talk. When you talk about this problem of prime gaps, you explained the number theory problem, and then the analytic approach of maximizing this quantity over functions f. If you have a lower or upper bound for this analytic quantity, how can you improve or get a result for the number theory problem? How are the two related?

Yes — this is absolutely a good question, and this is absolutely a non-trivial part. The most beautiful part of these problems, I find, is actually establishing the connection: you have to establish a bridge that says, whenever I have this result in analysis, I can have this result
in number theory. This is the difficult theoretical part. For this prime gaps problem there's a bit of a long way to get there — it's not trivial, not something I can explain here in a few minutes — but you're right, this is one of the difficult parts of the business: establishing the theoretical bridge. Once you get to the optimization problem, you can start to investigate it. Sometimes you can solve it explicitly; most of the time you cannot, and then you have to rely on computational tools to estimate the answer or to provide good test functions. So this also takes you into a more numerical, more computational world, which is becoming more and more important nowadays — the ability to do computational mathematics. In this particular problem there was the use of the explicit formula. The main idea is this: you want to estimate the prime gaps, so suppose you have a big interval that does not contain a prime. What you can do is go to the explicit formula, cooked up in some specific way, and put in a test function h. Remember that in the explicit formula, the last sum appearing there was a sum over the Λ(n) — the function that carries information on the primes — times ĥ, the Fourier transform of h. So you plug in a test function h whose Fourier transform is supported exactly in that big interval where you don't have primes. If ĥ is supported in the interval without primes, the last sum in the explicit formula is going to be
very small — because ĥ is supported where there are no primes, the Λ(n) will be zero in that interval, morally speaking, so that final sum of the explicit formula will morally be zero. That is how you exploit it: you construct a nice test function supported in the interval that does not have primes. This was our initial approach with band-limited functions. Later we realized that we don't actually need a function entirely supported in that interval: we can take any function we want, evaluate its contribution inside the interval, and estimate the contribution outside the interval — the bad contribution. This is why, in the Fourier optimization problem, you see a value f(0) and a penalty, minus A times the integral of the bad part: it estimates the bad contribution that the test function gives you. And this is how the Fourier optimization problem is born.

Okay, thank you for your answer. It's very interesting, this way of connecting two different areas to solve a problem. There is a question in the list: Domenico asks, is there a result for prime gaps that does not assume the Riemann hypothesis?

Yes, there is. Everything I told you assumes the Riemann hypothesis; that is the best known. If you don't assume the Riemann hypothesis, there is a whole series of results. The best, to my knowledge, is, I think, due to Goldston, Yıldırım and Pintz, but you don't get the same order of magnitude √p · log p; you really get p raised to some exponent which is strictly bigger than one half — a half plus something. It's a different problem and uses different techniques, but you can take a look at those results on prime gaps. And of course there is
kind of a dual problem here: small gaps between primes. You want to know how often in the sequence of primes you have small gaps, and among those problems perhaps the most celebrated is the twin prime conjecture: are there infinitely many primes such that the gap is just two? That goes in the dual direction — here I wanted to estimate the big gaps, how big the gap between consecutive primes can be.

Good, okay. There is another comment in the chat: "Hi professor," from Junete. "In the beginning you showed a slide about your favorite paper — Riemann hypothesis true up to some factor. Can you please show that slide again?"

One of my favorite papers is this one — let me share my screen. Let's see, Junete, it was right here. I like the title of the paper: "More than two-fifths of the zeros of the Riemann zeta function are on the critical line." Sometimes we joke with Brian Conrey, who wrote this paper, that since he proved that 40 percent of the zeros are on the critical line, he should be entitled to 40 percent of the prize — he should call the Clay Mathematics Institute and give them his bank account to collect the four hundred thousand dollars. That would be good, no? It doesn't hurt. Here it is, Junete — hope you liked it.

Well, any questions or comments? Anyone who wants to talk, we can allow it. Well, hopefully the students enjoyed it. I know some of this might have been a bit hard, but it's just for you to get a glimpse of what's going to come later, and if you have any questions, I'm always available to chat.

Well, people are saying it was an amazing talk, and I agree. So if there are no more questions or comments, we can thank Emanuel again for this lovely talk today. Thank you.