Ready. All right, so welcome everyone this afternoon to this session of the Basic Notions seminar series. It's a pleasure to have you all here, and it's a pleasure to have our speaker, Micah Milinovich, with us this afternoon. Micah is an analytic number theorist by training; he completed his PhD under the supervision of Steve Gonek. We've been collaborating for the past, what, Micah? Ten, eleven, twelve years. It's a second visit here to ICTP, one before the COVID times, and now he's coming back again, and I hope there may be many more in the future. Micah will be speaking to us this afternoon about an introduction to the Riemann hypothesis and gaps between primes. This talk is broadcast on YouTube and we also have colleagues following by Zoom. The Zoom video and audio for the people viewing by Zoom is going to be turned off; if anyone wants to ask questions on Zoom, please write in the chat, which I'll be monitoring, and I can forward them to Micah. All right, so let's close the door. Micah, the floor is yours. Welcome back to ICTP, it's a pleasure to have you here.

This is my third trip to ICTP; I was planning to come in May 2020 but there was a little hiccup there. But okay, anyway, I'll talk to you today about the Riemann hypothesis and, if there's time, gaps between primes. I don't know how fast or slow I'll go, so I have many natural places I could stop. I also was not 100% clear on what "basic notions" means, so I think I'm going to start out a little bit basic, but at least then I won't lose the audience right away. Okay, so I start out with this quote of David Hilbert. It says: if I were to awaken after having slept for a thousand years, my first question would be, has the Riemann hypothesis been proven? I titled this slide Rip Van Winkle.
I realized that for non-American audiences you may not know this, but it's a piece of lore in the United States about someone who went to sleep before the American Revolution, slept for 20 years, and woke up after; so he went to sleep in England and woke up in the United States. Okay. So I guess we all remember what a prime is: it's an integer greater than one whose only positive divisors are one and itself. So the first few primes, two, three, five, seven, eleven, thirteen, seventeen, nineteen, twenty-three: they have no divisors other than one and themselves, right? And some basic facts: apart from two, all primes are odd; everyone can understand that, right? And apart from two and five, all primes end in one, three, seven, or nine. Can everyone understand that one? Yeah, are we okay? We're still all on board, all right. Okay, so I would say the start of number theory is something called the fundamental theorem of arithmetic: every integer greater than one can be expressed as a product of primes, and this is unique up to rearrangement. What I mean is that I consider two times three to be the same as three times two; but that's the only way you can write six as a product of primes, so other than rearranging, there's nothing you can do. So I have to click to get my... click, attack. Okay, all right, so if you're thinking of an integer, I can write it as p1 to the a1 times p2 to the a2 up to pk to the ak, and if the primes are distinct and I write them in increasing order, then this is the unique way to write it. And I always think a useful analogy, again maybe too United-States-centric: we have a test for college called the SAT where they do analogies. So the analogy is that primes are to the integers as atoms are to molecules. If we want to study the universe, which is made out of molecules, we have to understand the periodic table; if we want to study the integers, that's number theory, and the periodic table of number theory is the primes.
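The unique factorization on the slide is easy to demonstrate in a few lines of code. This is a minimal illustrative sketch by trial division, not anything efficient; the function name `factorize` is mine, not from the talk.

```python
def factorize(n):
    """Return the unique factorization of n > 1 as (prime, exponent) pairs,
    with the primes in increasing order (fundamental theorem of arithmetic)."""
    factors = []
    p = 2
    while p * p <= n:
        if n % p == 0:
            e = 0
            while n % p == 0:  # divide out every copy of p
                n //= p
                e += 1
            factors.append((p, e))
        p += 1
    if n > 1:
        factors.append((n, 1))  # whatever remains is itself prime
    return factors
```

For example, `factorize(360)` gives the factorization 2^3 · 3^2 · 5, and no rearranged run of the loop could give anything else.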
So you can find lots of pictures of Euclid all over the internet, but my understanding is that the great library of Alexandria burnt down in antiquity, along with any statues of him, so no one has any idea what he looked like. There are lots of pictures of him though. Okay, all right, anyway. So, there's no largest prime; that's my title here. To see this, here's a cute proof: let p be the smallest prime divisor of n factorial plus one. I claim p has to be bigger than n. Because if p were at most n, clearly p divides n factorial; and we're assuming p divides n factorial plus one. So p divides their difference, which is one. Okay, that's impossible, because primes are at least two in size, so they can't divide one, right? Okay, so what I just said is that n is arbitrary: no matter what n you give me, I can find a prime bigger than it. So it's a cute proof of a theorem of Euclid that there are infinitely many primes; in fact, this is logically more or less identical to his proof, maybe just written in slightly more modern language. Everyone's seen how to prove there are infinitely many primes, right? Okay. And this next one says that there are arbitrarily large gaps between primes. Maybe you've all seen this proof too: n factorial plus two is divisible by two, n factorial plus three is divisible by three, n factorial plus four is divisible by four, and so on up to n factorial plus n, which is divisible by n. So here are n minus one consecutive integers that are not prime, and I can make arbitrarily long such strings. So despite the fact that there are infinitely many primes, they can be arbitrarily far apart. Okay. Okay, so perhaps prime number hero number two, well, I guess maybe before Euler there was Fermat, but Euler is the start of our story today. So there's a nice quote, which I assume was probably in Latin; that's what he wrote most of his papers in.
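Both arguments on this slide can be checked numerically. A small sketch (the helper name `smallest_prime_factor` is my own) verifying that the least prime factor of n! + 1 exceeds n, and that n! + 2 through n! + n are all composite:

```python
import math

def smallest_prime_factor(m):
    """Return the least prime dividing m (m >= 2), by trial division."""
    p = 2
    while p * p <= m:
        if m % p == 0:
            return p
        p += 1
    return m  # no divisor up to sqrt(m), so m itself is prime

# Euclid-style check: the least prime factor of n! + 1 is bigger than n.
for n in range(2, 9):
    assert smallest_prime_factor(math.factorial(n) + 1) > n

# Arbitrarily long gaps: n! + k is divisible by k for k = 2, ..., n,
# giving n - 1 consecutive composite numbers.
n = 8
for k in range(2, n + 1):
    assert (math.factorial(n) + k) % k == 0
```

For instance 6! + 1 = 721 = 7 · 103, and 7 is indeed bigger than 6.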
It says that mathematicians have tried in vain to this day to discover some order in the sequence of prime numbers, and we have reason to believe that it is a mystery into which the human mind will never penetrate. So I hope to convince you otherwise today; a little bit, anyway. Okay. So about 2,000 years after Euclid, Euler studied this function, which you see in calculus class: we call these p-series in the United States, you usually put a p for the power, and you learn in calculus that it converges when s is greater than one. So one of his famous results is that he found another way to write this function, thinking of it as a function of s, as a product over primes: the product over primes p of the quantity one minus one over p to the s, inverse. We now call this the Euler product, for obvious reasons. And it is essentially an analytic way to restate the fundamental theorem of arithmetic. So let me convince you of this. Start with the Euler product. Each factor, one over one minus p to the minus s, is a geometric series, so expand it as a geometric series; we can do that since p is at least two and s is bigger than one. And now imagine formally multiplying out this product, which runs over all primes and, within each factor, over all powers of that prime. So what do you get? You get the term where all the ones multiply together, and every other term is some prime to a power, to the minus s, times a different prime to a power, to the minus s, and so on; the exponent minus s factors out of each of them. So we get products of primes to powers, each raised to the minus s, right? Well, every integer can be written as a product of prime powers, and in how many ways? Just one. So every integer appears in the sum, and every integer appears exactly once; that's the fundamental theorem of arithmetic we're using. So we get the sum of one over n to the s. Okay, so that's Euler's product; it really is just the fundamental theorem of arithmetic.
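You can watch the Euler product converge to the series numerically. Here is a sketch at s = 2, where the series sums to π²/6; truncating the product at the primes up to 10,000 already agrees to several decimal places. (The sieve helper is mine, not from the talk.)

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: all primes up to n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return [p for p, is_p in enumerate(sieve) if is_p]

s = 2.0
euler_product = 1.0
for p in primes_up_to(10_000):
    euler_product *= 1.0 / (1.0 - p**(-s))  # factor (1 - p^(-s))^(-1)

# The series side of the identity is zeta(2) = pi^2 / 6 = 1.6449...
```

Each factor in the product is bigger than one, so the truncated product approaches ζ(2) from below as more primes are included.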
And so you immediately see that this harmless-looking identity says something about the primes. So here's a very basic way to get information about primes out of it, a proof also essentially due to Euler: the identity by itself implies there are infinitely many primes. So how would we see that? Well, assume there are only finitely many primes; then this is a finite product. In particular, it stays bounded as we let s decrease to one from above: it's just a finite product of non-zero numbers, so it's just some number, right? But what happens if we let s go to one in the sum? Well, that's the harmonic series, which, as you know, diverges. So we get a contradiction. Okay, so the infinitude of the primes is built into this identity. Okay, so, historically, I guess it's clear that if you want to understand the distribution of primes, it's enough to understand this function pi of x: pi of x is just the number of primes less than or equal to x. And in analytic number theory we don't like sets, we write everything as sums, so this is the sum over primes p less than or equal to x, counting one each time, okay? You can graph this. So I made these graphs many years ago. PrimePi, this pi of x function studied by Gauss, the number of primes up to x, is built into Mathematica. Mathematica seems not to understand that it's a step function. I went and re-graphed this this morning with the exact same code, it still does this, and I don't know why. So here, the primes up to 100: you can see there are exactly 25 primes less than 100. Here's up to 1,000, 10,000, 100,000; it's starting to look like a nice, smooth function, right?
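For anyone who wants to reproduce the counts in these plots, a quick sketch of π(x) by sieving; Mathematica's built-in PrimePi does the same job.

```python
def prime_pi(x):
    """pi(x): the number of primes up to x, via a sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(x**0.5) + 1):
        if sieve[p]:
            # cross off the multiples of each prime starting at p^2
            sieve[p*p::p] = [False] * len(sieve[p*p::p])
    return sum(sieve)
```

This reproduces the value from the slide: there are exactly 25 primes up to 100.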
So as we zoom out and look at more primes, there's, if you were a physicist, say, maybe at the International Centre for Theoretical Physics, some kind of law here, right, that we need to uncover. And, well, that's exactly what Legendre and Gauss did. Legendre wrote down what he did; Gauss just claimed he had done it many years before Legendre. So we know the date of the first; the second, we're not sure. Legendre said that this function, and he basically best-fit the data, and I'll show you in a second it's a really good fit, should be like x over the quantity log x minus A, where A is about 1.08; my logs are always going to be the natural logarithm, base e. And this constant he picked to best fit the data. Gauss, being Gauss, had a much more mathematical way of thinking about this: he was thinking probabilistically. Gauss convinced himself that the probability that an integer n is prime is like one over log n. So what's the expected number of primes less than x? It should be like the integral of one over log t, dt, from two to x. And, well, that's why he's Gauss: this turns out, magically, to be the answer. Now, this integral is a hard thing to understand, but you can integrate by parts to get the leading-order behavior: it's like x over log x, so it kind of looks like what Legendre said, and then there's some error term here. How many of you are familiar with big O? Maybe, how many of you are unfamiliar with big O? Okay, so another thing analytic number theorists do is think about sizes of things, so some brief notation. Big-O notation, I think of as "no bigger than". You're always thinking of these notations in some kind of limit; in this case, x going to infinity. So when I say f is big O of g of x, I mean f is no bigger than a constant times g of x: g of x bounds f from above, in that sense.
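Gauss's integral is easy to approximate numerically. A sketch comparing it with the exact count at x = 1000, where π(1000) = 168, showing it beats the bare x / log x; the step size and helper names are my own choices.

```python
import math

def Li(x, step=0.01):
    """Gauss's guess Li(x) = integral from 2 to x of dt / log t,
    approximated by the trapezoid rule."""
    total, t = 0.0, 2.0
    while t + step <= x:
        total += step * 0.5 * (1 / math.log(t) + 1 / math.log(t + step))
        t += step
    return total

def prime_pi(x):
    """Count primes up to x by trial division (fine for small x)."""
    return sum(1 for n in range(2, x + 1)
               if all(n % q for q in range(2, int(n**0.5) + 1)))
```

At x = 1000 the integral lands within about ten of the true count, while x / log x is off by more than twenty.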
And analytic number theorists also like to take complicated things and replace them by nice, simple things. So we have this asymptotic notation, the twiddle. And when I say f twiddles g, what I mean is that in the limit, as we approach something, usually infinity in this talk, the ratio tends to one. So this thing, which is complicated, is behaving like this other thing, which is simpler; but we're always thinking of it in some limit. Okay, so both Gauss's and Legendre's conjectures imply that pi of x behaves like x over log x, in the asymptotic sense: the limit of one over the other should be one. Gauss, though, has the more precise guess for how it behaves. And when I teach analytic number theory, one of my first homework assignments is to show that Gauss's integral is x over the quantity log x minus one, plus a smaller error. So Gauss didn't believe Legendre's conjecture: he thought there should be a one there, and not a 1.08; a little bit less. Okay, so let's compare who was right, Gauss or Legendre. These two guys hated each other; quadratic reciprocity is another thing they fought over. Anyway, so here is x taken through powers of 10, and the number of primes up to those values. Here is Legendre's best fit to the data. Now, this is 1798, so he's doing all this by hand, right? No Mathematica back then. So he presumably had exact counts of how many primes there were, and he best-fit the data. And look: at every step here, he's better than Gauss, right? His value is much closer than Gauss's at each of these. Okay, all right, but look what happens after a million. All of a sudden, Gauss is right, now to about three decimal places, and Legendre's is starting to drift; and out at 500 million Gauss is really close, and Legendre's is now well off.
And you can prove that Gauss stays better than Legendre forever after this point. So yeah, it's at small numbers, where you might try to best-fit data, that Legendre was spot on; but beyond his data, well, okay. Gauss's probabilistic intuition was the right one, and Gauss's guess looks right. Okay, so hopefully now we're out at large enough numbers that you can see how accurate Gauss's guess is, right? In fact, he gets roughly half the digits right in his guess. And that is the Riemann hypothesis: that these two match in the first half of the digits. Okay, I'll explain it more in a moment. Okay, so, using ideas of Riemann, which I haven't told you yet, this conjecture was established in a weaker form, called the prime number theorem, in 1896, so about a hundred years after they conjectured it: indeed, pi of x is asymptotic to x over log x as x gets large. This was proved independently by Hadamard and de la Vallée Poussin. So it's mysterious if you haven't seen this before; here are some ways to think about what the prime number theorem says. When n is not too large, you can think of log n as roughly twice the number of digits of n; you can think about why that's true later on today. So, some consequences of the prime number theorem. Given a prime p, the next prime is usually about log p away; that's what this is saying. So using that rule of thumb, the average distance between 10-digit primes should be about 20, twice the number of digits, right? And the average distance tends to infinity, right? Because the average distance between 11-digit primes should be 22, and between 20-digit primes should be 40. So the average distance between primes is getting bigger: the primes want to be farther apart. The prime number theorem also says that the nth prime has size about n log n.
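The nth-prime estimate is easy to sanity-check. A small sketch (helper name mine) comparing p_n with n log n for the values mentioned in the talk:

```python
import math

def first_n_primes(n):
    """Generate the first n primes by trial division against earlier primes."""
    primes, cand = [], 2
    while len(primes) < n:
        if all(cand % p for p in primes if p * p <= cand):
            primes.append(cand)
        cand += 1
    return primes

ps = first_n_primes(1000)
# p_200 = 1223, to be compared with 200 * log(200) ~ 1060;
# p_1000 = 7919, to be compared with 1000 * log(1000) ~ 6908.
```

The ratio p_n / (n log n) is already within about 15% of one at n = 1000, even though the convergence to the asymptotic is slow.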
So as n goes to infinity, the nth prime is growing faster than linearly, right? And this is actually pretty close even for small n. So the 200th prime: let's see, that's about three digits, so it wants to be about six times 200. It is 1223, and it wanted to be 1200, right? And the 1000th prime is 7919, and it wants, okay, twice the number of digits is eight, times 1000 is 8000. So it's pretty close for small n, right? All right, anyway, I was happy when I figured that out. Okay, so what were Riemann's ideas? Well, one thing you do is you don't study pi of x directly: you change the problem slightly and tie it back to this function that Euler studied. So we can write this function zeta of s, now called the Riemann zeta function; I don't think I said that yet, but it's named after Riemann. So if you take the log of a product, you get a sum. And if you take the log and then differentiate, you can rewrite the logarithmic derivative as a sum. And the coefficients, because these are primes, want to be supported on primes; but if you think about differentiating the geometric series here, you're going to get not just primes, but powers of primes. So this logarithmic differentiation process gives you new coefficients: these are log p if n is a power of a prime p, and zero otherwise. And so instead of summing one at the primes, we sum these coefficients, and I'll explain why in a moment. But let's think about what this sum is. So it has support on primes and prime powers, but there aren't very many prime powers, right? If you think about how many squares there are less than 100, there are only 10. How many squares are there less than a million? Only a thousand. So the powers are kind of harmless: this is essentially just the sum over primes p up to x of log p. Now, log p is also pretty harmless.
So if you think about the logarithm function, it starts steep and then it flattens, so for p up to x, log p behaves pretty much like the constant log x. So this is really kind of like the sum over p less than x of log x, which is pi of x times log x. So this function, psi of x, the sum of these coefficients, wants to be pi of x times log x. And in a first analytic number theory class, we prove that all of these steps are justified: these approximate equalities can be made into equalities with small errors. And so pi of x being asymptotic to x over log x is equivalent to psi of x being asymptotic to x. So this is an equivalent formulation of the prime number theorem, and it's usually the first step in the proof of the prime number theorem. And the point is that you can express it in terms of a logarithmic derivative. If you remember, logarithmic derivatives of analytic functions are things we like in complex analysis class: if you take an analytic function, it's easy to understand its logarithmic derivative, and you have tools like the residue theorem and the argument principle and so forth. Okay, so Riemann's great insight: everyone was thinking of this function zeta of s as a function of a real variable, and Riemann thought of it as a function of a complex variable. That is something completely natural to do in 2022; it was something way out there in 1859, because there essentially was no complex analysis yet. And this is something I think people historically don't think about enough: this idea is something we would all do now, but the reason we all do it now is because of Riemann; it's not that Riemann was doing something everyone always does, right? Okay. So during Riemann's time, I mean, Cauchy had proofs of theorems and things, but a lot of the theorems you prove in complex analysis class are theorems of Hadamard, who was trying to prove conjectures of Riemann.
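The Chebyshev function ψ(x) from the slide, the sum of these coefficients Λ(n), can be computed directly. A sketch showing that ψ(x)/x is already close to one at x = 10,000, which is the equivalent form of the prime number theorem just described:

```python
import math

def psi(x):
    """Chebyshev's psi(x) = sum of Lambda(n) for n <= x:
    add log p once for every prime power p^k <= x."""
    total = 0.0
    for p in range(2, x + 1):
        if all(p % q for q in range(2, int(p**0.5) + 1)):  # p is prime
            pk = p
            while pk <= x:          # p, p^2, p^3, ... up to x
                total += math.log(p)
                pk *= p
    return total
```

At x = 10, the prime powers are 2, 3, 4, 5, 7, 8, 9, giving ψ(10) = 3 log 2 + 2 log 3 + log 5 + log 7 ≈ 7.83.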
So these things we think of as natural came 50 years after Riemann was doing this, okay; just a historical perspective to keep in mind. So because of that, Riemann really couldn't prove anything, because there was no theory to plug into. We have a theory to plug into, and that was due to Hadamard and de la Vallée Poussin; these are, okay, the people who proved the prime number theorem, and who established complex analysis as the field we all learn in grad school. Okay, all right. So Riemann was able to say: think of the zeta function as a function of a complex variable, and then you can probably say all these cool things about primes; but there was no infrastructure at the time to do it. So it took about 50 years for people to understand what he was talking about. Okay, all right. So what did he prove? So, instead of thinking of this as a function of a real variable, think of it as a function of a complex variable, and then this sum and this product converge for the real part of the complex variable bigger than one, so in a half-plane. And he showed you can meromorphically continue it to the complex plane, and the only pole is at s equals one. So it's almost an entire function; multiply by s minus one, and it is an entire function. There's a functional equation, I won't state it, but if you understand the value of this function at s, you understand it at one minus s. So if you understand it to the right of one, you understand it to the left of zero. And there's this kind of weird place in between zero and one where we don't understand it very well. And he was able to more or less say where the zeros of this function are. Remember, we want to look at the logarithmic derivative, which is going to have poles at all the zeros, so you need to know where the zeros are.
He showed, well, that the zeros are symmetric about the line real part of s equals one half, which comes from the functional equation, and about the real axis, which comes from the reflection principle. So there are symmetries among these zeros. And he made a comment: I have reason to believe that all of these zeros are on this line of symmetry. It's kind of a throwaway comment in his paper. Okay, all right, and I'll explain why that's important in a moment. So that is the statement of the Riemann hypothesis. He didn't state it as a conjecture; he just said, in a sentence, I have reason to believe that this is probably true. All right, so here are all the facts I just said. We understand where the sum and the product converge, to the right of one; by the functional equation we can understand the function to the left of zero, because we understand it to the right of one. There's a simple pole here at s equals one. There are zeros that are easy to identify on the negative real axis; these come from the functional equation, which, to keep things simple, I won't state. And then Riemann proved there are infinitely many zeros in this infinite strip between zero and one. We call this the critical strip, and it's the region where you have to work with the analytic continuation, because you don't have a convergent sum and you don't get it from the functional equation. So the critical strip is the region of analytic continuation, and the critical line is the line of symmetry. And he conjectured all the zeros are on this critical line. Okay, yeah, my great technology: I drew a picture and scanned it. So the zeros have these symmetries: they reflect across the real axis, so if there's a zero on the critical line at height t, there's one at height minus t. If there's a zero at height t but off the critical line, then there's a zero at one minus it here; and then if you conjugate, that puts one here, and one minus that one here.
So zeros on the line come in pairs, and zeros off the line come in quadruples. Okay, so here is how we can use complex analysis to study this. So here's an integral that, I claim, any of you who have had complex analysis should, with a little bit of thought, be able to prove: integrate x to the s over s along a vertical line. So suppose x is between zero and one. Then as the real part of s goes to plus infinity, x to the s is getting smaller and smaller and smaller. So if I shift the contour and pull it to the right, I should be able to make this small: I use Cauchy's theorem and pull right to infinity, and I can show this integral is zero. But if x is bigger than one, well, I can't pull right, because then this is growing. So let me pull left: x to a negative power goes to zero as you pull left. But you cross the pole of one over s, so you pick up the residue at s equals zero, and that residue is one. So you use the residue theorem and pull left if x is big, and you use Cauchy's theorem and pull right if x is small. And everyone can at least believe that this should be true; it's not so hard to prove. You have to work with an integral over a finite interval and then take a limit, so it's a little bit of epsilons and deltas to work out, but we're all smart people; we can do that, right? Okay, so here's the kind of thing that is the magic of analytic number theory. So now apply this with x over n in place of x. This is going to detect whether n is bigger than x or less than x: if n is bigger than x, the ratio is less than one and I get zero; if n is less than x, I get one. So the sum over n less than x is the sum of these integrals, but now I can sum over all n to infinity. And now let's interchange the sum and the integral; the sum is what I told you was zeta prime over zeta, and so we can replace it by that.
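This detector integral can be checked numerically along a truncated vertical line. A sketch, where the parameter choices c = 2 and T = 200 are mine, showing the integral is near one for x > 1 and near zero for 0 < x < 1:

```python
import math

def perron(x, c=2.0, T=200.0, dt=0.01):
    """Approximate (1/(2*pi*i)) * integral of x^s / s ds over the
    vertical segment Re(s) = c, |Im(s)| <= T.

    With s = c + i*t we have ds = i*dt, so the i cancels the one in
    1/(2*pi*i), leaving a real integral in t (simple Riemann sum here)."""
    total = 0.0
    t = -T
    while t < T:
        s = complex(c, t)
        total += (x**s / s).real * dt
        t += dt
    return total / (2 * math.pi)
```

The truncation error for fixed x shrinks like x^c / (T log x), which is why pushing T to infinity recovers the exact zero-or-one step.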
And so now I've expressed the sum in the prime number theorem as a contour integral of some analytic function; and not just any analytic function, one where Riemann tells us exactly where the poles are. So now I can apply the residue theorem to this thing. So where is it going to have poles? It'll have a pole at s equals zero from the one over s; the pole of the zeta function will give a pole of the logarithmic derivative; and the zeros of the zeta function will give poles of the logarithmic derivative, right? And that's it. And so using the residue theorem, you can show this sum equals, well: the pole of the zeta function gives me an x; the zeros in the critical strip give this sum over rho; the zeros on the negative real axis sum to a function which, as x gets big, is going to log one, so this is essentially zero; and the pole at s equals zero is just some constant. So really just these two terms are functions of x, the way I think about it. And this is an exact formula, an exact formula for psi, so kind of beautiful. This series over zeros converges only conditionally, so as a practical matter it's a useless formula. But it's beautiful. Okay. Again, in complex analysis, the way you always get integrals to infinity is you first truncate and then take a limit, right? So first truncate and don't take the limit: you're going to get a finite sum over zeros, but then you have some error term, which we throw into a big O; big-O notation sucks all the constants away. Okay. This is pretty remarkable. I found this picture a decade ago on the internet, so it's stayed in various versions of my slides. So here are plots of x minus the sum of x to the rho over rho, over the zeros of the zeta function, using 10, 40 and 100 pairs of zeros; so we're using the ones symmetrically above and below the real axis. So this is using 10 pairs of zeros, plotted here between 100 and 120.
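The truncated explicit formula can be played with directly. This sketch hardcodes the imaginary parts of the first five nontrivial zeros (standard numerical values) and compares the truncated formula against the exact ψ(10); with only five pairs of zeros it already lands within the expected truncation error.

```python
import math

# Imaginary parts of the first five nontrivial zeros (known numerical values).
GAMMAS = [14.134725, 21.022040, 25.010858, 30.424876, 32.935062]

def psi_approx(x):
    """Riemann's explicit formula for psi(x), truncated to a few zeros:
    x - sum over zeros of x^rho / rho - log(2*pi) - (1/2)*log(1 - x^(-2))."""
    total = x - math.log(2 * math.pi) - 0.5 * math.log(1 - x**-2)
    for g in GAMMAS:
        rho = complex(0.5, g)              # zeros taken on the critical line
        total -= 2 * (x**rho / rho).real   # pair each zero with its conjugate
    return total

def psi_exact(x):
    """Chebyshev's psi(x) computed directly from the prime powers up to x."""
    total = 0.0
    for p in range(2, int(x) + 1):
        if all(p % q for q in range(2, int(p**0.5) + 1)):
            pk = p
            while pk <= x:
                total += math.log(p)
                pk *= p
    return total
```

Adding more zeros sharpens the oscillating sum toward the jumps of the step function, which is exactly what the plots on this slide show.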
Not very good, but as soon as we use 40 zeros, you can start to see the step-function behavior; and if we use 100 zeros, you get really nice step-function behavior. So I tried to redo this yesterday, and this is the picture I came up with using Mathematica. Here I used 200 pairs of zeros and you get something just remarkable: with not very many zeros, you can see the psi of x behavior, right? It is the step function. Yeah. That's pretty cool. And then here I zoomed in, so this is between 40 and 50, and you can just see the jumps at the prime powers, right? The jump at 41, the jump at 43, the jump at... Okay, so Riemann's conjecture. So we have this formula, and we want psi of x to be close to x, and so we need to understand the sum over zeros of x to the rho. So think of a zero as rho; I don't know why this is the notation, but I think Riemann used it and everyone's been scared to change it. So a zero is rho, the real part of a zero is beta, and the imaginary part is gamma. So x to a zero, that's x to the beta plus i gamma. Well, the modulus of x to the i gamma is one, so the modulus of x to the rho is x to the beta: the numerators are bounded by x to the real part of the zero. So the Riemann hypothesis, his "I have reason to believe these should all be on the line", where the real part is a half, is exactly the way to make this sum as small as possible. So why? Well, I said the zeros all have real part between zero and one, but there's also this relationship between s and one minus s. So if I say I want all of these real parts to be near zero, well, that means the reflected zeros are near one, right? If I introduce a small real part, then I get a big one in the same four-tuple, right? So the way to make them all small is to have them all on the line, and then they're all of size x to the half.
And so that means that psi of x is x with an error that's about the square root of the size of the main term. Well, if you think of x as an integer, that's saying that I understand psi of x to the first half of the digits, right? For a 10-digit number, a square-root-size error affects the last five digits and not the first five, right? So when I was saying Gauss's conjecture is right in the first half of the digits, that matches exactly with this. And then you can go from psi to pi: the number of primes less than x is Gauss's guess, that weird probabilistic intuition, plus an error of about root x times log x. And if you think for a moment, these are equivalent, because if there were a zero off the line, it would contradict the size of the error term. So this completely arithmetic statement is equivalent to this really weird analytic statement: this is pure analysis, this is pure arithmetic. So that's the Riemann hypothesis: the zeros are all on a line, or they all have real part one half, or we can estimate the primes up to about the square root of their range. And so here you can see in red pi of x, and in blue Gauss's guess, and then x over log x, which is how you usually state the prime number theorem, in green. So you can see there's a pretty big error there, and the Riemann hypothesis is just that the red and the blue continue to hug each other forever. This plot goes to 100,000; I plotted it from a million to two million today and you can't even see the difference in the lines, so I just thought it was a useless picture to put up, because it was just one line. Okay. Yeah. So Hilbert, as we opened with the Hilbert quote: he recognized this as a kind of fundamental problem. At the 1900 ICM, he listed 23 problems he hoped mathematics could solve in the next 100 years. This was number eight on the list.
Barnes assigned this to Littlewood as a thesis problem in the nineteen-tens. So whoever your PhD advisor is, they're better than Barnes. But a lot of the early papers in the theory are written by Littlewood, because he's trying to solve the problem that, by the way, they gave him as his thesis. Okay. By 2000, so, no luck on Hilbert's list, the Clay Math Institute offered a million dollars for this. My advisor told me this is probably the hardest way to become a millionaire. It's not clear if they'll give you any money for a disproof: they offer a million dollars for a proof of the Riemann hypothesis, so if you find a zero off the line, I don't know if that's worth anything. Finding zeros off the line is going to be hard, because Dave Platt and Tim Trudgian showed that the first 12, whatever this number is, thousand, million, billion, 12 trillion zeros are on the line. And Andrew Odlyzko showed in the 1980s that around the 10 to the 22nd zero, the first several million above and below it are also on the line. Okay. [What do you mean by the first?] So if you start at the real axis and go up. Yep. So the first 12 trillion you meet are on the line. Riemann actually, he didn't put this in his paper, but he has unpublished notes, you can go look at them at the Göttingen library, and he showed the first couple are on the line by hand: he calculated, with numerical error bounds and precision, the sign changes. Gram verified the first 15 or so in the early 1900s, Titchmarsh got to around 1,000, and you can see how over about 15 years the modern verifications climbed from 10 trillion to 12 trillion; that's a lot of computational power. You can also show statistically that zeros are on the line. So we know now that at least 40% are, in the following density sense: start at the real axis and go up, count all the zeros from height zero up to some height T, and then count the zeros on the critical line up to T. The Riemann hypothesis is that these two numbers are always equal, that the zeros are always on the critical line.
So Hardy is, I guess, I would say famous, but is anything in math really famous, since there's only a few of us who study math? Math famous, not real-life famous. Hardy proved infinitely many zeros lie on the critical line; this is sometimes called Hardy's theorem. Hardy and Littlewood later showed that the number on the critical line up to height T grows linearly in T. We know that the counting function N(T) grows faster than linearly, so this isn't a percentage, but it was a nice result. Selberg, in essentially his PhD thesis, showed that a positive proportion are on the critical line, but didn't work out what the proportion was. And that's part of why he won the Fields Medal. Levinson showed that at least a third of the zeros are on the critical line, and Conrey at least two fifths, so 40%. We can now get a little bigger than 40%, maybe 41% is the record or something. Conrey always mentions that he didn't get 40% of the million dollars. So there's Hardy and Littlewood, and there's Selberg, Levinson and Conrey. There's some evidence in support of RH, beautiful evidence by analogy. For curves over finite fields, the analog of the Riemann hypothesis is true: you can define a notion of the zeta function, and it was proved by Hasse, Weil and Deligne that the Riemann hypothesis holds for these. There's André Weil and Deligne. There's also a geometric Riemann hypothesis that's true: if you take a hyperbolic surface of finite area, you can construct something called the Selberg zeta function. Instead of primes, you use the simple closed geodesics; you write an Euler product and can expand it as a series over all geodesics. This extends to a meromorphic function, and Selberg showed it satisfies the Riemann hypothesis. Okay, so we have things that are zeta-like that satisfy it, but this evidence is only by analogy. Zero-free regions and zero-density estimates, just roughly: you can show there are parts of the critical strip that have no zeros.
And you can also show that in most of the critical strip there can't be too many zeros. These are a little technical to state, so I kept them out of the slides; it's a basic notions seminar. So we do have actual theorems providing some evidence. In particular, we can show that almost all zeros, in a density sense, are arbitrarily close to the critical line. So 100% of zeros are as close as you want to the critical line, but we can't show they're on it. Okay. How much time do I have? Just a moment. Is everyone still with me, kind of? Okay. So there are other types of explicit formula than this Riemann-von Mangoldt one. Here's one that I quite like, a formula of Landau. If you take any real number x, say bigger than one, and raise it to the zeros of the zeta function and sum over the zeros, it detects whether or not x is a prime power. You get T, which is the height up the critical line you're summing the zeros, times Lambda(x). That's the von Mangoldt function, which is log p if x is a power of the prime p and zero otherwise; just extend it to all real numbers, saying it's zero unless x is a prime power. And then you get a smaller error. So you can use this to detect primes. It's not a particularly efficient algorithm for detecting primes, but you can do it. Here I used 200 zeros to look between 2 and 10. Sure enough, it tells me 2, 3, 4, 5, 7, 8 and 9 are prime powers, but 6 isn't. Okay. And you can see the primes get bigger spikes than the prime powers, because you're getting log p: for 4, I'm getting log 2, not log 4; for 8, I'm getting log 2, not log 8; for 9, I'm getting log 3. The primes have big spikes, the prime powers have smaller spikes, and composite integers have no spikes, as does every other real x. All right, so primes and zeros know about each other. You can go crazy with explicit formulas, and the proof is really kind of exactly the same as the one I already showed you.
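The Lambda(x) in Landau's formula is easy to compute directly. Here is a small sketch (my own illustration, not from the talk) of the von Mangoldt function, the quantity the spikes in the plot approximate:

```python
import math

def von_mangoldt(n):
    """Lambda(n) = log p if n = p^k for a prime p and k >= 1, else 0."""
    if n < 2:
        return 0.0
    for p in range(2, math.isqrt(n) + 1):
        if n % p == 0:
            # strip out every factor of p; n is a power of p iff nothing is left
            m = n
            while m % p == 0:
                m //= p
            return math.log(p) if m == 1 else 0.0
    return math.log(n)  # no divisor up to sqrt(n), so n itself is prime

for n in range(2, 11):
    print(n, round(von_mangoldt(n), 3))
```

This reproduces the spike pattern: log 2 at 2, 4 and 8, log 3 at 3 and 9, log 5 and log 7 at 5 and 7, and zero at 6 and 10.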
So remember, I started with zeta prime over zeta times x^s over s. I can plug essentially any nice analytic function in there and do the same complex analysis. By plugging an analytic function into zeta prime over zeta and writing zeta prime over zeta as a series, what you get is these von Mangoldt coefficients times the Fourier transform of the function you started with, if you know what a Fourier transform is. Then you do the complex analysis: you pick up poles at the zeros of the zeta function, and here at the pole of the zeta function itself; this piece is what you get from the trivial zeros; apply the functional equation to come back, and you get the complex conjugate of the Fourier transform. So it's exactly the same proof I already told you; I just put in a function other than x^s over s. You get this formula, which is essentially the one observed by Guinand and Weil. One of the nice things is that it lets you play around with real analysis now, because I have a function here. I wrote the zeros as one half plus i gamma. I'm not assuming the Riemann hypothesis in this formula, so gamma may be complex; but if you assume the Riemann hypothesis, these are all real, and everything is real-valued. So if you assume RH, you can use real analysis to study this interplay. That's a useful thing. This Gamma_R factor is just built from the gamma function. If you don't know what the Fourier transform is, me defining it for you now isn't particularly helpful; if you do know, then you can maybe see it: remember, you get terms like powers n to the i gamma, and that's how the log n over 2 pi turns up here. Okay, so let me show you, as I promised, something that I did. Emanuel and I have been working on this kind of thing for more than a decade: how can we study the primes using a formula like this? So let's just look at the problem of how far apart consecutive primes can be.
So I've told you they can be arbitrarily far apart, but that's qualitative; let's try to be quantitative. Maybe the first person to really think about this was Bertrand, and he claimed there should always be a prime between x and 2x. I'm sure he did numerics. And it didn't take very long: Chebyshev proved it within about seven years. And the prime number theorem actually gives us a prime between x and x plus something that grows much slower than x: x over any power of log x I want. Log goes to infinity, so this is much smaller than x when x is big. Then Hoheisel made a big breakthrough: he showed there's always a prime between x and x plus some power of x strictly less than one, so better than saving logs. This is sometimes called the Hoheisel prime number theorem. I think the best exponent we can take in a Hoheisel-type prime number theorem is now a little bigger than a half. All right. Before we see what the Riemann hypothesis says, let's see what Legendre said. Oh yeah, this is nice: there should always be a prime between consecutive squares. This is one of the famous open problems in prime number theory. Convince yourself that if we could show there's always a prime between x and x plus 2 root x, this would follow; actually that's a little bit stronger than what Legendre asked, but, well, you can think about that later. This is wide open, even assuming the Riemann hypothesis. So not that close. I like this portrait of Legendre. I wonder if he commissioned it, back in those days when you had to pay people to paint your portrait. Anyway. All right. So I told you the Riemann hypothesis says that the number of primes less than x is Gauss's guess plus an error of size root x log x.
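Both Bertrand's postulate and Legendre's conjecture are easy to spot-check in a modest range. A quick sketch (my own, with arbitrarily chosen small ranges) using a sieve:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: return the set of primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return {i for i, is_p in enumerate(sieve) if is_p}

P = primes_up_to(1_100_000)

# Bertrand: a prime strictly between x and 2x, checked for x = 2..999
bertrand_ok = all(any(q in P for q in range(x + 1, 2 * x)) for x in range(2, 1000))

# Legendre: a prime between n^2 and (n+1)^2, checked for n = 1..999
legendre_ok = all(any(q in P for q in range(n * n, (n + 1) ** 2)) for n in range(1, 1000))

print(bertrand_ok, legendre_ok)
```

Of course, passing in a finite range proves nothing; Bertrand's postulate is a theorem (Chebyshev), while Legendre's conjecture remains open for all n.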
This was actually first observed by von Koch, about five years after the proof of the prime number theorem. It says there's always a prime in the interval from x to x plus, not the 2 root x we'd want, but some constant times root x times some logs. How would you show this? Again, an exercise if you want to get good at analytic number theory: assume there are no primes between x and x plus y, plug into this formula, and determine what value of y gives a contradiction. You take psi(x + y) minus psi(x), you get an integral from x to x + y, you get a lower bound for that integral, and you know the error term is not bigger than this. And Cramér, about 20 years later, was able to replace root x log squared x by root x log x. People who aren't number theorists feel like, so what? The number theorists are like, wow, how did you do that? Okay, and that's the best we can do; it has never been improved. This is like a 100-year-old result. Listen: we'd need to cut another log out and then get the constant below two to get Legendre's conjecture. It took Cramér 20 years to cut one log off, and in the last hundred years we haven't been able to do anything more. Okay. So if you can't solve a big problem, make it smaller. What can we say about the constant C? We know Cramér's root x log x, so what could C be? As far as I know, the first person to ever write down an explicit value, instead of a big O, asking what inequality can I actually get, obtained C equals 4, but only for x sufficiently large. Ramaré and Saouter said, well, I don't want x sufficiently large, I want all x, and they were able to get a constant that holds in all ranges.
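You can see how much room the von Koch/Cramér bound leaves by tabulating actual record prime gaps. A sketch (my own illustration) comparing each record gap below 10^6 with root p times (log p)^2 and with (log p)^2, the much smaller size Cramér separately conjectured to be the truth:

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes: sorted list of primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, is_p in enumerate(sieve) if is_p]

primes = primes_up_to(10 ** 6)
record, records = 0, []
for p, q in zip(primes, primes[1:]):
    if q - p > record:
        record = q - p
        records.append((p, q - p))   # (prime where the record gap starts, gap size)

for p, gap in records:
    print(p, gap, round(math.sqrt(p) * math.log(p) ** 2, 1), round(math.log(p) ** 2, 1))
```

Every record gap sits far below the root p log squared p barrier, and (for p beyond the first few primes) below a small multiple of (log p)^2, which is why cutting those logs is believed to be the right target.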
A few years ago, Dudek was able to get a constant a little bigger than one valid for all x, and showed that if you take x sufficiently large, you can get any constant bigger than one, no matter how close to one. And I guess that's where Emanuel and I got interested in this problem. Dudek had said he thought C equals one was the limit of the known methods, and we didn't quite agree; okay, anyway. So our kind of modest improvement of these classical results is that we can do better than one, both for x sufficiently large and, with a slightly bigger constant, for all x. This is joint with Emanuel Carneiro and Kannan Soundararajan. For sufficiently large x we get something like 0.84, and then 0.86 or 0.88 for all x, something like that. These numbers aren't sharp; they were just simple fractions close to the things we could prove. You want your theorems to look pretty, right? I don't want 0.8427136 in there, okay. All right. So we use this formula. Very hand-wavy: I'm going to do 30 pages in two slides. What I want to do is assume there are no primes in an interval of the shape I want. I start with a function whose Fourier transform has finite support; support means the set where it's non-zero, and I assume it's non-zero only between minus one and one. Then I dilate and shift the function so that I can put the support of the Fourier transform wherever I want, and I want it between x and x plus C root x log x. So this sum over primes has non-zero terms only on an interval of that form, and the sum is zero because we assumed there are no primes in that interval, right? Then with that choice of function you have to estimate the three other terms. This integral is harmless: you get this exponential, so it's an oscillatory integral and it basically decays to zero. This is the contribution from the pole; you get something. And then you get something from the zeros, after assuming the Riemann hypothesis.
You estimate the sum over zeros with an L1 norm: the pole contributes something like the value of our function at zero, and the zeros give something like the L1 norm of the function. So, very loosely, you get a contradiction as soon as C is bigger than something like this ratio. And now we've taken a number theory problem and thrown away all the number theory; it's just a real analysis problem. The problem is this: you have a continuous function; you want it non-zero at the origin, so normalize its value at zero to be one; its Fourier transform is finitely supported; what's the minimum possible value of its L1 norm? And here's a function. It's not the optimal choice, but it's pretty close to optimal, and the best ones look something like this: the Fourier transform is one bump of cosine. That's the thing we plug into the prime sum, and this is the thing in the sum over zeros. We were able to show, for this problem in general, that the minimal L1 norm is just a little bit smaller than this bound. So, a nice theorem; we were pretty happy with it. And the upper bound turns out to be what we need for the application, and we were pretty convinced it was close to sharp. We didn't work too hard on this. The week we were ready to submit, someone told us that this Russian mathematician, Gorbachev, had worked on the problem, and sent us the papers. They were all in Russian, from before that Russian journal was translated into English, so we used Google Translate to try to read what he did. And okay: he had done this 13 years before us. So we had to rewrite our paper a little bit. Okay, fine. Our paper gets accepted, we have to make small corrections, and the week we get the proofs (the first thing happened the week we were going to submit; this is now the week we get the proofs), Gorbachev emails us.
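For concreteness, here is one function of the sort described (my own hedged example, not the extremal function from the talk): take the Fourier transform to be a single cosine bump, proportional to cos(pi*xi/2) on [-1, 1]. Inverting it and normalizing f(0) = 1 gives f(x) = cos(2*pi*x) / (1 - 16x^2). The sketch below checks the normalization and estimates the L1 norm by a midpoint rule; the apparent singularities at x = 1/4 and x = -1/4 are removable, and the grid is chosen so it never lands on them:

```python
import math

def f(x):
    """f(x) = cos(2*pi*x) / (1 - 16 x^2); its Fourier transform is a single
    cosine bump supported on [-1, 1], and f(0) = 1.

    Both numerator and denominator vanish at x = +/- 1/4, so the singularity
    there is removable; the midpoint grid below never hits it exactly.
    """
    return math.cos(2 * math.pi * x) / (1 - 16 * x * x)

# Midpoint-rule estimate of the L1 norm over [-20, 20]; since |f| decays like
# 1/(16 x^2), the tails beyond +/- 20 contribute only O(10^-3).
h = 0.001
l1 = sum(abs(f(-20 + (k + 0.5) * h)) * h for k in range(40_000))
print(f(0.0), l1)
```

This particular bump is only an illustration of the shape of the optimization problem (value one at the origin, compactly supported Fourier transform, L1 norm as small as possible); the true extremizer, per the talk, is not known in closed form.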
He said: I saw your paper on prime gaps, and I want to let you know that I wasn't the first either; Hörmander and Bernhardsson did it, in 1993. Okay. So Gorbachev, a dozen years after Hörmander and Bernhardsson, had rediscovered their result, and then we rediscovered it again. Okay. So actually, this problem admits a unique extremizer. We don't know what it is in closed form, but the minimum is known to six digits. In fact, our proof only requires, instead of the Fourier transform being supported in [-1, 1], that it be negative outside [-1, 1], and we can still make the proof run. And if you're familiar with it, this is exactly the kind of condition that comes up in the sphere packing problems, which have gotten a lot of nice press. So these are the kinds of results we get for prime gaps. That's Emanuel; I found that photo online, I don't know how old it is, your hair was a lot better. And the picture of Sound I mirror-imaged, because it looked weird to have him looking away from Emanuel; you can see all the math on the board behind him reads backwards. I wanted to put them in alphabetical order, because math, but then I didn't want Sound looking off the page, so I just flipped it. And here are Viazovska and the others who work on sphere packing; similar kinds of analysis come up there. This picture, I guess, is optimal, right? It is packing in an optimal way. Okay, am I out of time? Five minutes, okay. So I did promise a part about primes close together. We don't think this has anything to do with the Riemann hypothesis, but it's a famous problem. We think that for infinitely many primes p, the number p + 2 is also prime: 3 and 5, 11 and 13, for instance. According to Wikipedia, this is the biggest known pair where we know both are prime. Trust Wikipedia; mostly, I trust Wikipedia. So this is known as the twin prime conjecture.
So obviously you can't have p and p + 1 both prime, apart from 2 and 3, because one of them would be even, right? So consecutive primes are at least two apart, and we think there are infinitely many pairs exactly two apart. Here's a list of the primes between 79 and 1279, with some twin primes among them. Can you see the red? Twin primes, twin primes, twin primes... anyway, the red is not so dark, but I marked all the twin primes. It seems like there are plenty of them to go around. But the same way that primes want to spread out within the integers, we think that twin primes want to spread out within the primes: as you move farther along the list of primes, the twin primes should get farther apart. So in a density sense, primes are 0% of the integers, and twin primes should be 0% of the primes. There's been some famous progress towards the twin prime conjecture recently. Again, nothing really to do with my talk, but I thought I'd advertise it because it's beautiful. Before that, I guess the most convincing evidence was Chen's theorem, which came right around when relations between the United States and China were opening up politically, when Nixon went to China. One of the first things American mathematicians did was go to China and get all of Chen's papers so they could translate them, because they were published in Chinese journals, in Mandarin, right? This was obviously 30 years before the internet was ubiquitous; there was no way for people in Europe or the United States to read what one of the great mathematicians of the 20th century was doing. Okay. He proved there are infinitely many primes p where p + 2 is either prime or a product of two primes. That's pretty close, but we're stuck there. Before that, Bombieri had shown there are infinitely many primes p where p + 2 is a product of at most three primes.
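A quick sketch (my own illustration) that lists twin prime pairs and shows their density within the primes shrinking, consistent with the 0% heuristic:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: sorted list of primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i, is_p in enumerate(sieve) if is_p]

def twin_pairs(primes):
    """All pairs (p, p + 2) with both entries in the given prime list."""
    pset = set(primes)
    return [(p, p + 2) for p in primes if p + 2 in pset]

primes = primes_up_to(10 ** 5)
print(twin_pairs(primes_up_to(100)))

# fraction of primes p <= N for which p + 2 is also prime, for growing N
for N in (10 ** 3, 10 ** 4, 10 ** 5):
    ps = [p for p in primes if p <= N]
    print(N, len(twin_pairs(ps)) / len(ps))
```

The fraction drops steadily as N grows, which is the density-zero behavior the heuristic (and the proven upper bounds) predict, even though no one can prove there are infinitely many pairs.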
That's part of why he won the Fields Medal. And Chen dropped it down to two. And we had believed that bounded gaps between primes was maybe a millennium off, so it was pretty shocking when Yitang Zhang, in 2013, showed that there are infinitely many pairs of primes a bounded distance apart. The bound was 70 million, but he wasn't trying to optimize the 70 million; he was just showing there is some constant such that consecutive primes are closer than it infinitely often. And part of what was shocking, everyone kind of knows his story, is that he wasn't even really a research mathematician: he was a lecturer at the University of New Hampshire, without a research position. And then he was immediately promoted to full professor. So he jumped. Shortly thereafter, Maynard and Tao independently came up with a different proof. It's slightly more powerful: it shows there are infinitely many pairs of primes fewer than 246 apart. I've actually met Maynard a couple of times, and I asked him once (he had been working on this before Zhang's proof): do you think you would have proved this if you hadn't known it was possible? And he said, I can't confidently answer that. He said, there's something about knowing the methods can do something that makes you push harder on the methods. So I wanted to know: would he, in November of 2013, still have proved bounded gaps between primes if he hadn't known Zhang's theorem? You could say, oh, he was so close to getting it first, but then the question is, philosophically, knowing something works is a big advantage, right? So yeah, he couldn't confidently say one way or the other; he didn't know what he was thinking in April 2013, whether it was going to work. That's what he related to me. I don't know if his mind has changed since, but I asked him that a few years ago.
So there's Zhang; I just like the picture, okay. He got a MacArthur genius award and moved on from New Hampshire. And there are Maynard and Tao. Okay, I think I'll end there. [Moderator:] Let's now thank Micah for the brilliant talk. If anyone has questions or comments... I can start with a question I thought about during your talk. You mentioned these other instances, in arithmetic geometry, where the Riemann hypothesis has been proved. Could you comment a little on how it is proved in those cases, for curves say, if you remember? [Micah:] Are there geometers in the room? Maybe some of you can comment on the proof of the Riemann hypothesis for curves; I mean, the full thing is something super complicated, right? It was a very big endeavor, a whole theory that runs to several thousand pages. But the first case was the Hasse bound, right? [Audience:] Yeah, that one is somehow more elementary, but the general case is very big. [Micah:] So I can loosely describe the Hasse bound. You can define the zeta function of an elliptic curve over Q, and then you want to figure out where it converges, so you need some control over the size of the coefficients of the Euler product; then you can decide where the product converges and where the series converges. Hasse's bound does this for elliptic curves, and it ends up being equivalent to showing that a certain zeta function mod p has its zeros on the unit circle, which you can think of as the Riemann hypothesis for an elliptic curve mod p.
[Audience:] But it's also very different in nature, because that Riemann hypothesis is algebraic, right? It concerns rational functions, in a sense; they're not infinite series. You have a finite number of zeros, and you want to prove they have certain absolute values, which is quite a different statement. But if you consider the L-function of an algebraic variety, then everything fits: the zeta function of the variety over a finite field is one local L-factor, and the product of the local L-factors, together with the factor at infinity, gives the completed L-function. The corresponding conjecture, the analog of the Riemann hypothesis, is about that completed L-function, with the corresponding functional equation. The case of the Riemann zeta function corresponds to the spectrum of Q, the simplest number field. [Moderator:] Okay. More questions or comments? Fernando, can you hear me? We have one from the audience first, and then I'll move to you. [Audience:] Can you explain in a nutshell the connection between the Fourier problem, where you find a bound for this Fourier extremal quantity, and sphere packing? [Micah:] I was just saying, loosely... where is my slide, can I go back? I think the bottom of the slide was cut off. For simplicity I was saying: imagine that the Fourier transform is supported between minus one and one, so it's non-zero only there. But if you go back through the logic of the proof, the function can actually be anything we want between minus one and one, because that part multiplies the prime sum we're assuming is zero; and then, if it's negative outside of that, we still get an inequality in the direction we want.
So you're led to minimizing over functions that are eventually negative, and those are exactly the constructions you need in the sphere packing argument, but in a higher-dimensional setting: you need functions where, instead of the Fourier transform being supported in some finite range, it's eventually negative outside that range. [Audience:] And what is the motivation, again: understanding the distribution of prime numbers? [Micah:] Well, it's the analogy from the beginning. If you can understand primes, you can kind of understand anything you want about the integers, right? So, I don't know, it depends whether or not you care about number theory. You know the quote by Gauss: mathematics is the queen of the sciences, and number theory is the queen of mathematics. It's sort of an argument about what is most pure. Mathematicians will claim they're purer than the other scientists, and number theorists will claim they're purer than all the other mathematicians. It's pretty useless from a practical standpoint; it's really knowledge for knowledge's sake, in the Greek sense, if you want to think of it that way. [Moderator:] Fernando, back to you, are you still there? [Fernando:] Yeah, I'm still here; somehow my camera doesn't work. Can you hear me okay? [Moderator:] Yes, we can all hear you. [Fernando:] Okay. I just wanted to say a few historical things about the question of the Riemann hypothesis for curves, and for varieties in general. Weil formulated these conjectures in the 1940s, and they were eventually proved completely by Deligne in '73, but the hardest part was precisely the analog of the Riemann hypothesis; the other parts had been proved earlier, for instance by Dwork: the fact that the zeta function is a rational function, and so on.
What I want to point out is that the interplay between number theory and algebraic geometry that Deligne established is quite remarkable, because by considering the question of the zeros of these analogs of the zeta function, which amounts to saying something about the absolute values of the roots of certain polynomials, he inferred something completely geometric, purely geometric, over the complex numbers, which led to the notion of mixed Hodge structures, a fundamental part of the tools Deligne used in his proof. That came from studying what happens over finite fields; it was an inference from finite fields to complex geometry. And in the other direction, I don't know the details well, but I understand that the inspiration that finally allowed Deligne to nail the Riemann hypothesis came from a proof in analytic number theory due to Rankin. There was an idea of taking powers, something Rankin used, which Deligne translated back into the finite field setting. So there was a transfer in the other direction as well. [Micah:] Symmetric power L-functions, is that it? [Fernando:] Yes, yes. And the other thing to say is that a significant amount of the formulation of algebraic geometry in its modern form, by Grothendieck and everybody else, in this very general form where things you're used to over the complex numbers, like cohomology and various other purely geometric notions in the complex or real sense, are transferred to finite fields in a very algebraic and abstract way... the motivation for a lot of that was the attempt, the search, for a proof of the Riemann hypothesis in the finite field case. So it's a central story in the mathematics of the 20th century. [Moderator:] Thank you. Thank you, Fernando. Other questions or comments?
[Audience:] I know there has been a line of research that tries to relate the zeros of the Riemann zeta function to the eigenvalues of some operator. What do you think about it? Is there any interesting outcome of that? [Micah:] So this is a meta-philosophy, sometimes called the Hilbert-Pólya conjecture. If you could show that the zeros in some way corresponded to the eigenvalues of a Hermitian operator, then in particular they would all be real and lie on a line, right? I think Hilbert and Pólya were asking what conditions you would need on an analytic function in order for all its zeros to lie on a line. So it's some magical correspondence. I'm not sure it's a useful way to prove the Riemann hypothesis, but what is very useful is thinking of the zeros as having some kind of spectral interpretation. There's this whole interplay now between analytic number theory and random matrix theory: you can model the Riemann zeta function, and other functions like it called L-functions, using random matrices from the classical compact groups. The zeros want to behave like eigenvalues, and the zeta function itself wants to behave like a characteristic polynomial. It's a conjectural interplay, not a provable one, if that makes sense. So if there's a problem where we don't know what should happen for the zeta function, we can go over to the random matrix side and ask what happens there, where there are many more useful tools; then you try to write down the analogous statement, come back, and in almost every case you can then see that behavior on the zeta function side. The zeta function wants to behave like random matrices. There are beautiful papers of Keating and Snaith that started this off; well, Montgomery first realized the zeros show this sort of spectral behavior, and Keating and Snaith took it much farther.
They say: if you're at height T on the critical line, you should set N equal to log(T/2 pi) and use N by N matrices. So they know, at each height on the critical line, exactly what size of matrix should philosophically describe the behavior. It's a conjectural understanding, but I think it all started from this Hilbert-Pólya idea. I know some people have tried to prove the Riemann hypothesis that way; I'm not convinced it's fruitful, but maybe that's because I don't understand what they're doing. A lot of people have said it's the hardest of the seven millennium problems because no one has had a good first idea. For the other ones you can say, I think you want to go this way to prove it; for this one, I don't know that anyone has said, I think a proof will start with this step. I don't know that anyone has had a reasonable first step. [Moderator:] We have some questions from the internet, three questions from the Zoom. First: thank you very much, Micah, for a great talk; what exactly did Littlewood do about the Riemann hypothesis in his thesis? [Micah:] Let me think. A lot of the early estimates for the zeta function are due to Littlewood. If you look in Titchmarsh's book, The Theory of the Riemann Zeta-Function, I bet there are 20 or 30 papers by Littlewood in the references. Estimates in particular: to prove the prime number theorem, and to sharpen it, you need to know how the zeta function, the log of the zeta function, and the logarithmic derivative behave in different regions, how big they are, where the zeros are, how they cluster. Almost all of those early estimates were proved by Littlewood. So I don't know that he worked on the Riemann hypothesis itself in particular, but a lot of the early theory of the Riemann zeta function is from Littlewood. [Moderator:] All right, second question, by Lawrence Vijaya.
Is it possible to work on maximal gaps between primes in arithmetic progressions? [Micah:] Yes. I can remember two of the three authors: an analog of our theorem was proved by Chirre and coauthors. Emanuel, do you remember the third author? Yes, so if you think of x as large and the modulus q as fixed, they did the analog of our theorem. In fact, the problem of primes in arithmetic progressions where q isn't fixed is something Emanuel and I have been working on during my time at ICTP. So the answer to both of those questions is yes. [Moderator:] And the final question, by Andrés Chirre: is there any regularity or pattern in the ratio between consecutive prime gaps? [Micah:] If you normalize by the average spacing, it's conjectured to be Poissonian. There are papers by Soundararajan on the arXiv that describe this. [Audience:] You mentioned in your talk that one cannot prove the twin primes have proportion zero within the primes? [Micah:] You can; yeah, I said that wrong. We have upper bounds for the number of twin primes, so I misspoke. The twin primes do have zero proportion; we can absolutely prove that. We can get upper bounds for the number of twin primes; what we can't get is lower bounds. So no, I completely misspoke: we do know the proportion is zero. [Moderator:] Any other questions or comments? That's it from the internet. Let me remind everyone that on October 27th, if you liked this very nice talk, we are organizing a Number Theory Day here at ICTP. It will be a Thursday, the whole day, with four lectures. Micah is going to give one of them; we have Danylo Radchenko, who works on sphere packing; we have Umberto Zannier from the Scuola Normale di Pisa; and Pietro Corvaja from the University of Udine. It's going to be a day full of lectures in number theory.
You're all invited to come if you want to, okay? So let's thank Micah again for the lovely talk. There are refreshments outside if you'd like to join. And as is the tradition, the diploma students get to talk to the basic notions speaker by themselves: we'll finish the broadcasting and the video, and the diploma students can stay in the room and chat with Micah on their own, and ask anything they want. Thanks, Micah, it was great. [Micah:] Thank you, likewise. [Moderator:] Okay, very nice. Thanks, Micah.