So I want to talk to you about the pretentious approach to analytic number theory, which Soundararajan and I have been developing over the last five years. Some of you were at the summer school we had some years ago, and have access to the book that never appeared on the subject. So for those of you who don't know much about it, let me explain what this is all about. The general approach to analytic number theory stems back to Riemann's memoir from 1859 and involves studying the distribution of zeros of the Riemann zeta function. As we know, it's been a very profitable approach to L-functions and naturally arising questions in arithmetic for a long time. And yet many theorems have been proved without zeros of L-functions. To a certain extent the community has said, well, those approaches are ad hoc, and has looked down on them a little bit, particularly, if you like, the elementary work championed by Paul Erdős. And yet many of the great theorems of analytic number theory, especially about the distribution of prime numbers, come from these so-called ad hoc approaches. So a few years ago, Soundararajan and I had been using certain techniques that we called pretentious, and I'll try to explain what pretentious means in these lectures, to prove theorems on open questions and to try to develop the subject. And we started to become interested in which established theorems could be proved by these pretentious methods. We attended a lecture by Henryk Iwaniec in Princeton five years ago, where he announced a proof by himself and Friedlander of Linnik's theorem: that in every arithmetic progression that can contain primes, there is a relatively small prime. And Iwaniec and Friedlander's proof avoided the deep ideas about zeros of L-functions that had previously been part of the very difficult proofs of Linnik's theorem.
The proof of Friedlander and Iwaniec is not easy itself, but it avoids these tremendous technical difficulties. If you like, it's an elementary proof, though a very hard elementary proof. So it gave us courage to say, well, maybe these techniques we've been working on could prove everything in classical analytic number theory. We set ourselves that goal, and the first step was to try to prove Linnik's theorem ourselves. And we did. We surprised ourselves that we had some ideas for a technique, and they worked, and we gave a pretty short proof of Linnik's theorem, certainly much easier than what existed at that time in the literature. And we wrote up, if you like, what we could do in proving the classical results you might find particularly in Davenport's book and in Bombieri's large sieve book. We were able to accomplish quite a bit, but there were some clear flaws in what we had. Most importantly, we couldn't get a good error term in the prime number theorem. I'll remind you in a moment of what we know about the prime number theorem, the number of primes up to x being something like x over log x. We could achieve an error term that was little o of x over log x, but we couldn't save even a single extra power of log x. So we were unable to prove that the number of primes up to x was x over log x plus big O of x over log squared x; our methods failed us on that. So that was one big flaw in what we were doing. And that flaw has a knock-on effect, because if you try to prove a Bombieri-Vinogradov theorem, which is central to many of the developments in analytic number theory, then you need to win in the prime number theorem, and in the prime number theorem for arithmetic progressions, by an arbitrary power of log x: an error term of big O of x over (log x) to the A, for any fixed A, is what you need.
The second thing was that the subject, and I'll explain all this, centers around what's called Halász's theorem. And the proof of Halász's theorem, or Halász's own proof, is a little opaque; it's a little hard to really understand the motivation behind it. After Halász there have been many improvements to his results, in fact we had a paper improving them, but all of them started with the same fundamental construction that was somewhat opaque. And this meant that a lot of the proofs we had worked, but it was rather difficult to understand why they worked in a larger setting. Those I would regard as the two main flaws in the book we wrote up three years ago. And what's happened in the meantime is that both flaws have been removed, and not by us. One was removed by Dimitris Koukoulopoulos, who is here, who showed us how to prove the prime number theorem, not only a strong version of the prime number theorem, but as strong as can be done by classical methods. And more recently, Adam Harper came up with a new proof of Halász's theorem; it's not incredibly easy, but it allows one to appreciate what's going on and makes it much easier to use the techniques. So in both cases, we've tried to further develop the theory, building on what Koukoulopoulos did and what Harper did. And that's what I'm going to explain to you here. These developments mean that we now feel we have a strong theory that rivals the classical theory. And, well, maybe I should embarrass Soundararajan by saying we'll get the book out in nine months. But no, we've got to work on it and get it out. There are people here I know who would love to see it out so that they can use the ideas. So it will take some writing, because there's a lot in it. And then people like Maksym keep on proving consequences that one would like to include, so it makes it difficult to finish a book when people keep proving nice theorems.
Anyway, let me change the title, because both of these new proofs come down to doing something a little bit surprising, and I'll call it Perron's formula revisited. So that's really going to be my title for the three talks. And in these talks, what I hope to do, if I have time, is show you how the proof of the prime number theorem goes in its strongest form, the Vinogradov-Korobov form; give you the new proof of Halász's theorem, and explain what Halász's theorem is; and hopefully prove Linnik's theorem in a way that you can understand. So it won't use anything very deep. OK, so what I want to do is start with the classical proof of the prime number theorem, and I'm going to assume that you know it already. Now, when one says one knows the proof of the prime number theorem, there are very few people who could actually just write it out, because although the ideas are very elegant, they really are, you have to go through a lot of steps, and you have to make a lot of estimates work in order to make the proof go through. So I'm just going to go through the proof quickly, maybe in the next 20 minutes, and some of the notions that go into the usual proof of the prime number theorem will come into what we're doing. So let's just start off with basic definitions. So Lambda of n is just a convenient weight. What we want to do is count primes, so we weight by 0 if n is not a prime power; and in fact it's convenient, for technical reasons, to count prime powers, so if n equals p to the k is a prime power, we count log p. This is the von Mangoldt function, as it's called, and it fits nicely into lots of formulas. So to do the prime number theorem, instead of counting the primes, we count them with this weight. And instead of x over log x expected with some error term, or really the integral of dt over log t, we expect the main term to be x. And then the question is: what error term? So let me just give two, maybe three.
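Since the von Mangoldt weight is so central to everything that follows, here is a small sketch in Python (my own illustration, not part of the lecture; the function names are mine) of Lambda(n) and the weighted count psi(x), which the prime number theorem says is asymptotic to x:

```python
import math

def von_mangoldt(n):
    """Lambda(n) = log p if n = p^k is a prime power, and 0 otherwise."""
    if n < 2:
        return 0.0
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            # n was a power of p exactly when nothing else is left over
            return math.log(p) if n == 1 else 0.0
        p += 1
    return math.log(n)  # no factor up to sqrt(n), so n itself is prime

def psi(x):
    """Chebyshev's psi(x) = sum of Lambda(n) for n <= x; the PNT says psi(x) ~ x."""
    return sum(von_mangoldt(n) for n in range(2, int(x) + 1))
```

For example, psi(10**4) comes out within a couple of percent of 10**4, as the main term x predicts.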
So we get big O of x to the half times log squared x if we assume the Riemann hypothesis; we'll get into the Riemann hypothesis a little later today. The best unconditional result known is due to Korobov and Vinogradov: an error term of x times the exponential of minus a constant times (log x) to the 3/5 over (log log x) to the 1/5. And that's the best result known, up to the constants. And then the one you've seen in the textbooks, if you've worked through the proof, would be x times the exponential of minus a constant times root log x. We'll more or less give all the details of the proof of that today, and we'll indicate how you get the stronger one. OK. So let's just remember Perron's formula. This is a beautiful idea for recognizing an inequality amongst integers. Very simply, for c with real part greater than 0, the integral 1 over 2 pi i times the integral of y to the s over s, ds, up the vertical line through c, equals 1 if y is greater than 1, equals 0 if y is less than 1, and something happens in between: at y equals 1 you get the average, one half. The important thing is that we have a way of recognizing inequalities for positive real numbers using this integral, and hopefully you've played with it before. You prove it just in the standard way of pulling the contour to the left or right. For y greater than 1 you pull to the left, you get the pole at s equals 0, which leads to the 1, and the value on the added contours is small. For y less than 1 you pull it to the right, there's no pole, and the value on the added contours is small. Now, it's not very useful to have an integral going all the way to infinity, so what one wants to do is truncate it at some height T. And then one can work out an error term for that, which looks like y to the c times the minimum of 1 and 1 over (T times the absolute value of log y). You find this, for instance, in Davenport's book. That's if y is not equal to 1, and if y equals 1 you get something like c over T. So the point is that you can work out the height at which you can truncate and not have to worry about the error term. I'm not going to spend much time thinking about that.
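To see Perron's formula in action, here is a small numerical sketch (my own illustration, not from the lecture) that approximates the truncated integral (1/(2 pi i)) times the integral of y^s/s from c minus iT to c plus iT, and checks that it is near 1 for y greater than 1 and near 0 for y less than 1:

```python
import cmath

def perron(y, c=2.0, T=500.0, dt=0.01):
    """Trapezoid-rule approximation to (1/(2 pi i)) * integral of y^s / s ds
    along the segment from c - iT to c + iT.  This should be close to the
    indicator of y > 1, with an error on the order of y^c / (T |log y|)."""
    n = int(round(2 * T / dt))
    total = 0.0 + 0.0j
    for k in range(n + 1):
        s = complex(c, -T + k * dt)
        w = 0.5 if k in (0, n) else 1.0  # trapezoid endpoint weights
        total += w * y ** s / s
    # ds = i dt, so the i in 1/(2 pi i) cancels, leaving 1/(2 pi)
    return (total * dt / (2 * cmath.pi)).real
```

With these defaults, perron(2.0) comes out close to 1 and perron(0.5) close to 0, consistent with the truncation error y^c min(1, 1/(T|log y|)) quoted above.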
You're just going to have to trust me that there is such a height. But again, I'm sure you've played with the proof of the prime number theorem and know that's the case. So what is the key application of Perron's formula? Thanks to Riemann, I guess. Suppose we wish to look at the sum of a n for n up to x, and let me just assume x is not an integer, since we always have this problem that y equals 1 is a bit weird; this way I can be correct and not have to write extra terms. Then the way we try to sum a sequence of interesting numbers a n up to x is to write it as the sum over all integers n of a n times 1 if n is less than or equal to x and 0 if n is greater than x. But I'm going to rewrite that condition as x over n greater than 1 or x over n less than 1. So the idea is you can just plug in what we've got up there, taking y equals x over n. And when we plug in y equals x over n, we've got an infinite sum and an infinite integral. So the point of having a little flexibility with c is that we can take it far enough to the right that everything is absolutely convergent, and so we can swap the order of the integral and the summation. And so we end up with 1 over 2 pi i times an integral. Well, let's have a look at what's inside the integral. I've got a n, and I plug in y equals x over n, so it's a n times x to the s over n to the s. The part that's interesting, that varies with n, is a n over n to the s, and the rest is x to the s over s. Summing over n, we can replace the sum by the Dirichlet series, which we'll just call A of s. And so this is the application of this formula that is most useful in number theory: in order to understand the sum of something arithmetic up to x, all we need to do is understand the integral of a Dirichlet series times x to the s over s along an infinite vertical line. So you could argue we've taken a nice finite discrete combinatorial problem and made it into a hard analysis problem. But sometimes that's useful. OK, so the most famous application will be the one where we work with the Lambda n's.
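In symbols, the manipulation just described is the identity (for c far enough to the right that everything converges absolutely, and x not an integer):

```latex
\sum_{n \le x} a_n
  \;=\; \sum_{n \ge 1} a_n \cdot \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty}
        \frac{(x/n)^{s}}{s}\, ds
  \;=\; \frac{1}{2\pi i}\int_{c-i\infty}^{c+i\infty} A(s)\,\frac{x^{s}}{s}\, ds,
\qquad
A(s) = \sum_{n \ge 1} \frac{a_n}{n^{s}}.
```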
So if you go to real part of s greater than 1, you write out zeta of s as the sum of 1 over n to the s, and you write it as the Euler product over the primes. Then, once you're in the domain of absolute convergence, you can do things like take logarithmic derivatives, and when you do the calculation carefully, what comes out is that minus zeta prime of s over zeta of s is the sum over n of Lambda n over n to the s. So now we can just apply the formula over there. And we find that the sum over n up to x of Lambda n equals 1 over 2 pi i times the integral, let me just truncate it all in one go, just believe me that you can, at some suitable height T, of minus zeta prime of s over zeta of s times x to the s over s, ds. And I'll just say what the right error term is when you do all that: something like x log squared x over T, plus a log x, because I was careless about what happens when you're at an integer. OK, so this is all standard. So the game now is that we have the integral of a rather hairy function, and we have it to the right of 1, because we're doing our manipulations where everything is absolutely convergent and you can manipulate without too much worry. And again, we want to play complex analysis. So what we're going to do is pull the contour to the left. Having got this integral up to height T, we'll create a rectangular contour, and the hope is that the integrals along the added segments are nice and small. And then everything will be picked up by Cauchy's theorem, which gives the residues of the poles of the integrand inside the contour. So we'll talk in a minute about whether or not it's easy to prove that the integrals along these lines are small, but let's first think about the poles of the integrand. So what are the poles of this? Well, the Riemann zeta function, I'm not going to get into this, but as I hope you know, it's meromorphic, and it has a pole of order 1 at s equals 1, its only pole.
So at s equals 1, zeta of s looks like 1 over (s minus 1) plus gamma plus further terms as s tends to 1. That's the expansion for zeta, and that's the only pole. Now, OK, why am I saying that? Well, let's have a look: what are the poles of the integrand? Well, x to the s has no poles. For 1 over s, there's a pole at s equals 0, but that's pretty benign; we'll think about it in a minute. For zeta prime over zeta, we either have a pole of zeta prime or a zero of zeta. The poles of zeta prime are the same as the poles of zeta, because zeta is meromorphic, and then there are the zeros of zeta. So what do we have? We have the pole at s equals 1. And the pole at s equals 1 gives us what? Well, if zeta has a pole of order 1 at s equals 1, then zeta prime over zeta has a pole of order 1 there with residue minus 1, so that minus 1 times the minus sign in front gives 1. Plug s equals 1 into x to the s over s and we get x over 1. So that pole gives us a residue of x. At s equals 0, we get the residue minus zeta prime of 0 over zeta of 0. And otherwise, we've got poles at s equals rho, where zeta of rho equals 0. Those are the possible poles. The first two we can handle easily; the zeros require more thought. So this is the main difficulty in Riemann's approach: understanding these zeros. That's what's difficult, and it's still pretty mysterious. So what happens if zeta has a zero of order m at rho? When we take the logarithmic derivative, we get m over (s minus rho), and so minus zeta prime over zeta contributes minus m over (s minus rho). Typically, when we write the formula, we act as if we've got m separate zeros, which makes it beautiful to write: the formula becomes x, minus the sum over the zeros rho of x to the rho over rho, counted m times if rho is a zero of order m, minus zeta prime of 0 over zeta of 0. You can actually prove this exact formula if you let T go to infinity, but what's more convenient for us is not letting T go to infinity, and then we get the error term from before. OK, so this is all very standard.
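Written out, the exact formula just described is von Mangoldt's explicit formula (valid for x greater than 1 and not a prime power, with the nontrivial zeros rho summed symmetrically; the last term collects the trivial zeros on the negative real axis), together with its truncated version:

```latex
\psi(x) \;=\; x \;-\; \sum_{\rho \,:\, \zeta(\rho)=0,\; 0<\operatorname{Re}\rho<1}
      \frac{x^{\rho}}{\rho}
\;-\; \frac{\zeta'(0)}{\zeta(0)}
\;-\; \tfrac{1}{2}\log\bigl(1 - x^{-2}\bigr),
\qquad
\psi(x) \;=\; x \;-\; \sum_{|\operatorname{Im}\rho| \le T} \frac{x^{\rho}}{\rho}
 \;+\; O\!\left(\frac{x \log^{2} x}{T} + \log x\right).
```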
If you know this well, sorry, but I thought since it's a school it's worth going over for everybody. [A question about which zeros contribute to that sum.] Yeah. So there are lots of zeros: there's the bunch we all talk about in the critical strip, but there are also the trivial zeros along the negative real axis going back here. So maybe I have to be a little bit careful, depending on how far back I take the contour, but it's not really going to affect things. OK, so now the question is: can we make this argument work? Now, you have to think things through. Like, for instance, when you try to pull the contour across here, what if you happen to bump into a zero of the Riemann zeta function as you go? Then you're going to hit a pole of zeta prime over zeta, so it's not going to be so easy to just bound the integrand. So a good idea, it seems to me, is to take your contour so that it sneaks halfway between two zeros of the Riemann zeta function. Now, is that easy to do? Yeah, it's easy to do, because what you can prove, and it takes some work, is that the number of zeros up to height T is about a constant times T log T, which means there's an average gap between zeros at height T of about 1 over log T, or 2 pi over log T, in fact. So somewhere around there, there's going to be such a gap, and you just go halfway between two zeros, and that allows you to avoid the problem of stepping on a pole. So when you do that, you can get some idea of the size of zeta prime over zeta. And, to cut a long story short, in the region where the real part of s is between minus 1 and 2, you can prove something like this: minus zeta prime of s over zeta of s is dominated by, and perhaps it's not surprising, something like the sum of 1 over (s minus rho); I mean, that's what appears in the formula over there. And it's dominated by the s minus rho that are close in. In other words, the sum is over the zeros rho such that the absolute value of s minus rho is less than 1.
And you can prove this plus some sort of error term. So with this sort of estimate, you can work with zeta prime over zeta; let me not get into that too much. So what I want to do is just focus on the formula here and see what we can deduce from it. The sum, of course, I've called psi of x over there. So we want to bound the error, psi of x minus x, and the main thing here is the sum of x to the rho over rho. What would be nice is to be clever enough to find some cancellation between the terms for different zeros. But unfortunately, we don't really know how to do that. So the best we can do is bound it by the sum over these zeros of the absolute value of x to the rho over the absolute value of rho. Plus, well, zeta prime of 0 over zeta of 0 is a constant, so we just need to worry about this sum. And I'll take T in a suitable range just so that I don't have to simplify things as we go along. Now, the reason I write this down is simply to note that the absolute value of x to the rho is x to the real part of rho, because if rho is sigma plus i t, you can forget the x to the i t: it has absolute value 1. Using the fact that the number of zeros up to height T is about T log T, and also that one can show there are no zeros of the Riemann zeta function very close to the real axis, one can show, by partial summation, that the sum over the zeros up to height T of 1 over the absolute value of rho is at most about log squared T. And so what we end up with here is a bound on this term which is like x to the maximum real part of rho, over zeros rho whose imaginary part is at most T, times log squared T, plus the second error term. So this is the key calculation in getting the error term in the prime number theorem. And you can see very clearly that what's really important is how big the real parts of the zeros in this box get. Now, what would be convenient is if all the zeros, and we certainly know there are zeros on the half line, were on the half line.
In fact, billions of zeros have now been calculated to be on the half line. So suppose we believe the conjecture that all the zeros are on the half line. If all rho satisfy real part of rho equals a half, or less than or equal to a half because there are some zeros further to the left, then here we've got x to the half times log squared T. If we pick T as root x, the second term will also be about x to the half times log squared x, so take T equals root x, and we get the error term we said over there: x to the half times log squared x, on the Riemann hypothesis, that is, under the assumption that all the zeros lie on the half line or to its left. So what we're going to look at in a bit is what happens with zero-free regions. Maybe I'll just do it now rather than leave it hanging. What we'll eventually prove is a zero-free region of the following shape: if the imaginary part of rho is at most T, then the real part of rho is less than 1 minus some constant c over log T. And if we can do that, we just plug it in; let's just forget the logs, these are the main terms up to logs. Then, if we want to minimize the total error, the way to do it is to equate the x to the (1 minus c over log T) with the x over T, and that will require us to take T to be something like the exponential of root log x. And when we plug that in, we get the standard error term one gets in the textbooks. The stronger error term requires a better zero-free region, going somewhat further in than c over log T. OK, so what matters for proving the prime number theorem is a zero-free region, and let's just talk a little bit about how one goes about that. The first step, if you like, is to show that you can't have zeros on the 1-line, and then to come in a little bit. So here's the standard argument. Suppose zeta of (1 plus i t) were 0. By the Euler product, we would then have the product over primes of (1 minus 1 over p to the (1 plus i t)) to the minus 1 tending to 0.
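The optimization just sketched can be written out explicitly (a standard computation; c and c' denote unspecified positive constants):

```latex
|\psi(x)-x| \;\ll\; x^{\,1-c/\log T}\,\log^{2}T \;+\; \frac{x\,\log^{2}x}{T};
\qquad
x^{\,1-c/\log T} = \frac{x}{T}
\;\Longleftrightarrow\; \log T = \sqrt{c\,\log x},
```
```latex
\text{so taking } T=\exp\!\bigl(\sqrt{c\log x}\bigr)
\quad\Longrightarrow\quad
\psi(x) \;=\; x + O\!\bigl(x\,e^{-c'\sqrt{\log x}}\bigr),
```
which is the textbook error term mentioned above.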
And what is this kind of telling us? You've got the 1 over p weight, and you've got this p to the i t, and arguably it's telling us something about the way the p to the i t's tend to point, if this is to be the case. Each factor (1 minus 1 over p to the (1 plus i t)) to the minus 1 is approximately 1 plus p to the minus i t over p. So if the p to the i t's looked like they were pointing predominantly towards 1, the positive real direction, then these factors typically have size bigger than 1, and the overall product might go to infinity, the way zeta does near its pole. If it averaged out that they didn't point in any particular direction, you might expect the product to be a constant, a bit like those L-function calculations in the previous lecture. But the overall product here is supposed to be going to 0, so the way we force it to go to 0 is that the p to the i t's predominantly point in the negative real direction, towards minus 1. Predominantly is a very vague term that I'm going to make less vague in a minute. And this is actually how Davenport talks about things before he gives Mertens's proof. Now, if that's the case, if the p to the i t's tend to point towards minus 1, then the p to the 2 i t's, their squares, tend to point in the direction of minus 1 squared, which is plus 1. But that tells us that we should expect zeta of (1 plus 2 i t) to behave like a pole, to diverge towards infinity. And, as I said, fairly simple considerations about the zeta function tell us the only pole is at s equals 1. So this is the standard heuristic for finishing off the proof of the prime number theorem.
And this is proved nicely using a clever identity of Mertens in probably the treatments that you've read. But if you look at both de la Vallée Poussin and Hadamard, essentially they were trying to capture this idea in their proofs; it was Mertens who came later, with the proof via the cosine identity. So what I want to do is go back to this original heuristic and try to understand it a little better. To do that, I'm going to make a definition. I guess I'll use the bottom of this board. For coefficients a n greater than or equal to 0, let me define a distance between two functions. I've set this up in a lot of generality, but bear with me. Maybe I should just cut to the chase with the simplest example, though the whole thing works in some generality: take a n to be 1 over p if n equals p is prime, and 0 otherwise. And what's the idea going to be? The idea is that we have two multiplicative functions, f and g, both with values in the closed unit disc, and the distance squared is the sum over primes p up to x of (1 minus the real part of f of p times the conjugate of g of p), all over p. This is some sort of measure of how close f and g are to each other. So if f and g were to be equal and also to have values on the unit circle, then f times g conjugate would be 1 everywhere, so 1 minus its real part would be 0, and this so-called distance function would be 0. It's not quite a distance function, because if f equals g but takes values strictly inside the unit circle, then this wouldn't actually equal 0. But there's something good about this definition: it satisfies a triangle inequality, and that's what's really going to be relevant to us. There are several forms of the triangle inequality; let me just give a traditional one: the distance from f to h is at most the distance from f to g plus the distance from g to h. I have a preprint on the arXiv quite recently, for the celebration of the 100th anniversary of the Mathematical Association of America, where I try to describe some of these ideas. And there's a competition there to find the best proof of this triangle inequality, because we don't like our proof very much.
There have been three entries so far, one of which is very good, I think. Anyway, have a look there, on the arXiv, if you like elementary identities. OK, so what's the point of this in this context? The log of the Riemann zeta function at s, to the right of 1, is the sum over the prime powers p to the m of 1 over (m times p to the m s), which is 1 over (m times p to the m sigma) times p to the minus i m t; here I'm putting s equals sigma plus i t. So I want to consider the natural quantity log of (zeta of sigma over the absolute value of zeta of s). When you take the absolute value of something, you can take the real part of the log upstairs, so that's the real part of the log of (zeta of sigma over zeta of s). And if we just use this formula, what do we get? Well, s and sigma have the same weights 1 over (m p to the m sigma), so we get the sum over the prime powers of 1 over (m p to the m sigma) times (1 minus the real part of p to the minus i m t): for zeta of sigma, t is 0, so the first part is just 1, and the other part comes from zeta of s. Now, if you look at our definition up there, this looks pretty similar: with the appropriate set of weights, this is the distance squared between 1 and n to the i t, taken all the way to infinity. So the triangle inequality gives us simply that the distance from 1 to n to the 2 i t is at most the distance from 1 to n to the i t plus the distance from n to the i t to n to the 2 i t. And by the definition up there, I can rewrite the second one, dividing both functions by the same unimodular function n to the i t, as the distance from 1 to n to the i t again. So in other words, the distance from 1 to n to the 2 i t is at most 2 times the distance from 1 to n to the i t. So now what am I going to do with this? I'll simply square this, substitute in these identities, and then we get back, well, almost, we almost get Mertens's inequality; we actually get this thing. OK, so all I'm doing here is squaring: this is d squared.
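Here is a small numerical sketch (my own illustration; the names are hypothetical) of the distance just defined, with a n equal to 1/p at primes, together with a check of the triangle inequality and of the translation invariance used above for the functions n to the i t:

```python
import math, cmath

def primes_upto(x):
    """Primes up to x by a simple sieve of Eratosthenes."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p in range(2, x + 1) if sieve[p]]

def dist(f, g, x):
    """Pretentious distance D(f, g; x) for multiplicative f, g with |f|, |g| <= 1:
    D^2 = sum over primes p <= x of (1 - Re f(p) * conj(g(p))) / p."""
    s = sum((1 - (f(p) * g(p).conjugate()).real) / p for p in primes_upto(x))
    return math.sqrt(max(s, 0.0))

def nit(t):
    """The completely multiplicative function p -> p^{it}, a point on the unit circle."""
    return lambda p: cmath.exp(1j * t * math.log(p))

one = nit(0.0)
```

One can then check that dist(one, nit(2t), x) is at most dist(one, nit(t), x) plus dist(nit(t), nit(2t), x), and that the last distance equals dist(one, nit(t), x), which together give the factor of 2 above.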
The d squared over there is less than or equal to 4 times the d squared over there; I'm substituting from the line above, at sigma plus i t and sigma plus 2 i t, and when the smoke clears, taking exponentials, this is what you get. So I'll just remind you in a minute how Mertens's inequality works, but in Mertens you get a power of 4 with a plus sign. So this wasn't quite what we wanted. But actually, that's not very hard to repair, because I can go through the same argument again, purposely building the inequality the other way round from the zeta functions here, and the square of the minus sign doesn't really matter. When you work it back through, that puts zeta of s upstairs, and we get the plus 4 here. Again, just trust me that the algebra works; you can verify it yourselves. So this distance function is a somewhat flexible tool for creating inequalities. And then Mertens's famous argument says: suppose you had a zero at 1 plus i t. As sigma comes in towards 1 from the right, because zeta is analytic there, the factor with zeta of (sigma plus i t) to the 4th looks like (sigma minus 1) to some positive integer power times 4, so at least (sigma minus 1) to the 4, while zeta of sigma cubed is about 1 over (sigma minus 1) cubed. So together we get at least one factor of (sigma minus 1). The only way the whole product could stay greater than or equal to 1 as sigma goes to 1 is if the zeta of (sigma plus 2 i t) factor compensates by looking like 1 over (sigma minus 1). But then zeta would have a pole at 1 plus 2 i t, which is impossible. So that's Mertens's beautiful argument to finish off the proof. This is sort of a segue between the heuristic that everybody used and the eventual proof. OK, I can see I'd better not get into everything I was hoping to get into. So let me go back to the beginning of this proof. Oh, I've rubbed out what I wanted, of course.
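As a sanity check on the inequality this argument produces, the classical Mertens inequality, zeta(sigma) cubed times the absolute value of zeta(sigma + it) to the 4th times the absolute value of zeta(sigma + 2it) being at least 1 for sigma greater than 1, can be tested numerically. Here is a sketch (my own illustration), using a crude Euler-Maclaurin approximation to zeta:

```python
def zeta(s, N=400):
    """Crude Euler-Maclaurin approximation to the Riemann zeta function,
    adequate for Re(s) > 1 and moderate |Im(s)|:
    zeta(s) ~ sum_{n <= N} n^-s + N^(1-s)/(s-1) - N^-s/2."""
    return (sum(n ** (-s) for n in range(1, N + 1))
            + N ** (1 - s) / (s - 1) - 0.5 * N ** (-s))

def mertens_341(sigma, t):
    """zeta(sigma)^3 * |zeta(sigma+it)|^4 * |zeta(sigma+2it)|, which the
    3-4-1 cosine inequality forces to be >= 1 for every sigma > 1."""
    return (abs(zeta(complex(sigma, 0.0))) ** 3
            * abs(zeta(complex(sigma, t))) ** 4
            * abs(zeta(complex(sigma, 2.0 * t))))
```

If zeta had a zero at 1 + it, the middle factor would vanish to 4th order as sigma tends to 1, and the product could not stay at least 1 without a pole at 1 + 2it, exactly as in the argument above.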
OK, so going back: we had psi of x equals 1 over 2 pi i times the integral from c minus i T to c plus i T of minus zeta prime of s over zeta of s times x to the s over s, ds. And as we said, in order to make this work, we had c to the right of 1, so c greater than 1, plus some error terms; don't worry too much about the error terms. And the tactic of Riemann, so now we're going to get more into pretentiousness, if you like, the tactic of Riemann is that you take this contour to the right of 1 and you shove it back to the left, past the residues. It's all very elegant and beautiful. Some technical issues need to be resolved, but it's a wonderful proof. So the new approach is: why don't we just look at this and try to bound this thing directly? Why do all this contour integration? Why move the contours? How big is this thing? So I want to make one step that's going to make life easier: when we're trying to count the primes up to x, the technique we use shouldn't need primes bigger than x. We want to build that in. So I'm going to slightly change zeta. I'm going to regret this later, because in the next lecture I'll have a different meaning for this notation, but zeta sub x of s will just be the sum of 1 over n to the s over those n such that p divides n implies p is less than or equal to x. So this is a nice finite object: it has an Euler product over the primes up to x. And in the argument we used, there was no reason not to use this thing instead, because it still accounts for all the prime powers of all the primes up to x. So we'll work with this; that's the reason for the change. And now let's just take absolute values here and see what we've got. So what's the absolute value of the right-hand side? It's at most 1 over 2 pi times some sort of integral, and I'll do something with the limits in a minute, of the absolute value of zeta sub x prime over zeta sub x.
Let me write s as c plus i little t, with t going from minus T to T. So we get the absolute value of zeta prime over zeta at s; and when we take the absolute value of x to the (c plus i t), we get x to the c; and in the denominator, we get something like c plus the absolute value of t, dt. Something along those lines, up to a constant, so maybe a 2 here. Now, the x to the c is a little bit of a worry, because we know the right size of the answer is about x. So, as is typical with these contours, let's just pick c a tiny bit bigger than 1, so that x to the c is conveniently some well-known constant times x. So this is at most, well, there are various ways we could play this game: we've got this x, constants we're not worried about, and this very complicated zeta prime over zeta; let's just take the maximum of the zeta sub x prime over zeta sub x factor over the line. And then what remains is the integral of dt over (c plus the absolute value of t), which for t up to T looks like a log, maybe 2 log T, but you'll forgive me the 2. So how big is the other term? Well, I'm really going to be very crude and say that zeta prime over zeta looks like the sum of Lambda n over n to the c, forgetting the i t, and we'll do this for n up to x. That's not 100 percent correct, there should be a few more terms, but they're not going to be important. Since c is more or less 1, it's certainly at most about the sum over n up to x of Lambda n over n, so it's at most about log x. OK, so by doing something completely stupid, which is just taking absolute values, how much have we lost? The right answer is x plus an error term, and we've lost by a log x and a log T. Well, we chose T to be x to the half before; we might be a bit more judicious now about the choice of T. In fact, we'll take T to be a power of log x. Actually, let me be careful and not say that; this T might well be more complicated than that.
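The crude bound at the end, that the sum of Lambda n over n up to x is at most about log x, is easy to test numerically. Here is a sketch (my own illustration, duplicating a simple von Mangoldt routine so the block is self-contained):

```python
import math

def lam(n):
    """von Mangoldt Lambda(n) by trial division."""
    if n < 2:
        return 0.0
    p = 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            return math.log(p) if n == 1 else 0.0
        p += 1
    return math.log(n)

def chebyshev_sum(x):
    """Sum of Lambda(n)/n for n <= x.  Classically this is log x + O(1),
    and in fact log x minus Euler's constant plus o(1)."""
    return sum(lam(n) / n for n in range(2, int(x) + 1))
```

For x around 10 to the 5, the sum falls short of log x by roughly Euler's constant, consistent with the crude bound used above.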
But we've only lost by some powers of log. So that's surprisingly good for such a crude argument. So another approach to the prime number theorem is: let's just try to be a little less crude here and see where we get to. OK. So I'm so confused with these boards. OK, I guess that one's next, if it'll stay down for me. So we've done this crude argument, and we haven't done any analysis at all. Note that we're integrating to the right of the one line; we're not straying into the critical strip. So the first question we should ask: let's assume zeta prime over zeta is a fairly benign function, which it actually is, and 1 over s is certainly benign. We know the right answer is x, so we know there must be some cancellation in this integral. Where does it come from? Well, there's an obvious place. Think about the terms. The only term, besides the 1 over s, that we really understand well is x to the s. And what happens to the integral of x to the s as we go through a short interval? Well, the antiderivative of x to the s is x to the s over log x. But more importantly for us, if we go from s0 to s0 plus 2 pi i over log x, so over a short part of the integral up the vertical axis, then the difference of x to the s over log x at the two endpoints is 0. So in very short intervals of length 2 pi over log x, the x to the s part disappears. So there's a win there; we just have to learn how to exploit it. How do we exploit the cancellation coming from x to the s rotating fast? Well, what's the obvious way? Integration by parts will allow us to first integrate the x to the s, and then hopefully the rest won't get too nasty when we differentiate it. OK, so now I'm going to switch back to another notation, which is over there.
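The rotation being exploited here is concrete: x to the it equals e to the it log x, which is periodic in t with period 2 pi over log x, so its integral over any full period vanishes exactly. A small numerical sketch (the choice of x and of the base point is arbitrary, for illustration only):

```python
from math import log, pi
import cmath

x = 1000.0
L = log(x)              # x^{it} = e^{i t log x} has period 2*pi / log x
period = 2 * pi / L

def integral_x_it(t0, t1, steps=20000):
    """Trapezoid-rule approximation of the integral of x^{it} dt
    from t0 to t1."""
    h = (t1 - t0) / steps
    vals = [cmath.exp(1j * (t0 + k * h) * L) for k in range(steps + 1)]
    return ((vals[0] + vals[-1]) / 2 + sum(vals[1:-1])) * h

# Over one full period of length 2*pi/log x the integral vanishes;
# over a generic shorter interval it does not.
print(abs(integral_x_it(5.0, 5.0 + period)))      # essentially 0
print(abs(integral_x_it(5.0, 5.0 + period / 2)))  # about 2 / log x
```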
So if I'm integrating A of s times x to the s over s, where here A of s is zeta prime over zeta, the idea now is to integrate by parts. So we get A of s over s times the integral of x to the s, which is x to the s over log x, evaluated at the endpoints, and then the two other terms: minus the integral of A prime of s times x to the s over s log x, plus the integral of A of s times x to the s over s squared log x. OK, I haven't put limits there; let's not worry about limits. Because of this rotating around so much, we can basically consider the endpoint term to be 0, and if we go to infinity, that's certainly an identity. And to study this as an identity, let's just take this and go back over here. So we know this represents the sum of a n for n up to x. What happens in the first term here? What is A prime of s, the derivative of a Dirichlet series? Well, if A of s is the sum of a n over n to the s, then minus A prime of s is the sum of a n log n over n to the s. So this first term corresponds to the sum of a n log n over log x. And I think you can guess what the second part represents. In fact, this is a standard trick in using Perron's formula. So the difference between these is easily bounded. Assume all the a n's are at most 1, which is not quite true in our problem, but is almost true: the average of the Lambda n's is bounded by 1. Then we can just bound the difference by putting a 1 in place of each a n. So we get the sum of 1 minus log n over log x, and this thing is less than x over log x. Indeed, putting in the 1's, this is x minus log of x factorial over log x, and by Stirling's formula the main term of that is x over log x. So this difference is small; we win by one log x anyway. The meat of this integral, the thing we want, lives in the first term. So let's just examine what we get out of this integral, as opposed to before.
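That Stirling computation is easy to check numerically (an illustrative script; `weighted_count` is just my name for the sum, and `lgamma(x+1)` computes log of x factorial):

```python
from math import log, lgamma

def weighted_count(x):
    """sum_{n <= x} (1 - log n / log x), the weight left over after
    pulling out the sum of a_n log n / log x."""
    return sum(1 - log(n) / log(x) for n in range(1, x + 1))

# The sum equals x - log(x!)/log x, which by Stirling's formula
# (log x! = x log x - x + O(log x)) is about x / log x:
for x in (10**3, 10**4, 10**5):
    print(x, round(weighted_count(x), 1),
          round(x - lgamma(x + 1) / log(x), 1),  # same thing via log x!
          round(x / log(x), 1))
```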
So before, when I was integrating this thing and taking absolute values, I had something like the maximum over my interval of A of s, times x, times log T. Here again, I'm thinking of s as 1 plus 1 over log x plus it, and I'm thinking of an integral from minus T to plus T. That's basically what I had up there with zeta. And now what do I have? Well, I have something a little different. Now I've integrated by parts one time, and I try to do the same thing: take absolute values in this integral. I still get the x, and I'm actually going to pull out the same thing, the maximum of A of s. Excuse me? Yes, there's an extra log x I can pull outside, because remember this was bounded by log x, and I divided through by the log x. That's cool. So: less than less than that, and now I've got inside this integral, from minus T to T, A prime over A in absolute value, at 1 plus 1 over log x plus it, dt over 1 plus the absolute value of t. So have I won or have I lost? It's a bit hard to tell. I've made it more complicated at the very least; that's good. So here, what can I do? Well, for instance, to bound this, I could Cauchy-Schwarz it. Because A prime over A is kind of hard, but the square of A prime over A is easy to play with. Terry looks like he's in pain from me saying that. But anyway, one idea to get an upper bound would be to expand it in that way. Perhaps I don't want to get into that at this stage. But suffice it to say that, proceeding like this, you can, in this problem and in other interesting problems, reduce this bound from log x to root log x. So there is a win. Trust me on this. This is something we're going to get into more when we get to Harper's new proof of Halász's theorem: how you can win by doing such things.
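The Cauchy-Schwarz step being alluded to might be set up like this (my reconstruction of one natural way to do it, not necessarily the exact argument intended in the lectures):

```latex
\int_{-T}^{T} \left|\frac{A'}{A}(s)\right| \frac{dt}{1+|t|}
\;\le\;
\left(\int_{-T}^{T} \frac{dt}{1+|t|}\right)^{1/2}
\left(\int_{-T}^{T} \left|\frac{A'}{A}(s)\right|^{2} \frac{dt}{1+|t|}\right)^{1/2},
\qquad s = 1 + \tfrac{1}{\log x} + it,
```

where the first factor is only about the square root of 2 log T, while the mean square of A prime over A can be opened up through its Dirichlet series coefficients, which is the sense in which the square is "easy to play with."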
But, well, I guess I'm running out of time today, so that's going to have to come later. Let me finish off with something fun. What we're going to do is revisit the proof of the prime number theorem. We're going to do it to the right of one, by taking absolute values, though we're going to be a little more clever about how we construct the integral. And we're going to do it with a slightly different function from the standard function. So, OK, if you've been lost up to now, this bit is fun to come back for. I said that zeta of s looks like 1 over s minus 1, plus gamma, plus big O of s minus 1 when you're near 1; it's got a Taylor series from then on. Differentiating, zeta prime of s is minus 1 over s minus 1 squared, plus big O of 1. And then, what I'm trying to get to: minus zeta prime of s over zeta of s, how does that start? That starts with a 1 over s minus 1, then plus gamma. Plus gamma? Yeah, no. Minus gamma, OK. So that's just a calculation based on this series. Now, what we want to do in the proof of the prime number theorem is integrate zeta prime over zeta, and the whole issue for the main term becomes the pole at s equals 1. But in the problem we're working on, we want to get upper bounds on the integrand to the right of 1, so somehow we want to get rid of that pole at s equals 1. So here's a nice way to do it. Let's just look at zeta prime of s over zeta of s, plus zeta of s. The 1 over s minus 1 terms cancel, so the pole's gone. And if I then subtract 2 gamma, this doesn't just have no pole at s equals 1, it even vanishes there. So I want to play with this function. Let's see what's kind of neat about it. It has no pole at s equals 1; we've just canceled that out. But let's have a look at the zeros. So what I want to ask is: what are the poles of this? Well, a pole of this must be a pole of at least one of the terms.
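The little calculation behind this can be written out explicitly; it is a routine expansion from the Laurent series just quoted:

```latex
\begin{align*}
\zeta(s) &= \frac{1}{s-1} + \gamma + O(|s-1|),\\
\zeta'(s) &= -\frac{1}{(s-1)^2} + O(1),\\
\frac{\zeta'}{\zeta}(s) &= -\frac{1}{s-1} + \gamma + O(|s-1|),\\
\frac{\zeta'}{\zeta}(s) + \zeta(s) - 2\gamma &= O(|s-1|).
\end{align*}
```

So the combination in the last line not only has no pole at s equals 1, it vanishes there.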
There are no poles of gamma; it's a constant. The only pole of zeta of s is at s equals 1, and that canceled with the pole of zeta prime over zeta. So s equals 1 isn't a pole, and none of the terms contributes a pole there. What are the remaining poles? We said the only poles of zeta prime over zeta are at s equals 1 and at the zeros of zeta of s. And the zeros remain poles here, because nothing in the other terms cancels them. So the poles of this function are exactly the zeros of zeta of s. So one way to phrase the Riemann hypothesis: instead of looking for zeros of Riemann's zeta function, we can look for poles of this guy. And this is the guy we're going to put into Perron's formula tomorrow. But for today, I just want to finish off by playing with it. Now, if you're a good analyst, you might have good ways of tracking zeros. But if you're like me, the only thing you know how to do that's easier is to show that something is not a pole, because showing something's not a pole means showing it doesn't diverge to infinity, which is relatively easy to do. So we have this function; I'm going to call it Z of s. And the Riemann hypothesis can then be rephrased: Z of s has no poles with real part of s greater than a half. What I want to do is think about a technique that would allow me to prove that. Oh, no, it's not staying. OK, well, anyway, off it goes. So how can we prove that a function has no poles in a certain ball, say? Well, one way, at a given point, is to show it has a Taylor expansion with a certain radius of convergence; within that radius the series converges, so there's no pole. So how about this for an idea? Here's the half line, and I want Z of s to have no poles to the right of a half. But I only want to play with Z of s to the right of 1. So what I'm going to do is try to build a Taylor series for Z of s at every point to the right of 1, and use it to prove that Z of s doesn't have a pole further to the left.
Well, if I can just get radius of convergence a half at every point here, I can cover every point to the right of a half. So for every point to the right of a half, I go over by a half minus epsilon, just enough to get over the one line, and then, hopefully, it'll be within the ball of convergence there. So the Riemann hypothesis is implied by: Z of s has a Taylor series at s0 with radius of convergence at least a half, for all s0 with real part greater than 1. So now, suddenly, I've got the Riemann hypothesis living to the right of 1. So how do you prove that a Taylor series has a certain radius of convergence? Well, you write it out in terms of the Taylor coefficients. I want to sum s minus s0 to the k with s minus s0 up to a half in size, so that factor decays like a half to the k, and I can make the series converge if the coefficients grow a bit more slowly than 2 to the k. Actually, if they grow like a constant times 2 to the k, then I get every radius of convergence less than a half, and that'll be enough, since we only need to cover open half-planes. So here's my new Riemann hypothesis: the k-th derivative of Z at s, over k factorial, where s is sigma plus it with sigma greater than 1, is bounded by something that depends only on s, times 2 to the k. So if I can just take high derivatives of zeta prime over zeta and of zeta, and, well, gamma we know how to take high derivatives of, obviously, and bound them this well, then I'm going to prove the Riemann hypothesis. So this is what we call the pretentious Riemann hypothesis. Again, I haven't told you why pretentious. And let me just say that if you assume the Riemann hypothesis, then just by classical techniques, the Riemann hypothesis implies such a bound with the constant of order log of 3 plus the absolute value of t.
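For reference, the statement just sketched can be written out as follows (my transcription of the board, with Z of s the function zeta prime over zeta of s, plus zeta of s, minus 2 gamma, as above):

```latex
\textbf{Pretentious RH:}\quad
\left|\frac{Z^{(k)}(s_0)}{k!}\right| \;\le\; C_{s_0}\, 2^{k}
\quad\text{for all } k \ge 0,
\text{ for every } s_0 \text{ with } \operatorname{Re}(s_0) > 1.
```

By the root test this gives the Taylor series of Z at s0 a radius of convergence of at least a half, and letting the real part of s0 decrease to 1, these discs cover the open half-plane where the real part of s exceeds a half, so Z is pole-free there, which is the Riemann hypothesis.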
So although all we need to assume is that there's some bound with a fixed constant at each point, actually, the Riemann hypothesis comes back and gives you a uniform bound like that at every point. So how we're going to start next time, I guess, is we're going to make this assumption, and we're going to prove the prime number theorem with an error term of x to the half log squared x, only playing to the right of 1. Now, that may not surprise you. I mean, we have a proof already, because this implies the Riemann hypothesis, and we saw the Riemann hypothesis implies that. But I want to build a technique that doesn't go into the critical strip or use zeros. Then what we're going to do is, well, OK, we can't prove this. We can't prove a Riemann hypothesis, unfortunately. Maybe we can prove a weaker version of it: something else to the k. And that weaker version we can prove, and using it, we can prove these other results. So that's what we'll do next time. And once we've done that, which hopefully won't take more than 20 minutes, we'll move on to this more general technique using Perron's formula. OK, that's it. Any questions or comments? Yeah. [A question, partly inaudible: instead of the cosine identity, can this be phrased in terms of the distance function, involving the L functions of chi 1, chi 2, and chi 1 chi 2?] The answer is: yes, there is a formulation, but it takes a little bit of explanation. I didn't really do a very good job talking about the distance function. Maybe I will next time, and then I can try to answer that question. I mean, there are some different issues to talk about, but I'll talk about at least one of them. OK, Terry. [Terry: So your philosophy, I guess, is to minimize the use of complex analysis?] God, yeah.
So I mean, I had plans to talk about how everything in Riemann's memoir comes into the proof of the prime number theorem, but the only thing really that's left is zeta being analytic, having a pole at s equals 1.