OK, thank you. Sorry that I might go over time; I'll try not to, but I'll say what I need to say. If you need to leave, leave now. So I thought I'd start by advertising: we have a special year in number theory at the CRM, the Centre de Recherches Mathématiques in Montreal. There are four conferences in analytic number theory in the autumn semester — those are the dates, those are the titles. The third one is on the boundary of analytic number theory and arithmetic geometry, which is beautiful: nowadays, with the work of Bhargava and several strong people like Venkatesh and Ellenberg, there's not a lot of difference for them between algebraic methods and analytic methods — it all comes together. So that will be a celebration of that in November, and perhaps more obvious topics in the other three conferences. The first is going to focus to a large extent on counting points over finite fields, maybe sizes of Selmer groups, this sort of thing. [Inaudible question from the audience.] Who knows? We'll find out soon about Fields medals. So if you look on the website of the CRM, you can get information, including how to apply if you need funding. There may also be some restrictions depending on where you're from — that's not my prejudice; it's to do with who supplies money to us. OK, let me get back to my topic. I want to prove Halász's theorem today, and Linnik's theorem — or at least a strong indication of Linnik's theorem. I've written up here some of the things we've done: the definition of the distance; the connection between the distance and the size of the Dirichlet series of a multiplicative function, which we'll use throughout today; Perron's formula; and Halász's theorem, which is what we're aiming to prove. I thought I'd start today with something somebody asked me about: Siegel's theorem — or rather Siegel zeros, which is not exactly what I want to talk about, but it's pretty connected. So what is a Siegel zero?
If you have a quadratic character, then a Siegel zero is typically written this way — typically with a beta: a zero beta of the Dirichlet L-function L(s, chi) with beta close to 1, where chi is a real character. If you just think about the graph of L(s, chi) near s = 1: there's a zero very close to 1, and you can bound the derivative, so the L-function can't grow much; so having a Siegel zero is actually equivalent to L(1, chi) being small — smaller than 1/log q, where q is the conductor. And what you can typically do with these things is truncate the Euler product. At least what I've been saying all along is: if you look at the Euler product and L(1, chi) is small, i.e. it goes down towards 1/log q, then there's some sort of predominance among the chi(p)'s — remember, each chi(p) is +1 or -1 — a predominance of -1 over +1. There's a bias. So this is the same thing as saying, and you can make this precise, that chi(p) is predominantly -1, which is of course mu(p). In our language, mu pretends to be chi, or mu is chi-pretentious. These notions are deliberately loosely defined, so let me be a little more precise: look at the distance squared between mu and chi — and now I'm going to restrict the range. This was over primes p up to x, but let me change the definition a bit: if I put an interval [q, x] in here, I mean the primes between q and x. So take the primes between q and e^(1/(1-beta)) — remember beta is the Siegel zero, very close to 1, so 1 - beta is very small, 1/(1-beta) is very big, and e^(1/(1-beta)) is really big. (A Siegel zero is also defined to be within a distance 1/log q of 1, so you can see this is a non-trivial interval.) Then having a Siegel zero is the same thing as saying that this distance is uniformly bounded.
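To make the distance concrete, here is a small numerical sketch (my own illustration, not from the lecture): it computes D(f, g; y, x)^2 = sum over y < p <= x of (1 - Re f(p)conj(g(p)))/p, for mu against the real character mod 3. For a genuine Siegel-zero character this distance would stay bounded on the range (q, e^(1/(1-beta))]; for an ordinary character like this one it still grows with x, just more slowly than the distance from mu to the constant function 1.

```python
# A numerical sketch (my illustration, not from the lecture notes):
# D(f, g; y, x)^2 = sum_{y < p <= x} (1 - Re(f(p) * conj(g(p)))) / p

def primes_up_to(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0], sieve[1] = 0, 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [p for p in range(2, n + 1) if sieve[p]]

def distance_sq(f, g, y, x):
    """Pretentious distance squared over primes y < p <= x."""
    return sum((1 - (f(p) * g(p).conjugate()).real) / p
               for p in primes_up_to(x) if p > y)

mu_p = lambda p: -1.0                       # mu(p) = -1 at every prime
chi3 = lambda p: {0: 0.0, 1: 1.0, 2: -1.0}[p % 3]   # real character mod 3

d_mu_chi = distance_sq(mu_p, chi3, 3, 10**5)
d_mu_one = distance_sq(mu_p, lambda p: 1.0, 3, 10**5)
# mu is noticeably closer to chi than to 1 (for 1 the sum is 2*sum 1/p,
# which grows like 2 log log x)
print(round(d_mu_chi, 3), "<", round(d_mu_one, 3))
```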
So there's a direct link between Siegel zeros and pretentiousness, appropriately written out. I'm going to be a little careless, because I don't want to write this down every time, but the point is that the distance between mu and chi over a certain range is quite small — chi pretends to be mu on some interval. Now, one of the great results of analytic number theory is the repulsion results. Some of you may know them well: you can't have two zeros of the zeta function very close together near the 1-line, and even two different L-functions with roughly the same conductor can't have zeros too close together. Now suppose we had two Siegel zeros: suppose chi_1 and chi_2 both had Siegel zeros, and the conductors are close together — roughly the same order of magnitude, say log q_1 comparable to log q_2. As I said, this is known to be impossible by analytic methods; there are various techniques for proving repulsion. So let's look at it using our way of thinking. It would mean that the distance between mu and chi_1 is small, and also that the distance between mu and chi_2 is small; both are small because of the Siegel zeros. But then all I have to do is use the triangle inequality: the sum of those two distances is at least the distance between chi_1 and chi_2, so that would also be small. Rearranging, that means the character chi_1 chi_2-bar is very close to 1 — it typically points towards 1 on the primes in that range. And that's going to mean, when you form the corresponding sum, that the L-function of chi_1 chi_2-bar at 1 gets too big.
So it's not hard to show that L-functions at 1 are smaller than a constant times the log of the conductor — here log(q_1 q_2), since the conductor of chi_1 chi_2-bar divides q_1 q_2. So you get a contradiction, and this is how we can prove repulsion using pretentious methods, which is a lot easier than the repulsion from classical methods. And this lemma has a name: pretentiousness is repulsive. It's a strong technique, because it allows us to show that things can't be too close together. Let me give you another example. Suppose d(mu, chi) is small — you have a Siegel zero — but you didn't happen to know chi was real. Note that d(mu, chi) = d(mu, chi-bar), since mu is real-valued. Then twice this distance is, by the triangle inequality, at least d(1, chi^2). And for the same reasons as before, d(1, chi^2) small means that the L-function of chi^2 at 1 has to get big. But there is only one L-function that gets big at 1, and that's the Riemann zeta function. So chi^2 must be principal — in other words, chi is quadratic. So all of the standard results about zeros of L-functions near s = 1 can be recovered very easily in this context, just by such manipulations. We will, I hope, get to the proof of Linnik's theorem today, or a sketch of it. Linnik's theorem says that there exists a constant L — I'll write it over here; Linnik has been mentioned a lot today, and of course he did great work in several fields — such that if a and q are coprime, then there exists a prime p, with p much less than q^L, such that p is congruent to a mod q. So there's a smallish prime in every arithmetic progression. And the proof, in all treatments, comes down to two cases: a hard case and an easier case. Let me talk about the easier case first, very briefly: it's when there is a Siegel zero.
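The manipulation just described can be written out as a display; this is my reconstruction of the step, using that mu is real-valued so that D(mu, chi; x) = D(mu, bar-chi; x):

```latex
2\,\mathbb{D}(\mu,\chi;x)
  \;=\; \mathbb{D}(\mu,\chi;x)+\mathbb{D}(\mu,\bar\chi;x)
  \;\ge\; \mathbb{D}(\chi,\bar\chi;x)
  \;=\; \mathbb{D}(1,\chi^{2};x).
```

If the left-hand side is bounded, then D(1, chi^2; x) is bounded, so the sum of Re chi^2(p)/p over p <= x is within O(1) of the sum of 1/p; exponentiating the Euler product, |L(1, chi^2)| is then as large as log x. The only L-function that large at 1 is the zeta function, so chi^2 is principal and chi is quadratic.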
So, as Terry pointed out this morning — maybe I'll write it out specifically later — when you're asking about objects in arithmetic progressions, you can break things down using additive or multiplicative characters; analytic number theory traditionally works with multiplicative characters. So if I'm looking at psi(x; q, a), counting with von Mangoldt weight the primes congruent to a mod q, I break it down as 1/phi(q) times the sum over characters chi mod q — there are different ways to write this; for now I'll write it the easier way — of chi-bar(a) psi(x, chi), where psi(x, chi) is the sum of chi(n) Lambda(n) for n up to x. From the principal character we get essentially x, and from the Siegel-zero character — the quadratic character — we get minus x^beta/beta, beta being the Siegel zero. And, as in my very meaningful diagram, a Siegel zero repels all the other zeros, so all of the other terms are small. Well, there's some work to be done to justify that, but basically you can forget them. So the formula becomes (1/phi(q)) (x - chi(a) x^beta/beta), where chi is again the quadratic character. Obviously this is big if chi(a) = -1, so let's focus on chi(a) = +1 — and when I write equals here, I mean approximately equals. When chi(a) = +1, this becomes (1/phi(q)) (x - x^beta/beta). Now beta is very close to 1, so 1/beta is more or less 1, and this can be written as x(1 - x^(-(1-beta))) — a little bit of algebra. And x^(-(1-beta)) is e^(-(1-beta) log x), so as long as x isn't too large, the bracket looks like (1 - beta) log x. So we get (1 - beta) x log x / phi(q).
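The character decomposition itself can be checked numerically. A minimal sketch (my own; the choice q = 7 with primitive root 3 is hypothetical, picked so the character group is easy to build by discrete logarithms): it verifies psi(x; q, a) = (1/phi(q)) * sum over chi of chi(a) * sum over n <= x of chi-bar(n) Lambda(n), exactly.

```python
import cmath, math

q, g = 7, 3                # hypothetical small example: q prime, 3 a primitive root mod 7
dlog, v = {}, 1            # discrete logarithms: n = g^dlog[n] (mod q)
for k in range(q - 1):
    dlog[v] = k
    v = v * g % q

def chi(j, n):
    # the j-th Dirichlet character mod q (j = 0 is the principal character)
    return 0j if n % q == 0 else cmath.exp(2j * math.pi * j * dlog[n % q] / (q - 1))

def mangoldt(n):
    # Lambda(n) = log p if n = p^k, else 0
    for p in range(2, n + 1):
        if n % p == 0:
            m = n
            while m % p == 0:
                m //= p
            return math.log(p) if m == 1 else 0.0
    return 0.0

x, a = 500, 3
psi_direct = sum(mangoldt(n) for n in range(1, x + 1) if n % q == a)
psi_chars = sum(chi(j, a) * sum(chi(j, n).conjugate() * mangoldt(n)
                                for n in range(1, x + 1))
                for j in range(q - 1)) / (q - 1)
assert abs(psi_direct - psi_chars.real) < 1e-8
```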
And so it's actually sort of amusing to note that, if you have a Siegel zero, then at least in the appropriate range — the same appropriate range, from q up to e^(1/(1-beta)) — you actually get a positive proportion of the integers in the arithmetic progression being prime. [Audience: but when beta is very close to 1, you gain on this term and you have to control the other contributions — and there you're going to get less as well.] Ah, OK — so let's assume 1 - beta is of moderate size. It is possible to handle the general case, but you have to work; you can bound the error term with 1 - beta involved in it. So this looks a bit like a sieving result: you take all the integers in the arithmetic progression — there are x/q of them — and you knock out some positive proportion of them, some constant. [Am I taking away from your lecture on Tuesday? — You're trying to. — I haven't finished yet, Henryk.] OK, so maybe I'll leave it like that for now. So this is the relatively easy case, though, as Jean pointed out, I was exaggerating a bit how easy it is to dismiss all the other terms if 1 - beta is really quite small. By Dirichlet's class number formula, via the connection I put up there, 1 - beta is bounded below by something like 1 over root q (or pi over root q). So it can't get incredibly small, but it can get pretty small — a lot smaller than 1/log q. The hard case in the traditional proof is when there's no Siegel zero, so you can have a lot of zeros interfering with life, and then there are very, very delicate estimates that need to be proved. Linnik dedicated a lot of time to this, and there are other places where you can find the zero-density results that allow you to prove Linnik's theorem. It's not the approach we're going to take.
So OK, let me go on. We're going to prove Halász's theorem, more or less, and then we're going to come back to Linnik's theorem. What we said last time, or the first time I talked, was that we have Perron's formula, and there was some advantage to breaking f(n) up into two parts: f(n) log n / log x, and the complementary part f(n)(1 - log n / log x). We noted that the complementary part contributes something irrelevant to the value of the sum, so the first part is the important one. And by Perron's formula, (1/x) times the sum over n <= x of f(n) log n / log x is approximately (1/(2 pi i)) times the integral of (-F'(s)/log x) x^s/s ds. Our tactic was to pull F(s) out of the integral — actually, let me write it here. Taking absolute values, this is much less than M_f(x, T)/log x — the maximum size the Dirichlet series gets on the contour — times the integral from -T to T of |F'/F(c + it)| dt/(1 + |t|), with an x^c to worry about. That's just pulling out the F and taking absolute values, and as we said, this loses. I made some remarks about Cauchying, and got a wince from Terry — but now I'm going to do the Cauchying, by Cauchy–Schwarz. If we Cauchy–Schwarz this integral, we get two parts. We've got the integral of 1/(1 + |t|) squared — that's easy, a constant. And we've got the more interesting one, the integral of |F'/F| squared; and the nice thing about the square is that you can expand it out. The other thing I should have said here is that we'll assume, again, that f(p^a) = 0 whenever p^a is greater than x; we're allowed to do that when we're constructing f.
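The Cauchy–Schwarz step, written out (my reconstruction of the display being described):

```latex
\int_{-T}^{T}\Bigl|\frac{F'}{F}(c+it)\Bigr|\,\frac{dt}{1+|t|}
\;\le\;
\Bigl(\int_{-T}^{T}\frac{dt}{(1+|t|)^{2}}\Bigr)^{1/2}
\Bigl(\int_{-T}^{T}\Bigl|\frac{F'}{F}(c+it)\Bigr|^{2}\,dt\Bigr)^{1/2}.
```

The first factor is at most the square root of 2, a constant, so everything rides on the mean square of F'/F.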
And that's convenient when we look at F'/F, because it becomes more or less a finite sum — in fact, with f(p^a) = 0 for p^a > x, it is a finite sum. OK, so what we want to evaluate is the integral of |F'/F| squared. If we look at -F'/F(s), where is it supported? Since f is multiplicative, F has an Euler product, so -F'/F(s) is supported on the prime powers: the easiest way to write it is the sum over n of Lambda_f(n)/n^s, where n runs over prime powers — I've put in the minus sign to make it all look completely analogous to zeta, where Lambda_f would just be Lambda. When f is identically 1, this is supported on prime powers n up to x. The other thing I wanted to say — really the important thing — is what happens at primes: when you work it through, Lambda_f(p) = f(p) log p. To see that, just think of the Euler product: each factor starts 1 + f(p)/p^s + ...; you put F' on the top and F on the bottom, and the first coefficient is f(p) log p. (At prime squares it's a little more complicated, but I'm not going to get into that.) So the thing we want to integrate is essentially Lambda times something. What we're really interested in is an integral of the following shape: from -T to T of |the sum over n up to x of Lambda(n) a(n)/n^(sigma + it)| squared dt, for some coefficients a(n), which at the primes are f(p). Here sigma is c, but later I'm going to want the flexibility of varying sigma. And what we need is a good upper bound on that, so let me show how to get one.
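The relation Lambda_f(p) = f(p) log p can be checked by formal Dirichlet-series algebra: -F'/F = sum Lambda_f(n) n^(-s) is equivalent to the convolution identity f(n) log n = sum over d | n of Lambda_f(d) f(n/d), which determines Lambda_f recursively. A small sketch (my own; the values f(p) below are hypothetical, f is taken completely multiplicative, and f vanishes at primes beyond 7, mimicking the truncation f(p^a) = 0 for large prime powers):

```python
import math

# hypothetical unit-disc values of a completely multiplicative f at small primes;
# all other primes get f(p) = 0, as in the truncation described in the lecture
fp = {2: -1.0, 3: 0.5, 5: -1.0, 7: 1.0}

def f(n):
    val, m, p = 1.0, n, 2
    while p * p <= m:
        while m % p == 0:
            val *= fp.get(p, 0.0)
            m //= p
        p += 1
    if m > 1:
        val *= fp.get(m, 0.0)
    return val

# Solve f(n) log n = sum_{d | n} Lambda_f(d) f(n/d) recursively for Lambda_f.
N = 100
lam = {1: 0.0}
for n in range(2, N + 1):
    s = sum(lam[d] * f(n // d) for d in range(2, n) if n % d == 0)
    lam[n] = f(n) * math.log(n) - s

for p in [2, 3, 5, 7]:
    assert abs(lam[p] - f(p) * math.log(p)) < 1e-9   # Lambda_f(p) = f(p) log p
assert abs(lam[6]) < 1e-9                            # vanishes off prime powers
assert abs(lam[10]) < 1e-9
```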
Now there's a slick trick that does well here. Take a function Phi which is even, with Phi at least 1 on [-1, 1], and such that Phi-hat has compact support — let me make sure I get this right. What I'm going to do is extend the integral from -T to T to one over the whole real line against Phi(t/T), so that it dominates the integral between -T and T. The graph probably looks something like this — maybe even flatter at the top. So we have a nice inequality, and the idea now is just to expand the square. Then we have the integral of (n_2/n_1)^(it) Phi(t/T) dt. This thing is essentially a Fourier transform — you can write (n_2/n_1)^(it) as e^(it log(n_2/n_1)) — and after a change of variables, t over T, which picks up a factor of T (thank you, Henryk), what we end up with is T times Phi-hat of T log(n_1/n_2). And what I want is for Phi-hat to be supported in [-1, 1]. What does that mean? It means the term vanishes unless |log(n_1/n_2)| is at most 1/T, which gives |n_1 - n_2| much less than n_2/T. So it restricts the range of n_1 and n_2 to be close together. So let me take the particular case sigma = 0 and just work it out: we get the sum over n_1, n_2. I want to do one other thing, which is to fix one of the variables and let the other vary in a short interval, and it's convenient to use an inequality like |a(n_1) a(n_2)| <= (|a(n_1)|^2 + |a(n_2)|^2)/2 — yes, that'll do. If we just plug that in, there's a symmetry — we can choose either variable first — so it's twice one of the terms.
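Here is my reconstruction of this smoothing step, under the stated assumption that one can build an even Phi >= 0 with Phi(u) >= 1 for |u| <= 1 and Phi-hat supported in [-1, 1], with the Fourier convention Phi-hat(xi) = integral of Phi(u) e^(-i u xi) du:

```latex
\int_{-T}^{T}\Bigl|\sum_{n}\frac{a(n)}{n^{it}}\Bigr|^{2}dt
\;\le\; \int_{\mathbb{R}}\Phi\!\left(\frac{t}{T}\right)
        \Bigl|\sum_{n}\frac{a(n)}{n^{it}}\Bigr|^{2}dt
\;=\; T\sum_{n_{1},n_{2}}a(n_{1})\overline{a(n_{2})}\,
      \widehat{\Phi}\bigl(T\log(n_{1}/n_{2})\bigr).
```

The support condition kills every term with |log(n_1/n_2)| > 1/T, i.e. the double sum is restricted to |n_1 - n_2| of size at most about n_2/T.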
So it's going to be much less than the sum over n_1 of |a(n_1)|^2 Lambda(n_1) times the inner sum over n_2 of Lambda(n_2) — I wrote n_2 up to x, but the point is that n_2 lies in the short interval around n_1. So now we're trying to count primes in a relatively short interval, and we can use Brun–Titchmarsh — in Selberg's treatment it's nice and elementary — which gives an upper bound of some constant times n_1/T. So when we plug this whole thing in, the T's disappear rather magically. Now, what have I done wrong? [Audience: you're working with sigma = 1, and there should be an extra 1/n.] Yes — I think it should be sigma = 1, not 0; you have to divide everything by n. You know how it is when you write notes on scraps of paper. OK, let's continue and not worry too much — the value of sigma doesn't greatly affect things; I was just trying to work one example through. All right, let's try sigma = 1; I'll do what I'm told. Then we get a 1/n_1 and a 1/n_2, but n_2 is in a range around n_1, so the 1/n_2 is essentially 1/n_1. And then if the a(n)'s — for instance, if they're like f — are uniformly bounded, then, with n going up to x, this is much less than log x. So such a technique works for different sigmas; you just put in different estimates, and if you put the right thing in, the right thing comes out. So in this case that's what we wanted: with sigma = 1 it's Lambda(n)/n, and we get log x for the square. Taking the square root, our upper bound is bad by a square root of log x, and by a log T.
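The Brun–Titchmarsh inequality in a short interval is the arithmetic input here. A numerical sanity check of the bound pi(x + y) - pi(x) <= 2y/log y (this illustrates the inequality being invoked; it is of course not a proof):

```python
import math

def prime_sieve(n):
    sieve = bytearray([1]) * (n + 1)
    sieve[0], sieve[1] = 0, 0
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sieve

x, y = 10**6, 10**4
sieve = prime_sieve(x + y)
count = sum(sieve[n] for n in range(x + 1, x + y + 1))  # primes in (x, x+y]
bt_bound = 2 * y / math.log(y)                          # Brun-Titchmarsh bound
print(count, "<=", round(bt_bound, 1))
assert count <= bt_bound
```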
But if we take T to be a power of log x, then the key problem is the square root of log x; that's really where things go wrong. So we say to ourselves: OK, if integrating by parts once gets us halfway there, then integrating by parts twice should get us all the way. That's what you might think — but good luck. In fact, in Selberg's elementary approach to improving the prime number theorem, you can interpret him as doing exactly that: he goes to essentially F''. But it's a hard road from there to the proof of the prime number theorem; it's not so easy after Selberg's formula. So when Harper came to us with a new, elementary proof of Halász's theorem, the question was to try to reformulate it in a way that was more analytic. And the key to it, in a way, was that we actually want two F'/F's, not an F''. So eventually — I think it was Soundararajan who came up with the following formula, which we'll just prove works. The idea is to have two F'/F's, and it's a rather peculiar generalization of this, to say the least, but most importantly it works. So here we go: we're going to add in two new variables. We'll integrate over our favourite vertical line, and we add a new variable alpha and a new variable beta — I'm not quite finished explaining what we're doing — with alpha and beta positive real numbers. And rather strangely, as s goes up the vertical line Re(s) = c, in the first factor F'/F is shifted along a bit, by alpha, and in the second it's shifted along a bit more, by alpha and by beta. So we're somehow simultaneously going up three lines, which is a bit of a peculiar generalization. Let's see why this is useful. We're going to integrate over all possibilities — let's assume that things are nice and convergent — and I'm just going to compute this thing. So firstly, let me integrate over beta.
So if I integrate over beta — bringing the alpha integral along — let's just see what happens. If we integrate F'(s + alpha + beta) with respect to beta, obviously you just get F between its two limits: F at s + alpha + infinity, far to the right, minus F(s + alpha). So what is F evaluated at infinity? As you go off to the right side of the plane, the only term left in the Dirichlet series is the 1; everything else disappears. So the beta integral leaves us with F'/F(s + alpha) times (1 - F(s + alpha)), that is, F'/F(s + alpha) minus F'(s + alpha) — which is rather cunning, because the F(s + alpha) multiplies through. Now we do one more integral, over alpha, and it's the same trick: the F' term integrates to F(s + infinity) minus F(s), which is 1 - F(s); and for the F'/F term, you go out to infinity and get log F there, which is log 1 = 0, minus log F(s). So — I think I got a minus sign wrong somewhere, but let's sort it out — the whole thing, against x^s/s, becomes F(s) - 1 - log F(s). And now it looks prettier: it's the first couple of terms of the expansion of e^(log F(s)) removed. So now let's just use Perron's formula, assuming everything converges. The F(s) term gives us the sum of f(n) for n up to x; the -1 gives us a -1, which in fact just knocks out the n = 1 term — I'll worry about that in one second; and the -log F(s) gives minus the sum of Lambda_f(n)/log n. And this last guy is not going to be important — it's small. So you've really got what you want, which is the sum of f(n), with an extraneous term that turns out to be small.
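Putting the two integrations together, the identity being described — my reconstruction of the formula attributed to Soundararajan — reads:

```latex
\int_{0}^{\infty}\!\!\int_{0}^{\infty}
\frac{F'}{F}(s+\alpha)\,F'(s+\alpha+\beta)\;d\beta\,d\alpha
\;=\; F(s)-1-\log F(s),
```

using that F(sigma + it) tends to 1 as sigma tends to infinity: the inner integral gives (F'/F)(s + alpha)(1 - F(s + alpha)) = (F'/F)(s + alpha) - F'(s + alpha), and the outer integral then gives -log F(s) - (1 - F(s)).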
And what we have in the middle here is what we want: two F'/F's, once we pull out an F. So the idea now — well, we've got to do a couple of things. One is that these integrals over alpha and beta go from 0 to infinity. But it's not hard to imagine — think of Dirichlet L-functions — that as you go further and further to the right, F'/F gets smaller and smaller and isn't relevant. So for the price of a small error term, you can cut these integrals down to be very short, of length about epsilon. And the important thing is that it more or less goes through the same manoeuvres, but you end up with two F'/F's. So I'm not going to be very precise — you can see that already — but what we end up with is something like the following. One thing we want to do is slightly move the line of integration, because we've shifted things across. So let me just say: we cut alpha and beta down to a nice finite length, without any real cost, and then we do a slight change of variable, moving the line of integration from c to c - alpha - beta/2, which is not a big deal. When we do that, and do things slightly carefully, we end up with (1/(2 pi i)) times the integral from -T to T of the sum over m up to x of Lambda_f(m)/m^(s - beta/2) — you see, if we move c to c - alpha - beta/2, the first factor sits on the line c - beta/2 — and then in the second term we get a sum over n, shifted the other way. And then we've got an F factor here, of which we'll want to take absolute values — so this is F(s), and I guess one does this before moving the contour. So, taking absolute values and pulling things out, we get a factor like x^(c - alpha - beta/2) and then times a maximum of |F|. Well, one can be more precise than this.
And now this maximum of |F| is happening in a box — it's no longer just happening on a line: a box of height from -T to T and width about 1. So how do you find the maximum of such a thing? You can use the maximum modulus principle, if you like, and know that the maximum is going to occur either on the left-hand side, the right-hand side, or the top and bottom. The right-hand side is far enough away that F is bounded there — not very harmful — and the top and bottom are not going to be important, so the maximum ends up being on the left-hand line. Sorry, it's a bit technical, that. But you can see the structure we want is emerging: we're going to get these two Lambdas. There are some technical issues: it's convenient to make sure these primes don't get too big and don't get too small, so one wants to cut in a bit — this will all be written out carefully in the book — to avoid some issues that come from being right at the extremes. To do that, one alters a little the identity I proved up there, so that the big and the small prime factors are dealt with in a way that avoids any of these difficulties. With this, when we take the mean square using the lemma that we more or less proved — you need the extra flexibility of the sigma — it all works out nicely. We have one remaining issue, but actually I'll probably just leave it at that, since I don't want to go too far over time. So this is roughly how the proof goes; at least you've seen the main elements. You can see there's a little bit left over from the 1/(1 + |t|) term, which gave us our log T before — and since T is a power of log x, that's like a log log x.
If one gets suitable bounds here — there are bounds that come out of this maximum, and also bounds, if you shift more to the right, that are just given by the zeta function — then you can manipulate things to replace that log by the maximum value. It's a technical thing, but it can be worked out. OK, so that's roughly the new proof of Halász's theorem. It takes a little bit of effort to get your head around it, and I certainly haven't done a great job of it, but I think when you read it you can get to grips with it. The main idea, which I don't think is so hard, is just that these two F'/F's allow you to use the mean square: you win the square root of log x twice, and that's enough. [I'm sorry? — Ah, you have a point: I haven't explained this x^(-alpha) here; it gives you another log.] Thanks for that. OK, let me move on to the idea in the proof of Linnik's theorem, which you'll see comes fairly naturally now — I only want to start rubbing out from here, I think; I'm going to leave that sum up there. Now, what we said last time, I guess, is that when you're trying to prove the prime number theorem, if you're a multiplicative-function person, then instead of using Lambda(n) you look at the sum of mu(n) for n up to x. And similarly in arithmetic progressions: when you want to prove a prime number theorem for primes in arithmetic progressions, you can look instead at the sum of mu(n) over n up to x with n congruent to a mod q. And then with suitable convolutions, if you like, you can show elementarily that if you can prove these are all little-o of x/q, then that implies the prime number theorem for primes in arithmetic progressions. So this is what we're going to work with — but I want to do the same thing I did in the case of the prime number theorem: go more generally and work with an arbitrary multiplicative function f(n) with values in the unit disc.
So we're interested in this sum — and I'll put the phi(q) here just so I don't have to keep writing it out — and think of f as mu. Then I'm going to decompose it with the characters: (1/phi(q)) times the sum over characters chi mod q of chi(a) times the sum over n up to x of f(n) chi-bar(n). And what you notice is that f chi-bar is a multiplicative function: the product of two multiplicative functions with values in the unit disc is a multiplicative function with values in the unit disc. That's good — we can apply the technology we've created there. And it's going to be convenient to apply that technology, but not necessarily using every character in here. So — what I should have said earlier, when we were proving the repulsion principle; I've got several things to say, so let me put it this way. Suppose that one of these inner sums is big, say much bigger than a constant times x. Then by Halász's theorem, that forces M_{f chi-bar}(x, T) to be of size at least a constant — that's the only way, from Halász's theorem, that the sum can be that large. And that implies, from over there, that there exists a t with |t| <= T such that the distance from f chi-bar to n^(it), up to x, is bounded — which, I'll just note, is the same thing as the distance from f to chi(n) n^(it). Now, suppose two of these terms were big. Then you'd have this distance small for a chi_1, and also for a chi_2. But then we can just use the triangle inequality again, and we get that the distance between chi_1 chi_2-bar and n^(i(t_2 - t_1)), up to x, is also small.
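The decomposition over characters, now with f = mu, can again be checked numerically. A minimal sketch (my own; the choice q = 7 with primitive root 3 is hypothetical): it verifies that the sum of mu(n) over n <= x with n congruent to a mod q equals (1/phi(q)) * sum over chi of chi(a) * sum over n <= x of mu(n) chi-bar(n).

```python
import cmath, math

q, g = 7, 3            # hypothetical small example: q prime, 3 a primitive root mod 7
dlog, v = {}, 1
for k in range(q - 1):
    dlog[v] = k
    v = v * g % q

def chi(j, n):
    # the j-th Dirichlet character mod q
    return 0j if n % q == 0 else cmath.exp(2j * math.pi * j * dlog[n % q] / (q - 1))

def moebius(n):
    val, m, p = 1, n, 2
    while p * p <= m:
        if m % p == 0:
            m //= p
            if m % p == 0:
                return 0          # squarefull: mu(n) = 0
            val = -val
        p += 1
    return -val if m > 1 else val

x, a = 1000, 2
lhs = sum(moebius(n) for n in range(1, x + 1) if n % q == a)
rhs = sum(chi(j, a) * sum(chi(j, n).conjugate() * moebius(n)
                          for n in range(1, x + 1))
          for j in range(q - 1)) / (q - 1)
assert abs(lhs - rhs.real) < 1e-8
```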
And this, if you like, tells you that the value of L(1 + i(t_2 - t_1), chi_1 chi_2-bar) is too big, and so we get a contradiction. So again, the repulsion principle — pretentiousness plus the triangle inequality — says that at most one of these sums can be large. I guess Terry called it orthogonality this morning, but it's the same thing; the difference here is that you're working more particularly with one modulus than in the large sieve. So now we know that there can be at most one big guy here. Let's pull the big guy over to the other side: there is a possibly-big term, which we put on one side, and then we've got the sum of all the other terms on the right-hand side. So let E be the set of all characters mod q, minus perhaps the one exceptional character for which the sum of f(n) chi-bar(n) is big, if it exists. So let's try to bound, using the same techniques, (1/phi(q)) times the sum over chi in E — which is almost everything; E for "everything, almost; not exceptional" — of the sum over n up to x of f(n) chi-bar(n). Now, we just proceed as above, as in the bit I've rubbed out; you'll see it's very similar. This is much less than, essentially, (1/phi(q)) times the sum over chi of — forget the alpha and beta integrals — (1/(2 pi i)) times the integral from -T to T of something like (the sum over m of Lambda_f(m) chi-bar(m)/m^(1 - beta/2 + it)) times (the sum over n of Lambda_f(n) chi-bar(n)/n^(1 + beta/2 + it)) dt. So these are the two F'/F's we had, but now each is twisted by chi. And now we're going to Cauchy — and maybe you can see what's going to happen when we Cauchy: we're going to put the sum over chi inside.
And so when we Cauchy, with the whole thing inside, we get ≪ two terms of the form (1/φ(q) Σ_{χ∈E} ∫ |Σ_m Λ_f(m)χ̄(m)/m^{1−β/2+it}|² dt)^{1/2}, with a big square root, to the half, and then a similar term with the 1+β/2. And now when we expand, rather like we did up there, we have something extra going on. First, we can extend the sum to all characters mod q, I should say, since we're only adding nonnegative terms. Then, when we expand over all characters, the averaging (1/φ(q)) Σ_χ χ̄(m)χ(n) restricts us to m and n in the same arithmetic progression, m ≡ n (mod q). So in the calculation I did before, when expanding, I ended up wanting n in a short interval around m, between m(1 − 1/T) and m(1 + 1/T), and from Brun-Titchmarsh I got an upper bound on the number of such primes, like m/T, well, with the Λ weight. But now I'm adding in the extra condition m ≡ n (mod q). And you can also apply Brun-Titchmarsh with this additional condition, as long as the length of the interval is bigger than q^{1+ε}. So when you do that, you win an extra factor of φ(q). And winning that extra factor of φ(q), when you go all the way back, ends up giving you something like the following; I think I will finish on time.
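The Brun-Titchmarsh input can be illustrated numerically: imposing the congruence on top of primality saves roughly a factor of φ(q), and the classical bound 2x/(φ(q) log(x/q)) holds with plenty of room. The sieve and the shape of the bound are standard; the code itself is my sketch, not the lecture's:

```python
# Illustration of Brun-Titchmarsh with a congruence condition:
#   pi(x; q, a) <= 2x / (phi(q) * log(x/q)),
# a factor ~phi(q) better than counting all primes up to x.
from math import gcd, log

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = bytearray(len(sieve[i * i :: i]))
    return [i for i, flag in enumerate(sieve) if flag]

x, q = 10 ** 5, 7
primes = primes_up_to(x)
phi_q = sum(1 for a in range(1, q) if gcd(a, q) == 1)
bt_bound = 2 * x / (phi_q * log(x / q))       # Brun-Titchmarsh upper bound

for a in range(1, q):
    if gcd(a, q) > 1:
        continue
    count = sum(1 for p in primes if p % q == a)
    assert count <= bt_bound                  # holds comfortably here
    print(a, count)
```

Each residue class a mod 7 contains roughly π(x)/φ(q) ≈ 1600 primes here, safely under the bound of about 3500.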
So what we're getting then is that the sum over χ in our set E is essentially ≪, up to some logs, the maximum over χ in E and |t| ≤ T of |F_χ(1+it)|/log x, where F_χ denotes the Dirichlet series of f twisted by χ; there should also be a log of this quantity as well, but that's the shape. The point of winning that 1/φ(q) is this: at first sight you'd think that to apply Halász we just have to sum the Halász bound over all of these characters, but the orthogonality of the characters means we actually only have to take the maximum over all of them. It's a remarkable win. So now let me just go back to the beginning. We had the sum of f(n) for n ≤ x in the progression; we took out the plausibly big character, the one that may be big if there is one. So let's just say: if all of these |F_χ(1+it)|/log x are small, if none of them gets as big as log x, then we've already got Linnik's theorem; we've already won, we've got a good error term. But it's possible that one of them is big, and we proved by the repulsion principle that you can't have two of them big. So if one of them is big, we put it on the other side and we still get a good error term. So we've proved something like the following: Σ_{n≤x, n≡a (mod q)} f(n), with μ of course, or general f, equals (1/φ(q)) χ(a) Σ_{n≤x} f(n)χ̄(n) for the possibly exceptional χ, and in fact you can win an error term of the form x times something like (log q/log x) to some power, which is 2/π, as it happens, if you do this carefully. So we haven't quite proved Linnik's theorem yet; but notice I've done this for general f, in arithmetic progressions.
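As a summary, the statement just described has roughly the following shape. This display is my reconstruction from the lecture's description, with the normalizations schematic and the exponent 2/π as stated:

```latex
% With at most one exceptional character \chi_1 (mod q), for multiplicative f
% with |f(n)| \le 1 and (a,q) = 1, the result described is, schematically,
\[
  \sum_{\substack{n \le x \\ n \equiv a \ (q)}} f(n)
  \;=\; \frac{\chi_1(a)}{\phi(q)} \sum_{n \le x} f(n)\,\overline{\chi_1}(n)
  \;+\; O\!\left( \frac{x}{\phi(q)}
        \left( \frac{\log q}{\log x} \right)^{2/\pi} \right),
\]
% and if no character is exceptional the main term is simply absent.
% (Constants and normalizations here follow the lecture's verbal description.)
```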
So the real question then is: if f is μ, we want this remaining term to be o(x); we want to know that there's no character for which this is big. So let's suppose the prime number theorem for arithmetic progressions fails in this sense. And notice this error term is good as long as x is a big enough power of q, say x = q^A for a big power A. Then, when we work through this, sorry, there should be a 1/φ(q) here, we have a character χ with |Σ_{n≤x} μ(n)χ̄(n)| ≫ x. So if that is ≫ x, we know that χ is μ-pretentious, or μ is χ-pretentious; that's from Halász's theorem. But at the start I pointed out that that's equivalent to having a Siegel zero. So what we've proved is: unless you have a Siegel zero, Halász's theorem, just using orthogonality mod q, gives you Linnik's theorem, and we're reduced to the Siegel-zero case. So, just to finish off, how do you deal with the Siegel-zero case? Well, I told you the classical way; let me tell you another way. A Siegel zero says that for the quadratic character mod q, for most p up to x, the quadratic residue symbol (q/p) is −1. Remember I said that if you have a Siegel zero, then a positive proportion of the numbers in the appropriate arithmetic progressions tend to be prime. So how can we exploit that? Well, there's a nice way in the book of Friedlander and Iwaniec, the bible, the Opera de Cribro. I'm going to give you a slightly more naive way to do it, which only works in certain cases, but it actually motivates what they did. So if you look at a binary quadratic form ax² + bxy + cy², which has discriminant q, then it's elementary number theory that p dividing a represented value implies (q/p) is 0 or, sorry, 0 or +1. So let's see.
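This elementary fact, that only primes with (q/p) ≠ −1 can divide primitive values of the form, can be checked numerically. A minimal sketch with the form x² + y², discriminant −4, where the constraint becomes p = 2 or p ≡ 1 (mod 4); the helper `prime_factors` is mine, not from the lecture:

```python
# Check: if p divides a primitive value x^2 + y^2 (gcd(x, y) = 1), then -1 is
# a square mod p, so p = 2 or p = 1 (mod 4). The primes that can divide such
# values are thus a restricted set, which is what the sieve exploits.
from math import gcd

def prime_factors(n):
    """Distinct prime factors of n by trial division."""
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

prime_divisors = set()
for x in range(1, 40):
    for y in range(1, 40):
        if gcd(x, y) == 1:                    # primitive values only
            prime_divisors |= prime_factors(x * x + y * y)

# Every prime dividing a primitive value has (-4/p) != -1:
assert all(p == 2 or p % 4 == 1 for p in prime_divisors)
print(sorted(prime_divisors)[:8])
```

Primes like 3, 7, 11 (those ≡ 3 mod 4) never appear in the set, which is the sparseness the naive inclusion-exclusion sieve relies on.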
If you look at the values of this binary quadratic form, most primes cannot divide them. In fact, the set of primes that can divide the values of this form is very sparse; so sparse that when you do the most simple sieve method, inclusion-exclusion, you can more or less prove the right number of primes among the values. If you use a little more technology, you can get a very good estimate, as they do in their book. But it is a simple case in which to do this sieving in the exceptional situation. And so, anyway, that concludes the proof of Linnik's theorem, if you can read it, using Halász's theorem. You can see the main ideas; the details, well, hopefully they'll be cleaned up in the book. Okay. So, questions or comments?

You start with the formula out there. Does that mean you will never have a remainder term better than one over log x?

Well, in Halász's theorem, as I pointed out before, there are certainly cases where you can't get a better error term; in very special cases, I mean. So in some ways, the point of writing it this way is that this is a technique that works for general multiplicative functions in arithmetic progressions. When you specialize to μ, then you can expect a lot more, and we saw in the proof of a strong form of the prime number theorem that there are techniques to exploit this sort of thing to get better error terms. Whether or not those can be combined here, I don't know. So perhaps the next question, which I'll pre-empt, is: how good a Linnik theorem can you get, what value of the Linnik constant L can you get? Well, that's for someone here to do. The proof has to settle down, and then one can look for a constant. But there is some hope, perhaps, that by combining these two methods you could do well on the constant. But the current record is 5.2, or what's the current Linnik record? It's 5? No, it's not. I won't pin it down.
And now look, somebody will go and get a better result right now; it's engineering work. Anyway, there's engineering work that gets you down to 5-point-something, and that's an amazingly strong result. To do that by any technique is extraordinary. If you look at Heath-Brown's paper, you'd be blown away by the number of incredible techniques in that paper. So I don't think there's any great hope that in the very near future we're going to be competitive. But who knows? Somebody has a clever idea; who knows.

So what do you have right now?

I wish we had a number; if we had a number you could write down, that would be a good start. I don't know, we haven't tried. We had a previous proof that was somewhat more difficult than this, with the old version of the proof of Halász's theorem, and trying to write down a number was a struggle. I mean, it's one of those things you really don't want to do until you've cleaned up the proof. But when you've cleaned up the proof, it probably won't give a good constant; so then you've got to clean that proof too, and look for the bits that are really damaging the constant. So it is work for somebody energetic and with good technique. I'll set it as a problem, Terry, I think.

I have a comment; I found this question quite relevant. This is something that came up in trying to use the theory of multiplicative functions in connection with Galois representations, for certain questions. Basically, what it looked like is that it would be very interesting to have this theory beyond the integers, when you are looking at the algebraic objects. Somehow I could not find in the literature any work that has been done on that.
I mean, for instance, if you want to extend the proof, say of the prime number theorem, to algebraic number fields, then automatically you run into the problem of understanding multiplicative functions beyond the integers, I would imagine. So I don't know if there is any work that has been done on that.

I don't know that there's any particular obstruction. I mean, class number one probably isn't the issue.

No, but you would be dealing with ideals and things like that, right? You can introduce a distance the same way and set up the problem, etc. So this came up naturally in trying to strengthen some results about Galois representations; I think we really would have needed this kind of theorem in number fields. So is there anybody who has looked at that?

No, and it's the first time I've actually been asked that. So no; it's a good idea. And I would hope that one could identify what issues come up because of the class group. Presumably the ideals wouldn't be a big issue. It's certainly not written down. I don't know, have you ever seen anything? No.

Also, you'd like to have a structure theorem and that kind of thing, right? If you want to prove the prime number theorem in number fields statistically, automatically, I guess, you would end up with that kind of machinery.

One would assume that there would be an analogous theory; it's just that it's not the form in which the theorems are written. Yeah, I mean, if you're using multiplicative functions, it should work.

So yes, that was actually my question, I guess. So I think one way of trying to do some of what you're suggesting would be if the theory works for functions which are not necessarily bounded by one. Okay, so this is something we've sweated some blood over, only to come to a painfully simple conclusion.
So it turns out that the techniques more or less all go through fairly easily, not if the function is bounded by two, say, but if you take Λ_f, the coefficients of −F′/F, and require the absolute values to satisfy |Λ_f(n)| ≤ κ Λ(n). This κ is the pretentious dimension; or pretentious density, whatever. So when you bound things like this, it seems to be the right thing to do: the proofs naturally want to behave, and Dimitris and Sound are developing some of that theory at the moment. One thing we did do recently was work out the analogues in function fields, partly to try to motivate how to simplify some of this, and it works out very beautifully in function fields: Perron's formula is replaced by Cauchy's formula, and Cauchy behaves rather nicely; I mean, it's fairly simple. The other generalization that's begging to be done, for obvious reasons, is to go to GL(2) L-functions, which I hope somebody will get going on in the near future. It's not very obvious how to proceed, but there are some ideas around of how to do GL(2).

You see, there is no pretentious character, no pretentious problem, at the GL(2) level?

We'll see, we'll see. There may be a result; okay, we may not be talking about exactly the same thing, but I'd like to hear afterwards what you're talking about. Also, to answer your question, I think a number of things would be more interesting in the number-field analogues, not in terms of the characters or the progressions, but in terms of the discriminant. I mean, for example, what's the smallest prime, in terms of the norm? Smallest prime meaning the smallest prime that splits. It's probably a step forward to extend these results to number fields to the same extent as they exist over the rationals.
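The growth condition just described can be displayed as follows; a sketch, where F is the Dirichlet series of f and the name "pretentious dimension" for κ is as in the lecture:

```latex
% Writing F(s) = \sum_n f(n)\, n^{-s} and -\frac{F'}{F}(s) = \sum_n \Lambda_f(n)\, n^{-s},
% the class of functions for which the method goes through is, roughly, those with
\[
  |\Lambda_f(n)| \;\le\; \kappa\, \Lambda(n) \qquad \text{for all } n,
\]
% where \Lambda is the von Mangoldt function and \kappa is the "pretentious
% dimension" of f (for |f| \le 1 one essentially has \kappa = 1).
```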
The question is how much, to what extent, one could get a theory which would be independent of the degree of the extension, so that it would be uniform.

It sounds like you have an application in mind.

Actually, a former student is quite interested in the splitting question. He keeps on threatening to work on it, but he hasn't done so yet; so maybe I'll encourage him. Any other suggestions for more work? Yes, okay.

What about, in the pretentious version: it seems like Halász is there and says these things have cancellation unless you correlate with n^{it} times a Dirichlet character. So what about picking out some other kind of characters, or something, with some creativity? I mean, I have no idea; I'm just wildly speculating. What do you think?

Good luck! No, it would be nice. I mean, one of the things I've tried to bring out somewhat, but not fully, in these lectures is that there is an analogue of the classical theory. And one can hope that where ideas about GL(1) L-functions have been generalized in different directions (a lot of the theory is, after all: you think about GL(1) and then do something suitable elsewhere), the pretentious approach can follow suit and allow one a little more flexibility. So I mean, it's perfectly reasonable that something might work. We don't know of obstructions, but that doesn't mean they don't exist; I mean, Henryk feels he has one. Let's leave it there.