Thank you very much, and thank you for the invitation to talk in this seminar, which I have to say has become quite a fixture in the number theory calendar worldwide; the organizers are to be thanked for this brilliant idea. Anyway, the subject I want to talk about, and it's just going to be a survey, basically, I'm not going to go into any great detail, is a subject which has interested me for well over 40 years, nearly 50 actually, and which has had quite a bit of activity over that period, even quite recently. So let me first of all start off by mentioning the first theorem that was proved of the kind I want to talk about. This is due to Hugh Montgomery in 1970, and it concerns prime number theory, basically the distribution of primes in arithmetic progressions. The psi function here, ψ(x; q, a), is the standard function of prime number theory. I'm going to use that function rather than the alternatives of theta or pi, which I sometimes use, simply because in some ways it's closer to the analysis. The von Mangoldt function Λ(n) here, as I think everybody will know, is the logarithm of the prime when n is a prime power, and otherwise it's zero; so basically ψ is a weighted version of counting primes. Okay, so the rather interesting discovery that Hugh made was that if you take Q to be large enough, there's actually an asymptotic formula for V(x, Q), and the main term is Qx log x. And this is kind of remarkable in a way, because if you think about it, you divide by Q to replace the sum over q, and then there are roughly Q terms in the sum over residue classes, so this says that the typical deviation here is about the square root of x divided by the square root of q. That is actually better than the Riemann hypothesis on average, and I think that really makes it quite exciting and interesting in some regards.
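Montgomery's theorem can be eyeballed numerically. The following is a minimal brute-force sketch (in Python, with a hypothetical small cutoff x; at this size the second-order terms are still visible, so the ratio is only of order 1, not close to 1):

```python
import math

def von_mangoldt(N):
    """Lambda(n) for 0 <= n <= N: log p if n = p^k is a prime power, else 0."""
    lam = [0.0] * (N + 1)
    composite = [False] * (N + 1)
    for p in range(2, N + 1):
        if not composite[p]:
            for m in range(2 * p, N + 1, p):
                composite[m] = True
            pk = p
            while pk <= N:
                lam[pk] = math.log(p)
                pk *= p
    return lam

def variance(x, Q):
    """V(x, Q) = sum over q <= Q and reduced residues a of (psi(x;q,a) - x/phi(q))^2."""
    lam = von_mangoldt(x)
    total = 0.0
    for q in range(1, Q + 1):
        psi = [0.0] * q                 # psi(x; q, a) for a = 0 .. q-1
        for n in range(1, x + 1):
            psi[n % q] += lam[n]
        reduced = [a for a in range(q) if math.gcd(a, q) == 1]
        phi = len(reduced)
        total += sum((psi[a] - x / phi) ** 2 for a in reduced)
    return total

x = 1000
ratio = variance(x, x) / (x * x * math.log(x))
print(ratio)   # of order 1: Montgomery's V(x, Q) ~ Qx log x, here with Q = x
```

At x = 1000 the ratio is nowhere near its limit, but it is visibly of order 1, which is the point of the asymptotic formula.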
As far as utility is concerned, well, you're averaging over so many things that it's probably not so useful in applications. I have seen one application, by George Greaves I think, where he was doing some sieving in several dimensions, maybe with a form of some kind in two variables, where it was useful to have the sum over all the residue classes modulo q. Okay, so Hooley's contribution came a few years later, where he found a much simpler proof and a somewhat more precise result. In fact, if I just go back, you can see here that this error term is not particularly good: if Q is x divided by log x or something, it is already almost the same size as the main term. And what happened was that Hooley's argument made it fairly easy to isolate that particular term. But you can see that if Q is very close to x, this term here also starts to get close to the main term. So you have an asymptotic formula, but it's not as precise as you might hope. Anyway, that's a feature of the result. But still, it is an asymptotic formula, at least if Q is greater than x divided by log x to some fixed power. Okay, so I've made that observation. And I should have said that there's a good reason why you can't expect a fantastically good asymptotic formula: if Q is, say, proportional to x — x over two, say — or even x over log x, well, the number of primes is smaller than Q. So you don't have enough primes to go around the residue classes, and some of these classes are going to be empty, and so this expression here, x over φ(q), is not such a good approximation to the psi function. If Q is smaller, of course, you expect it to be a good approximation, but as capital Q grows towards x, you can see that you get this diminution in the quality of the result as a consequence. Okay, those are things which will come up later in the talk. There was, of course, earlier work by Barban, and then by Davenport and Halberstam, and by Patrick Gallagher.
They were getting upper bounds for V(x, Q). It was closely connected, of course, with the large sieve, and you can get a result of this quality if Q is at most x divided by a power of log x. The actual power you require in order to gain the (log x)^A here was worked out first by Davenport and Halberstam, and then Gallagher got the best result. And actually, as far as asymptotics are concerned, Barban had apparently stated an asymptotic result when capital Q is equal to x. I have to say I haven't seen this paper; it's in a journal which is not readily available to me. And it's actually quite short, as far as I can tell from the reference, so I'm not sure whether it actually carries any proofs in it, and one can only speculate as to how he would have proved something like that. Writing down a proof is not that hard, but it's not clear exactly which method he used. So, I made this comment already: the results are not very surprising. I want to look at what ingredients are necessary in order to achieve a result of this kind, and then see to what extent they could be applied in other circumstances. If you look at Gallagher's proof of the upper bound rather than the asymptotics — of course, this is a well-known proof; I think most people in the area are very familiar with it — it reveals two of the basic ingredients very succinctly. You use Dirichlet characters directly to pick out the residue classes, apply orthogonality, and use the prime number theorem to deal with the principal character modulo q. Then you end up having to look at a sum like this, where now you've got 1/φ(q) and the principal character is omitted, and you can then replace the characters by primitive characters. Well, modulo a little bit of detail which has been omitted.
You have to worry about primes which divide q and things like that, but we won't concern ourselves too much with little details like that. And you can see that this expression here, by partial summation, at least if r is greater than L, you can reduce to this situation here, and to that you can apply the large sieve directly, or the character version of the large sieve. You can see that you get x over L here, and then a Q term here, times x log x, and if you take L to be a power of a logarithm, then you've dealt with a significant part of the problem. Then the piece out to capital L you can treat with the Siegel–Walfisz theorem, a standard result on primes in arithmetic progressions. So the proof is actually pretty simple, modulo the large sieve and modulo the Siegel–Walfisz theorem. And I like to think of these as follows: the Siegel–Walfisz theorem sort of corresponds, in the terminology I'm used to, to major arcs, and the large sieve in some sense corresponds to minor arcs. If you think of Gallagher's proof of the large sieve inequality, for example, that in some sense corresponds to the Hardy–Littlewood method, and the intervals really correspond, in some sense, to minor arcs. Anyway, so you have these two ingredients, and we shall see those ingredients coming up again and again. Okay, how about the asymptotic formula? Well, there's a standard way of approaching this, at least it was the standard way for many years: you just take your expression and square it out. When you do that, you get various sums — you should get at least three sums, right? And if you do some splitting, you get four. From the psi function squared you get the combination S₀ + 2S₁, which comes here — you can see the residue classes here, and it's the special case when the two elements in the psi function are the same.
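To set notation, the orthogonality step just described can be written out as follows (a standard identity, reconstructed here from the description above):

```latex
\psi(x;q,a)
  = \sum_{\substack{n\le x\\ n\equiv a\ (\mathrm{mod}\ q)}} \Lambda(n)
  = \frac{1}{\varphi(q)} \sum_{\chi \bmod q} \bar\chi(a)\,\psi(x,\chi),
\qquad
\psi(x,\chi) = \sum_{n\le x} \Lambda(n)\chi(n),

\sum_{\substack{a=1\\(a,q)=1}}^{q}
  \Bigl(\psi(x;q,a) - \frac{x}{\varphi(q)}\Bigr)^{2}
  = \frac{1}{\varphi(q)} \sum_{\chi \ne \chi_0} \bigl|\psi(x,\chi)\bigr|^{2}
    + \frac{\bigl(\psi(x,\chi_0) - x\bigr)^{2}}{\varphi(q)}.
```

The prime number theorem handles the last (principal character) term, leaving the non-principal characters, which are then replaced by primitive ones and fed to the large sieve and Siegel–Walfisz as described.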
And then you get a diagonal term, which reduces to S₂, and then you get the term corresponding to the main terms. These S₂ and S₃ are rather easy to work out from prime number theory, or indeed from elementary number theory, and S₀, of course, you can work out from prime number theory. So the crucial sum is S₁. And it looks like a problem in additive number theory — additive prime number theory — which is what you'd expect. Okay, and the question is, how do you deal with S₁? Well, one way of dealing with S₁ is to rewrite this sum by collecting together in one function the prime powers whose difference is h. Then you can think of q as dividing that difference, and so you've got a divisor function with a restricted range for the divisor, right? And this is exactly what Hugh Montgomery did in his paper in 1970. Now this function r(x, h), you'd expect to be able to say something about it by Vinogradov's method. Think of the results of Chudakov, Estermann, and van der Corput, where they showed that almost every even number is the sum of two primes: the basic tool in that proof is the corresponding function where you have a plus instead of a minus, and you have an L² mean for that. The same technique works for this function, and you can replace it, essentially in L² mean, by a singular series times x − h, which counts the expected number of solutions. This was actually worked out by Lavrik in 1960, and it's what Hugh uses to deal with this sum, and then he gets the asymptotics from that. Okay. So that's it — technically it's quite involved, of course, but it's clear how one can do it. But this method was really superseded by Hooley's brilliant idea.
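In symbols — reconstructing the notation from the description above, so the exact normalizations are my guess — the rearrangement and the mean value theorem read:

```latex
S_1
  = \sum_{q\le Q}\ \sum_{\substack{m<n\le x\\ q\mid n-m}} \Lambda(n)\Lambda(m)
  = \sum_{q\le Q}\ \sum_{1\le r\le x/q} r(x,\,qr),
\qquad
r(x,h) = \sum_{h<n\le x} \Lambda(n)\Lambda(n-h),

\sum_{h\le x} \Bigl( r(x,h) - \mathfrak{S}(h)\,(x-h) \Bigr)^{2}
  \ll x^{3} (\log x)^{-A},
```

where 𝔖(h) is the twin-prime singular series (zero for odd h). The second display is the Lavrik-type L² mean: r(x, h) is, on average over h, well approximated by 𝔖(h)(x − h).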
Okay, so Hooley's idea is that you can use the large sieve — the Gallagher result, say — to deal with the smaller values of little q, so you're left with the values of q which lie between this Q₀ and capital Q. And so, in that S₁ function, you end up having to deal with something like this. You can think of dealing with a difference of two of these, one of them with Q₀. And the reason that Hooley writes it this way is for a very clever reason. If you interchange the summations here, then write n − m as q times r, so r is the complementary factor, you can see that that complementary factor is actually pretty small. And what he does is switch variables. It's a bit like Dirichlet's method of the hyperbola, only you don't have a small portion of a hyperbola to deal with, and you switch from the long axis to the short. Right. And if you work out the inequalities here, the only constraints are on r, and it's basically bounded above by (n − m)/Q₀, so by x/Q₀. Then you switch that round again in the sums, and you see that you can produce a sum on the inside over n, with n greater than m + rQ₀ and less than or equal to x, and the only other condition on n is that it's congruent to m modulo r. But r now is no bigger than a power of a logarithm, so you can use Siegel–Walfisz. It's a beautiful idea — the whole proof you could write down in a page, even with the detail. And it's the method of choice to this day for this kind of problem. Well, unless you run into some other difficulties; I mean, if you have a problem of this kind, this is the method you should choose, and if you have some other difficulties, then maybe you have to think outside the box, but that's another matter. Okay.
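The variable switch can be sanity-checked numerically. Here is a minimal sketch (toy parameters, chosen for illustration) that counts the triples (q, m, n) with q | n − m and Q₀ < q ≤ x in both orders — directly over the large modulus q, and flipped over the small complementary factor r:

```python
def direct_count(x, Q0):
    """Count triples (q, m, n): Q0 < q <= x, 1 <= m < n <= x, q divides n - m."""
    total = 0
    for q in range(Q0 + 1, x + 1):
        for n in range(2, x + 1):
            for m in range(1, n):
                if (n - m) % q == 0:
                    total += 1
    return total

def flipped_count(x, Q0):
    """Same count after Hooley's switch: write n - m = q*r and sum over the
    small complementary factor r, with n congruent to m mod r, n > m + Q0*r."""
    total = 0
    for r in range(1, (x - 1) // (Q0 + 1) + 1):
        for m in range(1, x):
            n = m + r * (Q0 + 1)      # smallest n with q = (n - m)/r > Q0
            while n <= x:
                total += 1
                n += r                # next n congruent to m mod r
    return total

print(direct_count(30, 4), flipped_count(30, 4))   # the two counts agree
```

The flipped sum only ever touches r up to x/Q₀, which is the whole point: with Q₀ = x/(log x)^A the inner progressions have moduli bounded by a power of a logarithm.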
Okay, so there are questions you can ask when you have a fixed q — what can you say about things like this expression with the psi function, for example — and I may mention that a little bit. But my main concern is the double sum in things like this. And to what extent can you extend these ideas to functions other than psi? Well, Hooley wrote at least 20 papers on the subject. Nineteen of them have this generic title, 'On the Barban–Davenport–Halberstam theorem', and there's one other paper that I know of, which was in the proceedings of the Vancouver ICM in '74. He may have written some other papers, but certainly 20; he was obviously fascinated by this whole subject. Okay. Now, the primes have a huge advantage over other sequences in that they're uniformly distributed in the reduced residue classes, and that makes life very easy. You know, think of almost every other sequence that is of natural interest in number theory — the square-free numbers, for example. They are no longer that well distributed in the residue classes: they vary, at least according to the GCD of q and the residue class a. And some other sequences are even worse than that; we shall meet some which are quite painful in that respect. So that is always going to be a difficulty, I think, in generalizing these results. Okay, so Hooley has a paper, I think it's the third paper with the generic title, where he treats a rather general situation. It's not exactly like the primes, because now, if you think about what this means here, he's treating a set S which has positive density — if you take q equal to one, you can see that he's saying that the density of his set is a constant times x. So it's different from the primes, and nicer than the primes in some sense. He has a criterion — I don't understand why it's called what it's called, but there must be a reason for it.
And it's basically saying, well, we're going to assume a Siegel–Walfisz-type condition: we assume that for relatively small moduli we know something about the distribution of the elements of our set in residue classes modulo q. And you see that his dependence is really on the GCD of q and a, not on a itself. Right. So it's still quite restrictive in some regards, but it does now include quite a lot of sequences of number-theoretic interest. Okay. The reason that he's able to manage this is because the large sieve allows it — you can always take out common factors when dealing with the large sieve relatively easily. Okay. And his final conclusion is that you get an upper bound of this quality: Qx plus an error where you save a power of a logarithm, but nothing much more, of course. I mean, you can't expect to do any better than saving a power of a logarithm if all you put in is something which only saves a power of a logarithm — that seems reasonable. Anyway, this is quite interesting. You see the Qx coming up here, and in the case of the primes we had Qx log x. And I'm not giving too much away by pointing out that x log x is roughly the average behavior of the sum of the von Mangoldt function squared, and here x times a constant, I should say, is the average behavior for the function which occurs here. Okay, we'll come back to that. As far as the square-free numbers are concerned, there is a very substantial history, and I've listed all the papers in the bibliography at the end for those who are interested. One can push the theory quite a long way for square-free numbers, of course, as you might expect. And I've written out here the corresponding function capital V. There's a question about what main term you should use as an approximation; I think, from an analytic point of view, this infinite series makes the best approximation. It's a kind of singular series.
It corresponds quite well, anyway, to the number of square-free numbers which occur in the particular residue class a modulo q. Okay. And it actually conforms to Hooley's requirement that this function should only depend on the GCD of q and a — you can see basically that that's true. Okay. So the result that one can get — the ultimate result, which is quite a lot of hard work — has a main term which starts to look different from the one we had before. You see you've got x^(1/k), not x, and you've got Q to a higher power — not Q to the first power, but nearly the second — with a dependence on k as well. So you get something which looks different from what I just mentioned, where I said the main term corresponds to the average of the square of some characteristic function; that's clearly not the case here. The error term is also quite good here, which you'd expect for something like the square-free numbers. These functions f₁ come out of the zero-free region of the zeta function — I don't really want to go into detail. Okay. But you still have a lack of uniformity as Q goes to x. You see how the error term starts to blow up: this becomes basically a constant, and this becomes pretty close to x squared — it approaches x squared. So you lose your uniformity again, and the reason is the same one: if Q is bigger than (6/π²) times x, there still aren't enough square-free numbers to go around to fill all the residue classes, so this is not necessarily a particularly good approximation at that end. Okay. Another question which has been looked at is the distribution of smooth numbers in arithmetic progressions. It became a very topical subject in the '80s and '90s through various applications to cryptography, for example, and to numerical routines for factorization and things like that.
Also in Waring's problem it became a useful tool. So the distribution of these numbers in residue classes is quite interesting and rather complex. You know, for example, if the GCD of a and q has a prime factor bigger than y, there aren't any y-smooth numbers in your residue class. And finding criteria which match that is sort of tricky if you're dealing with general sums. Okay. So the only result I know is due to Harper in 2012, and that's actually on the arXiv — I couldn't find anything in a refereed journal, so I'm not quite sure what happened to that. But it has quite a nice upper bound here, and you can see again you get Q times the size of the number of y-smooth numbers up to x. That's the sort of main term, but it's not an asymptotic formula, and I wonder whether an asymptotic formula has been proved — I couldn't find anything in the literature. Okay. Yes. So there's an idea which has been floating around and first appeared in print in the paper that I wrote with Dan Goldston on the distribution of ψ(x; q, a) for the primes, assuming the Riemann hypothesis. And there's quite a bit of work assuming the Riemann hypothesis, because one can push things quite a bit further. We found a way which was at least as good as the Hooley technique, and which has turned out to be useful in those situations where the Hooley technique doesn't necessarily work as well. Okay. So let me go back to the sum S₁, and go back to Hugh Montgomery's original idea of basing things on Vinogradov's method. Well, this sum S₁ you can write as an integral with exponential sums. If you take F(α) to be this sum over q ≤ Q and r ≤ x/q of e(αqr), and then G(α) to be the corresponding generating function for the von Mangoldt function, and use orthogonality of the additive characters to compute this integral, you can see you've got exactly S₁.
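To make that last claim concrete: with F and G as just described, the identity ∫₀¹ F(α)|G(α)|² dα = S₁ can be checked numerically at toy sizes. Since every frequency qr + n − m involved is an integer of absolute value below 2x, sampling at 2x equally spaced points computes the integral exactly (a sketch with hypothetical small parameters):

```python
import cmath
import math

def von_mangoldt(N):
    """Lambda(n): log p if n is a power of the prime p, else 0."""
    lam = [0.0] * (N + 1)
    composite = [False] * (N + 1)
    for p in range(2, N + 1):
        if not composite[p]:
            for m in range(2 * p, N + 1, p):
                composite[m] = True
            pk = p
            while pk <= N:
                lam[pk] = math.log(p)
                pk *= p
    return lam

x = Q = 40
lam = von_mangoldt(x)
e = lambda t: cmath.exp(2j * math.pi * t)      # e(t) = exp(2 pi i t)

def F(alpha):  # F(alpha) = sum_{q <= Q} sum_{r <= x/q} e(alpha * q * r)
    return sum(e(alpha * q * r) for q in range(1, Q + 1)
                                for r in range(1, x // q + 1))

def G(alpha):  # G(alpha) = sum_{n <= x} Lambda(n) e(alpha * n)
    return sum(lam[n] * e(alpha * n) for n in range(1, x + 1))

# All frequencies q*r + n - m lie strictly between -2x and 2x, so N = 2x
# equally spaced sample points evaluate the integral over [0,1) exactly.
N = 2 * x
integral = sum(F(j / N) * abs(G(j / N)) ** 2 for j in range(N)) / N

# S1 computed directly: pairs m < n <= x with q | n - m, weighted by Lambda.
S1 = sum(lam[n] * lam[m]
         for q in range(1, Q + 1)
         for n in range(2, x + 1)
         for m in range(1, n)
         if (n - m) % q == 0)

print(integral.real, S1)   # the two agree up to rounding
```

The point of the exercise is exactly the orthogonality the speaker invokes: only the zero frequency survives the averaging, and the zero-frequency terms are precisely the pairs counted by S₁.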
And what Hugh did was equivalent to applying Vinogradov's method to one of the functions G, and then doing the rest of the integral basically by Parseval, or Cauchy–Schwarz and Parseval. But there is another approach you can take to this. You see this function F: it's a very easy function to deal with, and it's very easy to show that it's small on the minor arcs, much easier than using Vinogradov's method. So you can actually avoid Vinogradov's method completely and just use the estimate for F on the minor arcs and then apply Parseval to the function |G|². It gives a proof which is longer than Hooley's method, but which is still quite elementary — it doesn't use anything deeper than the prime number theorem, well, maybe the Siegel–Walfisz theorem to work out the major arcs. Okay. And it has some advantages of flexibility, because it doesn't use the large sieve: it doesn't require your sequence to have positive density, or density of a size close to that. So there's a lot more flexibility available there. And if you're dealing with something, say, with the Riemann hypothesis, it's relatively easy to take advantage of that in the method as well. Anyway, so although it's not as simple as Hooley's method, it has some advantages. Okay. So let me just suppose that a_n is a real sequence whose L² mean is bounded by x, and let's have a Siegel–Walfisz condition in which ψ(x) is some function going to infinity at a reasonable rate, probably a power of a logarithm or something like that. Then you can compute the asymptotics of V(x, Q) by the method I just described, and you get a main term which I've written here. I haven't said anything about the error term; the error term is somewhat complicated. And you see here you get your sum of a_n squared — in the prime case, well, this condition excludes the von Mangoldt function.
But still, in that case you would have x log x here. And in the case of the square-frees, which do satisfy this, you would again have something which corresponds to the square-free main term. But now you've got another term here, and although I've written this function G(q) in this rather complicated form, it turns out that it corresponds to the major arcs — it's actually the singular series which arises from considering that integral I wrote down previously. Okay. And you can see that what you have here is something that corresponds to the L² mean of the whole integral, and this one corresponds to the major arcs, so this difference ought to correspond to the minor arcs. So what this says is: if your functions are such that the minor arcs are large, then you're going to get something which behaves like Qx; but if the minor arcs are small, as they are with the square-free numbers, with the k-free numbers, then you get something substantially smaller than that. So you can see that the behavior of V(x, Q) is intimately tied up with the behavior of that integral I wrote down previously. Let me just go back — the integral corresponds to this. Very curious. The whole behavior of V seems to depend quite closely on this difference, in many, many circumstances. Okay. So I've written out here what I just said: this sum here corresponds to this integral squared, and this corresponds to the major arcs of this thing. And as for the statement I made that the L² mean should be bounded by x — of course, as I've pointed out, the von Mangoldt function is bigger, it doesn't meet this criterion, but the same ideas will still work in that situation. And you can see that in the case of the primes, this main term is x log x, and then you get some other stuff, which we know in the second order corresponds, in some sense, to the sum here.
The one thing I should say, though: it's a well-known fact that when you have the von Mangoldt function, there's a positive contribution to the integral from the minor arcs. If it were dominated by the major arcs, you could prove twin-prime-type results. Okay. So I think that covers everything I've said there. Now, the next thing I want to talk about is, in some sense, a detour. There's a remarkable paper that Hooley wrote in '98, I think in Crelle, where he deals with the third moment. Of course, it's not the modulus — it's just the third power of the difference here. I don't think anybody knows how to deal with the modulus cubed. It has this weight φ(q) in here, and you need some sort of weight in there, because you expect the thing you're summing to be dropping off fairly rapidly in q, so summing little q out to big Q you wouldn't get much benefit from the terms near big Q; in order to gain more, you have to put on a weight which emphasizes the larger values of q. And you might ask, why doesn't he use a smooth weight here? That would be the obvious thing to do. Well, he has a lot of complications in working things out, and it turns out that things just cancel more nicely if you use Euler's function. Okay. Hooley runs into considerable difficulties with the process. The process is that you cube out this expression, so you've now got four terms, and three of those terms you can do by methods that are fairly well established, right? I mean, the part coming from the (x/φ(q))³ is easy; the next part is easy; the one coming from the psi squared can be done just like the second moment. So you're left with the third moment, and that means you have to deal with a product of three von Mangoldt functions. It turns out you can use Vinogradov's method.
The depth is about the same as the Vinogradov three-primes problem, with complications. And Hooley's problem is to show that these four sums add up to zero — basically there's cancellation of the main terms, and the expression is asymptotically equal to some collection of second-order terms, which is the pattern established for the second moment. The paper is, I think, 75 pages long, and I think 40 of those pages are taken up with showing that these terms cancel out. And as a consequence, the paper is littered with quotations from Dante's Inferno — it's quite a literary experience to read this paper, I have to say, and not just because of that, but because of the language that is used generally. I would advise any student of the subject to peruse this paper for things other than mathematics, because it's kind of interesting from that point of view. Anyway, it's remarkable stuff. Okay. And the idea that Dan Goldston and I used turns out to be usable on this problem, to give a somewhat simpler approach. Okay, so let me just say that the core problem for the primes, for the second moment, concerns really this sum here. On the major arcs, you expect this generating function to behave a bit like — well, if you're on the arc centered at a/q, you'd expect it to behave like the Möbius function of q divided by Euler's function of q, twisted by a sum which has its peak at a/q. So you'd expect the term corresponding to a/q here to be a very good approximation to G; and if you summed over all of these, the other ones wouldn't add up to very much, because those points are further away. So there's a technique, which actually goes back to Hardy and Littlewood, of approximating your generating function by a sum of this kind; it was used extensively by Davenport and Heilbronn and others in the 1930s, and by Roth in the 1950s.
It was later superseded by other methods, but it's a useful technique, and here we revived it for this situation. It works quite well. The point is, when you write it in this way, it's independent of q and a, but it's still a good approximation at all these places that you have to visit in the integral. Okay. So the idea: remember, in the original Montgomery–Hooley result, we have a main term of x/φ(q). Well, the idea here is to replace that main term by this expression, which is just a form of using this approximation and then summing it over the n which are in the residue class a modulo q. Okay. Seems a little complicated, right? But it actually has the feature that in here you have a function G*, which is a good approximation to G throughout the whole of the major arcs. And if you use this new main term, you can then use Hooley's method of flipping the r and q to prove this asymptotic formula — it's actually easier than using the Hardy–Littlewood method. And you get a main term now of Qx log(x/R), minus a second-order term, and you get an error term which is now uniform as Q approaches x. Okay.
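The arithmetic ingredient in approximations of this μ(q)/φ(q) shape is the Ramanujan sum c_q(n), the exponential sum over the reduced residues a mod q. As a hedged aside (this checks only the classical identity for c_q(n), not the speaker's actual error analysis), the exponential-sum and divisor-sum formulas for it can be verified to agree:

```python
import cmath
import math

def mobius(n):
    """Mobius function by trial division."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0          # square factor: mu(n) = 0
            result = -result
        p += 1
    if n > 1:
        result = -result
    return result

def ramanujan_exp(q, n):
    """c_q(n) as the exponential sum over reduced residues a mod q."""
    return sum(cmath.exp(2j * math.pi * a * n / q)
               for a in range(1, q + 1) if math.gcd(a, q) == 1)

def ramanujan_div(q, n):
    """c_q(n) via the classical formula: sum over d | gcd(q, n) of d * mu(q/d)."""
    g = math.gcd(q, n)
    return sum(d * mobius(q // d) for d in range(1, g + 1) if g % d == 0)

# Example: c_4(2) = e(2/4) + e(6/4) = -2, matching 1*mu(4) + 2*mu(2) = -2.
print(ramanujan_exp(4, 2).real, ramanujan_div(4, 2))
```

Both formulas give the same integer for every (q, n), which is why the major-arc coefficients can be manipulated either analytically or multiplicatively, whichever is convenient.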
And the point is that this approximation you're using now takes account of the distribution in the residue class in a better way when q is large. Okay. So that's interesting, and then you can recover the original results with a bit of work too. It gives an interesting approach, which is useful in quite a number of circumstances. And in particular, if you use this on the third moment, you can produce a very short proof of an asymptotic formula for the third moment. And you can see again it's uniform as Q approaches x, because you can take R to be a nice big power of a logarithm. Okay. And the bits here with the Q²x log x log R, you can actually write those down explicitly as well, so you can get quite good asymptotics — I've just shortened it for the page here. Okay. Oh, something else you can see for this third moment is remarkable: you actually get better than square-root cancellation. You see the power of x — if you had square-root cancellation, you'd expect it to be x to the three halves. Something remarkable is going on there; I've never really got to the bottom of it. Probably some cancellation going on because of not having absolute values, but still. Okay, there's some justification of what's going on here, but I think we'll skip that; that's the comment I just made about square-root cancellation. Okay. So I wanted to say something about the second moment using this method. And you can see that if you write this expression out now, you don't need to split it up, you don't need to square it out: you can write this difference as Λ(n) minus ψ_R(n), all squared, and you can write this directly as an integral.
Anyway, I won't go into too much detail, but you can see that you can get a direct proof that this corresponds to what is going on, and that the division into major and minor arcs is a crucial part of the problem. Okay. So here are some details explaining what's going on with the third moment — it pretty much gives a little more detail about what I was saying, but I don't think we're going to have time; we're running a little short. Okay. I wanted to talk about generalizations. There are quite a few generalizations in the literature, some to algebraic number theory — there's a paper by Smith in 2010 for number fields, where the primes are now primes in number fields with certain properties — and there's an example of Keating and Rudnick on an analogue for function fields. In all of these cases, things are well behaved. So I started having students look at situations which are not so well behaved. I had Mike Danck look at what happens if the function you have here is the number of ways of writing a number as a sum of two squares. So the first moment is well behaved for r(n); the second moment is a bit bigger — it grows like x log x, I think. And he was able to obtain an asymptotic formula for this, and it has a similar character to the one for the primes we've seen before; again, you can see the L² mean coming up here, for the r(n) squared, and there's the log x. Another example, which is a little bit more complicated, was looked at, I think in a special case by Motohashi and then by Pongsriiam: the divisor function. And the interesting thing about the divisor function is that the main term, the approximation, doesn't factor into local factors in the way that all the other examples do; you have a factor in which the local factors and the x are mixed together. It's a real nuisance, and there's no way of avoiding it. So that is of interest in itself.
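The two statements about r(n), the number of representations as a sum of two squares — first moment well behaved (it tends to π, the Gauss circle problem), second moment of size about x log x — are easy to eyeball numerically. A minimal sketch (the cutoff x here is a hypothetical choice; the constants printed are empirical, not claims from the talk):

```python
import math

def r2_list(x):
    """r(n) = #{(a, b) in Z^2 : a^2 + b^2 = n} for 1 <= n <= x, by lattice count."""
    r = [0] * (x + 1)
    m = math.isqrt(x)
    for a in range(-m, m + 1):
        for b in range(-m, m + 1):
            s = a * a + b * b
            if 1 <= s <= x:
                r[s] += 1
    return r

x = 20000
r = r2_list(x)
first = sum(r[1:]) / x                                   # tends to pi
second = sum(v * v for v in r[1:]) / (x * math.log(x))   # of order 1
print(first, second)
```

The first ratio is already within a couple of percent of π at this size; the second settles to a constant, consistent with the x log x growth of the second moment mentioned above.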
And the approximation we ended up using is again the major arc approximation of the kind we discussed before. With that you can get an asymptotic formula, though it's a great deal of work, and if you write out all the detail it takes about a page just to state the main terms. I've written P(log X, log Q) here, a polynomial of degree two, but in the thesis the terms are all written out. Again, it's uniform as Q goes to X, which is quite interesting. Then recently I had Pengyong Ding look at the case where the function is the number of ways of writing a number as a sum of three cubes. This is very interesting because we don't know the size of the second moment: all we know is that it lies somewhere between X and X to the seven sixths. You don't have an asymptotic formula for it, but nevertheless, if we go forward, it comes up as the main term. So here we have an asymptotic formula where we don't know the size of the main term. And again you have a smaller term: this corresponds to the major arcs, this corresponds to the whole integral, and so this corresponds to the minor arcs, I suppose. But Hooley showed that there's a positive contribution from both, so we know that this is at least proportional to QX, but we don't know whether it's bigger. And again, the approximation we used here is really a major arc approximation. So that's an interesting example. All the examples we've looked at so far have an L1 mean of size about X, but Brüdern and I recently looked at a case where this is significantly smaller: we look at r_2(n), the number of ways of writing n as a sum of two cubes. And the lattice point argument tells you that the L1 mean is X to the two thirds.
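For the sum of three cubes, the second moment that appears as the main term can at least be computed for small x. A hypothetical brute-force sketch of mine (not from the talk): count ordered triples of positive cubes summing to n, then form the two moments.

```python
def r3_cubes(x):
    """r[n] = ordered triples (a, b, c), all >= 1, with a^3 + b^3 + c^3 = n."""
    cubes = []
    a = 1
    while a * a * a <= x:
        cubes.append(a * a * a)
        a += 1
    r = [0] * (x + 1)
    for ca in cubes:
        for cb in cubes:
            if ca + cb + 1 > x:   # cubes are ascending, so we can stop here
                break
            for cc in cubes:
                n = ca + cb + cc
                if n > x:
                    break
                r[n] += 1
    return r

x = 4000
r = r3_cubes(x)
m1 = sum(r)                    # first moment: of order x
m2 = sum(v * v for v in r)     # second moment: order unknown, between x and x^(7/6)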
And a famous result of Hooley says that the L2 mean is also about X to the two thirds: the number of representations is either zero or two, basically, and most of the time it's zero. So again it's natural to use the Hooley approach and to use this approximation here, and we find that you again get an asymptotic formula, this time with a constant times QX to the two thirds, which corresponds to the L2 mean. I should point out that the major arcs here are smaller than the minor arcs, a well known phenomenon: they're only about QX to the one third in size. And the core of the proof uses this major arc approximation, which really simplifies the proof quite a bit. Just recently, we've been working on something which is even thinner: the sum of a k-th power and an l-th power, for various choices of k and l. The same techniques apply; it's technically quite complicated, but doable, and I've just listed the results here. Okay, I should probably finish in a minute or two. Oh, there's a question: is this in the literature? Does anybody know if the Möbius function has been treated? I couldn't find it in the literature. It's an interesting question, and pretty easy to do, actually, once you've seen these techniques, but still. Anyway, the first question I have is: is there a change of nature if lambda is less than one half? All the results we have are for when the size of this is bigger than X to the one half. What happens at X to the one half? Do things get worse, better, different? Are there any examples? Another question which has come up: is there an asymptotic formula for this expression, where you now fix q and average over just the a's, not the q's? You had better take q to be large. And it's known that it's asymptotic to x log q for almost all q.
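Hooley's "zero or two representations" phenomenon for sums of two cubes is easy to see numerically. A small sketch of my own (not from the talk): tabulate ordered pairs (a,b) with a cubed plus b cubed equal to n; the taxicab numbers 1729 and 4104 show up as the only exceptions below 5000 with more than two representations.

```python
def r2_cubes(x):
    """r[n] = ordered pairs (a, b), a, b >= 1, with a^3 + b^3 = n."""
    r = [0] * (x + 1)
    a = 1
    while a * a * a + 1 <= x:
        b = 1
        while a * a * a + b * b * b <= x:
            r[a * a * a + b * b * b] += 1
            b += 1
        a += 1
    return r

x = 5000
r = r2_cubes(x)
represented = sum(1 for v in r if v > 0)          # sparse: of order x^(2/3)
exceptional = [n for n, v in enumerate(r) if v > 2]  # 1729 = 1^3+12^3 = 9^3+10^3, etc.
```

Diagonal values n = 2a^3 give a single ordered pair, but those are even rarer, so "zero or two" really is the generic picture.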
Hooley apparently conjectured this, rather sloppily, for every q, without saying anything about the range of q. It's rather easy to show that it's false if q is small. For example, if you take q equal to one, Littlewood's result says it's false. And I've shown that it's false for a small range of q, not going very far. But I think Hooley would have intended that q is greater than X to the delta, maybe even that q is greater than X to the one half. I think it's kind of hopeless, but still it's an interesting question. We do know it's true for almost all q over quite big ranges, and there are some results here which I've listed as well. Okay, the second question I have is related to this. If you know how to deal with the Montgomery-Hooley kind of estimate for a general expression like this, can you prove something when you sum over just the a's, not both q and a? Is there something general going on here? Can you prove something? I haven't really thought about it very much, but it seems an interesting question. There may well be examples of sequences a_n where you can actually prove something; it would be interesting for somebody to have a look and see. And Hooley may even have looked at some of this, at least for almost all q, in some of his papers. Okay, so that's it. Thank you very much for listening.
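The Möbius question raised above can at least be experimented with. A sketch of mine (nothing here is from the talk): sieve mu(n) up to x, then form the variance over residue classes for a single fixed q.

```python
def mobius_sieve(x):
    """mu[n] for 0 <= n <= x, by sieving over primes."""
    mu = [1] * (x + 1)
    mu[0] = 0
    marked = [False] * (x + 1)
    for p in range(2, x + 1):
        if not marked[p]:              # nothing smaller marked p, so p is prime
            for m in range(p, x + 1, p):
                marked[m] = True
                mu[m] = -mu[m]         # one sign flip per distinct prime factor
            for m in range(p * p, x + 1, p * p):
                mu[m] = 0              # squarefull: mu vanishes
    return mu

def mobius_variance(x, q):
    """Sum over a mod q of (M(x;q,a) - mean)^2, M(x;q,a) = sum of mu(n), n <= x, n = a mod q."""
    mu = mobius_sieve(x)
    M = [0] * q
    for n in range(1, x + 1):
        M[n % q] += mu[n]
    mean = sum(M) / q
    return sum((s - mean) ** 2 for s in M)
```

This only probes the question empirically, of course; the point of the question is whether the Montgomery-Hooley machinery gives an asymptotic formula for such a variance.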