When I agreed to talk, I thought this would be a cozy little seminar with a couple of dozen people. So it turns out there are more people who are interested in number theory, which is a good thing. And I think it's a great thing that this seminar has been organized. It's a really nice touch that we can still keep in contact around the world. Okay, so I want to give a flavor of a blackboard talk, so I'm using my standard Zoom teaching technique. And what I want to do is talk about a topic which began for me about 10 years ago, and I'll try to say what's going on. So first of all, as background, I want to think about Waring's problem. So Waring's problem is to consider representations of a positive integer n as a sum of s kth powers of natural numbers. And in particular, I'm interested in this talk in the number of representations of an integer n as a sum of s kth powers; I'll call that R_{s,k}(n). This classical problem goes back to Waring in 1770, and we think we know what the answer is to this representation problem. So it is conjectured, and I'll put an asterisk on this, which I'll return to in just a moment, that when s is at least as big as k + 1, the number of representations of n as a sum of s kth powers is a quotient of classical gamma functions, times the singular series, which I'll define in just a moment, times n^{s/k − 1}, plus little-o of n^{s/k − 1}. And I call this the anticipated asymptotic formula. So the singular series might be zero if there are no local solutions to the problem. It's defined as a product of p-adic densities, but it can be written in closed form in terms of classical Gauss sums. Here e(z) is just equal to exp(2πiz), so the terms are e(a r^k/q), as usual. The only wrinkle, the reason there's an asterisk here, is that if s is as small as k + 1, the singular series may not be nicely convergent. So what I've written down here is certainly valid.
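For readers following along, the conjectured asymptotic formula just described can be written out as follows (reconstructed from the spoken description; R_{s,k}(n) and the singular series 𝔖(n) are as in the talk):

```latex
% Conjectured asymptotic formula for Waring's problem, valid for s >= k + 1:
R_{s,k}(n) = \frac{\Gamma(1 + 1/k)^{s}}{\Gamma(s/k)}\,\mathfrak{S}(n)\,n^{s/k - 1}
  + o\!\left(n^{s/k - 1}\right),
% where the singular series, a product of p-adic densities, has the closed form
\mathfrak{S}(n) = \sum_{q=1}^{\infty}\;\sum_{\substack{a=1 \\ (a,q)=1}}^{q}
  \Bigl(q^{-1}\sum_{r=1}^{q} e\!\bigl(a r^{k}/q\bigr)\Bigr)^{\!s} e\!\bigl(-a n/q\bigr),
\qquad e(z) := e^{2\pi i z}.
```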
Everything makes sense if s is at least k + 2. And so with a number of variables which is a little larger than k, you expect to understand this problem, and you expect to understand it very well. That's the conjecture, right? So what do we know about this kind of problem? That's where some notation comes in handy. So let's define G̃(k) to be the least number of variables you require to establish this asymptotic formula (1), which I have at the top of the slide here. So that gives me a way of measuring what we can actually prove. We conjecture that this should be just k + 1, and this is really the starting point for this talk in a way. There's a nice classical result of Hua from 1938, which shows that with 2^k + 1 variables you can prove this asymptotic formula. And the proof of this, if you've ever studied the circle method, is what you prove in your very first course on the circle method; that's how you most usually learn about the circle method. It's a very classical sequence of arguments, really dating back to how Hardy and Littlewood started it early in the 20th century, in the 1920s and a little before that. And Hua introduced some very nice refinements, involving counting solutions of Diophantine equations, to make the argument more efficient. Okay, so that's the ancient history. Just to complete this picture for the moment, let me tell you about the current state of play. So this is a topic which has undergone some refinements, but in the case of cubes, if you want to understand Waring's problem for cubes, then still the world record is a result of Vaughan from 1986: you get the expected asymptotic formula with eight cubes. That's one variable fewer than Hua would have given. When k is larger than three, then the recent progress on Vinogradov's mean value theorem plays a role.
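In the notation just defined, the classical benchmark and the cubes record read as follows (an editorial reconstruction of the slide):

```latex
\widetilde{G}(k) \le 2^{k} + 1 \quad \text{(Hua, 1938)}, \qquad
\widetilde{G}(3) \le 8 \quad \text{(Vaughan, 1986)}, \qquad
\text{conjecturally } \widetilde{G}(k) = k + 1.
```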
So once the main conjecture in that subject had been proved, by Bourgain, Demeter and Guth, and by myself, then some work of mine from 2012 onwards gives bounds on these values which supersede those of Hua already when k is four or more. And for large values of k, you get a result which is like k² − k, with some lower-order terms involving the square root of k. So this is quite a precise version of the result; it was in a paper which started off life, actually, more or less at the Fields Institute, in the 2017 program we had there, and appeared last year. Bourgain had what you might call a sketch of a less explicit bound, where "sketch" requires some interpretation. Okay, so that sort of completes the picture there. And what I wanted to say by way of introducing the topic of this talk is that you can obtain similar results to the ones I've mentioned so far for integer-valued polynomials. So if I replace x^k by a polynomial of degree k with rational coefficients that takes integer values, then all of the results that I've explained have analogs: with the same number of variables, you get an asymptotic formula. The asymptotic formula changes a little, and the local solubility conditions change a little as well, but that's all a very well understood topic. And in some sense, except for the dependence on the number of variables, even the local solubility issues were very well understood back in the early 1950s, thanks to work of Nechaev and others. So I can see people are buzzing with questions, maybe, or complaints about what I'm saying, so I'll let Mike or Alina figure out what's going on there. Carry on, sir. Okay, I'll carry on. Right, so this brings me to a problem. As far as I know, the first familiarity I had with this problem was a conference in Lille in 2009, I think, although my memories can be very faulty all the same.
So Ben Green posed a problem, which he may well have posed earlier, which is to generalize these results from integer-valued polynomials to certain generalizations of polynomials which had occurred in investigations surrounding systems of linear equations in primes, so the whole nilsequence approach to that topic. So, motivated by that, he posed the question of what happens if, as the simplest example, you replace the squares, or a quadratic polynomial with rational coefficients, by a bracket polynomial. And by a bracket polynomial, what we mean is something like this: an integer times the integer part of that same integer times some real number θ, and to make this interesting, it's best to take that real number to be positive and irrational. So he had in mind, and stressed, the case that θ equal to the square root of two, which is a special case of this problem. So, for example, can you find an integer s₀ having the property that whenever you have at least s₀ variables, then all large integers n have a representation as a sum of s values of this bracket quadratic polynomial? That's the problem. Of course, you can imagine all kinds of variants of this problem, and I'll say something about some of the variants later on, but at least that's a good problem to have in mind to begin with. And maybe I can make a couple of observations, which are not new, about this problem. So as Ben pointed out when he presented this problem, of course, if this number θ is a natural number, or rational, then basically this amounts to understanding what goes on in the polynomial case: you can cut into arithmetic progressions and reduce to that. So you really should stick to the case of irrational values of θ to make this interesting.
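In symbols, the question just posed asks the following (an editorial reconstruction; θ > 0 is irrational, ⌊·⌋ is the floor function, and θ = √2 is the motivating special case):

```latex
\text{Is there an } s_0 \text{ such that every sufficiently large } n \in \mathbb{N}
\text{ admits a representation}
n = \sum_{i=1}^{s} x_i \bigl\lfloor \theta x_i \bigr\rfloor,
\qquad x_i \in \mathbb{N}, \quad \text{whenever } s \ge s_0 \,?
```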
And really what makes this rather non-trivial is the observation that these polynomials, bracket quadratics, don't necessarily have the same kind of equidistribution properties that polynomials with rational coefficients have. So if you think in terms of the Weyl equidistribution theory, then you realize that polynomials with rational coefficients have nice equidistribution properties modulo one when you multiply by an irrational number α. But this sort of thing can fail for these bracket quadratics. So basically, with some examples of these bracket quadratics, as Ben observed, you can relate them to values which are integers. If I multiply by a suitable constant here, I'm really getting some integer here on the right-hand side, minus a term which is the fractional part of x√2, squared. And x√2 is uniformly distributed, it's equidistributed modulo one, so it spends a fair amount of time, and by that I mean strictly a fair amount of time, as in not an unfair amount of time, close to zero. And that means that its square spends an unfair amount of time close to zero: somehow the square of something small tends to be smaller. And so this has a bias towards being close to zero, and that means that you can get a failure of equidistribution for these things. So that's a nice observation, which means that one would expect that some fairly routine version of the circle method is not going to be very successful in tackling this problem. That's a perfectly reasonable observation, or expectation. Okay, so where do we go with this? Well, there's a thesis of a student of Ben, Vicky Neale, from 2011, where she answers this question. So the answer is yes, there is such an s₀; we'll get to the question on the slide. So there is such an s₀: you can represent large integers by sums of values of these bracket quadratics.
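As an editorial aside, the bias just described is easy to see numerically. A minimal sketch (the cutoff N = 100000 and threshold 0.1 are illustrative choices, not from the talk): {√2 x} is equidistributed modulo one, but its square piles up near zero, since {√2 x}² < t exactly when {√2 x} < √t.

```python
import math

# Illustrate the observed bias: {sqrt(2)*x} is equidistributed mod 1,
# but {sqrt(2)*x}^2 spends an "unfair" amount of time near zero.
N = 100_000
sqrt2 = math.sqrt(2)
frac = [(sqrt2 * x) % 1.0 for x in range(1, N + 1)]

# {sqrt(2)x} equidistributed: about 10% of values fall below 0.1 ...
prop_linear = sum(f < 0.1 for f in frac) / N
# ... but its square falls below 0.1 about sqrt(0.1) ~ 31.6% of the time.
prop_square = sum(f * f < 0.1 for f in frac) / N

print(prop_linear, prop_square)
```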
So the thesis doesn't give an explicit value for this number s₀; it involves Fourier analysis on nilmanifolds. And if you're sort of familiar with the circle method, which I don't expect everybody is, you don't have to be for this talk, you might say that the method is similar to Hardy and Littlewood's approach to Waring's problem, where basically Weyl's inequality is used on every variable. The analog of Weyl's inequality comes from this Fourier analysis on nilmanifolds, so the number of variables you need is very big. And in the thesis, Vicky expresses the point of view that the answer coming from the methods is probably closer to a million than to 10 to the 100. So probably a large number of variables; that's the upshot of that discussion. And so what I want to do in this talk, what am I going to do in this talk? I want to present, first of all, a couple of results of the flavor of Hua's lemma which can be input into the machine that Vicky and Ben developed. And this is something that is almost 10 years old; it dates to very shortly after Vicky's thesis. It's something that, at the time, I had in mind would probably be an appendix of some paper of Vicky and Ben; it might have become joint work at some time. Anyway, nothing's happened with it since, so I'm presenting it anyway. And then I'll say a bit more about some ideas I've had since then, and this gets us into the game of speculation about what methods may be used in wider generality. And the Hua's lemma discussion, which I'm about to enter into for these bracket quadratics, is particularly elementary in a certain sense, so I think everybody should be able to get something from that, and it's kind of thought-provoking. Okay, so let's get into that. So this is where my blackboard comes into play. So let me say a bit about Hua's lemma. I guess it's a good time to pause for any questions that people have had in the meantime.
It's a very good time to ask them, or I'll carry on and wait for a little alert from the organizers. So, what about the classical version of Hua's lemma, in the simplest interesting case, which is for the squares? That allows me to get into the ideas which we can make use of for these bracket quadratics. So Hua's lemma, what it does is it bounds mean values of exponential sums, and the interesting case, in the case of squares, is just the fourth moment. Now, of course, you can analyze this in many different ways, and you can get precise asymptotics for the number of solutions here. That's not the direction I want to take this, because I want to give a crude estimate that motivates what happens for these bracket quadratics. So, by orthogonality, the mean value here is counting solutions of a Diophantine equation, and I can arrange this, yes, question? Sorry, can I interrupt you for a second? There's just a question from the audience, going back a tiny bit: is this uniform in θ, the bound for s₀? I think in Vicky's work it is, and in what I'll present it certainly is. But I have a feeling that in Vicky's work, well, I mean, you can get improvements for certain values of θ, but I think there's an absolute s₀ independent of θ. We'll see later on; but even if I'm wrong about that, it can be made uniform. So the short answer is yes, I guess. Is that enough for now? Okay. Right, so I've arranged the Diophantine equation with some suitably chosen pluses and minuses here; you can arrange it as a sum of two squares equal to a sum of two squares if you prefer. But now, if I want to bound the number of solutions here, you can see that there are two obvious classes of solutions: there's one class where x₁ is equal to x₂, and if that happens, then x₃ is also equal to x₄, and vice versa; of course, I can reverse the implication if I want.
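For reference, the fourth moment under discussion and its counting interpretation via orthogonality, written out in symbols (an editorial reconstruction of the blackboard):

```latex
\int_0^1 \Bigl|\sum_{1 \le x \le X} e\bigl(\alpha x^2\bigr)\Bigr|^{4} d\alpha
= \#\bigl\{1 \le x_1, x_2, x_3, x_4 \le X : x_1^2 - x_2^2 = x_3^2 - x_4^2\bigr\},
```

since integrating e(α(x₁² − x₂² − x₃² + x₄²)) over α ∈ [0, 1) gives 1 when the exponent vanishes and 0 otherwise.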
And here the contribution is clearly something like X² solutions: X choices for x₁, X choices for x₃, and then x₂ and x₄ are determined. And in the other case, let me use x₃ and x₄ as the classifying pair of variables, so maybe x₃ is not equal to x₄. Then what I can do is fix x₃ and x₄, and then x₁ − x₂ and x₁ + x₂ multiply together to give x₁² − x₂², and that's equal to a fixed integer, which I'll call n(x₃, x₄), which is just x₃² − x₄², and that's non-zero. Okay, so that's all well and good. And so now we can see that each of x₁ − x₂ and x₁ + x₂ is a divisor of n, and the number of divisors of n is at most O(n^ε). Okay. So what have we done here? We've proved Hua's lemma already, and that's for the case k = 2. What we have here is we've shown that, in this mean value which we're interested in, there were two classes of solutions being counted: a diagonal contribution, and then we had another type of solution. There were X² choices for x₃ and x₄ in this second class of solutions, and then x₁ and x₂ are determined by divisor function estimates: if I know x₁ − x₂ and x₁ + x₂, then of course I recover both x₁ and x₂. And so that's Hua's lemma: we've shown that the total number of solutions here is at most X^{2+ε}. And what's important here, so far as the application of the circle method to Waring's problem goes, is that we've saved very nearly X² here. So the degree of this polynomial is two; we've saved very nearly X to that degree, X², over the trivial estimate, which would be X⁴. And that's what allows you to prove that you get the expected number of representations of an integer as a sum of five squares of natural numbers.
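The diagonal-plus-divisor count just described can be checked numerically. A small editorial sketch (X = 40 is an illustrative choice): the total solution count of x₁² − x₂² = x₃² − x₄² sits just above the diagonal contribution X², far below the trivial bound X⁴.

```python
from collections import Counter

# Count solutions of x1^2 - x2^2 = x3^2 - x4^2 with 1 <= xi <= X.
# By orthogonality this is the fourth moment of the quadratic Weyl sum;
# Hua's lemma for k = 2 says it is O(X^(2+eps)).
X = 40
diffs = Counter(a * a - b * b for a in range(1, X + 1) for b in range(1, X + 1))
count = sum(r * r for r in diffs.values())

# The diagonal solutions (x1 = x2 and x3 = x4) alone contribute X^2.
print(count, count / X**2)
```

Note that given x₁, x₂, x₃ there is at most one admissible x₄, so the count is trivially at most X³; the point of the divisor argument is that it is in fact only X^{2+ε}.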
Of course, you can say much more precise things about sums of three or four squares, but this is what we're doing in this setting of Hua's lemma. Okay, so that's the classical version, all very straightforward. So what about bracket quadratics? Well, first of all, there's no nice factorization of x₁⌊√2 x₁⌋ − x₂⌊√2 x₂⌋. Everything gets corrupted. So that goes away, divisor functions go away; it's all very frustrating. So it's not at all apparent how you would ever prove a lemma of Hua type. So let's deal with a bracket quadratic, and, following Ben's lead, let's consider the case of θ equal to the square root of two; you'll see that for this argument this is not particularly special. And I'll write q(x) for x times the floor of θx, so for us that's x⌊√2 x⌋. And I'll give a proof of the following more general result in just a moment. So let's record this as a theorem: the argument that I'm about to explain applies if you have any θ which is the square root of a rational number, so I suppose that θ² is rational and θ is positive and irrational, and then we've got this analog of Hua's lemma. So here's my bracket quadratic x⌊θx⌋, I look at the fourth moment of its exponential sum, and this is all bounded by X^{2+ε}. And what I'll do is prove this in the case θ = √2, and you can see it's a simple exercise to convert this into a proof for the more general values of θ of the type I've listed here. And the idea, the strategy, is to think about sums of two, well, sums or differences of two bracket quadratics. So suppose that n is a natural number; I've had time to think about how to present this since the early days when we thought about it, so you can either look at a sum or a difference of two of these things.
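Before the proof, here is an editorial numerical sanity check of this theorem for θ = √2 (the notation q(x), the cutoff X = 300, and the loose tolerances are illustrative choices; ⌊x√2⌋ is computed exactly as the integer square root of 2x²): the fourth moment of the bracket quadratic stays within a small multiple of the diagonal contribution X², and non-zero values have few representations as a difference of two bracket quadratics.

```python
import math
from collections import Counter

# Sanity check of the Hua-type bound for the bracket quadratic q(x) = x*floor(sqrt(2)*x).
# math.isqrt(2*x*x) = floor(x*sqrt(2)) exactly, avoiding floating-point error.
X = 300
q = [x * math.isqrt(2 * x * x) for x in range(1, X + 1)]
diffs = Counter(q[i] - q[j] for i in range(X) for j in range(X))

# Solutions of q(x1) - q(x2) = q(x3) - q(x4); the diagonal alone gives X^2.
fourth_moment = sum(m * m for m in diffs.values())
# Largest number of representations of a nonzero n as q(x1) - q(x2) in range.
max_offdiag = max(m for d, m in diffs.items() if d != 0)

print(fourth_moment / X**2, max_offdiag)
```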
So I claim that the number, I'll call it r(n), of integral solutions x₁, x₂ of x₁⌊√2 x₁⌋ − x₂⌊√2 x₂⌋ = n satisfies r(n) ≪ n^ε. And if you think about what this implies for a brief moment, this permits the idea of applying exactly the same strategy as we had in the classical proof of Hua's lemma. There we had diagonal solutions, and for the non-diagonal solutions what we did was look at the difference of two squares, and it turned out we had an argument involving the divisor function which showed that the number of representations of a nonzero integer as the difference of two squares grows like n^ε. And we're getting exactly the same result here, if we can prove this claim about bracket quadratics. So, granted this claim, Hua's lemma for these bracket quadratics follows straight away, and what I actually want to say more about is the proof of this claim about differences of bracket quadratic polynomials. Good. So here's the strategy. This is the only idea, I think, that really plays a role here, and then you have to use the idea; and the idea, as with most of my ideas, is very unimpressive. So let's write yᵢ for the floor of √2 xᵢ, for i equals one and two. And the profound implication of this is that √2 xᵢ − yᵢ is between zero and one, for i equals one and two. Good, nobody disagrees with this. So now what does this tell us? This tells us that (√2 x₁ − y₁)² − (√2 x₂ − y₂)², well, each of these objects √2 xᵢ − yᵢ is between zero and one, so their squares are also between zero and one, which means that this difference is somewhere between minus one and one. Good. But I can expand this expression. So if I expand it,
and use the fact that (√2)² = 2, then I get a quadratic polynomial in x₁, y₁ up to y₂ with integer coefficients, minus 2√2 times (x₁y₁ − x₂y₂). And notice here that x₁y₁ − x₂y₂ is nothing other than x₁⌊√2 x₁⌋ − x₂⌊√2 x₂⌋, so this expression here is supposed to be equal to n. So what we've done here is we've shown that the expression 2x₁² + y₁² − 2x₂² − y₂² − 2√2 n is somewhere in the interval from minus one to one. So far so good. But that means that we've almost fixed this integer here in the first bracket. It is an integer, and it's in the interval from 2√2 n − 1 to 2√2 n + 1, so the left-hand side is either the ceiling of 2√2 n − 1 or the ceiling of 2√2 n: two choices at most. Okay, so we almost know what this integer is. So we know what x₁y₁ − x₂y₂ is, it's n, and this other quadratic expression is also almost fixed. And so what that means is that our r(n) is bounded above by the number of solutions of a system of equations; let me write down that system right here. So x₁y₁ − x₂y₂ = n, let's say, and 2x₁² + y₁² − 2x₂² − y₂² = m, where m is in this interval from 2√2 n − 1 to 2√2 n + 1, one of two integers. Okay, and here's the punchline: that means that m + 2√2 n I can rewrite as (√2 x₁ + y₁)² − (√2 x₂ + y₂)². So the number of choices for this integer here on the left-hand side is at most two; we know almost exactly what it is, it's one of two elements of the ring ℤ[√2]. And
what we've got here is a difference of two squares. So that means that the sum and difference of these two quantities, that's √2(x₁ − x₂) + (y₁ − y₂) and √2(x₁ + x₂) + (y₁ + y₂), are both divisors of m + 2√2 n in the ring of integers of ℚ(√2). And if I write this as O((|m| + n)^ε) choices for these divisors, then that certainly gives an upper bound for the number of choices for these two factors here. And if we know these two divisors, ah, you have a question? What about units? What about units, well, the point is that the coordinates of both of these expressions are bounded in terms of n as well; it's a good point. So all the xᵢ's and the yᵢ's are bounded by a constant times √n, thanks to the setting of the original problem. So we have an upper bound on the size of all of the coordinates, which means that the units also feed only a factor of n^ε into this game. And that's true in generality as well. Of course, I said you could look at √2 and think of an arbitrary quadratic irrational; things get more complicated if the class number is big, but the same conclusion holds. In fact, you have to work harder for it at that point, but it's all quite familiar, because we're only looking for solutions where the coordinates are bounded in a nice way. Does that answer the question? Absolutely. Okay, brilliant, thank you. So what we've done is we've bounded the divisors, and since, given these divisors, we can recover x₁ and x₂, this means that there are O(n^ε) choices for x₁ and x₂, and also, of course, y₁ and y₂, because those are defined in terms of x₁ and x₂. And that means that this number r(n), which I was interested in right at the beginning, is bounded by, well, at most two
choices for m, times this n^ε, so it's bounded by n^ε, which is what I claimed it would be bounded by. And as I said, that's enough to prove Hua's lemma for this kind of example. So you can see what we did in this argument: we started out with our mean value, we converted that by orthogonality into a statement which we could say something about if we knew about the numbers of representations of integers as differences of two bracket quadratics, and that problem we converted, by a bit of subterfuge, into a bounded number of problems involving differences of two squares representing fixed integers in the ring of integers of the number field that we're working with. And so, secretly, this problem was controlled by a problem in the number field involving actual quadratic polynomials. So the argument I've just presented generalizes from squares to more general quadratic polynomials in the number field. It also generalizes, to some extent, to higher-degree polynomials, but that's a different topic which, depending on how much time I have, I may say something about later; or I will say something about it, but I may not give many more details about it. The other problem that you may be interested to know about is what happens if θ is not a quadratic irrational: what about more general values of θ? And that's sort of an interesting story. I don't know a similarly elegant proof in that case, although there is a reasonably elegant proof; maybe I'll give the shorter version of the story, since time is running down a little. So there are two approaches to this that I know of; one involves heavier machinery, although neither of them is particularly easy. So we have an analog for these more general irrationals, and maybe I can slightly short-circuit the whole thing. So again, this is going to involve the number of solutions of a Diophantine equation involving
these bracket quadratics, and on this occasion what I want to do is consider something which I'll call r*(n): the number of solutions of the equation x₁⌊θx₁⌋ + x₂⌊θx₂⌋ − x₃⌊θx₃⌋ = n. And then the claim, well, now we've got three variables, not just two, so the claim is that r*(n) ≪ n^{1/2+ε}. And to some extent we're going to use a similar strategy to the one which I used earlier. So I'll write yᵢ for the floor of θxᵢ, for i = 1, 2, 3, so that means, of course, that 0 < θxᵢ − yᵢ < 1. And then, what I mean is, sorry, I should be more careful about this, if I consider (θx₁ − y₁)² + (θx₂ − y₂)² − (θx₃ − y₃)², then that's between minus one and two, because each of the squares is between zero and one; so there's just a bounded number of choices here. And I can expand that whole expression. So what I find is θ² times (x₁² + x₂² − x₃²), minus 2θ times (x₁y₁ + x₂y₂ − x₃y₃), plus y₁² + y₂² − y₃². So that whole expression is between minus one and two. But now it's not as convenient any more, right? θ has no special significance, we don't know it lies in a number field, so it's rather hard to think what to do in that situation. So what we can do is consider the values of x₁² + x₂² − x₃², maybe I'll call that ℓ, and x₁y₁ + x₂y₂ − x₃y₃, well, that's actually n, that's the fixed object, and y₁² + y₂² − y₃², and I'll call that m. So what do we know about ℓ, n and m? We know that ℓθ² − 2θn + m is this expression which you see at the top of the screen; it's essentially fixed, up to
a bounded number of possible values. We also know that ℓ is kind of close to m, if you think about it. So, well, maybe I'll say it the other way: m is rather close to θ²ℓ. So y₁ is supposed to be approximately θx₁, so m is roughly θ²ℓ, but there's an error, and if you think about how the error works, the difference is maybe as big as a constant times the square root of n: m is θ²ℓ plus O(√n). And if I fix m, then, since n is fixed, I know basically what ℓ is: ℓ is essentially fixed, up to O(1) choices. So what that means is that there are O(√n) choices for the pair ℓ, m, given our fixed choice of n right at the beginning. And what we have to do is solve these three equations now for the xᵢ's and yᵢ's, and thanks to the work of Pall and Jones on the theory of indefinite ternary quadratic forms, these indefinite quadratic forms in three variables representing fixed integers, there are O((|ℓ| + |m| + n)^ε) choices for the xᵢ's and yᵢ's. And again, a similar observation applies concerning what happens with units in this situation as well: because the coordinates are bounded, there's an extra factor which involves the coordinate size for the xᵢ's and yᵢ's, but they're all bounded in terms of n. So you can see that what happens here is we don't get the same simple situation, it's not a problem about number fields, but we can understand this in terms of what happens with this sort of ternary quadratic form problem. Okay, and what we've shown there is that r*(n), the total number of choices, is indeed bounded by n^{1/2+ε}. And given that result, remember this is all related back to a Diophantine equation in four variables: for every fixed choice of that fourth variable, there are n^{1/2+ε} choices, and since √n is like X, we're getting X^{1+ε} choices for the remaining three variables. So that's why we've got an X to the two plus
epsilon bound here for the mean value. Okay, so what that means is that we have an analog of Hua's lemma both for quadratic irrational θ, which is a little simpler to describe, but also for general irrational θ as well. And the fact that we've saved almost everything we should in these mean values, we've saved a factor of X², means that this is good input for the methods of Vicky Neale and Ben, and so, with an extra variable, their machinery would give you a five-variable asymptotic formula. Okay, so I wanted to say, in the last, I think I have five minutes if I've reckoned correctly, a little bit more about where you can take this next. So: mean values and generalizations. And I would like to be able to say much more here than in fact I can. So there is at least something that I can say which I think is of some interest, something provable, and then the rest is more speculative, which I probably won't get to, but never mind. So let's consider again, for this discussion, the situation where θ² is in ℚ and θ is positive and irrational. And if you want to generalize what we did before, then one possible way that you can convert a problem about bracket polynomials into one about genuine polynomials is to consider special polynomials. So I'm going to consider two polynomials, φ_k(x, y) and ψ_k(x, y); these are going to be polynomials with rational coefficients, and they're going to be defined, this is my way of defining them, by writing the kth power of θx − y as φ_k(x, y) minus θ times ψ_k(x, y). Because θ is quadratic, this sort of thing makes sense. What I can also do is define the number s₀(k), and for the moment, I have a feeling I can probably improve this, but let's take this to be 2^k when k is at most four, and k(k + 1) when k is at least five. And then here's a theorem to indicate what kind of thing you can prove; there are some pretty clear generalizations of this, but this illustrates what's possible. So let me take f(x) to be
either φ_k(x, ⌊θx⌋) or ψ_k(x, ⌊θx⌋), with x a natural number. And then the claim is that if I look at the mean value of order s₀(k) of an exponential sum involving these bracket polynomials, then this saves X^k, very nearly, in the exponent. Okay, so how does this work, in a nutshell? This is just a sketch of the proof. So the mean value counts solutions of an equation, and that equation is closely connected with the expression you get when you look at kth powers of, so let's see, so here I want, as before, θxᵢ − yᵢ to be between 0 and 1, because I have in mind that yᵢ is going to be ⌊θxᵢ⌋. So when I look at this sort of difference of kth powers, I've got s copies of these, so this expression is in absolute value smaller than s. And the mean value which I have counts solutions of the sum from i = 1 to s, let's take one of the cases, let's take the case of ψ, the number of solutions of this equation. And you can see it's the same kind of game as we had before, because we know what happens when you look at these θxᵢ − yᵢ; that allows me to infer something about the conjugate equation, you might think of it as being a conjugate equation. So we have hardly any choices for what goes on with the conjugate equation: it's absolutely smaller than a constant times s, so just a bounded number of choices for what can happen on the right-hand side here. So that means that if we want to bound the number of solutions of the equation defined by the mean value, what we really need to do is bound the number of solutions, in the ring of integers of ℚ(θ), of the sum from i = 1 to s, and here this s is half of s₀(k), of (θxᵢ − yᵢ)^k minus its conjugate, equal to one of a bounded number of fixed values; and by an application of Hölder's inequality, it suffices to think about what happens when you represent zero. Now,
we know how to do this thanks to the recent work on Vinogradov's mean value theorem: we know how to do this in number fields. But that counts all of the solutions in the number field, and so what you need is a discrete restriction variant of this, because that nicely takes care of the feature of this problem that, instead of having order X² choices for xᵢ and yᵢ in a box of size X, we have just order X, the yᵢ's being closely linked to the xᵢ's. And when you make use of that discrete restriction variant, which you can find in the paper on nested efficient congruencing, it's part of the whole resolution of Vinogradov's mean value theorem, then you get precisely the estimate that you need to save the correct amount in this problem. So again, one is able to obtain this variant of Hua's lemma by reinterpreting everything in terms of the numbers of solutions of an actual polynomial equation; but now, thanks to the discrete restriction variant of Vinogradov's mean value theorem, you get the correct bounds for this. That can all be converted back into bounds for just the kth power equation, and hence for the special polynomials here. And it may have occurred to you to ask: are any of these polynomials in the slightest bit interesting? Well, you know, here's one example, just to give a flavor of what kind of polynomials you can attack. They are not arbitrary polynomials of this bracket type, but at least for some sort of vaguely interesting versions of these polynomials: this is a degree four polynomial, and you get the anticipated asymptotic formula in 17 variables for this polynomial from these sorts of ideas. Okay, I'm out of time, I'm well out of time, so I can stop there and take questions; I'm happy to answer questions. Thanks for your attention, everybody. Thanks very much, Trevor. I've figured out how to clap, but we'll try here.