Instead of systems of Diophantine equations, what if we restrict to just a single polynomial, say of bounded degree? Maybe that's an easier problem. And indeed it is. Certainly if we consider only degree one polynomials, so linear Diophantine equations, these we do know how to solve. Euclid and Diophantus both wrote extensively on this problem, and by the seventh century we had a complete method, an algorithm, for determining all solutions to linear Diophantine equations. That leads us then to the question for degree two. Hilbert, perhaps anticipating the negative answer to his 10th problem, made his 11th problem specifically about quadratic forms. He was really asking whether there's a method for determining which algebraic integers can be represented by a quadratic form over a given number field. If we take Q to be our number field, it specializes to the problem we're interested in. And this is actually a more subtle problem than you might expect. You're probably familiar with the Hasse-Minkowski theorem, which gives us a straightforward way to determine whether a given quadratic form represents a given number over the rationals. But this is really a question about integer solutions, and it wasn't fully resolved until 1972, by Siegel. The special case of sums of squares was resolved much earlier. OK, so we're good when p has degree one or degree two: there is an algorithm that can tell us the answer. What about degree three? Waring famously asked about the degree three case and many higher degree cases back in the 18th century, and as of today we have no idea whether there is an algorithm that can answer this question, even for the case of a single polynomial of degree three, even when it has just three variables. OK, well, so what do we know? What can we say about the degree three problem? Well, if we only have two variables, so we're given an integer and we want to know whether it can be represented as the sum of two cubes, this we know how to solve.
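Backing up to the degree-one case for a moment: solving a linear Diophantine equation a·x + b·y = c is exactly the extended Euclidean algorithm. A minimal sketch (function names are my own, just for illustration):

```python
def extgcd(a, b):
    """Extended Euclid: return (g, u, v) with u*a + v*b == g == gcd(a, b)."""
    if b == 0:
        return (a, 1, 0)
    g, u, v = extgcd(b, a % b)
    return (g, v, u - (a // b) * v)

def solve_linear(a, b, c):
    """One integer solution (x, y) to a*x + b*y = c, or None if none exists.
    All solutions are then (x + t*(b//g), y - t*(a//g)) for integer t."""
    g, u, v = extgcd(a, b)
    if c % g:
        return None           # solvable iff gcd(a, b) divides c
    return (u * (c // g), v * (c // g))
```

The divisibility test `c % g` is the complete solvability criterion; this is the sense in which degree one is fully solved.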
So let's first consider the question of which primes can be written as the sum of two cubes. And the answer is the prime two, and all primes of the form 3x² − 3x + 1. I'll show where that comes from in a moment. But I'll note that even in this case, where the answer seems fairly straightforward, there's still an unsolved problem: we expect that there are infinitely many primes of this form, but we don't know that. Just to see where this equation comes from: we can factor the polynomial x³ + y³ as (x + y)(x² − xy + y²). If it's representing a prime, one of these two factors has to be one and the other has to be the prime we're trying to represent. If the quadratic factor is one, then x and y both have to be equal to one, and our prime must be two. Otherwise the linear factor is one, so y = 1 − x, and our prime has to be of the form x² − x(1 − x) + (1 − x)² = 3x² − 3x + 1. So very straightforward. Just as a side note, there are cubic polynomials that we do know represent infinitely many primes. Heath-Brown showed in 2001 that there are infinitely many primes of the form x³ + 2y³. And in particular, this tells us there are infinitely many primes that are the sum of three cubes, because we could always change this to x³ + y³ + z³ and just take y equal to z. OK, just another brief digression. If we were to ask the question about cubes of rational numbers, the problem has a different answer. So for example, 13 is not the sum of two integer cubes, but it is the sum of two rational cubes. And in general, the question of which integers are sums of two rational cubes boils down to finding rational points on an elliptic curve, this particular elliptic curve. For each specific value of k, or n as I call it here, there's an elliptic curve we can write down, and rational points on this elliptic curve correspond to rational solutions.
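For reference, the standard Weierstrass model for the sum-of-two-rational-cubes curve (presumably the one on the slide, though the exact model shown there is my assumption) comes from the birational change of variables

```latex
x^3 + y^3 = n \quad\longleftrightarrow\quad Y^2 = X^3 - 432\,n^2,
\qquad X = \frac{12n}{x+y}, \quad Y = \frac{36n\,(x-y)}{x+y}.
```

For the example mentioned above, 13 = (7/3)³ + (2/3)³, since 343/27 + 8/27 = 351/27 = 13.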
And we know from the Mordell-Weil theorem that the rational points on an elliptic curve form a finitely generated abelian group, which looks like a torsion subgroup of order at most 16 plus some number of copies of Z. And there are well-known methods for finding the torsion points on elliptic curves over Q. Computing the Mordell-Weil group, we don't quite have a complete algorithm for that, although we can do it in many cases. But if we're willing to assume the Birch and Swinnerton-Dyer conjecture, the question of whether there exist rational solutions, supposing we've already ruled out torsion points, comes down to whether the L-function of the elliptic curve vanishes at its central point or not. And this is something we do know how to figure out. So under BSD, we know that primes that are congruent to 4, 7, or 8 mod 9 have positive analytic rank, and so there should be infinitely many solutions. And for primes that are 2 or 5 mod 9, the rank is 0, so other than torsion points there are no solutions. The case where p is 1 mod 9 is more complicated, but it's still fairly well understood. Just a footnote here: I've been assuming BSD, but in fact that assumption may no longer be necessary. I'll direct you to a recent preprint by Daniel Kriz, which has some very recent progress on this problem. OK, so back to integer solutions. If we want to represent an arbitrary integer k as a sum of two cubes, our first step is again to factor x³ + y³. So we know that k must be written as the product of two integers r and s, with x + y = r and x² − xy + y² = s. And if we plug the first equation into the second, we wind up with a quadratic equation in one variable. If we can solve this in integers, we can find a representation of k. So this comes down simply to the question of whether the discriminant of this quadratic is the square of an integer. And this gives us an algorithm, as follows.
We take k and we factor it. For every pair of factors r and s, we check whether the integer 12s − 3r² is a perfect square or not. And if it is, it gives us a solution x and y to our equation x³ + y³ = k. So for example, if k were 1729, the number of Hardy's taxicab, we could, as Ramanujan did, determine that there are exactly two ways to represent k as a sum of two cubes, corresponding to two factorizations of 1729. Now, before I get to the main topic of this talk, which is sums of three cubes, I want to also consider the case of sums of more than three cubes, which turns out to be easier. There's a straightforward polynomial identity that immediately tells us that every integer can be represented as a sum of five cubes in infinitely many ways. This identity can be turned into a parameterization: given k, there's a polynomial we can write down with an integer parameter n, and for every value of n we plug in, we get a representation of k as a sum of five cubes. There's a more complicated series of identities that can be used to represent k as a sum of four cubes, at least when k is not congruent to plus or minus four mod nine. And it's conjectured that, in general, there should always be infinitely many ways to represent any integer k as a sum of four cubes, but this is still open when k is plus or minus four mod nine. OK, so that leads us to the main topic of our discussion today, which is sums of three cubes. And the first thing to note is that not every integer is the sum of three cubes; you can see this just by working modulo nine. The cubes mod nine are zero and plus or minus one, and if you add up three of those, there's just no way you're ever going to get four or minus four. So this rules out all integers k that are plus or minus four mod nine, and there are therefore infinitely many values of k that are not represented as sums of three cubes.
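Before moving on, the factor-and-check algorithm for two cubes described above can be sketched in a few lines (a deliberately slow trial-division version, just for illustration; a real implementation would factor k properly):

```python
from math import isqrt

def sums_of_two_cubes(k):
    """All pairs x <= y with x**3 + y**3 == k (for k > 0), by writing
    k = r*s with r = x + y and s = x^2 - x*y + y^2, then testing whether
    the discriminant 12*s - 3*r^2 of the resulting quadratic is a square."""
    sols = set()
    for r in range(1, k + 1):          # r = x + y runs over positive divisors of k
        if k % r:
            continue
        s = k // r
        disc = 12 * s - 3 * r * r
        if disc < 0:
            continue
        sq = isqrt(disc)
        if sq * sq != disc:            # discriminant must be a perfect square
            continue
        for t in (3 * r + sq, 3 * r - sq):
            if t % 6 == 0:             # the root x = t/6 must be an integer
                x = t // 6
                sols.add((min(x, r - x), max(x, r - x)))
    return sorted(sols)
```

Running this on 1729 recovers Ramanujan's two representations, 1³ + 12³ and 9³ + 10³, coming from the factorizations 1729 = 13 · 133 = 19 · 91.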
And for small values of k, say k equal to zero, one, or two, we can represent k as a sum of three cubes in infinitely many ways; there are explicit parameterizations we can write down. More generally, if k is of the form m³ or 2m³, these parameterizations again yield infinitely many solutions. So for the rest of this talk we're going to assume k is not plus or minus four mod nine and not of the form m³ or 2m³; that's the interesting case of the problem. I'll just note that these parameterizations are not exhaustive: there are representations of one and two as sums of three cubes that don't come from these parameterizations. And I'll also note that the rational question is easier; it was known as early as 1825 that every integer can be represented as a sum of three rational cubes. I should also note we can restrict our attention to positive integers: we're ruling out zero, and if k were negative, we could just replace x, y, and z by their negatives and get a solution for the positive value of k. So from now on, k is a positive integer that's not plus or minus four mod nine and not of the form m³ or 2m³. Much of the work in the 20th century on this problem of representing integers as sums of three cubes was sparked by what was originally a casual side remark in a paper that Mordell wrote, where he noted that there are two easy ways to represent three as a sum of three cubes: 1³ + 1³ + 1³ and 4³ + 4³ + (−5)³. But then he said he didn't know anything about the existence of any other integer solutions, and that it must be very difficult indeed to find out anything about any other solutions. Now, when a mathematician of Mordell's stature throws down the gauntlet by saying how difficult indeed it must be to find solutions, this is great motivation for younger mathematicians striving to make a name for themselves.
And so many pursued Mordell's question, trying to find a representation of three as a sum of three cubes. In fact, there is a series of computations I'll talk about on the next slide, work over a 65-year period, searching for an answer to Mordell's question. And up until last year, no solutions were found. However, solutions for lots of other interesting values of k were found. So on this slide I've tried to summarize progress on this problem. I won't go through every line, and I apologize to anyone in the audience whose name should be on this slide but isn't; please drop me a line, send me an email, if I missed something here. Almost all of these, past the first two parameterizations, are computer searches that were run looking for solutions where x, y, and z are bounded, in some box, say with absolute value at most N. The very first search was done just two years after Mordell asked his question; it searched N up to 3,200 and found solutions for quite a few k below 100. Later searches extended the bound on N until, by the end of the 20th century, the bounds on N were getting well into 10 to the ninth or further, and there were only three integers k below 100 where the question was still open: we didn't know whether there was a representation of k as a sum of three cubes for k equal to 33, 42, and 74. So that's where things stood at the start of the 21st century. Now, to add further motivation beyond that given by Mordell: Bjorn Poonen, in a wonderful article he wrote for the AMS Notices on undecidability in number theory, starts off with the following example. He notes that the equation x³ + y³ + z³ = 29 has some easily discoverable solutions, take (3, 1, 1) for instance. But if you just change k from 29 to 30, the problem is suddenly a lot harder. There is a solution, but it was hard to find.
It took a lot of work and wasn't known until 1999. Now, 31 and 32 are both congruent to plus or minus four mod nine, so we know there are no solutions there. So the next k, the least k that was unresolved at the time Bjorn wrote this article, was 33. And Bjorn asked: are there any solutions for 33? This motivated another intense round of computer searches, people searching for solutions for 33 and also checking for solutions for three; no one had forgotten about Mordell's question. And the bound N on the absolute values of x, y, and z was pushed up to 10 to the 14th and then all the way up to 10 to the 15th. In that last search, Sander Huisman was able to find a solution for k equals 74, but not 33. The solution for 33 wasn't found until last spring, by Andy Booker, who was finally able to answer Poonen's challenge with a representation of 33 as a sum of three cubes. And this left 42 as the only open k below 100. But I'll note that while this was an exciting result, it didn't actually resolve Mordell's question: after 65 years of searching, nobody had been able to find another representation of three as a sum of three cubes, and Andy did check for solutions for three all the way up to the bound 10 to the 16th. Okay, now, as I expect many of you are aware, there's been a lot of popular interest in this problem; that's one of the things that makes it fun to work on. Much of this was stimulated by the wonderful Numberphile videos produced by Brady Haran, who has tracked much of the recent progress on this problem. And with Andy's solution for 33, 42 became the new 33. This even earned Andy an article in the magazine Newsweek, a major periodical here in the US. I didn't know this at the time, but I found out about it later on the internet: in that article, Andy actually has the audacity to mention that he was now working with me on trying to find an answer for 42, the last unsolved case below 100.
And I'll note that this was a very bold statement for Andy to make at the time, because as we'll see, he had very good reasons to believe that this should be very hard. He had already spent several CPU-years of computer time trying to find a solution, and he had proved there were none up to 10 to the 16th. And he knew that it should take approximately 6,728 times as much work as he'd already done to find a solution. So that's tens of thousands of CPU-years. He really had no reason to think we were going to be able to solve this problem. But, you know, spoiler alert: we did. As you can see from my t-shirt, we did find a solution for 42. So all is forgiven. Okay. So why is 42 of particular interest? Well, for science fiction fans, the number 42 plays an important role in the novel The Hitchhiker's Guide to the Galaxy by Douglas Adams. In the book, the main character, Arthur Dent, finds out pretty early on why the Earth was created. The creation story for the Earth involves an alien race that, millions of years ago, created a sentient computer called Deep Thought. And the first thing they asked their computer when they turned it on was to tell them the answer to life, the universe, and everything. It took Deep Thought a while to answer their question, seven and a half million years in fact, but it was then able to report the answer to the descendants of the scientists that originally asked the question, and it proudly announced that the answer was 42. Now, the scientists were not particularly satisfied with this answer. They really wanted to know the question, the ultimate question, whose answer is 42. Deep Thought told them that computing that question was beyond its capacity, but it could tell them how to build an even bigger computer that could answer it.
And this led to the creation of the planet Earth, which was really a huge supercomputer, with sysadmins consisting of mice; the humans are just sort of stray vermin populating the surface of the supercomputer. And its objective was to run a ten-million-year program to compute the ultimate question whose answer is 42. Unfortunately, very early on in the novel, Earth is destroyed by the Vogons to make way for an interstellar highway, so the ultimate question was never computed. Okay, so now that we're very motivated to find a representation of 42 as a sum of three cubes, let's talk a bit about how one might go about doing this. Well, like everyone else for the last 65 years, we're gonna search in a box, and our goal is to make the box as big as we possibly can. So you might ask, for a given bound n on the size of our box, bounding the absolute values of x, y, and z, how long does it take to search? Well, if you do this naively, really naively, and just try every combination of x, y, and z, this is gonna take on the order of n cubed arithmetic operations. Now, it only takes a moment's thought to realize that that's not your best bet. You could instead just plug in x and y, compute x cubed plus y cubed minus k, and then ask whether that's the cube of an integer; there are very fast algorithms for doing that. That gets you down to an O(n squared) algorithm. But if you think about it for another five minutes, you realize that, wait a second, we just saw a few slides ago a way to solve the problem for sums of two cubes. And so if we just plug in one value, we're left with a question that can be phrased as: can this integer be represented as a sum of two cubes? And if we use fast probabilistic methods for factoring integers, we know we can answer that in sub-exponential time, sub-exponential in the logarithm of the number, and that means it's negligible compared to our bound n.
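As a quick illustration, the O(n squared) version just mentioned, plug in x and y and test whether the remainder is a cube, might look like this (a sketch only; the float-seeded cube-root helper is fine for small inputs but a real implementation would use an exact integer method):

```python
def icbrt(m):
    """Integer cube root of m, sign-preserving: the r closest to m^(1/3)
    with r**3 <= m for m >= 0.  Float seed, then exact adjustment."""
    if m < 0:
        return -icbrt(-m)
    r = round(m ** (1 / 3))
    while r ** 3 > m:
        r -= 1
    while (r + 1) ** 3 <= m:
        r += 1
    return r

def search_box(k, n):
    """All (x, y, z), x <= y, with |x|, |y|, |z| <= n and x^3+y^3+z^3 == k."""
    sols = []
    for x in range(-n, n + 1):
        for y in range(x, n + 1):
            w = k - x ** 3 - y ** 3
            z = icbrt(w)
            if z ** 3 == w and abs(z) <= n:
                sols.append((x, y, z))
    return sols
```

The quasi-linear refinement replaces the inner cube test with the sum-of-two-cubes subroutine, so only one variable is enumerated.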
So this already gives us a quasi-linear time algorithm: just plug in one value and run the sum-of-two-cubes algorithm. Now, in fact, this is still nowhere near fast enough to get us even up to the n of 10 to the 16th that had already been checked by Andy Booker. So we need to do something more efficient. We're gonna instead do as Andy did and follow an approach suggested by Heath-Brown and his co-authors, which seeks solutions for a particular fixed value of k. In contrast, many of the earlier approaches, in particular perhaps the most efficient one, proposed by Noam Elkies, look for small values of x, y, and z such that x cubed plus y cubed plus z cubed is small. And this allows you to search for solutions for many values of k simultaneously. So if you wanna look for all the solutions for k up to some reasonable bound, 8,000 or 10,000 say, that's the way to go. But at this point we're focused on one very specific k, or a handful of k's, and for that the Heath-Brown approach is more efficient. And with suitable optimizations, I'll say a bit more about these in a moment, one obtains a heuristic complexity that's not just quasi-linear but on the order of n times log log n arithmetic operations. And these arithmetic operations involve integers that will fit either in a 64-bit word or in a 128-bit register, so these operations are very fast, on the order of a few clock cycles on a modern CPU. All right, so let's start to drill down into the algorithm a bit. Our setup is as follows. We're looking to represent k in the form x cubed plus y cubed plus z cubed. And of course, if we had a solution, we could always permute x, y, and z, so let's fix an order by sorting them by absolute value. And since we already know how to solve the sum-of-two-cubes problem, we may as well assume x, y, and z have distinct absolute values; if any of them coincided, we could reduce to an easier problem.
And it will also be convenient to assume that they're all bigger than the square root of k. k is three or 42, so that's not a big assumption, and there are other methods that handle solutions with one of x, y, or z very small. We're also gonna make a simplifying assumption: we're gonna assume that k is plus or minus three mod nine. That covers all of the really difficult k's, including three, 33, and 42. Okay, so our strategy then is similar to what we did in the sum-of-two-cubes case. We're gonna take k minus z cubed, which is x cubed plus y cubed, and we're gonna factor it. We define the integer d to be the absolute value of x plus y. Given our ordering, and our assumption that x, y, and z are not too small, x and y have to differ in sign, so x plus y could be positive or negative, but its absolute value is a positive integer d. And we know that k minus z cubed has to be zero mod d, because x plus y divides x cubed plus y cubed. So z is a cube root of k mod d. We can then plug into the quadratic equation that arises from this factorization, and this tells us that for a given value of d and a given value of z, we're gonna get a solution to this equation precisely when this expression is a perfect square; this corresponds to solving the quadratic equation, whose discriminant needs to be a square. And if it is, we can plug in k, z, and d and get out the values of x and y. So our problem reduces to: for a given value of d and a given value of z, determine when this expression is a perfect square. One can show that the absolute value of z has to be bigger than d by a factor of close to four. It's also convenient to note that d cannot be divisible by three, and that the sign of z is determined: once we know d mod three and k mod nine, we know the sign of z.
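Concretely, the d, z strategy just described can be sketched as follows. This is an unoptimized illustration under my own simplifications: cube roots mod d are found by brute force, both signs of x + y are tried rather than using the sign determination, and none of the pruning or sieving discussed in the talk is included:

```python
from math import isqrt

def three_cubes_by_d(k, n):
    """Search for x^3 + y^3 + z^3 == k with |z| <= n by iterating over
    d = |x + y| and over z with z^3 ≡ k (mod d), then testing whether the
    discriminant 12*s - 3*d^2, where s = (k - z^3)/(x + y), is a square."""
    sols = set()
    for d in range(1, n + 1):
        if d % 3 == 0:                       # d is never divisible by 3
            continue
        for z0 in range(d):
            if pow(z0, 3, d) != k % d:       # z must be a cube root of k mod d
                continue
            start = z0 - ((z0 + n) // d) * d  # least z >= -n with z ≡ z0 (mod d)
            for z in range(start, n + 1, d):
                s3 = k - z ** 3              # equals (x + y)(x^2 - x*y + y^2)
                for e in (d, -d):            # e = x + y; try both signs
                    s = s3 // e              # exact: d divides k - z^3
                    disc = 12 * s - 3 * d * d
                    if disc < 0:
                        continue
                    sq = isqrt(disc)
                    if sq * sq != disc:
                        continue
                    for t in (3 * e + sq, 3 * e - sq):
                        if t % 6 == 0:       # root x = t/6 must be an integer
                            x = t // 6
                            sols.add((x, y := e - x, z))
    return sols
```

For tiny inputs this already behaves as advertised: every triple it emits is a genuine solution, and it only bounds |z|, not |x| or |y|.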
So we don't need to worry too much about this absolute value sign. Okay, so this yields the following strategy. We have some upper bound n, and we're gonna enumerate all the integers d up to slightly below our upper bound, up to alpha n; if we take d up to alpha n, we know we're gonna hit every z with absolute value bounded by n. And for each pair d and z, you just need to check whether this expression here, which I've labeled (1), is a perfect square; if it is, there is a corresponding solution. In fact, this is guaranteed to find all solutions, not just those in the box where the maximum value is bounded by n. We're really only bounding the minimum, because z here is the smallest in absolute value of the three integers x, y, and z. So this search method will yield solutions where x and y have absolute values much larger than n; it's only z that needs to be small. Now I just wanna bring elliptic curves back into the picture. Once we've fixed k and defined d as above to be the absolute value of x plus y, there's an integer b sub d corresponding to the expression in equation (1) on the previous slide. Solutions to equation (1) are precisely the integral points, not counting the point at infinity, on the elliptic curve y squared equals x cubed plus b sub d. So this is an elliptic curve; it even has complex multiplication. And for small values of d, it's actually feasible to determine all the integral points on this elliptic curve. We want d to be small because that makes b sub d small, which makes the discriminant and the conductor of this elliptic curve reasonably small. The nice thing about this is that if we can determine all the integral points on this elliptic curve, we rule out all possibilities for z, not just z up to some bound. Unfortunately, this is really only feasible if d is quite small, say up to about 40.
One can go a bit further using the fact that this elliptic curve is actually 3-isogenous to another elliptic curve that has slightly smaller coefficients, where it's often easier to find the integral points, and one can rule out all d below 100 using this method. If you're willing to assume the generalized Riemann hypothesis, you can go quite a bit further, say up to 20,000 or so. But this is still far, far below 10 to the 16th, and we know we're gonna need to go a lot further than that. I mention this just to note that the problem of finding integral representations of k as a sum of three cubes can be reduced to a question about integral points on a one-parameter family of elliptic curves. But that reduction doesn't make our lives any easier. Finding integral points on elliptic curves is hard, and doing it for, say, 10 to the 16th different elliptic curves, because we get a new elliptic curve for each d, well, sure, they're all twists of each other, but that doesn't make it any easier to find the integral points on them. So this isn't really a feasible approach, but it's a mathematically interesting connection. Okay, so I claimed a quasi-linear complexity bound of the form n times log log n. That's not true if you aren't a little bit clever. There are two potential obstacles to achieving that complexity bound. The first is that in order to compute cube roots of k modulo d, we need to know the factorization of d. And while we can factor d in sub-exponential time, that would still take a lot longer than the log d, or log n, time we can afford. But in fact, we don't need to factor d at all, because we're the ones enumerating d; it's not just handed to us, and we're gonna check all possible d's up to some bound. So we may as well enumerate them along with their factorizations. In other words, we're just gonna enumerate primes and prime powers and represent d as a product of prime powers.
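A minimal sketch of that idea, building every d up to a bound together with its factorization so that nothing ever needs to be factored (restricted to a fixed list of primes for brevity; the real enumeration would run over all primes up to the bound):

```python
def enumerate_with_factorization(N, primes):
    """Return a list of (d, factorization) for every d <= N whose prime
    factors all lie in `primes` (assumed sorted ascending).  Each d is built
    from a smaller one by multiplying in a prime, so no factoring occurs."""
    results = [(1, {})]
    stack = [(1, {}, 0)]
    while stack:
        d, fac, i = stack.pop()
        for j in range(i, len(primes)):
            p = primes[j]
            if d * p > N:
                break                        # primes sorted, larger ones overflow too
            fac2 = dict(fac)
            fac2[p] = fac2.get(p, 0) + 1
            results.append((d * p, fac2))
            stack.append((d * p, fac2, j))   # j, not j+1: allow prime powers
    return results
```

Requiring the next prime index to be at least the current one guarantees each d is generated exactly once, with its exponent vector alongside.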
And, as we'll see, we're gonna compute cube roots of k modulo primes and prime powers and use the Chinese remainder theorem to get them modulo d as well. So that takes care of that potential obstacle. The other potential obstacle is that there are actually on the order of n log n pairs d and z that we would need to consider: when d is small, we have a very long arithmetic progression of z's to check, all the possible z's up to the bound n. To get around this obstacle, we sieve these arithmetic progressions; I'll explain and give an example of this on the next slide. And it's only the small values of d that are a problem: for almost all the d's below n, certainly all the d's greater than n to the three-fourths, there aren't that many z's to consider. By sieving the arithmetic progressions that are long, meaning those corresponding to small values of d, we can dramatically reduce the number of z's we need to consider, and it means that the total number of pairs d, z is linear in n. Now, I should note, we don't literally sieve these arithmetic progressions; we don't write out the entire arithmetic progression and then eliminate multiples of different primes. We instead use a CRT lifting process, which I'll explain on the next slide, that allows us to essentially define a bunch of new arithmetic progressions that are shorter because they have a larger modulus, and we only consider each arithmetic progression up to the given bound. Okay, and I should say this is still all heuristic: we can't actually prove that the number of z's we need to check is O(n), but in practice it is, and the algorithm checks them all in any case; even if the heuristic failed, the worst case is just that our algorithm takes a little longer than we expect. Okay, so now a few comments on computing cube roots modulo integers.
So the first thing we need to know is how to compute cube roots modulo primes. If p is a prime that's congruent to two mod three, cubing is a one-to-one operation, and that makes it very easy to compute cube roots: we can just exponentiate mod p. If we use standard binary exponentiation, sometimes known as the Russian peasant method, square and multiply, we can do this using O(log p) multiplications mod p. So this is very quick. When p is one mod three, cubing is of course three-to-one, because the multiplicative group mod p is cyclic of order a multiple of three. The trick here is to essentially reduce the problem to computing a discrete logarithm in the 3-Sylow subgroup of the multiplicative group. So if we write p as three to the w times m plus one, with m prime to three, the multiplicative group has order three to the w times m. If we're looking for a cube root of k and we raise k to the mth power, we're left with an element b of the 3-Sylow subgroup of the multiplicative group, which we know is cyclic of three-power order. And k has a cube root precisely when b isn't a generator of the 3-Sylow subgroup. So if we just keep cubing b until we get one, we can compute its order; it's gonna be some power of three, and as long as that power is smaller than the size of the 3-Sylow subgroup, we know a cube root exists, and it's not hard to find one. Our strategy is to pick a random integer x and use it to get a generator of the 3-Sylow subgroup of the multiplicative group of our finite field F_p, and then compute a discrete logarithm in that 3-Sylow subgroup. Now, you might be scared by the words discrete logarithm. We know that the discrete logarithm problem is hard. At least we hope it is.
It had better be, if our financial transactions based on elliptic curve cryptography are to remain secure. But the discrete logarithm problem is only hard in groups of large prime order; it's not hard in groups of smooth order, and in particular it's not hard in groups of three-power order. So we can do this very quickly, using, I should emphasize, a randomized algorithm. We don't know how to do this quickly deterministically, any more than we know how to compute square roots mod p efficiently with a deterministic algorithm, but we're happy to use a randomized algorithm. We're gonna be doing this for lots and lots of different primes p, and we know that on average our running time is going to be quite good, not much slower than the case where p is two mod three. Once we've exponentiated by m and found a cube root in the 3-Sylow subgroup, it's easy to paste things together to get a cube root of k mod p. And of course there's not just one cube root, there are three of them; the other two are obtained by multiplying by a cube root of unity, and we know how to compute that inside the multiplicative group of our finite field. So this gives us cube roots mod primes. We can then use Hensel lifting to get cube roots mod prime powers, and the Chinese remainder theorem to get cube roots modulo products of prime powers. Okay, so ultimately, in the algorithm that we're running, we essentially end up spending all of our time computing cube roots mod p; that's where the vast majority of the time in the algorithm is spent, provided we can be very efficient with our Chinese remaindering. A lot of the actual coding that went into implementing this algorithm was making this CRT process very fast. Okay, so we use the Chinese remainder theorem to compute cube roots modulo composite numbers, but we also use it for sieving, as I mentioned on one of the previous slides.
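A toy version of this pipeline: the easy exponentiation for p congruent to two mod three, with a brute-force loop standing in for the randomized 3-Sylow discrete-log computation in the p congruent to one mod three case, plus Hensel lifting to prime powers:

```python
def cube_root_mod_p(k, p):
    """Return a cube root of k mod the prime p, or None if none exists.
    For p ≡ 2 (mod 3), cubing is a bijection inverted by one exponentiation;
    for p ≡ 1 (mod 3) we brute-force here, a placeholder for the fast
    randomized 3-Sylow discrete-log method described in the talk."""
    k %= p
    if p == 3 or k == 0:
        return k                              # x^3 ≡ x (mod 3); 0^3 = 0
    if p % 3 == 2:
        return pow(k, (2 * p - 1) // 3, p)    # 3 * (2p-1)/3 ≡ 1 (mod p-1)
    for r in range(p):                        # slow placeholder, p ≡ 1 (mod 3)
        if pow(r, 3, p) == k:
            return r
    return None                               # k is not a cube mod p

def hensel_lift(r, k, p, e):
    """Lift r with r^3 ≡ k (mod p) to a cube root of k mod p^e by Newton
    iteration (requires p not dividing 3*r, i.e. p != 3 and p does not
    divide k)."""
    q = p
    while q < p ** e:
        q *= q                                # quadratic convergence
        r = (r - (r ** 3 - k) * pow(3 * r * r, -1, q)) % q
    return r % p ** e
```

With cube roots in hand for each prime power in d's factorization, pasting them together modulo d is a standard CRT step.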
So the idea here is that we have this expression (1), an integer that needs to be a square if we're gonna have any solutions for a given pair d and z. And if it's going to be a square integer, it's also gonna be a square modulo every prime, so we can rule out residue classes of z that can't possibly give us a square modulo some auxiliary prime. In this example, let's take k equal to 33 and d equal to five. That gives us a very long arithmetic progression: there are lots of z's we need to check, all the z's that are cube roots of k mod five. But if we look at this equation, we get a constraint on z mod two, and if we look at this equation, we get a constraint mod seven; in fact, there's only one residue class that works mod seven, z has to be zero mod seven, if we're gonna have any hope of making this a square. And we can look at other auxiliary primes, and you can see that this dramatically reduces the number of z's we need to check. When we started out just with our constraint mod five, if we wanna check all the z's up to 10 to the 16th, we have more than 10 to the 15th z's to check. But as we add more and more auxiliary constraints, each one produces a new set of arithmetic progressions with a larger modulus, and depending on how many residue classes survive mod our auxiliary prime, that multiplies the number of arithmetic progressions we need to consider. But at the end of the day, the total number of z's we need to check is reduced, in this example by almost a factor of 1,000. And we can also use cubic reciprocity: there are only 14 residue classes modulo 891 that are compatible with cubic reciprocity, and this reduces the number of z's we need to check by another factor of 63.
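The CRT lifting of allowed residue classes works like this (a generic sketch; the classes in the usage example are made up, not the actual constraints from the slide):

```python
from math import gcd

def crt_lift(classes1, m1, classes2, m2):
    """Combine allowed residue classes mod m1 with allowed classes mod m2
    (m1, m2 coprime) into the allowed residue classes mod m1*m2."""
    assert gcd(m1, m2) == 1
    inv1 = pow(m1, -1, m2)
    out = []
    for a in classes1:
        for b in classes2:
            # the unique x mod m1*m2 with x ≡ a (mod m1) and x ≡ b (mod m2)
            out.append((a + m1 * (((b - a) * inv1) % m2)) % (m1 * m2))
    return sorted(out)
```

For instance, if only z congruent to 1 or 4 mod 5 and z congruent to 0 mod 7 survive, then just 2 of the 35 classes mod 35 remain, so the progressions get longer in modulus and far fewer z's need testing.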
And so in fact, at the end of the day, rather than needing to check something like 10^15 values of z for d = 5, we only need to check something like 5 × 10^9, and that can be done in a minute. So this is almost a factor of a million improvement. This CRT sieving deals with all the small values of d, so that most of the time is spent on the larger values of d. Okay, I don't think I want to get too much into the nitty-gritty details of how one implements these algorithms, but I'm sure it'll be familiar to many in the audience: a lot of hand-optimized C code, using Intel intrinsics to access particular assembly-language instructions. Modular arithmetic is crucial for making this algorithm fast, and one thing we want to do is avoid doing any more modular inversions modulo a prime than we need to. There's a standard trick due to Peter Montgomery for batching inversions that allows us to essentially reduce each inversion to something like three multiplications: if we have a thousand inversions to do, we can do that with 3,000 multiplications and one inversion, rather than a thousand inversions. There's another method for fast modular arithmetic due to Barrett, which we also use, because Montgomery representation is great when you're doing lots of operations modulo the same prime, but Barrett representation is better when you're doing lots of operations modulo different primes, as we are when we're CRT-ing things together. I guess the only other thing I'll highlight on this slide is the primesieve library, which you can find on GitHub. You know, I'm sort of an old-school computational number theorist. I tend to like to write my own code. I mean, I'll use other tools when I can, but if it's not fast enough, I'm almost always inclined to get out my C compiler and see if I can't make it faster. This library is an exception. It is, as we say in Boston, wicked fast.
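Montgomery's batching trick mentioned above works by multiplying all the inputs together, inverting the single product, and then peeling off the individual inverses one multiplication at a time. A self-contained sketch (using Fermat's little theorem for the one inversion; a production version would combine this with Montgomery or Barrett reduction):

```c
#include <stddef.h>
#include <stdint.h>

static uint64_t mul_mod(uint64_t a, uint64_t b, uint64_t p) {
    return (uint64_t)((__uint128_t)a * b % p);
}

static uint64_t pow_mod(uint64_t b, uint64_t e, uint64_t p) {
    uint64_t r = 1 % p;
    b %= p;
    while (e) {
        if (e & 1) r = mul_mod(r, b, p);
        b = mul_mod(b, b, p);
        e >>= 1;
    }
    return r;
}

/* Montgomery's batch-inversion trick: invert n nonzero residues mod a
   prime p with ONE modular inversion plus about 3(n-1) multiplications.
   inv[] doubles as scratch for the prefix products a[0]*...*a[i]. */
void batch_invert(const uint64_t *a, uint64_t *inv, size_t n, uint64_t p) {
    inv[0] = a[0] % p;
    for (size_t i = 1; i < n; i++)
        inv[i] = mul_mod(inv[i - 1], a[i], p);
    uint64_t t = pow_mod(inv[n - 1], p - 2, p);      /* the one inversion */
    for (size_t i = n - 1; i > 0; i--) {
        uint64_t ai_inv = mul_mod(t, inv[i - 1], p); /* = a[i]^{-1} */
        t = mul_mod(t, a[i] % p, p);  /* now inverts the shorter prefix */
        inv[i] = ai_inv;
    }
    inv[0] = t;                       /* = a[0]^{-1} */
}
```

The two multiplications in the backward loop plus one in the forward loop account for the "three multiplications per inversion" figure quoted in the talk.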
I highly recommend it. Okay, anything else I want to note here? Maybe the other thing worth noting is that the search process is easy to parallelize. We can partition the d's, and since we're enumerating them in order according to their prime factorizations, it's convenient to partition them according to their largest prime factor. And since there are trillions and trillions of d's under consideration, it's very easy to chop the work up into lots of pieces. All right, so now I want to come back to some mathematics and explain the number 6,728 that I mentioned earlier — the number that Andy should have been well aware of, and that should have given him pause when making any predictions about our ability to find a solution for 42. So there's a conjecture of Heath-Brown that, first off, predicts that for any integer k that's not ±4 mod 9, there should be infinitely many representations of k as a sum of three cubes. But those representations are very sparsely distributed; they're not so easy to find. And how sparsely are they distributed? Well, Heath-Brown's conjecture is that you can determine this by taking a product of local densities of solutions. This is an interesting conjecture to make, because in the same paper he also proves that you really shouldn't expect this to work: the corresponding cubic surface actually fails weak approximation, so a priori you don't have any reason to believe that multiplying local densities is going to give you the right global density. Nevertheless, he makes that conjecture and tells you exactly how to compute these local densities. They're not hard to compute: there's an explicit formula for each prime p, the prime 3 is special, and then there's a factor for the Archimedean place at infinity.
And the conjecture is that if you want to know the number of representations with the absolute values of x, y, and z in some range, say between N₁ and N₂, it's given by this expression, this infinite product — but you only need to know the σ_p's for primes p up to some bound to get a very good estimate of the infinite product. And this tells us that if you take k = 3, Heath-Brown concluded that you really should expect each new solution to show up only after you increase the range of your interval by a factor of 10 million. At the time he wrote this article, searches hadn't even gone past 2^16, so they'd gotten nowhere near that far. So Heath-Brown's answer to why no one had found an answer to Mordell's question was simply: you haven't looked far enough; keep looking. Now, people did keep looking. They went well past 10^7, and then past 10^14, and didn't find a solution. So you can see here on this table I'm showing Heath-Brown's predictions, the expected number of solutions. This N₀ is the multiplicative factor of how much further you expect to have to go to find the next solution. And you can see for 3 it's a lot, something like 12 million. For 42, it's 6,728. And the conjecture is that the distribution of these solutions follows a sort of memoryless Poisson process. So the fact that you've searched up to 10^16 doesn't make it any more likely that you should expect to find another solution right around the corner. You should expect to have to go 6,728 times further, and in the case of 3, you should expect to have to go 12 million times further than 10^16 to find a solution.
Now in this table I'm also comparing the predictions from Heath-Brown's conjecture against data that Sander Huisman collected for k from 3 to 100 — in fact for k from 3 to 1,000 — with n up to 10^15. And you can see the numbers match pretty well; it's looking pretty good. Okay. So, the search for 42. We implemented and optimized this code and ran it on Charity Engine's crowdsourced compute grid of about 500,000 home PCs whose owners donated their compute time to projects like this. Each dot on this graph represents 50 cores; the picture got too dense when I tried to plot every PC. The purple dots are smooth values of d, d's with no large prime factors. The blue dots are values of d that have a large prime factor, a factor bigger than the square root of the bound. Our bound was 10^17 — we searched up to 10^17 — and this search took about 90 core-years. And lo and behold, we did not have to go 6,728 times further. In fact, we found a solution not that much further than Andy's initial search had gone: we only had to go about 12 times further. And here it is. We were also able to find a solution for 3. What's perhaps notable about the solution we found for 3 — again, a priori we should have expected to have to go a lot further than this; we got quite lucky — is that one nice thing about the algorithm we're using, the algorithm Andy came up with, is that it only bounds z. X and y are allowed to be a lot bigger, and they are in this example. Whereas Heath-Brown's conjecture assumes that the absolute values of x, y, and z are all bounded by the integer n. So that was one thing working in our favor. Now you might ask: what's next? Which k's are still open? We've solved all the k's below 100, but we have data on k's up to 1,000 — a lot of data — and I've highlighted in red the k's that are still open. The next one is 114.
And Heath-Brown's conjecture says we should expect to expend something like 26 million times as much effort as we've already expended in order to find a solution. That might be depressing. On the other hand, I'll note that we found a solution for 165 in our search, and 165 a priori should have been much more difficult than any of the other k's. This one didn't make headlines, but it's in some sense much more remarkable than 3 or 42 or 33. And I'll just end by noting that there is a way to improve this search strategy. Our experience with 3 — where there was such a big gap between the values of x and y and z, and also d is much smaller than z — led us to explore a search strategy where we bound d and z separately and allow the bound on z to be much larger than the bound on d. Because, especially with the CRT sieving, we can very efficiently check all the z's for a given value of d even when the bound on z is much larger. This leads to a new search strategy, and one can compute heuristically what the optimal ratio should be between the bound Z_max on the absolute value of z and the bound D_max on the absolute value of d — I've denoted them N and B here. And you find that in practice this ratio should be something like 50 or 60, maybe 100. With this search strategy — I'll just end on this slide — if you run the exact same search we did for 42 using the optimal ratio, you can find a solution in far less than 90 CPU-years. In fact, it takes only 0.7 CPU-years to find the same solution. So that's exciting on the one hand: it's something like a hundred times faster. On the other hand, we know we potentially have to do 26 million times as much work. So the jury's still out on whether we'll have any success in finding a solution for 114 or any of the other numbers, but we certainly will keep trying. I'll stop here. Thank you. Well, thanks very much, Drew. Are there any questions?