telling Magma, this is the 0 of Fp, the 0 of Fp, and the 1 of Fp; these aren't integers. And then here I'm just going to extract the four coefficients of f, because I want to build the matrix. But rather than building the whole matrix, I'm going to build just the parts of it that I actually need to run the algorithm. So I'm constructing the right-hand column and these two entries. And I'm secretly thinking of these entries as linear polynomials in k, because if I want to apply finite differences, if I'm thinking of k as a variable that's increasing, I should really be thinking of each entry as a linear polynomial in k. That'll be important in the next algorithm we look at. Even though we're instantiating k as it goes from 1 to p minus 1, it's helpful to switch perspective and think of this as a matrix over a polynomial ring, whose entries are linear polynomials in k, that we're then instantiating.

I'll leave it as an exercise to check that I'm doing the right thing here, that it matches what's on the screen; I'm not going to walk through each step. k goes from 1 to p minus 1, and in each step we're replacing v with v times M_k, but I've written out the multiplication explicitly. So, for example, the first entry of the new v involves this 2 times f0 times the second entry of v. I'm just taking advantage of the fact that I know exactly what the matrix looks like, so I can write down this linear recurrence using just five multiplications. And this a +:= b is applying the finite differences: it gives me the increase, updating the vector a that represents the matrix entries we're interested in, so that they hold the values for the next k.

At the very end, once we've computed the final vector, we want its third component, and we're supposed to multiply it by f0 raised to the power n, where n is p minus 1 over 2; that's what this expression is doing here. And because I want to count points, rather than returning the Hasse invariant I'm going to convert it to an integer and figure out which integer is actually congruent to this value mod p; we know p is big enough to determine it uniquely. Then I return p plus 1 minus the trace of Frobenius, so that I'm actually getting the number of points on E.

So let's go ahead and try this. As usual, we'll test it against a bunch of random elliptic curves and make sure it agrees with the answer Magma gives. We can run it on a bunch of examples, and you can see it's respectably fast: even when p is up into the millions, it takes about a second to run. That's mostly because I took the time to make this inner loop pretty tight. Any questions on this algorithm? I could have written a two-line version of it, but I wanted to give you more code examples, because I know that in the problem sessions you're often wondering how to do this, that, or the other thing; you're learning a language, and I wanted you to see some examples.

All right, so great: we have an algorithm to compute the Hasse invariant, and it runs in linear time. We already had a linear-time algorithm, and we already have a bunch of faster algorithms, so this isn't super exciting. But now let's see what we can do to make it better.
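Before moving on, here is a minimal Magma sketch of the two ingredients just described: the finite-difference update for an entry that is linear in k, and the conversion of a Hasse invariant into a point count. This is not the code from the lecture; the linear entry a(k) = c0 + c1*k and the value h below are made-up placeholders, and the real algorithm updates several vector entries at once using the actual matrix built from f.

```magma
p := 1000003;  F := GF(p);
// Hypothetical matrix entry that is linear in k: a(k) = c0 + c1*k.
c0 := F!5;  c1 := F!7;
a := c0 + c1;   // value of the entry at k = 1
d := c1;        // constant forward difference a(k+1) - a(k)
for k in [1..p-2] do
    // ... use a here as the entry of M_k in the vector update ...
    a +:= d;    // one addition advances the entry to its value at k+1
end for;
// Converting a Hasse invariant h (here a placeholder) into a point count:
h := F!123;
t := Integers()!h;                      // lift to an integer in [0, p-1]
if t gt p div 2 then t -:= p; end if;   // Hasse bound: |t| <= 2*sqrt(p) < p/2 for p > 16
numpoints := p + 1 - t;
```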
So the key idea to making the algorithm better is, as I mentioned, to think of the matrix not as a matrix in Fp^(3x3), but as a matrix in Fp[k]^(3x3), a 3-by-3 matrix whose entries are polynomials in k. We can multiply elements of that ring and get matrices whose entries are bigger polynomials in k. But before we do that, I want to drill down and suppose we just have one polynomial that we want to evaluate at a bunch of points.

So suppose we have a polynomial g, say of degree r, and here you should think of r as being potentially big; for us r is going to be something like the square root of p, so much bigger than 3 now. And suppose I want to evaluate it at s points. Well, I could just plug in those s values and evaluate g using any number of methods: I could use Horner's method, I could precompute powers, there are lots of different tricks. But no matter what I do, it's going to take time on the order of r times s times the cost of a multiplication in Fp. It's possible to do much better than this, and the way to do that is to use a product tree. This is called fast multipoint polynomial evaluation.

Rather than evaluating a polynomial by plugging in a value for x, we're going to take advantage of the fact that if we reduce our polynomial modulo the linear polynomial x minus x0, we get the evaluation of that polynomial at x0. And we want to apply this in bulk. So one approach, instead of plugging in x0 up to xs (I should have put a 0 here; I got my indexing off), is to reduce g modulo x minus each of the evaluation points. We could do that one by one, but that would be a bad idea. A better strategy is to multiply all these linear polynomials together in pairs, build up a product tree, and then reduce g modulo the polynomial at the top, then modulo each of the children, and so on all the way down to the leaves, until we get g mod x minus x_i in each leaf.

So, maybe just to use the board the one time I might: I noticed there was Hagoromo chalk, so I felt obliged to use it. Suppose I wanted to evaluate a polynomial at 1, 2, 3, and 4. You're supposed to think of these as separate linear polynomials, x minus 1, x minus 2, x minus 3, x minus 4; I maybe wrote them too close together. I'm not going to try to multiply them in my head, but imagine we actually multiply them, so that at the top I have the product of these four polynomials. Then I take my g, which can have whatever degree it wants, and I reduce it modulo this degree 4 polynomial, and I get something of degree at most 3. Then I take that polynomial of degree at most 3 and reduce it modulo this degree 2 polynomial and also modulo this degree 2 polynomial, and the results are going to have degree at most 1. Then I reduce each of those modulo the linear polynomial below it, and I get a degree 0 polynomial; that's exactly g of 1. And similarly over here I get g of 2, g of 3, and g of 4.

OK, so our idea is to apply this at scale. We're ultimately trying to evaluate our matrix, which we can think of as a matrix over a polynomial ring, at p minus 1 points.
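Here is a short Magma sketch of that product-tree evaluation, following the picture on the board. The function MultipointEval and the way the tree is stored are my own choices, not code shown in the lecture, and a serious implementation would be more careful about balancing the tree and reusing it across calls.

```magma
// Fast multipoint evaluation: return [g(a) : a in xs] using a product tree.
function MultipointEval(g, xs)
    R := Parent(g);  x := R.1;
    tree := [[x - a : a in xs]];                    // leaves: x - x_i
    while #tree[#tree] gt 1 do                      // build the tree upward,
        L := tree[#tree];                           // multiplying in pairs
        Append(~tree, [&*L[2*i-1..Min(2*i,#L)] : i in [1..Ceiling(#L/2)]]);
    end while;
    rems := [g mod tree[#tree][1]];                 // reduce g modulo the root
    for level := #tree-1 to 1 by -1 do              // push remainders down
        rems := [rems[Ceiling(i/2)] mod tree[level][i] : i in [1..#tree[level]]];
    end for;
    return [Coefficient(u, 0) : u in rems];         // each leaf holds g(x_i)
end function;

// Quick check against naive evaluation, mirroring the board example at 1, 2, 3, 4.
F := GF(10007);  R<x> := PolynomialRing(F);
g := x^7 + 3*x^2 + 5;
assert MultipointEval(g, [F!1, F!2, F!3, F!4]) eq [Evaluate(g, F!a) : a in [1..4]];
```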
But the matrix itself, our initial matrix, is just a matrix whose entries are linear polynomials. So what we should do instead is first multiply a bunch of these matrices together over the polynomial ring, in the matrix ring over the polynomial ring. And I realize this x here should be a k; I won't try to highlight it. So I'm thinking of M(k): I'm moving the k from the subscript into the parentheses so that you'll think of it as a matrix over a polynomial ring, living in Fp[k]^(3x3). And I'll just note that the product we want to compute, we can compute as a product of evaluations of this matrix over the polynomial ring at p minus 1 points. Just to make things work out better, it'll be convenient to shift the indices so that we're evaluating from 0 up to p minus 2 rather than from 1 to p minus 1, so we'll replace M(k) with M(k+1).

Then to compute this product, what we're going to do is basically group: we're going to compute the first on the order of square root of p terms of that product as a matrix with polynomial entries of degree on the order of square root of p. So this M'(k) is the product of these matrices, where k is still an indeterminate here; I'm not replacing k with anything, I'm just thinking of these as matrices over a polynomial ring and multiplying them all together. The result is a single 3-by-3 matrix whose entries are polynomials of degree r, which is roughly the square root of p. You might ask how long that takes; we'll analyze the complexity in a moment. But first I should note, we won't do that multiplication by multiplying the matrices in one by one. We'll again use a product tree, and in that product tree the entries are going to be getting bigger at each level as we go: we'll start out with something like r matrices, then r over 2 matrices, then r over 4 matrices, and at the top we'll have a single matrix with polynomial entries.

The next step is to evaluate that single matrix at s points, where s is going to be on the order of p minus 1 over r, so s is also on the order of square root of p. And here's where we're going to use the multipoint polynomial evaluation trick. We need to apply it to each entry of the matrix, so we're going to do nine multipoint polynomial evaluations, because we want to evaluate all nine entries of our big matrix of degree r polynomials at s different values. When we do that, what we wind up with is the product of M(j) for j from 0 up to r times s. It's possible that should be r times s minus 1; don't trust my indices on the slide, but we'll jump to the code, and the code has the right indices, because it works, and if it were off by 1 it wouldn't.

OK, notice r times s is not necessarily exactly p minus 1, so we might still need to tack on some extra matrices at the end. But it's going to be close to p minus 1, because we took s to be the floor of p minus 1 over r, so it can't be off by more than square root of p, and doing an extra square root of p multiplications at the end is not going to change the overall complexity. So this gives us an algorithm for computing the Hasse invariant using multipoint evaluations. It's just a different strategy, rather than computing vector-matrix multiplications.
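Before looking at the actual code, here is a small stand-alone Magma sketch of the first stage: multiplying matrices with linear polynomial entries in a product tree to get one matrix M' whose entries have degree about r. The specific matrix entries below are placeholders I made up so the sketch runs on its own; the real entries come from the coefficients of f as in the previous algorithm.

```magma
p := 10007;  F := GF(p);  R<k> := PolynomialRing(F);
r := Isqrt(p);
// Placeholder for the matrix M(k) with linear entries (not the real one);
// Mk(c) stands in for the shifted matrix M(k + c) over F_p[k].
Mk := func< c | Matrix(R, 3, 3, [k + c, 1, 0,  0, k + c + 1, 1,  2, 0, 3*(k + c)]) >;
Ms := [Mk(j) : j in [0..r-1]];       // M(k), M(k+1), ..., M(k+r-1)
// Product tree: multiply adjacent pairs until one matrix remains; the
// polynomial entries roughly double in degree at each level.
while #Ms gt 1 do
    Ms := [&*Ms[2*i-1..Min(2*i,#Ms)] : i in [1..Ceiling(#Ms/2)]];
end while;
Mprime := Ms[1];                     // 3x3 matrix over F_p[k]
assert Degree(Mprime[1][1]) eq r;    // entries now have degree about r
```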
We're going to compute the product of these p minus 1 matrices all at once, and we're going to do it in two stages. We first convert our M_k into an M(k) that we think of as a matrix over a polynomial ring, and I've written M(k) equal to M_(k+1) so that I'm shifting to 0-indexing here. I set r and s to be roughly on the order of the square root of p. And then we compute the matrix M'(k) as the product of all of these matrices whose entries are linear polynomials, using a product tree: the same sort of product tree as here, but now the nodes are matrices with polynomial entries, and we multiply them up until we get a single matrix at the top.
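Continuing the placeholder sketches above (the MultipointEval function and the matrices Mk and Mprime), here is what the second stage might look like: evaluate each of the nine entries of M' at the s points 0, r, 2r, and so on, recombine the resulting matrices over Fp, and tack on the handful of leftover factors at the end. Again, this is my illustration of the strategy under those made-up matrices, not the lecture's code.

```magma
s := (p - 1) div r;
pts := [F!(i*r) : i in [0..s-1]];    // evaluate M' at k = 0, r, 2r, ...
// Nine multipoint evaluations, one per entry of Mprime.
vals := [[MultipointEval(Mprime[a][b], pts) : b in [1..3]] : a in [1..3]];
// Recombine: multiply the s evaluated matrices over F_p in order.
acc := IdentityMatrix(F, 3);
for i in [1..s] do
    acc *:= Matrix(F, 3, 3, &cat[[vals[a][b][i] : b in [1..3]] : a in [1..3]]);
end for;
// r*s need not be exactly p-1, so tack on the few leftover factors M(j).
for j in [r*s..p-2] do
    acc *:= Matrix(F, 3, 3, [Evaluate(e, F!j) : e in Eltseq(Mk(0))]);
end for;
// Sanity check against the naive product of all p-1 matrices.
assert acc eq &*[Matrix(F, 3, 3, [Evaluate(e, F!j) : e in Eltseq(Mk(0))]) : j in [0..p-2]];
```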