And notice it's inside the loop, so that if h gets changed, it'll be a different h: whatever the current h is that we're supposed to work with. And now I've switched to calling our variable x, which is more familiar, and our f is x cubed plus a x plus b. But notice this f is now living in the quotient ring. So, for example, if h had degree 2, this f would automatically get reduced to a linear polynomial, because Sage knows that this variable x is living in this quotient.

And now I'm going to compute x to the q. The beauty of working with a computer algebra system is that I can literally just write x to the q, and it's going to do the right thing. And similarly, I can compute y to the q: I can literally just raise f to the (q minus 1) over 2 power. The slash slash there means integer division. So I've computed my Frobenius endomorphism as (x to the q, y to the q). To compute its square, I just multiply the Frobenius endomorphism by itself. This mul is calling the function that we saw up above, the one we implemented; it isn't something that's built into Sage. And then I'm going to compute the identity, which I'm going to call id, because I can't really use the symbol 1 as a variable name. It's just the endomorphism represented by x mod h — and x is already mod h, because it was defined as the generator of this quotient — and 1, though I do need to cast that 1 into the quotient ring; that's the 1 I mean. Then I'm going to compute q sub l by applying scalar multiplication, our double-and-add formula, to q mod l: I'm going to add the identity to itself q mod l times. And a and f are just going along for the ride, because the scalar multiplication needs to call the doubling and addition functions, and those need to know what a and f are. And now I'm going to compute pi squared plus q. And this is the thing: we're trying to solve this equation.
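Sage does that exponentiation in the quotient ring automatically, but the idea underneath is ordinary square-and-multiply with reduction mod h at every step, so intermediate degrees never blow up. Here is a minimal plain-Python sketch of that idea (this is not the lecture's Sage code; polynomials are coefficient lists over a small prime p, purely for illustration):

```python
# Minimal illustration of computing x^q in F_p[x]/(h): square-and-multiply,
# reducing mod the monic polynomial h after every multiplication, so degrees
# stay below deg(h).  Coefficient lists are low-degree first.

def poly_mulmod(u, v, h, p):
    """Multiply u*v in F_p[x], then reduce mod the monic polynomial h."""
    prod = [0] * (len(u) + len(v) - 1)
    for i, a in enumerate(u):
        for j, b in enumerate(v):
            prod[i + j] = (prod[i + j] + a * b) % p
    d = len(h) - 1
    # long division by h (assumed monic), keeping only the remainder
    for i in range(len(prod) - 1, d - 1, -1):
        c = prod[i]
        if c:
            for j in range(d + 1):
                prod[i - d + j] = (prod[i - d + j] - c * h[j]) % p
    return prod[:d]

def poly_powmod(u, e, h, p):
    """Compute u^e mod h by square-and-multiply."""
    result = [1] + [0] * (len(h) - 2)  # the constant polynomial 1
    while e:
        if e & 1:
            result = poly_mulmod(result, u, h, p)
        u = poly_mulmod(u, u, h, p)
        e >>= 1
    return result

# Sanity check: over F_5 with h = x^2 + 1, we get x^5 = x*(x^2)^2 = x*(-1)^2 = x.
p, h = 5, [1, 0, 1]              # h = 1 + 0*x + 1*x^2
print(poly_powmod([0, 1], p, h, p))  # -> [0, 1], i.e. x
```

In Sage the same effect comes from defining x as the generator of the quotient ring and writing `x^q`; the point of the sketch is only that reduction happens eagerly, which is what keeps x to the q feasible even though q is enormous.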
So pi squared plus q, we know, is equal to t times pi, where t is the trace of Frobenius, but we don't know what t is yet. It could happen that when we do this addition we get 0, in which case we know the trace of Frobenius: it's 0 mod l. It could also happen that we get something we already know — we could get pi — in which case we know the trace of Frobenius is 1 mod l. Then we go into our loop, and we just try all the c's between 2 and l minus 1. Remember, range goes up to one less, so this is going to check everything from 2 to l minus 1. We set p to pi, and we just keep adding pi to p. Again, this add is using the endomorphism addition that we implemented. And when we get a match, we return the trace we found, which is some integer less than l: the trace of Frobenius mod l.

This will always happen; that's a theorem. But when you're writing code, you don't want to assume: maybe there's a bug in your code, or maybe there's a bug in your theorem. So we'll put an assert down here, so that if we ever reach this line of code that should never be reached, we'll raise the alarm. And then finally, in this try clause, the except clause on a ZeroDivisionError does exactly what I said we'd do: we take the GCD of the division-polynomial factor that we saved with h. Now, that factor is in our quotient ring, so I have to lift it up to a univariate polynomial. I take the GCD and reset h. And notice that's at the bottom of the loop, so it's just going to go right back up to the top and start over with the new h. So when this gets hit, it will pop back up and recreate the quotient ring using the new h — sorry, it's in this line here. All right. And the Schoof algorithm itself is exactly what you saw on the slide: it just iterates through primes l until it has enough of them, then applies the Chinese remainder theorem.
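The shape of that search loop is simple, and it's independent of the heavy endomorphism arithmetic. Here's a hypothetical stand-in in plain Python, with integers mod l playing the role of endomorphisms and ordinary addition mod l playing the role of our add function (in the real code, add is the endomorphism addition that can raise ZeroDivisionError):

```python
# Sketch of the trace-search loop.  'pi' stands in for the Frobenius
# endomorphism and 'target' for pi^2 + q; in the real algorithm both are
# endomorphisms in the quotient ring, and '+' is the addition formula we
# implemented.

def trace_search(pi, target, l, zero):
    if target == zero:          # pi^2 + q = 0, so t = 0 mod l
        return 0
    if target == pi:            # pi^2 + q = pi, so t = 1 mod l
        return 1
    p = pi
    for c in range(2, l):       # range(2, l) tries c = 2, ..., l-1
        p = (p + pi) % l        # real code: p = add(p, pi, a, f, ...)
        if p == target:
            return c            # t = c mod l
    assert False, "unreachable: the trace always exists mod l (theorem!)"

# Toy example: l = 7, pi = 3; if the 'true' trace is 5, the target is 5*3 mod 7.
print(trace_search(3, (5 * 3) % 7, 7, 0))  # -> 5
```

The assert at the bottom mirrors the one in the lecture: by the theorem it can never fire, but if there's a bug anywhere, it fails loudly instead of silently returning garbage.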
All of the work is being done here in this line, where it calls our trace_mod function that we spent the last ten minutes looking at. OK. And then at the end, we know the trace mod some m that's bigger than 4 square root of q. Of course, the trace could be negative, so we need to get the sign right by checking whether it's bigger than m over 2. And because I want to be able to compare this algorithm to all the others, I'm actually not going to return the trace of Frobenius; I'm going to return the number of points on my elliptic curve, which is q plus 1 minus t.

OK. So let's go ahead and give this a try. First, we should sanity check it on some small cases and compare against the result that Sage already knows. E.cardinality is how you count points on an elliptic curve over a finite field in Sage. This just loops through all the prime powers from 5 to 100, tries three random elliptic curves for each, and checks that our algorithm gets the same answer. OK. But now let's try it on a curve over a larger finite field. So I'm going to go ahead and run this. Yeah, it runs faster than I can talk, so let's watch the slow-motion replay. I picked a fairly small prime — this is like a 60-bit prime, something we could easily have handled with Mestre's algorithm — and it computed that the trace is zero mod 2 almost instantly; it takes no time to figure out that f is irreducible. But when it was trying to compute the trace mod 3, it hit a zero divisor, and so it had to restart. The 3-division polynomial had a non-trivial factor of degree 1, which means that either this elliptic curve or its quadratic twist has a rational 3-torsion point. But that's OK: it restarted and computed the trace correctly, which is 2 mod 3.
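That final reconstruction step is ordinary CRT plus the sign fix. Here's a small self-contained illustration in plain Python (not the lecture's Sage code; the toy curve numbers at the bottom are made up for the example):

```python
# Recombine the residues t mod l into t mod m by the Chinese remainder
# theorem, then fix the sign: Hasse's bound gives |t| <= 2*sqrt(q), and
# since m > 4*sqrt(q), a CRT representative above m/2 must really be
# the negative value t - m.

from math import isqrt

def crt(residues, moduli):
    """Chinese remainder theorem for pairwise-coprime moduli."""
    t, m = 0, 1
    for r, l in zip(residues, moduli):
        # extend the solution: x = t mod m and x = r mod l
        t += m * ((r - t) * pow(m, -1, l) % l)
        m *= l
    return t % m, m

def point_count(q, residues, moduli):
    t, m = crt(residues, moduli)
    assert m > 4 * isqrt(q)      # we collected enough primes to pin t down
    if t > m // 2:               # the trace could be negative
        t -= m
    return q + 1 - t             # number of points: #E = q + 1 - t

# Toy example: q = 101 with true trace t = -5, so #E = 107.
q, t_true = 101, -5
moduli = [2, 3, 5, 7]            # product 210 > 4*sqrt(101) ~ 40.2
residues = [t_true % l for l in moduli]
print(point_count(q, residues, moduli))  # -> 107
```

The `pow(m, -1, l)` call is Python's built-in modular inverse (available since Python 3.8), which is all the CRT step needs.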
The fact that it's 2 mod 3 tells me it must be the quadratic twist that has the rational point. Then zero mod 5, zero mod 7, et cetera. And you can see it took less than a second; we also verified at the end that we got the right answer, and we did.

Let's try it on a larger example. Here you can see it's actually thinking a little when it gets up to 29 or so; it's starting to work. There are many things we could do to make this faster, but I tried to keep the implementation absolutely minimal — just what one needs to implement the algorithm. And it takes about 12 seconds on this example. If I were to rerun it on another example, it might take a different amount of time; it can actually vary quite widely from one curve to another, because it depends on how quickly it finds the trace and also on when it hits division-polynomial factors. That happened here on the 5-division polynomial, where we found a non-trivial factor. But it took about the same amount of time again.

And maybe just before we wrap up, I'll run it on a 100-bit prime. You might object that that's still pretty far from cryptographic size, but here's the thing. This is going to take maybe 20 seconds — the Magma implementation is faster, and the Julia implementation is faster still — but suppose we wanted to go to 200 bits. How much longer would it take? About 32 times longer. OK, this takes maybe a minute, so maybe it takes 32 minutes. If we wanted to go to 256 bits, it's not that much more of a stretch. I'm exaggerating a little: it is asymptotically exactly n to the fifth, but it might not scale at exactly that rate in practice, though it'll be quite close. And asymptotically, it won't take long before, if you were to do a log-log plot, you'd see a straight line with slope 5. And before we wrap up, I just wanted to quickly run through the complexity analysis, which is much less important than the algorithm itself.
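The back-of-the-envelope scaling there is just the n to the fifth asymptotic: doubling the bit length multiplies the running time by about 2 to the fifth. As a quick sanity check of that arithmetic (a sketch only — the constant factor is whatever your baseline timing happens to be):

```python
# Predicted slowdown from the O(n^5) running time as the bit length n of
# the prime grows.  base_seconds is any measured baseline; the ratio is
# what matters.

def predicted_slowdown(base_bits, target_bits):
    return (target_bits / base_bits) ** 5

print(predicted_slowdown(100, 200))  # -> 32.0: doubling n costs 2^5
```

So a one-minute run at 100 bits becomes roughly half an hour at 200 bits, matching the estimate in the talk.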
I just wanted to emphasize that you can analyze how long each of the steps takes. How many primes do we need? Well, we need enough primes that the sum of their logs reaches log of 4 root q, which is roughly n over 2 bits. And if you sum the logarithms of the primes up to x, the total is approximately x. So for a 256-bit prime field, we should expect to use l's up to about 128, which is not that much bigger than the l's we saw. For each l we spend something like l squared n log n, and in general l is going to be O of n, so you could replace every l with an n here to get a sense of how long each of these steps takes. The upshot is that for the large l's — the second half of the l's — we're spending on the order of n to the fourth log n time each. And how many primes l are there? About n over log n. So the actual complexity bound is exactly O of n to the fifth. And if you have read any of my earlier slides on this, or the course that I teach, you may remember that there used to be a log log in here. But thanks to David Harvey, I can joyfully take the log log off the screen: the fact that multiplication is now O of n log n means we don't need the log log anymore. OK, and I'd better stop there.
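That prime-counting estimate is easy to check numerically: keep taking primes l until their product exceeds 4 root q. A small plain-Python check for a 256-bit field, using q = 2 to the 256 as a stand-in (the exact largest l comes out a bit below the rough 128 estimate, but in the same ballpark):

```python
# How large do the primes l get?  We need prod(l) > 4*sqrt(q), i.e. the
# sum of their logs must reach log(4*sqrt(q)) ~ n/2 bits for an n-bit q.

def primes_needed(bound):
    """Smallest initial run of primes 2, 3, 5, ... whose product exceeds bound."""
    primes, product, n = [], 1, 1
    while product <= bound:
        n += 1
        if all(n % p for p in primes):   # trial division by earlier primes
            primes.append(n)
            product *= n
    return primes

q = 2**256                       # stand-in for a 256-bit prime field
ls = primes_needed(4 * 2**128)   # 4*sqrt(q) = 2^130
print(ls[-1], len(ls))           # largest l needed, and how many primes
```

This matches the heuristic: summing the (base-2) logs of the primes up to x gives roughly x / ln 2, so we expect to stop once x is around n/2 times ln 2, i.e. around 90–130 for n = 256.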