you click, so these are available to you; I posted the link on Zulip, and you can open them up. There's a PDF viewer inside the Jupyter notebook server, but if you click on the links in that PDF, just be aware it'll open a new browser tab; it won't open as a tab inside the server. Alternatively, you can go to the EC subdirectory of the course folder: all the links point to notebooks in that directory, so you can keep everything in one browser tab. Okay.

All right, why don't we go ahead and get started? Just to recap where we left off last time: we're looking at algorithms for counting points on an elliptic curve over a finite field, and we looked at four different algorithms. There was a naive point-counting algorithm, where we were literally just counting projective points, with time complexity roughly p squared. Then we had a slightly less naive version, where we took advantage of the fact that we can quickly count the rational roots of a polynomial over a finite field, and that brought us down to a running time quasi-linear in p. And then we looked at Mestre's algorithm. A question in the back, sorry? Yeah, maybe the mic is a little low. Hello, hello. Ah, there we go. Okay, now I'm loud. All right.

So our first version, a very dumb version of Mestre's algorithm, took advantage of the fact that we knew we were looking for an integer in the Hasse interval, which has width on the order of root p. We did a blind linear search, marching along the Hasse interval looking for multiples of orders of points, and we used Mestre's theorem: if we look at the elliptic curve and its quadratic twist, flipping back and forth and generating enough random points, we'll eventually be able to uniquely determine the group order. That gave us a running time on the order of the square root of p.

And then there's the algorithm people actually use. This is what all of these computer algebra systems are doing if you ask them to count points on an elliptic curve over a reasonably small finite field, say any word-size prime; they're running exactly this algorithm, not switching over to asymptotically faster algorithms yet. The idea is to do a baby-steps giant-steps search on the Hasse interval. The width of the Hasse interval is on the order of root p, but with a baby-steps giant-steps search we get a square-root speedup: we compute about the square root of the length of the linear search we were doing, which is the square root of 4 root p, so we have roughly p^(1/4) baby steps, and then we march along with roughly p^(1/4) giant steps. When we get a collision, we know we have a multiple of the order of the point, and we can do exactly what we did in the linear search: keep marching along until we see two multiples, or if we never see two, then we know there's a unique multiple of the point's order in the Hasse interval. So Mestre's algorithm works exactly the same; the BSGS search just implements a more efficient way of finding either the order of the point or a unique multiple of that order in the Hasse interval. The downside is that you pay a bit in space: the space complexity increases.
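Here is a minimal sketch of that baby-steps giant-steps search, with a made-up small prime and curve (p = 1009, y^2 = x^3 + 3x + 7 are illustrative choices, not from the lecture). It finds some multiple of the order of a point P inside the Hasse interval; it deliberately omits the Mestre twist-flipping logic needed to pin down the group order uniquely.

```python
from math import isqrt

p, A, B = 1009, 3, 7          # illustrative prime and curve coefficients (assumption)

def ec_add(P, Q):
    """Add points on y^2 = x^3 + A*x + B over F_p; None is the point at infinity."""
    if P is None: return Q
    if Q is None: return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                   # P + (-P) = infinity
    if P == Q:
        lam = (3*x1*x1 + A) * pow(2*y1, -1, p) % p    # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p     # chord slope
    x3 = (lam*lam - x1 - x2) % p
    return (x3, (lam*(x1 - x3) - y1) % p)

def ec_mul(n, P):
    """Compute n*P by double-and-add."""
    R = None
    while n:
        if n & 1: R = ec_add(R, P)
        P = ec_add(P, P)
        n >>= 1
    return R

def bsgs_multiple_of_order(P):
    """Return some M in the Hasse interval with M*P = infinity.

    Baby steps: store j*P for 0 <= j < m, with m ~ sqrt(4*sqrt(p)) ~ 2*p^(1/4).
    Giant steps: walk (lo + i*m)*P across the interval looking for a match."""
    lo = p + 1 - 2*isqrt(p)                  # left end of the Hasse interval
    m = isqrt(4*isqrt(p)) + 1                # ~ sqrt of the interval width
    baby, R = {}, None
    for j in range(m):
        baby[R] = j                          # R = j*P
        R = ec_add(R, P)
    S = ec_mul(m, P)                         # giant stride m*P
    Q = ec_mul(lo, P)
    for i in range(m + 2):
        if Q in baby:                        # (lo + i*m)*P = j*P, so subtract
            return lo + i*m - baby[Q]
        Q = ec_add(Q, S)
    raise ValueError("no match found in the search range")

# brute-force some affine point on the curve (fine at this toy size)
P = next((x, y) for x in range(p) for y in range(p)
         if (y*y - x*x*x - A*x - B) % p == 0)
print(bsgs_multiple_of_order(P))             # a multiple of ord(P) in the Hasse interval
```

The total work is about 2m ~ 4 p^(1/4) group operations plus the table lookups, versus ~4 sqrt(p) for the linear search; the table of baby steps is exactly where the extra space goes.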
There are actually O(p^(1/4))-time probabilistic algorithms that can get away with quasi-linear space, based on Pollard rho, but they're generally not used in this context: with baby-steps giant-steps the constant factors are usually slightly better, and by the time you get to the point where you'd really be worried about space, you're ready to switch over to the SEA algorithm anyway.

Okay, all the algorithms we talked about yesterday work over arbitrary finite fields F_q; they're totally agnostic as to the characteristic and whether or not it's a prime field. But today I want to talk specifically about prime fields, so all of our F_q's have been changed to F_p's, and all of our complexity bounds are going to be in terms of p. Okay, any questions on the quick recap? We're going to see three new algorithms today.

Yeah, sorry, I can't quite hear you. You said for small enough p? Yeah. Well, back in the old days, and I'm sure there are a few people in this room who remember, that's what we did do, because we didn't know anything better. I should have mentioned that one of the major motivations for a lot of these point-counting algorithms was elliptic curve cryptography, where it's quite important to know the order of the finite group you're going to work in if you're using discrete-log-based cryptography, and people were already working toward elliptic curve cryptography in the early 80s. At that point in time we did not know a better method than this one for counting points, and that's what we used. But then there was a major breakthrough, which we're going to spend tomorrow's lecture talking about, due to René Schoof, who introduced a polynomial-time algorithm. When it was very first introduced, I'm not sure people completely realized the power of that algorithm, but now everybody does. As we'll see tomorrow, it's quite efficient, and it will blow past the fourth algorithm on this list pretty early on, certainly by 2^80; I think that would be around the crossover point. And if you throw in improvements to that algorithm due to Noam Elkies, who I understand is also going to be here during the summer session, the complexity gets even better, and we'll get a chance to talk about that tomorrow. Okay, good question though. But we shouldn't sneer at this algorithm. It is impressively fast, and as we'll see, for one of the problems we want to consider today, which is: suppose I have a fixed elliptic curve over Q and I want to count points on all of its reductions, then even though there are asymptotically faster algorithms, there is no practical situation in which you would ever use them. This is always the algorithm you're going to use, and we'll see why when we come to that.

Okay, all right. I should pause here to thank Sam for his excellent question yesterday, which motivated the problem you did in this morning's problem session: why don't you just think of your curve as y^2 = f(x), compute a table of squares (or use the Legendre symbol to test for squares), and evaluate f? This leads to the notion of the Hasse invariant, which you all worked with this morning in the problem session, and which we'll take as our definition: it's the coefficient of x^(p-1) in the polynomial f(x) raised to the power (p-1)/2.
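To make that definition concrete, here is a deliberately naive computation of the Hasse invariant exactly as defined: raise f to the (p-1)/2 power mod p by repeated squaring of coefficient lists, then read off the coefficient of x^(p-1). The curve f = x^3 + 3x + 7 over F_101 is an illustrative choice; coefficient lists are in ascending order of degree. This is only sensible for tiny p, which is precisely the point the lecture makes next.

```python
def polmul(a, b, p):
    """Multiply two coefficient lists (ascending degree) mod p, schoolbook style."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % p
    return c

def hasse_invariant_naive(f, p):
    """Coefficient of x^(p-1) in f(x)^((p-1)/2) mod p, by repeated squaring."""
    result, base, e = [1], f, (p - 1) // 2
    while e:
        if e & 1:
            result = polmul(result, base, p)
        base = polmul(base, base, p)
        e >>= 1
    return result[p - 1] if len(result) > p - 1 else 0

# e.g. y^2 = x^3 + 3x + 7 over F_101 (illustrative): f = [7, 3, 0, 1]
print(hasse_invariant_naive([7, 3, 0, 1], 101))
```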
And some people define the Hasse invariant a little more coarsely: they just say it's a thing that's either zero or nonzero, and it tells you whether your elliptic curve is supersingular or not. But actually there's a lot more information in the Hasse invariant. It gives us the trace of Frobenius, as you showed in the problem session: the trace of Frobenius is congruent modulo p to the Hasse invariant, and once p is bigger than 13, that's enough to uniquely determine the trace, because the trace lies in the Hasse interval, whose width 4 root p is then less than p. So if we can compute the Hasse invariant, we can compute the trace of Frobenius.

Another way to think of the Hasse invariant, if you're familiar with the Cartier-Manin matrix or the Hasse-Witt matrix that you can associate to curves of arbitrary genus: the Hasse invariant is basically a one-by-one matrix. It's exactly the same thing; it's essentially the Cartier-Manin matrix of a curve of genus one. But we'll just focus on our Hasse invariant, where we have a very concrete definition. If you wanted to compute it, you could just take that cubic polynomial f(x), raise it to the power (p-1)/2, go look for the coefficient of x^(p-1), and there's your Hasse invariant. And if you stop and think about that for a moment, you'll probably decide that's an absolutely horrible algorithm, because exponentiating this cubic to the (p-1)/2 power produces a huge polynomial whose degree is linear in p. That seems like a crazy way to compute the trace of Frobenius, given the algorithms we've already seen. We'll come to that, but let's first note that the congruence with the trace of Frobenius tells us a few nice things. One is this: you might have thought the definition of the Hasse invariant depends on the choice of f. What if I replace f(x) with f(x+1)? Then y^2 = f(x+1) defines an isomorphic elliptic curve, so you might worry whether our Hasse invariant is really invariant. The fact that it's congruent to the trace of Frobenius means yes, it is, because the trace of Frobenius is invariant.

Okay, so all the algorithms we're going to look at today work by computing the Hasse invariant, and we're going to look at three different methods for computing it. Any questions on the setup before we start digging in? Okay.

All right, so the motivation for all the algorithms we're going to present today: on the one hand, it seems crazy to exponentiate this polynomial and then have to go find one coefficient in a polynomial that's got something like 3(p-1)/2 of them. But these coefficients aren't completely random. There are relations between them, and we're going to develop some linear recurrences that will allow us to shift our focus from one coefficient of the polynomial to the next without ever having to write them all down.

Okay, so how does that go? To set things up in a very general setting, let's forget about finite fields for a moment and just suppose somebody gave us an integer polynomial, call it h, of some degree r. The only thing I'm going to insist on is that h has a nonzero constant coefficient, and if it doesn't, you can just shift it so that it does. So that's without loss of generality.
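Here is a small sketch of the lifting step just described: given the Hasse invariant mod p, recover the trace of Frobenius as the unique representative in the Hasse interval. The function name is ours, and it assumes the input really is the Hasse invariant of an ordinary or supersingular curve with p > 13.

```python
def trace_from_hasse(h, p):
    """Lift h (the Hasse invariant mod p) to the trace of Frobenius t.

    t = h (mod p) and |t| <= 2*sqrt(p); for p > 13 the interval
    [-2*sqrt(p), 2*sqrt(p)] contains at most one residue class mod p,
    so the lift is unique.  (h = 0 gives t = 0: the supersingular case.)"""
    t = h % p
    return t if t * t <= 4 * p else t - p
```

For instance, the Hasse invariant computed in the previous snippet, fed through trace_from_hasse with p = 101, gives the trace, and then #E(F_101) = 101 + 1 - t.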
Since we're going to be looking at coefficients of powers of x in some power of the polynomial, it'll be convenient to have a shorthand for that. So h sub k superscript n means the coefficient of x^k in the nth power of h. It does not mean take the kth coefficient and raise it to the nth power; those are different things, except when k is zero, when they're actually exactly the same thing, and that's a very useful fact that we're going to exploit. And if you ever see an h sub k without a superscript, imagine the superscript is one: it just means the kth coefficient of h.

Now, if you write down two sort of mindless equations, namely h^(n+1) = h * h^n, and the Leibniz rule for the formal derivative of h^(n+1), you get two linear relations among nearby coefficients of the nth and (n+1)st powers of h. If you then combine these two relations and solve for h sub k superscript n, the coefficient of x^k in h^n, you get a sum with r terms in it, where r is the degree of h, each involving a coefficient of the initial polynomial h we were given times one of the coefficients of the nth power. And not just random coefficients: the r preceding coefficients, the ones with indices just below k. Okay, so this suggests a strategy for computing h sub k superscript n: start with h sub zero superscript n, which is just the constant coefficient h sub zero raised to the nth power, and then use the recurrence to walk up to index k.
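Below is one standard way to package that combined relation, sometimes called Miller's recurrence for powers of a polynomial; the exact form derived in lecture may differ slightly, but it has the same shape, with r preceding coefficients of h^n and the coefficients of h itself. Writing g_k for h sub k superscript n, it reads k * h_0 * g_k = sum over i = 1..r of ((n+1)*i - k) * h_i * g_(k-i), with g_0 = h_0^n. Working mod a prime p makes the divisions by k exact as long as k < p, which is all we need for the Hasse invariant. The function name is ours.

```python
def power_coefficient(h, n, k, p):
    """Return [x^k] h(x)^n mod p without expanding h^n.

    h is a coefficient list in ascending degree with h[0] != 0 mod p;
    requires k < p so that every index we divide by is invertible mod p."""
    r = len(h) - 1
    h0_inv = pow(h[0], -1, p)
    g = [pow(h[0], n, p)]                         # g_0 = h_0^n, the useful fact
    for kk in range(1, k + 1):
        # r-term sum over the preceding coefficients g_(kk-1), ..., g_(kk-r)
        s = sum(((n + 1) * i - kk) * h[i] * g[kk - i]
                for i in range(1, min(r, kk) + 1))
        g.append(s * pow(kk, -1, p) * h0_inv % p)
    return g[k]

# e.g. the Hasse invariant of y^2 = x^3 + 3x + 7 over F_101 again:
print(power_coefficient([7, 3, 0, 1], 50, 100, 101))   # matches the naive value
```

Note what this buys us: instead of storing a polynomial of degree 3(p-1)/2, we take about p steps, each costing a constant number of arithmetic operations mod p (the degree r is fixed at 3), and we only ever look at r coefficients at a time.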