Okay, thank you, Bjorn. So good morning everybody. Welcome to day two. So the last thing we were talking about was this baby version of the trace formula. So now we are going to do the grown-up version. Okay, so this is Theorem 4.2.7 in the notes. So let me just remind you of the setup. We have a hyperelliptic curve over F_p of genus g. Okay, so there's our hyperelliptic curve. And we had this polynomial h, which is f to the power (p-1)/2. So I want to define a matrix. This is the matrix we're going to take the trace of. So I'm going to define a matrix, which I'm going to call A sub f. This is a g by g matrix with entries in F_p. And its entries are given very explicitly as follows. If I take the entry in the v-th row and u-th column, this is given by a certain coefficient of h: it's h_{vp-u}. And this is for u and v between 1 and g. So probably a lot of you recognize this matrix. It's often called the Hasse-Witt matrix of the curve or the Cartier-Manin matrix of the curve. Really, when you call it these things, you should be thinking with respect to a certain basis for whatever cohomology space or space of differentials or whatever it is. I'm not really going to go into those cohomological interpretations. All I care about for this course is that the entries are given by certain coefficients of h. Okay, very nice concrete interpretation. So here is the theorem: if I look at the number of points on the curve over F_{p^r}, so not just F_p anymore, now we're looking at extensions, this is congruent to 1 minus the trace of the r-th power of this matrix, modulo p. Of course, we can only get it modulo p because we're only looking at h modulo p. There's no higher power of p information here. In section 8, you'll find a generalization of this formula for higher powers of p. Okay, and this is for all r greater than or equal to 1. Now I don't think I'm going to go through the proof of this in the lectures. It's sort of worked out in the notes. There are some problems that help you work this out. I might just say a few words about it. The first thing you should notice is that when r equals 1, this really is just the baby trace formula. Because when r is 1, you're just taking the trace of A_f, which means you're summing these entries over all v and u where they're equal. So v and u are both 1, and then both 2, and so on. And those indices are the multiples of p minus 1. So that's exactly what you're getting in the baby trace formula. But the idea of the proof, I think this came up in the questions at the end of the last lecture, is that you first replace things: we looked at f(alpha) to the power (p-1)/2, and you have to replace this by f(alpha) to the power (p^r - 1)/2. The first expression is detecting whether things are squares in F_p; the second is detecting whether things are squares in F_{p^r}. When you do this, you end up with an analog of this polynomial but with a much higher power of p. You get p^r here instead of p. So that's no good. But then there's some trickery which lets you express the coefficients of that polynomial in terms of the coefficients of this h. And it turns out, for magical reasons, that this is the relationship you get. There are a few different ways you can prove it. I've suggested one in the notes. So that's the idea. Let me just write down an example so we see what we're dealing with. So same example as before. Let's take p = 11, genus 2, and remember that the degree of h was 30. That's 5 times 6. So here's the matrix: the first row is h_10, h_9, and the second row is h_21, h_20.
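(For anyone who wants to follow along on a laptop, here is a very naive sketch, in Python, of exactly this recipe. The function names and the particular f are my own, nothing here is from the notes, and the polynomial arithmetic is the slowest possible; the only point is to make the indexing h_{vp-u} and the quantity 1 - trace(A_f^r) completely explicit.)

```python
# Naive sketch (all names are illustrative): build the Hasse-Witt / Cartier-Manin
# matrix A_f of y^2 = f(x) over F_p and evaluate 1 - trace(A_f^r) mod p, which the
# theorem says is the point count over F_{p^r} modulo p.

def poly_mul_mod(a, b, p):
    """Schoolbook product of two polynomials over F_p (coefficient lists, low degree first)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def hasse_witt_matrix(f, p, g):
    """Row v, column u entry is the coefficient of x^(v*p - u) in h = f^((p-1)/2)."""
    h = [1]
    for _ in range((p - 1) // 2):          # naive powering; repeated squaring comes later
        h = poly_mul_mod(h, f, p)
    coeff = lambda k: h[k] if 0 <= k < len(h) else 0
    return [[coeff(v * p - u) for u in range(1, g + 1)] for v in range(1, g + 1)]

def trace_formula(f, p, g, r):
    """Return 1 - trace(A_f^r) mod p."""
    A = hasse_witt_matrix(f, p, g)
    Ar = A
    for _ in range(r - 1):                 # A^r by repeated multiplication
        Ar = [[sum(Ar[i][k] * A[k][j] for k in range(g)) % p for j in range(g)]
              for i in range(g)]
    return (1 - sum(Ar[i][i] for i in range(g))) % p

# The p = 11, genus 2 example: f below is an arbitrary degree-6 choice of mine with
# f(0) != 0 (I haven't checked it is squarefree; it is only here to exercise the indexing).
p, g = 11, 2
f = [5, 1, 0, 0, 3, 0, 1]                  # 5 + x + 3x^4 + x^6
print(hasse_witt_matrix(f, p, g))          # [[h_10, h_9], [h_21, h_20]]
print([trace_formula(f, p, g, r) for r in (1, 2, 3)])
```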
So if I draw the coefficients of h again, this goes up to x to the 30. So what do we have? We have 21, and I think before I had, this was my 20, and this was my 10. And now we're dealing with 21 and 20 and 10 and 9. So there are four coefficients. They come in little groups of 2 because the genus is 2, and there are two groups because the genus is 2. And it's quite incredible that just knowing these coefficients lets you get these point counts over all extensions. I actually don't know the history of this result. I mean, it's very old. I think it's folklore now, but I'm sure someone here would know who first wrote down this kind of formula. I think it goes back a long way. But if I now have this matrix, I can write down, for example, the number of points over F_p. I mean, I just get the baby trace formula here. So we wrote this down yesterday: 1 minus h_10 minus h_20, and this is mod p. And then if I want the number of points over F_{p^2}, I just get 1 minus, well, I'm not going to write it out. There's some ugly formula involving all these coefficients. It'll be the trace of this matrix squared. So you can see you would have to do something like this times this, and this times this, and calculate the trace. One other comment I should make is about the notation. I've put the f in the notation here because you could write down a different model for the curve. You could do a transformation of x or something. And you'll get a different matrix. They're related in some way. But I like to put the f here to remind me it's this particular model that I'm writing down this matrix for. Any questions? No? All right. So now we've got this trace formula. This is really great. So you can already see there's an algorithm coming out of this, which I'll get to in a minute: if you just calculate these coefficients, then you can calculate the number of points by using this formula. And what a lot of the rest of the course is about is how do I calculate just these coefficients more efficiently than by multiplying out the polynomial? There are better ways of doing it. That's really what the rest of the course is about. But before I go on to that, I want to talk about consequences of the trace formula for the L-polynomial. And in fact, I learned a few things while I was preparing the notes for this part of the course. A few things I didn't realize, which is kind of interesting. So, okay, let's write again N_r for the number of points over F_{p^r}. Now, what do we know? We have this A_f. Okay, this is a matrix over F_p. And we know we can use the trace formula to compute these point counts modulo p. Right? This is the trace formula. I wish I had a wider board here. My diagram is going to have to go around a corner in a second. That's okay. Let's see what happens. Now, okay, I've got these numbers N_1 up to N_g. And you remember that for the L-polynomial, if I know these guys as integers, then I know the L-polynomial. So you might wonder, well, suppose I know these guys modulo p. Does that mean I know the L-polynomial modulo p? Okay? Well, let's see how it would work. So first you take these numbers and then you would try to compute the zeta function modulo p. I'll put question marks here. And when I say the zeta function modulo p, I guess I really mean just the first g coefficients. Right? I mean the coefficients of t up to t to the g. Okay? Now, there's actually a bit of a problem here because if you think about the formula for the zeta function in terms of these numbers, there are denominators. Right?
You have to remember you have to write down, you know, there's N_2 over 2 and N_3 over 3. So that could be a problem when p is small. And then the other problem, when you take the exponential, there are also denominators. Right? So something funny could be going on. So I'll just leave the question marks there for a second. Come back to that. Once you've got the zeta function, then there's no question that you can get the L-polynomial. That's fine. This is okay. This is easy. Mod p. Okay? Because if you think about it, remember the formula is that the zeta function is L(t) over (1 minus t)(1 minus, in our case, pt). So you just multiply through by this denominator and that will tell you the coefficients of the L-polynomial. So that's fine. This is the step that's a problem. Now, there are a few interesting things that happen here. So first of all, this arrow, it does work if p is bigger than g. Now, let me just check my notes. Is it bigger or bigger than or equal to? I think it's strictly bigger. So this arrow here is okay if p is bigger than g. And this is discussed in a problem. This is problem 4.3.3. And the reason is basically that these denominators are not divisible by p. So there's no problem, okay? No surprises. But it turns out that this arrow doesn't always work when p is less than or equal to g. Not okay if p is less than or equal to g. And there is a problem about that, too. I had a lot of fun with these problems. And here's what I mean. This problem, 4.3.1, shows you an example of two curves that have the same point counts modulo p, right? The N_r's, and I think the genus is 3 in that example. The point counts are the same modulo p, but the zeta function and the L-polynomial are different modulo p. Okay? So what's happening is these numbers modulo p, they don't have enough information to get the L-polynomial modulo p. It's kind of annoying. Now, at some level I don't really care because in this course I mainly think of p being large, but it's good to be aware there's a little issue there. Okay? Okay. But here is the crazy thing. It turns out that there is a shortcut in this picture: you don't have to go via these little arrows. You can go directly from this matrix to this L-polynomial modulo p. And in fact there is enough information in this matrix to recover the L-polynomial modulo p. So you actually lose information when you go along this arrow. Let me write down what the statement is. Nevertheless, there's a beautiful theorem, 4.3.2, which says that the L-polynomial is congruent to the inverse characteristic polynomial of this Hasse-Witt matrix, det(I minus t A_f), modulo p. Okay? So that Hasse-Witt matrix actually has the missing information. Now, that theorem, when p is bigger than g, you can prove it from this trace formula, right? You can sort of trace through what happens as you go through this diagram. And there's a problem about that in the notes. And that's pretty straightforward. When p is less than or equal to g, the proof is a bit more difficult. And it's not in this course. And I think I've got a reference to a paper of Manin that does it. It's a bit more involved. And I think that result might be due to Manin. Does anyone know? Or is it older? I don't know. Curious about the history of that.
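(Again just as an illustration, here is what the shortcut looks like computationally: given the matrix A_f, compute det(I - t A_f) over F_p by brute-force cofactor expansion. For the tiny g we care about this is instant; the names are mine and none of this is optimized.)

```python
# Sketch: the shortcut L(t) = det(I - t*A_f) mod p, by cofactor expansion over F_p[t].
# Polynomials in t are coefficient lists, low degree first.  Fine for tiny g.

def pmul(a, b, p):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def padd(a, b, p):
    n = max(len(a), len(b))
    return [((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)) % p
            for i in range(n)]

def det_poly(M, p):
    """Determinant of a matrix whose entries are polynomials over F_p."""
    if len(M) == 1:
        return M[0][0]
    total = [0]
    for j in range(len(M)):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        term = pmul(M[0][j], det_poly(minor, p), p)
        if j % 2:                          # cofactor sign: negate mod p
            term = [(p - c) % p for c in term]
        total = padd(total, term, p)
    return total

def L_poly_mod_p(A, p):
    """Coefficients of det(I - t*A) mod p, which the theorem says is L(t) mod p."""
    g = len(A)
    M = [[[1 if i == j else 0, (-A[i][j]) % p] for j in range(g)] for i in range(g)]
    return det_poly(M, p)
```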
Okay, so what I'm going to do in the rest of the course, just because this is so beautiful and simple, is I'm just going to assume this, even though we don't quite know how to prove it when p is less than or equal to g in this course; it just makes life easier. So we're going to use this quite a bit. I should also mention that in section eight, when we go up to higher powers of p, I'm not quite sure what the analog of this is. I suspect there is a way to do something similar, but I don't know exactly what it is. And again, I haven't really thought about it much because I'm mainly interested in the case that p is large. But you can have a think about that if you're interested. Okay, what I want to do next is analyze the complexity of this algorithm that I've been hinting at. Okay, so the algorithm, let me quickly write down just briefly what this algorithm is, and we'll do a bit of complexity analysis. Okay, so this is what I call the expansion algorithm, or expansion strategy: the expansion strategy for computing, I guess, the matrix A_f and hence the L-polynomial. Okay, so it's very simple. So the first step is we're going to expand out h = f to the (p-1)/2. We're just going to compute this polynomial. Okay, and we need to analyze the complexity of that step. And this gives us, of course, A_f. We can just read off the coefficients. And step two is to compute this inverse characteristic polynomial. So I want to spend a bit of time today talking about the complexity of these two steps, not so much for its own sake, but more just to give us some practice with thinking about these things before we do more with some other algorithms later on. Okay, now of course, there are lots of ways you could do these things, right? I'm not saying this is the best possible. I'm just going to give you one way of doing it, so that we have a theorem that says what a possible complexity is. Maybe there's a better way to do it, a different way to do it. That's fine too. So I should say just a few words about the computational model. And there's a little bit about this in section, I think, 1.6 in my notes, only a little bit. So again, I just want to go back a few years to when I was first studying this stuff: I didn't realize for a very, very long time that there was such a thing as a formal computational model. I was very naive, right? Now of course, no one in this room is that naive, right? But I had no idea. And it was many years, I think it was a paper of, who was it? Maybe Claus Diem, I think, who wrote this paper about models of computation in computational number theory. And he was making the point that a lot of people never say what they mean, right? And how do you prove theorems about an algorithm if you don't say what the model is? And so I've tried to get on a bit of a campaign to always say what model I'm thinking about. And it's usually the multi-tape Turing model. That's the one that computer scientists seem to use as their standard model. And there are some disadvantages and advantages. I was discussing these with Drew the other day. But very briefly, it's a Turing machine. So a finite state machine: it has a finite number of states it can be in. And it has a bunch of tapes, a finite number of tapes. You have to decide in advance how many tapes you get. And each tape stores some data, potentially an unlimited amount of data on the tape.
There's no finite memory bound on this thing. It has not infinite memory, but potentially infinite memory, whatever that means. And the point is that at every step, it can look at where the tape head is on all the tapes and then decide what it's going to do. And what it can do is move to another state, possibly write a value to each of the tapes, and slide the tapes over by one cell, in whichever direction it likes. And then the complexity is the number of steps you take on this Turing machine. And so the one thing that might be unfamiliar in this model for people who are experienced programmers is that you cannot access an element of an array in constant time. If you want the millionth element of an array, you have to walk over there. It takes you a million steps to get there. And then you can read it and then you can come back to what you were doing. So that seems very restrictive, but it's actually not that unreasonable. It's actually much closer to your laptop than you think. Anyway, I don't want to spend all day talking about that. But this is the complexity model that I have in mind. Okay, so let's look at how to compute this power. So we're going to compute h = f to the (p-1)/2 using a method called repeated squaring. Now, usually when people say repeated squaring, they're talking about a situation where every multiplication in your ring takes constant time. So for example, you know, if you're adding points on an elliptic curve, computing a large multiple of a point on an elliptic curve, each group operation takes constant time. That's not the case here, because we're going to start with a polynomial f and we're going to take larger and larger powers, and it takes longer and longer as the degree goes up, but the algorithm is still basically the same. So I'm going to call it repeated squaring. So let me just show you an example. Let's take p = 83. Okay, so I need to compute f to the 41. Well, what's f to the 41? Well, 41 is odd. So this is f times f to the 40, and f to the 40 is f to the 20 squared. Okay, so now we need to compute f to the 20. Well, 20 is even, okay, so you get the idea. Okay, f to the 10 is f to the 5 squared. 5 is odd. So that's f times f squared squared. And then I guess f squared is f squared. All right, so then I run this backwards. I start with f, I multiply it by itself so it gets a bit bigger. Then I square that again. That doubles the size. Then I multiply by f, which doesn't change the size very much, just a little bit; that's f to the 5. Square that and I'm at f to the 10. Then I double it again and I'm at f to the 20, and so on up to f to the 41. Now what's happening is, as we proceed, the size is roughly doubling at each point. So the cost is going to be determined pretty much by the cost of the top-level multiplication. Okay, and then you can just add things up like a geometric series. Okay, so the real question is, what is the cost of polynomial multiplication at the top level? That's really what's controlling this. So let's take a look at that. And again, I'm not claiming this is necessarily the best way to compute this large power. I think it's pretty close as far as I know. But you can probably save a constant factor somewhere somehow. Okay, so now we get to the fun stuff, right? Let's write M_p(n) to be the cost of multiplying polynomials in F_p[x] of degree, say, less than or equal to n. So two polynomials of degree less than or equal to n.
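(So, in terms of that notation: the repeated-squaring step I just described is only a few lines. This is a minimal sketch of my own; the multiplication routine passed in is a stand-in for whatever actually realizes M_p(n).)

```python
def schoolbook_mul(a, b, p):
    """Naive O(n^2) product over F_p; stands in for whatever realizes M_p(n)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def poly_pow_mod(f, e, p, mul=schoolbook_mul):
    """f^e over F_p by repeated squaring, e.g. f^41 = f * (f^20)^2.
    The total cost is dominated by the top-level call to `mul`."""
    if e == 0:
        return [1]
    half = poly_pow_mod(f, e // 2, p, mul)
    sq = mul(half, half, p)
    return mul(sq, f, p) if e % 2 else sq

# h = f^((p-1)/2) is then poly_pow_mod(f, (p - 1) // 2, p).
```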
Now let me just emphasize that I'm working here in a bit complexity model. Okay, another common model for this sort of question is an algebraic model, where you count the number of operations in F_p. And that's a reasonable model to use, and you get similar answers. But I'm really interested in the bit complexity, which means I have to take into account the size of elements of F_p. Okay, it's harder to multiply in F_101 than it is in F_2. Okay. Okay, so before I talk about this, I need to talk briefly about integer multiplication. So M_int(n) is the cost of multiplying integers of at most n bits. Okay, now, Bjorn mentioned before, I've worked a lot on this problem. And a few years ago, I proved with Joris van der Hoeven that you can get O(n log n). And this is an annoyingly complicated algorithm, which uses a lot of FFTs and other fun things. And as I said, I could talk for hours quite happily about this, but I'm not going to. Okay, so we have this algorithm. Now, what about this guy, M_p(n)? What do we know about this? And here's where things get very annoying. So I'll tell you first of all what we expect to be true, which we cannot quite prove yet. We expect that M_p(n) is O(B log B), where B is the bit size of the polynomials. So if you take two polynomials over F_p of degree n, and you look at the total number of bits to represent such a polynomial, which is n log p, then you should be able to multiply them in time B log B. That's what we think is true. You might ask, why do we think that? Is it just by analogy? It's actually a bit more than that. I mean, analogy, sure, that's a reasonable reason. But the better reason is that we actually have proved this, assuming some number-theoretic hypotheses. Okay, so Joris and I have another paper where we prove that this is true assuming something about small primes in arithmetic progressions. If you're familiar with Linnik's theorem: Linnik's theorem gives you an upper bound for the least prime in an arithmetic progression. We basically need a really strong version of Linnik's theorem. We need a Linnik exponent which is unreasonably small, much smaller than anyone has any hope of proving. Now, that's just a conjecture that that holds, but it's actually much weaker than the standard conjectures about primes in arithmetic progressions. So we have this situation where, you know, what we can prove is over here and what we think is true is over here, and we need something in the middle, quite a long way from both ends. So, you know, surely it's true, surely, surely it's true, but this is a double lightning bolt problem. I've thought a lot about this and I have zero idea what to do next. No idea at all. Anyway, that's a good problem. So let me show you one thing we can prove, using the integer multiplication result. Okay. So here's what we can prove. So let's use Kronecker substitution, Kronecker substitution to reduce the F_p[x] case to Z. I heard a funny story the other day from, I think from Kate, who's Canadian. I didn't know she was Canadian, but she said she was; she's teaching American students and they thought that Z was this guy, the integers, and all the others, you know, this little guy is z. They're all zed to me, except in the alphabet song, right, because it doesn't rhyme.
So how does the Kronecker substitution method work? So you're trying to multiply two polynomials in F_p[x]. I'll just say this out loud, I won't write it down. What you do is you take your polynomials and you forget about p, right? You think of your polynomials as just polynomials with integer coefficients. And then what you want to do is multiply these polynomials with integer coefficients and then reduce the result at the end, modulo p. Okay. So how do you multiply in Z[x]? What you do is you evaluate your polynomials at a power of two. And what that does is it sort of packs all the coefficients together with zeros in between them. It writes them down as a huge integer. And then you multiply those huge integers. And there's a homomorphism property of this evaluation map, which means that when you multiply the integers, you actually get the same result as multiplying the polynomials and writing their coefficients next to each other in a big long line. And then, if you make your power of two big enough, those coefficients won't overlap with each other and you can just read them off. Okay. So that's the basic idea. And what that proves is that you get M_p(n) is O of, maybe I'll call it B' log B', where unfortunately B' is a little bit bigger than this B. It has to be n times log of np. So you've got this extra factor of n inside the log. It's coming from the fact that when you multiply the polynomials over the integers, you get coefficient growth. Okay. So an easy way to see this is, suppose you're multiplying polynomials over F_2. When you write down the polynomials over Z, you'll just have coefficients 0 and 1. But when you multiply the polynomials, they're not just going to have coefficients 0 and 1. They're going to get bigger. And they could get as big as log n bits, because you're adding n coefficients together. So you sort of have to leave enough room for that. And then you reduce mod 2 at the end. So you throw away most of what you compute. That's not a good way to multiply polynomials over small finite fields.
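(Here is a sketch of that packing trick. The padding below is one safe choice of mine, not necessarily the tightest, and the inputs are assumed to be already reduced into the range 0, ..., p-1.)

```python
def kronecker_mul_mod(a, b, p):
    """Multiply a, b in F_p[x] (coefficient lists, low degree first) via a single
    big-integer product.  Sketch only; the padding is chosen generously."""
    n = max(len(a), len(b))
    # Each coefficient of the product over Z is a sum of at most n terms, each < p^2,
    # so reserving enough bits for n * p^2 prevents the packed digits from overlapping.
    bits = (n * p * p).bit_length() + 1
    pack = lambda poly: sum(c << (i * bits) for i, c in enumerate(poly))
    prod = pack(a) * pack(b)               # one integer multiplication does the work
    out, mask = [], (1 << bits) - 1
    for _ in range(len(a) + len(b) - 1):   # unpack the digits and reduce mod p at the end
        out.append((prod & mask) % p)
        prod >>= bits
    return out
```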
Yeah. Yeah. So the question was: is this algorithm always more efficient than naive multiplication? The answer is definitely not. So naive multiplication is going to be n squared. And that's the same for integer multiplication or polynomial multiplication. Right. Now the problem is that this n log n algorithm has a really enormous big-O constant. Right. I mean, in the paper, we were just trying to get this result theoretically. Right. We didn't care about constants at all. So we were very, very fast and loose with our estimates. And I think we needed n to be bigger than, I think it's something like 2 to the power of 1729 to the power of 12. Like some completely ludicrous number. Now I'm sure you can do a lot better, and that's something I plan to work on in the next few years. I haven't got there yet. But still, the point remains that it's not suitable for small n. Right. So there's always this tension in computational number theory between asymptotic results and real-life results. What happens in practice is, if you ask your computer to multiply two numbers, it will look like n squared for a little while, maybe up to a few hundred bits. And then you get into a region where they start using the Karatsuba algorithm. And then the exponent starts to decrease. It's not n squared anymore. It's like n to the one point something. And as n gets bigger, they throw more and more sophisticated algorithms at it. At the end, if you're using the GMP library, they're probably using the Schönhage-Strassen algorithm, which is pretty close to n log n, in theory anyway. And it will look almost linear. But it takes a while for that to happen. And the thresholds between the algorithms are determined empirically. It's almost impossible to work them out from first principles. It depends on so many factors that are hard to predict. Yeah. So I hope that answers your question. But as mathematicians, I don't care, right? If I'm just trying to understand these algorithms from the highest level, it's much easier just to say I've got n log n and get on with my life than to worry about what regime I'm in. I mean, you worry about that when you implement these things, but at this sort of theoretical level, I don't care. Any other questions? Okay. Right. So I was saying, this is what you get, this B' log B'. And you get this extra factor of n. And you can check that if you fix p, say take p = 2, and you try to multiply polynomials using this method, you will not get n log n; this ends up being something like n log squared n. So you actually lose a whole factor of log n. It's terrible. Anyway, it's okay for the purposes of this course, because we can now figure out the cost of this step one, which I just erased. So the cost of step one, which was computing h = f to the (p-1)/2: I can now work this out. So I need to know the degree. Well, the degree of h is O(gp), because f has degree O(g), and then I've got this exponent p. So that's the degree of h. And so I can work out my B, which is, I guess, my B', my modified bit size: it's going to be O(gp), that's n, times log of np, and np is gp times p, so gp squared. So that's my B'. And I can simplify that a bit, because that square just comes out as a constant factor in the log. So this is just O(gp log(gp)). So that's my B'. Okay. And then I can work out the multiplication cost. It's going to be O(B' log B'). So I get gp log(gp), and then times log of all that, gp log(gp). And I can simplify that, because all of this is really no more than a power of gp. So this is O(gp log squared (gp)). Okay. So this is the cost of step 1. Okay. And I don't think it really changes very much if you assume the thing we expect about polynomial multiplication. I don't think I've worked that out, but I think it ends up being more or less the same. Okay. The one point I want to make about this bound is that it is roughly linear in p. Okay. And this shouldn't be surprising, because we have to write down a polynomial whose degree is roughly linear in p. We have to write down h at some point. So: roughly linear in p. And this is a sort of starting point. We're going to be comparing all our future algorithms to this. Okay.
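(Just to have the Step 1 estimate in one place, here it is restated compactly; this is nothing more than the computation I just did at the board, with deg f = 2g + 2.)

```latex
% Step 1: expanding h = f^{(p-1)/2} by repeated squaring with Kronecker substitution.
\deg h = \tfrac{p-1}{2}\,\deg f = O(gp), \qquad
B' = O\bigl(gp \cdot \log(gp \cdot p)\bigr) = O\bigl(gp \log(gp)\bigr),
\qquad
\text{cost of Step 1} = O\bigl(B' \log B'\bigr) = O\bigl(gp \,\log^{2}(gp)\bigr).
```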
Okay. Let me talk quickly about step 2, which is computing this inverse characteristic polynomial. So, step 2. So this was computing this guy. Well, I don't really need to write mod p, right? Because this is already a matrix over F_p. Okay. So for this, now, I'm not a matrix multiplication guy. I'm an integer and polynomial multiplication guy, right? So this is not my area of expertise, but I'll tell you the little that I do know. So let's write omega for the exponent of matrix multiplication. So what this means is we can multiply d by d matrices using O(d to the omega) field operations in K. So if we have matrices defined over K, we can use d to the omega field operations. I should warn you, in the literature there are some slightly different definitions of omega floating around. Sometimes people want this to hold for omega plus epsilon for any epsilon bigger than zero, or something like that. I'll just stick with this simple definition. So the classical algorithm, if you just multiply out the matrices the way that you learned in, you know, first-year linear algebra, gives omega equal to three. Okay. You have to do this and you have to do this and so on. Strassen showed, in I think 1969 from memory, that you can get omega equal to log 7 over log 2, which is about 2.81. It's a really short paper. It's like two pages. It's just a bunch of formulas. He says do this, this, this, this, this. And that's what you get. In terms of, I think, impact per page, it's really very hard to find another paper that compares. Yeah, question. So by field operation, I'm being a little bit relaxed about this: a field operation is an addition, subtraction, multiplication, or division, a single one of those operations in K. I mean, you can do things more carefully, but let's just leave it at that for now. So this has been improved a bit since then. There's been a lot of work, as you can imagine. Because once you know that three is not the right answer... you know, it's really, really quite incredible. This, to me, is much more incredible than all the integer multiplication stuff. I think this is not well understood yet. The current record is omega is, I don't even have the whole number written down. I have 2.372... And there are frequently new results, and they're fighting about the eighth decimal place or something now. It's really amazing. Some people think the answer should be two, or two plus epsilon. So you can basically compute the product as fast as you can write it down. I have no insight into this. These algorithms are very complicated. Okay. Let me just tell you a theorem, which I actually don't know the details of the proof of. So, theorem: you can compute the inverse characteristic polynomial of a matrix in O(d to the omega) operations in K. So this is in the notes. This is proposition 2.5.2. So maybe I should say a word or two about this. So this is, I can't remember the name now, it's in the notes. This is quite a recent result. This is only from the last couple of years. I think this is even a post-COVID result. So you see, computing a characteristic polynomial, I mean, this isn't really a hard problem. There are pretty simple algorithms that will get you maybe d to the fourth, I think. You can get down to d cubed with a bit more work. The algorithms start getting complicated. What's really remarkable about this result, there are a few things quite amazing about it. First of all, it's deterministic. There have been probabilistic algorithms that do this. This is a deterministic algorithm. Secondly, it really does get you d to the omega. There's no extra epsilon. There are no log factors.
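(Backing up to Strassen's two-page paper for a second, since it really is just a handful of formulas, here they are for a single 2-by-2 product: seven multiplications instead of eight. Applied recursively to matrix blocks, this is exactly where log 7 over log 2 comes from; for scalars, as written here, it of course buys you nothing.)

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices with 7 multiplications (Strassen, 1969).
    Applied recursively to blocks, this gives omega = log2(7), about 2.81."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    m1 = (a11 + a22) * (b11 + b22)
    m2 = (a21 + a22) * b11
    m3 = a11 * (b12 - b22)
    m4 = a22 * (b21 - b11)
    m5 = (a11 + a12) * b22
    m6 = (a21 - a11) * (b11 + b12)
    m7 = (a12 - a22) * (b21 + b22)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]
```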
So, back to the characteristic polynomial theorem: it really is on the nose. It's a constant multiple of the time to multiply a matrix. They reduce the problem to matrix multiplication and some polynomial arithmetic and some other stuff. So it's a bit of work. It's complete overkill for our purposes. I mean, in the situation we're talking about, the genus might be three. It's a three-by-three matrix. So for a three-by-three matrix, you don't care. You just go and calculate it. But for theoretical purposes, it's nice to know that you can do that. It's the same as the n log n. You can't actually run the algorithm, but it's nice to know you can do it. Maybe that's all I should say about that. Let's figure out the cost of step two. Each operation in F_p takes time (log p) to the 1 plus epsilon. I should say what this means. So whenever I write a bound like this with an epsilon, I mean this holds for any epsilon greater than zero, and the big-O constant might depend on epsilon. That's fine. So where am I getting this from? So let's just look at this for a second. So in F_p, you represent elements of F_p by integers between zero and p. So if you want to add two of them, that takes linear time. That's O(log p). If you want to multiply them, you'll get log p times log log p. That's for multiplication. And even this takes a few steps: you have to first multiply the two residues and then you have to divide by p and take the remainder. And if you look in section two, you'll see some explanation of why that takes the same time as multiplying numbers of that size, which is log p log log p. And then in F_p you might need to do some divisions. Well, there are some fancy algorithms for the extended Euclidean algorithm, which is what division is in F_p; the naive way would give you log squared p, but the fast versions are again essentially linear in log p. So there's an awful lot of algorithms going on in the background here, which I don't really want to get into, but you can read about them in section two. And the point is that all of these are basically linear in log p. Log p to the 1 plus epsilon is more than enough to cover that. Okay. So where was step two? Yeah, to calculate this guy. Now, if I just use this bound for the number of operations in K, then I get: the step two cost is O of g to the omega times log p to the 1 plus epsilon, because the matrices are g by g. Okay, so we've got this term coming from step two, and we have this term coming from step one. So as I said before, step one is roughly linear in p, and step two barely depends on p at all. I guess the power of g here is a bit worse. But certainly if p is much larger than g, which is the case I mainly care about, then we're going to spend all our time in this step here, computing this big power of f. That's not that surprising. And we'll spend almost no time doing the characteristic polynomial. But if you did try to apply this algorithm when p is very small, if p is three, for example, to a large genus curve over F_3, then you would actually spend a lot of your time in this characteristic polynomial step. Okay, but I don't really care about that regime very much. Okay. So one thing you can see from all this is that these bounds are all polynomial in g, which is what I promised we would do. Remember the enumeration algorithm is exponential in g, and this is polynomial in g. And we've managed to compute the L-polynomial modulo p.
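(So, collecting the two bounds in one place, with the same caveats as above about the model and the epsilon:)

```latex
% Expansion strategy, total cost (Step 1 + Step 2):
O\bigl(gp \,\log^{2}(gp)\bigr) \;+\; O\bigl(g^{\omega} (\log p)^{1+\epsilon}\bigr),
% dominated by Step 1 whenever p is much larger than g.
```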
So this algorithm is in some ways genuinely more powerful than the enumeration algorithm. But we're only getting the L-polynomial modulo p, not in Z[t]. Okay. Okay. So guys, that is what I wanted to talk about in section four. Now I've sort of got nine minutes. Do I have nine minutes, Bjorn? Or should I stop here? Because I could start a little bit on section five. Should I go on, or should we stop, if you like? You could take, yeah, you could take, I mean, if you want to have time for questions, then probably stop around four or five minutes from now. Okay, let's take a vote. Should I start on section five, or should we call it a day? Section five. All right. Should I lecture for another three hours? Yeah. Okay, let's just take it. This is not a democracy. This is not a democracy. So I'm in a temporary position of power. There are some of us here who have more power than others. Yeah, yeah, that's right. You know, Bjorn, I hope we have a recording of you saying this is not a democracy. Because I mean, I've been having a lot of interesting conversations with various US citizens about that question. Okay. All right, section five. Okay, section five: recurrences for polynomial powers. So we'll just have a little peek at this. Okay, so let me remind you what our situation is. So we have our polynomial f, which I'm going to write as f_0 plus f_1 x plus, and so on, up to f_d x to the d. So I'm going to introduce this symbol d for the degree, just because I'm going to be sick of writing 2g + 2 everywhere. Okay, so from now on, that's what d is. Okay, and this is in F_p[x]. And I'm going to write h: h is now going to be f to the m. So m is my notation for (p-1)/2, because I'm sick of writing (p-1)/2 all the time. Okay, and remember what our goal is: our goal is to compute certain coefficients of h. Okay, we've got this big polynomial, we only want a few coefficients. And what I want to do in this section is develop some recurrences for the coefficients of h. And then we're going to use these recurrences to throw some interesting algorithms at the problem of computing these coefficients. Now, I'm also going to have to make an assumption here, which is slightly annoying. I'm going to assume that the constant term f_0 is not zero. Okay, that's the first time I've had to mention this annoying condition. You'll see in a few moments, or maybe tomorrow morning, why I need this condition. You don't really need it, and I explain somewhere in the notes how you can get around it, but it just makes life so much easier. So I've decided for this course just to assume this. Okay, now I'm going to introduce a certain differential operator. So del is going to be x times d/dx. So this is a degree-preserving differential operator, right? So if I apply it to x to the k, I get k times x to the k. And I am going to apply it to f to the m, which is h. And I'm going to do a little bit of first-year calculus, freshman calculus, that's the phrase, isn't it, freshman calculus, I knew I'd come up with it: del of f to the m is m times del f times f to the m minus one. Now these are just formal derivatives, right? We're just doing formal derivatives of polynomials. There's no actual calculus here. And then I'm going to multiply both sides by f. And here's what I get: f times del h equals m times del f times f to the m, and f to the m, that's h. So f times del h equals m times del f times h. Okay, this is very, very important. This is a differential equation satisfied by h. This is a DE satisfied by h. So we're thinking here of f as being fixed and m as fixed, and this is some equation satisfied by h.
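(If you want a quick sanity check of that identity, here is one with sympy; the particular f and m are arbitrary choices of mine, and the check is done over Z rather than F_p, which is fine because the identity is a formal one.)

```python
from sympy import symbols, diff, expand

x = symbols('x')
f = 5 + x + 3*x**4 + x**6            # arbitrary f with nonzero constant term
m = 5                                 # stands in for (p - 1)/2
H = f**m

# del = x * d/dx; check the DE  f * del(H) = m * del(f) * H
lhs = f * x * diff(H, x)
rhs = m * x * diff(f, x) * H
assert expand(lhs - rhs) == 0
```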
And I think I'll just write down one more line. So what you can do in this equation is equate coefficients on both sides. Right, you can look at what the coefficient of x to the k is on the left and on the right. And you can do a little bit of algebra. So there's a problem in the notes that asks you to do this. And here's what you get. You get a formula for h_k, the k-th coefficient of h, in terms of the previous d coefficients. So here's the formula: h_k is 1 over f_0 times k, times the sum, over j from 1 to d, of (j/2 minus k) times f_j times h_{k-j}. And this is all happening, of course, modulo p. So by the way, this half is really (p+1)/2, right? It's coming from this m. But I'm only looking at it mod p, so I just get a half. So what this formula is doing is expressing h_k as a linear combination of the previous d values, the previous d coefficients. But the coefficients here actually depend on k. So it's a bit like the Fibonacci sequence, right, where each term is a sum of the previous two. But there the coefficients are constant; here the coefficients depend on k. And here you can see why I wanted f_0 not to be 0: I can't divide by f_0. And this only works when k is not 0 mod p. If k is 0 mod p, we have a problem. So I think I will stop there and I will talk more about this tomorrow, and we'll figure out, I mean, this is really the serious problem that we have to deal with. And I'll talk about a few issues to do with this and how we get around it. All right, so questions, we've got a minute or two. Any questions? Thank you.
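(And for anyone who wants to see the recurrence in action before tomorrow, here is a little sketch that checks it against a direct expansion of h, with p = 11 and an arbitrary f of my own choosing; it only runs k up to p - 1, precisely to dodge the k congruent to 0 mod p problem just mentioned. It needs Python 3.8+ for modular inverses via pow.)

```python
# Check h_k = (1/(f0*k)) * sum_{j=1..d} (j/2 - k) * f_j * h_{k-j}  (mod p)
# against a direct expansion of h = f^m, for k = 1, ..., p-1.
p, g = 11, 2
d = 2 * g + 2
f = [5, 1, 0, 0, 3, 0, 1]                  # arbitrary, with f[0] != 0
m = (p - 1) // 2
half = pow(2, -1, p)                       # 1/2 mod p, i.e. (p+1)/2

# Direct expansion of h = f^m over F_p.
h = [1]
for _ in range(m):
    new = [0] * (len(h) + len(f) - 1)
    for i, a in enumerate(h):
        for j, b in enumerate(f):
            new[i + j] = (new[i + j] + a * b) % p
    h = new

# The recurrence, seeded with h_0 = f_0^m.
hk = [pow(f[0], m, p)]
for k in range(1, p):
    s = 0
    for j in range(1, d + 1):
        if 0 <= k - j < len(hk):
            s += (half * j - k) * f[j] * hk[k - j]
    hk.append(s * pow(f[0] * k, -1, p) % p)

assert hk == h[:p]
```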