So, I'm going to open by clarifying that in this work we haven't actually broken a 128-bit secure elliptic curve discrete log problem. Sorry to disappoint anyone if the title gave that impression. This talk is really about the discrete log problem in finite fields of small characteristic which arise from pairings on supersingular binary curves. Here's a quick outline of the talk. I'll first give some background and motivation for this problem. I'll then talk about our contributions, and I'm going to finish off with a recent result of ours; it's not in this paper, but it actually grew quite naturally from the techniques in this paper, so hopefully that will be interesting and relevant. So what are supersingular curves? Well, a supersingular elliptic curve in characteristic p is just one which has no points of order p over an algebraic closure, and more generally a supersingular curve of higher genus is one whose Jacobian is isogenous to a product of supersingular elliptic curves. And here are a couple of examples over F2. It's a basic property of supersingular curves, and of these two in particular, that for any odd prime p, if we consider the curve over F2 to the p, then the group orders are just given by these explicit expressions here. So the history of supersingular curves in cryptography is quite an interesting one. Back in the early days of ECC, point counting algorithms were not as efficient as they are today, and so having a group order given explicitly like this made these curves very attractive for early adopters of ECC. However, around 1993, thanks to Menezes, Okamoto and Vanstone, it was realised that one can take an elliptic curve DLP and map it via a pairing to a finite field DLP in an extension of the base field over which the elliptic curve is defined.
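As a toy illustration of such an explicit group order (this example is my own, not from the slides): for the supersingular curve y^2 + y = x^3 over F2 to the p with p odd, the order is 2^p + 1. A minimal brute-force check over F2 to the 5, with field elements as integer bitmasks and an assumed choice of irreducible modulus x^5 + x^2 + 1:

```python
# Toy check: #E(F_{2^p}) = 2^p + 1 for the supersingular curve
# y^2 + y = x^3 over F_{2^p}, p odd.  Here p = 5.
P = 5
MOD = 0b100101  # x^5 + x^2 + 1, an irreducible polynomial over F_2 (assumed choice)

def gf_mul(a, b):
    """Multiply two F_{2^5} elements represented as bitmasks."""
    r = 0
    for i in range(P):            # carry-less (XOR) schoolbook multiplication
        if (b >> i) & 1:
            r ^= a << i
    for i in range(2 * P - 2, P - 1, -1):  # reduce modulo MOD
        if (r >> i) & 1:
            r ^= MOD << (i - P)
    return r

count = 1  # the point at infinity
for x in range(1 << P):
    rhs = gf_mul(gf_mul(x, x), x)          # x^3
    for y in range(1 << P):
        if gf_mul(y, y) ^ y == rhs:        # y^2 + y = x^3
            count += 1

print(count)  # 2^5 + 1 = 33
```

With explicit orders like this, an early implementer needed no point counting at all, which is exactly the attraction described above.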
And they also knew that supersingular curves have low embedding degree, and as a result the first lesson for the crypto community regarding the use of these curves was that they are bad. Just to give an example, this curve E1 has embedding degree 4, and so you have an easy attack, well, not necessarily easy, but you have an attack which is subexponential. And so these curves fell out of fashion for about seven years, until a series of very nice results, starting with papers by Joux, by Boneh and Franklin, and by Sakai, Ohgishi and Kasahara, showed that there are many constructive applications of pairings in cryptography. And it was reasoned that, okay, you may have an attack in the embedding field of a curve, but as long as the DLP in this field is still hard, then this is an acceptable state of affairs. So the second lesson to take from the history of these curves is basically: provided the applications are good enough, we can safely ignore lesson one. And so lots of implementers spent a lot of time, myself included, making these curves, and pairings over these curves, as efficient as possible. And it's not only elliptic curves, but also supersingular hyperelliptic curves, and even supersingular abelian varieties, as proposed by Rubin and Silverberg, so there's really a lot of work that went into this. However, last year this was all thrown into complete turmoil, I guess, thanks to a series of high-impact results. Before I describe these, I should just mention that the DLP computation basically uses an algorithm known as index calculus, which consists of two parts. In the first part one defines a factor base and attempts to generate relations between these elements and then find their logs. The second part is to take an arbitrary element of the field and attempt to write it as a product of elements of smaller and smaller degree, until all of the elements in the product are in the factor base.
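The first, relation-generation stage is easiest to see in a tiny prime-field analogue (the talk is about small-characteristic extension fields, but the two-stage structure is the same; parameters below are my own toy choices). Random powers g^k that happen to be smooth over the factor base give linear relations among the unknown logs; here we also compute the true logs by brute force just to verify the collected relations:

```python
# Toy stage 1 of index calculus in F_p^*: random powers g^k that factor
# over a small factor base yield linear relations among the unknown logs.
import random

p, g = 1019, 2                      # assumed toy parameters; 2 generates F_1019^*
base = [2, 3, 5, 7, 11, 13]         # factor base

# For the toy we can get the true logs by brute force, to check the relations.
log = {}
acc = 1
for k in range(p - 1):
    log[acc] = k
    acc = acc * g % p

random.seed(1)
relations = []
while len(relations) < 3:
    k = random.randrange(1, p - 1)
    v, exps = pow(g, k, p), {}
    for q in base:                  # trial division over the factor base
        while v % q == 0:
            v //= q
            exps[q] = exps.get(q, 0) + 1
    if v == 1:                      # g^k smooth: k = sum e_q * log(q) mod p-1
        relations.append((k, exps))

for k, exps in relations:
    assert k % (p - 1) == sum(e * log[q] for q, e in exps.items()) % (p - 1)
print(f"collected {len(relations)} valid relations")
```

In a real attack one collects slightly more relations than factor base elements and solves the resulting linear system modulo the group order; the descent (stage two) then rewrites the target in terms of factor base elements.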
This second part is known as the descent, and previously both stages took subexponential time. So the first result was in February last year, by myself and three colleagues at University College Dublin, namely Faruk Göloğlu, Gary McGuire and Jens Zumbrägel. What we showed is that the first stage of index calculus can actually be done in polynomial time, and also that the hardest part of the descent, which is the elimination of degree-two elements, can be done in polynomial time as well. Around the same time, Antoine Joux was having very similar ideas. He came up with a polynomial time relation generation method for degree-one elements, which was different to ours but essentially isomorphic to it. He also had a polynomial time method for eliminating degree-two elements, which worked in batches, so slightly different to ours. But most importantly, he had a descent method for eliminating the other small-degree elements, which are the hardest part of the descent after degree two. And with this he was able to get a heuristic L(1/4) algorithm. So that was a really big breakthrough. Then a couple of months after that, we had the quasi-polynomial algorithm due to Barbulescu, Gaudry, Joux and Thomé. Here they showed the descent is quasi-polynomial, an even bigger breakthrough. As a result of this, the third lesson for the use of supersingular curves in crypto is that small characteristic supersingular curves really are bad for cryptography. I think Stephen Galbraith wrote in his blog, "type one pairings are dead". It's quite dramatic, but that was the way it was. Furthermore, there were a lot of DLP records set last year which support the validity of these theoretical advances, so they're not just purely theoretical. And the last one of these was actually set by the team behind this paper, in a 9000-bit field.
So this raises an obvious question, which is: if the small characteristic finite field DLP is dead, then why should anybody bother studying it? To extend the metaphor a little, the short answer is that yes, it may be dead, but it's not quite buried. A slightly longer answer is deserved, though. The first point I would make is that none of the records I just showed you attack fields which arise from parameters in the cryptographic literature. It seems mad to have all these new tools and algorithms and technology, to have all these parameters in the literature, and yet to declare everything dead without actually trying to attack a single one of them. So that was the first motivation. A second one is that all the DLP records I showed you use very special extension degrees, namely those which permit a Kummer extension or a twisted Kummer extension. These are actually the easiest to break relative to fields of the same bit length, because you can reduce the size of the factor base and the descent also becomes easier. So this leaves another question, which is: how hard are the DLPs in the literature? Another team of researchers studied this very question, and when we looked at their paper we realised there were certain inefficiencies in their analysis that we could significantly improve upon. That was a real motivation for this paper. And a final one is a bit of a meta consideration for this type of work, which is that sometimes by solving a particular problem instance you can gain theoretical insights which you wouldn't have had just by thinking about things purely in the abstract. I'll give you an example of that at the end of the talk. So the team I just mentioned, who looked at this question first, was Adj, Menezes, Oliveira and Rodríguez-Henríquez.
And what they did was to take the techniques from Joux's L(1/4) paper and the quasi-polynomial paper and analyse the concrete security of several DLP instances which arise from the pairing-based literature, and which were thought to be, or were designed to be, 128-bit secure. Concrete security just means you take all the different algorithms which you need to perform the computation, you implement them, and then you don't bother running them. The reason you don't run them is because it would simply take too long. Instead you invoke several standard heuristics, and with those you can get an estimate of how difficult it is to solve a given DLP. So they looked at three fields in this paper. The first one arises from the elliptic curve over F3 to the 509, which has embedding degree 6. What they showed is that rather than 128 bits of security, using these techniques it only has about 74 bits of security. The unit of cost they use is this M_r; it's a quasi-notion of bit security, where M_r is just the cost of performing one multiplication modulo the subgroup order r of the elliptic curve in question. The second one they looked at is this middle one here, a genus 2 hyperelliptic curve over F2 to the 367, which has embedding degree 12. Again, rather than 128 bits of security, it only has about 95 bits. And the final one is an elliptic curve over F2 to the 1223, which has embedding degree 4. Interestingly, they concluded that all of the new techniques do not reduce its security, so it apparently still has 128 bits of security. And, you know, this is perfectly feasible, because while these new techniques are asymptotically superior to the original L(1/3) function field sieve, we don't actually know where the crossover is. So this could still be secure, and people would still like to use it. I wouldn't recommend that. But there are several inefficiencies in their analysis; I'll just point out one here.
So you'll notice that each of these cases uses k = 2, which just means there's a quadratic extension of the target field. The reason they do this is because you need to take an extension of degree at least two in order to generate the relations between the factor base elements. But what they then did was assume that the target discrete log lives in this quadratic extension, which it doesn't. So they're essentially trying to solve a problem in a field twice as big as it should be. That was just one inefficiency. So we looked at this and came up with a few observations, techniques and principles, and basically our contribution in this paper is to give a plan to follow if you want to attack a given DLP. I'm not saying these are the optimal choices, but they're certainly better than the ones that were there before. The first observation is that if you look at the classical descent and the L(1/4) descent, a basic property is that if you use a smaller q in your field representation, then you get a faster descent. If you use the original field representation, you just have to find an h0 and an h1 defined over Fq to the k such that this first polynomial here, h1(X) times X to the q minus h0(X), has an irreducible degree n factor; since that polynomial has degree only slightly above q, you need to choose a q larger than n, roughly, in order to represent such a field. What we did is choose a subtly different polynomial, which just brings the q to the inside. This polynomial then has degree q times the degree of h1, so to represent a field of degree n we can choose the degree of the h_i to be 2 or more, which gives us a smaller q, and hence a faster descent. So that's the first thing we did. A second thing we proposed is something we called the principle of parsimony, which basically means: do the minimum amount of work you can in order to solve the problem. In particular, the descent should always take place in the target field.
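Here is a toy illustration of the two representations, with q = 2 and k = 1 and h_i of my own choosing (the real attacks use much larger q over extension fields). With h1 = x^2 + x + 1 and h0 = 1, the "q on the inside" polynomial X·h1(X^q) - h0(X^q) has degree q·deg(h1) + 1 and here acquires an irreducible factor of degree 4 = q·deg(h1), so a degree-4 field is represented with q = 2; the original shape h1(X)·X^q - h0(X) has degree only q + deg(h1), which is what forces q to be about as large as n:

```python
# Polynomials over F_2 as integer bitmasks (bit i = coefficient of x^i).
def pmul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pdivmod(a, b):
    q, db = 0, b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        s = a.bit_length() - 1 - db
        q ^= 1 << s
        a ^= b << s
    return q, a

def compose_q(h, e):
    """h(X^q) for q = 2**e, by substituting x^i -> x^(i*q)."""
    r, i = 0, 0
    while h:
        if h & 1:
            r ^= 1 << (i << e)
        h >>= 1
        i += 1
    return r

e = 1                        # toy q = 2**1 = 2 (assumed small parameters)
h1, h0 = 0b111, 0b1          # h1 = x^2+x+1, h0 = 1 (assumed choices)

# "q on the inside": g(X) = X*h1(X^q) + h0(X^q)   (characteristic 2, so - is +)
g = pmul(0b10, compose_q(h1, e)) ^ compose_q(h0, e)

f = 0b11001                  # candidate factor x^4 + x^3 + 1
assert pdivmod(g, f)[1] == 0          # f divides g

# f is irreducible: no factor of degree 1 or 2 divides it
for d in (0b10, 0b11, 0b111):         # x, x+1, and the only irreducible quadratic
    assert pdivmod(f, d)[1] != 0

print("degree-4 field represented with q = 2, since deg f =", f.bit_length() - 1)
```

The same shape scales up: with deg(h_i) = 2, a degree-n extension needs q only about n/2 instead of about n, and every descent step benefits from the smaller q.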
You shouldn't use this quadratic extension, or cubic extension. Only when you can't work in the target field, for instance when you're solving for the logs of the factor base elements, do you make an extension of the base field. So that's necessary in some circumstances, but you shouldn't do it if you don't have to. The third thing is the observation that if you do have to extend the base field in order to get the factor base logs, then you can actually use this to reduce the cost of the descent. If you have a quadratic extension, for instance, then during the descent, whenever you have an irreducible element of even degree, you can just factorise it over the quadratic extension, thereby halving the degree automatically. So that speeds things up, and we used this judiciously in our descents and our estimates. And finally, it's actually possible to get the factor base logs without taking an extension at all, just using k = 1. This also means that we can use the L(1/4) techniques to eliminate elements of higher degree than previously. As a result, we didn't actually need to use any part of the quasi-polynomial algorithm at all; it's asymptotically very efficient, but it's not actually necessary in these cases. So what we showed is that this elliptic curve over F2 to the 1223, which the previous team of researchers concluded still had 128 bits of security, actually has only 59 bits. And this genus 2 hyperelliptic curve over F2 to the 367, rather than 95 bits of security, has only about 48 bits. The time on the right there is the time actually required to solve the discrete log problem, and we did that in practice. So let me give a couple of details. This is the genus 2 hyperelliptic curve, which is supersingular.
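The degree-halving trick just mentioned can be seen in miniature over F2 versus F4 (a toy stand-in for the F2^12 versus F2^24 situation in the actual computation; the example below is mine). The quartic x^4 + x + 1 is irreducible over F2, but over F4 = F2(w) with w^2 = w + 1 it splits into two conjugate quadratics, i.e. half the degree:

```python
# F_4 = {0, 1, w, w+1} encoded as 0,1,2,3; addition is XOR, and w^2 = w+1.
MUL = [[0, 0, 0, 0], [0, 1, 2, 3], [0, 2, 3, 1], [0, 3, 1, 2]]

def mul(a, b):
    return MUL[a][b]

def divides(quad, quartic):
    """Does the monic quadratic (coeff list, low->high) divide the quartic?"""
    r = quartic[:]                      # long division over F_4
    for i in range(4, 1, -1):
        lead = r[i]
        if lead:
            for j, c in enumerate(quad):
                r[i - 2 + j] ^= mul(lead, c)
        r[i] = 0
    return all(c == 0 for c in r)

target = [1, 1, 0, 0, 1]                # x^4 + x + 1, irreducible over F_2
factors = [(c, d) for c in range(4) for d in range(4)
           if divides([d, c, 1], target)]
print(factors)  # [(1, 2), (1, 3)]  i.e. x^2 + x + w and x^2 + x + (w+1)
```

During the descent this means an even-degree irreducible over the base field can be traded, for free, for elements of half the degree over the quadratic extension, whose factor base logs are already known.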
We take this representation for the degree 367 extension, and notice we're using the alternative polynomial here, where q is 2 to the 6 rather than 2 to the 12, which makes the descent much faster. I guess the most interesting thing in this solution of the DLP was the small-degree elimination. So here's a flowchart which tells you exactly how we eliminated irreducible elements of a given degree, over F2 to the 12 on the bottom row and, when we needed to, over F2 to the 24 on the top row. I won't explain it in all its detail, but I'll just point out a couple of things. If you have a degree 1 element over F2 to the 12, we can just lift it to a degree 1 element over F2 to the 24, which is where we solve the factor base logs, so those logs are automatically known. If we have a degree 2 element which is irreducible over F2 to the 12, we just factorise it over F2 to the 24; it splits, so we have the logs of those elements as well. We can perform the same trick for degree 4 down to degree 2, just factoring over the extension, and we could have done it for 6 down to 3, but we didn't, because we found it faster to descend along the bottom row. For degree 8, we go straight to 4 and can then descend along the top row. So there are lots of nice tricks to get everything through, all the probabilities balance out, and it's all very nice. We ended up solving this in about 50,000 core hours. And I should just mention that a couple of days before we announced this, the team of Adj et al. solved the first instance of a DLP arising from parameters in the literature, but it was only in a 1,303-bit field, so it was more a proof of concept than really an attack on something that was meant to be 128-bit secure. Now I want to mention a recent result of ours, which really grew from that small-degree elimination flowchart I just showed you. So assume we're going to try and solve a discrete log in a field FQ to the kn.
We consider this as a degree n extension of FQ to the k, and we only have two tools available. The first, we'll assume, is the polynomial time algorithm for finding the logs of the degree 1 elements; assume that's done. That was in our CRYPTO paper and also in Antoine Joux's paper. The second thing is to assume we can take a degree 2 element and eliminate it, which basically means writing it as a product of degree 1 elements in polynomial time; that's also in our CRYPTO paper. And that's all we're going to use. So suppose we have a degree 4 element. How are we going to eliminate this? Well, we can use the trick from the previous slide. We just take a degree 2 extension; this gives us a product of two degree 2 irreducibles. We can then apply the polynomial time elimination from degree 2 to degree 1. But we don't know the logs of the degree 1 elements over the quadratic extension. So what do we do? We just take the norm back down to the original subfield. So we eliminate the degree 2 elements over the quadratic extension and apply the norm; that gives us back the original degree 4 element, while the degree 1s give us either degree 2 elements or the square of a degree 1. Then we apply the degree 2 to 1 elimination again, and we end up expressing the degree 4 irreducible as a product of about q squared degree 1 irreducibles. We can do the same trick for an irreducible of degree 8: we take a degree 4 extension, apply the elimination from degree 2 to degree 1, take the norm back down to the quadratic subfield, and zigzag our way back down until we have an expression in terms of q cubed degree 1 elements for the degree 8 irreducible. The same works for degree 16, and for any power of 2 we like: we just go up and then zigzag our way back down, eliminating and taking norms, eliminating and taking norms. So we end up expressing any irreducible of degree 2 to the e as a product of about q to the e degree 1 elements.
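Just to make the bookkeeping behind this zigzag explicit (the notation N(e) is mine, not from the slides): writing N(e) for the number of degree 1 elements needed to express a degree 2^e irreducible, each level of the descent multiplies the count by roughly q, since every degree 2 elimination produces about q degree 1 elements:

```latex
% N(e) = number of degree-1 elements expressing a degree-2^e irreducible
\begin{aligned}
N(1) &\approx q
  && \text{(one degree-2 to degree-1 elimination)}\\
N(e) &\approx q \cdot N(e-1)
  && \text{(split over an extension, eliminate, take norms back down)}\\
\implies N(e) &\approx q^{e}.
\end{aligned}
```

Since the depth e is logarithmic in the degree being eliminated, this is the source of the quasi-polynomial behaviour described next.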
So this gives a quasi-polynomial algorithm, and it's completely different to the one from Barbulescu et al. The problem is that it only applies to irreducible elements whose degree is a power of 2. We handle an arbitrary element just by adding to it a random multiple of the field-defining polynomial, chosen so that the result has degree 2 to the e, and we choose 2 to the e to be greater than 4n so that we can apply a theorem due to Wan; it's a Dirichlet-type theorem for irreducible polynomials in arithmetic progressions. Then we can find an irreducible element and apply this descent. And so we get a new QPA in fixed characteristic. Interestingly, because this descent method is very algebraic, we can actually argue that it relies on no heuristics whatsoever: as soon as we have a field representation where the degree of the h_i is at most 2, then it works. And so we get a theorem. I've run out of time, but I'll state it quickly: for all primes p, there exist infinitely many extensions Fp to the n for which the DLP in this field can be solved in quasi-polynomial time. And this is the preprint, which will be available very soon. Thanks for your attention.