Yeah, thank you for your introduction. This is my joint work with Razvan. And thanks for coming here this early in the morning. I will start my talk.

My talk focuses on the DLP, the discrete logarithm problem, which is a very fundamental problem in public-key cryptography. It is the problem of finding the discrete log a, given g and g^a, where g is a generator of a cyclic group. It is very well known that solving the DLP over a generic group of order N takes on the order of square root of N group operations, which is exponential. However, if the DLP is defined over a finite field, we can solve it in subexponential time. For this we have index calculus type algorithms such as the number field sieve (NFS) and the function field sieve (FFS). The complexity of the DLP usually differs by the size of the characteristic p: if the DLP is defined over a small characteristic field, like a binary or ternary field, then we have a quasi-polynomial time algorithm based on the function field sieve method. On the other hand, if the DLP is defined over a larger characteristic, then the best known algorithms are based on the number field sieve. In this talk we focus on the NFS, which targets larger characteristic fields.

Why do we consider this problem, the DLP over a finite field of large characteristic? Because it is important in pairing-based crypto. Usually a cryptographic pairing is a map from two elliptic curve groups, which are defined over a prime field, to a finite field of extension degree n. We call this extension degree the embedding degree. The security of pairing-based crypto relies on the hardness of the DLP over these two elliptic curve groups and the DLP over the finite field F_{p^n}. So when we determine the key size of pairing-based crypto, we want the DLP over the elliptic curve groups to be as hard as the DLP over the finite field F_{p^n}. The most interesting cases in pairing-based crypto occur when the embedding degree is between 2 and 30. In particular, at the 128-bit security level we have a very efficient construction from BN curves, which have embedding degree 12. So I think it is important to consider the DLP over F_{p^n}, where n comes from the embedding degree of a pairing.

Let's look at a brief history of the NFS and the FFS. The NFS to solve the DLP over prime fields was first introduced by Gordon and Schirokauer independently. Then we had the function field sieve method to solve the DLP over binary fields, by Adleman. Then the NFS was extended to solve the DLP over F_{p^n} for large p, by Schirokauer and by JLSV. Schirokauer suggested an algorithm that uses a representation of F_{p^n} as a ring of algebraic integers Z[y]/(h(y)) modulo p, which we call the tower number field sieve (TNFS). Later, JLSV suggested another approach, obtained by simply modifying the polynomial selection step in the NFS for prime fields. In this talk, we obtain our goal by simply combining TNFS and JLSV, which we call exTNFS, the extended tower number field sieve.

Now let's look at the previous state of the complexity of the DLP in finite fields. It is very common in the NFS context to use the L_Q notation to express the complexity, and sometimes I will just use the simple L(c) notation, which denotes L_Q(1/3, (c/9)^{1/3}); we only change c here, and L(c) is increasing in c. As you can see from this table, every variant of NFS has a complexity that differs by the size of the characteristic p, where we say the characteristic p is medium compared to the total field size Q when p = L_Q(l_p) for some 1/3 < l_p < 2/3.
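[Editor's illustration] To make the L(c) shorthand concrete, here is a minimal sketch that evaluates the constants appearing in the talk's complexity table. The 3072-bit field size is my own arbitrary example, and the hidden o(1) term is dropped, so the outputs are only indicative, not security estimates.

```python
import math

def L(Q_bits, ell, c):
    """L_Q(ell, c) = exp((c + o(1)) * (log Q)^ell * (loglog Q)^(1 - ell)),
    with the o(1) term dropped, so the numbers are only indicative."""
    log_q = Q_bits * math.log(2)          # natural log of the field size Q
    return math.exp(c * log_q**ell * math.log(log_q)**(1 - ell))

def L_c(Q_bits, c):
    # Shorthand used in the talk: L(c) := L_Q(1/3, (c/9)^(1/3)).
    return L(Q_bits, 1/3, (c / 9) ** (1 / 3))

# Rough cost in bits (log2 of the L-value) for a 3072-bit field,
# for the constants c from the talk: 96, 64, 48, 32.
for c in (96, 64, 48, 32):
    print(f"L({c}) ~ 2^{math.log2(L_c(3072, c)):.0f}")
```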
We say that p is large if l_p > 2/3, and that p is in the boundary case if l_p = 2/3. As you can see from this table, for every NFS variant the complexity in the medium characteristic case was always larger than the complexity in the large characteristic case. Even more strangely, we had the best complexity of the NFS in the boundary case. This phenomenon was the same for every variant, like the multiple number field sieve or the special number field sieve: the complexity of each variant in the medium case was larger than in the other cases. So you can see easily from this graph that medium characteristic is the hardest case of the DLP.

Now we can state our result. Simply speaking, we just get rid of this anomaly in the complexity of the DLP. In other words, we show that the DLP in the medium characteristic case is easier than the DLP in the large characteristic case when the extension degree is composite with a factor of appropriate size. As you can see from this table again, our result reduces the complexity here, and it becomes less than the other part. By applying our exTNFS with the JLSV algorithm, we obtain the same complexity in the medium case as in the large characteristic case. And if we further apply our exTNFS with the conjugation method, then the complexity of the DLP in the medium case becomes less than the complexity in the large characteristic case.

To explain our result, let's briefly review the NFS for prime fields. In the NFS for a prime field, we choose two irreducible polynomials f and g that have a common root m modulo p. Then we have this commutative diagram, which is very common in the NFS context: it shows that we can map an integer polynomial to our target field by reducing modulo f and then modulo p, or modulo g and then modulo p. With this diagram, we solve the discrete log as follows. First, we choose the polynomials f and g as I stated before. Then, in the sieving step, we collect many pairs (a, b) such that the norms of a - b*alpha_f and a - b*alpha_g are both B-smooth for a smoothness parameter B, where alpha_f and alpha_g are roots of f and g. Each such pair (a, b) gives a linear relation among the discrete logs of small elements; by small elements I mean, roughly, those whose norms are at most B. Once we have many such relations, we can find the logarithms of the small elements by solving a linear algebra problem. Then, to solve the discrete log of the actual target element, we just express it in terms of the previously computed logs of the small elements. I won't go into further detail here.

To extend this to solve the DLP over a non-prime field, JLSV just modified the polynomial selection step of the NFS. In JLSV, they choose f and g so that they have a common irreducible factor phi of degree n modulo p. Then we have a similar diagram to the one we saw before, but this time our target at the bottom is F_{p^n}, which is defined by phi, the common factor of f and g. Again we sieve polynomials a - b*x, reducing modulo f and p on one side and modulo g and p on the other, and get a relation in F_{p^n}.

On the other hand, in TNFS, the tower number field sieve, we choose f and g exactly as we did in the prime-field case.
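[Editor's illustration] To make the sieving step concrete, here is a minimal runnable sketch, with toy parameters of my own choosing (far too small to be cryptographic): a base-m polynomial selection for a prime field, where p = m^3 + 3 so that f(x) = x^3 + 3 and g(x) = x - m share the root m modulo p, followed by a naive search for doubly-smooth pairs.

```python
import math
from sympy import factorint

def is_B_smooth(n, B):
    # n is B-smooth if every prime factor of n is at most B.
    n = abs(n)
    return n != 0 and all(q <= B for q in factorint(n))

# Hypothetical toy numbers: p = m^3 + 3, so f(m) = 0 mod p and g(m) = 0.
p, m = 1000003, 100
B = 50

# Sieving: collect pairs (a, b) whose norms on both sides are B-smooth.
# For f = x^3 + 3, |N(a - b*alpha_f)| = |a^3 + 3*b^3|;
# for g = x - m,   |N(a - b*alpha_g)| = |a - b*m|.
relations = [(a, b)
             for a in range(-30, 31)
             for b in range(1, 31)
             if math.gcd(a, b) == 1
             and is_B_smooth(a**3 + 3 * b**3, B)
             and is_B_smooth(a - b * m, B)]
print(len(relations), "doubly-smooth pairs; first few:", relations[:5])
```

For instance the pair (5, 1) qualifies here: 5^3 + 3 = 128 = 2^7 and 5 - 100 = -95 = -5 * 19 are both 50-smooth.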
But this time we choose another polynomial h, an irreducible polynomial of degree n that is also irreducible modulo p. Then we can represent our target field F_{p^n} as a ring of algebraic integers Z[iota] modulo p, where iota is a root of h and Z[iota] is a subring of the number field defined by h. We have the same diagram again, but here we replace the base ring Z by the new ring of algebraic integers Z[iota]. Then we sieve a - b*x, where a and b are in Z[iota] instead of Z. If we go through this diagram, we get a relation in F_{p^n}. This is the idea of TNFS.

If you understood these diagrams well, then it is very obvious how to combine TNFS and JLSV. In exTNFS, we target a finite field F_{p^n} where n splits into coprime factors, n = eta * kappa with gcd(eta, kappa) = 1. We would like to represent the target field F_{p^n} as a degree-kappa extension of F_{p^eta}. So we choose h as we did in TNFS for F_{p^eta}, which means h is of degree eta. Then we choose f and g as if we were targeting F_{p^kappa}, not F_{p^n}, using existing methods like JLSV or the conjugation method. We put k to be gcd(f, g) modulo p. Then we get this commutative diagram again, almost the same as in the TNFS case, but at the bottom we have F_{p^n} represented as F_{p^eta}[x]/(k). But remember that we chose f and g over the integers, so k is irreducible over F_p, while we want k to be irreducible over F_{p^eta}. We require the condition gcd(eta, kappa) = 1 to make it automatically irreducible over F_{p^eta}. Then you obtain a relation by taking the reductions in this diagram.

Why does this simple change give a better complexity than before? Because in the complexity analysis of the NFS, the size of the norms plays an important role. Roughly speaking, we showed that the size of the norms in exTNFS for F_{p^n} is the same as the size of the norms in the classical NFS case targeting F_{P^kappa}, so we can deal with a smaller extension degree. This means the complexity of exTNFS for F_{p^n} is the same as the complexity of the NFS for F_{P^kappa}, where P is a prime of the same size as p^eta. So as long as P remains in the large or boundary characteristic range, the cost of exTNFS targeting F_{p^n} is the same as the cost of the NFS targeting the large or boundary characteristic case.

For example, exTNFS with JLSV has a complexity of L(64) instead of L(96) when p is medium. And exTNFS with the conjugation method has a complexity of L(48) instead of L(64). We can also apply exTNFS to the other variants, like the multiple NFS and the special NFS. With the special NFS we use a characteristic p of special form; this mostly occurs in pairing-based constructions like BN curves. In that case, we showed that the special exTNFS has a complexity of L(32) instead of L(64) in the medium case. So we reduce the complexity in the medium case here.

So I think it is better to reconsider the key size of pairing-based crypto. Currently we use the same recommended key size for pairings as for RSA, and this key size was derived from the value L(64). But now we have the best complexity L(48) when p is of general form, so in that case we should increase the key size by a factor of about 1.3, roughly the ratio 64/48.
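[Editor's illustration] Here is a minimal sympy sketch of the exTNFS setup and the role of the coprimality condition. Every concrete value (p = 13, eta = 2, kappa = 3, and the polynomials) is a hypothetical toy choice of mine, not a parameter from the talk.

```python
from math import gcd
from sympy import Poly, symbols

x = symbols('x')

# Toy exTNFS setup: target F_{p^6} with p = 13, splitting the extension
# degree as n = eta * kappa = 2 * 3.
p, eta, kappa = 13, 2, 3
assert gcd(eta, kappa) == 1        # the coprimality condition in the talk

# h: degree-eta polynomial irreducible mod p; it defines the subfield
# F_{p^eta} as Z[iota]/(p), where iota is a root of h.
h = Poly(x**2 + 2, x, modulus=p)
assert h.is_irreducible

# f, g: chosen over Z as if targeting a degree-kappa extension; this toy
# JLSV-style pair shares the common factor k = gcd(f, g) mod p.
f = Poly(x**3 + 2, x)
g = Poly(x**3 + 2 + p, x)
k = f.set_modulus(p).gcd(g.set_modulus(p))

# k is irreducible over F_p, and since gcd(eta, kappa) = 1 the standard
# fact quoted in the talk makes it irreducible over F_{p^eta} as well,
# so F_{p^n} can be represented as F_{p^eta}[x]/(k).
assert k.degree() == kappa and k.is_irreducible
print("k =", k.as_expr())
```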
In the case where the characteristic p is of special form, the previous best complexity was by Joux and Pierrot, which was L(32) when p is large and L(64) when p is medium. In the case of BN curves, which have embedding degree 12, it seems to have been considered as the medium prime case, because the currently used key size was derived from the constant 64. But now our special exTNFS has a complexity of L(32) in either case. So maybe it is better to increase the key size by a factor of two.

But please don't panic, because this is not a precise estimation: there is a hidden constant in the L_Q notation, and it will take some time to derive a more precise one, because we need some optimized implementation results. And please don't panic again if a new record comes. So you can change your key size or not, as you wish. But I'm not sure.

Actually, we didn't implement our result to solve the DLP, but we have a graph of the size of the norms, which is the main factor that determines the complexity. Here you can see that exTNFS with the conjugation method seems to be the best choice in this range of field sizes when you target F_{p^6}. And here, if you want to target F_{p^12}, which is the interesting case for BN curves, where p is of special form, it seems that the special exTNFS becomes the best choice as the field size grows.

Before I conclude this talk, I'd like to announce some recent developments on this work, which apply to extension degrees n that split into non-coprime factors. First, Sarkar and Singh showed that when n is a power of two, we have a complexity of L(64). But soon after this work, Jeong and I showed that for any composite n we have the same best complexity as we had in exTNFS. The idea is simply to choose f and g from Z[iota] instead of Z. So we have this result for any composite n with a factor of the appropriate size.

Then let's conclude with some remarks. I think we should reconsider the key sizes of pairing-based crypto. But our algorithm does not apply to prime fields or fields of prime extension degree. Chatterjee et al. recently suggested using pairings with embedding degree one, but I'm not sure about their efficiency. So maybe it would be interesting work to improve the pairing computation for embedding degree one or prime. On the other hand, it would be an interesting question to improve the NFS for prime fields or prime extension degree fields. Okay, thank you for your attention.
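[Editor's illustration] A back-of-the-envelope sketch of the key-size factors quoted in the talk (about 1.3 for general p, 2 for special p). This is my own rough reading of the L-notation, not a calculation from the talk: it ignores the loglog and o(1) terms.

```python
# The exponent of L(c) = L_Q(1/3, (c/9)^(1/3)) behaves like
# (c * log Q)^(1/3) up to a slowly varying loglog factor, so keeping the
# attack cost fixed when the best constant drops from c_old to c_new
# requires log Q to grow by roughly c_old / c_new.
def keysize_factor(c_old, c_new):
    return c_old / c_new

print(keysize_factor(64, 48))  # ~1.33: general p, exTNFS with conjugation
print(keysize_factor(64, 32))  # 2.0: special p, e.g. BN curves (special exTNFS)
```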