n is a positive integer, greater than or equal to 1. This is the standard sub-exponential expression, and depending on the value of a, the fields are classified into basically four types: small characteristic if a <= 1/3, medium if 1/3 < a < 2/3, the boundary case a = 2/3, and large characteristic if a > 2/3. So the case that Nadia referred to is essentially the large characteristic case; she was talking about it in the context of n = 1. n > 1 becomes important in the context of pairings: if you are looking at pairings, then n = 12, for example, is a popular choice.

OK. So there has been tremendous progress in discrete log computation over finite fields. In the small characteristic case, the function field sieve line of work has led to a quasi-polynomial time algorithm. In the medium prime case, the boundary case, and the large prime case, not so much. But more specifically, the medium prime case, which is of interest in the context of pairings, has received a certain amount of attention. This interest started from a paper by Barbulescu et al. at Eurocrypt 2015, followed by a paper at last Asiacrypt, and then this Eurocrypt, this Crypto, and so on. And it is continuing; I will tell you how it is continuing.

So this is a one-slide overview of the number field sieve algorithm. You need to choose two polynomials f and g with integer coefficients such that they have a common irreducible factor φ(x) of degree n over the field F_p. Then this is the standard way you go from Z[x] to F_{p^n}: α and β are roots of f(x) and g(x), m is a root of φ(x), there is a homomorphism this way and a homomorphism this way, and the diagram commutes. So the number fields are defined by f and g, and then you work in the rings of integers of these number fields. The factor basis consists of prime ideals whose norms are at most some pre-specified bound; how to choose this bound becomes part of the algorithm.

So how do you actually collect relations? Relations are relations among the factor basis elements. One chooses sieving polynomials φ(x) of degree at most t - 1; t is commonly taken to be 2, so linear polynomials. If it turns out that the two corresponding principal ideals are both smooth over the factor basis, then you obtain a relation. The details of this factorization and so on are not really relevant for this talk. What is important is that the norm of such an ideal is the resultant of f and φ, and so, for ensuring smoothness of the ideal, it is sufficient that this resultant is B-smooth. That is the most important thing. I am not saying it is the most important thing for NFS, but in the context of our work, it is the most important thing. The known results on resultants show that the resultants of f and φ are essentially of this type: if t is fixed, they are controlled by the sizes of the coefficients of f and g and by their degrees. The lower these values are, the lower the resultants. Using this, Barbulescu et al. at Eurocrypt 2015 derived asymptotic complexities. In the medium prime case, they achieved complexity L_Q(1/3, c) where the second constant c in the sub-exponential expression is (96/9)^{1/3}; for the boundary case it is (48/9)^{1/3}; and for the large prime case it is (64/9)^{1/3}.
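For reference, the sub-exponential notation being used is the standard L-notation; writing it out explicitly (this is the textbook definition, not something specific to the paper):

```latex
% Standard L-notation for Q = p^n. The characteristic is written as
% p = L_Q(a, c_p), and the value of a gives the four cases above:
% small (a <= 1/3), medium (1/3 < a < 2/3), boundary (a = 2/3), large (a > 2/3).
L_Q(a, c) = \exp\!\left( (c + o(1)) \, (\ln Q)^{a} \, (\ln \ln Q)^{1-a} \right),
\qquad 0 \le a \le 1, \quad c > 0.
```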
You can see that there is something somewhat anomalous here: in the boundary case this constant is the smallest, whereas in the medium prime case it is the largest. So one way of trying to improve the medium prime case is to try and see whether the boundary-case complexity can be achieved even in the medium prime case, that is, to kind of reduce the medium prime case to the boundary case. And for that, the tower number field sieve (TNFS) algorithm can be used. This was proposed a long time back, analyzed fully at last Asiacrypt, and then at this Crypto there was the paper by Kim and Barbulescu.

So the idea is that n needs to be composite, n = η·κ, and q = p^η. You look at the finite field F_{p^n} as F_{q^κ}, where q = p^η. So you have a tower there: first an extension of degree η, and then a degree-κ extension. The main idea is this: suppose p falls in the medium prime case, so a is small. Now, if it turns out that q = p^η is large enough that q can be written as L_Q(2/3, c) for some constant c, then you can apply the boundary-case algorithm to q, pretending it is a prime. It is not really a prime, and the algebra won't go through, but the complexity estimate goes through. So you can achieve the boundary-case complexity in the medium prime case. That is the main crux, and this variant of TNFS proposed by Kim and Barbulescu is what they called exTNFS.

Now, in this setting we want to work over a tower of fields. So first you have to get a degree-η extension, and that is done by choosing a polynomial h(z) of degree η which is irreducible over F_p and has small coefficients. That gives you the degree-η extension, and then you want a degree-κ extension over it. Similarly, one defines the ring R = Z[z]/(h(z)), and so, where previously f(x) and g(x) were in Z[x], they can ideally be in R[x]. In the Kim-Barbulescu setting, f(x) and g(x) were considered over Z[x]. Of course Z[x] is a subset of R[x], so they are still in R[x], but ideally speaking you could also choose them properly from R[x]. You need both to be irreducible over R, and their gcd φ(x) has degree κ. So this is essentially the setting.

Their method had a limitation. This φ(x) has degree κ and is irreducible over F_{p^η}; but it is a polynomial over F_p which is required to be irreducible over F_{p^η}, and this requires that κ and η are co-prime. So that means the method applies to composite non-prime-power n, such as 6 = 3·2, 12 = 3·4, and so on, but cannot be applied to composite prime-power n, such as 4, 8, 9, and 16. When it does apply, they improve the complexity estimate, reducing (96/9)^{1/3} to (48/9)^{1/3}.

So our first goal was to try and see whether this can also be achieved for composite prime-power n. We proposed a new polynomial selection method and introduced several parameters. The η and κ remain the same as before, and we introduced a parameter d. The reason for doing this is that all the previous polynomial selection algorithms can be seen as special cases of this algorithm; that is one of the reasons we extended the Kim-Barbulescu method by these parameters, and we also got a certain asymptotic improvement. But that is not the end of the story. Anyway, so we introduce d, a divisor of κ; a parameter r, which is greater than or equal to k, where k = κ/d; and λ, which is either 1 or η.
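To make the co-primality restriction concrete, here is a small sketch (plain Python; the function name is my own, not from the paper) that checks which n admit a Kim-Barbulescu-style factorization n = η·κ with η, κ > 1 and gcd(η, κ) = 1:

```python
from math import gcd

def extnfs_applicable(n):
    """True if n = eta * kappa for some eta, kappa > 1 with gcd(eta, kappa) = 1,
    i.e. the co-primality condition on the tower factorization can be met."""
    return any(gcd(eta, n // eta) == 1
               for eta in range(2, n) if n % eta == 0)

print([n for n in range(4, 17) if extnfs_applicable(n)])
# -> [6, 10, 12, 14, 15]: exactly the composite non-prime-powers.
# The prime powers 4, 8, 9, 16 fail, matching the limitation just described.
```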
So the polynomial selection actually proceeds via two steps of random trials; essentially you do some random trials to find f(x), g(x), and φ(x), where f(x) and g(x) are in R[x]. Note that we no longer require f(x) and g(x) to be in Z[x]: they can be in that middle ring, we require them to be irreducible over R, and φ(x), well, that lives at the middle level.

So we will be using the LLL algorithm. This kind of use was first proposed in the Joux-Lercier method and then extended to the generalized Joux-Lercier method by Barbulescu et al. last year, and we will be using that. For doing that, I need to describe some notation. Given any polynomial A(x) in R[x], it will look like this: its coefficients are polynomials in z, and you can write this polynomial as a vector, essentially writing out all of these coefficient polynomials as vectors. These coefficient polynomials have degree at most λ - 1, the rest is zero, and λ is either 1 or η.

Now, using this vector, one can define the matrix M_{A,r}, where r is the integer parameter of the algorithm. So we are defining a lower triangular matrix: this block is diagonal, this is the vector of A with 1 on the diagonal, then again a diagonal block, then A shifted by λ with 1 on the diagonal, and so on. The intuition behind defining this matrix is something I cannot explain right now, but it works. The determinant is important, because we will be getting bounds on the coefficients of the polynomial returned from this matrix; the determinant turns out to be this. So we apply the LLL algorithm to this matrix, and the first row of the output, written in this fashion, is then recast into polynomial form as follows: this part forms B_0(z), this forms B_1(z), and so on. The coefficients of B, from known results on the LLL algorithm, can be shown to be Q^{ε/n}, where ε is this quantity. Given the original polynomial A and an integer r, we will write this polynomial B(x) as LLL(M_{A,r}).

Now I describe the two steps of the random trials. The first step is to choose a monic polynomial A_1(x) in R[x], so the coefficients come from Z[z]/(h(z)). The degree of A_1 is r + 1, A_1 is irreducible over R, and its coefficient polynomials are small enough, but also large enough that you can actually choose such a polynomial. Over F_{p^η}, A_1(x) has an irreducible factor A_2(x) of degree k such that all coefficient polynomials of A_2(x) have degree at most λ - 1. That may sound complicated, but it can be done. Then one chooses monic polynomials C_0(x) and C_1(x) with small integer coefficients. So for C_0(x) and C_1(x) we are taking integer coefficients here; this can be relaxed, and we have done that relaxation later, but not in this work, so I won't talk about that. Then one can define f as this resultant of A_1(y) with a polynomial built from C_0(x) and C_1(x). Note that A_1 is a polynomial whose coefficients are polynomials in z, while C_0(x) and C_1(x) are polynomials whose coefficients are integers, but you can still define the resultant; this is the way this resultant is defined. And you keep doing these random trials until you obtain the terminating condition: f and g are irreducible over R, and φ, which turns out to be the gcd of f and g, is irreducible over F_{p^η}.
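The exact matrix M_{A,r} is on the slides, but to give a feel for the LLL step, here is a minimal sketch of the classical (η = 1) Joux-Lercier-style use of lattice reduction: LLL-reduce a lattice of polynomials divisible by φ modulo p to obtain a polynomial with small coefficients. This is an illustration under my own toy parameters (p, φ, the degree), using the fpylll library, and is not the tower construction from the talk:

```python
from fpylll import IntegerMatrix, LLL

p = 1009                 # toy prime, for illustration only
phi = [7, 3, 1]          # phi(x) = x^2 + 3x + 7, coefficients low to high
deg_g = 3                # degree of the small-coefficient polynomial we want

dim = deg_g + 1
rows = []
# Rows p * x^i for i < deg(phi): these let coefficients be reduced mod p.
for i in range(len(phi) - 1):
    row = [0] * dim
    row[i] = p
    rows.append(row)
# Rows x^j * phi(x): every multiple of phi lies in the lattice.
for j in range(dim - len(phi) + 1):
    row = [0] * dim
    for k, c in enumerate(phi):
        row[j + k] = c
    rows.append(row)

# Every vector in this lattice is a polynomial g with phi | g (mod p);
# LLL finds one whose coefficients are small (roughly det^(1/dim) = p^(1/2)).
M = IntegerMatrix.from_matrix(rows)
LLL.reduction(M)
g = [M[0, k] for k in range(dim)]
print("g coefficients (low to high):", g)
```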
Now, whether this terminates and so on: you don't have a theoretical proof, and nobody in this polynomial selection world proves termination, but it does work. We have experimented, and we have seen that you can actually get it within a few trials. Of course, whether it actually terminates remains a theoretical question, but that kind of analysis does not seem to be very important at this point in time. So the degree of f is d(r + 1), and so on; these features can be shown. The main thing is that the norms can be shown to be this quantity, so we can derive norm bounds of this type. Now, once you have the norms, the asymptotic complexity is completely determined by them, because, as I mentioned, the smoothness probability is heuristically taken to be the probability that an integer of this size is B-smooth. From that, one derives the asymptotic complexity; once you have the norms, there is a very routine kind of calculation which gives you the actual complexity.

So now I will try to explain why this subsumes the previously known algorithms. If you take η = 1, then you don't really have a tower; this reduces to NFS, and it results in the Algorithm A that we had proposed at this Eurocrypt, which in turn also subsumes the previous algorithms of Barbulescu et al. at Eurocrypt 2015. Now, if you take η > 1 and λ = 1, then the irreducibility of φ(x) over F_{p^η} will require gcd(η, κ) = 1, and special cases of this setting give the Kim-Barbulescu algorithm proposed at this Crypto. The new case arises when λ = η and both are greater than 1. In this case, φ(x) is properly in that middle field, so you no longer require the condition gcd(η, κ) = 1. That means you can now apply the algorithm to all composite n, whereas previously it could only be applied to composite non-prime-power n.

That gives the complicated-looking theorem that we get, so I won't try to explain the theorem. I'll just mention that the runtime can be seen as L_Q(1/3, 2c_b), where c_b is given by this expression in r, k, and t (t - 1 is the degree of the sieving polynomials), while c_θ, which is c_p times η, can be written in this fashion. This captures the entire complexity of the algorithm, so it seems to be of interest to find the minimum possible value of c_b. That minimum possible value will not be achieved for all primes, but nevertheless one can try to find it. To do that, one needs to minimize c_b with respect to c_θ, and that is a very standard derivative computation and so on. One can show that the minimum is achieved for t = 2. In the case λ = 1, you obtain essentially the value (48/9)^{1/3}, which is the Kim-Barbulescu case, and when λ = η > 1, the minimum is attained when you choose r, k, and κ to be the same, and the minimum value turns out to be this. So for η = 2, the minimum constant is (64/9)^{1/3}; previously this was (96/9)^{1/3}, so this case is reduced to (64/9)^{1/3}. For η = 3, well, it keeps on slightly increasing. So in this paper we could not achieve (48/9)^{1/3} for composite prime-power n, for example 4, 8, 9, 25, and so on. And this is shown by this plot; well, this is the plot for λ = 1.
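The "routine calculation" from norms to complexity is the standard smoothness balancing, which can be sketched symbolically. The framing below uses the usual heuristics (not specifics from the paper): the norms have size L_Q(2/3, e), the smoothness bound is L_Q(1/3, b), and the Canfield-Erdős-Pomerance heuristic gives a smoothness probability of L_Q(1/3, -e/(3b)):

```python
import sympy as sp

b, e = sp.symbols('b e', positive=True)

# Second constants (in L_Q(1/3, .)) of the two dominant phases:
sieving = b + e / (3 * b)  # ~L(1/3,b) relations, each found with prob. L(1/3,-e/(3b))
linalg = 2 * b             # sparse linear algebra, quadratic in the factor base

b_star = sp.solve(sp.Eq(sieving, linalg), b)[0]  # balancing gives b = sqrt(e/3)
print(b_star)                                    # sqrt(3)*sqrt(e)/3
print(sp.simplify(2 * b_star))                   # overall constant: 2*sqrt(e/3)
```

The point is that the final constant is a monotone function of the norm-size constant e, which is why lowering the norm bounds is "the most important thing" for the asymptotic complexity.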
So essentially, well, the Kim-Barbulescu complexity is this one, and this axis is c_θ. As c_θ increases, it's not that you always get the minimum complexity; it's a more complex kind of picture. These curves correspond to different values of r. And then you have this other plot for λ = 2, and this one for λ = 3; it keeps on increasing a little bit. So that is what we achieved, but as I mentioned, this is a continuing story. This is not the end of the story; we have contributed just one sentence, maybe, to that complete story. Subsequent to our work, in fact while this paper was in submission to Asiacrypt, it was improved by Jeong and Kim, who showed how to achieve the complexity (48/9)^{1/3} for all composite n. So before it even got published, the results became obsolete, and it was very kind of the reviewers to accept the paper. But anyway, at the time of submission we didn't know that; I don't think their paper was available then. And subsequent to their work, we generalized their algorithm. Given this entire structure that we had put on the Kim-Barbulescu work, we could see that we had missed something, and I rue the fact that we missed it; but anyway, we missed it. Still, once the Jeong-Kim paper was there, we could see that the entire structure we had put on the Kim-Barbulescu method could also be put on the Jeong-Kim method, and then we have a general polynomial selection method. For that we did a concrete analysis, but as I said, we have just added one small sentence, and others are probably working on it, maybe getting better algorithms.

But this does have consequences for the actual choice of prime sizes for pairings. In light of this work, how should one choose group sizes? Well, I discussed that last week at Mycrypt. We have a paper on that; it's also on ePrint, by Alfred Menezes, myself, and Shashank. We did a concrete analysis and tried to show: if you take only this much into account, then what should your group sizes be? Okay, so thank you for your kind attention.

Are there any questions? I have some to get things going. So first, when you think of pairings, actually my impression was that the pairing setting is also kind of a boundary case, corresponding to exactly one third, right? Because you have to balance the complexity of the elliptic curve discrete log, which is exponential, against the extension field discrete log. And so you are exactly on the boundary between the small prime and medium prime cases, but you can't use the function field sieve because your base field is too large anyway. So, in that sense, do we really know the complexity of the pairing situation?

Well, whether we really know, that is a difficult question to answer. Based on the existing attacks, we did a concrete analysis; that much I can say. And we came up with some conservative estimates of group sizes which we believe will give you security at, say, the 128-bit or the 192-bit levels. So maybe if you use those you are fine, unless somebody comes up with a better algorithm. Okay, and as for the point that you raised, that you have to balance this: well, I wouldn't say you have to balance this; you have to ensure that it's secure against both kinds of attacks.
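As a rough back-of-the-envelope version of the balance raised in this question (a sketch under standard heuristic cost assumptions, not a calculation from the talk):

```python
# Pollard rho on the curve costs about sqrt(r) group operations, so a prime
# order r of 2*128 bits targets the 128-bit level; for BN-style curves p has
# the same size as r, and the NFS target is the field of size p^n with n = 12.
security = 128
r_bits = 2 * security       # 256-bit group order
p_bits = r_bits             # 256-bit prime p
n = 12                      # embedding degree mentioned in the talk
field_bits = n * p_bits     # 3072-bit p^n
print(r_bits, p_bits, field_bits)   # 256 256 3072
```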
So while doing that, it may turn out, and previously it did turn out, as for the Barreto-Naehrig curves, that things balance quite evenly: at the 128-bit security level you get 256-bit primes, and then a 3072-bit p^n with n = 12, and it was kind of balanced against both attacks. But that no longer holds; at least it doesn't hold for the Barreto-Naehrig case. You have to increase your p, and by increasing your p you get some additional rho security, which doesn't help you against this NFS kind of attack. So that balance is lost in the Barreto-Naehrig case, but not in all cases. Thank you.

Any other question? So I have one more, if you don't mind; no, I'm actually not sure about this. You mentioned using LLL for polynomial selection. Since we have a sub-exponential algorithm anyway, we could use, say, BKZ with a relatively large block size. But does it change anything for the complexity?

Well, the reason we used LLL is that we know these estimates on the coefficient sizes of the vector returned by LLL. Now, if you use BKZ, I don't know, I haven't thought about it. The other reason is that LLL is very easily available in Magma; I mean, it's there in Magma, you just run it, and you can get the examples. But also we need to do this asymptotic analysis, and for the theoretical analysis I know that, for the shortest vector returned by the LLL algorithm, you have bounds on the l-infinity norm. So if there are corresponding bounds, or better bounds, for other algorithms, you can certainly use those, and that may improve the complexity; I don't know, I haven't seen any work on that. Maybe there is something there.

So if there are no more questions, let's thank the speaker again. The second talk of the session is about the security of supersingular isogeny cryptosystems. It's a paper by Steven Galbraith, Christophe Petit, Barak Shani, and Yan Bo Ti, and Yan will give the talk.