A short-list of pairing-friendly curves resistant to the special TNFS algorithm at the 128-bit security level. In cryptography, a pairing is, as a black box, a map involving three groups of large prime order r: a bilinear map from G1 times G2 into GT. It is bilinear on the left and on the right, like a scalar product; it is non-degenerate; and it should be efficiently computable. In practice, we use the fact that the pairing of aP and bQ is equal to the pairing of bP and aQ, so we can swap the scalars, and it also equals the pairing of P and Q to the power ab, so we can multiply the scalars in the exponent thanks to a pairing. As applications, we have the identity-based encryption of Boneh and Franklin, and also the short signatures of Boneh, Lynn and Shacham. More recently, we have zero-knowledge proofs and zero-knowledge succinct non-interactive arguments (zk-SNARKs). The security of a pairing relies on the discrete logarithm problem, the Diffie-Hellman problem and their bilinear counterparts, and the hardness of pairing inversion. More precisely, a pairing is a Weil or a Tate pairing on an elliptic curve: the two input groups G1 and G2 are on an elliptic curve, and the target group GT is in a finite field. That is why we use additive notation for G1 and G2 and multiplicative notation for GT. So we need to consider these attacks: we need pairing inversion to be hard; we need the discrete logarithm on the curve to be hard, where it has an exponential complexity of approximately the square root of r; and we also need the discrete logarithm in the finite field to be hard. But this is easier, since it has a sub-exponential complexity, so we need to be sure that we take a large enough field. What is a pairing-friendly curve? We need to design one on purpose. An elliptic curve over a prime field has a short Weierstrass form y^2 = x^3 + ax + b. It has order p + 1 - t, where t is the trace of Frobenius, and we need this order to have a large prime factor r.
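The scalar-swap identities above can be illustrated with a minimal toy bilinear map (my own illustrative construction, not a real elliptic-curve pairing): take G1 = G2 = (Z_r, +) and GT the order-r subgroup of F_p^*, and define e(a, b) = g^(ab).

```python
# Toy bilinear map illustrating the pairing identities above.
# G1 = G2 = (Z_r, +) written additively, GT = the order-r subgroup of F_p^*.
# NOT a cryptographic pairing (discrete logs here are easy); illustration only.
r = 1009                       # small prime group order
p = 10 * r + 1                 # p = 10091 is prime and r divides p - 1
g = pow(2, (p - 1) // r, p)    # generator of the order-r subgroup GT
assert g != 1

def e(a, b):
    """Bilinear map e(a, b) = g^(a*b): Z_r x Z_r -> GT."""
    return pow(g, (a * b) % r, p)

P, Q, a, b = 5, 7, 123, 456
lhs = e(a * P % r, b * Q % r)            # e(aP, bQ)
assert lhs == e(b * P % r, a * Q % r)    # the scalars can be swapped...
assert lhs == pow(e(P, Q), a * b, p)     # ...or moved into the exponent
```

Of course, in this toy group the discrete logarithm is trivial; a real pairing provides the same identities while keeping the discrete logarithms hard in G1, G2 and GT.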
The discriminant of the curve is such that t^2 - 4p = -D y^2, where D is square-free. To be able to define a pairing on the curve, we need the embedding degree to be small enough: when r divides p^n - 1 and n is minimal, we say that n is the embedding degree, and then we can define, for example, the Tate pairing. But usually n is very large, of the size of r, so a random curve is almost never pairing-friendly: we need to build one on purpose. For this we have, for example, supersingular curves, Miyaji-Nakabayashi-Takano (MNT) curves of embedding degrees 3, 4 and 6, Barreto-Naehrig (BN) and Barreto-Lynn-Scott (BLS) curves of embedding degree 12, Kachisa-Schaefer-Scott (KSS) curves of embedding degrees 16 and 18, and other BLS curves. For example, for MNT curves of embedding degree 6, the parameters p and r of the curve are given by quadratic polynomials, and the discriminant is variable. BN curves are very popular because it is very easy to derive them: for embedding degree 12 and discriminant 3, the curve has short Weierstrass form y^2 = x^3 + b, that is a = 0, and the j-invariant of the curve is 0. The two parameters p and r are given by polynomials of degree 4 with very small coefficients. This is very good for efficient arithmetic on the curve but, as we will see later, it is also interesting for a better variant of the number field sieve to compute discrete logarithms in the finite field F_{p^12}. So what do we need for a pairing-friendly curve? We need a secure, efficient and compact curve: secure against discrete logarithm computation on the curve and in the finite field; efficient, with fast scalar multiplication on the curve, fast exponentiation in the finite field, and an efficient pairing; and compact, that is, with parameters and key sizes as small as possible. So which curves are the best options?
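Concretely, the BN family is given by the standard polynomials p(x) = 36x^4 + 36x^3 + 24x^2 + 6x + 1, r(x) = 36x^4 + 36x^3 + 18x^2 + 6x + 1, t(x) = 6x^2 + 1. A short script can evaluate them at a seed and check the family properties; the seed below is, to my understanding, the one of the widely deployed BN254 curve (Ethereum's alt_bn128), which is an assumption of this sketch.

```python
import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for q in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % q == 0:
            return n == q
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def bn_params(x):
    """Standard BN family polynomials (embedding degree 12, discriminant 3)."""
    p = 36*x**4 + 36*x**3 + 24*x**2 + 6*x + 1
    r = 36*x**4 + 36*x**3 + 18*x**2 + 6*x + 1
    t = 6*x**2 + 1
    return p, r, t

x = 4965661367192848881          # assumed BN254 (alt_bn128) seed
p, r, t = bn_params(x)
assert p + 1 - t == r            # BN curves have prime order: #E = p + 1 - t = r
assert is_probable_prime(p) and is_probable_prime(r)
# embedding degree: smallest k such that r divides p^k - 1
k = next(k for k in range(1, 13) if (p**k - 1) % r == 0)
print(p.bit_length(), k)         # 254 12
```

The identity p + 1 - t = r holds for every BN seed (it is a polynomial identity), which is why BN curves have prime order.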
For this, we need to consider the hardness of discrete logarithm computation in the finite field. This is much less investigated than in prime fields, or than integer factorization. However, there are much better recent results in pairing-related fields. For example, in 2013, Joux and Pierrot designed a special variant of the number field sieve to compute discrete logarithms in F_{p^n}, where F_{p^n} is the target field of a pairing-friendly curve. In 2015, Barbulescu, Gaudry and Kleinjung designed the tower number field sieve (TNFS), a variant that makes use of the tower structure of the field F_{p^n}. In 2016, Kim and Barbulescu designed a new version, the extended TNFS, where they use the subfields of the finite field: they exploit the fact that there are many possible subfields to make the number field sieve faster. It was improved later by Kim and Jeong. This leads to the following complexities. The number field sieve and all of its variants have a sub-exponential complexity of the form exp((c + o(1)) (log p^n)^alpha (log log p^n)^(1-alpha)). For large characteristic p, that is a prime field, for example, or a very small extension degree and very large p, we have alpha = 1/3 and the constant c approximately 1.9. If the characteristic p is special, the constant is approximately 1.5, so much less. For medium characteristic, where p is of moderate size compared to the extension degree, which is the case for a pairing-friendly curve over F_{p^12} or F_{p^24}, for example, the constant c is 2.2 for prime n, so for example 11, 13 or 17; and it is 1.7, that is, an attack faster than in a prime field of the same size, for composite n: this is the TNFS. For a special prime p and composite n, we are in the best case of the special tower number field sieve, and the complexity has constant 1.5.
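The constants quoted above are the standard cube-root constants of the number field sieve family; written out in L-notation, the complexities are:

```latex
% L-notation for discrete logarithms in GF(p^n):
L_{p^n}(\alpha, c) = \exp\Bigl( (c + o(1))\,(\log p^n)^{\alpha}\,(\log\log p^n)^{1-\alpha} \Bigr),
\qquad \alpha = \tfrac{1}{3},
% with constants
% NFS, large characteristic:              c = (64/9)^{1/3} \approx 1.923
% SNFS, special p:                        c = (32/9)^{1/3} \approx 1.526
% NFS, medium characteristic, prime n:    c = (96/9)^{1/3} \approx 2.201
% TNFS, composite n:                      c = (48/9)^{1/3} \approx 1.747
% STNFS, special p and composite n:       c = (32/9)^{1/3} \approx 1.526
```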
Since 1.5 is much less than 1.9, we need to increase the key sizes of pairing-friendly curves designed before the tower number field sieve algorithm. How to choose the key size? For a prime field, we have a rule for doing it: we take the asymptotic complexity with the constant 1.9, and we rescale this asymptotic formula with a record computation. Rescaling from the record, we obtain that for approximately 3000 bits, the cost of computing a discrete logarithm is more than 2^128, so we are fine. But how to do this for extension fields? We need to consider the latest variant of the tower number field sieve, which is very promising for F_{p^n} with n composite; we need a record computation to scale the formula, and then we need to be sure that it fits. But in fact it is not so easy, because the asymptotic complexity corresponds to a ratio between the extension degree n and the prime characteristic p when both tend to infinity; it does not correspond to a fixed n, for example 12. Moreover, we do not have a record computation for now. Looking at the latest record computations in finite fields of large and medium characteristic: for prime fields, we do have a large record, but for extension fields, as the extension degree grows, the total size of the finite field in the records decreases. Moreover, there is no implementation yet of the tower number field sieve, only of the number field sieve. Since the introduction of the TNFS algorithm in 2015, new key sizes and new pairing-friendly curves were proposed: for example, by Fotiadis and Konstantinou, who based their key sizes on the asymptotic complexity, and by Menezes, Sarkar and Singh, who started to open the black box of the TNFS algorithm and looked deeper than the asymptotic complexity to try to refine the key sizes.
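The prime-field rescaling rule described above can be sketched in a few lines. The calibration point is an assumption of mine for illustration: I take the 795-bit prime-field DLP record (Boudot et al., 2019) to cost roughly 2^66 elementary operations; the exact figure, and hence the resulting size, depends on that choice.

```python
import math

def log2_L(bits, c=(64/9) ** (1/3)):
    """log2 of L_{p^n}(1/3, c) for a field of the given total bit size."""
    lnQ = bits * math.log(2)
    return c * lnQ ** (1/3) * math.log(lnQ) ** (2/3) / math.log(2)

# Calibration point (assumption): the 795-bit prime-field DLP record is
# taken to cost about 2^66 elementary operations.
RECORD_BITS, RECORD_COST = 795, 66.0

def estimated_cost(bits):
    """Rescale the asymptotic formula through the record computation."""
    return RECORD_COST + log2_L(bits) - log2_L(RECORD_BITS)

# smallest field size (multiple of 64 bits) reaching 128-bit security
# under this model -- close to the 3000-bit figure quoted above
size = next(b for b in range(1024, 8192, 64) if estimated_cost(b) >= 128)
print(size)
```

This reproduces the well-known order of magnitude of around 3000 bits for 128-bit security in a prime field; for extension fields, as explained above, no such record exists yet and the simulation has to replace it.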
Then Barbulescu and Duquesne proposed a cost model, refined the key sizes, and also proposed seeds. Fotiadis and Martindale also proposed new curves based on the Barbulescu-Duquesne cost model of discrete logarithm computation. Then, with Singh, and with Masson and Thomé, we proposed refined cost models. The idea is that we cannot yet run a record computation, but we have techniques to estimate the parameters, as we do for record computations in prime fields: we simulate part of the algorithm to refine the parameters, tweak them more finely, and obtain a good estimate of the total cost. We apply the same approach to TNFS. This is much less precise, but it is a good first approach, and it allows more precise key sizes. In 2019, Barbulescu, El Mrabet and Ghammam considered many different pairing-friendly curves of embedding degree between 2 and 54, using the Barbulescu-Duquesne cost model. In this work, we apply systematically the new cost model of the work with Singh to all the curves already considered by Barbulescu, El Mrabet and Ghammam. Before going into the details of the simulation, I would just like to recall the Brezing-Weng generic construction of pairing-friendly curves. We start by choosing an irreducible polynomial R whose number field contains a primitive n-th root of unity; for example, we just take a cyclotomic polynomial. We define the number field by R, and in this number field we find an element A that corresponds to an n-th root of unity. Then, with an integer e between 1 and n - 1 and coprime to n, we define the polynomial that will give the trace of the curve: T = A^e + 1 modulo R. Then we have the parameter Y = (T - 2) / sqrt(-D) modulo R, and the polynomial P that will give the characteristic of the curve: P = (T^2 + D Y^2) / 4. At this step, if P is not irreducible, the construction fails.
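A well-known output of this kind of construction with n = 12 and D = 3 is the BLS12 family, with r(x) = x^4 - x^2 + 1 (the 12th cyclotomic polynomial), t(x) = x + 1, y(x) = (x - 1)(2x^2 - 1)/3 and p(x) = (t(x)^2 + 3 y(x)^2)/4. The sketch below evaluates these standard polynomials at the BLS12-381 seed and checks the defining relations numerically.

```python
# BLS12 family (Brezing-Weng-style, k = 12, D = 3) evaluated at the
# standard BLS12-381 seed.
x = -(2**63 + 2**62 + 2**60 + 2**57 + 2**48 + 2**16)   # BLS12-381 seed
r = x**4 - x**2 + 1                  # 12th cyclotomic polynomial at x
t = x + 1                            # trace
y = (x - 1) * (2 * x**2 - 1) // 3    # exact: BLS12 seeds satisfy x = 1 mod 3
p = (t**2 + 3 * y**2) // 4           # characteristic

assert t**2 - 4 * p == -3 * y**2     # CM equation with discriminant D = 3
assert (p + 1 - t) % r == 0          # r divides the curve order p + 1 - t
# embedding degree: smallest k such that r divides p^k - 1
k = next(k for k in range(1, 13) if (p**k - 1) % r == 0)
print(p.bit_length(), r.bit_length(), k)   # 381 255 12
```

Here r is only a (255-bit) prime factor of the curve order, unlike BN curves where the whole order is prime; the cofactor is (x - 1)^2 / 3.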
We also need P to have a positive leading coefficient so that it has a good chance of representing prime values. If the conditions are not met, we restart with another exponent e, or with another irreducible polynomial R from the beginning. Otherwise, P, R, T, Y and D give a set of good parameters for a pairing-friendly curve. To select the curves, we consider the Brezing-Weng method with embedding degree between 6 and 21 and small square-free discriminants D such that sqrt(-D) exists in the number field. We also consider the Barreto-Naehrig, BLS, KSS, Fotiadis-Konstantinou and Fotiadis-Martindale curves. For the security, we require r to be at least 256 bits, and we decided that the finite field F_{p^n} should be between 3000 bits, which is the minimum size for a prime field, and 5376 bits, which is 12 times the 448-bit prime of a BN curve. Then, in this work, we also consider all the possible variants of the special tower number field sieve. For the polynomial P that gives the characteristic of the curve, we try many different changes of variables to tweak the special variant differently, so that maybe we can reduce the total cost with a change of variables. If the polynomial is even, we can divide its degree by two without increasing the size of its coefficients. If it is a palindrome, we can do another change of variables. More generally, if there is an automorphism of the number field, we can exploit it. And otherwise, we can decrease the degree of the polynomial while slowly increasing the size of its coefficients. We combined everything for some of the pairing-friendly curves. It was actually interesting: it does not apply to BN and BLS curves, but it applies to many other Brezing-Weng curves. The second step was to test all possible variants of the tower, so we test all subfields of F_{p^n}. For example, for n = 12, we test F_p, F_{p^2}, F_{p^3}, F_{p^4}, F_{p^6} and F_{p^12} as subfields.
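Two of the preprocessing steps just described are easy to sketch (my own illustrative code): halving the degree of an even polynomial by the change of variable u = x^2, and enumerating the tower decompositions F_{p^n} = F_{(p^eta)^kappa} that the sieve can try.

```python
def halve_if_even(coeffs):
    """If p(x) has only even-degree terms, return q with q(x^2) = p(x).
    coeffs[i] is the coefficient of x^i; returns None if p is not even."""
    if any(c != 0 for i, c in enumerate(coeffs) if i % 2 == 1):
        return None
    return coeffs[0::2]

# p(x) = x^4 - x^2 + 1 is even: its degree halves to q(u) = u^2 - u + 1
assert halve_if_even([1, 0, -1, 0, 1]) == [1, -1, 1]

def tower_choices(n):
    """All splittings F_{p^n} = F_{(p^eta)^kappa} with eta * kappa = n."""
    return [(eta, n // eta) for eta in range(1, n + 1) if n % eta == 0]

# For n = 12, the divisors 1, 2, 3, 4, 6, 12 give the subfields
# F_p, F_{p^2}, F_{p^3}, F_{p^4}, F_{p^6}, F_{p^12} mentioned above:
print(tower_choices(12))
```

Each (eta, kappa) pair is one tower variant to be costed separately; the scripts in the repository mentioned below do this systematically, together with the special-polynomial changes of variables.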
In this work, we wrote a script to run all of these variants, and we obtained the following results for curves of embedding degree between 10 and 16. For r, we require at least 256 bits, and when it was not possible to find an r of this size, we give the smallest possible one above it. This was not possible for embedding degrees 11 and 13, because the polynomial that gives the characteristic is of degree 26 or 28, so the seed is very small, there are very few choices, and it was not possible to hit the size exactly. For other curves, for example embedding degree 13, we have the smallest possible size of p, and for 16 it is also small enough. For BN, BLS12 and BLS24 curves, we find that 446 bits is enough to obtain 128 bits of security with some margin. We also consider the cost of a pairing computation. A pairing is made of a Miller loop and a final exponentiation. The length of the Miller loop is the size of the seed, of the trace, or of the whole prime r, depending on the variant of the pairing: Tate, ate or optimal ate. The table above is from the work with Masson and Thomé; it is an estimate of the number of multiplications in the prime field F_p needed to compute the Miller loop. As we can see, for prime embedding degrees, on one side we have a smaller prime p, because the extended tower number field sieve with its subfield variants does not apply when the embedding degree is prime; but we do not have many optimizations for the pairing: there are no twists and no denominator elimination, so in the end the pairing is very costly. For even embedding degrees 10 and 14, we do not have twists of degree 4 or 6, only a quadratic twist, so the pairing is slower than for a BN curve, for example, and moreover we do not gain very much on the size of the prime p. We can also look at the popular pairing-friendly curves: we have MNT curves of embedding degrees 4 and 6.
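The twist remark above follows a standard criterion, which this small sketch encodes (a simplified summary, under the usual conditions: sextic twists need j = 0 and 6 | n, quartic twists need j = 1728 and 4 | n, otherwise at best a quadratic twist for even n).

```python
def available_twist(n, j):
    """Highest twist degree d usable for G2 compression on a curve of
    embedding degree n and j-invariant j (simplified standard criterion).
    With a degree-d twist, G2 can be represented over F_{p^{n/d}}."""
    if n % 6 == 0 and j == 0:
        return 6          # sextic twist (e.g. BN, BLS12: G2 over F_{p^2})
    if n % 4 == 0 and j == 1728:
        return 4          # quartic twist (e.g. KSS16)
    if n % 2 == 0:
        return 2          # only a quadratic twist
    return 1              # odd prime embedding degree: no twist to exploit

# BN/BLS12 (n = 12, j = 0): sextic twist, so G2 lives over F_{p^2}
assert available_twist(12, 0) == 6
# n = 10 or 14 with generic j: only a quadratic twist, G2 over F_{p^{n/2}}
assert available_twist(10, 5) == 2
# prime embedding degrees 11 or 13: no twist at all
assert available_twist(11, 5) == 1
```

This is why the curves of embedding degree 10, 11, 13 and 14 end up with an expensive Miller loop even though their prime p can be taken smaller.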
These results are from an ongoing project, for the BN, BLS12 and BLS24 curves. We actually find that sizes between 443 and 448 bits, matching machine word sizes, give 128 bits of security with a margin of error. For embedding degrees 16, 18 and 24, we are actually more constrained by the size of r, so with these sizes of p we are safe. Finally, I wanted to share this repository with you. In this gitlab.inria.fr repository, you will find all the scripts that were used to generate the results of this paper. Maybe you will be most interested in the Sage directory, and in particular in the example file shortlist.sage. This is a Sage script, with Python-like syntax, where you will find the curves, the seeds, the parameters, everything about the curves of the short-list; this might be useful for copy-paste. The curves are implemented with classes, for example a BLS12 class. The TNFS code is organized as a Python package: under curves, you will find BLS12.py, with the polynomial parameters of the BLS curves given as arrays of coefficients, and the class inherits from the elliptic-curve-over-finite-field class of SageMath, which makes our scripts very easy to write. Then, back in TNFS, under parameters, in the file of test vectors with sparse seeds, you have many seeds for BLS curves, each with its reference, at different sizes, from very small to larger, with the estimated security level: BLS12 curves, Kachisa-Schaefer-Scott families of embedding degrees 16 and 18, and BLS24. When it was not possible to find in the literature the sizes to estimate the security, we just generated the seeds. I hope that you will find useful scripts in this repository.
Finally, if you need other seeds to suit your needs, you can generate your own curve, because there are many possibilities within one family, with many different seeds and sparse seeds to fit your needs. Thank you.