Good day everyone, I'm Federico Savasta. I'm going to present the paper "Bandwidth-Efficient Threshold EC-DSA". This is joint work with Guilhem Castagnos, Dario Catalano, Fabien Laguillaumie and Ida Tucker. In this work we provide new techniques to build threshold ECDSA schemes from class groups. Before giving the outline of the presentation, let me give an overview of ECDSA. ECDSA is the acronym for Elliptic Curve Digital Signature Algorithm. It is a digital signature standard and it relies on elliptic curves. It is widely used in many applications, one of the most famous being Bitcoin. In Bitcoin we need to sign transactions, and a stolen signing key translates into a financial loss, so the key is a single point of failure that we want to avoid. The reason why threshold ECDSA is in high demand is that a threshold version of ECDSA lets us share the signing key between several devices: if an adversary wants to steal the key, it has to obtain the different pieces of that signing key. A second motivation for threshold ECDSA is that it enables cryptocurrency custody solutions. In our context we will talk about t-out-of-n threshold signature schemes, where n is the number of parties and t is a threshold less than n, meaning that t+1 is the minimum number of parties that can jointly sign, while t or fewer parties can do nothing and learn nothing about the signature. The ECDSA protocol works in the following way. It takes as public parameters an elliptic curve group G of prime order q with a generator, which is a point P. The secret key is a random element x chosen in Z_q, and the public key is an elliptic curve point Q computed as Q = x·P.
If we want to sign a message m with ECDSA, the steps are: first sample k in Z_q; then compute the point R = k·P; take the first component of R, which is r; then compute s = k^{-1}·(H(m) + r·x) mod q, where H is a hash function; the output is the pair (r, s). When we want a distributed version of the ECDSA signing algorithm, we face a problem: how to share the secrets of ECDSA, which are the signing key x and the nonce k. To give an intuition of the problem, we can compare it with another signing protocol, the Schnorr signature over elliptic curves. This algorithm uses the same public parameters, namely the elliptic curve, the order q, the generator P, the secret key and the public key. The first step is to sample k in Z_q, as in ECDSA, and to compute the point R = k·P, again as in ECDSA; then compute an element e as a hash function evaluated on R and m, which are known to all the parties, and the signature is s = k - e·x. If we look at Schnorr, we can see that the expression of s is linear in k and x, while in ECDSA we have the multiplicative inverse of k. If we think of an additively shared k, say k = k1 + k2 in the case of two players (and the same holds for more players), then s can be written as the sum of two additive shares of the form s_i = k_i - e·x_i, where x_i is the additive share of x. The same thing is very difficult to do in the context of ECDSA, since, as I said, it is really hard to compute additive shares of k^{-1} from additive shares of k. Computing additive shares of k^{-1} is what makes the distributed version of ECDSA challenging. Now we can go to the main points of the presentation. We will see three parts.
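Before moving on, the linearity contrast just described can be made concrete with a toy computation in plain modular arithmetic. This is only an illustration: there is no actual curve here, the point R is abstracted away, and the values h, e and r are fixed stand-ins for H(m), H(R, m) and the x-coordinate.

```python
import secrets

q = 2**127 - 1  # a prime standing in for the curve order (toy choice)

# Additive shares of the signing key x and the nonce k for two players.
x1, x2 = secrets.randbelow(q), secrets.randbelow(q)
k1, k2 = secrets.randbelow(q), secrets.randbelow(q)
x, k = (x1 + x2) % q, (k1 + k2) % q

h, e, r = 11, 22, 33  # stand-ins for H(m), H(R, m) and the x-coordinate r

# Schnorr: s = k - e*x is linear in k and x, so the local shares
# s_i = k_i - e*x_i simply add up to the full signature.
s_schnorr = (k - e * x) % q
assert ((k1 - e * x1) + (k2 - e * x2)) % q == s_schnorr

# ECDSA: s = k^{-1} * (h + r*x); the inverse breaks additivity, since
# (k1 + k2)^{-1} != k1^{-1} + k2^{-1} in general.
s_ecdsa = pow(k, -1, q) * (h + r * x) % q
naive = (pow(k1, -1, q) + pow(k2, -1, q)) * (h + r * x) % q
assert naive != s_ecdsa  # equal only with negligible probability
```

The assertions pass (except with negligible probability over the random shares), showing that summing local Schnorr shares yields the full signature, while naively inverting each k_i does not.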
In the first part we will introduce some previous works which use a setting similar to our protocol's and which are also comparable in terms of efficiency with ours. In the second part we will introduce our full threshold ECDSA protocol, and finally in the last section we will look at the efficiency of the protocol compared with the previous works. So we can start by introducing those previous works. There are a lot of works about distributed ECDSA, or threshold ECDSA, and we took into account in particular two of them: the Lindell-Nof protocol from CCS 2018 and the Gennaro-Goldfeder protocol from the same conference. The former uses ElGamal-in-the-exponent encryption and is proven secure under a simulation-based definition, while the second uses a different encryption scheme, Paillier, and is proven secure under a game-based definition. There is a point in common between these two schemes: they both use a linearly homomorphic encryption scheme which permits two parties to jointly compute a signature. This idea comes from the MacKenzie-Reiter paper from Crypto 2001. We follow the same line using another encryption scheme which is also linearly homomorphic, the Castagnos-Laguillaumie encryption scheme, and our protocol is built upon the GG18 protocol. As a result, our first main contribution is the introduction of new techniques that permit us to realize efficient threshold variants of the ECDSA signature scheme, and as I said this can be seen as a new variant of the Gennaro-Goldfeder protocol. Our scheme also removes the range proofs that come with that protocol, and it is comparable in terms of efficiency with the other solutions. Our protocol, as I said, is built upon the GG18 protocol and uses the same communication model and the same type of adversary. The communication model consists of a broadcast channel among the n players, which can also communicate pairwise using point-to-point channels.
The adversary is malicious and probabilistic polynomial time; as a consequence, it can decide not to follow the protocol. It corrupts t players, where t can be up to n-1, which means that we may have only one honest player. It performs static corruption, which means that it decides which players to corrupt at the beginning of the protocol, and it is also rushing, which informally means that it speaks after all the honest players. So we are assuming a dishonest majority, and the consequence is that we don't have any guarantee about the completeness and the robustness of the scheme. We can briefly give the structure of the GG18 protocol, which is very similar to ours. It takes as parameter the number of parties, and then the players start with the interactive key generation, which consists in a t-out-of-n verifiable secret sharing of the secret value x, done by all the players towards all the players; at the end of this phase they will have the public verification key Q.
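The t-out-of-n sharing underlying the key generation can be sketched as plain Shamir secret sharing, omitting the "verifiable" part entirely. This is a minimal illustration, not the paper's protocol; the prime q here is a toy stand-in for the curve order, and the function names are mine.

```python
import secrets

q = 2**127 - 1  # a prime standing in for the curve order (toy choice)

def share(x, t, n):
    """Shamir t-out-of-n sharing: a random degree-t polynomial f with
    f(0) = x; party i receives the share f(i)."""
    coeffs = [x] + [secrets.randbelow(q) for _ in range(t)]
    f = lambda i: sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q
    return {i: f(i) for i in range(1, n + 1)}

def reconstruct(points):
    """Lagrange interpolation at 0 from any t+1 shares {i: f(i)}."""
    total = 0
    for i, yi in points.items():
        num, den = 1, 1
        for j in points:
            if j != i:
                num = num * (-j) % q
                den = den * (i - j) % q
        total = (total + yi * num * pow(den, -1, q)) % q
    return total

x = secrets.randbelow(q)
shares = share(x, t=2, n=5)  # any 3 of the 5 shares recover x
assert reconstruct({i: shares[i] for i in (1, 3, 5)}) == x
assert reconstruct({i: shares[i] for i in (2, 4)}) != x  # too few shares
```

The same Lagrange coefficients used in `reconstruct` are what the signing players use to turn their polynomial shares into additive shares of x.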
After that, they start the interactive signing phase, which involves at least t+1 players that want to sign a message. The first thing to do is to convert the (t, n) shares of x into additive shares using Lagrange coefficients. The main idea is to compute the point R from k multiplied by a mask γ, rather than sharing k additively in the clear; combining this with another value, Γ = γ·P, the players obtain R, and finally they can compute the signature, or abort if some check does not pass. Having said that, we can now go deeply into our full threshold protocol. In the signing phase of the Gennaro-Goldfeder protocol, each party has to run a p2p protocol with the others, so we can count about n² protocols, and in each of them range proofs are present. The range proofs are caused by the difference between N, the modulus of the Paillier encryption scheme, and q, the order of the elliptic curve. So what we want is a homomorphic encryption scheme whose message space has order q, the same q as the elliptic curve. There are not many such schemes out there, but we can use the Castagnos-Laguillaumie encryption scheme, which we will denote by CL. It was introduced at CT-RSA in 2015 and the framework is the following: we have a cyclic group G of order q·s, where q is a large prime, s is unknown, and q and s are relatively prime; G contains a subgroup where the discrete logarithm is easy to compute. G can also be seen as a direct product of two subgroups, F and G^q, of orders q and s respectively, and F is the subgroup where the discrete logarithm is easy. We also use one assumption, which is one of the three assumptions that we require for the security proof of our protocol. This assumption is the hard subgroup membership problem, which tells us informally that it is difficult to distinguish an
element of G from an element of G^q. For the CL encryption scheme, the authors give an instantiation in class groups. We have K, an imaginary quadratic field of discriminant Δ_K, where Δ_K is divisible by the primes p and q, and then we can define a non-maximal order of discriminant q²·Δ_K. If we take the class group of this order, we can exhibit a subgroup of order q, which is the group where the discrete logarithm is easy. About security, the best known way of solving the hard subgroup membership problem reduces to the computation of the class number, and the best known algorithm for that, the index calculus method, has complexity L(1/2); for the usual groups for discrete logarithm or factorization the complexity is about L(1/3), so in this way we obtain shorter elements than when using Paillier. Given the framework, we can talk about our signing protocol. It is divided into three sub-protocols: an interactive setup, an interactive key generation and an interactive signing. In the first one, the setup, we have the ECDSA public parameters, which are the curve, the order and the generator, together with the security parameter; the final output, for all the parties, are the generators of the two specific subgroups F and G^q from the framework. In the key generation, all the parties take the elements from the setup phase, and then each of them chooses a key pair for the encryption scheme: a secret key sk_i and a public key, which is an element of G^q computed as g_q raised to sk_i; they also obtain the public keys of the other players. Moreover, they hold the additive shares x_i of the ECDSA signing key x, and they know the verification shares Q_j of the other players, Q being the verification key of the ECDSA protocol. The interactive signing protocol is more complicated, so we will explain it using an example with three players. Each of them has its own share of the secret key x, which is x_i,
and its own secret key for the CL encryption scheme. The first thing the players do is to choose two elements k_i and γ_i uniformly at random in Z_q, encrypt k_i under their own public key pk_i, obtaining a ciphertext of k_i, and compute Γ_i = γ_i·P. After that, each of them commits to Γ_i, and then they broadcast the encryption of k_i and the commitment to Γ_i; from now on, all the players know the encryptions of the k_i. In phase two, calling γ the sum of the γ_i and k the sum of the k_i, what the players want to compute is kγ and kx, in terms of the additive shares k_i, γ_i and x_i they hold, via p2p protocols. As I said, it is the same idea as in the Gennaro-Goldfeder protocol, where we want to mask the value of k. The goal for the players is to convert the cross terms of kγ and kx into additive shares that we call α, β, μ and ν. So we can describe how the p2p protocol works, taking for example players P1 and P2. They want to share the products γ_1·k_2 and k_2·x_1 as additive shares. The first thing is for P1 to sample two random values ν_{2,1} and β_{2,1} from Z_q, compute the elliptic curve point ν_{2,1}·P, take the encrypted value of k_2 received from P2 in the previous phase, and then, using the linearly homomorphic operations of the encryption scheme, compute encryptions of α_{2,1} and μ_{2,1}; it sends these ciphertexts together with the point to player P2. Player P2 can then decrypt to recover its additive shares α_{2,1} and μ_{2,1} and do a check on the point it received. Remember that the ciphertext of α_{2,1} is computed as an exponentiation of a ciphertext multiplied by the encryption of a random value, which is the opposite of P1's additive share; so if we add α_{2,1} and β_{2,1} we obtain γ_1·k_2, and the same holds for μ and ν. At the end of phase 2, after receiving the α, β, μ and ν values from the other
players, each player can compute δ_i and σ_i: δ_i is the sum of k_i·γ_i plus the α and β values given and received in the p2p protocols, and σ_i is the analogous sum of the μ and ν values plus k_i·x_i. Then in phase 3 they all broadcast the δ_i values, and in phase 4 they decommit the Γ_i computed in the first phase. From now on, all the players can compute R as the inverse of the sum of the δ_i multiplied by the sum of the Γ_i, and also the first component of R, which is r. Finally, in phase 5, they can jointly compute s as the sum of the shares k_i·H(m) + r·σ_i. Notice that here we have k and not the inverse of k, since we already used k^{-1} in the expression of R; so the actual nonce is the inverse of the k that was chosen. What makes the difference with the Gennaro-Goldfeder protocol is that we use another encryption scheme, which is not surjective, instead of Paillier, so we need to prove that a ciphertext sent by the first player in the p2p protocol is a valid encryption; and since we are working in groups of unknown order, proving this is really expensive. The advantage is that in phase 2 we have no range proofs and we only do efficient checks on the curve. So we introduce our second main contribution. As I said, we have the problem of proving that a CL ciphertext is well formed, and we give an efficient protocol for this which relies on two computational assumptions that are not new compared to previous work in the CL setting. There was already a solution for proving that a CL ciphertext is well formed, but the challenges a verifier can send are binary, so the protocol has to be repeated many times. The problem is that we have an expensive zero-knowledge proof of knowledge that needs to be repeated for each signature, and this is not reasonable. Our solution replaces the statistical security of that protocol with computational security, relying on the two
assumptions that I will now introduce, and it is much more efficient. We are now going to see how to prove that a ciphertext is well formed. A CL encryption of an element k is a pair of elements: the first is an element of G^q, namely g_q^r, and the second is pk^r multiplied by f^k, which has to be an element of G. But we have some difficulties. The first is that we don't know the order of G^q: it is s, but s is unknown. Moreover, the elements of the direct product G^q × F, which is G, are not efficiently recognizable, so the verifier cannot check whether the elements it receives come from G; and if c1 or c2 is not in G, we have an information leakage. The consequence is that the prover has to prove that he knows the randomness r and the encrypted element k. We improve on this by using a zero-knowledge argument of knowledge, while the previous solution with binary challenges was a zero-knowledge proof of knowledge. The reason is that we now use computational assumptions to make the protocol more efficient, since it has to be run for each pair of players in all the p2p protocols. To extend the challenge set we use one of our assumptions, the low order assumption, and the challenge-space size depends on the parameter of that assumption. The resulting protocol is complete, sound and zero-knowledge. The assumptions we use are the low order assumption, which tells us that it is hard to find a low order element in G, and the strong root assumption, which tells us that it is difficult to find roots of a given element, with the constraint that the root exponent is different from all the powers of two, since in class groups it is easy to compute square roots; it is similar to strong RSA. About the challenge space, its size is relative to the low order assumption parameter, and either a malicious prover actually knows r and k, or it can break at least one of the assumptions. Furthermore, our protocol needs to be run only once, thanks to these assumptions.
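Going back to the signing phase, the share-conversion bookkeeping can be checked in plain modular arithmetic. The sketch below omits the encryption layer entirely: the `mta` helper (a name of my choosing) just splits a product into additive shares, which in the real protocol is done under CL encryption with the well-formedness argument just described.

```python
import secrets

q = 2**127 - 1  # a prime standing in for the curve order (toy choice)
n = 3
rand = lambda: secrets.randbelow(q)

k = [rand() for _ in range(n)]  # nonce shares k_i
g = [rand() for _ in range(n)]  # mask shares gamma_i
x = [rand() for _ in range(n)]  # additive shares x_i of the signing key

def mta(a, b):
    """Split the product a*b into additive shares alpha + beta. In the
    real protocol the sender keeps beta and the receiver recovers alpha
    by decrypting a homomorphically transformed CL ciphertext."""
    beta = rand()
    return (a * b - beta) % q, beta

delta = [k[i] * g[i] % q for i in range(n)]  # will sum to k * gamma
sigma = [k[i] * x[i] % q for i in range(n)]  # will sum to k * x
for i in range(n):
    for j in range(n):
        if i != j:
            alpha, beta = mta(g[i], k[j])   # gamma_i * k_j = alpha + beta
            delta[j] = (delta[j] + alpha) % q  # receiver j gets alpha
            delta[i] = (delta[i] + beta) % q   # sender i keeps beta
            mu, nu = mta(x[i], k[j])        # x_i * k_j = mu + nu
            sigma[j] = (sigma[j] + mu) % q
            sigma[i] = (sigma[i] + nu) % q

K, G, X = sum(k) % q, sum(g) % q, sum(x) % q
assert sum(delta) % q == K * G % q  # sum of the delta_i equals k * gamma
assert sum(sigma) % q == K * X % q  # sum of the sigma_i equals k * x

# Phase 5: the shares s_i = k_i*h + r*sigma_i sum to s = k*(h + r*x),
# a valid signature for the nonce k^{-1} hidden inside R.
h, r = rand(), rand()
s = sum(k[i] * h + r * sigma[i] for i in range(n)) % q
assert s == K * (h + r * X) % q
```

The final assertion mirrors the remark in the talk: because R was built from (kγ)^{-1}·Γ = k^{-1}·P, the sum k·(h + r·x) is exactly the ECDSA value for the effective nonce k^{-1}.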
Finally, we are going to look at the efficiency of our protocol, comparing it with the two protocols from Lindell-Nof and Gennaro-Goldfeder. We see in the comparison that for low levels of security the Gennaro-Goldfeder and Lindell-Nof protocols are faster than ours in the key generation and signing protocols, while in terms of bandwidth we have an advantage in both key generation and signing; for high levels of security we also have faster signing. In conclusion, we have seen that an instantiation of the CL framework using class groups gives us a linearly homomorphic scheme which can use the order of the elliptic curve as the order of the message space; it is practical and low bandwidth. Furthermore, we have improved the zero-knowledge proof of knowledge, which is costly for a group of unknown order, using an efficient argument of knowledge. Our future work includes giving a threshold variant of the CL encryption scheme, proving simulation security for the full threshold protocol, using preprocessing to further improve efficiency, and applying the same techniques to the Lindell-Nof protocol, as we have done for Gennaro-Goldfeder.