Thanks for the introduction. So this is joint work with Guillaume Castagnos, Dario Catalano, Fabien Laguillaumie, and Federico Savasta. I'm going to explain how we build a generic construction for two-party ECDSA from hash proof systems, and how we efficiently instantiate this from class groups.

Before I go into the details, let me give you some intuition for why this work is of practical interest. ECDSA stands for Elliptic Curve Digital Signature Algorithm. It's a standardized digital signature algorithm which relies on elliptic curve cryptography, and it's the signature used in the Bitcoin cryptocurrency to validate transactions. In particular, this means that if someone steals your secret signing key, they can spend your bitcoins. So we have this single point of failure, which we'd like to avoid. And this is where a distributed version of ECDSA saves the day. By sharing the key among multiple devices, we not only reduce the risk of key theft, but we also enable cryptocurrency custody solutions where you need multiple parties to cooperate in order to perform sensitive operations.

We focus on the two-party setting. This would allow you, for instance, to share your secret key between your phone and your laptop. So the secret signing key is shared between two parties, P1 and P2, such that collaboratively P1 and P2 can sign any message, but alone, neither party should be able to forge signatures or learn anything about the reconstructed secret key.

For other signature algorithms, efficient solutions have been around for a long time. In particular, for Schnorr signatures, whose elliptic curve variant is very similar to ECDSA, efficient solutions have been around since the 90s. But devising a two-party ECDSA scheme has proved much more challenging, and let me give you some intuition why. So let's compare the Schnorr signing algorithm to the ECDSA signing algorithm.
In both schemes, the public parameters are the group of points G of an elliptic curve, of prime order Q, generated by P. The secret key is a random X sampled from Z_Q, and the public key is X multiplied by the generator P. As you can see, in the Schnorr signing algorithm all the steps are linear. The only nonlinear step is the hashing of the message, but since both parties know the message, this isn't going to be a problem. However, if we look at the ECDSA signing algorithm, things are very similar up until we compute S here, where we need to multiply by the inverse of K. And computing this inverse of K is what makes things really complicated.

So imagine we wanted to just additively share X and K. For Schnorr, everything works fine: each party can sample a share of X and a share of K, and they can each compute a share of the signature, which they then just need to add up to get the overall signature. On the other hand, for ECDSA, it's really unclear how we can efficiently compute the inverse of K from additive shares of K.

Before I go any further, let me give you an idea of where we're heading. I'll first talk about previous work in this field, in particular that which we build upon, and I'll outline the drawbacks that these works have, in particular the reliance on a non-standard interactive assumption. I'll explain how we remove this assumption by using hash proof systems, and I'll present the generic construction that we give and prove its security. Finally, I'll explain how we instantiate this generic construction from class groups, which allows us to remove range proofs and significantly improve our communication cost.

So, previous work. In gray on this timeline there's some work in the full threshold case. Some great work has been done in this field recently, but since, once restricted to the two-party setting, these yield less efficient protocols, I won't go into the details here.
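The linearity contrast described above can be sketched numerically, working only over the scalar field Z_Q and omitting the curve points entirely. This is a toy sketch: the prime, the share values, and the function names are illustrative, not part of any real protocol.

```python
import random

# Toy sketch: additive sharing works for Schnorr because signing is linear
# in x and k; the curve points are omitted and q is just a stand-in prime.
q = 2**127 - 1  # a Mersenne prime standing in for the curve order

def schnorr_share(x_i, k_i, c):
    # each party's share of the Schnorr value s = k + c*x: purely linear
    return (k_i + c * x_i) % q

x1, x2 = random.randrange(q), random.randrange(q)  # additive shares of x
k1, k2 = random.randrange(q), random.randrange(q)  # additive shares of k
c = random.randrange(q)                            # message hash, known to both

s = (schnorr_share(x1, k1, c) + schnorr_share(x2, k2, c)) % q
assert s == ((k1 + k2) + c * (x1 + x2)) % q        # shares simply add up

# ECDSA instead needs s = k^{-1} * (h + r*x) mod q, and inversion is not
# linear: in general pow(k1+k2, -1, q) != (pow(k1,-1,q) + pow(k2,-1,q)) % q
```

The final comment is the whole difficulty: there is no local computation on k1 and k2 alone that yields shares of the inverse of k1 + k2.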
At IEEE S&P 2018, Doerner et al. put forth a two-out-of-N scheme which is fast, but it relies on oblivious transfer, so it has quite a high communication cost, and we wanted to avoid that. The work that we build upon essentially started at Crypto 2001 with MacKenzie and Reiter. They had the idea of multiplicatively sharing X and K, and then they use the linearly homomorphic properties of the Paillier encryption scheme in order to reconstruct the signature. The problem with their work is that for each signature, they need to perform expensive zero-knowledge proofs of knowledge in order to prove that the ciphertexts are well-formed.

Much more recently, at Crypto 2017, Lindell came up with a great idea which removes all expensive zero-knowledge proofs from the signing algorithm, so that they're only done once at key generation. This is a great improvement, but there are still some drawbacks in his work. In particular, due to the fact that the Paillier encryption scheme has a composite order message space, whereas the elements we're going to be encrypting live mod Q, he needs to use range proofs. He also needs to introduce artificial aborts in his security proof. And when proving security in the simulation-based model, he actually introduces a very ad hoc interactive assumption, which I'll talk a bit more about later.

Since we build upon Lindell's work, I'll explain at a high level how his protocol works. Recall that our problem is that of jointly computing S. P1 and P2 each have a multiplicative share of X and K, and they can set up the public key Q and the randomness R via simulatable Diffie-Hellman key exchanges. And if we call this part of the message here S', notice that all operations relative to X1 are linear. So if P2 has an encryption of X1 that was sent to him by P1, he can compute an encryption of S' using the linearly homomorphic properties of Paillier.
Then if he sends this encryption of S' back to P1, P1 can decrypt, multiply by the inverse of his share of K, and he gets S. And it gets better, because since X1 is sampled at key generation, P1 is going to send an encryption of X1, a proof that what he encrypted is indeed the same X1 as that used to compute Q, and a range proof, to P2, but he only needs to do these proofs once, at key generation. He doesn't have to do anything afterwards. Now, every time P1 and P2 want to collaboratively sign a message, P2 can just compute an encryption of S' using his freshly sampled K2 and send this encryption of S' to P1, who can decrypt to recover S', multiply by the inverse of his share of K, and he gets the signature. And P2 doesn't have to prove that he performed the correct homomorphic operations, because P1 can just use the public verification algorithm to check that the signature is valid. If it's valid, then he just outputs this signature; otherwise he aborts.

Okay, so this is great, but there are still some issues, in particular due to the fact that we're using Paillier. In the Paillier cryptosystem, the challenger isn't allowed to know the secret key; otherwise the underlying assumption no longer holds. And so in the security proof, when we're simulating P1 against a malicious P2, we can't actually decrypt to check whether the signature is valid, and we can't check that P2 performed the correct operations. In the game-based proof, the way Lindell deals with this is that he just guesses if and when player two cheats. So we have a security loss which is proportional to the number of signatures that our ECDSA adversary is allowed to query from its oracle. In the simulation-based proof, this guessing doesn't work.
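As a rough sketch of the homomorphic evaluation just described, here is a toy Paillier instance. Everything is illustrative: the parameters are tiny, the shares are hypothetical fixed values, and the blinding term rho*qc follows Lindell's trick of masking the plaintext, needed because Paillier's message space is mod N rather than mod the curve order.

```python
import math
import random

# --- toy Paillier (tiny parameters, illustration only) ---
p, q_p = 2**31 - 1, 2**61 - 1           # Mersenne primes
n, n2 = p * q_p, (p * q_p) ** 2
lam = math.lcm(p - 1, q_p - 1)

def enc(m):
    r = random.randrange(1, n)
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    # L(c^lam mod n^2) * lam^{-1} mod n, with L(u) = (u - 1) // n
    return ((pow(c, lam, n2) - 1) // n) * pow(lam, -1, n) % n

qc = 101                                 # toy "curve order" (prime)
x1, x2 = 17, 29                          # multiplicative shares of the key
k1, k2 = 31, 47                          # multiplicative shares of the nonce
mprime, r = 88, 55                       # message hash and x-coordinate of R

# Key generation: P1 sends c_key = Enc(x1) (plus its proofs) once.
c_key = enc(x1)

# Signing: P2 homomorphically computes an encryption of
# s' = k2^{-1} * (mprime + r*x1*x2) mod qc, blinded by rho*qc.
k2inv = pow(k2, -1, qc)
rho = random.randrange(2**40)
c_s = (pow(c_key, (r * x2 * k2inv) % qc, n2)
       * enc((k2inv * mprime) % qc + rho * qc)) % n2

# P1 decrypts, reduces mod qc, and multiplies by the inverse of its share.
s = pow(k1, -1, qc) * (dec(c_s) % qc) % qc
assert s == pow(k1 * k2, -1, qc) * (mprime + r * x1 * x2) % qc
```

Note how the range issue the talk mentions shows up even here: the correctness of the mod-qc reduction after decryption depends on the blinded plaintext staying well below N, which is exactly what the range proofs enforce against a malicious party.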
So Lindell introduces an interactive and non-standard assumption, which basically states that security for the Paillier encryption scheme still holds even if the adversary has access to an oracle which tells you if a given ciphertext is a linear combination of the challenge ciphertext. And the other problem is that, as mentioned earlier, the Paillier message space is of composite order, whereas we're encrypting elements mod Q, and so we need range proofs to ensure that no wraparounds occur.

In our work, we wanted to provide a two-party protocol for ECDSA which is efficient both in terms of computational complexity and in terms of bandwidth, which doesn't require any non-standard interactive assumptions, and which has a tight security proof. To this end, we need a linearly homomorphic encryption scheme where security doesn't rely on the challenger not knowing the secret key. And if we can further have an encryption scheme where the message space is of prime order, then we can remove the range proofs. We achieve this by using linearly homomorphic encryption schemes from hash proof systems, and when we instantiate this generic construction with a hash proof system from class groups, we can also remove the range proofs.

So let's first see how we remove this interactive assumption. We do this using hash proof systems. Hash proof systems were introduced by Cramer and Shoup at Eurocrypt 2002 as a means of efficiently constructing both chosen-plaintext and chosen-ciphertext secure encryption schemes. Security in this setting relies on the hardness of a subgroup membership problem. We have a finite abelian group X which contains a subgroup L defining an NP language, and there's a witness set W associated with this language. The hardness of the problem requires that, given a random element sampled from X, it's hard to tell if it's in the language or not.
In this context we have two hashing algorithms: one which works over any element in the whole group X, and which takes as input a secret key and hashes an element of X; and another which takes as input a public key, an element in the language, and the associated witness, and outputs a hash. Correctness imposes that both algorithms evaluate to the same value over elements in the language.

From this they devise encryption schemes. To encrypt, basically you just sample a random element of the language with its associated witness, you use the public projective hashing algorithm to compute a hash of this element, and you use this to mask your encoding of the message. I've encoded the message in the exponent of F because we want a linearly homomorphic encryption scheme, but the idea is just that you're masking the message. Then you return your masked message and the element of the language, but not the witness. And now the decryptor, who knows the secret key, can compute the same hash value using the secret key and the element X, and can remove this mask to recover the message.

Clearly, knowing the secret key here doesn't actually help solve the underlying subgroup membership problem: with the secret key you can evaluate the hash function over any element in X, but that doesn't help you tell if a random element of X is in the language or not. By contrast, in the Paillier cryptosystem, which relies on the decisional composite residuosity assumption, knowing the factorization of N actually makes the problem easy. So if we use an encryption scheme from hash proof systems, our simulator can use the secret key, and it won't compromise security.

So now I can present our two-party protocol. It's very similar to Lindell's, except for the aforementioned changes. Player one and player two each sample a random share of X.
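Before continuing with the protocol, here is a minimal sketch of the two hashing algorithms and the resulting encryption scheme, using the classic DDH-based hash proof system over a small prime-order group. All sizes are toy-sized, the variable names are illustrative, and the brute-force discrete log in decryption stands in for the easy-discrete-log subgroup used in the actual class-group instantiation.

```python
import random

# Toy DDH-based hash proof system (projective hashing a la Cramer-Shoup)
# over the subgroup of squares of a small safe-prime field.
p = 10007                      # safe prime: p = 2*q + 1
q = (p - 1) // 2               # prime order of the subgroup of squares
g1, g2 = 4, 9                  # two generators of the order-q subgroup
f = g1                         # message is encoded "in the exponent" of f

# secret hashing key and public projection key
a, b = random.randrange(q), random.randrange(q)
hp = (pow(g1, a, p) * pow(g2, b, p)) % p

def hash_sk(u1, u2):
    # private hashing: works on ANY pair (u1, u2) in the group
    return (pow(u1, a, p) * pow(u2, b, p)) % p

def hash_pk(w):
    # public projective hashing: needs the witness w for (g1^w, g2^w)
    return pow(hp, w, p)

def encrypt(m):
    w = random.randrange(q)                     # witness for a language element
    u1, u2 = pow(g1, w, p), pow(g2, w, p)
    return (u1, u2, pow(f, m, p) * hash_pk(w) % p)

def decrypt(c):
    u1, u2, c3 = c
    fm = c3 * pow(hash_sk(u1, u2), -1, p) % p   # remove the mask with sk
    for m in range(1000):                       # brute-force the small dlog
        if pow(f, m, p) == fm:                  # (easy by construction in the
            return m                            #  class-group instantiation)

# correctness: both hash algorithms agree on language elements
w = random.randrange(q)
assert hash_sk(pow(g1, w, p), pow(g2, w, p)) == hash_pk(w)
assert decrypt(encrypt(42)) == 42
```

The scheme is linearly homomorphic: multiplying two ciphertexts componentwise adds the underlying messages, which is exactly the property the protocol exploits.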
They perform a simulatable Diffie-Hellman key exchange to set up the public key Q. Then P1 samples the secret and public keys for the encryption scheme from hash proof systems. It encrypts X1 and sends this encryption of X1 along with a proof that it knows the encrypted value X1 and the randomness used for encryption, and that the encrypted value is indeed the same X1 as that used to compute Q. Both parties store the public key Q and their share of X; P1 also stores his decryption key, and P2 also stores the encryption of X1 that it got from P1. Then, in order to sign a message, each party samples its share of K, and they perform a simulatable Diffie-Hellman key exchange to set up the randomness R. P2 can compute an encryption of S' and send this to P1; P1 decrypts, multiplies by the inverse of his share of K, and verifies the signature: if it's valid, it outputs it; otherwise it aborts.

Okay, so how do we prove security in this setting? For a two-party ECDSA protocol, what we do is demonstrate that if a party alone can forge signatures, then a simulator, which is going to simulate the environment for this corrupted player, can output a forgery for standard ECDSA. And so we reduce the security of the two-party protocol to that of standard ECDSA. In standard ECDSA, a forger gets as input a public key Q, which is X times P, and it has access to an oracle which will sign messages of its choice; it then has to output a message and a signature which it didn't get from its oracle. So if I denote Pi* our corrupted party in the two-party protocol, then if our simulator can set up the same public key Q as it got as input from its challenger, every time the corrupted player asks to collaboratively sign a message with the simulator, it can just request a signature on this message from its oracle, and then simulate the signing phase so as to output the same signature as it received from its oracle.
And now if our party Pi* outputs a forgery for the two-party protocol, since they set up the same public key, the simulator can use this forgery as its own forgery, and it has broken standard ECDSA.

I'll now justify the security of our scheme. I'll only talk about the part where our techniques kick in, which is when we're considering a corrupted player two, so that we need to simulate player one. The simulator gets as input this public key Q. It can simulate the Diffie-Hellman key exchange with the corrupted player two, and from this simulated exchange it can extract the value X2 that was input by player two. Then the simulator samples a secret key and a public key for the encryption scheme, and notice that it doesn't actually know the value X such that Q is equal to X times P. So it doesn't know the X1 it should be sending an encryption of to P2. Instead, it just samples a random X1*, encrypts this value, and sends it along with a simulated proof to the corrupted player two, and then they store the values that they want to store. This is where the indistinguishability of the encryption scheme is important, because our simulator is just sending the encryption of a random value.

Next, to simulate the signing step: P2 asks to sign a message M. The simulator requests a signature on this message from its oracle. From this signature it can extract the randomness R used for the signature. Then it can perform the simulated Diffie-Hellman key exchange with the corrupted player two in order to set up the same randomness R, and it extracts the value K2 that was input by P2. And now, when it gets the encrypted value C' from the corrupted player two, it can decrypt using the secret key, which it is now allowed to use, and check that player two performed the correct operations. If so, it returns the signature that it received from its oracle, and if not, it aborts.
Thanks to this simple change, we don't need to guess if player two cheats, and we don't need to use any non-standard or interactive assumptions. And that's it.

So now that I've shown how to get rid of the strange assumption, let's see how we can remove range proofs. To do this, we need an encryption scheme which relies on hash proof systems and which has a prime order message space, where we can actually choose this order to be the order of the group of points of the elliptic curve used for ECDSA. This isn't common, but we can achieve it from the framework introduced by Castagnos and Laguillaumie at CT-RSA 2015: that of a group with an easy discrete logarithm subgroup. So we have a cyclic group G of order Q times S, where the GCD of Q and S is one and Q is a large prime; a subgroup F of G, generated by little f, which is of order Q; and another subgroup G^Q, which consists of the Q-th powers in G and is of order S. And we require that the discrete logarithm problem be easy in F.

In this framework, Castagnos, Laguillaumie and myself introduced the HSM hard subgroup membership problem, which essentially states that it's hard to distinguish random elements of the group G from random elements of the subgroup G^Q. Comparing to the framework of Cramer and Shoup that I talked about earlier, here the language is G^Q and the whole group is G, and a witness for an element X in the language is a W such that X is the generator of G^Q raised to the power W. So we can create a linearly homomorphic encryption scheme in this setting, and the secret key doesn't help distinguish Q-th powers in G.

Castagnos and Laguillaumie also provided a concrete instantiation of this framework from class groups, where K is an imaginary quadratic field of discriminant delta_K.
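Before the class-group details, here is some intuition for what an "easy discrete logarithm subgroup" looks like. This toy lives in Z_{N^2}* rather than in a class group, so it mirrors the structure only (its subgroup has composite order N, whereas the class-group instantiation gives a subgroup F of prime order Q): the element f = 1 + N generates a subgroup in which f^m = 1 + m*N (mod N^2), so taking a discrete log reduces to a subtraction and a division.

```python
# Toy "easy discrete logarithm subgroup" inside Z_{N^2}^*: illustration of
# the structure only, not of the class-group construction or its security.
N = 101 * 103
f = 1 + N

def easy_dlog(x):
    # f^m mod N^2 equals 1 + m*N (binomial theorem), so dlog is a division
    return ((x - 1) // N) % N

m = 4242
assert pow(f, m, N * N) == 1 + m * N    # holds since m < N
assert easy_dlog(pow(f, m, N * N)) == m
```

The point of the class-group instantiation is to obtain exactly this kind of easily invertible exponentiation, but inside a group whose other subgroup is believed hard, and with F of prime order Q matching the curve order.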
If we choose this discriminant to be divisible by Q, then, denoting O_delta the non-maximal order of discriminant Q squared times delta_K, we can exhibit a cyclic subgroup of order Q in the class group of this non-maximal order where the discrete logarithm problem is easy. And in this setting, breaking the HSM assumption that I talked about earlier reduces to the problem of computing the class number, and the best known algorithms for this problem have complexity L(1/2), as opposed to L(1/3) for factorization or for discrete logarithms in traditional finite fields. In particular, this means that we can use shorter elements than those used in the Paillier encryption scheme, which further reduces communication cost.

We implemented both our protocol and that of Lindell to compare speed and bandwidth. In terms of speed, for lower levels of security we're slower, but the more you increase security, the better we perform; and in terms of bandwidth, we do consistently better.

To conclude, we provided a generic two-party ECDSA protocol from hash proof systems, without any interactive or non-standard assumptions. One thing that I didn't really mention is that the zero-knowledge proof isn't completely generic: it depends somewhat on the instance of hash proof system that we use. And we provide an instantiation in class groups which is practical and has very low bandwidth. We're currently working on extending this to the full threshold setting. So thank you for your attention, and if you have any questions...

All right, thank you. We have time for only a quick question, if you have one, at the mic, while the next speaker sets up. Okay, there's no question. Thank you again.