Hello everyone, welcome to my talk. My name is Akira Takahashi. I'm currently a PhD student at Aarhus University in Denmark. In this video, I'm going to be talking about our recent work, "Two-Round n-out-of-n and Multi-Signatures and Trapdoor Commitment from Lattices." This talk is based on joint work with Ivan Damgård and Claudio Orlandi from Aarhus University and Mehdi Tibouchi from NTT.

Let me briefly introduce the topic and the background. As everybody probably knows, there's an ongoing NIST post-quantum crypto standardization process, and last year they announced the finalists. If you look at the lattice-based candidates, there are essentially two different approaches to achieving lattice-based signatures. The first one is called hash-and-sign, and Falcon is a concrete instantiation. The other one, which we are going to look at, is the Fiat-Shamir with aborts paradigm, and Dilithium is a concrete instantiation among the finalists. On the other hand, in recent years there has been renewed interest in multi-party signing protocols, in relation to, for example, the upcoming NIST standardization for threshold signatures, new applications to blockchain, and so on. If you look at the literature, there are many existing works on round-efficient n-party signatures in the discrete log setting. Since a Fiat-Shamir with aborts style signature has a structure very similar to Schnorr signing, the natural question here is: can we construct lattice-based round-efficient multi-party signing protocols by reusing the observations made in the discrete log setting? In our paper, we addressed this question.

So what is n-out-of-n signing? For simplicity, in this talk I'm going to focus on the two-out-of-two case. Here there are two parties involved in signing, and there is a single signing key that is not known in full to either of the parties.
This secret signing key is split into two shares. First, both parties agree on some message to be signed; then, after some interaction, they output a signature. Correctness should of course guarantee that the output message and signature verify under the corresponding public key.

So what about security? There are a few different ways to define security for n-out-of-n signing, but for example we can extend the existing unforgeability game in a straightforward fashion. Here we assume that one of the parties is corrupted, so the adversary obtains one share of the secret signing key. The adversary is then able to query the honest party with messages to be signed. After some signing queries, the adversary outputs a forgery on some message. The unforgeability requirement says that the output message and signature should not verify under the public key as long as that message has never been queried. This is relatively simple, and it is the security notion we want to satisfy when constructing the protocol.

Now, what about the Fiat-Shamir with aborts paradigm? Here I'm briefly going over the Dilithium identification protocol, which follows the standard three-round identification structure. There is a prover and a verifier. As usual, the prover first generates some randomness y from some distribution, either a Gaussian or the uniform distribution over some small range, depending on the instantiation. The prover then sends a commitment w, and after receiving a challenge from the verifier, the prover performs the so-called rejection sampling, because in the Fiat-Shamir with aborts paradigm the secret key, the randomness y, and the challenge are all relatively small. So in order to make the distribution of the response z independent of the secret signing key, you have to perform rejection sampling.
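To make the rejection-sampling step concrete, here is a minimal one-dimensional sketch of one attempt of a Fiat-Shamir-with-aborts identification round. This is an illustration only: scalars stand in for Dilithium's matrix A and vectors, and the constants GAMMA and BETA are made-up toy parameters, not the scheme's actual ones (only the modulus Q is borrowed from Dilithium).

```python
import secrets

# Toy one-dimensional Fiat-Shamir-with-aborts identification.
# Assumption: scalar arithmetic stands in for the matrix/vector
# operations of the real scheme; GAMMA and BETA are toy parameters.
Q = 8380417          # Dilithium's prime modulus, used here as a large prime
GAMMA = 2**17        # range of the masking randomness y
BETA = 256           # bound on |c*s|; rejection keeps |z| <= GAMMA - BETA

def keygen():
    a = secrets.randbelow(Q)          # public "matrix" A
    s = secrets.randbelow(5) - 2      # small secret, |s| <= 2
    t = (a * s) % Q                   # public key t = A*s (no error term in this toy)
    return (a, t), s

def prove_round(a, s, c):
    """One attempt of the 3-round protocol for a fixed challenge c.
    Returns (w, z) on success, or None if rejection sampling aborts."""
    y = secrets.randbelow(2 * GAMMA + 1) - GAMMA   # masking randomness
    w = (a * y) % Q                                # commitment (first message)
    z = y + c * s                                  # response
    if abs(z) > GAMMA - BETA:                      # rejection sampling:
        return None                                # abort unless z is in the safe range
    return w, z

def verify(a, t, w, c, z):
    # check the linear relation and that the response is small
    return abs(z) <= GAMMA - BETA and (a * z - c * t) % Q == w
```

Rejecting whenever z leaves the inner range is exactly what makes the distribution of z independent of s, at the cost of occasional restarts.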
After receiving the response z, the verifier checks that the verification equation A·z − c·t = w holds, and also that the norm of the response z is sufficiently small. As I mentioned, the Fiat-Shamir with aborts paradigm is somewhat similar to the Schnorr identification protocol, at least syntactically: you can see the correspondence by replacing the public matrix A with the base point and the randomness y with a nonce generated uniformly from the integers modulo the group order. But of course, in Schnorr identification there's no rejection sampling, so there is some difference here.

What about security? At a high level, the soundness of the Fiat-Shamir with aborts identification protocol can be argued as follows. Assume some cheating prover can correctly answer two distinct challenges c and c'. Then the corresponding verification equations must hold. Thanks to the LWE assumption, we can argue that the public key t is indistinguishable from a uniformly chosen module element. Then, using this cheating prover, we are actually able to find a non-zero solution to the SIS problem with respect to the random matrix A concatenated with the random public key t. This way we can reduce the soundness of the protocol to SIS and LWE. As for honest-verifier zero-knowledge, usually we are only interested in the non-aborting case, because in concrete applications like signatures or non-interactive zero-knowledge we don't have to argue security about the rejected transcripts. The usual non-aborting statistical honest-verifier zero-knowledge simulator first picks the challenge c and the response z, and only later determines the first message w; the resulting transcript is statistically indistinguishable from an actual non-aborting transcript.

So now let's talk about the actual two-party signing protocol. In this work, our results can be summarized as follows.
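The non-aborting simulator just described can be sketched in the same toy scalar setting: choose the challenge c and the response z first, then solve the verification equation for w. As before, all constants here are illustrative assumptions, not real parameters.

```python
import secrets

Q = 8380417              # toy prime modulus (borrowed from Dilithium)
B = 2**17 - 256          # toy norm bound accepted by the verifier (assumption)

def simulate_transcript(a, t, c):
    """Non-aborting HVZK simulator: sample the response z first,
    then derive the first message w from the verification equation,
    so that (w, c, z) verifies without knowing the secret key."""
    z = secrets.randbelow(2 * B + 1) - B     # z uniform over the accepted range
    w = (a * z - c * t) % Q                  # forces A*z - c*t = w to hold
    return w, z

def verify(a, t, w, c, z):
    return abs(z) <= B and (a * z - c * t) % Q == w
```

In the non-aborting case the real response z is (close to) uniform over the accepted range as well, which is why this simulated transcript is statistically indistinguishable from a real one.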
In the paper, we present two-round multi-party Fiat-Shamir with aborts signing with a full security proof in the classical random oracle model, and we present two instantiations: n-out-of-n signatures and multi-signatures. In this talk, I'm mainly talking about n-out-of-n signatures. Also, for simplicity, I'm going to assume that the number of parties is two, but the approach presented in this talk can essentially be generalized to an arbitrary number of parties by appropriately adjusting the parameters.

Here is a comparison with previous solutions. Before our work, there had been a couple of t-out-of-n lattice-based threshold signatures following either Fiat-Shamir with aborts or hash-and-sign. However, they require either FHE or generic multi-party computation in order to carry out the threshold signing operation. These building blocks are somewhat heavy, even though they allow you to achieve t-out-of-n with non-interactive signing. Our approach is different from these previous works: we only achieve the n-out-of-n case, but without requiring expensive building blocks like FHE or MPC we obtain protocols with low round complexity, either three rounds or two rounds, and the only additional building block is a homomorphic commitment or a trapdoor commitment. Also, for multi-signatures, there had again been a couple of proposals, but those protocols required at least three rounds of interaction; in our work, thanks to our technique, we are able to reduce the round complexity to two rounds.

So let's look at the actual construction. Our starting point is this bare-bones two-party signing based on Schnorr. This bare-bones protocol is very simple but actually not secure; I'm going to explain why soon. Here is the simple approach: in the first round, both parties generate their commitments as usual and then exchange them. They then take the sum of the commitments and hash the result into the challenge.
Then both parties generate their response shares, and after exchanging the responses, they output the sum of the commitments and the sum of the responses as a signature. This is very simple, and we can actually port all of these operations to the lattice setting. So here's a two-party Dilithium-style signing: now the public key is the random matrix A multiplied by the sum of the secret signing key shares. As usual, the parties generate their commitments, and the only additional operation is again rejection sampling. Both parties perform rejection sampling locally on their own shares; if the rejection sampling is successful, they output their response share, and otherwise they restart the protocol. This protocol actually satisfies correctness.

So why is it not secure? There are essentially two issues. The first one is the simulation of the rejected transcript (w, c). As I mentioned, this is usually not a problem for a single-user scheme or non-interactive zero-knowledge. However, it becomes problematic in the interactive setting, because in two-party signing we actually have to compute the sum of the commitments, w_1 + w_2, and this has to be done before computing the challenge. So this approach inevitably requires both parties to reveal the value of w before the rejection sampling. Of course, in the literature there's a standard trick that asks the prover to commit to the first message w and only reveal w if the rejection sampling is successful. But in our application this is not enough, because again you have to compute the sum of the w's before the challenge. So we somehow have to come up with a way to circumvent this issue. The second issue is that a malicious party can choose its first message depending on the honest party's first-round output, so we have to make sure that the malicious party cannot make its message depend on the other party's output. There is, of course, a naive approach to circumvent this issue.
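To see the round structure, here is a sketch of one attempt of the bare-bones (and, as just explained, insecure) two-party signing in the same toy scalar setting. All parameters are illustrative assumptions; the challenge hash and the small challenge set are toy choices, not the scheme's.

```python
import hashlib
import secrets

Q = 8380417           # toy prime modulus
GAMMA = 2**17         # masking range (assumption)
BETA = 512            # rejection bound, large enough for |c * s_i| (assumption)

def H(w, msg):
    # joint challenge from the summed commitments and the message (toy hash)
    h = hashlib.sha256(f"{w}|{msg}".encode()).digest()
    return int.from_bytes(h, "big") % 100    # small toy challenge set

def sign_attempt(a, s1, s2, msg):
    """One attempt of the insecure bare-bones two-party signing.
    Returns (c, z) on success, or None if either party's rejection
    sampling aborts (in which case the whole protocol restarts)."""
    # Round 1: each party samples masking randomness and a commitment share
    y = [secrets.randbelow(2 * GAMMA + 1) - GAMMA for _ in range(2)]
    w = [(a * yi) % Q for yi in y]
    # Both parties hash the SUM of the shares into the challenge --
    # this is why w must be revealed before rejection sampling.
    c = H(sum(w) % Q, msg)
    # Round 2: local rejection sampling on each response share
    z = [y[0] + c * s1, y[1] + c * s2]
    if any(abs(zi) > GAMMA - BETA for zi in z):
        return None
    return c, z[0] + z[1]

def verify(a, t, msg, sig):
    c, z = sig
    w = (a * z - c * t) % Q          # reconstruct the combined first message
    return abs(z) <= 2 * (GAMMA - BETA) and H(w, msg) == c
```

Note how the norm bound on the combined response doubles, which is the scalar analogue of the sqrt(n) growth discussed later for n parties.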
If we introduce an extra round for essentially committing to the commitment, then we can indeed construct an honest-party simulator. But of course this requires an additional round of interaction. And it's not only that the proof doesn't go through without it: we can actually describe a potential concurrent attack, which can be seen as a variant of the attack of Drijvers et al., the famous concurrent attack against two-round Schnorr multi-signatures.

So, to circumvent these two issues, our solutions can be summarized as follows. Instead of sending the commitment w directly, we employ a homomorphic commitment scheme to carry out the exchange in the first round. This way we can indeed hide the value of w until the rejection sampling is successful, while the homomorphic property still allows us to compute the sum of the first messages. For the second issue, we use a trapdoor homomorphic commitment in order to avoid the extra round; I'll explain a bit later how we do this.

But first, let's look at how we circumvent the first issue. We apply the homomorphic commitment to the first message w_i, and both parties exchange the commitments com_1 and com_2. Thanks to the homomorphic property, we can take the sum of the commitments in a meaningful way, and then derive the challenge. If the rejection sampling is successful, the parties open z_i and the randomness for the commitment; otherwise they reveal nothing. For the second issue, for now we employ the naive solution: each party first sends a hash of its commitment, and later checks that the revealed commitments com_1 and com_2 are indeed preimages of the hashes previously sent. This construction is actually secure. The verification works as follows: first, the verifier derives the challenge and then reconstructs the committed w.
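The property we need from the commitment is additive homomorphism: the sum of two commitments is a commitment to the sum of the messages, under the sum of the randomness. Here is a toy linear commitment over Z_q illustrating just that interface. This is an assumption-laden stand-in: the key (G1, G2) and modulus are arbitrary toy values, and this toy scheme is neither hiding nor binding, unlike the lattice-based commitment used in the paper.

```python
import secrets

Q = 8380417             # toy prime modulus
G1, G2 = 48271, 16807   # toy commitment key (assumption; a real instantiation
                        # would be a lattice-based homomorphic commitment)

def commit(w):
    """Commit to w with fresh randomness r. The map is linear in (w, r),
    so commitments add homomorphically."""
    r = secrets.randbelow(Q)
    return (G1 * w + G2 * r) % Q, r

def open_ok(com, w, r):
    # verify an opening (w, r) of the commitment com
    return (G1 * w + G2 * r) % Q == com

# Homomorphism used by the protocol:
#   commit(w1; r1) + commit(w2; r2) = commit(w1 + w2; r1 + r2)
# so the parties can derive the joint challenge from com_1 + com_2
# without either w_i being revealed yet.
```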
Then the verifier checks, as usual, that the norm of the response value is sufficiently small, and also that com is a correct commitment: com should open, with the combined randomness r, to the sum of the first messages w. Correctness indeed holds because of the linearity of the SIS function and the homomorphism of the commitment.

What about security? This is provably secure; here's a sketch of the simulation. If the protocol doesn't abort, the honest-party oracle can be simulated using the usual non-aborting honest-verifier zero-knowledge simulator. If the protocol does abort, then thanks to the hiding property of the commitment, the rejected com together with the challenge reveals nothing about the rejected w. This way we can easily argue the simulatability of the honest-party oracle. Moreover, in our paper we present a security reduction to LWE without relying on the forking lemma; this is made possible by using the existing technique of lossy identification.

What about efficiency? As mentioned, this approach doesn't require any expensive machinery like FHE, MPC, or even Gaussian sampling over lattices, because the underlying protocol is Fiat-Shamir with aborts: all we need is local Gaussian sampling over the integers. Of course, this to some extent sacrifices scalability. If you consider a general number of parties n, then because we take the sum of the responses, the Euclidean norm of the response z grows by a factor of the square root of the number of parties, which is why we have to adjust the parameters accordingly. Another issue is that we have to wait for all n parties to pass the rejection sampling simultaneously; if the number of parties is large, the probability that a single protocol run succeeds is exponentially small.
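Putting the pieces together, verification of the commitment-based signature (com, z, r) can be sketched as follows, continuing the toy scalar setting: derive the challenge from com, reconstruct the committed first message from the verification equation, and check the opening. All constants and the hash are illustrative assumptions.

```python
import hashlib

Q = 8380417                 # toy prime modulus
G1, G2 = 48271, 16807       # toy commitment key (assumption)
BOUND = 2 * (2**17 - 512)   # toy norm bound on the combined response

def H(com, msg):
    # toy challenge hash over the commitment and the message
    h = hashlib.sha256(f"{com}|{msg}".encode()).digest()
    return int.from_bytes(h, "big") % 100

def verify(a, t, msg, sig):
    """Verify a signature (com, z, r): derive the challenge c from com,
    reconstruct w = A*z - c*t, and check that (w, r) opens com."""
    com, z, r = sig
    c = H(com, msg)
    w = (a * z - c * t) % Q              # linearity of the SIS function
    return abs(z) <= BOUND and (G1 * w + G2 * r) % Q == com
```

Correctness is exactly the combination mentioned above: linearity of z ↦ A·z and the homomorphism of the commitment.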
This is why, in order to make the protocol practical, we either have to adjust the standard deviation depending on the number of parties, or execute sufficiently many protocol instances in parallel so that in at least one instance all parties simultaneously pass the rejection sampling.

So how do we achieve a two-round protocol? As mentioned earlier, the first round, committing to the commitment, looks a bit redundant. What if we just remove it? The protocol then looks much simpler: both parties generate their commitments and exchange them directly. However, if you try to give a security reduction, you run into an issue. In the standard security reduction for Fiat-Shamir-type signatures, you have to simulate the honest party's signing oracle. For this simulation, you first generate the response z and the challenge c, then determine the first message w, and accordingly you have to program the random oracle so that its output on the corresponding input is indeed the previously chosen challenge c. This is not an issue for a single-user scheme, but for an n-party scheme you actually fail to program the random oracle if the honest party sends out its commitment first: after the commitment is sent, the simulator doesn't know what commitment the adversary will send, so programming the random oracle requires a contribution from the adversary, and the simulation simply doesn't work. And it's not just a provable-security issue: we are again able to describe a potential concurrent attack, as a variant of the Drijvers et al. attack against Schnorr multi-signatures. To circumvent this, we essentially borrow the idea of hashing the message to be signed into the commitment key.
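The per-message commitment key idea is tiny in code: derive the key by hashing the message, so that in the security proof the reduction can program this random oracle with a trapdoored key for each message. A minimal sketch, where the hash function choice and the domain-separation prefix are my own illustrative assumptions:

```python
import hashlib

def commitment_key(msg: bytes) -> bytes:
    """Derive the commitment key from the message to be signed.
    The hash is modeled as a random oracle; in the proof, the reduction
    programs it to return a trapdoored key for each queried message."""
    # "ck|" is a hypothetical domain-separation prefix (assumption)
    return hashlib.shake_256(b"ck|" + msg).digest(32)
```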
So, to circumvent this simulation issue, we make use of so-called straight-line simulation with a trapdoor commitment. With a trapdoor commitment scheme, the commitment key generation additionally outputs an extra trapdoor td, and given this trapdoor, a commitment can be opened to any message. We exploit this useful feature. Here's how the simulation goes: the simulator sends out a fake commitment which is not yet associated with any message, and this fake commitment can later be equivocated to anything, depending on the derived joint challenge. More concretely, the simulator of course doesn't have any share of the secret signing key, but it does have the trapdoor. During the first round, it just sends the fake commitment com_1. After receiving the adversary's commitment share com_2, it derives the challenge and invokes the honest-verifier zero-knowledge simulator for the underlying protocol. Once the simulated first message is determined, the simulator equivocates the fake commitment to it. This is how the simulation works.

You also have to take care of hashing into the commitment key: as I said, we generate the commitment key for each message to be signed, and this requires additional random oracle simulation. For the simulation to work, basically for each message to be signed you invoke the trapdoor commitment key generation and then program the random oracle so that its output is the commitment key associated with that message. This essentially completes the proof.

So let's look at the final form of our two-round protocol. In this protocol, the commitment key is first derived from the message. Then both parties exchange the commitments com_1 and com_2 and perform rejection sampling. If that's successful, they open the commitment and send the response; otherwise, they restart.
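The commit-then-equivocate interface the simulator relies on can be illustrated with the earlier toy linear commitment. Important caveat: in this toy, anyone who can invert G2 mod Q can equivocate, so it is not binding at all; it only demonstrates the fake-commit / trapdoor-open API that a real lattice-based trapdoor commitment (as instantiated in the paper) provides, with the trapdoor restricted to the key generator.

```python
import secrets

Q = 8380417             # toy prime modulus
G1, G2 = 48271, 16807   # toy commitment key (assumption)

def fake_commit():
    """Simulator's round 1: output a commitment bound to no message yet,
    together with the state needed to equivocate it later."""
    u = secrets.randbelow(Q)
    return u, u          # (commitment, trapdoor state)

def equivocate(td, w):
    """After the joint challenge is known, open the fake commitment to
    the simulated first message w by solving for the randomness r."""
    return ((td - G1 * w) * pow(G2, -1, Q)) % Q

def open_ok(com, w, r):
    return (G1 * w + G2 * r) % Q == com
```

The straight-line simulator thus never needs to rewind the adversary to produce consistent openings; rewinding reappears only in the binding argument, as discussed next.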
This concludes the protocol. A bit more about security: a trapdoor commitment scheme is inevitably only computationally binding, and this seems to require some kind of rewinding technique. Indeed, in our security proof we had to rely on the forking lemma, which leads to a larger security loss than the lossy identification technique. Also, although I don't have time to talk about the trapdoor commitment scheme in detail, in our paper we give a concrete instantiation based solely on lattice assumptions.

To conclude, in this work we introduced multi-party Fiat-Shamir with aborts signing with low round complexity, yet without heavy primitives like FHE or generic multi-party computation. I didn't talk about the multi-signatures, but essentially you can easily extend this technique to multi-signatures by deriving a separate challenge for each signer. Thanks to this modification, we don't require a dedicated interactive key generation protocol, and the construction can be proven secure in the plain public-key model. We still have a couple of open questions. For example, this approach inevitably increases the norm bound for the output signature, so an interesting question is whether we can make the signature size less dependent on the number of parties. Also, our two-round protocol had to rely on the forking lemma for its security; can we give a tighter security reduction, as well as a security proof in the quantum random oracle model? These are very interesting follow-up questions. That's it from me. Thank you very much for your attention. If you have any questions, I'll be happy to answer.