So yes, basically what we're showing is how to make maliciously secure distributed prime generation an order of magnitude faster than the previous fastest semi-honest protocol. To give some numbers, we're talking 15 minutes for the previously fastest semi-honest protocol versus around 40 seconds for the new maliciously secure one.

All right. So today I will start with an introduction. Then, as motivation for how we get malicious security very fast, I will go through a semi-honest construction, and then talk about what we add on top to get a very fast maliciously secure construction. At the end, I will discuss the efficiency and the implementation a bit more.

So, just so we're all on the same page, let me start by saying very briefly what I mean by public-key encryption. I mean that we have a party, or server, that can generate a private key and a public key. The public key can then be made public, for example given to another person. That person can encrypt some message at a later point in time using the public key and send it back, and the server can then decrypt it and learn the message that was sent. That's the general case, except that since the public key is public, we can have many parties encrypting messages and sending them to the server.

By the distributed setting I mean that instead of a single server, we have several parties that jointly act as the server. These parties communicate to construct a sharing of the private key along with the public key. Again, the public key can be distributed, and some other party can encrypt a message with it and send it back to the two parties acting as the server. They can then run a decryption algorithm using their shares of the secret key.
In the end, they each learn a partial decryption, and these can be exchanged to learn the message that was sent.

All right. So why is this interesting? There are several cases where it makes sense to have a distributed private key in such a scheme. For example, it can be used as a gateway to distributed signature schemes, which is an end in itself. It also comes up in several MPC protocols, in particular when the public-key encryption is additively or multiplicatively homomorphic. And even more interesting, it can be used in a commercial setting where you want to put a hardware security module in the cloud: instead of having a box from Utimaco or Gemalto, you have two cloud servers that act as the constructor and keeper of the keys.

Okay. So in this specific work, we consider the public-key encryption scheme RSA. Why RSA? Well, there are a lot of reasons. It's tried and tested. It's very applicable in practice and used in a lot of places: TLS, PGP. Furthermore, there's a lot of previous work in this setting, which means the community is interested in it, and it also makes it more fun to see how fast you can get things going.

Just to recap, by RSA I mean that we have a public key N, which is the product of two large primes; a public exponent e, usually 3 or 2^16 + 1; and a private key d, which is the inverse of e modulo phi(N). In the distributed setting of RSA, we consider the same setup as previous work: we generate primes as we would in the standard RSA case, except that these primes are additively secret shared. So each of the two parties acting as the server, Alice and Bob, holds an additive share of a prime p and an additive share of a prime q.
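To make the key relations concrete, here is a toy Python sketch with tiny primes. This is illustration only: no secure computation happens, the primes are far too small, and the shares are reconstructed in the clear just to check the arithmetic.

```python
# Toy sketch: RSA where each prime is additively shared between Alice
# and Bob. Tiny primes, everything in the clear -- illustration only.
import random

p, q, e = 1019, 1151, 65537          # tiny primes; real ones are ~1024 bits

# Additive sharing: p = p_A + p_B and q = q_A + q_B
p_A = random.randrange(p); p_B = p - p_A
q_A = random.randrange(q); q_B = q - q_A

# Public modulus is the product of the summed shares
N = (p_A + p_B) * (q_A + q_B)
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)                  # private key: inverse of e mod phi(N)

# Sanity check: encrypt-then-decrypt round-trips
m = 42
assert pow(pow(m, e, N), d, N) == m
```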
These are multiplied together to construct the public key N. The private key is also additively shared, such that when Alice's share and Bob's share are added together, we get the inverse of e modulo phi(N).

All right. So now, how do we actually do this? We have two parties and we want something that is secret shared. If we look at our toolbox, and at a lot of the talks this week, we have a very nice tool for exactly this kind of thing, called MPC. So we could just have the parties pick random values, run Miller-Rabin to ensure primality, repeat as needed, and do it all inside an MPC computation. It would be nice if that were the case and we were done. Unfortunately, it's not, because Miller-Rabin is very inefficient to do in MPC: we are talking about exponentiations of very large numbers along with modular reductions and so on. So it's very hard to get it to work in a practical setting directly in MPC.

What we do instead, and what basically all previous work in this setting has done, is to run a few different phases that together end up with what we actually want. It starts with a candidate generation phase, where the secret shares of the primes are sampled and lightly weeded. Afterwards, the modulus is constructed in a secure manner. This modulus is then verified in some way to ensure that it is actually a product of two primes. And at the end, a phase is executed to construct the distributed keys.

To give a visual outline: we start with a whole bunch of random values, and in the candidate generation phase some get weeded out. The survivors are paired up to construct moduli. The moduli then get weeded out in turn, because they might not actually be products of two primes.
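The four phases can be mirrored by a purely local sketch. Everything below runs in the clear on one machine, whereas in the protocol each phase is a secure two-party computation over shared values; the 32-bit candidates are just to keep the example fast.

```python
# Local sketch of the pipeline: candidate generation with weeding,
# pairing into a modulus, and a primality check standing in for the
# distributed biprimality test.
import random

SMALL_PRIMES = [3, 5, 7, 11, 13, 17, 19, 23, 29, 31]

def candidate_gen(bits, count):
    """Phase 1: sample candidates = 3 (mod 4), weed by trial division."""
    out = []
    while len(out) < count:
        c = random.getrandbits(bits) | (1 << (bits - 1))
        c -= c % 4 - 3                      # force c = 3 (mod 4)
        if all(c % sp for sp in SMALL_PRIMES):
            out.append(c)
    return out

def is_probable_prime(n, rounds=20):
    """Miller-Rabin, standing in for the (distributed) primality checks."""
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2; r += 1
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def find_modulus(bits=32):
    """Phases 2-3: pair candidates and keep a modulus that is a product
    of two primes (the protocol tests the modulus, never the factors)."""
    while True:
        p, q = candidate_gen(bits, 2)
        if is_probable_prime(p) and is_probable_prime(q):
            return p, q, p * q

p, q, N = find_modulus()
assert N == p * q and p % 4 == q % 4 == 3
```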
In the end, we have one modulus left, and it gets split into two key shares, which is what the parties learn, and then we're done. So that was an introduction to what we're looking at and the general approach to the problem. Let me now be more specific about how we do this in the semi-honest setting.

We start by picking random shares, under the constraint that the candidate primes they sum to are congruent to 3 modulo 4. Then we execute trial division, based on ideas from the 90s, where for each small prime β we use 1-out-of-β OT to check that the sum of the shares is not divisible by β. This means we have Alice taking her share p_A modulo β and using it as the choice input to a 1-out-of-β random OT. Bob gets β random strings, and Alice gets the random string indexed by p_A mod β. Bob then looks up his string at index -p_B mod β and sends it to Alice, who compares whether the two are equal. If they are equal, then β divides p_A + p_B, so the candidate is definitely not prime and can be discarded. This is a very efficient way to weed out a lot of random numbers that are not prime.

Afterwards, when we have something that might be prime, we want to compute the modulus N, which means computing the product of the sums of the shares. This can also be done very efficiently using oblivious transfer, with a classic protocol by Gilboa. In this protocol we have two parties, one holding one factor and one holding the other, and what we want is an additive secret sharing of the product. The one party inputs each bit of her factor as the choice bit of a 1-out-of-2 OT, and the other party inputs a random number, and that random number plus his factor shifted appropriately, as the two OT messages.
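The equality condition behind the trial-division trick is easy to check in code. In this sketch the 1-out-of-β random OT is simulated by a local list of random strings; in the protocol, Alice of course only ever sees the single string selected by her choice.

```python
# Sketch of the OT-based trial-division check for one small prime beta.
# The 1-out-of-beta random OT is simulated by the local list `strings`;
# in the protocol Alice only learns the entry her choice selects.
import secrets

def trial_division_check(p_A, p_B, beta):
    strings = [secrets.token_bytes(16) for _ in range(beta)]  # Bob's OT outputs
    alice_str = strings[p_A % beta]     # what the OT hands to Alice
    bob_msg = strings[-p_B % beta]      # Bob sends the entry at -p_B mod beta
    # Equal strings  <=>  p_A = -p_B (mod beta)  <=>  beta divides p_A + p_B
    return alice_str == bob_msg         # True: candidate is composite, discard

assert trial_division_check(10, 11, 7)      # 10 + 11 = 21 is divisible by 7
assert not trial_division_check(10, 12, 7)  # 10 + 12 = 22 is not
```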
What they get back can then be combined, using linearity, into an additive secret sharing of the product of the two factors.

Once we have the modulus, we need to execute a biprimality test. For this we use some excellent work by Boneh and Franklin from 2001. I'm not going to go into the math here, but it basically involves some exponentiations that you need to repeat s times to ensure that the public modulus is actually a product of two primes, except with negligible probability. Each iteration can give a false positive with probability up to one half, which is why it needs to be executed many times. Finally, computing the actual additive shares of the keys can also be done quite efficiently, again using the approach of Boneh and Franklin.

So that was a brief outline of the semi-honest construction. As you might have noticed, a lot of what we do here is based on previous work, and that's completely intentional, because our main contribution is how we take this and make it maliciously secure. To give an outline, if we look at what can go wrong in the semi-honest protocol in case one of the parties acts maliciously, there are a few things. There's the issue of selective failure. There's the issue of a party not staying consistent with the prime shares it picked throughout the different stages of the protocol. And finally, but absolutely not least, there's the problem of correctness of the biprimality test.

So the question is how we can get all of this to work securely against a malicious adversary while paying essentially nothing. What we do is give the adversary slightly more power than we would normally allow, in a way that's basically useless to him. The idea is that the adversary is allowed to fail good candidates.
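For readers who do want a taste of the math: one iteration of the Boneh-Franklin test checks, for a random γ with Jacobi symbol 1 modulo N, whether γ^((N-p-q+1)/4) ≡ ±1 (mod N). Here is a local sketch; in the protocol N is public while p and q stay shared, and each party exponentiates using only its own shares.

```python
# One iteration of the Boneh-Franklin biprimality test, run locally.
import random

def jacobi(a, n):
    """Jacobi symbol (a/n) for odd positive n."""
    a %= n
    result = 1
    while a:
        while a % 2 == 0:               # pull out factors of two
            a //= 2
            if n % 8 in (3, 5):
                result = -result
        a, n = n, a                     # quadratic reciprocity flip
        if a % 4 == 3 and n % 4 == 3:
            result = -result
        a %= n
    return result if n == 1 else 0

def bf_iteration(N, p, q):
    """Always passes if N = p*q with p, q prime and p = q = 3 (mod 4);
    each iteration catches a bad N with probability at least one half."""
    while True:
        gamma = random.randrange(2, N)
        if jacobi(gamma, N) == 1:
            break
    return pow(gamma, (N - p - q + 1) // 4, N) in (1, N - 1)

p, q = 1019, 1151                       # both prime, both = 3 (mod 4)
assert all(bf_iteration(p * q, p, q) for _ in range(20))
```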
This means that even if we construct something where the candidate really is prime, or the modulus really is a product of two primes, the adversary, if he acts maliciously, is allowed to reject it. That doesn't give him much power, since the candidates picked in this process are all random. And since we want this protocol to run in, what should we say, fairly bounded time, if he does this too often the other party will simply abort. So he cannot do this superpolynomially many times.

The other thing we do to make this achievable very efficiently is to allow the adversary to learn a little bit of leakage on each of the prime shares. This is not actually a big issue, because the shares are random and very long, and we can argue that the leakage is basically constant, so it gives him no more than a constant advantage. Those are the main ideas for going from the semi-honest construction to the maliciously secure one. The contribution is in how we actually carry out these steps.

For selective failure prevention: when we use OT in the malicious setting, there is almost always a selective failure issue, where one party can input something malformed for choice zero and something correct for choice one, and then later, depending on whether the receiving party picked zero or one, learn some information about that party's input from whether the protocol aborts. We show how to prevent this very efficiently using a random linear encoding. Efficiently here means an additive overhead of s oblivious transfers, where s is the statistical security parameter. And that's actually not a lot in this case, because we are multiplying very large numbers, which already means we do several thousand OTs.
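For scale on those OT counts, the OT-based multiplication itself, the Gilboa-style protocol from the modulus construction, can be sketched with simulated OTs as follows (plain Python integers here; the protocol works over fixed-length values):

```python
# Gilboa-style multiplication from 1-out-of-2 OTs: Alice holds a, Bob
# holds b, and they end with an additive sharing of a*b. One OT per bit
# of a; the OTs themselves are simulated locally here.
import random

def gilboa_multiply(a, b, bits=32):
    share_A, share_B = 0, 0
    for i in range(bits):
        r = random.getrandbits(64)          # Bob's fresh pad for OT number i
        m0, m1 = r, r + (b << i)            # Bob's two OT messages
        choice = (a >> i) & 1               # Alice's choice bit: bit i of a
        share_A += m1 if choice else m0     # what Alice receives from the OT
        share_B -= r                        # Bob keeps the negated pads
    return share_A, share_B                 # share_A + share_B == a * b

a, b = 40503, 51241
s_A, s_B = gilboa_multiply(a, b)
assert s_A + s_B == a * b
```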
To ensure that a party commits to its input from the beginning, from when we construct the candidates all the way to the end when the keys are constructed, we add commitments to the shares and verify these at the end. Commitments are often quite expensive, so we come up with a new scheme which gives very cheap commitments, assuming we only ever need to open one of them. That's a bit of a weird primitive; it might be applicable in other situations, and I will go into detail with it in a moment. Finally, to ensure correctness of the biprimality test against a malicious adversary, we use the standard tool of turning it into a zero-knowledge proof.

Okay, so let me elaborate on the consistency issue. Basically, what we do is "commit", and notice the quotation marks, using AES. What this means is that the commitment is not in and of itself binding, but because it turns out we only need to open two commitments at the end, we can do a zero-knowledge verification ensuring that those particular commitments are correct, and that is all we need, since we allow the adversary to fail good candidates. The overall idea is that in a setup phase, Alice picks a random key and commits to it towards Bob, and they then execute a zero-knowledge proof that this commitment is correct. Afterwards, they use this key to commit to the shares. The zero-knowledge proof here is needed because our simulator must know whatever key the malicious party, Alice in this case, uses to commit to her shares, and we get extractability of that key from the zero-knowledge proof. This means that later on we can extract whatever she inputs to these commitments.
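To convey the flavor of these weakly binding commitments, here is a toy sketch. It is my own stand-in, not the paper's exact construction: HMAC-SHA256 plays the role of AES as the PRF, and nothing here is binding by itself; binding would come only from the later zero-knowledge check tying the key to the one committed in setup.

```python
# Toy sketch of "committing" with a PRF under a pre-committed key.
# HMAC-SHA256 stands in for AES; hiding holds, but binding would come
# only from a later zero-knowledge check on `key`.
import hashlib, hmac, secrets

key = secrets.token_bytes(16)                  # Alice's key, fixed in setup
key_commitment = hashlib.sha256(key).digest()  # stand-in setup commitment

def commit(key, index, share):
    """'Commit' to a 32-byte share: one-time pad with PRF(key, index)."""
    pad = hmac.new(key, index.to_bytes(8, "big"), hashlib.sha256).digest()
    return bytes(p ^ s for p, s in zip(pad, share))

def open_commitment(key, index, com):
    pad = hmac.new(key, index.to_bytes(8, "big"), hashlib.sha256).digest()
    return bytes(p ^ c for p, c in zip(pad, com))

share = secrets.token_bytes(32)
com = commit(key, 0, share)
assert open_commitment(key, 0, com) == share   # correct key opens correctly
```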
To verify the moduli, we execute a maliciously secure biprimality test. We follow the same steps as Boneh and Franklin, but we add on top the typical zero-knowledge structure: we pick some randomness and construct a challenge depending on it, where at some point the other party gets to pick a bit and either learns what this randomness was, or the randomness combined with the actual secret, the witness. Again, this means we need to repeat it s times to get soundness except with negligible probability. And because this zero-knowledge proof needs to be composable with the rest of the protocol, we also need to commit to the challenge and verify it later on, and this is again where we can use the AES-based commitments.

The upshot of all of this is that we give the adversary a lot of power, we allow him a lot of cheating during the protocol, and the important thing is that we must, at the end, ensure that whatever we accept has actually been computed correctly. So at the end we execute a zero-knowledge proof verifying that the shares that were used to construct the moduli are actually those that were committed to using the AES scheme, along with the challenges that were used in the biprimality test. And we note that since this zero-knowledge statement consists of basically AES circuits, we can prove it very efficiently using the garbled-circuit approach of Jawurek et al. from a few years back.
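The commit-challenge-response pattern being described is the classic sigma-protocol shape. As a self-contained illustration, here it is for Schnorr's proof of knowledge of a discrete log, which is not the biprimality relation used in the paper, just the same skeleton with a one-bit challenge (hence the s repetitions for soundness):

```python
# Commit-challenge-response skeleton, instantiated as Schnorr's proof
# of knowledge of a discrete log (NOT the biprimality relation; same
# shape, one-bit challenge, repeated s times for 2^-s soundness).
import random

P = 2**127 - 1           # a Mersenne prime; the toy group is Z_P^*
G = 3                    # fixed base for the illustration

w = random.randrange(1, P - 1)        # prover's witness
H = pow(G, w, P)                      # public statement: H = G^w

r = random.randrange(1, P - 1)        # prover commits to fresh randomness
A = pow(G, r, P)

c = random.randrange(2)               # verifier's challenge bit

z = (r + c * w) % (P - 1)             # response: randomness, or randomness + witness

# One verification equation covers both cases of the challenge bit
assert pow(G, z, P) == (A * pow(H, c, P)) % P
```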
So when we put all this together, we get our maliciously secure scheme, with basically the only overhead being the AES-based commitments, which are very light in the grand scheme of things. The biprimality proof must be maliciously secure, but this only needs to be done once for the entire protocol, not for each of the candidates, so it also adds only a constant overhead. The same goes for the final zero-knowledge proof: it's a little bit heavy, but again it's only done once, which means we get the maliciously secure protocol very cheaply on top of the semi-honest one.

We implemented this for constructing a 2048-bit RSA modulus. A little bit of detail on the implementation: we of course used AES-NI for AES, which is where we used a PRG; we used the OT extension of Keller et al. to implement all the OTs, which is more or less the only big cryptographic primitive other than AES used in the protocol; the zero-knowledge was done using the garbled-circuit approach; and most of the other primitives computed in the protocol are based on OpenSSL.

We ran some experiments on Azure. The numbers we get for a single thread are a lowest time of 56 seconds, a highest time of 598 seconds, and an average of 182 seconds. There's a very big variance here, and the reason is probably that this is an extremely random process: we don't know how quickly we will end up getting good values. This is based on an average over 30 executions, and it is also consistent with the big variance reported in previous implementations. I think the main result to highlight is that with an 8-threaded implementation we managed to get an average of about 41 seconds. The comparison is with work by Hazay et al. from 2012, where they report a
best time of 15 minutes for their semi-honest protocol, so we get a big improvement. When we look at where the time is actually spent, we see that the zero-knowledge parts take very little of the total time, as I argued: they are basically what gives us malicious security, yet they don't contribute much to the cost. The main cost is actually the construction of the moduli. I should mention that this also includes the selective failure prevention, which may account for a fair bit of that time, so this is definitely where we would like to shave some time off.

Some concluding remarks: we showed a new protocol for maliciously secure distributed RSA key generation in the two-party setting, where we get malicious security almost for free. It doesn't rely on specific number-theoretic assumptions, since basically everything is pushed into oblivious transfers. We also showed a proof-of-concept implementation. Among other things, we managed to use this weird AES-based construction for lightly extractable commitments of which only a few ever have to be opened, and we also showed a way of doing selective failure prevention when you use OT for multiplication of large numbers. So yeah, thank you for sticking around, and thank you for your attention.