Hi everyone, I am Michele Ciampi and I'm going to talk about Oblivious Transfer from Trapdoor Permutations in Minimal Rounds. This is joint work with Arka Rai Choudhuri, Vipul Goyal, Abhishek Jain and Rafail Ostrovsky. With secure multi-party computation, what we want to do is evaluate a two-input function in a secure way. More precisely, we consider the setting where we have two parties, Alice and Bob, who each have a private input and want to evaluate a function f over these private inputs. They don't trust each other, and neither party wants to give its own input to the other. One trivial way to solve this problem would be to assume that there is a trusted party to which both Alice and Bob can send their inputs. This trusted party is then delegated to compute the function f over those inputs and give back the output of the computation to Alice and Bob. But it's not realistic to assume the existence of such a trusted party, since this central node might fail or might collude with either Alice or Bob. So what we want to do is design a protocol, described by a set of messages that Alice and Bob exchange over a communication channel, such that at the end of this communication Alice and Bob get the correct output of the computation. Moreover, in the case where one of the two parties is corrupted, the corrupted party should not learn more than what can be inferred from the output of the function itself. All the protocols we propose in our paper are secure in the simulation-based paradigm and are proven secure with respect to black-box adversaries: the simulator can only query the adversary, and it does not know the code of the adversary. In the first and main part of the talk, we will focus on a very specific functionality, called the oblivious transfer functionality.
The idea here is that the input of Bob consists of two strings (or two bits, for simplicity), and the input of Alice is just one bit b. The functionality that we want to compute returns to Alice only the string of Bob with index b, in this case c_b. Unfortunately, we know that we cannot realize this functionality in the plain model with information-theoretic security, so we need to rely on computational assumptions. To be precise, we can have statistical security against one party, but we cannot get statistical security against both parties. So one way to circumvent this impossibility is to rely on computational assumptions, and what we consider in this work is the assumption that trapdoor permutations exist. A trapdoor permutation is, first of all, a permutation, as you can imagine, and it is described by a set of algorithms. The first is a generation algorithm that takes as input the unary description of the security parameter and returns the description of a function g and a value t, which we call the trapdoor. The properties of trapdoor permutations are that, given the function g and a value x, it is easy to compute g(x), but it is hard to invert a randomly sampled element y. In addition, a trapdoor permutation is equipped with a special algorithm that we call Trap, which, on input any value y and the trapdoor, can efficiently find the preimage of y. In the work of Even, Goldreich, and Lempel in 1982, the authors show how to obtain a three-round oblivious transfer protocol that is secure against semi-honest adversaries. Moreover, this protocol can be proven secure against malicious senders under the assumption that the trapdoor function is certifiable. That is, it is required that any party, by just inspecting the description of the function, can tell whether the function is indeed a permutation or not.
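To make this interface concrete, here is a toy Python sketch of the three algorithms (generation, forward evaluation, and Trap), instantiated with a textbook RSA permutation over Z_n. The tiny fixed primes and all low-level choices here are my own, purely for illustration; they are not the construction from the paper and offer no real security.

```python
import math

def tdp_gen():
    # Toy parameters; a real instantiation samples large random primes.
    p, q = 1009, 1013
    n, phi = p * q, (p - 1) * (q - 1)
    e = 17                       # public exponent, coprime with phi
    assert math.gcd(e, phi) == 1
    t = pow(e, -1, phi)          # the trapdoor
    g = (n, e)                   # public description of the permutation
    return g, t

def tdp_eval(g, x):
    # Easy direction: computing g(x) needs no secret.
    n, e = g
    return pow(x, e, n)

def tdp_trap(g, t, y):
    # With the trapdoor, inverting any y is easy.
    n, _ = g
    return pow(y, t, n)
```

Since RSA is a permutation over all of Z_n, `tdp_trap(g, t, tdp_eval(g, x))` returns x for every x in the domain, which is exactly the correctness property of Trap described above.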
Katz and Ostrovsky in 2004 proved that five rounds are necessary to realize any non-trivial functionality, and they matched this lower bound by proposing a two-party computation protocol relying only on certifiable trapdoor permutations. In the work of Ostrovsky, Richelson, and Scafuro in 2015, the authors show how to achieve the same result, but this time using the underlying cryptographic primitive in a black-box way. Trapdoor permutations have also been used to realize other interesting primitives like non-interactive zero-knowledge, and Bellare and Yung in 1993 showed how to construct non-interactive zero-knowledge relying on trapdoor permutations without requiring this form of certifiability. This work was later improved and extended by the work of Canetti and Lichtenberg in 2018. So what we do in this work is try to understand whether we can do the same for the case of oblivious transfer and, more generally, two-party computation; that is, we show how to obtain a round-optimal two-party computation protocol without requiring the trapdoor permutation to be certifiable. So, assuming that we know how to generate a trapdoor permutation, EGL82 showed that you can realize the oblivious transfer functionality in the following way. Bob, who will act as the sender, samples a trapdoor permutation together with the trapdoor, and sends the description of the function to Alice. For simplicity, in this example I am assuming that the input of Alice is one, but you will see that it is basically the same when the input of Alice is zero. So, given that the input of Alice is one, what does Alice do? Alice samples a random element x1 from the domain of the permutation and evaluates g(x1); she also samples another element y0 directly from the co-domain, and she sends y0 and y1 to Bob. The observation here is that Alice has sampled these elements in such a way that she knows the preimage of only one of them. At this point Bob, who has the trapdoor, can invert both y0 and y1.
He gets x0 and x1, and he computes a one-time pad of his own secrets: the first secret is encrypted, let's say, using x0, and the second secret using x1, and he sends these encryptions back to Alice. Now the observation is that, since Alice knows x1, she can decrypt c1: she can just remove the pad from this encryption and recover the second secret. Of course, it is easy to see that if Alice, the receiver, is corrupted, then this protocol is not secure anymore, because nothing in the protocol prevents Alice from sampling both x0 and x1 and then evaluating the function on both, thus getting y0 and y1 such that she can then get both secrets from Bob. The nice thing about this protocol is that if Bob instead is corrupted, then he cannot get any information about Alice's input. Indeed, if g is a permutation, the only information the adversary sees is y0 and y1, and y0 and y1 are distributed in exactly the same way, so their distribution is independent of the input of Alice; a corrupted Bob cannot infer anything about Alice's input. But what we should observe here is that this claim is true only if g is a permutation. Indeed, if g is not a permutation, we might be in a situation where Alice can figure this out, in which case she could refuse to participate in the protocol. If a trapdoor permutation has this property, we say that the trapdoor permutation is certifiable, because just by inspection Alice can efficiently tell whether g is a permutation. But what if the function does not have this property? Indeed, we know that there are trapdoor permutations that are not certifiable. So, as I said, in the case where g is a permutation we are fine: in this example g is a permutation, and as I argued before, there is nothing the adversary can do to infer the input of Alice. But now consider the following: consider the case where there are collisions.
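Putting the three messages together, the semi-honest protocol just described can be sketched in Python. I instantiate the trapdoor permutation with a toy RSA permutation and pad with the low bit of each preimage instead of a proper hard-core predicate; both are my own simplifications for illustration, not the scheme as actually analyzed.

```python
import random

def tdp_gen():
    # Toy RSA permutation over Z_n; the second output is the trapdoor t.
    p, q, e = 1009, 1013, 17
    n, phi = p * q, (p - 1) * (q - 1)
    return (n, e), pow(e, -1, phi)

def ot_round1():
    # Round 1: Bob (the sender) samples (g, t) and sends g to Alice.
    return tdp_gen()

def ot_round2(g, b):
    # Round 2: Alice knows the preimage of y_b only; y_{1-b} is
    # sampled directly from the co-domain, so she cannot invert it.
    n, e = g
    x_b = random.randrange(n)
    y = [random.randrange(n), random.randrange(n)]
    y[b] = pow(x_b, e, n)
    return y, x_b

def ot_round3(g, t, y, s0, s1):
    # Round 3: Bob inverts both y's with the trapdoor and one-time
    # pads his secret bits with the low bits of the preimages.
    n, _ = g
    x = [pow(y[0], t, n), pow(y[1], t, n)]
    return [s0 ^ (x[0] & 1), s1 ^ (x[1] & 1)]

def ot_output(b, x_b, c):
    # Output: Alice can remove the pad only from the ciphertext she chose.
    return c[b] ^ (x_b & 1)
```

Because the toy g is a genuine permutation, Bob's inversion of y_b recovers exactly Alice's x_b, so Alice always learns s_b and nothing forces her to learn s_{1-b}.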
So a collision is where, for example, b2 has two preimages, and we have a second collision at b3, because b3 also has two preimages. Collisions occur when two or more elements of the domain are mapped by g to the same element of the co-domain. Now we can observe the following. If Alice wants to get the input of Bob with index one, then she takes x1 and evaluates g on x1. But observe that this means y1 can only be equal to b2, b3, or b5; y1 will never be equal to b1, for example, because no element of the domain is mapped to b1. On the other hand, for the input that Alice does not want to get, in this case the input with index 0, she picks a random y0 from the co-domain; she is just sampling a random value, so y0 might be equal to b1, to b5, to any of these. So now the distribution of y0 and y1 really depends on what Alice's input is, and in this case a corrupted Bob might easily distinguish, just by looking at y0 and y1, what the input of Alice is. So how do we solve this problem? A candidate solution would be to use zero-knowledge: we could force Bob to provide a zero-knowledge proof showing that g is indeed sampled using the generation algorithm, and then we would be fine. One problem with this approach is that if we don't want to increase the round complexity of this protocol, which consists of just three rounds, then we have a problem: either we rely on heuristic assumptions to obtain non-interactive zero-knowledge, or, if we want to stay in the plain model, we need at least four rounds to compute the zero-knowledge proof. So the overall round complexity, in the best case we can hope for using this zero-knowledge approach, is six rounds.
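To see this attack concretely, here is a tiny Python simulation of the leakage. The 8-element function g below is a hypothetical non-permutation of my own making; it only illustrates the point that a malicious sender can guess Alice's bit noticeably better than chance by checking which of the two values lies outside the image of g.

```python
import random

# A hypothetical non-permutation on {0,...,7}: several inputs collide,
# so some co-domain elements (here 1, 4 and 6) have no preimage at all.
g = {0: 0, 1: 2, 2: 2, 3: 3, 4: 3, 5: 5, 6: 5, 7: 7}

def receiver_message(b):
    # y_b is always an image of g; y_{1-b} is uniform over the co-domain.
    y = [random.randrange(8), random.randrange(8)]
    y[b] = g[random.randrange(8)]
    return y

def guess_choice_bit(y):
    # Malicious sender: a value outside the image of g cannot be y_b.
    image = set(g.values())
    if y[0] not in image:
        return 1
    if y[1] not in image:
        return 0
    return random.randrange(2)  # no signal in this trial: guess at random
```

With this g, the guess is correct with probability about 11/16 instead of 1/2, which is exactly the kind of distinguishing advantage described above.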
Another approach would be to use cut-and-choose, where the receiver challenges the sender by, for example, sampling multiple elements from the co-domain and asking the sender Bob to invert them, so that Alice can get some level of confidence about how good this permutation is. The problem is that, again, this would require additional rounds, because Alice needs to make sure that g is a permutation before she sends y0 and y1, and, as you can imagine, there is no room to do the cut-and-choose unless we increase the number of rounds of the protocol. So we take a completely different approach. The observation here is that Bob can learn something about Alice's input by inspecting y0 and y1 only when g is not a permutation. So the idea is to encrypt, to keep hidden, the second round of Alice for the case where g is not a permutation. We put ourselves in a win-win situation: either the function is a permutation, and the adversary can see what is encrypted in the second round, but in this case it's okay, because g is a permutation and y0 and y1 do not give away anything about Alice's input; or g is not a permutation, and then the encryption retains the secrecy of y0 and y1. To be more precise, what we want to design is an encryption scheme defined by an encryption and a decryption algorithm, where the encryption algorithm takes as input the description of the trapdoor permutation and the message we want to encrypt, whereas the decryption algorithm takes as input the description of the function, the trapdoor t, and the ciphertext we want to decrypt. And the idea is that, again, if g is a permutation, then we can always decrypt the ciphertext using the trapdoor.
If instead g is not a permutation — to be precise, what we prove is that if g has a lot of collisions, say 2^(n-1) collisions — then the message m that we want to protect remains hidden; but we do need a lot of collisions. So this is how the final protocol looks: instead of Alice sending y0 and y1 in the clear, she encrypts y0 and y1 using this special encryption scheme. Because of the properties of this encryption scheme, we get a stronger security guarantee: if the function has a lot of collisions, then the encryption scheme does not disclose anything about y0 and y1, and the input of Alice is protected. We still need to discuss what happens in the case where there are not that many collisions, that is, where the function g is not a permutation but has fewer than 2^(n-1) collisions; we will do that later on. First I want to show you how this encryption scheme works. To encrypt a message m, we sample a random element a from the domain and compute a one-time pad. What we do precisely is use the hard-core predicate of the trapdoor function, but for simplicity let's say that we just sample the random element and use it as the key of a one-time pad to encrypt m. Then we evaluate g(a), and the ciphertext is simply these two elements: the one-time pad of the message using a, and the evaluation of g on a. For the decryption, first we invert the second element of the ciphertext, which we denote by k2, and retrieve a; once we have a, we can just remove the one-time pad key from the encryption and get the message. It's very, very simple. And of course, when g is a permutation, we have no problems at all, right?
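The encrypt/decrypt pair just described can be sketched in a few lines of Python. As in the earlier sketches, I instantiate g with a toy RSA permutation and pad with the low bit of a instead of a genuine hard-core predicate, so this shows only the shape of the scheme, not its actual security.

```python
import random

def toy_tdp():
    # Toy RSA permutation over Z_n; the second output is the trapdoor t.
    p, q, e = 1009, 1013, 17
    n, phi = p * q, (p - 1) * (q - 1)
    return (n, e), pow(e, -1, phi)

def enc(g, m):
    # Encryption uses only the description of g, never the trapdoor.
    n, e = g
    a = random.randrange(n)              # random domain element
    return (m ^ (a & 1), pow(a, e, n))   # (one-time pad of m, g(a))

def dec(g, t, c):
    # With the trapdoor, recover a from k2 = g(a) and strip the pad.
    n, _ = g
    k1, k2 = c
    a = pow(k2, t, n)
    return k1 ^ (a & 1)
```

When g is a permutation, k2 determines a uniquely, so decryption is always unambiguous; if many co-domain elements had two preimages, the decryptor could not tell which preimage was used as the pad, which is exactly the ambiguity the construction exploits to hide m.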
So in this example, during the encryption, let's say that the random element is a3; we compute a3 + m, then we evaluate g on a3 and get b3. The output of the encryption is just (m + a3, b3). The reason why we can decrypt is that b3 has exactly one preimage, so there is no ambiguity in the decryption. On the other hand, suppose we are in the case where g is not a permutation and has a lot of collisions, and say that again we do the encryption by sampling a3; but now observe that a3 maps to b3, which has two preimages. Now, even if the decryptor has the trapdoor, he doesn't know whether a3 or a4 was used during the encryption to compute the ciphertext. This creates an ambiguity that keeps the message m hidden. Okay, so let's see what we have now. Again, we are in the situation where Alice does not just send y0 and y1, but encrypts them using the encryption scheme we've just described. If Bob is honest, then he can decrypt using the knowledge of the trapdoor. We said that this scheme is secure in two situations: the first is where g is a permutation, and the second is where g is not a permutation and has a lot of collisions, because we can prove that if half of the domain has collisions, then the encryption scheme is semantically secure. But what happens, for example, in the case where there are just a few collisions? The problem is that we might be in a situation where the adversary can successfully decrypt the second round, and, because of these few collisions, Alice was unlucky and hit some of them, so the adversary can again infer something by just inspecting y0 and y1. But the observation here is that there aren't many collisions. So far we have handled the two extreme cases: where g is a permutation, and where g has a lot of collisions.
But if we are in this middle ground, it means that there are not that many collisions — say the number of collisions is less than 2^(n-1). Then we can argue about the probability that y0 and y1, sampled by Alice using either of the two procedures, are good, meaning that they have exactly one preimage each. The probability that this happens, in this example, is at least 1/4: since at most half of the domain is involved in collisions, with probability at least 1/2 Alice gets a value that has exactly one preimage, and we need two such values, so the probability becomes at least 1/4. So what we can do is try to amplify the probability that Alice gets good values in the second round of the protocol, and one way to amplify this kind of probability is to repeat the protocol many times. If we repeat this protocol many, many times — some lambda^2 times, where lambda is the security parameter — then we know that in at least one of these executions we will have good values of y0 and y1. So we end up in a situation where, in at least one execution, y0 and y1 have exactly one preimage each. Unfortunately, this approach alone doesn't work: yes, we will have one execution that is, let's say, secure in the sense that it protects the input of Alice, but there are other executions that are not like that. And given that we are just repeating the protocol, and Alice is using the same input in all the executions, the fact that one of these executions is fine, is not broken, doesn't really help. So what we do instead is rely on an idea similar to OT combiners. Let me give you a brief recap of what an OT combiner is. Say that we have different protocol realizations of the oblivious transfer functionality.
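The amplification step is a simple calculation: if one repetition yields a good pair (y0, y1) with probability at least 1/4, independently each time, then the chance that all lambda^2 repetitions fail is at most (3/4)^(lambda^2), which is negligible in lambda. A one-line sketch (the function name and the exact success bound 1/4 follow the talk's example):

```python
def all_repetitions_fail(lam, p_good=0.25):
    # Probability that none of the lam**2 independent repetitions
    # produces a collision-free pair (y0, y1).
    return (1 - p_good) ** (lam ** 2)
```

Already for lambda = 10 this failure probability drops below 10^-12.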
So in this case, let's say we have m protocols; they use different assumptions, they have different procedures for the receiver and for the sender, they differ in some way. What an OT combiner does — more precisely, what a one-out-of-m OT combiner does — is take all these instances and construct one protocol that again realizes the oblivious transfer functionality. The property of the combiner is that, as long as there is one oblivious transfer instantiation that is secure, that is not broken, then the overall OT protocol that the combiner gives us is secure. So we do not need to know which of the OT protocols is secure, and we can still get an overall protocol that is secure just by combining these multiple OT protocols in some way. Our approach is to look at a specific instantiation of an OT combiner and apply it to our case, in order to amplify the probability that at least one execution will be fine, while at the same time protecting the input of Alice even if the remaining OT executions are completely broken because of the choice of the function made by the sender. And the idea is pretty simple, as you can imagine. Instead of requiring Alice to use the same input in all the executions, Alice secret-shares her input, in some meaningful way, among these many executions, and Bob also does something similar, some type of secret sharing. In such a way, at the end of these many executions, Alice can combine the outputs received from the many OT executions and reconstruct the final output. And the nice thing is that, because of the security that the combiner gives you, it doesn't matter if there are executions that are broken, as long as there is one that is good, one where we know that the y-values have exactly one preimage.
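Here is a minimal sketch of the receiver-side sharing idea, with single-bit secrets and XOR sharing. This is my own simplified rendering of the combiner principle, not the exact combiner from the paper: Alice splits her choice bit into random XOR shares, one per OT execution, and Bob prepares his input pairs so that XORing the one output Alice receives from each execution reconstructs s_b; as long as one execution hides its share, b stays hidden.

```python
import random
from functools import reduce
from operator import xor

def share_choice_bit(b, m):
    # Alice: m random bits whose XOR equals her choice bit b.
    shares = [random.randrange(2) for _ in range(m - 1)]
    shares.append(b ^ reduce(xor, shares, 0))
    return shares

def sender_pairs(s0, s1, m):
    # Bob: the 0-entries XOR to s0 and each pair differs by s0 ^ s1,
    # so XORing one entry per pair, selected by the shares of b,
    # yields s0 ^ b*(s0 ^ s1) = s_b.
    delta = s0 ^ s1
    col0 = [random.randrange(2) for _ in range(m - 1)]
    col0.append(s0 ^ reduce(xor, col0, 0))
    return [(v, v ^ delta) for v in col0]

def reconstruct(shares, pairs):
    # Alice learns pairs[i][shares[i]] from the i-th OT and XORs them all.
    return reduce(xor, (pairs[i][shares[i]] for i in range(len(shares))), 0)
```

Each share of b is uniformly random on its own, so a sender who breaks all but one of the m executions still sees only m - 1 random bits and learns nothing about b.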
So just to summarize, to argue that our protocol is secure, we distinguish between three main cases. The first is the case where, in one of the executions, the sender uses a trapdoor function that has at least 2^(n-1) collisions. In this case we are fine, because the one share of Alice in that execution is protected by the semantic security of the encryption scheme we have designed. If such an execution does not exist, then each function that the corrupted sender uses contains at most 2^(n-1) - 1 collisions. If the number of repetitions is big enough, we can argue that with overwhelming probability one OT execution will have y0 and y1 with exactly one preimage each. And this gives us a three-round OT protocol that does not require the function g to be certifiable: we remove the certifiability property and keep the round complexity of the protocol down to three. As I mentioned at the beginning of this talk, the protocol unfortunately remains insecure in the case where the receiver is not semi-honest: if the receiver is corrupted, is malicious, then he can always get both secrets. To solve this issue, we need to add another round to the protocol. Our approach follows the approach used in the work of Ostrovsky, Richelson, and Scafuro at Crypto 2015, but we need to do it with some care, because we also want to keep the protocol black-box in its use of the underlying trapdoor permutation. This is another challenge that we need to overcome, but we managed to do that, and for this I refer you to the paper for more details. Just to conclude: what we do in this paper is circumvent the need to prove something about a statement that we care about. In this case, the trivial solution would have been to force the sender, Bob, to prove in some way that the function was a permutation.
And instead of doing that, we use a witness-encryption-like approach, where if the sender is honest, in some sense, he can actually conclude the protocol successfully; if he is not honest, then he will be stuck at some point, and all the information of the honest party will be protected. The first result that we get is a three-round protocol that relies only on trapdoor permutations and offers security against malicious senders. We then show that this protocol can be extended to also tolerate corrupted receivers. On top of that, we show how to turn our protocol into a two-party computation protocol that realizes any functionality and that uses the underlying trapdoor permutations in a black-box way. Also in this case, no certifiability property is required of the trapdoor permutations. And with this, I conclude. Thank you very much.