Hi everybody, I'm Oxana, and today I will present our paper with Ran Canetti on incoercible MPC. So what is incoercible MPC? In this MPC you have the standard correctness and security guarantees, and in addition a stronger security guarantee called incoercibility. Intuitively, this guarantee tells you that parties can freely lie about their inputs and outputs in the protocol. In particular, the adversary may first listen to the communication in the protocol, and then approach any party, or even all parties, and demand to know their inputs, outputs, and randomness in the protocol. The protocol is called incoercible if it comes with an additional faking algorithm, which can produce, for every party, fake coins that look like real randomness in that protocol.

In this talk we will look at two main scenarios. One is called cooperative incoercibility, and it only gives protection when all parties simultaneously lie or simultaneously tell the truth. This can still be good enough in many scenarios where the parties are working together against some external attacker. Still, it loses its incoercibility properties if some party decides to disclose its true coins while some other party discloses its fake coins. This could happen, for example, from lack of coordination, or if some party decides to turn against the others. That's why we also look at a stronger notion called full incoercibility, which gives you protection even in this mixed scenario.

We capture incoercibility in the standard real/ideal-world paradigm. In the ideal world there is no transcript of communication and there is no randomness of parties; only inputs and outputs exist. In particular, the coercer in this case will only learn the claimed inputs and outputs of the parties, and it has no choice other than to trust the parties. Still, if the parties' claimed inputs and outputs are inconsistent, the adversary will know that some parties are lying.
Yet it won't know exactly which parties are lying, it won't know which values are true and which are fake, and it won't know what the real values are. So this is the best possible security, and we would like to achieve it with our protocols as well.

Incoercibility is known to imply adaptive security. Let me remind you what adaptive security is. An adaptive adversary can corrupt parties during and after the execution of the protocol, and this boils down to the following syntactic requirement on the simulator: the simulator should first be able to simulate the transcript T without knowing any inputs or outputs of the parties, and then later, given some inputs and outputs, it should come up with fake coins which are consistent with that transcript T. It is easy to show that an incoercible protocol is also adaptively secure: the adaptive simulator can simply simulate the transcript by generating an honest transcript on input 0, and later, when it is time to open the randomness, it can run the faking algorithm to come up with fake coins consistent with any input and output. Note, however, that incoercible protocols are stronger than adaptively secure protocols; intuitively, this is because in incoercible protocols you have to open some real computation in a different way, while in adaptively secure protocols you only have to open a simulated computation in a different way.

For instance, let's look at incoercible oblivious transfer. Like in standard OT, here you have a sender with two inputs and a receiver with a choice bit b, and you want the receiver to learn m_b. In addition, both parties are equipped with faking algorithms. For example, the faking algorithm of the sender takes as input everything the sender knows, including the target fake values, and it outputs some fake coins which are consistent with the transcript and those fake values. It is important that the faking procedure of every party is local and only uses the values known to that party.
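To make this two-phase simulator requirement concrete, here is a minimal sketch of my own (not from the talk), using a one-time pad, the simplest scheme where the simulator can first commit to a transcript and only later explain it for any message:

```python
import secrets

MSG_LEN = 16

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Real execution: the sender picks a pad k and sends c = m XOR k;
# its coins are k.
def real_execution(m: bytes):
    k = secrets.token_bytes(MSG_LEN)
    return xor(m, k), k  # (transcript, coins)

# Adaptive simulator, phase 1: simulate the transcript with no inputs.
def simulate_transcript() -> bytes:
    return secrets.token_bytes(MSG_LEN)  # a uniformly random "ciphertext"

# Phase 2: upon corruption, given the true message, produce coins
# consistent with the already-fixed transcript.
def explain(transcript: bytes, m: bytes) -> bytes:
    return xor(transcript, m)  # claimed pad k' satisfying m XOR k' = transcript

m = b"attack at dawn!!"
t = simulate_transcript()
k_fake = explain(t, m)
assert xor(m, k_fake) == t  # fake coins re-derive the simulated transcript
```

In both the real and the simulated view, the pair (transcript, coins) is a uniformly random pad together with the matching ciphertext, so the two views are identically distributed.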
What about security? No matter what the fake inputs and outputs are, we want to require as much protection as possible. This means that if the claimed values are consistent, then we want the joint distribution of the randomness of both parties to be indistinguishable from the true randomness. If the values are not consistent, we still want to hide as much as possible; in particular, we want to hide who is lying and what the true values are. Let me note that it is very important that the receiver's faking algorithm takes as input the fake output y' of the OT. If it didn't, one could actually show that the resulting randomness could be used even by a semi-honest receiver to violate the sender's privacy by obtaining the other message.

We know that static oblivious transfer implies static MPC, and the same holds for adaptive oblivious transfer, so it is natural to ask whether incoercible OT also implies incoercible MPC, and the answer is: we don't know. Let me roughly outline what the problem is. The problem is coordination. For example, look at the GMW protocol. In the GMW protocol, parties hold shares of the value of each wire in the circuit. If parties want to fake, they need to locally generate shares which add up to the corresponding wire value of f(x1', x2'). The problem is that neither party knows these fake wire values, and each has to fake locally. So it is not clear how to achieve this, and I think it is an interesting open problem.

Let me briefly mention prior work on incoercible MPC. In this work we look at what is called semi-honest coercion, which is also known as receipt-freeness in the voting literature. Basically, it says that parties follow the protocol, and the adversary only comes at the end and demands to see the state of each party. In this setting, we knew a protocol which can protect up to half of the participants, and some recent protocols, which were actually devised for adaptive security, can also withstand up to one coercion.
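To see the coordination problem concretely, here is a minimal sketch of my own (the representation is mine, not the paper's) of GMW-style XOR sharing of a wire bit, and of why uncoordinated local faking fails:

```python
import secrets

def share(v: int):
    """XOR-share a wire bit v between two parties (GMW-style)."""
    s1 = secrets.randbits(1)
    return s1, s1 ^ v

def reconstruct(s1: int, s2: int) -> int:
    return s1 ^ s2

# Honest run: shares of wire value 1 reconstruct to 1.
s1, s2 = share(1)
assert reconstruct(s1, s2) == 1

# Faking problem: to pretend the wire carries v_fake, the parties would
# need local fake shares t1, t2 with t1 ^ t2 == v_fake. But each party
# picks its fake share alone, without knowing the other's share or even
# v_fake itself, so the reconstructed value is whatever t1 ^ t2 happens
# to be -- it matches v_fake only by chance.
t1 = secrets.randbits(1)  # P1 fakes locally
t2 = secrets.randbits(1)  # P2 fakes locally, uncoordinated
v_claimed = reconstruct(t1, t2)  # uniformly random, not chosen
```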
Thus, a natural open question, which we tackle in this work, is: can we achieve semi-honest incoercibility for all parties? Let me mention that there is another line of work on what is called active coercion. In this type of coercion, the adversary can force parties to deviate from the protocol, and in particular it can try to force parties to output something committing to their inputs. It was shown back in the 90s that such security actually requires some inaccessibility assumption, like a voting booth or a hardware token. This notion, although it seems stronger, actually turns out to be incomparable to semi-honest coercion. Roughly, the reason is that in semi-honest coercion, the main problem we want to solve is how to invert the computation and make it look like it stems from a different input; in active coercion, inaccessibility assumptions avoid this problem altogether, so that line of work has different issues.

Finally, let me mention some prior work on deniable encryption. Deniable encryption is a special case of semi-honest incoercible 2PC, and in particular the lower bound for deniable encryption, which says that you need at least three rounds of communication, immediately translates into a lower bound for incoercible MPC. At the same time, we know some positive results for deniable encryption; in particular, in this work we will be using the bi-deniable encryption of Canetti, Park, and Poburinnaya. This construction takes three messages and requires a CRS.

Finally, let me mention our results. Our first result is a transformation from deniable encryption with special properties into incoercible oblivious transfer. This immediately gives incoercible 2PC for functions with a small input domain. Our second result shows how to use deniable encryption to turn adaptively secure MPC with certain properties into cooperatively incoercible MPC. Finally, we show a certain lower bound: we show that protocols with certain communication patterns cannot be incoercible.
So these are our results, and now let me briefly discuss each result separately. First, how to turn any deniable encryption into incoercible OT. Let me first describe the building block and which properties exactly we want from this deniable encryption. Recall that deniable encryption is basically a protocol which allows parties to transmit a message such that both parties can later claim some fake plaintext m', and here are the properties we will require. The first property is called public receiver deniability. Basically, this property says that any party or outsider can fake on behalf of the receiver; in particular, knowledge of the true coins of the receiver is not required to fake. The second property we need is called obliviousness of the receiver. This means that the receiver can generate its only message, the second message, without knowing the underlying coins r. This can be alternatively stated as follows: the receiver can participate in deniable encryption in two different ways. One way is the normal way, where the receiver picks its coins and generates everything properly, and these coins can later be used for decryption. The other way is the oblivious way, where the receiver doesn't choose coins r, but instead generates its second message obliviously, say at random, and in particular it will not be able to decrypt the resulting transcript.

Armed with these two properties, you can probably already see how we build our incoercible OT. Indeed, the sender and the receiver engage in two parallel executions of deniable encryption, where in the first execution the sender sends m0 and in the second execution it sends m1, and the receiver participates in one execution normally, so that it is able to decrypt, and in the other execution obliviously, so that it cannot decrypt. In particular, this means that the receiver can obtain exactly one plaintext out of the two.
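Here is a purely structural toy sketch of this template, of my own making. The "deniable encryption" stand-in below is not secure at all (the receiver's coins travel on the wire, so there is no hiding); it only demonstrates the two-session layout, the normal/oblivious split, and the faking algebra enabled by public receiver deniability. All names and message formats are mine:

```python
import secrets

MSG_LEN = 16

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# --- Toy stand-in for deniable encryption (NOT secure; structure only). ---
# The receiver's message is its coins r; the sender replies with c = m XOR r.
def recv_msg_normal():
    r = secrets.token_bytes(MSG_LEN)
    return r, r                      # (message on the wire, coins kept)

def recv_msg_oblivious():
    return secrets.token_bytes(MSG_LEN), None  # message sent, no coins kept

def send_msg(m: bytes, recv_message: bytes) -> bytes:
    return xor(m, recv_message)

def decrypt(c: bytes, coins: bytes) -> bytes:
    return xor(c, coins)

def receiver_fake(transcript, fake_m: bytes) -> bytes:
    # Public receiver deniability: fake coins from the transcript alone.
    _, c = transcript
    return xor(c, fake_m)

# --- Incoercible-OT template: two parallel sessions. ---
def ot(m0: bytes, m1: bytes, b: int):
    msgs = [recv_msg_normal() if i == b else recv_msg_oblivious()
            for i in (0, 1)]
    wires = [msgs[0][0], msgs[1][0]]
    c0, c1 = send_msg(m0, wires[0]), send_msg(m1, wires[1])
    out = decrypt([c0, c1][b], msgs[b][1])   # receiver learns only m_b
    transcripts = [(wires[0], c0), (wires[1], c1)]
    return out, transcripts

m0, m1 = b"left message 0!!", b"right message 1!"
out, tr = ot(m0, m1, b=0)
assert out == m0

# Faking b=0 as b'=1 with fake output y': claim session 0 was oblivious
# (so no coins ever existed for it), and session 1 was normal, with coins
# derived from its transcript via public receiver deniability.
y_fake = b"claimed output!!"
fake_coins = receiver_fake(tr[1], y_fake)
assert decrypt(tr[1][1], fake_coins) == y_fake
```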
Now let's talk about how parties can fake their inputs. The sender's randomness consists of the two sets of random coins used in the two executions of deniable encryption, and therefore the faking procedure for the sender is pretty straightforward: it just uses the sender-faking algorithm of deniable encryption. What about the receiver? The receiver's randomness consists of two things: first, the true random coins used in the normal execution of deniable encryption, and second, the randomness used to generate the second message in the oblivious execution of deniable encryption. Thus, to lie about the bit b, the receiver has to lie about which transcript was oblivious and which was real. And you can probably already see how the receiver fakes. First, the receiver claims that the first execution was oblivious and that it never generated coins r_R. Second, it claims that the second execution was normal, and to come up with the corresponding coins r, it uses the receiver-faking algorithm of deniable encryption on the transcript T for the desired output y. So this is how to obtain incoercible OT, and incoercible OT immediately gives incoercible 2PC for functions where one party has a small input domain. Our scheme takes three rounds and requires the CRS model, because of the deniable encryption it uses.

Now let's move on to the second result, which says that you can compile any adaptively secure MPC with certain properties, using deniable encryption, into a cooperatively incoercible MPC. Let's start with a natural attempt: take any multi-party computation protocol and let parties encrypt every message sent using deniable encryption. Essentially, deniable encryption erases any communication which happened, and indeed the resulting protocol will be incoercible, but only against one coercion. For example, if the adversary comes to P1 and demands to see the inputs and outputs of the computation, P1 can always lie about the transcript of the underlying MPC.
In particular, if P1 wants to pretend that its input was zero, it can simply generate fresh messages of a transcript T consistent with zero. The problem with this, however, is that the protocol only withstands one coercion. For example, if the adversary now comes to the second party, the second party is in trouble. Actually, there are two problems. The first is the problem of coordination: the second party has no idea what the first party claimed as the transcript of the underlying MPC, and the two have to be consistent. The second is the problem of equivocation: the second party has to be able to equivocate T to its own fake input.

Let us first deal with the second issue. The solution is to use adaptively secure MPC. In particular, when parties need to reveal T, they will present T as a simulated transcript of the underlying MPC. In addition, they should all know a trapdoor which allows them to use the simulator of that MPC to come up with fake coins corresponding to their fake inputs. In fact, we need a stronger property from the underlying MPC, namely a corruption-oblivious simulator. Intuitively, the issue is that in incoercible MPC, every party should fake locally, without knowledge of the other parties' inputs and outputs, unlike adaptive MPC, where the simulator can use the knowledge of all parties' inputs in order to simulate.

So the protocol consists of two parts. In one part, parties actually run the underlying MPC, with its transcript T encrypted under deniable encryption. The second part is the coordination part; it is only there to help parties fake. There, one party, say P1, sends to everybody the same simulated transcript T' together with the corresponding trapdoor td. Let's first look at what happens upon coercion when parties decide to be honest. In the first part, parties will claim that they sent and received the messages of T, and they will disclose their true coins r used in the protocol. In the second part, parties will honestly disclose that they received or sent T' together with the trapdoor.
And it is important that T and T', which the adversary sees, are completely independent. What happens when parties decide to fake? Well, in the first part, parties are going to claim that they sent and received the messages of T', and they are going to use the trapdoor to come up with simulated coins r' consistent with this T'. However, what should they do with the second part? If they reveal honestly that they received T', then the adversary will be able to link the T' used in the first part and in the second part, and that would be a total fail. Therefore, intuitively, they should claim some other transcript T'', which should be completely independent of the claimed T'. However, now we are back to square one, because how do parties agree on this T''? It should be the same for every party. Of course, we could try to let P1 send T'' together with T'. But now parties have to reveal two transcripts, and one of them should again be some fresh simulated transcript, so we didn't really solve the problem.

The problem can be described as follows. P1 should send some value which carries two pieces of information at the same time. The first piece is a fake value, which parties will have to claim they received from P1. The second piece is the actual simulated transcript, which they will use to deny. And the problem is that the fake value has to be of the same size as the real value, which doesn't quite work out, as you can see in this picture. But this is actually very easy to solve with compression, using a simple PRG. P1 should send a PRG seed to all the parties, the same seed. Then each party, including P1 itself, can stretch this PRG seed into a pseudorandom value, which they interpret as a fake seed and simulation randomness. The fake seed, seed', is what every party is going to claim they received in the protocol from P1, and the simulation randomness r_sim can be further expanded into the simulated transcript T' together with the simulated trapdoor. And this is pretty much the whole protocol.
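The seed-stretching step above can be sketched as follows, assuming SHAKE-256 as a stand-in PRG (my choice for illustration, not the paper's primitive; the lengths and names are made up):

```python
import hashlib

SEED_LEN = 16   # length of the seed P1 sends (and of the claimed seed')
RSIM_LEN = 32   # length of the simulation randomness r_sim

def prg(seed: bytes, out_len: int) -> bytes:
    # SHAKE-256 used as a variable-output-length PRG stand-in.
    return hashlib.shake_256(seed).digest(out_len)

def expand(seed: bytes):
    """Stretch P1's seed into (fake seed', simulation randomness r_sim)."""
    stream = prg(seed, SEED_LEN + RSIM_LEN)
    return stream[:SEED_LEN], stream[SEED_LEN:]

seed = bytes(range(SEED_LEN))   # the one value P1 actually sends to everyone
seed_fake, r_sim = expand(seed)

# Every party expands the same seed, so all parties agree on the same
# claimed value seed' and on the same simulation randomness r_sim --
# no further coordination is needed.
assert expand(seed) == (seed_fake, r_sim)
```

The point of the compression is visible in the lengths: the single short seed determines both the equal-length fake seed' and the (arbitrarily long) randomness from which T' and the trapdoor are derived.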
The final protocol is very close to what we had before, with the difference that P1 sends the seed, encrypted under deniable encryption, to everyone. It should be the same seed transmitted to every party, and note that this seed is completely independent from the transcript T'. When it is time to fake, parties do the following. First, they use the PRG to expand the received seed into a stream which they interpret as seed' and the simulation randomness. They use this simulation randomness to come up with the fake transcript T', and they can use the corresponding trapdoor to find coins r' consistent with the new input. When asked what they received from P1, they claim that they received seed'. Note that in this coercion scenario, seed' and T' are also independent, at least to a computational adversary, because these two values are basically outputs of the same PRG. So this concludes the description of the protocol. Let me just mention that we want the underlying adaptive MPC to have two properties. The first property is the corruption-oblivious simulation I talked about earlier, and the second property is that the CRS has to be global, if the protocol uses a CRS. As a result, we get a cooperatively incoercible MPC in the CRS model which takes four rounds of communication.

Finally, let me talk about our lower bound, and let me first start with some motivation. The MPC protocol I just described is only cooperatively incoercible. How about full incoercibility? A first attempt would be to start with deniable encryption, which is actually fully incoercible, and tweak it just a little bit to get fully incoercible MPC. The idea in the underlying deniable encryption is roughly as follows: the sender sends a hash of the message and an encryption of the message, the receiver sends a hash of its randomness, and then there is a special program which decrypts for the receiver.
This gives rise to a very natural attempt to build MPC: designate one party as the receiver, make all other parties senders, let them in a similar manner send hashes of their inputs and then send their inputs encrypted, and then have the receiver use this program to basically evaluate the result. This may look very promising, but our lower bound says that this protocol cannot be incoercible. Let me now describe it more precisely. First, we call a party lazy if it only sends messages in the first and in the last round; in all other rounds it only listens and computes, but doesn't talk. Our theorem states that if a protocol is for at least three parties, and among these parties there is at least one lazy party and at least one other party which receives the output, then the protocol cannot be incoercible, even against just two coercions. In particular, in our example from the last slide you can see that all the senders are lazy: they only send messages in the first and last round. Therefore this protocol cannot be incoercible.

Finally, let me show some examples and discuss why this theorem is tight. First, the requirement that n is at least three is important, because we actually do get a two-party computation which is incoercible with lazy parties. In the same way, the requirement to have lazy parties is important, because our MPC doesn't have any lazy parties, and it is incoercible for n ≥ 3. Finally, the requirement that the protocol should be incoercible, as opposed to just adaptively secure, is also very important, because we do know protocols which are adaptively secure and follow this lazy-party communication pattern. And let me just note that the protocol in the original paper of Canetti and Gennaro also doesn't have any lazy parties, so the theorem doesn't apply to it either.

Let me conclude the talk with some open problems and random thoughts. We do get cooperatively incoercible MPC and fully incoercible 2PC.
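Since the lazy-party condition is purely about the communication pattern, it can be checked mechanically. Here is a small illustration of my own (the schedule representation, party names, and round numbers are made up, not from the paper), mapping each party to the set of rounds in which it sends:

```python
def is_lazy(send_rounds: set, total_rounds: int) -> bool:
    """A party is lazy if it sends only in the first and last rounds."""
    return send_rounds <= {1, total_rounds}

def lower_bound_applies(schedule: dict, total_rounds: int,
                        output_parties: set) -> bool:
    """Precondition of the theorem: at least three parties, at least one
    lazy party, and at least one *other* party receiving the output."""
    if len(schedule) < 3:
        return False
    lazy = {p for p, rounds in schedule.items()
            if is_lazy(rounds, total_rounds)}
    return any(p != q for p in lazy for q in output_parties)

# Hash-and-encrypt attempt from the slide: senders S1, S2 talk only in
# rounds 1 and 4, while the receiver R gets the output.
schedule = {"S1": {1, 4}, "S2": {1, 4}, "R": {2, 3, 4}}
assert lower_bound_applies(schedule, total_rounds=4, output_parties={"R"})
```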
This gives rise to a natural question: can we get fully incoercible MPC? There might be some reason why it is not so easy to obtain, which has to do with coordination issues. Ideally, we would like to use deniable encryption as a building block, but then parties need to somehow agree on the values they claim to have sent or received using deniable encryption, and this causes coordination problems. The alternative would be to build incoercible MPC from the ground up, without using deniable encryption as a building block, but this could be fairly complicated. Another open problem is: can we get incoercible MPC in three rounds? Our protocol uses four rounds, and we know that it is impossible in two rounds. One issue is that deniable encryption itself probably requires three rounds, and therefore in a three-round protocol only the last message could be protected by deniable encryption; however, the other messages would also have to be input-dependent and at the same time deniable, yet they cannot be protected with deniable encryption. So these are the open problems I have for you. Thank you very much for your attention, and feel free to stay for a cat blooper. Bye-bye.