Hello, I'm Mahak Pancholi, and today I'm going to talk about reverse firewalls for adaptively secure MPC without setup. This is joint work with Suvradip Chakraborty, Chaya Ganesh, and Pratik Sarkar.

Suppose Charlie and Lucy want to compute a function F on their inputs X1 and X2 securely. By securely, we mean that Lucy, who is corrupt here, should only learn the output and nothing else about Charlie's input. In the classical setting, Charlie and Lucy run a 2PC protocol and exchange some round messages, and towards the end both parties are able to compute F(X1, X2). The guarantee is that, in the process, Lucy learns nothing about Charlie's input except for whatever is revealed by the function output itself.

It is important to note that this security guarantee crucially relies on the assumption that the honest parties execute the protocol honestly. That is, Charlie's computer runs an honest implementation of the protocol. But what if this assumption is not valid? It is possible that a party's implementation is tampered with by the adversary. So you have Lucy, who is corrupt, and Charlie, who is honest, but now Charlie's machine is tampered. In this case, instead of sending the message prescribed by the protocol, a tampered implementation can just output some secrets, and so there is no privacy. This type of leakage is called exfiltration.

Note that tampering is different from actual corruption. When an adversary corrupts a party, it gets full control over it. In the case of tampering, the adversary tampers with the machine beforehand, and once the machine is in Charlie's hands, the adversary cannot arbitrarily control it or see its internal state. So the question that we ask in this work is: can we design an MPC protocol that retains some meaningful notion of security even when the machines of the honest parties are tampered with by the adversary? In general, the answer is no.
For example, consider a simple tampering that just sends the honest party's input in the clear instead of sending a valid protocol message. But with slightly stronger assumptions and a weaker class of tamperings, we can get positive results. In this work, we assume cryptographic reverse firewalls, or RFs. Again, Charlie's implementation is tampered here, but now a firewall sits between the implementation and the outside world, and its job is to sanitize all incoming and outgoing messages so that nothing important is leaked. So here, Charlie's implementation tries to leak some secrets, but the RF sanitizes them into innocuous-looking messages before sending them to Lucy.

You can think of a firewall as software provided by an external party whose code can be trusted. One of the objectives while designing an RF is to keep its operations very simple, so that the code can be tested and verified before use. Moreover, we are not simply shifting our trust to the RF, as we do not allow the RF to hold any of Charlie's secrets. It is allowed to keep its own internal state, and it only gets to act on the transcript messages.

The security properties that we want from an RF can be informally stated as follows. We want it to be functionality preserving: it must preserve the original protocol's functionality. It should be exfiltration resistant, which means that it should prevent the tampering from leaking, or exfiltrating, any secrets. It should also preserve the security of the original underlying MPC protocol, and this is called security preservation. Finally, it should be transparent, which means that the transcript messages should look indistinguishable from the ones that would have been generated by an honest implementation of the original protocol.

I will talk a little bit about previous results before talking about our contributions. This notion was introduced by Mironov and Stephens-Davidowitz at Eurocrypt 2015.
In their work, they provide a construction of a 2PC protocol along with its RF, and this is secure even in the face of tampering. However, their construction only works with passive and static corruptions. By passive corruptions, we mean that the corrupted parties follow the protocol but try to learn secrets from the transcript messages. Static corruption means that the adversary corrupts the parties at the start of the protocol execution. Then Chakraborty et al. extended this result to multi-party protocols and active corruptions. Here, active corruptions means that the corrupted parties can now behave arbitrarily and may not follow the protocol steps. But this result assumes a common reference string, or CRS, which is a trusted setup.

In this work, we strengthen the result further to active and adaptive corruptions, where adaptive corruptions means that the adversary can corrupt some of the honest parties during the protocol execution. Moreover, our construction is in the plain model.

Now let me talk about our contributions. In this work, we do the following. We introduce new definitions for the adaptive-corruption case, as the older definitions do not suffice. We also prove an implication between exfiltration resistance and security preservation. This result is particularly interesting, as it was conjectured in previous works that these two notions might not be related. Thus, in all the previous works, one had to prove both SP and ER for security. Moreover, designing protocols is a bit more complicated, as you need to keep both of these properties in mind. But we prove that exfiltration resistance implies security preservation. This makes protocol design relatively simple: to argue security, one must just prove ER. As mentioned, we also give a new MPC construction that is secure against adaptive and active adversaries. Our construction follows the general approach of the GMW compiler. The GMW compiler has three steps.
First, the parties generate and commit to a random tape; then they commit to their inputs; and finally, they execute a passively secure protocol, proving the correctness of each step using a zero-knowledge proof.

There are two problems with the general GMW approach. One should note that there is no guarantee of security if the random tapes are not random. So in the GMW compiler construction, each party contributes to the random tapes of every other party, so that if at least one of the parties is honest, then the tapes of all parties, including the corrupt parties, are guaranteed to be random. But we cannot assume this: in our case, the honest parties' implementations are tampered, and so even the honest parties may pick biased coins. The second problem is that the round messages themselves might leak some secrets. So our approach here is to make the RF ensure that the random tapes are indeed random, and the RF must also randomize and maul every outgoing and incoming message. The challenge, therefore, is to design the underlying MPC protocol so that we can build such RFs, which change transcript messages without breaking correctness.

Our construction at a high level can be seen as follows. First, there is the augmented coin-tossing phase, which allows the parties to generate and commit to their random tapes. Then the parties commit to their inputs and execute a passively secure MPC protocol while proving each step using a zero-knowledge proof. For this step, we can use any existing MPC protocol which is adaptively and passively secure. For these three steps, we construct an augmented coin-tossing protocol and a zero-knowledge protocol which are adaptively and actively secure, and they are constructed in a way that admits an RF. However, these constructions assume a URS, or uniform random string, and we would like our constructions to have minimal trust assumptions.
And so we instantiate the URS by a coin-tossing protocol which is also adaptively and actively secure and admits an RF. In the rest of the talk, I will briefly describe the constructions and proof ideas for the augmented coin tossing, the zero-knowledge proof, and coin tossing in the plain model.

Before explaining our constructions, I would like to highlight some of the problems encountered while extending the previous work of Chakraborty et al. In their construction, the RF acts on the messages of the augmented coin-tossing protocol in such a way that each party has a different view of the commitment to the same coin. That is, the committer sees some commitment C, while the rest of the parties see a commitment C' to the same coin. Due to this, during protocol execution, the committer tries to prove a statement with respect to C, while every other party expects a proof with respect to C'. To preserve functionality, the RF must maul every proof. This requires a strong primitive called controlled-malleable NIZKs, and these are not known to be adaptively secure. A crucial observation that lets us give a much simpler protocol under simpler assumptions is that we can make each party have a consistent view.

The augmented coin-tossing functionality allows one party to get a random coin and every other party to get its commitment. We start with each party having its own URS. Now, Charlie wants to generate a random string S, and everyone wants a commitment to it. Our construction follows a commit-and-open approach. First, every party tosses a coin s_i and commits to it using randomness r_i under its own URS, URS_i. For example, Charlie commits under URS_1, while Snoopy commits under URS_4. The commitment scheme we use here has the following properties: it is adaptively secure, and it is additively homomorphic under a particular URS. Then, in the second round of the protocol, each party except for Charlie opens its commitment by broadcasting s_i and r_i.
Finally, each party checks that the openings were valid and then computes a commitment to each s_i with randomness r_i under Charlie's URS. Now all the commitments are under Charlie's URS, and thus all of them can be homomorphically added. Note that everyone has the same view of the resulting commitment C here. The proof idea is that if the commitment scheme is adaptively secure, then while simulating, we can extract the adversary's coins in the first round and equivocate in the second.

However, in our setting, Charlie's implementation is tampered. This causes two problems. First, the coin s_i might not be picked uniformly at random, and therefore the final coin is still not random. Second, the randomness r_i might be biased in such a way that the commitment exfiltrates secrets. One easy way to fix this is for the reverse firewall to use the additive homomorphism of the commitment scheme to introduce fresh randomness into each commitment. The final result that we get is that if the commitment scheme is adaptively secure and additively homomorphic in the URS model, then there exists a protocol that securely implements the augmented coin-tossing functionality against active and adaptive corruptions of parties in the URS model.

For our zero-knowledge protocol, we build on the recent result of Canetti et al., which is itself an extension of the FLS protocol that makes it adaptively secure. Here, Charlie is the prover and Lucy is the verifier, and the statement is a graph G, together with the URS for the commitment scheme and a public key pk. The public-key encryption scheme allows oblivious ciphertext sampling, which means that a ciphertext can be sampled obliviously of the plaintext message. Charlie, in addition, knows a Hamiltonian cycle in G. In the first round, Charlie picks a random n-node cycle graph H and commits to its adjacency matrix, committing to 1 if there is an edge and to 0 otherwise. In addition, he also sends encryptions of the commitment randomness.
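Stepping back to the coin-tossing phase for a moment, the RF's rerandomization of commitments via additive homomorphism can be sketched with toy Pedersen commitments. This is only an illustration with tiny hand-picked parameters: plain Pedersen is additively homomorphic, but it does not capture the adaptive security that the scheme in our construction additionally requires.

```python
# Toy Pedersen commitments in the order-q subgroup of Z_p* (p = 2q + 1).
# Real schemes use large primes; these values are for illustration only.
p, q = 23, 11
g, h = 4, 9        # subgroup generators; h = g^x for an x nobody knows

def commit(s, r):
    """Pedersen commitment C = g^s * h^r mod p."""
    return pow(g, s, p) * pow(h, r, p) % p

def verify(c, s, r):
    return c == commit(s % q, r % q)

# A party commits to its coin s with randomness r (possibly biased).
s, r = 7, 3
c = commit(s, r)

# The RF injects fresh randomness delta using the homomorphism:
# C' = C * h^delta is a commitment to the same s under randomness r + delta.
delta = 5
c_rf = c * pow(h, delta, p) % p

# When the party later opens (s, r), the RF adjusts the opening to (s, r + delta).
assert verify(c_rf, s, r + delta)

# Additive homomorphism: the product of commitments commits to the sum,
# which is what lets all parties' commitments be combined under one URS.
c2 = commit(2, 4)
assert commit((s + 2) % q, (r + 4) % q) == c * c2 % p
```

The key point is that the RF never needs the committed coin or the original randomness; it only multiplies transcript messages by a fresh h^delta and later shifts the opening.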
Suppose he committed to bit 0; then encryption 0 would be an honest encryption of the commitment randomness under the public key pk, while encryption 1 would be obliviously sampled. In the next round, Lucy replies with a challenge e. Finally, Charlie responds to the challenge e as follows. If e is 0, then he opens the commitment to H by sending the commitment randomness and the randomness used for encryption. So if, for instance, he is opening a commitment to bit 0, he sends the randomness for encryption 0 while claiming that encryption 1 was sampled obliviously. If the challenge bit e is 1, then he opens the non-edges of π(G) in H, where π is a permutation mapping the Hamiltonian cycle of G onto H, and he also sends the permutation π.

Notice that the RF can sanitize the first- and last-round messages by permuting the graph H, permuting the commitments and the encryptions accordingly, and randomizing the commitments and the ciphertexts using additive homomorphism; the RF then adjusts the response according to the changes it made in the first round. However, the RF cannot change the challenge e without breaking correctness. For example, if Lucy sends the challenge e = 1 and Charlie's RF changes it to e = 0, then Charlie would respond by opening H, but Lucy would be expecting a different response, and the RF cannot generate the correct response for e = 1 without knowing the witness.

We fix this by splitting the challenge into two parts and making both parties contribute to it. Now Charlie sends an encryption of e_P in the first round, Lucy replies with e_V, and the challenge is defined as e_P XOR e_V. In the last round, Charlie also opens e_P by sending e_P and the encryption's randomness. The RF can now introduce fresh randomness without breaking correctness: it can maul e_V, and it gets a chance to adjust the response by changing e_P and the opening of e_P.
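The split challenge described above can be simulated with plain XOR arithmetic. This toy sketch abstracts away the commitment and the encryption of the prover's share; the function names are illustrative, not from the actual construction.

```python
import secrets

def prover_round1():
    """Prover's challenge share, sent encrypted in round 1 (encryption omitted)."""
    return secrets.randbelow(2)

def verifier_round2():
    """Verifier's challenge share, sent in the clear in round 2."""
    return secrets.randbelow(2)

e_P = prover_round1()
e_V = verifier_round2()

# The RF picks a fresh offset delta, mauls the incoming e_V before it
# reaches the prover, and symmetrically mauls the opened e_P that the
# verifier eventually sees.
delta = secrets.randbelow(2)
e_V_mauled = e_V ^ delta    # delivered to the prover
e_P_mauled = e_P ^ delta    # opened towards the verifier

challenge_prover = e_P ^ e_V_mauled      # challenge in the prover's view
challenge_verifier = e_P_mauled ^ e_V    # challenge in the verifier's view

# The offsets cancel, so both parties derive the same challenge bit and
# correctness is preserved, while delta injects fresh randomness into
# both shares of the transcript.
assert challenge_prover == challenge_verifier
```

The invariant holds for every combination of shares and offsets, since (e_P XOR (e_V XOR delta)) = ((e_P XOR delta) XOR e_V).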
So our final result here is that if the commitment scheme is a non-interactive equivocal commitment in the URS model, and PKE is a public-key encryption scheme with oblivious ciphertext sampling, then there exists a protocol that realizes the ZK functionality for all NP relations against adaptive corruptions in the URS model.

We follow the commit-and-open paradigm for our coin-tossing protocol. In particular, first each party commits to a coin in the commitment-generation phase, then everybody opens their commitments, and then all the openings are put together to evaluate the final output. Since we would like to remove any kind of setup assumption, we also have to generate the commitment parameters as part of the protocol. In more detail, in the parameter-generation phase, each party first interacts with every other party to generate pairwise Pedersen commitment parameters (g_i, h_i). They also generate pairwise public keys pk_i. The public-key encryption scheme supports oblivious ciphertext and public-key sampling. By oblivious public-key sampling, we mean that a party can sample a public key without knowing the corresponding secret key. So here, in the parameter-generation phase, a party who generates a public key pk_i would not know the corresponding secret key. Then each party tosses a coin s_i and commits to it using the Pedersen commitment parameters, and it also encrypts the commitment randomness r_i using the public key pk_i. Then they each open their commitments, and finally the random coin is the XOR of all the s_i's.

In general, the proof strategy is as follows. The simulator invokes the ideal functionality to get a random coin; then the simulator interacts with the adversary and must bias the output towards the coin it received from the functionality. To do this, we want the simulator to be able to equivocate and to extract commitments. In our construction, we do it as follows.
Using knowledge assumptions, the simulator can extract the Pedersen commitment trapdoors, and it can also set the public key such that it actually knows the corresponding secret key. So equivocation is taken care of using the trapdoors, and extraction is taken care of using the secret key. In the next phase, the commitment-generation phase, when the simulator has to generate commitments on behalf of the honest parties, it just commits to some random coins, and it extracts the adversary's coins by decrypting the encrypted commitment randomness. Then, in the next phase, it equivocates its own commitments using the commitment trapdoor. Finally, the result that we get is as follows: assuming discrete log and knowledge assumptions, and a public-key encryption scheme with oblivious ciphertext and public-key sampling which is also additively homomorphic, there exists a protocol that securely implements the coin-tossing functionality against adaptive corruptions in the plain model.

To conclude, in this talk we discussed the cryptographic reverse firewall model, and we presented a construction that is secure against adaptive and active adversaries, and remains secure even when the honest parties' machines are tampered. Thank you.