Hi, I'm Aarushi. I'm going to present this joint work with Benny Applebaum on actively secure elementary MPC reductions. Secure multi-party computation is an interactive protocol that allows a group of mutually distrusting parties to jointly compute a function on their private inputs. The security of MPC guarantees that an adversary who corrupts a subset of the parties does not learn anything beyond the output of the function. Since its inception, the design of MPC protocols has relied quite heavily on the notion of secure reductions. We say that the problem of securely computing a function F reduces to the task of computing a simpler function G if there exists an interactive protocol for computing F in which the parties have access to an oracle implementing the function G. Given such a reduction, the original task is reduced to simply designing a secure protocol for the function G. Classical protocols by Yao and by Goldreich et al. can be viewed as secure reductions to the two-party oblivious transfer functionality. Going a step further, we say that F non-interactively reduces to a function G if the parties in the protocol only send single queries to the oracle G and do not interact with each other at all. Before moving ahead, let me emphasize a key point here: the function G is allowed to have its own internal randomness and need not be deterministic. In fact, there exist general non-interactive reductions from such randomized functionalities to deterministic ones. In this work, we consider the notion of elementary reductions. These are non-interactive reductions where the function G is a constant-degree function. Moreover, the reduction is only allowed to make use of a PRG in a black-box way. This additionally means that G is independent of the PRG. Such black-box restrictions typically make things highly efficient, and this notion of elementary reduction also captures many classical reductions, as we will see going forward. Pictorially, an elementary reduction looks
like the following. The parties run some local pre-processing on their inputs; this pre-processing function can make oracle queries to a PRG. The parties then send the outputs of the pre-processing functions to the oracle G. Given the output of G, the parties do some local post-processing to learn the final output. Note that while the pre-processing and post-processing functions in this protocol are allowed to make oracle queries to the PRG, the functionality G is completely independent of it. Several existing non-interactive reductions in the literature are in fact elementary reductions. In the semi-honest setting, the two-party protocol of Yao and the constant-round multi-party protocol of Beaver, Micali, and Rogaway (BMR) are elementary reductions for all efficiently computable functions in the dishonest majority setting. In fact, the BMR protocol also has an actively secure version based on zero knowledge, but that protocol is interactive and non-black-box with respect to the PRG. In the malicious setting, however, we only know of elementary reductions with full security in the honest majority setting, or for NC1 functionalities in the dishonest majority setting. Existing elementary reductions for all efficiently computable functions in the dishonest majority setting are only known to achieve security with abort. We do not know of any elementary reduction in this setting that achieves full security. Applebaum et al.
showed a non-elementary non-interactive reduction in this setting to a constant-degree function G, where G depends on the PRG. Therefore, the main question that we consider is whether there exists an elementary reduction that achieves full security against a dishonest majority. We show that while an elementary reduction with full security is unlikely to exist in this setting, it is possible to design a reduction that achieves identifiable abort. Let me now elaborate on these results. To establish our lower bound, we show that for two parties, the existence of an elementary reduction for all efficiently computable functions with partial fairness, that is, where fairness is only guaranteed when one of the parties is corrupt, implies the existence of an information-theoretic reduction for every efficiently computable function in the CRS model with inverse polynomial average-case privacy against passive adversaries. Here, by information-theoretic reduction, we mean that the parties are not allowed to make black-box calls to a PRG. Even though we don't prove an unconditional impossibility in this setting, the implications of this result are highly non-trivial. This is because such a reduction would imply constant-round information-theoretic MPC protocols that are far beyond the current state of the art. For instance, the existence of such an information-theoretic elementary reduction implies the existence of a constant-round two-party protocol for efficiently computable functions with inverse polynomial average-case information-theoretic security in the OT-hybrid model. This in turn implies a constant-round protocol for all three-party efficiently computable functions with inverse polynomial average-case information-theoretic security. The existence of such a protocol is a well-known three-decade-old open problem. While the original question formulated in the work of Beaver, Micali, and Rogaway is with respect to standard security, the relaxation to inverse polynomial average-case
security does not seem to make it any more tractable. As for our positive result, we show that an elementary reduction with identifiable abort does exist in this setting. A similar reduction is also implicit in a recent concurrent work. We additionally show that it is possible to achieve fairness if the parties are allowed to interact more than once with the oracle G. Moreover, if G is allowed to depend on the PRG, then it is also possible to get full security. In fact, any passively secure elementary reduction can be transformed into an actively secure one which is non-interactive but does depend on the PRG in a non-black-box way. Let me now elaborate on the main ideas used in our lower bound result. To build intuition, I will begin by showing why existing passively secure elementary reductions fail to achieve full security against active adversaries, and then prove our general theorem. All existing passively secure elementary reductions can be viewed as a distributed variant of garbling: in the pre-processing phase, the parties sample random keys corresponding to each wire in the circuit representing the function F. These keys are then sent to the function G. The function G computes and outputs a garbling of the circuit representing F using these keys. And finally, in the post-processing phase, the parties evaluate the garbled circuit to learn the output. But so far I haven't addressed what the PRG is used for, or if it is even used at all. Recall that in the distributed garbling, the parties sample random keys for every wire in the circuit. At a very high level, each gate of the circuit is individually garbled, where the garbling of each gate corresponds to four randomly permuted ciphertexts. Each of these ciphertexts is a distributed encryption in which the keys correspond to the keys associated with the incoming wires of the gate, and the message that is encrypted corresponds to the keys associated with the outgoing wires of the gate. For circuits with more than
polylogarithmic depth, the keys are required to be shorter than the messages. Therefore, the distributed encryption algorithm first expands the keys using PRGs and then uses them to encrypt the messages. This key expansion can also be moved outside of the encryption algorithm and done by the parties locally. So going back: the parties sample random keys for every wire in the circuit and also expand them using the PRG. They then send both the original and expanded keys to the function G, and G implements the distributed encryption algorithm using these keys. The original keys in this distributed encryption act as messages, and the expanded keys act as encryption keys. While this reduction achieves privacy against passive adversaries, it fails to achieve full security against active adversaries. This is because an active adversary may send inconsistent original and expanded key pairs to the function G. Since G is independent of the PRG, it will be unable to detect such inconsistencies and will proceed to implement the distributed encryption normally. Upon receiving the garbled circuit from G, the honest parties will be unable to decrypt the ciphertexts, or in other words, evaluate the garbled circuit, because they are unaware of the quote-unquote new mapping between the original and expanded keys. But the corrupt parties, who are aware of this mapping, can still evaluate the garbled circuit. As a result, they can obtain the output, and this protocol does not achieve fairness or full security. So far, we discussed why existing elementary reductions are unlikely to achieve full security against active adversaries. Let me now elaborate on how we generalize this idea to show that any such fully secure elementary reduction is unlikely to exist. To quickly recap, we show that for two parties, the existence of such an elementary reduction with partial fairness implies the existence of an information-theoretic elementary reduction for efficiently computable functions in the CRS model
with a weak form of privacy against passive adversaries. This implication holds even when the parties have access to a random oracle, which only makes our result stronger. Also note that a multi-party fair elementary reduction implies a two-party fair elementary reduction, which in turn implies a two-party partially fair elementary reduction. Therefore, if we rule out a two-party elementary reduction with partial fairness, we also rule out a multi-party fair elementary reduction. As a result, this restriction to partial fairness again only makes our theorem statement stronger. Finally, the caveat of weak privacy in our implication can be removed if the parties are only allowed to make random queries to the random oracle. Let us now prove this main theorem. We consider two parties, Alice and Bob. Let's assume, for the sake of contradiction, that there exists an elementary reduction for every polynomial-size two-party function with partial fairness against active adversaries, and here we assume that we have partial fairness against a corrupt Bob. For simplicity, we assume that the PRG is instantiated with a random oracle H, and for now, let us further assume that the parties invoke the oracle on randomly chosen seeds; I will later elaborate on how these simplifying assumptions can be removed. We consider a corrupt Bob that lazily samples its own oracle g and uses it in the pre-processing phase instead of the common random oracle H. In the post-processing phase, it uses both g and H: in particular, it uses H for the queries that Alice made in her pre-processing phase, and it uses g for the queries that it made itself during the pre-processing phase. Since we assume that the parties query the oracle on randomly chosen seeds, the sets of inputs on which Alice and Bob query the random oracle will be disjoint with very high probability. As a result, we can pretend that Alice and Bob honestly invoke the protocol on a new
random oracle, which is obtained by combining g and H. Both oracles are available to Bob, who can then use them in the post-processing phase to correctly recover the output. From our original assumption, the protocol is fair against a corrupt Bob; in this case, since Bob can recover the output, Alice should also be able to recover the output. As a result, correctness of this modified protocol holds. Moreover, since Alice's view in this modified protocol remains unchanged, if the original protocol was private against a semi-honest Alice, this modified protocol is also private against Alice. We can further modify this protocol such that only Alice gets an output, by simply removing the post-processing phase of Bob. Since Alice's post-processing algorithm is independent of g, we can further modify the protocol and ask Alice to lazily sample her own oracle H, and ask Bob to sample his own oracle g. This gives us an information-theoretic passively secure non-interactive reduction that delivers an output only to the first party. To get a protocol that delivers outputs to both parties, we can simply run two copies of this reduction in parallel, where Alice acts as the receiver in one copy and Bob in the other. As a result, we obtain an information-theoretic elementary reduction for all efficiently computable two-input functionalities. But as discussed earlier, such a reduction implies constant-round protocols whose existence is a long-standing open problem in information-theoretic cryptography. Although we have discussed the main ideas used to prove our lower bound, we did make certain simplifying assumptions. I am now going to briefly discuss how those simplifying assumptions can be removed and what technicalities arise as a result. The first assumption we made is that the simulation-based definition of fairness implies that if a corrupt Bob gets the output, then so should Alice. Which in fact is not true.
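As a small aside, the lazy-sampling and oracle-combining step in the argument above can be sketched in code. This is only an illustrative toy: the class name, key sizes, and query counts are my own assumptions, not part of the talk.

```python
import os

class LazyOracle:
    """A lazily sampled random oracle: each fresh query gets an independent
    uniform answer, which is cached so repeated queries stay consistent."""
    def __init__(self):
        self.table = {}

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = os.urandom(32)
        return self.table[x]

H = LazyOracle()  # the common random oracle, used by honest Alice
g = LazyOracle()  # corrupt Bob's private, lazily sampled oracle

# With randomly chosen seeds, Alice's and Bob's query sets are disjoint
# with overwhelming probability, so the two partial oracles never clash...
alice_seeds = [os.urandom(16) for _ in range(8)]
bob_seeds = [os.urandom(16) for _ in range(8)]
for s in alice_seeds:
    H.query(s)
for s in bob_seeds:
    g.query(s)

# ...and they can be merged into one consistent "combined" random oracle,
# which Bob (who knows both tables) can use in his post-processing phase.
assert not set(alice_seeds) & set(bob_seeds)
combined = {**H.table, **g.table}
```

The point of the sketch is that the merged table is itself distributed like a single honestly sampled random oracle on the queried points, which is exactly why the modified protocol can be treated as an honest execution.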
Instead, it only ensures that Alice can generate an output with respect to some effective input of Bob, and not necessarily with respect to the real input that is given to Bob. In order to remove this simplifying assumption, we work with an authenticated functionality that delivers to Bob an authenticated version of his input under a key that is chosen by Alice. Fairness with respect to such functionalities implies the notion that is used in the above argument. The second assumption we made is that Alice's and Bob's queries to the PRG do not intersect. To avoid this, we use the standard approach of identifying heavy queries, due to Barak and Mahmoody, and let Bob use his local oracle g only on the non-heavy queries. This modification introduces several technicalities that only allow us to achieve inverse polynomial average-case security. To make sure that this does not affect the correctness of the protocol, we add to the functionality G a detect-and-reveal mechanism that identifies a collision event and reveals the private inputs of Bob in case such an event occurs. Therefore, even if such an event occurs, correctness remains unaffected. Before concluding the discussion of our lower bound, I want to mention an interesting observation about our main theorem. It shows that the task of designing an elementary reduction for all efficiently computable functions with full security in the dishonest majority setting is an example of a cryptographic problem for which an information-theoretic solution cannot be ruled out, yet black-box use of a given primitive is useless for solving the problem.
A non-black-box use of the primitive, however, allows us to solve the problem. This combination seems rather unique to our setting, since we are only aware of examples that satisfy at most two of these three conditions. For example, Haitner et al. and Mahmoody et al. showed an example that satisfies the first two conditions, while Applebaum et al. gave an example where the last two conditions are satisfied. Moving on to our positive result now: as mentioned earlier, we show that there exists an elementary reduction with identifiable abort against a dishonest majority of parties for all efficiently computable functions. To construct such a reduction, we first define a notion of distributed encryption with identifiable abort and propose a construction of such a scheme. We then show that this distributed encryption, when combined with the standard garbling protocol, achieves security with identifiable abort. Distributed encryption is a primitive involving multiple parties. Each party locally runs the key generation algorithm to generate its encryption and decryption keys. All the encryption keys are used for encrypting a message, and all the decryption keys are used for decrypting the ciphertext that was encrypted under those encryption keys. The decryption algorithm either outputs the correctly decrypted message, or it outputs ⊥ along with a subset of corrupt parties who were responsible for the decryption failure. Although this is a symmetric-key primitive, we distinguish between two types of keys, the encryption keys and the decryption keys. This will allow us to define the encryption algorithm in a way that is independent of the underlying PRG. Also, as discussed earlier, for garbling circuits with more than polylogarithmic depth, we require decryption keys whose bit length is shorter than the message length. We need one-time security as long as at least one of the key pairs is honestly generated. In addition to privacy, we require different flavours of correctness, which hold even against active adversaries that
corrupt some of the keys. The first one is security with abort, where the decryption algorithm either outputs the correctly decrypted message or it outputs ⊥. In this case, we want that, just by looking at the key pairs, we should be able to predict whether the outcome of the decryption algorithm will be a valid message or ⊥. Similarly, for security with identifiable abort, we want that, given only the key pairs, we should be able to predict whether the outcome of the decryption algorithm will be a valid message, or ⊥ along with a correctly identified subset of bad parties. To construct distributed encryption with these properties, we first design a scheme that achieves security with abort, and then show how to use cut-and-choose to amplify its security to identifiable abort. Our distributed encryption with abort uses the standard MAC-then-encrypt idea. In particular, the parties sample random decryption keys, and the encryption keys are PRG evaluations of these decryption keys. The encryption algorithm samples a random MAC key and then implements the standard MAC-then-encrypt idea, while the decryption algorithm evaluates the PRG on all the decryption keys in order to obtain the corresponding encryption keys, and then uses them to decrypt the message and the tag. It then checks whether the tag is a valid MAC on the message under the MAC key that was output by the encryption algorithm. For predicting the outcome of the decryption algorithm, we can simply compute an XOR of all the encryption keys and of the PRG evaluations of all the decryption keys. If the result is a non-zero string, then the output of the decryption algorithm will be ⊥; otherwise, it will be a valid message. For upgrading to security with identifiable abort, the key generation algorithm samples multiple key pairs of the distributed encryption scheme that achieves security with abort. During encryption, we sample a subset of the keys and use it for validation by outputting the encryption keys, and using
the remaining keys in a redundant way, by re-encrypting multiple copies of the message under these keys. The decryption algorithm checks whether the revealed keys are consistent with the corresponding decryption keys, identifies any bad key pairs, and then decrypts the ciphertexts using the remaining keys. For predicting the outcome of decryption, we can simply sample a random subset of the encryption keys, check whether they are consistent with the corresponding decryption keys, and identify any bad key pairs. Finally, we show that when such a distributed encryption scheme is used along with the standard garbling approach, the resulting protocol achieves security with identifiable abort. Unfortunately, due to time constraints, I won't be able to get into the details of this construction. But to conclude: we show that an elementary reduction for efficiently computable functions that achieves full security against a dishonest majority of parties is unlikely to exist, while elementary reductions that achieve identifiable abort against a dishonest majority of parties do exist.
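To make the MAC-then-encrypt distributed encryption with abort a bit more concrete, here is a minimal single-message sketch. Everything here is an illustrative assumption on my part (a hash-based toy PRG, HMAC-SHA256 as the MAC, and specific key and message lengths); it follows the description above, not the construction from the paper verbatim.

```python
import hashlib
import hmac
import os

MSG_LEN, TAG_LEN = 32, 32
PAD_LEN = MSG_LEN + TAG_LEN

def prg(seed: bytes, n: int) -> bytes:
    # Toy PRG: expands a short decryption key into a long encryption key.
    out, i = b"", 0
    while len(out) < n:
        out += hashlib.sha256(seed + bytes([i])).digest()
        i += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def keygen():
    dk = os.urandom(16)        # short decryption key
    ek = prg(dk, PAD_LEN)      # expanded encryption key
    return dk, ek

def encrypt(eks, message):
    # MAC-then-encrypt: tag the message, then one-time-pad (msg || tag)
    # under the XOR of all encryption keys. Note: no PRG calls here,
    # so this algorithm (playing the role of G) is PRG-independent.
    mac_key = os.urandom(32)
    tag = hmac.new(mac_key, message, hashlib.sha256).digest()
    pad = bytes(PAD_LEN)
    for ek in eks:
        pad = xor(pad, ek)
    return xor(message + tag, pad), mac_key

def decrypt(dks, ct, mac_key):
    # Re-derive the encryption keys from the decryption keys via the PRG,
    # strip the pad, and verify the MAC; on failure, output None (i.e. ⊥).
    pad = bytes(PAD_LEN)
    for dk in dks:
        pad = xor(pad, prg(dk, PAD_LEN))
    pt = xor(ct, pad)
    msg, tag = pt[:MSG_LEN], pt[MSG_LEN:]
    if hmac.compare_digest(tag, hmac.new(mac_key, msg, hashlib.sha256).digest()):
        return msg
    return None

def predict(dks, eks) -> bool:
    # Outcome prediction from the key pairs alone: XOR the claimed
    # encryption keys against the PRG evaluations of the decryption keys;
    # a non-zero result means decryption will output ⊥.
    acc = bytes(PAD_LEN)
    for dk, ek in zip(dks, eks):
        acc = xor(acc, xor(prg(dk, PAD_LEN), ek))
    return acc == bytes(PAD_LEN)

pairs = [keygen() for _ in range(3)]
dks = [p[0] for p in pairs]
eks = [p[1] for p in pairs]
msg = os.urandom(MSG_LEN)
ct, mk = encrypt(eks, msg)
```

With consistent key pairs, decryption recovers the message and the predictor says "valid"; if a party hands in an encryption key inconsistent with its decryption key, the predictor flags ⊥ and decryption indeed fails the MAC check, mirroring the attack on PRG-independent G discussed earlier.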