Good evening everyone. Thanks for being here. I'm Aarushi, and I'm going to talk about our results on two-round information-theoretic MPC with malicious security. This is based on joint work with Prabhanjan, Arka and Abhishek.

We work in a setting similar to the one Rotem described, but in case some of you just arrived, let me briefly give an overview of our setting. As the title of our paper suggests, we consider security against malicious adversaries, in particular a malicious adversary who can corrupt fewer than half of the parties in the system. This is also known as the honest-majority setting. Why do we care about the honest-majority setting? There are a couple of reasons. To begin with, information-theoretic security is possible in the honest-majority setting, unlike in the dishonest-majority setting. Also, simulation proofs for honest-majority protocols are typically straight-line and UC secure. And, interestingly, the round-complexity lower bounds for dishonest-majority MPC do not apply here, so we can hope to achieve extremely efficient and round-optimal protocols. Honest-majority MPC also finds applications in a lot of other cryptographic primitives: for example, it was used in the construction of efficient zero-knowledge protocols, and for constructing leakage-resilient circuit compilers and bounded-key functional encryption.

Thanks to Rotem for talking about the prior results in this area, so I don't have to spend too much time on this slide. Just to give a brief overview: Ben-Or, Goldwasser and Wigderson gave the first construction of information-theoretic MPC in '88. Then, in '89, Bar-Ilan and Beaver gave the first constant-round construction of information-theoretic MPC. In 2010, Ishai, Kushilevitz and Paskin gave the first two-round construction for t < n/3 malicious corruptions. However, they considered a weaker notion of security called security with selective abort.
And finally, concurrently with our work, the talk that you just heard, by Applebaum, Brakerski and Tsabary, gave a two-round protocol for the optimal corruption threshold, but they also considered the weaker notion of security with selective abort. We consider security with abort: we give a two-round protocol with optimal corruption threshold that achieves security with abort against malicious adversaries over point-to-point and broadcast channels, and we also give a two-round protocol that achieves security with selective abort over just point-to-point channels. In this talk, I'm going to focus on our results for security with abort.

Let me start with a brief overview of our approach. We start with an arbitrary-round information-theoretic protocol and use round-compression techniques to get a two-round protocol that achieves privacy with knowledge of outputs. Privacy with knowledge of outputs is a strictly weaker notion of security than security with abort, and I'll get to the definition of this notion in a bit. We then give a round-preserving transformation from privacy with knowledge of outputs to security with abort.

On to the security definitions. Let's recall the definition of security with abort. We say that an MPC protocol achieves security with abort if it captures the following ideal world: all parties send their inputs to the trusted party implementing the functionality; the trusted party evaluates the function on these inputs and sends the output to the adversary; the adversary can then either choose to forward the same output or just send bottom, and the trusted party forwards the adversary's response to all the honest parties. Security with abort satisfies two important properties. One is privacy of the honest parties' inputs. Additionally, it satisfies output correctness: the honest parties either learn the correct output or they learn nothing.
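To make this ideal world concrete, here is a minimal Python sketch of the security-with-abort interaction just described. The names (`ideal_with_abort`, `adversary_decides`) are illustrative, not from the paper; the point is only that the adversary sees the output first but can merely forward it unchanged or force an abort.

```python
def ideal_with_abort(f, inputs, adversary_decides):
    """Ideal world for security with abort: the trusted party evaluates f,
    shows the output to the adversary, and the adversary can only forward
    it unchanged or force an abort (modeled here as None, i.e. bottom)."""
    y = f(*inputs)
    return y if adversary_decides(y) else None  # honest parties get y or bottom
```

Under privacy with knowledge of outputs, by contrast, the last line would return an adversarially chosen value rather than only y-or-bottom.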
Privacy with knowledge of outputs is strictly weaker in the sense that output correctness for the honest parties is not guaranteed here, which basically means that the adversary can choose to send any garbage value y' to the trusted functionality as the output for the honest parties.

Okay, so let's focus on our first step, the round-preserving transformation from privacy with knowledge of outputs to security with abort, for which we devise a new tool that we call multi-key MACs. Multi-key message authentication codes can be seen as a multi-key variant of regular message authentication codes, where each party samples its key independently of everybody else. Given a set of keys, a signer, or a trusted party, can compute a multi-key MAC over a message x and send it back to all the parties. Given this multi-key MAC, each party can locally verify whether the signature is valid with respect to its own key. For correctness, we require that if the multi-key MAC was computed honestly, then every honest party is able to verify the signature using its own key, without any knowledge of the other parties' keys. For security, we say that a multi-key MAC scheme is multi-key unforgeable if, given a multi-key MAC under a set of keys, some of which might be chosen by the adversary itself, the adversary cannot produce another multi-key MAC on a message of its choice that verifies against any of the honest parties' keys. In other words, the adversary cannot output any valid message-signature pair other than the one it received. Essentially, multi-key MACs ensure that, with very high probability, either all honest parties verify a signature or none of them do.

Okay. So assuming we have a construction of this primitive, how can we go from privacy with knowledge of outputs to security with abort?
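The syntax just described can be illustrated with a toy Python sketch: each party's key is an independent pairwise one-time MAC key (a, b) over a prime field, the signer concatenates one tag per key, and each party checks only its own component. This is only meant to show the interface (independent keys, joint signature, local per-key verification); it is not the paper's information-theoretic construction, which has to satisfy the stronger multi-key unforgeability guarantee discussed above.

```python
import secrets

P = 2**61 - 1  # a Mersenne prime; the field for the toy one-time MACs

def keygen():
    """Each party samples its key (a, b) independently of everyone else."""
    return (secrets.randbelow(P), secrets.randbelow(P))

def sign(keys, x):
    """Signer / trusted party: one pairwise one-time tag a*x + b per key."""
    return [(a * x + b) % P for (a, b) in keys]

def verify(key, x, sig, i):
    """Party i locally checks only the component under its own key."""
    a, b = key
    return sig[i] == (a * x + b) % P
```

Local verification needs no knowledge of the other parties' keys, matching the correctness requirement above.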
So let's say f is the function that we are computing with this MPC protocol, and x1, x2, x3 are the parties' respective inputs. We propose to modify this function using techniques similar to those used in IKP10: the modified function f' now additionally takes a key for the multi-key MAC scheme as input from every party. It computes f on the original set of inputs as before, and in addition it computes a multi-key MAC over the output of f under the input keys. Now, given the output y of f and a multi-key MAC on this output, each party can locally verify whether this is a valid signature on y and, depending on that, choose to either output y or just output bottom.

Given the correctness guarantee of the multi-key MAC scheme, if the adversary chooses to send the honest parties the same value that it received from the trusted party, then we are guaranteed that each honest party will be able to verify this multi-key MAC with respect to its own key. And if the adversary tries to modify this value and sends anything other than the value it received from the trusted functionality, we know that with very high probability no honest party will be able to verify it, and hence they all just output bottom. So using this tool, we can guarantee output correctness and hence achieve security with abort. Unfortunately, I won't have time to go over our information-theoretic construction of this primitive, but I'm happy to talk about it offline.

Moving on: now that we have a round-preserving transformation from privacy with knowledge of outputs to security with abort, let's focus on getting a two-round protocol that achieves privacy with knowledge of outputs against malicious adversaries.
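The transformation above can be sketched in Python, using a pairwise one-time tag a*y + b as a toy stand-in for the multi-key MAC. All names here (`f_prime`, `local_output`, the example f) are placeholders, not the paper's notation; the structure mirrors the slide: f' computes y = f(x1, ..., xn) plus a MAC on y under the parties' keys, and each party locally decides between y and bottom.

```python
P = 2**61 - 1  # prime field for the toy one-time MAC tags

def f(x1, x2, x3):
    # stand-in for the function the MPC is actually computing
    return (x1 + x2 + x3) % P

def f_prime(inputs_and_keys):
    """Modified functionality: each party contributes (x_i, k_i); output
    is y = f(x1, ..., xn) together with one MAC tag on y per key."""
    xs = [x for (x, _) in inputs_and_keys]
    keys = [k for (_, k) in inputs_and_keys]
    y = f(*xs)
    sigma = [(a * y + b) % P for (a, b) in keys]
    return y, sigma

def local_output(i, key, y, sigma):
    """Party i's local decision: output y if its own tag verifies,
    otherwise output bottom (None)."""
    a, b = key
    return y if sigma[i] == (a * y + b) % P else None
```

If the adversary forwards (y, sigma) unchanged, every honest party accepts; if it substitutes a different y, the tags fail to verify except with small probability over the keys.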
So our starting point is the standard round-compression techniques that have been used in the literature. In particular, we start with our previous work on honest-majority MPC, where we constructed a two-round protocol that is secure against malicious adversaries and only uses one-way functions. Interestingly, the only place where one-way functions are used in that construction is in the use of garbled circuits. So a natural way to get an analogous result in the information-theoretic setting would be to simply replace the garbled circuits with information-theoretic variants of garbled circuits. Unfortunately, this simple idea runs into a fundamental problem, and to explain why, let me dive a little deeper into the round-compression technique used in that paper and in most prior works.

The idea is pretty simple: start with an interactive, arbitrary-round MPC protocol and have the parties somehow commit to their inputs in the first round. Then, in the second round, the parties send a garbling of their next-message functions for each round of the underlying protocol. All of this is done while ensuring that the garbled circuits can only be evaluated once, and that after the second round these garbled circuits can sort of talk to each other without any additional interaction, essentially implementing the MPC in the head.

To explain how these garbled circuits actually talk to each other, let me consider a simplistic example where the second garbled circuit of party one wants to get inputs from the first garbled circuits of the two parties. This is enabled using a helper protocol that implements an OT functionality. Basically, the first garbled circuit of party two just outputs its first-round message in the underlying protocol, and the first garbled circuit of party one just outputs all its input wire labels for party one's second garbled circuit.
The OT functionality then selects the wire labels corresponding to party two's first message, and given these input wire labels, we can evaluate the second garbled circuit of the first party.

Okay, so coming back to the problem with this initial approach: with information-theoretic garbled circuits, the size of the input wire labels grows exponentially in the depth of the circuit being garbled. And here, since we are garbling a circuit similar to the next-message function of the underlying protocol, the size of this function could potentially be as large as the circuit that the MPC is computing, so the communication complexity grows exponentially. Just to note, this is a very simplistic explanation of why we run into this problem, but implicitly, this is essentially the issue.

Okay, so how do we solve this? We use techniques that are very similar to the ones used in the work of Benhamouda and Lin, which is to modify the function that the first party garbles. Instead of garbling its next-message function, the circuit just outputs the party's actual input x, and the computation of the next-message function is delegated to the helper protocol. We modify the functionality implemented by this helper protocol as follows: the helper protocol first computes the next-message function of the second party, and then implements an OT functionality over it as before.

Okay, so now that we've successfully managed to shrink the size of the circuit that we want to garble, all that really remains is to design a two-round helper protocol for this particular functionality. But unfortunately, this is not as straightforward as it seems, for the following reason. Recall the template for our compressed two-round protocol, which now uses this two-round helper protocol.
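The delegation step can be sketched abstractly as follows. This is a simplified illustration with made-up names, not the actual helper functionality from the paper: the helper first evaluates party two's next-message function itself (so that circuit no longer needs to be garbled), then performs the OT-style selection of party one's input wire labels according to the resulting message bits.

```python
def helper_functionality(next_msg_fn, x2, r2, labels):
    """Toy helper: compute party two's next message inside the functionality,
    then, for each wire, pick the one label that matches the message bit."""
    msg_bits = next_msg_fn(x2, r2)                       # delegated computation
    return [labels[i][b] for i, b in enumerate(msg_bits)]
```

With the next-message computation moved into the helper, party one's garbled circuit only has to output its input, so the circuit being garbled stays shallow and the information-theoretic wire labels no longer blow up with the depth of the underlying protocol.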
The first-round messages of this two-round helper protocol are actually used to implicitly commit to the inputs of the parties in our main two-round protocol. And we know that for malicious security, the simulator must be able to simulate messages on behalf of the honest parties and extract the adversary's inputs to send to the trusted party. So if this helper protocol runs as a subroutine of our two-round protocol and is used to commit to the parties' inputs, there must be a way for the simulator of the helper protocol to extract the inputs from its adversary, which basically means that we require this helper protocol to be maliciously secure. And now it seems like we've run into a circular problem: we started with the goal of designing a two-round protocol that achieves malicious security, and if we knew how to do that, we wouldn't have to deal with all this in the first place. But fortunately, we show that we don't really require full malicious security from our helper protocol; a weaker notion of security suffices, and we also give a construction for the same. Unfortunately, I don't have time to go over more details, but I invite you to take a look at our paper. Thank you for your attention.

Question: ...especially with what Rotem presented, will this also give us simpler protocols in the computational setting with the technique that you have proposed, like this function reduction?

Answer: Simpler in which context? Yeah, in the computational setting as well, our reduction is generic, and it works both in the computational setting and the information-theoretic setting. So it depends which protocol you start from before you reduce the degree, but yes. Oh, for computing the degree-two function itself? Yeah, so... Oh, sorry.
I think that we didn't need any new protocol for computing the degree-two function in the computational setting. We needed a new protocol for the perfect-security setting in order to get three rounds, but for the computational setting there is an existing one. So it does give you simpler protocols even in the computational setting. Yeah. Yeah, different, okay.

Okay, any more questions from the audience? Then let's thank the speakers.