Thank you. My co-authors are Ran Cohen and Daniel Wichs. Ran would have liked to give this talk, but he didn't receive a visa in time, so I'm very sad about that. The talk is about adaptive security. In normal MPC, the standard model of static corruptions allows an adversary to pick the corrupted parties before the protocol begins. So an adversary can, for example, say: this set of parties is corrupted. At that point, in the real/ideal simulation paradigm, the adversary is given the inputs of those particular parties and can then cheat as it wishes, and the simulator is required to produce a transcript of the protocol. But this model is unfortunately unrealistic, in the sense that requiring the adversary to pick the set of corrupted parties at the beginning of the protocol is perhaps overly optimistic. In the adaptive corruption model, the adversary gets to pick the corrupted parties at any time. So for example, the adversary can corrupt this party, and then at some later point in the execution it can corrupt another party, or another set of parties, and so forth. In particular, it can corrupt parties after the end of the protocol. So for example, the entire protocol can run, and at this point the simulator of an MPC needs to produce a transcript of that protocol. Then, after the execution of the protocol has finished, the adversary decides to corrupt all of the parties, and at this point the simulator needs to explain the transcript that it produced to the adversary. In particular, the simulator is now given the inputs, but it has to explain how the transcript it produced before seeing the inputs corresponds to those inputs and to the randomness that the honest parties would have chosen. This is naturally a much harder task than the static corruption model, and yet it represents a realistic model for protocol composition, in the sense that an adversary can attack a protocol at any point.
Moreover, it's very important in composition theorems. For example, in the UC composition theorem, in order to make the entire thing work, oftentimes parties get corrupted, and a sub-protocol that involves that set of parties has to then be explained. To give a better explanation, I've made a little video that explains exactly why it's important to study adaptive security. So I'll let you take a look at that here. This right here is what I consider to be the study of adaptive security. There's no sound, but in this video the skier is explaining why this creates a rush of adrenaline, and I get the same rush of adrenaline when I study adaptive security. You might consider the static corruption model one way of skiing down a mountain, and the adaptive security model another way of skiing down the mountain. And if you ever dream of skiing down a mountain like this, you can understand why studying and achieving what you can achieve with adaptive security could be exhilarating. Personally, if you ask me why I study this, it's the same reason this person skis this way. And if you ask any funding agency why they should fund research into adaptive security, you might as well ask any funder of this skier why they fund that activity; it's the same answer. So the scientific question of the talk is: at what cost do we achieve adaptive security? Let me give you a quick summary of MPC work. This blue line right here represents n-1 security for MPC, meaning that the protocol tolerates n-1 corruptions. Starting from GMW87 all the way to last year at FOCS, we've achieved quite a bit in this particular model: the state of the art is two rounds of communication, with communication right here that is sublinear.
Since a few years after the advent of FHE, we've been able to achieve multi-party computation that requires communication sublinear in the size of the circuit, which is quite remarkable. A few years later, we were able to achieve that in two rounds, with communication that's sublinear, and in various settings with online work for one of the parties that is also sublinear in the circuit. That's really the epitome of what one can achieve with standard n-1 security. Now, when we consider the adaptive model, the first protocols in this model took O(d) rounds, where d is the depth of the circuit. Recently, we've been able to achieve two rounds, but unfortunately not succinct communication: O(|C|) communication, where |C| is the size of the circuit. There was prior work right here by Garg and Polychroniadou that achieved two rounds and sublinear communication, but the way they achieve it is by using a reference string that has to be the size of the circuit. So these are the first two targets of this research: can we achieve two rounds with sublinear communication, and also a reference string that's sublinear in the circuit, so there's really no cheating? The answer to that question is yes, and I'll show you how we achieve it. Another thing to study is the honest majority case right here. Assuming an honest majority, we can achieve MPC with either two rounds and sublinear communication, or three rounds with fewer assumptions. But in the adaptive case, it actually takes three rounds: the latest result right here takes three rounds, and that achieves sublinear communication. (Every time you see that blue mark right there, that's sublinear communication.) So the question is: can we achieve two rounds, just like we can in the static setting? These two are the static settings, and the outer ones are the adaptive settings.
And in fact, we can do that, and so that will be the set of results I want to explain. To understand why this is actually difficult, let me explain the bottleneck to achieving two-round sublinear adaptively secure MPC. The standard framework for achieving such MPC works something like this. Everybody has a public key for some threshold FHE scheme, as well as a share of the corresponding secret key; in this model we can assume that's part of the setup. Each party encrypts its input under the FHE scheme and broadcasts that to everybody. So after the first round, everybody has received encryptions of everybody's inputs. Then everybody runs the FHE evaluation to compute the function, which gives an encrypted version of the output; everybody should obtain the same y, because they're computing the same function. Then each party can use its secret-key share right here to produce a decryption share of y. Finally, they broadcast those, and given everybody else's decryption share of the output, they can compute the final value of the function right here. So that's two rounds, and you can see why it's sublinear in communication: the only things sent are an encryption of the input right here and a decryption share of the output right here. So a very good recipe for achieving the goal we want is to come up with an adaptively secure FHE. What would an adaptively secure FHE look like? Essentially, it needs to support the following feature. A secret key is generated, and then a simulator needs to produce, for the public key, a set of ciphertexts for a set of messages. And again, since it's adaptively secure, the simulator doesn't know what messages these have to correspond to. Then later, an adversary says, you know what?
These ciphertexts need to correspond to m1 through ml. The adversary gets to select those, in particular after seeing the ciphertexts. Finally, the simulator needs to give an explanation of its ciphertexts: given these messages m1 through ml, it must produce a short secret key that decrypts each of these ciphertexts to the corresponding mi. That's expressed in this formal experiment, where the adversary is given the ciphertexts as well as the secret key, and can check that the decryption actually works out. So that's the primitive we need. Unfortunately, as Katz, Thiruvengadam, and Zhou (KTZ) showed, this is actually impossible. So this is the bottleneck for the main problem I study right here: adaptively secure FHE is impossible. And there's a really nice, clever argument for why this holds. Recall from the previous experiment that S1 has to produce a bunch of ciphertexts. Now, for any circuit Cf of any function f, we can run the FHE evaluation on these to produce a very short ciphertext, right? This short ciphertext supposedly represents the output of the function. If adaptively secure FHE were possible, then we could compute f(m) in the following very clever way. We run the second simulator which, given each of the messages, produces a short secret key right there, and then we decrypt this ciphertext using that short secret key. And now you can see why this is impossible: take a look at this blue box right here. This blue box is a circuit that computes f(m). But look at its size: all it takes as input is the short ciphertext c right here, and since FHE is naturally succinct, c is basically the size of the output.
And the size of this circuit is basically the size of the simulator, which only takes as input m1 through ml (it doesn't involve the circuit right there), plus the FHE decryption, which only takes the secret key that's hard-coded in there. So this entire circuit is really, really small: it's the size of the input and the output. And if adaptively secure FHE were possible, then one could compute f(m), i.e., any function, with a circuit whose size is basically that of its input and output. That is exactly what KTZ show to be impossible: there are certain functions for which this cannot hold, and therefore adaptively secure FHE cannot exist in general. It's a very nice argument, but it represents a bottleneck for using all of the prior work on MPC for this particular problem, so we actually have to do something slightly different. In fact, the story gets even worse, because erasures don't even help. Erasures are a trick in adaptive security, a natural assumption which says that each party can erase some of its random coins, so that the simulator never has to produce them at a later point. They don't help in this model. For example, we could erase the coins right after we encrypt, and erase the sk right after we decrypt, but the KTZ impossibility still kicks in. If we again draw the green box around things that happened before the inputs are known and the blue box around things that happened after, essentially the same argument holds. The problem is, of course, that if a corruption happens right here, the party right here still needs to keep its sk_i in order to complete the protocol. So if the corruption happens right here, then the simulator would need to create a secret key that explains the ciphertexts, and that invokes the KTZ impossibility. So two-round sublinear adaptively secure MPC has a bottleneck.
And we need a new idea to achieve it. Essentially, it's going to be the same technique that we use all over cryptography, with a different twist. We take two primitives, one that is succinct but not adaptively secure and one that is adaptively secure but not succinct, and we combine them in several ways. We do this for all of our results; you can look at our paper to see this applied with the various components we need. For our main result, the major component is a recent technique from Quach, Wee, and Wichs from last year, called laconic function evaluation (LFE). It's a kind of dual of FHE. Essentially, what it allows us to do is generate a very compact CRS that depends only on the depth of the functions we would like to compute: if we want to compute depth-d functions, we just create a CRS that handles all depth-d functions. Then one can take essentially any circuit right here of the right depth and compress it into a very small digest of that particular circuit. Next, one can encrypt a particular input right here using the digest and the CRS, and that produces a ciphertext which encodes that input. And finally, one can decrypt that ciphertext using this circuit and the CRS to produce the output y. Notice why this avoids the impossibility right here: if we plug these components in, when we actually decrypt, unlike FHE decryption, which just takes the sk and the c right here, the laconic function evaluation decryption takes the circuit itself as input. So this box, which computes the circuit, is in fact the size of the circuit, and therefore avoids the impossibility of KTZ. So this is going to be our new primitive, and we're going to use it in the following way. We generate a CRS.
Once we know the circuit that we want to compute, all of the parties compute this digest, which at this point is deterministic; it doesn't require any random coins to compress. Then we jointly encrypt the inputs of all the parties. The way we do that is we use the BLPV result from '18, a two-round MPC that is adaptively secure but not succinct, to compute the ciphertext of the LFE scheme. Once we know the ciphertext, everybody erases the random coins they used in this step, and then decrypts the ciphertext. Since this component is adaptively secure, and using one more property in our security proof, namely that the LFE is all-but-one adaptively secure, with an erasure we can prove the result we want: composing something that is adaptive but not succinct with something that is succinct but not adaptive to get the best of both worlds. Now one can ask: how do we remove erasures? One can actually do that using a result by Dachman-Soled, Katz, and Rao called the explainability compiler framework, which can transform the previous protocol so that we can explain those random coins r_i. Unfortunately, it requires iO. But unlike the prior results that use iO, this will not require a common reference string the size of the circuit; it's actually a very small explanation for the LFE random coins. So, as a summary of this part of our results: we achieve a two-round protocol, and notice it's going to be succinct in the communication, succinct in the online computation, and also succinct in the setup right here.
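The LFE interface used above (a compact CRS, a deterministic compress, encryption under the digest, and a decryption that takes the whole circuit) can be mocked to show the data flow. This is purely an interface sketch with no hiding whatsoever; the function names are mine, not the actual QWW API. The point is only the shape: dec receives the full circuit, which is exactly what lets the scheme dodge the KTZ size argument.

```python
import hashlib

def crsgen(depth_bound):
    """The CRS is compact: it depends only on a depth bound, not on any circuit."""
    return {"depth": depth_bound}

def compress(crs, circuit_name):
    """Deterministic short digest of the circuit (here just a hash)."""
    return hashlib.sha256(circuit_name.encode()).hexdigest()[:16]

def enc(crs, digest, x):
    """A real LFE ciphertext would hide x from anyone lacking the circuit;
    this mock merely binds the input to the digest."""
    return {"digest": digest, "payload": x}

def dec(crs, circuit, ct):
    """Decryption takes the full circuit, so it is allowed to cost |circuit|,
    unlike FHE decryption, which only ever sees a short secret key."""
    name, fn = circuit
    assert compress(crs, name) == ct["digest"], "digest mismatch"
    return fn(ct["payload"])

crs = crsgen(depth_bound=40)
circuit = ("sum", lambda xs: sum(xs))
digest = compress(crs, circuit[0])          # deterministic, no coins needed
ct = enc(crs, digest, [1, 2, 3])            # in the protocol: computed via BLPV
print(dec(crs, circuit, ct))                # prints 6
```

In the protocol from the talk, only the enc step is computed jointly (with the adaptively secure but non-succinct MPC), after which the coins are erased and every party runs dec locally.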
In contrast, the prior work from last year also achieves two rounds but isn't succinct, and Garg and Polychroniadou have a result that is two rounds and succinct but requires a setup whose size is essentially the size of the circuit. So we improve upon both of them to achieve the best of both worlds in the fully adaptive case. We also consider special cases of two-party protocols, where Alice and Bob can be optimal in terms of communication and computation. It's kind of amazing that we can achieve Alice-optimal and Bob-optimal protocols using just FHE; but once we add adaptive security, we can only do it for Bob-optimal protocols. We show that Alice-optimal protocols are impossible; for more information, I refer you to the paper. I want to summarize the rest of the results in the paper, which consider how we can achieve weaker versions of adaptive security, for example in the case of honest majority and all-but-one corruptions. As an example of one of the building blocks right there, we consider non-interactive zero-knowledge. GOS show how to achieve adaptive NIZK, but again, it's not succinct, because the size of the proof is related to the size of the circuit you're considering. GGIPSS show how to achieve succinct NIZK, and the way they do it is as follows: the prover generates a secret key and a public key for an FHE scheme and encrypts the witness, so these right here are encryptions of the witness bits. Then one can FHE-evaluate the predicate of the language to produce the result. If x is in the language, and this right here is the predicate for the language, then this would be an encryption of 1.
And then all you need to do is give a NIZK that the FHE decryption of this is 1. The problem is that this can't be adaptively secure, because of the prior FHE impossibility. To get around that, we apply our same methodology right here, using an older technique from GVW called homomorphic trapdoor functions. You can think of them as homomorphic commitments; that's the easiest way to see it. Essentially, there's an inner way and an outer way to evaluate, and of course there's a trapdoor way of inverting a particular image, which will only be used in the security proof. So our protocol works as follows. Just like before, we commit to our witness; these are commitments to each of its bits. Then we produce an inner and an outer evaluation. The outer evaluation is something the verifier can do: it just uses the commitments to the bits and yields a commitment to the output bit. Then we use an adaptive NIZK, the GOS one that I just discussed, to show that this commitment opens to 1. So that, again, is a way of taking something that's adaptive but not succinct and something that's succinct but not adaptive and combining them. We can now use this succinct adaptive NIZK for a number of other results. For example, in the all-but-one corruption model, we achieve a two-round protocol, matching the best static result right here, that requires only succinct communication and a threshold PKI. We can remove the assumption of the threshold PKI by expanding the number of rounds.
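A minimal sketch of this commit-then-outer-evaluate pattern, using Pedersen-style additively homomorphic commitments over a toy group. Here the predicate is simplified to a linear one (a bit-sum), and revealing the combined randomness stands in for the GOS adaptive NIZK; everything is illustrative, not the actual GVW homomorphic trapdoor functions, and the parameters carry no security.

```python
import random

P = 100003   # toy prime modulus (illustrative only, no real security)
G, H = 2, 3  # two bases for a Pedersen-style commitment

def commit(v, r):
    """Additively homomorphic: commit(a, ra) * commit(b, rb) = commit(a+b, ra+rb)."""
    return pow(G, v, P) * pow(H, r, P) % P

# Prover commits to each bit of its witness.
witness = [1, 0, 1]
rands = [random.randrange(P - 1) for _ in witness]
coms = [commit(b, r) for b, r in zip(witness, rands)]

# "Outer evaluation": the verifier combines the input commitments into a
# commitment to the output of the (here linear) predicate, the bit-sum.
c_out = 1
for c in coms:
    c_out = c_out * c % P

# In the real construction the prover now gives a GOS-style adaptive NIZK
# that c_out opens to the claimed output; in this toy we simply reveal
# the combined randomness as the "proof".
claimed, proof = sum(witness), sum(rands)
assert commit(claimed, proof) == c_out  # verifier's check
print(claimed)  # prints 2
```

The succinctness comes from the fact that the verifier only ever handles commitments (one per witness bit plus one output), while the proof itself is about a single opening rather than the whole circuit.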
So essentially that's an improvement on the prior work, which required three rounds right here, and it matches the static case right there. In the honest majority case, we can apply similar techniques and again get a two-round protocol that matches the best static protocol and requires essentially the same assumptions right here. And we can of course remove the need for a threshold PKI by combining it with the Damgård-Ishai protocol, which has a constant number of rounds. OK, so what I want to do in the last few minutes is explain some of the open questions here, the gaps that are still left in this elegant study of adaptive security. The first basic question: all prior techniques that are succinct require either iO or erasures, and the question is whether one can get around that. Are iO or erasures necessary for adaptively secure succinct MPC? The next question concerns reference strings and erasures: either we use erasures right here, or we use reference strings. Again, that's related to the iO question, but much simpler. Can we reduce the reference string to a random string, remove that basic bottleneck, and really achieve parity? And finally, the last question is whether we can relax the setup. In the all-but-one adaptive case, we need this threshold PKI right here, and is that necessary? All of the results that have two or three rounds require this threshold PKI, while the prior results that have a constant number of rounds don't. So can you achieve optimal round complexity without these setup assumptions? These, I think, are addressable, and one can attack these particular questions.
We've thought about them, but I think they're the next in line to fall from this basic question. And I'll leave you with that. Thank you for your attention. Questions?

Audience: It's just a comment. It's great work, and a great talk and animation. Just a small comment: there actually is work on succinct CRS, succinct communication, two-round adaptively secure MPC, in PKC 2017, with Oxana and Muthu. It's based on the work of Garg and Polychroniadou, just making it more efficient.

Speaker: Are you talking about CPV?

Audience: Yes.

Speaker: So that's a constant number of rounds, though, right?

Audience: No, no, it's two rounds.

Speaker: It's two rounds?

Audience: It's two rounds. But the cost is really high: it's sub-exponential iO, in the same framework.

Speaker: OK, so we mis-cited you here in this result, because we were aware of the work, but I guess...

Audience: Yeah, never mind. Your protocol is much better.

Speaker: Thank you. Thanks for the citation. Any other questions?

Chair: OK, let's thank abhi.