Okay, so let me go to the second part of the talk. I am Prabhanjan again. This part is titled "From FE Combiners to Secure MPC and Back." So we saw how to use MPC techniques to get FE; let's see how to use FE techniques to get MPC. In particular, I'm going to focus on two efficiency measures of MPC protocols, namely round complexity and communication complexity. Round complexity measures how many rounds of interaction you need in order to securely compute a function, and communication complexity measures how many bits you need to exchange in order to securely compute a function. The goal of this work is to simultaneously optimize both the round complexity and the communication complexity of secure MPC protocols. To keep things simple, I'm only going to focus on passive security, and I'm only going to work in the all-but-one corruption model. The main tool I'm going to use in this construction is functional encryption combiners. So what are functional encryption combiners? We have these different constructions of FE from different assumptions, and let's say you want to use a secure FE construction. Which of these assumptions do you believe? Maybe you start with the first FE construction, and later it turns out to be insecure. What a combiner allows you to do is to combine these different FE instantiations, these different FE candidates, into one secure FE candidate, with the guarantee that the resulting combined candidate is secure as long as any one of the original FE candidates was secure. Is that clear? An alternate perspective on FE combiners is that they're useful even if all the original candidates I start with are the same. What does this even mean? Let me explain with an example. Suppose you have a server that holds the master secret key and public key of an FE scheme.
And whenever you want a functional key to be issued, you talk to the server and the server gives you the functional key. The disadvantage is that there's a single point of failure: if the adversary corrupts the server, then he learns the master secret key, can decrypt all the ciphertexts, and gets all the information. So a natural approach to overcome this problem is to distribute trust, right? Now you have many servers, and each server has its own instantiation of the FE scheme. The first one has MSK1, PK1, the second one MSK2, PK2, and so on. One advantage is that the only way the adversary can learn all the secret keys is to corrupt every single server, which takes more effort. But the conundrum is: which public key are you going to use to encrypt? Maybe you start by using the first public key, but what if the adversary has already corrupted the first server? So here is where FE combiners are useful. You're not going to use any individual public key; you're going to run the FE combiner on all the public keys and encrypt with respect to all of them. And the guarantee is that the combined instantiation is secure as long as at least one instantiation of the original FE scheme is secure. Recall that I'm still using the same FE scheme throughout; it's just that I'm running different instantiations. So here the guarantee is somewhat different from the traditional guarantee you would have seen for FE combiners: the resulting instantiation is secure as long as the adversary cannot obtain all the secret keys of the original instantiations, okay? So what is the relationship between FE combiners and secure MPC protocols? I can really view every invocation of the FE scheme as being a party in the secure MPC protocol.
And if the i-th invocation of FE is compromised, I can analogously consider the i-th party in the MPC protocol as being corrupted, okay? So this is how FE combiners are related to secure MPC protocols, and I'm going to use this analogy to construct MPC protocols. Okay, so let me state our results. We initiate a formal study of the relationship between secure MPC protocols and FE combiners. We show how to get a two-round MPC protocol with communication complexity that grows only polynomially in the depth of the circuit and the input and output lengths of the circuit being securely computed. This protocol is secure assuming learning with errors, and the main tool used in the construction is FE combiners. Concurrently, Quach, Wee, and Wichs obtained this result via the tool of laconic function evaluation. Before our work, the prior two-round secure MPC protocols were either in the CRS model or had large communication complexity; our protocol is in the standard model. What about the other direction? Can we use secure MPC protocols to construct FE combiners? We show how to use constant-round MPC protocols to construct FE combiners for polynomial-sized circuits. The constant-round MPC protocols we use can be based on the existence of PRGs in NC1, so we get FE combiners for polynomial-sized circuits from PRGs in NC1. I'm not going to talk about this theorem in this talk. And we can instantiate PRGs in NC1 from DDH, learning with errors, and so on. Our result is a little more general: we identify a class of MPC protocols that imply FE combiners, so in a sense we give an equivalence between FE combiners and a class of MPC protocols. So let me jump into techniques. We are going to start with a large-communication two-round MPC protocol.
And then we are going to combine this with FE combiners and succinct single-key FE schemes to get a low-communication two-round MPC protocol. Our transformation is generic, and all three of these ingredients can be instantiated from learning with errors, so we get the final result from learning with errors as well. Is it clear? Before I show how to achieve our result, let's recall what low-communication MPC protocols look like. Typically, this is the framework used in constructing low-communication MPC protocols. There is a CRS, and every party encrypts its input with respect to some public key derived from the CRS. The particular encryption scheme used in the literature is called multi-key fully homomorphic encryption, but for this talk you don't really need to know what it is. Once they compute the ciphertexts, every party broadcasts its ciphertext, and each party non-interactively homomorphically evaluates the function on these ciphertexts. In the second round, they partially decrypt the final ciphertext and broadcast the partially decrypted values to everyone. Using the partially decrypted values of all the parties, you can recover the output of the function. So that was in the CRS model, but you can adapt it to get a three-round protocol in the standard model, and the adaptation is really simple: the first party is the one who generates the CRS. This is the semi-honest setting, so that party is always going to generate the CRS honestly. The other two rounds are the same. So let's see how to go from three rounds to two rounds; if we do this, then we get a low-communication protocol in two rounds, which is our result. Towards achieving this, a natural question is whether we can parallelize some rounds. Can we parallelize the first and second rounds? We cannot.
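The two-round template above can be sketched in Python for the special case of summing inputs, with additive masking standing in for multi-key FHE. This is a toy analog I made up purely to show the message flow; real protocols use MKFHE to handle arbitrary circuits and derive the encryption keys from the CRS.

```python
import random

P = 2**61 - 1  # toy modulus, an arbitrary choice for illustration

def round1(x):
    """Round 1: each party 'encrypts' its input under a fresh additive
    mask and broadcasts the ciphertext (the mask stays private)."""
    r = random.randrange(P)
    return (x + r) % P, r

def evaluate(cts):
    """Anyone can non-interactively 'homomorphically evaluate' the sum
    on the broadcast ciphertexts."""
    return sum(cts) % P

def round2(r):
    """Round 2: each party broadcasts its partial decryption (its mask)."""
    return r

def reconstruct(evaluated, partials):
    """Combine the partial decryptions to recover the output."""
    return (evaluated - sum(partials)) % P

inputs = [5, 11, 42]
cts, masks = zip(*(round1(x) for x in inputs))
ct_eval = evaluate(cts)                # local, non-interactive evaluation
partials = [round2(r) for r in masks]  # second (and last) round
assert reconstruct(ct_eval, partials) == sum(inputs) % P
```

Note that the masks here play the role of the secret keys, and the partial decryptions reveal nothing beyond the output for this one function, which mirrors why the real template needs only two rounds of broadcast.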
And the reason is that the public keys are derived from the CRS, so it's unclear how to compute the ciphertexts without even knowing what the CRS is. The other option is to parallelize rounds two and three, and here it seems like we would need a non-interactive decryption phase. Primitives like FHE are not useful for this, because you homomorphically compute some ciphertext, send it to the other party, that person decrypts and sends the answer back; it's an interactive process if you want to obtain the answer. So FHE and its variants are not useful for achieving non-interactive decryption. But it turns out that functional encryption is really useful here: you send the encryption to all the other parties, and if they have the functional keys, they don't have to talk to you; they can decrypt by themselves and get the output. So here is the warm-up attempt to get low-communication MPC from large-communication MPC using single-key FE. There are n parties with inputs x1 through xn, and here is what they're going to do. They run an MPC protocol whose functionality computes an encryption of (x1, ..., xn) under a master secret key MSK. Who computes MSK? The first party computes MSK, feeds it into the MPC protocol, and the parties jointly compute the ciphertext under his master secret key. The first party also computes a functional key for the circuit C being securely computed, and he sends this functional key to all the other parties. So how do you recover the answer? All the parties run the MPC protocol and get the FE ciphertext; they have the functional key, so they decrypt and obtain the output. Correctness I've essentially already argued.
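Here is a hedged Python sketch of this warm-up protocol's flow, with a deliberately insecure toy FE (XOR-based, my own stand-in) just to show the interface and who sends what; `ideal_mpc` abstracts away the two-round MPC as an ideal functionality.

```python
import secrets

def xor(a, b):
    return bytes(p ^ q for p, q in zip(a, b))

# --- toy (insecure!) secret-key FE: interface only, not a real scheme ---
def fe_setup(n):       return secrets.token_bytes(n)  # master secret key
def fe_enc(msk, x):    return xor(msk, x)             # ciphertext
def fe_keygen(msk, C): return (C, msk)                # functional key for C
def fe_dec(sk, ct):
    C, msk = sk
    return C(xor(msk, ct))                            # recover x, output C(x)

# --- warm-up protocol flow ---
def ideal_mpc(msk, inputs):
    """Stand-in for the two-round MPC: jointly computes an FE encryption
    of the concatenated inputs under party 1's msk."""
    return fe_enc(msk, b"".join(inputs))

C = lambda x: bytes([sum(x) % 256])    # the circuit being computed, as a toy
inputs = [b"ab", b"cd", b"ef"]
msk = fe_setup(sum(map(len, inputs)))  # party 1 samples the master secret key
sk_C = fe_keygen(msk, C)               # party 1 sends sk_C alongside round 1
ct = ideal_mpc(msk, inputs)            # parties run the MPC on (inputs, msk)
assert fe_dec(sk_C, ct) == C(b"abcdef")  # every party decrypts locally
```

Notice that the toy `fe_keygen` hands out `msk` itself, which is harmless for showing the flow but highlights that in this warm-up one party holds the master secret key.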
So if you have the FE ciphertext and the functional key, the correctness of FE implies that you get the correct output. What about the round complexity? A couple of years back, Benhamouda-Lin and Garg-Srinivasan showed how to get MPC for poly-sized circuits in just two rounds, so you can use their two-round MPC protocol. Moreover, the FE key generation phase can be parallelized: the first party computes the functional key and sends it alongside the MPC protocol being executed. So you only need two rounds. What about communication complexity? What is being communicated by the parties? The communication complexity is essentially the communication complexity of the underlying MPC protocol plus the size of the functional key, and the communication complexity of the MPC protocol grows polynomially in the complexity of the FE encryption circuit. So overall, it's essentially the computational complexity of FE encryption plus the size of the functional key for C, and we want both of them to grow only polynomially in the depth of the circuit. We are going to use a succinct FE scheme to achieve both these goals. A succinct FE scheme gives you an FE scheme with encryption complexity that grows only polynomially in the depth of the circuit. But unfortunately, the size of the functional key in a succinct FE scheme grows polynomially in the size of C. What would have been ideal is if it grew only polynomially in the depth of C, but this is too strong a property to achieve anyway, because if you're generating a functional key for a circuit C, then of course the size of the key is going to grow proportionally to the size of C. So let's weaken this property: we are going to start with an FE scheme whose functional keys have some structure.
So the functional key will be split into two parts: the first part is a short private string, and the second part is a long public string. We require the short private string to be computable using the master secret key of the FE scheme, but any party can compute the long public string, even without knowing the master secret key. Why is this useful? It's useful because the first party, who knows the master secret key, can compute the short private string, and all the other parties can compute the long public string on their own. So the only thing the first party has to communicate is the short private string, and the communication complexity grows only proportionally to that short private string. It turns out that the FE scheme of GKPVZ has this property: the functional key can be split into two parts, where the first part grows only polynomially in the depth of C and the second part is polynomial in the circuit size. Which means we get the desired bounds for both complexity measures, so we now have communication that grows polynomially in the depth of C. What about security? I'm running out of time, so let me explain this very briefly. This is no longer secure, because the first party is the one generating the master secret key; of course he can decrypt the ciphertext and learn all the parties' inputs. So what we are going to do is have all the parties generate the functional keys, not any individual party, and we are going to use FE combiners to combine all these different invocations of the FE scheme. As I said earlier, every single party generates its own functional key, and you feed all the master secret keys into the MPC protocol. So how do you recover the output of this protocol?
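The split-key structure can be illustrated with a self-contained toy (my own XOR stand-in, not GKPVZ): the short private part is derivable only from the master secret key and is independent of the circuit size, while the long public part is just something any party can derive from the circuit description on its own.

```python
import secrets

def xor(a, b):
    return bytes(p ^ q for p, q in zip(a, b))

# Toy (insecure) FE whose functional key splits into two parts:
#   short private part -- needs msk; its length tracks the input, not |C|
#   long public part   -- computable by anyone from the circuit C alone
def keygen_private(msk):
    return msk                  # only the msk-holder broadcasts this (short)
def keygen_public(C):
    return C                    # every party derives this locally, for free

def fe_enc(msk, x):
    return xor(msk, x)
def fe_dec(private_part, public_part, ct):
    return public_part(xor(private_part, ct))

msk = secrets.token_bytes(4)
ct = fe_enc(msk, b"\x01\x02\x03\x04")
C = lambda x: sum(x)            # toy "circuit"
# Only keygen_private(msk) travels over the network; the long part never does.
assert fe_dec(keygen_private(msk), keygen_public(C), ct) == 10
```

In the actual GKPVZ-based instantiation the private part is poly(depth of C) and the public part is poly(|C|); the toy only mirrors the syntactic split and the communication pattern.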
So you run the reconstruction of the underlying two-round MPC protocol to recover the FE encryption of (x1, ..., xn), then you combine all the functional keys generated by the different parties, and you decrypt the combined ciphertext using the combined functional key to get the desired result, okay? To conclude, we get a two-round MPC protocol with communication complexity proportional only to the depth of the circuit. What I didn't talk about is how to construct FE combiners from PRGs in NC1. Any questions?
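The final reconstruction step can be sketched in the same toy style (XOR stand-in, all names mine): each party runs its own FE instance, the jointly computed ciphertext is masked under all instances at once, and decryption combines every party's key contribution. The plaintext stays recoverable only with all contributions, mirroring the combiner guarantee that security holds unless every master secret key leaks.

```python
import secrets

def xor_all(chunks):
    """XOR a list of equal-length byte strings together."""
    out = bytes(len(chunks[0]))
    for c in chunks:
        out = bytes(p ^ q for p, q in zip(out, c))
    return out

n, m = 4, 6                                        # parties, input length
msks = [secrets.token_bytes(m) for _ in range(n)]  # one FE instance per party

def combined_enc(msks, x):
    """The MPC's output: an encryption under *all* instances at once."""
    return xor_all(msks + [x])

def key_contribution(msk_i, C):
    """Each party's share of the combined functional key for C."""
    return msk_i                                   # toy: one short piece per party

def combined_dec(contribs, C, ct):
    """Combine every party's contribution, recover x, and output C(x)."""
    return C(xor_all(list(contribs) + [ct]))

x = b"abcdef"
C = lambda v: v.upper()                            # toy "circuit"
ct = combined_enc(msks, x)                         # recovered via MPC reconstruction
contribs = [key_contribution(k, C) for k in msks]  # broadcast by each party
assert combined_dec(contribs, C, ct) == b"ABCDEF"
```

Dropping any single contribution leaves the ciphertext masked by that party's key, which is the toy analog of the "secure unless all master secret keys are corrupted" guarantee stated earlier.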