from key combiners to secure MPC and back. And this is joint work with Sai Krishna, Ayush, Nathan and Amit. Common to both of these talks is understanding two notions that we are all familiar with: functional encryption and secure MPC. We are going to explore techniques that lie at the intersection of the two, and see how constructions in one area can be used to achieve constructions in the other. So let me start with some background on functional encryption. For most of the talk, I'm going to stick to public-key functional encryption. FE is a generalization of standard public-key encryption that additionally has a key generation algorithm. The key generation algorithm lets you generate functional keys associated with circuits, in such a way that if you have a ciphertext encrypting some message, say X, then you can use the functional key associated with a circuit C to obtain C(X). So functional encryption gives more fine-grained access to encrypted data; with standard public-key encryption, you either get the whole message or nothing. The security guarantee roughly says the following. The adversary submits a challenge message pair (X0, X1), and is also allowed to make functional key queries for circuits C1 through CQ. The challenger responds with an encryption of either X0 or X1, together with the functional keys for C1 through CQ, and the adversary's goal is to distinguish between these two experiments. As stated, this doesn't make sense, because the adversary could query a circuit that outputs different values on X0 and X1. So we need to impose an additional restriction: every queried circuit must produce the same output on X0 as on X1.
And moreover, we also require that X0 and X1 have the same length. In the past few years, FE has found many applications in cryptography and beyond. For instance, FE was used to construct indistinguishability obfuscation. It has been used to construct delegation schemes, public-key watermarking schemes, hardness of finding a Nash equilibrium, lower bounds for differential privacy, and so on. And last but not least, it is also useful for constructing secure multi-party computation protocols. So what is secure MPC? All of you already know this, but it doesn't hurt to recall. Secure MPC is this remarkable notion that allows multiple parties, each with its own private input, to come together and jointly compute a function on their private inputs. In terms of security, we want that even if the adversary corrupts a subset of the parties, all it learns are the inputs of the corrupted parties and the output, and nothing else. As I mentioned earlier, the goal of both talks is to explore techniques that lie at the intersection of these two notions. There have already been some works studying this intersection. For instance, in 2012, GVW showed how to construct bounded-collusion secure functional encryption schemes starting from the honest-majority MPC protocol of BGW. A few years back, there was also a construction of non-interactive MPC in the reusable correlated randomness model from a generalization of FE called multi-input functional encryption. And recently, we also saw how to construct combiners for functional encryption using two-round MPC protocols. Both of our works continue this line of research. So let me start with the first part.
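As a toy illustration, the admissibility restriction on the adversary's queries, namely that every queried circuit must agree on X0 and X1 and that the two messages have the same length, can be sketched in Python. The function name `admissible`, the circuit representation, and the example circuits are hypothetical names for illustration, not part of any real FE library:

```python
# Hypothetical sketch of the admissibility check in the FE security game.
# A challenge (x0, x1) with circuit queries C_1..C_Q is only meaningful if
# no queried circuit can distinguish the two messages on its own.
def admissible(x0, x1, circuits):
    """Return True iff the adversary's queries are allowed."""
    if len(x0) != len(x1):              # messages must have equal length
        return False
    # every queried circuit must output the same value on x0 and x1
    return all(C(x0) == C(x1) for C in circuits)

# toy circuits: one leaks the parity of the first bit, one leaks the length
parity = lambda x: x[0] % 2
length = lambda x: len(x)

print(admissible([0, 1, 1], [0, 0, 1], [parity, length]))  # True
print(admissible([0, 1, 1], [1, 0, 1], [parity]))          # False
```

The second call fails because the parity circuit outputs 0 on the first message and 1 on the second, so handing out that key would trivially break indistinguishability.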
So I'm going to talk about an optimal bounded-collusion secure FE scheme, and this is joint work with my coauthors. Recall that I defined the security of the FE scheme where the adversary can make Q functional key queries. So what is this Q? A natural question to ask is: how do the parameters of the FE scheme grow with Q? In particular, I'm going to focus on the encryption complexity of the FE scheme in terms of Q. As we will see soon, this is not an artificial question, and it has some important implications in cryptography. So I'm going to plot the schemes on a scale. On the left-hand side, you have FE schemes whose encryption complexity grows polylogarithmically in Q, and on the right-hand side, FE schemes whose encryption complexity grows polynomially in Q. For now, just ignore the multiplicative factors that are polynomial in the security parameter. On one extreme, we have FE schemes that grow polynomially in Q, and we call these bounded-key FE schemes. Bounded-key FE schemes have somewhat limited applications, but are still useful to construct interesting primitives such as public-key watermarking schemes. And we know a lot about how to construct such schemes; we know how to construct them from standard assumptions such as DDH, LWE, and so on. On the other extreme, we have schemes that grow polylogarithmically in Q, and we call these collusion-resistant FE schemes. These are even more powerful than bounded-key FE schemes and have many more interesting applications. But the drawback is that we only know how to construct them from newer assumptions, and the reason is that collusion-resistant FE implies indistinguishability obfuscation.
If you want to know more about how to construct these schemes from newer assumptions, attend Rachel's talk on Wednesday. And there is a lot still to be understood in the middle; in the past few years, there have been works towards understanding what the schemes in the middle look like. Very recently, it was shown that a public-key FE scheme with encryption complexity sublinear in Q is as powerful as FE schemes that grow polylogarithmically in Q. Meaning, an FE scheme with encryption complexity sublinear in Q already gives you a collusion-resistant FE scheme. So the whole region to the left of linear implies indistinguishability obfuscation, and we still don't know how to construct it from standard assumptions. What about the situation on the right? In 2012, GVW gave the first construction of a bounded-key FE scheme: they showed how to construct FE for polynomial-size circuits from weak PRFs in NC1, and currently weak PRFs in NC1 are only known from specific standard assumptions. But for NC1 circuits, they showed how to construct FE schemes from the minimal assumption of public-key encryption. The drawback of their scheme was that the encryption complexity was Q^4, and there have been efforts to reduce this dependence on Q since then. A couple of years back, Shweta and Alon showed how to achieve encryption complexity quadratic in Q from the learning with errors assumption. So now we have these two regions; what about the middle? We didn't know whether FE schemes with encryption complexity linear in Q were in the red region or in the green region. And in this work, we show that linear complexity is actually in the green region: assuming public-key encryption, which is the minimal assumption, there exists a public-key FE scheme for polynomial-size circuits with encryption complexity linear in Q, with adaptive security.
Adaptive security just says that the adversary can make its functional key queries adaptively. Moreover, our construction makes black-box use of public-key encryption. So our work establishes a dichotomy in functional encryption: if you can construct a functional encryption scheme that is sublinear in Q, that would imply iO, and we show that an FE scheme that grows linearly in Q is equivalent to public-key encryption. As I said earlier, the previous best known result achieved FE with encryption complexity quadratic in Q, and it was only selectively secure and from learning with errors. So we improve the state of the art in three ways: first, we improve the encryption complexity; second, we get adaptive security as against selective; and moreover, we get it from the minimal assumption of public-key encryption. We also give a construction of private-key functional encryption. Private-key FE is just an adaptation of public-key FE to the private-key setting: the setup algorithm now only outputs the master secret key, and the encryption algorithm needs the master secret key in order to compute a ciphertext. We show that assuming one-way functions (again, the minimal assumption) we get a private-key FE scheme for polynomial-size circuits with encryption complexity that grows linearly in Q. Again, we get adaptive security and make black-box use of one-way functions. At last TCC, there was a construction of a private-key FE scheme in the bounded-key setting with linear complexity, but it was only selectively secure and from the learning with errors assumption. Okay, so for the rest of the talk, I'm going to focus on constructing public-key FE schemes.
The construction can be naturally adapted to the private-key setting as well. In the first step, we are going to use public-key encryption to construct a bounded-key FE scheme with large encryption complexity, meaning encryption complexity that is an arbitrary polynomial in Q. Recall that GVW only constructed such a scheme from weak PRFs in NC1, and we show how to get it from public-key encryption. In the second step, we give a generic transformation from FE schemes with large encryption complexity to FE schemes with linear encryption complexity, meaning the complexity grows only linearly in Q. For the first step, we are going to use techniques from the secure MPC literature; the second step is really simple and only uses elementary tools. So let me focus on the easier step, which is the second step, and see how to achieve this. I'm going to call the FE scheme with large encryption complexity the inner scheme and the FE scheme with linear complexity the outer scheme. In terms of notation, I'm going to use small letters for the inner scheme and capital letters for the outer scheme. For the query bounds, I'm going to use T for the inner scheme and Q for the outer scheme. So this is the notation I'm going to use. Okay, so the main idea in this construction is repetition: I'm going to take the inner scheme and repeat it many times. How many times? Q times. Once I do this, I get Q public keys and Q secret keys, and this is my setup. And to encrypt, I again repeat: I encrypt my input under all these different public keys. Small pk denotes a public key of the inner scheme and capital PK denotes the public key of the outer scheme. So note that I'm encrypting the same message under all these different instantiations. And what is the key generation algorithm?
I'm going to pick a random index i from 1 through Q, and generate a key for C under the master secret key associated with this index, MSK_i. Is the scheme clear? Decryption is easy: given the i-th functional key, I look at the i-th ciphertext, ignore everything else, and decrypt to get C(X). Correctness just follows from the correctness of the inner scheme. Why is this efficient? Why does this satisfy linear encryption complexity? Note that I'm repeating the inner encryption Q times, so the complexity is Q times poly(T, lambda, s), where s is the size of the circuit C and T is the query bound of the inner scheme. So all I do is set T to be the security parameter lambda; then the complexity becomes Q times poly(lambda, s). In other words, I took an FE scheme with large encryption complexity, set its query bound to be small, and used it to get an FE scheme with linear complexity and a large query bound. Okay, security is really simple, so let me explain it with an example. Suppose you have Q buckets and Q balls, and you place every ball independently into a random bucket. It is easy to see that the probability that any bucket ends up with at least lambda balls is negligible in the security parameter. So why is this example useful? You should think of the buckets as the ciphertexts, and placing a ball independently into a random bucket as the key generation algorithm. Is it clear why this is the case? Every bucket is a separate instantiation of the inner scheme, and picking a random instantiation is the same as picking a random bucket to place the ball in.
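This balls-and-buckets argument is easy to check empirically. The sketch below (hypothetical helper name, toy parameters) simulates the key generation of the transformed scheme: each of Q key queries picks one of the Q inner instantiations uniformly at random, and the maximum load stays far below the security parameter:

```python
import random

# Toy simulation of the repetition transformation: the Q copies of the inner
# scheme are the buckets, and each of the Q key queries places one ball in a
# uniformly random bucket. Security needs every bucket to hold at most
# T = lambda balls.
def max_bucket_load(Q, trials=200, seed=0):
    rng = random.Random(seed)
    worst = 0
    for _ in range(trials):
        load = [0] * Q
        for _ in range(Q):                 # Q key-generation queries
            load[rng.randrange(Q)] += 1    # pick a random inner instantiation
        worst = max(worst, max(load))
    return worst

lam = 40    # toy security parameter, i.e. the inner query bound T
Q = 1000    # query bound of the outer scheme
print(max_bucket_load(Q), "<", lam)   # the worst load stays well below lam
```

With Q balls in Q buckets, the typical maximum load grows like log Q / log log Q, so the probability of any bucket reaching lambda balls is negligible, exactly as the Chernoff-plus-union-bound argument says.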
Okay, so with this analogy, it is easy to see that the probability that some index from 1 through Q receives more than T invocations of the key generation algorithm of the inner scheme is negligible in the security parameter; this just follows from a Chernoff bound and a union bound. So why is this claim useful? It shows that all Q instantiations of the inner scheme remain secure, and if all the instantiations are secure, then the security of the outer scheme also holds. That's it. So this completes the second step. So let me talk about the first step, which is to construct a public-key FE scheme with large encryption complexity from public-key encryption. As before, I'm going to use the terminology of inner and outer schemes. I'm going to break the first step into two parts. First, I start with a public-key single-key FE scheme for polynomial-size circuits; we know how to construct this from public-key encryption. And we're going to use this scheme to get public-key FE for polynomial-size circuits in the bounded-key setting. The encryption complexity of this will be large in terms of Q, but that's okay, because we have already shown how to go from large to linear complexity in the second step. Okay, so I'm going to call the single-key FE scheme the inner scheme and the one with large encryption complexity the outer scheme. In terms of notation, I'll denote the single-key FE scheme by 1FE, and again use small letters for the inner scheme and capital letters for the outer scheme. Okay, so to construct this bounded-key FE scheme, a natural idea is to just repeat whatever we did in the repetition step, but instantiate the inner scheme with the single-key FE scheme.
So we again repeat the inner scheme Q times, encrypt the same message under all the instantiations, and for key generation pick an index at random and give out a key. Okay, so now this no longer works, because I'm starting with a single-key FE scheme. What does this mean? It means that if the adversary gets two keys for the same instantiation, the security of that instantiation no longer holds. And here, even with just two key queries, the probability that the adversary obtains two keys with respect to the same instantiation of the inner scheme is at least 1/Q, which is not good enough. So this is where the intuition of GVW is helpful: they used a secure MPC protocol to achieve privacy amplification. Their intuition was as follows. Treat every instantiation of the underlying inner scheme as a party in a secure MPC protocol, and if at least two keys are issued for the i-th instantiation, then that instantiation is insecure, so treat it as a corrupted party in the MPC protocol. With this intuition, they construct the scheme as follows. Recall that they show how to construct a bounded-key FE scheme for NC1 circuits from public-key encryption. So I'm going to start with many, many instantiations of the inner single-key FE scheme, but instead of encrypting the same message under all of them, I'm going to secret-share the message into many shares, and every instantiation is used to encrypt one of the shares. And what kind of secret sharing scheme do I use? A threshold secret sharing scheme, where the threshold is set to T; that is, a T-out-of-N scheme.
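To make the threshold secret sharing ingredient concrete, here is a minimal Shamir-style sketch in Python (a toy prime field and hypothetical helper names, for illustration only). It shows the two properties the construction relies on: reconstruction is a linear function of the shares, and shares can be combined homomorphically, here for addition, where the polynomial degree does not grow:

```python
import random

P = 2**31 - 1  # a prime modulus defining a toy field

def share(secret, t, n, rng):
    """t-out-of-n Shamir sharing: a random degree-(t-1) polynomial with
    constant term `secret`, evaluated at the points 1..n."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0: a *linear* function of the share values."""
    total = 0
    for x_i, y_i in shares:
        num, den = 1, 1
        for x_j, _ in shares:
            if x_j != x_i:
                num = num * (-x_j) % P
                den = den * (x_i - x_j) % P
        total = (total + y_i * num * pow(den, P - 2, P)) % P
    return total

rng = random.Random(1)
a = share(20, t=3, n=5, rng=rng)
b = share(22, t=3, n=5, rng=rng)
# homomorphic evaluation: adding the two sharings pointwise yields a
# sharing of the sum of the secrets, with the same threshold
summed = [(x, (ya + yb) % P) for (x, ya), (_, yb) in zip(a, b)]
print(reconstruct(summed[:3]))  # 42
```

Multiplying shares also works but doubles the polynomial degree, which is why homomorphic evaluation on shares is limited to low-degree computations, the restriction the talk keeps coming back to.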
Okay, so this is the main intuition behind the bounded-key FE scheme: GVW considered different instantiations of the FE scheme as corresponding to different parties in an MPC protocol. The scheme constructed by GVW is as follows. They start with a message X, secret-share X into multiple shares, and encrypt each share under an instantiation of the 1FE scheme. So earlier I used Q instantiations, but now I'm going to use N instantiations, where N is an arbitrarily large polynomial in Q. For key generation, consider a circuit G that homomorphically evaluates C on the secret shares; threshold secret sharing has the property that you can do homomorphic computations on the shares, and I'm going to use that property here. I generate functional keys for these circuits with respect to the inner scheme: I pick a subset of the instantiations of the single-key scheme and generate functional keys for these circuits. During decryption, once you decrypt the ciphertexts with these functional keys, you end up with the outputs of the circuits on the secret shares, and you can run the linear reconstruction algorithm to recover the output C(X). The reason this only works for NC1 is that you can only perform homomorphic evaluation on these shares using low-degree polynomials. And our approach is to not start with BGW. If you look at the previous scheme closely, implicit in the construction is a two-round BGW protocol. So instead of using BGW, our observation is to use BMR. The advantage of BMR is that, if you adapt it suitably, you get a two-round MPC protocol for polynomial-size circuits, as against just NC1. So roughly, BMR looks as follows: every party does some PRG computation.
And the output of this PRG computation is fed into a two-round information-theoretic secure MPC protocol for low-degree polynomials. At the end of this protocol, every party recovers a garbling of the circuit being securely computed on the inputs X1 through XN, and then runs the evaluation algorithm on this garbled circuit to recover C(X1, ..., XN). Okay, so how are we going to use BMR to construct an FE scheme? Roughly, the construction looks like this. You generate the first-round messages of the MPC protocol, viewing the N instantiations as N parties; throughout this talk, the analogy is between every instantiation and every party in the MPC protocol. The i-th first-round message is encrypted under the i-th instantiation, so if you started with an N-party protocol, you end up with N ciphertexts, each computed with respect to the inner scheme. In the key generation algorithm, you give a key for a function G_i that computes the second-round messages of the MPC protocol; again, you pick a subset of the instantiations and give functional keys corresponding to these instantiations. And what does decryption do? During decryption, you take the keys and ciphertexts of the inner scheme and decrypt to get a garbling of the circuit C along with the input X. Once you have the garbling, you can decode to recover the output. So, is the scheme clear, by the way? So is this secure? The claim is that it is not, and the reason is the following: suppose I give you two functional keys corresponding to one ciphertext. Using the same ciphertext, you obtain two garbled circuits.
But the randomness for the garbling scheme is already fixed as part of the ciphertext, which means both these garbled circuits were computed using the same randomness, and this is no longer secure. So the idea is to use a correlated garbling scheme, which guarantees security of garbled circuits even if they are computed with the same randomness. You might think this is easy to construct: think of the random string as a long block, chop it into many blocks, use the first block to garble the first circuit, the second block to garble the second circuit, and so on. But I really want this to be a stateless transformation, so that each garbling invocation does not know what the other invocations are doing. Even if you want a stateless construction, there is actually a simple solution: use PRFs. You can think of the random string as just a PRF key, and for every circuit you generate fresh randomness from the PRF and garble with respect to that fresh randomness. However, the PRF solution doesn't work for us, because we really want the garbling algorithm to be computable by a low-degree polynomial. This is necessary for our construction, because we run an information-theoretic MPC protocol for the garbling algorithm, and that is why it has to be low degree. Okay, and we show that correlated garbling schemes exist assuming one-way functions. Once you have this primitive, instead of obtaining a standard garbling from the MPC protocol, you obtain a correlated garbling, and we use this to obtain our construction of an FE scheme. The only difference between the previous construction and this one is that after decryption you obtain a correlated garbling of the output. Is this clear?
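For intuition, the stateless PRF idea just described can be sketched as follows. This is a hypothetical sketch using HMAC-SHA256 as the PRF; as noted above, this exact approach is unusable in the construction, since such a PRF is not computable by low-degree polynomials:

```python
import hmac
import hashlib

# Sketch of the naive stateless fix: treat the shared random string as a PRF
# key and derive fresh-looking randomness per circuit from its identifier.
# (Rejected in the talk: HMAC-style PRFs are not low-degree polynomials.)
def derive_randomness(prf_key: bytes, circuit_id: bytes, nbytes: int = 32) -> bytes:
    """Stateless: each call depends only on the key and the circuit id."""
    return hmac.new(prf_key, circuit_id, hashlib.sha256).digest()[:nbytes]

key = b"shared-correlated-randomness"
r1 = derive_randomness(key, b"circuit-1")
r2 = derive_randomness(key, b"circuit-2")
assert r1 != r2                                      # distinct per circuit
assert r1 == derive_randomness(key, b"circuit-1")    # and fully stateless
```

Each garbling invocation can recompute its own randomness from the key and the circuit identifier alone, which is exactly the statelessness the transformation needs; the correlated garbling scheme in the talk achieves the same effect with a low-degree-friendly construction instead.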
And the security of the scheme essentially follows from the security of the single-key scheme and the security of the correlated garbling scheme. Okay, so let me conclude. We showed how to construct bounded-key FE schemes from minimal assumptions; we get adaptive security and make black-box use of cryptography. As I mentioned earlier, this establishes a dichotomy in the complexity of FE schemes. Previously, a dichotomy was also established for identity-based encryption, so a natural question to ask is whether there is a dichotomy for ABE. ABE lies in between IBE and FE, but there doesn't seem to be any dichotomy for ABE, and it is quite surprising why that is the case. Okay, do you have any questions before I go to the second part? [Question] Is the construction black-box also in its use of the PRG or PRF? [Answer] Yes, we use a PRG, so it's black-box; we only make black-box use of crypto. [Question] Isn't the BMR construction non-black-box in its use of the PRG? [Answer] No.