Hi, I'm Nathan, and I'll be presenting "Combiners for Functional Encryption, Unconditionally." This is joint work with Aayush Jain and Amit Sahai. So what exactly is a cryptographic combiner? A cryptographic combiner allows you to take many candidates for some cryptographic primitive, in this case four candidates, and combine them to create a new candidate construction. The guarantee of a cryptographic combiner is that even if all but one of the candidates are broken, as long as there is a single secure candidate, the new candidate will be secure. Those are the properties we want from a cryptographic combiner for a primitive. Additionally, we want the combiner to be efficient, meaning that as the number of candidates we combine increases, the running time of the new candidate remains polynomial in the number of combined candidates. Now, why should we study cryptographic combiners? Cryptographic combiners allow us to hedge our bets on the security of any individual candidate. In this example, there are four candidates, but we only need one of them to be secure in order to have a secure construction. Consider the following situation: we have a candidate that's secure assuming LWE and another candidate that's secure assuming DDH. Which candidate should we use? It's not clear whether LWE or DDH is the more secure assumption. A combiner allows us to combine these candidates to get an explicit construction that is secure as long as either LWE or DDH holds, which is a weaker assumption than the assumption underlying either candidate individually. Moreover, the security of a combiner can be unconditional. This is because a combiner's security is only relevant if one of the initial candidates is secure. And in fact, there are many constructions of unconditional combiners in the literature.
In particular, there are constructions of unconditional combiners for one-way functions, collision-resistant hash functions, public-key encryption, oblivious transfer, and more. Indeed, one of the main goals of cryptographic combiners is to make constructions future-proof in case assumptions break down. Thus, achieving unconditional combiners is important, because we don't want our combiner construction itself to introduce additional assumptions into the mix. In this work, we study cryptographic combiners for functional encryption. In functional encryption, there is a trusted authority that holds the master secret key of the system, and there are users of the system. The trusted authority can generate function keys for various functions and give them to these users. A user in possession of a function key for a function f, given an encryption of some message x, can learn the function evaluated on the message, f(x). The security notion for functional encryption says that an adversary possessing a collection of function keys is unable to distinguish the encryptions of two different messages, provided that the functions evaluate to the same thing on the two messages. Essentially, the adversary can't learn anything beyond what it can learn from the function evaluations. So why should we study functional encryption combiners? For starters, there are many candidate constructions of functional encryption. There are also various cryptanalytic attacks. Thus, we would like to minimize the trust placed on any individual candidate, which is exactly what cryptographic combiners allow us to do. Furthermore, functional encryption combiners give rise to robust functional encryption combiners, which are combiners that don't even require the underlying candidates to be correct.
And this implies universal functional encryption: an explicit functional encryption construction that is secure if functional encryption exists at all. Furthermore, studying functional encryption combiners leads to results in other branches of cryptography, such as round-optimal, depth-proportional-communication MPC. So what was known prior to our work? Ananth et al. and Fischlin et al. studied the related problem of iO combiners. Then Ananth, Jain, and Sahai built FE combiners from sub-exponentially secure FE or LWE, and Ananth et al. built FE combiners assuming the existence of pseudorandom generators in NC1. However, the ultimate question in constructing FE combiners, namely, can we construct FE combiners unconditionally, remained open. In this work, we ask this question, along with the related question of whether we can construct universal functional encryption unconditionally, and the answer is yes. So how might we go about this? For starters, we can consider the notion of function secret sharing, introduced by Boyle, Gilboa, and Ishai. In function secret sharing, there is a function f that is secret-shared across, say, n shares. The property of the function secret sharing scheme is that for any input x, it is possible to secret-share x across n shares, locally compute the shared function on the shared input to get partial function evaluations, and then recombine these partial function evaluations to recover the original function evaluated on the original input.
The security notion for function secret sharing says that an adversary possessing descriptions of all the function shares, along with all the input shares except one, is unable to distinguish the real final function share evaluated on the final input share from a simulated version, where the simulator is given the true function evaluation along with all the other information the adversary possesses. Using function secret sharing, we can come up with an FE combiner construction following this template. Say we have a bunch of FE candidates. To encrypt, we take an input and secret-share it according to our function secret sharing scheme, and then we simply encrypt each of these input shares under the appropriate FE candidate. To generate function keys, we function-secret-share the function and generate function keys for these function shares under the appropriate FE candidates. To see why correctness holds: the encryption is just the concatenation of all the individual underlying ciphertexts, and the function key is the concatenation of the underlying function keys, so an evaluator, given the function key and the ciphertext, can compute the partial function evaluations and then use the function secret sharing recombination procedure to recover f(x). To argue security, observe that if all but one of the FE candidates are broken, say the second candidate is secure but all the others are broken, then the adversary can potentially learn all the input shares except the second one, because we have no guarantee on any of the broken candidates. However, the second input share remains hidden.
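To make the template concrete, here is a minimal Python sketch. It uses a trivially insecure stand-in FE candidate and additive secret sharing for a linear function f(x) = a·x, a special case where "function secret sharing" is easy, since each share-holder can just apply f to its input share. The names `ToyFE`, `combiner_encrypt`, and so on are hypothetical; this only illustrates the correctness of the template, not its security.

```python
import random

P = 2**61 - 1  # toy prime modulus for additive sharing

class ToyFE:
    """Insecure stand-in for an FE candidate (correctness only):
    ciphertexts and function keys are the raw share/function."""
    def encrypt(self, share): return share
    def keygen(self, g): return g
    def decrypt(self, fk, ct): return fk(ct)

def combiner_encrypt(candidates, x):
    """Secret-share x additively and encrypt share i under candidate i."""
    shares = [random.randrange(P) for _ in candidates[:-1]]
    shares.append((x - sum(shares)) % P)
    return [fe.encrypt(s) for fe, s in zip(candidates, shares)]

def combiner_keygen(candidates, a):
    """'Function-share' the linear map f(x) = a*x: each candidate's key
    computes the partial evaluation a * x_i on its input share."""
    return [fe.keygen(lambda s, a=a: (a * s) % P) for fe in candidates]

def combiner_decrypt(candidates, fks, cts):
    """Compute partial evaluations, then recombine (here: sum mod P)."""
    partials = [fe.decrypt(fk, ct) for fe, fk, ct in zip(candidates, fks, cts)]
    return sum(partials) % P

cands = [ToyFE() for _ in range(4)]
cts = combiner_encrypt(cands, 123456)
fks = combiner_keygen(cands, 789)
assert combiner_decrypt(cands, fks, cts) == (789 * 123456) % P
```

Correctness follows because f is linear: summing the partial evaluations a·x_i over all shares gives a·x mod P, exactly the recombination step described above.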
We now observe that we're in a situation very similar to the one for function secret sharing: the adversary possesses descriptions of all the function shares, because the function keys do not hide the function descriptions, and possesses all but one of the input shares. However, the input share for the second candidate, x2, remains hidden. This mirrors the security game for function secret sharing, and indeed, using the security of the underlying function secret sharing scheme and various techniques, it is possible to prove security of this construction. Moreover, since the existence of functional encryption implies the existence of one-way functions, and there exist unconditional one-way function combiners, we can assume the existence of one-way functions in our FE combiner construction and still get an unconditional FE combiner. Unfortunately, though, we don't know how to build such a function secret sharing scheme assuming only one-way functions. However, this isn't the end of the world, because there's another idea we can try. Namely, there's an easy way to obtain a combiner for a constant number of candidates. In this example, we have two candidates, and all you do is nest the candidates together. What I mean by that is: to encrypt a message x, you first encrypt it under candidate one; then you take the resulting ciphertext and encrypt that under candidate two, so the message is doubly encrypted. To generate a function key for a function f, you first generate a function key for f with respect to candidate one. Then you consider the function that runs the decryption functionality of candidate one using this function key, and you generate a function key for this function with respect to candidate two.
Now observe that, given the ciphertext and the function key, what ends up happening is that you run the decryption procedure of candidate one with the function key for f on an encryption under candidate one, and by the correctness of FE decryption, that gives you f(x). So correctness holds. To argue security, suppose again that the second candidate is secure and the first one is broken. Then, intuitively, one layer of encryption is broken, but since the message is doubly encrypted, the second layer of encryption remains secure and the message stays hidden. This is just intuition, but it's possible to formally prove security in this manner. Moreover, this approach extends to any constant number of candidates: in this example I used two candidates, but you could nest, say, ten candidates in the same way. However, we cannot nest n candidates in a row, because then we would violate the efficiency requirement of an FE combiner: the sizes and running times of the nested candidates grow exponentially with the number of combined candidates. For a constant number, though, this is not an issue. So this gives us the following idea: what if we take our original FE candidates, create these nested candidates, and treat them as new candidates in our construction? Here's the new approach. Say we had three FE candidates initially. In our construction, we keep these original three candidates, and we also include all possible 2-nestings: FE1 and FE2 nested together, FE1 and FE3 nested together, and FE2 and FE3 nested together. Now we try an approach very similar to before. To encrypt a message x, we secret-share it into three parts, one for each original candidate, and then we give the appropriate shares to the appropriate candidates.
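The nesting step can be sketched in a few lines of Python. Again, `ToyFE` is a trivially insecure stand-in candidate used only to check correctness, and `nest` is a hypothetical name for the operation described above: encrypt under candidate one, re-encrypt under candidate two, and have candidate two's function key run candidate one's decryption.

```python
class ToyFE:
    """Insecure stand-in FE candidate (correctness only)."""
    def encrypt(self, x): return ("ct", x)
    def keygen(self, f): return f
    def decrypt(self, fk, ct):
        _, x = ct
        return fk(x)

def nest(fe1, fe2):
    """Nest two FE candidates: encrypt under fe1, then under fe2; the fe2
    function key runs fe1's decryption with the fe1 key embedded in it."""
    class Nested:
        def encrypt(self, x):
            return fe2.encrypt(fe1.encrypt(x))  # doubly encrypted
        def keygen(self, f):
            fk1 = fe1.keygen(f)
            # fe2 key for the function "decrypt the inner ciphertext with fk1"
            return fe2.keygen(lambda ct1: fe1.decrypt(fk1, ct1))
        def decrypt(self, fk, ct):
            return fe2.decrypt(fk, ct)
    return Nested()

combined = nest(ToyFE(), ToyFE())
fk = combined.keygen(lambda x: x * x)
assert combined.decrypt(fk, combined.encrypt(7)) == 49
```

Because the nested object exposes the same encrypt/keygen/decrypt interface, `nest(nest(a, b), c)` gives a 3-nesting, which is how the approach extends to any constant number of candidates.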
That is, the candidate that is the nesting of candidates one and two gets shares one and two, while the candidates that contain only a single candidate get a single input share. We encrypt all these messages, and this is our ciphertext. Then, to generate a key for a function f, we want to somehow secret-share the function into function shares for these different candidates and nested candidates, such that it is possible to compute partial function evaluations on either one input share or two input shares, and such that, using these, we can recombine them to obtain f(x). This notion, which we call combiner-friendly homomorphic secret sharing, describes what we need in order to construct an FE combiner: a way to share an input and assign the shares to these 2-nestings of FE candidates, and a way to similarly share the function. To analyze what security condition we need, let's look at the candidates again. Suppose the second candidate is secure, while FE1 and FE3 are broken. Because 2-nesting is a combiner, the nestings that contain FE2 remain secure, and the one that doesn't is broken. So effectively, the adversary will not be able to learn the input shares that contain x2, but will be able to learn all the others. To think about what our security condition should be, let's take a step back. In normal function secret sharing, there are n shares and the adversary learns all but one: it doesn't learn the i-th share, but it learns all the others. In our combiner-friendly homomorphic secret sharing scheme, we have n choose 2 plus n shares, corresponding to the various FE candidates and nestings; for example, share (2, i) is the nesting of candidate two and candidate i.
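The share-assignment bookkeeping can be sketched as follows. This is a hypothetical illustration: each new candidate is indexed by a set of original-candidate indices (singletons for the originals, pairs for the 2-nestings), and each candidate receives exactly the input shares whose indices lie in its set, giving the n + C(n, 2) shares just described.

```python
from itertools import combinations

def candidate_index_sets(n):
    """Index sets for the new candidates: all singletons (the original
    FE candidates) plus all 2-nestings (pairs of candidates)."""
    singles = [frozenset([i]) for i in range(n)]
    pairs = [frozenset(p) for p in combinations(range(n), 2)]
    return singles + pairs

def assign_input_shares(input_shares, n):
    """Give each (nested) candidate the input shares x_i for every index
    i in its index set, e.g. the nesting of candidates 0 and 1 gets x0, x1."""
    return {s: {i: input_shares[i] for i in s}
            for s in candidate_index_sets(n)}

shares = assign_input_shares(["x0", "x1", "x2"], 3)
assert shares[frozenset([0, 1])] == {0: "x0", 1: "x1"}
assert len(shares) == 3 + 3  # n singletons + C(n, 2) pairs
```

If original candidate i is the secure one, exactly the shares whose index set contains i stay hidden, which is the structure the security condition below refers to.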
Now, the adversary will learn the input shares corresponding to every share except the ones that contain, say, i. Our security condition is analogous to that of function secret sharing: an adversary that learns the function descriptions corresponding to every share, along with all the input shares marked in red in this diagram, should not be able to distinguish the real function evaluations on the unknown input shares from simulated versions. Our question is: can we construct this from only one-way functions? To do this, we turned to secure multi-party computation. In secure multi-party computation, there are many parties that each possess some private input, and they want to compute some function f on their joint inputs. To do this, they execute a protocol for the function f, which generates a transcript, and using the transcript, they are able to recover f evaluated on their joint inputs. Well, this is one way of generating the transcript. However, there's another way. Suppose we could compute jointly on all the parties' inputs. Then the transcript could be computed deterministically as a function of all parties' inputs and randomness. Now, this seems somewhat nonsensical, because if you could compute jointly on all the parties' inputs, you would just compute f directly; there would be no need to generate a transcript at all. But bear with me for a second, because this leads us to the following idea: what if each transcript bit could be computed using only a constant number of parties' inputs? In the previous example, the transcript was computed as a function of every party's input. What if, instead, each bit could be computed from only a constant number of parties' inputs?
This is what we call input-local MPC in our paper: it is possible to generate the transcript in this alternate manner, computing each bit from only a constant number of parties' inputs, in this example, two. For instance, here b1 is computed using only the inputs and randomness of parties one and two, and b2 is computed using only the inputs and randomness of parties five and nine, et cetera. If this were possible, then we could build a combiner-friendly homomorphic secret sharing scheme by simply having the function share, say f12, that computes on x1 and x2, generate all the bits of the MPC transcript that depend only on these two parties, and similarly for the other function shares. Then, given all the function evaluations, you would have the entire transcript of the MPC protocol, and you could use the transcript to recover f(x). So here's our input-local MPC again, and the question is: do such protocols exist in the literature? Unfortunately, the answer is no. However, certain protocols, namely those of GS18 and GIS18, come close. The reason they come close is that these protocols take an arbitrary MPC protocol with a specific structure, which they call a conforming protocol, and compress it into two rounds by essentially having each party send garbled circuits for each round of the original protocol. In a sense, this feels input-local, because the garbled circuits themselves don't depend on other parties. However, there are various technical hurdles that make it not input-local. For example, in their protocols, the garbled circuits have hard-coded values inside them that depend on all parties. In this work, we are able to modify these protocols to make them input-local, and thus construct an input-local MPC protocol. I glossed over various challenges, one of which is that MPC security up to n−1 corruptions necessarily requires OT.
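The input-locality property itself is easy to picture in code. The sketch below is a stand-in, not a real MPC protocol: `local_bits` plays the role of the transcript bits owned by a pair of parties (garbled-circuit material in the real protocols), and the point is only that the full transcript decomposes into chunks, each computed from two parties' inputs, exactly what the function share f_ij would emit.

```python
from itertools import combinations

def local_bits(i, j, xi, xj):
    """Stand-in for the transcript bits that depend only on parties i
    and j; here, a single toy bit per pair."""
    return [(xi + xj) % 2]

def full_transcript(inputs):
    """Assemble the transcript pairwise: each chunk touches only two
    parties' inputs, so function share f_ij can produce chunk (i, j)."""
    return {
        (i, j): local_bits(i, j, inputs[i], inputs[j])
        for i, j in combinations(range(len(inputs)), 2)
    }

t = full_transcript([1, 0, 1, 1])
assert t[(0, 2)] == [0]   # parties 0 and 2 both hold 1
assert len(t) == 6        # C(4, 2) pairwise chunks
```

Concatenating all the chunks reproduces the whole transcript, which is what lets an evaluator holding all the partial function evaluations recover f(x).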
However, we're trying to construct an FE combiner unconditionally, so we don't want to assume the existence of OT. We can get around this by pre-processing the OTs: that is, we work in the correlated randomness model, where a party's input now consists of its input, its randomness, and correlated randomness shared with each of the other parties. This seems to raise another problem, which is that a party's input, randomness, and correlated randomness now depend on all parties' inputs. In particular, here is the input and correlated randomness for party one. Observe that the string R12 depends on parties one and two, because it's the correlated randomness between parties one and two; R1n depends on parties one and n; so the whole string together depends on all parties. It turns out, though, that we can get around this by considering the dependence on parts of a party's input instead of the entire input. Another challenge is that we need to handle many ciphertexts and function keys. So far in this talk, I've only been looking at a single function key and a single ciphertext, but FE needs to support many ciphertexts and function keys. Moreover, function keys are for deterministic functions, while MPC protocols are randomized. The way we get around this is by using PRFs to generate the randomness and correlated randomness that we then use to run the MPC protocol. We assign a PRF key to each party to generate that party's randomness, and shared PRF keys between pairs of parties to generate the correlated randomness between them, and we give each function key a tag that we use as input to the PRF. To show this pictorially for one of the nested candidates, in this case, the nesting of FE1 and FE2: the encryption algorithm encrypts x1 and x2 along with PRF keys K1 and K2 and the shared PRF key K12.
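The PRF-with-tags mechanism can be sketched as follows. HMAC-SHA256 stands in for the PRF (any PRF works; this choice is mine, not the paper's), and the key names K1, K2, K12 and the tag value are hypothetical. The point is that a shared key plus a per-function-key tag gives both parties identical correlated randomness, and distinct tags give independent randomness across function keys.

```python
import hashlib
import hmac

def prf(key, tag):
    """PRF stand-in (HMAC-SHA256): per-party keys K_i and pairwise shared
    keys K_ij, evaluated on the tag embedded in a function key."""
    return hmac.new(key, tag, hashlib.sha256).digest()

k1, k2, k12 = b"K1", b"K2", b"K12"   # hypothetical keys, for illustration
tag = b"function-key-tag-42"

r1  = prf(k1, tag)    # party 1's randomness for this function key
r2  = prf(k2, tag)    # party 2's randomness for this function key
r12 = prf(k12, tag)   # seed for correlated randomness between 1 and 2

# Both parties derive the same correlated randomness from the shared key,
# while a different function key (different tag) yields fresh randomness.
assert prf(k12, tag) == r12
assert prf(k12, b"another-tag") != r12
```

This is what lets a deterministic function key run one fresh execution of the randomized MPC protocol per tag.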
The function key for f12 has a tag embedded in it, and it is for the function that does the following. It first uses K1 and the tag to generate randomness for party one, and K2 and the tag to generate randomness for party two. Then it uses K12 and the tag to compute randomness that is used to generate the correlated randomness, and finally it evaluates f12 on the inputs and partial correlated randomness of these two parties to generate some bits of an MPC protocol transcript. In our actual construction, we need 3-nestings; in this talk, for simplicity, I only considered 2-nestings. So, to quickly summarize: so far I took a top-down approach, so now let's go bottom-up. We start with the MPC protocols of GS18 and GIS18 and modify them to obtain input-local MPC. That was the first step. Then, using input-local MPC, we build a combiner-friendly homomorphic secret sharing scheme, and using this, we obtain our unconditional FE combiner. Finally, using the results of AJS and ABJMS, this immediately implies an unconditional universal FE construction. Thanks.