Hi, my name is Alexis Korb, and this talk is on amplifying the security of functional encryption unconditionally. This is joint work with Aayush Jain, Nathan Manohar, and Amit Sahai. Thanks for watching, and I hope you enjoy the talk. I'll start by defining functional encryption (FE). Here I define secret-key FE, but our work also extends to public-key FE. In a functional encryption scheme, an authority wishes to allow users to learn functions of some encrypted input and no other information about the input. More formally, an authority with a master secret key can generate function keys for specific functions, which it can then hand out to different users. Then, using the master secret key, the authority can encrypt some message M. Correctness holds if a user with a function key for a function F and a ciphertext for a message M can compute F(M). Note that different users with different function keys should each be able to compute their corresponding function on the encrypted input. Intuitively, for security, we want it to be the case that the user learns only F(M) and nothing else. We can formalize this by considering an adversary that is allowed to request function keys for functions of its choice along with encryptions of messages of its choice. At some point, the adversary outputs two messages M0 and M1 and receives an encryption of one of the two messages. Security holds if the adversary cannot distinguish between the case where it gets an encryption of M0 and the case where it gets an encryption of M1. We also require that every function for which the adversary requests a function key evaluates to the same value on these two messages. Otherwise, by the correctness of the functional encryption scheme, the adversary could trivially distinguish between an encryption of M0 and an encryption of M1 by simply computing the function on the encrypted message it received and comparing the result to the value of the function on M0 or M1.
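As an illustrative aside (not from the talk itself), the secret-key FE syntax and correctness condition can be pinned down with a deliberately insecure toy sketch. All names here are invented for illustration; the function key simply bundles the master key with f, which gives correctness but obviously none of the security a real FE scheme must provide.

```python
import secrets

def setup():
    # Master secret key: here just a random pad (no real security intended).
    return secrets.token_bytes(16)

def keygen(msk, f):
    # Toy function key for f: bundles the master key with f. A real FE
    # scheme must instead ensure the key reveals nothing beyond f(m).
    return (msk, f)

def enc(msk, m):
    # "Encrypt" an integer message m < 256 by XORing with one pad byte.
    return bytes([m ^ msk[0]])

def dec(fk, ct):
    # Recover m, then output only f(m), matching the correctness condition.
    msk, f = fk
    m = ct[0] ^ msk[0]
    return f(m)

msk = setup()
fk_square = keygen(msk, lambda m: m * m)
fk_parity = keygen(msk, lambda m: m % 2)
ct = enc(msk, 7)
assert dec(fk_square, ct) == 49   # a key for f yields exactly f(m)
assert dec(fk_parity, ct) == 1    # different users compute different functions
```

The point of the sketch is only the interface: one ciphertext, many function keys, each revealing one function of the encrypted input.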
This brings us to FE amplification. Amplification is where you take a weakly secure primitive and use it as a building block to construct a fully secure primitive of the same type. So what does it mean for an FE scheme to be weakly secure? Returning to our definition of security, we will define a p-secure FE scheme to be one in which the distinguishing advantage of the adversary is at most p. As an example, the standard notion of FE security would require p to be some negligible function of the security parameter, and a completely insecure scheme would have p equal to 1. But in the general case, p can be any value between 0 and 1. Note that our notion of security weakens as p increases. Returning to FE amplification, our goal is to reduce the distinguishing advantage of the adversary. So, for example, we might want to take an FE scheme which is secure with probability only one half and use it to build an FE scheme which is fully secure. Now, apart from being a fundamental question in its own right, this is especially useful for FE, since we do not currently know how to build the most general version of FE for all functions from any standard assumptions. So amplification results mean that if we can show how to construct even a weakly secure FE scheme from standard assumptions, then this would imply the existence of a fully secure FE scheme from standard assumptions. And constructing such a weakly secure FE scheme may be a lot easier than constructing a fully secure one. Finally, we note that in amplification, unlike in many other areas of cryptography, the results can be unconditional, in the sense that the security of the fully secure FE scheme depends only upon the weak security of the weaker FE scheme. So what has previously been done in FE amplification?
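To make the notion of p-security concrete, here is a made-up one-bit example (not from the talk): an "encryption" that leaks the message bit half the time and outputs a fresh coin otherwise. The obvious distinguisher then has advantage exactly one half, which the sketch below computes by enumerating the encryption randomness with exact arithmetic.

```python
from fractions import Fraction

# Toy illustration: encrypt bit m with randomness r in {0,1,2,3}.
# r in {0,1}: leak m. r in {2,3}: output the fresh coin r - 2.
def enc(m, r):
    return m if r < 2 else r - 2

# Distinguisher: simply output the ciphertext bit.
def adversary(ct):
    return ct

# Pr[adversary outputs 1 | challenge encrypts m], averaged over r.
p1 = Fraction(sum(adversary(enc(1, r)) for r in range(4)), 4)
p0 = Fraction(sum(adversary(enc(0, r)) for r in range(4)), 4)

# Distinguishing advantage is exactly 1/2: a (1/2)-secure scheme,
# the kind of weakly secure starting point amplification works from.
assert p1 - p0 == Fraction(1, 2)
```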
In AJS18 and AJLMS19, they show that you can amplify FE from (1 − 1/poly(λ))-security, meaning that the adversary can almost always break the scheme, to full security, assuming subexponentially secure LWE. And though I haven't defined either compactness or sublinearity, for those who know, this transformation does preserve both compactness and sublinearity. Additionally, they have both polynomial and subexponential versions of their theorem. In the polynomial version, we consider adversaries of polynomial size and wish to make the distinguishing advantage negligible. In the subexponential version, we consider adversaries of subexponential size and wish to make the distinguishing advantage subexponentially small. Apart from these two works, we do not know of any other FE amplification results. So, since their work assumes subexponentially secure LWE, this brings up the question of whether we can get FE amplification from weaker assumptions. And as the title of this talk might have indicated, the answer is yes. In our work, we show that you can amplify FE from constant security to full security unconditionally. Our transformation also preserves compactness and, as in the prior work, has both polynomial and subexponential versions. We note that the prior work actually allows amplification from slightly weaker FE, whereas our work requires that the weak FE be at least constant-secure. However, our transformation has the advantage that it is unconditional. We achieve our amplification in two steps. First, we show that any constant-secure FE scheme can be transformed into an FE scheme secure with an arbitrarily small constant. Then, we show that from a small-enough-constant-secure FE scheme, say with constant less than 1/6, we can go to full security. The reason we break this into two steps is that we actually use two different constructions for the two transformations. And the reason we do that is that the parameters are actually quite sensitive.
And neither construction by itself was sufficient to provide the entire amplification all at once. For the first transformation, we nest our FE scheme, and I will describe later what nesting means in the context of FE. But to prove security, we use a new nesting technique for hardcore measures. This nesting technique also allows us to prove that simple nesting of public-key encryption, where you encrypt the encryption of a message, provides the amplification we would expect. That is, if the original PKE scheme was broken with probability epsilon, then the amplified public-key encryption scheme, formed by encrypting an encryption of a message, would be broken with probability roughly epsilon squared. Now, though we do already have amplification results for PKE, prior to this work, we did not know how to prove such amplification for simply nested PKE. In fact, our new nesting technique can apply to other simply nested primitives as well. The second transformation has a much more complicated construction. But as a couple of high-level highlights, we use parallel repetition, and we also create and use a new form of secret sharing, which we call set homomorphic secret sharing. We had to use this new form of secret sharing since, in our case, the parameters were actually quite delicate, and other forms of secret sharing were insufficient to provide the parameters we needed for amplification. Unfortunately, in this talk, I will not have time to go into more detail about the second transformation or about set homomorphic secret sharing. Instead, I will focus the remainder of this talk on our first transformation. One of the nice things about our first transformation, though, is that it uses a new technique that I will be able to explain reasonably fully in the time remaining. And this technique also conveys a few of the important insights that are used in our second, more complex transformation.
So for our first transformation, we amplify by nesting our FE scheme. How do we nest an FE scheme? We'll first start off with a normal FE scheme. The ciphertext is the usual encryption of the message, and the function key is the usual function key. In this diagram, the key labeled with the function f indicates that the yellow function key is for the function f. Now we take an independent FE scheme and layer it on top of the original scheme. So the encryption is now the encryption under first the yellow FE scheme and then the blue FE scheme. For the blue function key, we create a function key for the function that decrypts its input using a hardwired yellow function key for the function f. Now, for correctness of our nested scheme, recall that we want it to be the case that if we decrypt the blue ciphertext with the blue function key, we should get f(m). So why does this work? Well, if we decrypt the blue ciphertext with the blue function key, then we get the blue function evaluated on the yellow ciphertext. But the blue function is decryption of the input with the yellow function key. So then we are decrypting the yellow ciphertext with the yellow function key, which gives us f(m). And so we satisfy correctness. As one final note, you can also extend this to nest more than two layers of FE. Now, nested FE is a special case of the more general idea of nested primitives. The intuition here is that if at least one layer is secure, then the whole thing should be secure. So in order to get to the message inside both the blue and the yellow encryptions shown here, you'd expect that you'd have to break through both the blue and the yellow encryptions. So again, if each layer is broken with probability epsilon, you'd expect that both layers would be broken with probability roughly epsilon squared. Intuitively, this makes a lot of sense, but proving it formally is actually quite difficult. And this is what we will show next.
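The nesting construction just described can be sketched as a toy, using an insecure XOR "FE" for both layers purely to exhibit the correctness argument (all names are illustrative; no security is claimed for the toy layers). The blue function key hardwires the yellow function key and decrypts its input with it, exactly as in the diagram.

```python
import secrets

def keygen(msk, f):
    # Toy function key: bundles the master key with f (illustrative only).
    return (msk, f)

def enc(msk, m: bytes) -> bytes:
    # Toy "encryption": XOR the message with the key pad.
    return bytes(a ^ b for a, b in zip(m, msk))

def dec(fk, ct: bytes):
    # Recover the plaintext, then output only f of it.
    msk, f = fk
    return f(bytes(a ^ b for a, b in zip(ct, msk)))

msk_yellow = secrets.token_bytes(16)
msk_blue = secrets.token_bytes(16)

f = lambda m: m.upper()       # the function users should learn

# Yellow (inner) function key for f, as in a single-layer scheme.
fk_yellow = keygen(msk_yellow, f)

# Blue (outer) function key: decrypt the input with the hardwired yellow key.
fk_blue = keygen(msk_blue, lambda x: dec(fk_yellow, x))

# Nested ciphertext: encrypt under yellow first, then under blue.
ct = enc(msk_blue, enc(msk_yellow, b"hello"))

# Correctness: blue decryption applies the blue function to the yellow
# ciphertext, i.e. yellow decryption, which yields f(m).
assert dec(fk_blue, ct) == b"HELLO"
```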
As a last note, for the remainder of this talk, instead of considering nested FE, I will consider nested public-key encryption as a simpler example. But the techniques I show also apply to nested FE. Just as a reminder, in nested public-key encryption, our ciphertext is formed by first encrypting under one public key and then encrypting this encryption under a second public key. The secret key is simply the two individual secret keys. Security is the standard notion that an encryption of a message m should be indistinguishable from an encryption of zero. To prove security, we will rely on hardcore measures, and there is actually already a long line of work on using hardcore measures for various types of amplification. For those who are not very familiar with measures, a measure is basically just a generalized notion of a distribution, and for the remainder of this talk, if you want to just think of a measure as a distribution, the talk will still be very understandable. So let us now consider a weakly secure public-key encryption scheme. This means that an adversary can distinguish between an encryption of m and an encryption of zero with some probability epsilon. Recall that encryption is a probabilistic process. So epsilon weak security could potentially mean that no matter what randomness you use to encrypt either the message or zero, the adversary always has at least an epsilon probability over its own randomness of breaking the encryption. But the hardcore measure theorems show that this is not the case. It turns out that there is actually a hardcore measure on the randomness such that, if you encrypt using randomness sampled from the hardcore measure, then all slightly smaller adversaries have very little distinguishing advantage. That is, if you encrypt either the message m or zero with the hardcore randomness, then the adversary has a very hard time distinguishing between the two.
In fact, the density of the hardcore measures is directly related to the distinguishing advantage of the adversary on the original primitive. So if the original adversary could distinguish with probability epsilon, then the density of these hardcore measures is 1 − epsilon. One last note: these hardcore measures depend on the input to the encryption. So consider the case here where you encrypt either m or zero. Then you have some hardcore measures for this process. Now suppose that, instead of encrypting m, you encrypted some other message m′. Then you might have completely different hardcore measures. So it might look something like this: notice that the hardcore measures are now different than in the previous case. In summary, you should just remember the following: when you sample from the hardcore measures, you expect to have strong security, meaning that the encryption of m and the encryption of zero are strongly indistinguishable. Okay, so now let's go back to our nested scheme. Here, we will assume that each layer of encryption is epsilon-secure, meaning again that the distinguishing advantage of the adversary is at most epsilon. Now, in the usual case, we sample the randomness for the blue encryption from uniform randomness. But sampling from uniform randomness is equivalent to sampling from the hardcore measure of the blue encryption with probability proportional to its density, and sampling from the complement of this measure with probability proportional to its density. So we can think of sampling from uniform randomness as sampling from the hardcore measure of the blue encryption with probability 1 − epsilon and sampling from its complement with probability epsilon. Now, when we sample from the hardcore measure of the blue encryption, we expect that our blue encryption will be strongly secure. So we expect that we should be able to swap out our blue encryption for an encryption of zero. And in fact, this turns out to be the case.
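The decomposition just described can be checked with exact arithmetic on a small made-up example: treat a measure as a map from the randomness space to [0, 1], its density as the average of its values, and sampling from it as normalizing those values into a distribution. The specific measure below is invented for illustration, not taken from the paper.

```python
from fractions import Fraction

X = range(10)
# A toy "hardcore" measure H with density 1 - eps = 4/5.
H = {x: Fraction(1) if x < 6 else Fraction(1, 2) for x in X}
density = sum(H.values()) / len(H)     # = 4/5, so eps = 1/5
eps = 1 - density

for x in X:
    p_hardcore = H[x] / sum(H.values())            # sample from H
    comp = {y: 1 - H[y] for y in X}                # complement measure 1 - H
    p_comp = comp[x] / sum(comp.values())          # sample from 1 - H
    # Mixing the two samplers with weights (1 - eps, eps) -- i.e. with
    # probability proportional to each measure's density -- recovers
    # exactly the uniform distribution over the randomness space.
    assert (1 - eps) * p_hardcore + eps * p_comp == Fraction(1, len(H))
```

This is the identity behind the case analysis that follows: uniform randomness for a layer splits into a "hardcore" branch of weight 1 − epsilon and a "complement" branch of weight epsilon.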
The hardcore theorem states that the outer blue encryption in this case is strongly indistinguishable from an encryption of zero. And since we have replaced the blue encryption with an encryption of zero, we now have security, as we have lost all information about the yellow encryption and the message. Now consider the bottom case. Since we do not sample from the hardcore measure of the blue encryption, we want to instead rely on the security of the yellow encryption. Again, in the normal case, the yellow encryption is computed using uniform randomness. And similarly, this is equivalent to sampling from the hardcore measure of the yellow encryption with probability 1 − epsilon and sampling from its complement with probability epsilon. Now, in the case when we sample from neither hardcore measure, we're just going to give up and call the scheme insecure. This happens with probability epsilon squared. In the middle case, when we sample from the hardcore measure of the yellow encryption, we again expect strong security for the yellow encryption. So we expect that we should be able to replace the yellow encryption with an encryption of zero. In fact, we want something that looks like this: we want to show that you can swap out the yellow encryption with an encryption of zero, which would give us security because the encryption of zero would completely wipe out all information about the message m. So this is what we want, and now let's focus on how we can achieve it. Okay, so we know that the encryption of m is strongly indistinguishable from the encryption of zero when the randomness for these encryptions is drawn from the hardcore measures. And now we want to show that the two cases from before are also strongly indistinguishable. So how might this reduction work? We will first receive a mystery yellow encryption, which is either an encryption of zero or an encryption of m, where the randomness is drawn from the hardcore measures.
Then, to finish the reduction, we will just sample randomness from the blue measure and use it to compute the blue encryption. And now it looks like we're done: the two cases we end up with are the two cases we wanted to show are strongly indistinguishable. So, great. What's the problem here? There are actually two problems. First, the blue measure might not be efficiently sampleable or computable. Recall that the blue measure is the complement of the hardcore measure of the blue encryption. In fact, we only know that such a measure exists. So we might not be able to efficiently sample from it, which means that this reduction might also not be efficient. Secondly, it turns out that the hardcore measure of the blue encryption depends on what we're encrypting. And so we might have two different blue measures, depending on whether we received an encryption of zero or an encryption of m. To clarify this point, recall that the hardcore measures depend on the input to the encryption, and previously we noted that if you encrypted m′ instead of m, then your hardcore measures might change. So, returning to our diagram, we see that the blue hardcore measure may depend on which yellow encryption we received, and the blue measure in the case when we have a yellow encryption of m may be different from the blue measure in the case where we have a yellow encryption of zero. So now it's unclear how we should carry out this reduction. We need to sample randomness for this blue encryption, but we don't know which measure to sample from, and both measures might not be efficient to sample from anyway. To solve the first problem, we will efficiently simulate the blue measures. We first observe that the blue complement hardcore measure actually has high density. And by a theorem from TTV09, it turns out that all high-density measures can be efficiently simulated.
So consider a function f that takes as input either the encryption of m or the encryption of zero, computes the corresponding blue measure, and then outputs a sample from the blue measure. Then, since the output of f has high density, there exists an efficient simulator that, given the yellow encryption, can output a sample from a distribution indistinguishable from the corresponding blue complement hardcore measure. And so now we have solved our first problem. Instead of sampling from the blue measure, we will sample from the simulated measure, which should be indistinguishable. As a final note, instead of using TTV09 directly, we actually use a theorem from Skorski15, which is a leakage simulation variant of the theorem from TTV09. So now we want to fix problem two: we want the simulator to be independent of the yellow encryption. How can we do this? The key observation is that the efficiency of the simulator depends only on the output of f. So it doesn't matter if we increase the running time of f. We just need to somehow get f to take in some input that does not depend too heavily on the yellow encryption, so that our simulator also doesn't need direct access to the yellow encryption. To do this, we will use a commitment to the hidden information. Instead of receiving the yellow encryption directly, both f and the simulator will instead receive a commitment to the yellow encryption. Now, f does need to know the yellow encryption in order to compute the blue measures. So we will modify f so that it brute-force breaks open the commitment to retrieve the yellow encryption. This way, f can still compute the blue measures. Now, one might think that giving both f and the simulator a commitment instead of the actual values will harm the efficiency of the simulator. But in fact, the simulator is just as efficient as before. This is because the efficiency of the simulator depends only on the output of f, which is unchanged.
So now we have the problem that the simulator needs to know which yellow encryption to commit to. But this is a commitment, and a commitment should hide the information inside it. So, in fact, we can swap out the commitment to the yellow encryption with a commitment to zero. And if the commitment is strong enough, then neither the adversary nor the simulator should be able to tell the difference. But this means that we can now simulate either of the blue measures by simply running the simulator on a commitment to zero. So now our simulator is independent of which yellow encryption we have, and this solves the second problem. Returning to our original reduction, we see that we had two problems. First, to compute the blue encryption, we needed to sample from blue measures that might not be efficiently sampleable. And secondly, which measure we needed to sample from depended on which yellow encryption we received. To fix these problems, we can now use our simulator. Instead of sampling the randomness for the blue encryption from one of two potentially hard-to-compute measures, we can instead sample the randomness by running our simulator on a commitment to zero. And by what we just showed, the simulated measure is computationally indistinguishable from each of the blue complement hardcore measures. And so now we have the reduction we want, and we can show that these two cases are computationally indistinguishable. Going back to the original diagram, we see that with probability 1 − epsilon, we sample from the hardcore measure of the blue encryption. And by the hardcore lemma, this means that we can swap out the blue encryption with an encryption of zero, which is secure since it completely erases all information about the encrypted message.
Then, with probability epsilon times (1 − epsilon), we sample from the hardcore measure of the yellow encryption, which, by the reduction we just showed, means that we can swap out the yellow encryption with an encryption of zero. This is also secure, since we have again erased all information about the encrypted message. And finally, with probability epsilon squared, we sample from neither hardcore measure, in which case we give up and say the scheme is insecure. And so, indeed, we get the intuitive result we expected: if one layer is broken with probability epsilon, then both layers are broken with probability roughly epsilon squared. And so we have achieved amplification of nested primitives. In summary, we show that you can amplify FE from constant security to full security unconditionally, and this transformation preserves compactness. We also show how to amplify nested primitives with our new technique. And finally, we introduce our new set homomorphic secret sharing scheme, which may be of independent interest; this is used in our second, more complicated step of amplification. I encourage you to read the full version of our paper for more details. Thank you.
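As a back-of-the-envelope sketch (ignoring the additive lower-order losses of the actual reductions), the case analysis and the two-step amplification can be sanity-checked numerically. The per-layer advantage 1/3 below is an arbitrary illustrative value, and the 1/6 threshold is the small constant mentioned in the talk.

```python
from fractions import Fraction

# The three cases of the analysis, for an illustrative per-layer
# advantage eps = 1/3.
eps = Fraction(1, 3)
p_blue   = 1 - eps           # blue randomness lands in its hardcore measure
p_yellow = eps * (1 - eps)   # blue misses, but yellow hardcore is hit
p_giveup = eps * eps         # both layers miss: the only insecure case
assert p_blue + p_yellow + p_giveup == 1

# Ignoring lower-order loss terms, one nesting step squares the advantage,
# so a constant number of nestings drives any constant below the
# small-constant threshold needed by the second transformation.
def nestings_needed(adv, threshold):
    steps = 0
    while adv >= threshold:
        adv = adv * adv      # one layer of nesting roughly squares adv
        steps += 1
    return steps, adv

assert nestings_needed(0.5, 1/6) == (2, 0.0625)   # 0.5 -> 0.25 -> 0.0625
```

So, under this idealized accounting, a (1/2)-secure scheme needs only two nestings before the second transformation can take over.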