Hi everyone. This is a talk for PKC 2020. I'll be talking about a new paradigm for public-key functional encryption for degree-2 polynomials. First, let me give you some context with an example: filtering encrypted emails. Suppose Alice wants to send a confidential email to Bob. Bob generates a public key that Alice will use to encrypt the email, and an associated secret key that allows Bob to decrypt and recover the content of the email. But Alice doesn't send the encrypted email directly to Bob; rather, she sends it to a server. And Bob would like the server to do more than just store the encrypted email, without compromising his privacy. So Bob downgrades his secret key to produce restricted secret keys that allow the server to learn some bit of information about the encrypted emails without recovering the entire email: for example, a key that tells whether an encrypted email is spam, or urgent. Functional encryption allows exactly that. Bob's key is called a master secret key, and every restricted key is associated with a function and is called a functional secret key. A functional secret key associated with a function f allows the server to recover, from an encryption of m, the value f(m), and nothing else. It's possible to generate functional secret keys for different functions in a specified class of functions, and each of these keys will be different and will yield different partial information. Intuitively, the security notion guarantees that an adversary who corrupts arbitrarily many functional secret keys learns nothing more than what each of these keys individually allows. In particular, it's not possible to combine these keys to obtain extra information. The question that comes up is: can we build practical FE from sound assumptions, even if only for a restricted class of functions?
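To make the interface concrete, here is a toy, entirely non-cryptographic sketch of the four FE algorithms from the spam-filtering example. All names are hypothetical and the "ciphertext" does not hide anything; only the shape of the API is shown.

```python
# Toy, NON-cryptographic sketch of the functional-encryption interface.
# A real scheme hides m inside the ciphertext; here we only show the API.

def setup():
    # returns (public key, master secret key); placeholders in this sketch
    return "pk", "msk"

def encrypt(pk, m):
    return {"pk": pk, "m": m}          # a real scheme would hide m

def keygen(msk, f):
    return {"f": f}                    # functional secret key tied to f

def decrypt(sk_f, ct):
    # decryption yields f(m) and nothing else in a real scheme
    return sk_f["f"](ct["m"])

pk, msk = setup()
ct = encrypt(pk, "buy cheap meds now")
sk_spam = keygen(msk, lambda email: "spam" if "cheap meds" in email else "ok")
print(decrypt(sk_spam, ct))            # -> spam
```

The server holding `sk_spam` learns only the one bit "spam or not", which is the fine-grained access the talk motivates.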
The first answer was the work of Abdalla et al., which built a functional encryption scheme for inner products. In their case, the message being encrypted is a vector x of some dimension d, and every functional secret key is associated with a function described by a vector y, also of dimension d. Decryption recovers the inner product of x and y. That means you can compute weighted sums on the encrypted data, so you can compute basic statistics. This was the first non-trivial FE from standard assumptions with practical efficiency for non-trivial functions. It gives fine-grained access to the encrypted data, as opposed to all-or-nothing, and it already captures, for instance, NC0 circuits. Later on, Baltico et al. gave the first public-key FE where functional secret keys are associated with degree-2 polynomials and where the ciphertexts remain succinct; that is, their size grows proportionally to the dimension of the vector being encrypted, and not quadratically. That means you can compute more advanced statistics on encrypted data. It's useful in its own right, and it has implications, for instance, for predicate encryption and traitor tracing. In a work with Ryffel et al., we built a concrete implementation of degree-2 FE and used it for private inference on encrypted data; so this thing is practical and it runs. There's also a theoretical reason why we care about degree-2 FE, which is that it's not far from iO, in fact, by a surprising result of Lin and Tessaro. They showed that succinct degree-3 FE implies iO. So the gap is actually smaller than we may have thought at the beginning, although we still don't know how to build degree-3 FE. Even later on, Jain, Lin, Matt, and Sahai showed that it suffices to have degree two and a half. I won't go into the details of what that means; it is beyond the scope of this talk.
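The talk does not spell out the inner-product scheme, but the ElGamal-style blueprint of that line of work is simple enough to sketch. The following is a minimal, insecure toy (absurdly small parameters, brute-force discrete log) that illustrates only the functionality: a key for y lets you learn exactly the weighted sum ⟨x, y⟩ from an encryption of x.

```python
import random

# Toy sketch of the ElGamal-style inner-product FE blueprint (in the spirit
# of Abdalla et al.).  Parameters are far too small to be secure.
p = 104_729            # small prime; we work in Z_p^*, exponents mod p - 1
g = 5

def ipfe_setup(d):
    s = [random.randrange(p - 1) for _ in range(d)]     # master secret key
    pk = [pow(g, si, p) for si in s]                    # pk_i = g^{s_i}
    return pk, s

def ipfe_enc(pk, x):
    r = random.randrange(p - 1)
    ct0 = pow(g, r, p)                                  # g^r
    cts = [pow(g, xi, p) * pow(hi, r, p) % p            # g^{x_i} * pk_i^r
           for xi, hi in zip(x, pk)]
    return ct0, cts

def ipfe_keygen(s, y):
    return sum(yi * si for yi, si in zip(y, s)) % (p - 1)   # sk_y = <s, y>

def ipfe_dec(sk_y, y, ct, bound=100_000):
    ct0, cts = ct
    num = 1
    for yi, ci in zip(y, cts):
        num = num * pow(ci, yi, p) % p
    # prod_i ct_i^{y_i} / ct0^{sk_y} = g^{<x, y>}
    target = num * pow(pow(ct0, sk_y, p), -1, p) % p
    for v in range(bound):             # brute-force the small discrete log
        if pow(g, v, p) == target:
            return v

pk, msk = ipfe_setup(4)
x, y = [3, 1, 4, 1], [10, 20, 30, 40]   # data vector, weight vector
sk_y = ipfe_keygen(msk, y)
print(ipfe_dec(sk_y, y, ipfe_enc(pk, x)))   # -> 210 = <x, y>
```

Note the feature mentioned later in the talk: the result comes out in the exponent, so decryption must brute-force a small discrete log, which restricts these schemes to small outputs.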
Roughly, it's a degree-2 computation on a private input, together with a public input on which you can also compute. This is sufficient for iO, together with a non-standard assumption on PRGs. So what I'm trying to say is that functional encryption for low-degree polynomials is actually really relevant to FE for richer classes of functions; there's a lot of bootstrapping. A better understanding of degree-2 FE will most likely shed light on richer classes of functions. In our work, we build an FE for degree-2 polynomials that has the advantage of being simulation-secure under standard assumptions, whereas prior works were either indistinguishability-based selectively secure or relied on the generic group model. Let me tell you briefly what indistinguishability-based security is. It's a game where the adversary gets a public key, then can query a key-generation oracle to get functional secret keys for functions of her choice. At some point, she sends a pair of messages to a left-right oracle and gets back the encryption of one of the two messages chosen at random, and she has to guess which message was encrypted. Of course, you need to take into account the information that's being leaked by the functional secret keys: the adversary is supposed to learn f(m_b) for every f queried to the key-generation oracle. So to avoid trivial wins, it must be the case that f(m_0) = f(m_1) for all queried functions f. This is the natural adaptation of IND-CPA security, as we know it for public-key encryption, to the setting of functional encryption. People usually settle for indistinguishability-based security because the stronger notion of simulation security is known to be impossible for some classes of functions. But that's not the case for quadratic functions or inner products. And in fact, in some cases, simulation security is more meaningful.
It's closer to the intuition that a functional secret key for the function f reveals nothing else than f(m). Let me explain what simulation security is, in a simplified setting. The adversary chooses a message and a bunch of functions. Then it receives either, from a real experiment, a public key, an encryption of m, and a bunch of functional secret keys, or a fake public key, a fake ciphertext, and a bunch of fake functional secret keys that are simulated by an ideal experiment that only sees the functions f and the outputs f(m). In particular, it doesn't see the message m. These two views are computationally indistinguishable for the adversary. That basically means the ciphertext conveys no information about m beyond f(m). This is stronger than indistinguishability-based security, and in some settings it's more meaningful; or rather, indistinguishability-based security is meaningless. For example, suppose you want to encrypt a PRG seed and you generate a functional secret key for that PRG. Then simulation security tells you that you reveal nothing beyond the evaluation of the PRG, which is pseudorandom. However, indistinguishability-based security gives you pretty much nothing, no security at all. Simulation security is harder to achieve, but also easier to use, especially when using FE as a building block for larger classes of functions. I haven't mentioned it so far, but there are adaptive and selective variants of these security notions. In the selective variant, the adversary is artificially restricted to choose the message and the functions beforehand, before receiving anything, public key or secret keys. Adaptive means it has no such restriction: it can choose the messages based on the keys it has previously seen. Of course, adaptive security is more desirable; it's the natural notion. Selective security is still meaningful, and it's usually a stepping stone towards adaptive security.
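In symbols, the simplified selective, single-message definition just described asks that the following two distributions be computationally indistinguishable (my notation, matching the informal description above):

```latex
\underbrace{\big(\mathsf{pk},\ \mathsf{Enc}(\mathsf{pk}, m),\ \{\mathsf{sk}_{f_i}\}_i\big)}_{\text{real experiment}}
\;\approx_c\;
\underbrace{\mathsf{Sim}\big(1^{\lambda},\ \{(f_i,\ f_i(m))\}_i\big)}_{\text{ideal experiment}}
```

The point is that the simulator's input contains only the pairs (f_i, f_i(m)), never m itself, which captures "the ciphertext reveals nothing beyond f(m)".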
Also, you can get adaptive security from selective security generically, with a guessing argument, at the price of an exponential security loss: the gap between selective and adaptive is quantitative, whereas the gap between indistinguishability-based and simulation-based security is more qualitative, and the resulting notions are actually incomparable. Of course, we would like to have a simulation-based, adaptively secure scheme, which is an interesting open problem. Here we go one step forward by providing the first simulation-secure scheme from standard assumptions; both of our schemes use standard assumptions on pairings. Another advantage is that we essentially get CCA security for free. CCA stands for chosen-ciphertext attack; it is the de facto security notion for public-key encryption. When you consider general-purpose FE, you don't really care, because CCA security comes for free. But for restricted classes of functions, it makes a difference. Importantly, for FE, the typical transformations such as Fujisaki-Okamoto don't work, because they require the decryption algorithm to recover the entire message, which is not the case in FE; so they're not useful here. The CHK transform also doesn't help: it helps for ABE, where you actually get CCA security for free, but not for FE for restricted classes. You can always use the Naor-Yung transform, but it requires the extra assumption of NIZK. You can do NIZK from pairings, but it's still relatively heavy, because you need a NIZK for a potentially complicated NP language. What we do instead is use a very efficient quasi-adaptive NIZK for a specific language, a linear language, from Kiltz-Wee, and that's just two group elements; so that's really little computational overhead. To be honest, this is not technically challenging to get, but that's not the point. The point is to show the advantage of having a simpler scheme: it's more efficient and you get features essentially for free.
But the most important technical novelty and contribution here is the proof and the new techniques that we'll see. I'll first give an overview of the scheme, then give a simpler scheme that's only private-key, and finally show how to upgrade it to the public-key setting. The general overview is that to encrypt a vector x of dimension d, we'll be using what's called a bilinear map, or pairing (that's the same thing), which maps group elements from a group G1 together with elements from a group G2 into a target group Gt. These groups are cyclic, generated respectively by g1 and g2, and for any exponents a and b, pairing g1^a with g2^b gives gt^(ab). That means you can compute one multiplication in the exponent, and this is going to be particularly useful for us because we want to compute degree-2 polynomials in the exponent. So, equipped with this bilinear map, we'll give an encryption of x in G1, a bunch of group elements, and another encryption in G2. With the pairing we can combine these group elements, depending on a particular function F, to get an encryption in the target group of the message we care about, x^T F x, which is the evaluation of the degree-2 function on x, under some particular public key pk_F that depends on F; so there is some key homomorphism going on here. The functional secret key will be the secret key associated with that particular public key. This secret key is tied to F, so that it is only useful for an encryption under pk_F, where it reveals x^T F x; it is not useful otherwise. That is what we want to build, and in fact prior works, like Baltico et al., use that blueprint. The difference with our work is that we are going to use different encryptions in G1 and G2. In their work, they use fairly structured encryptions in G1 and G2; for those who know, they implicitly use function-hiding inner-product FE, you can think of it this way.
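To see what "one multiplication in the exponent" buys us, here is a toy model where a group element g^a is represented simply by its exponent a mod a small prime q. This mimics only the algebra of a pairing, with no cryptographic hardness whatsoever, and all names are mine:

```python
q = 101  # toy prime group order

def pair(a, b):
    # models e([a]_1, [b]_2) = [a*b]_T: one multiplication in the exponent
    return (a * b) % q

def quad_eval(x, F):
    # sum_{i,j} F[i][j] * e([x_i]_1, [x_j]_2) = [x^T F x]_T:
    # pairings let us evaluate a degree-2 polynomial in the exponent
    d = len(x)
    return sum(F[i][j] * pair(x[i], x[j]) for i in range(d)
               for j in range(d)) % q

x = [2, 3]
F = [[1, 0],
     [4, 5]]
print(quad_eval(x, F))  # x^T F x = 1*4 + 0*6 + 4*6 + 5*9 = 73
```

One pairing per monomial x_i·x_j is exactly why the target-group encoding carries the degree-2 value x^T F x while the ciphertext itself stays linear in d.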
As a result, their encryption in Gt is fairly simple; you can think of it as ElGamal in Gt, actually. In our work it's kind of the opposite: we take very simple encryptions in G1 and G2, and consequently get a simpler scheme and shorter ciphertexts. To compensate, we have to use a slightly more complicated, more structured encryption in Gt. But that is okay: the point here is to have short ciphertexts, and the ciphertext only contains elements of G1 and G2; it doesn't contain elements of Gt, which are only computed during decryption. As a drawback, we have a more complicated functional secret key, but that is really not the thing we're trying to optimize. We're trying to optimize the ciphertext size, because functional secret keys can be reused many times. All right, so what do I mean by simple? I mean almost as simple as ElGamal, which is the natural option that comes to mind when talking about cyclic groups; slightly more structured than ElGamal, but not much more. And just to confuse you a little bit, I'll switch to additive notation. Okay, so let's go into more detail. First I'm going to give you a private-key FE. This is not really what we aim for, but it's a good stepping stone. How does it look? As I said, the first thing that comes to mind is using ElGamal, so we'll do that: we ElGamal-encrypt x. What is the public key? It's just a vector of group elements, and the secret key is the vector of exponents. So sample a random r, and that gives your encryption in G1. Then do the same thing in G2, with a different public key b and different randomness s. Now, if you just pair these two ciphertext vectors, what do you get? You get x^T F x, the message you care about, plus some extra terms. I could have written them out, but it doesn't really matter.
What matters is that this extra term happens to be computable as an inner product of vectors whose dimension is proportional to d, and not d^2, where one vector depends only on the input x and the randomness r and s. All of this is known by the person who encrypts, so in fact that person can compute this vector, and the fact that this vector is short means the scheme has a chance to be succinct. So now we need a mechanism to recover this extra term in order to decrypt. The inner product is between this vector and another vector of the same dimension, which depends only on the secret keys and the function F; so that one can be computed by the key-generation algorithm, the algorithm that generates functional secret keys. All right, so what is the idea to recover this extra term and decrypt? We'll give an encryption of the first vector under an inner-product FE, a functional encryption that allows the computation of inner products; this will be part of the ciphertext. And the functional secret key associated with the function F will be an inner-product key for the second vector. By the correctness of the inner-product FE scheme, this key and this ciphertext recover the extra term, which exactly gives out x^T F x. That's great, but there's an issue here. Okay, the inner-product FE will hide the first vector; that's great, since we want to hide the randomness in order to argue security of the ElGamal encryptions, and we also want to hide x for that matter. So this vector is hidden, but the second one is actually not: functional encryption doesn't hide the function, unless you use a function-hiding scheme, and such schemes exist; in fact, function-hiding inner-product FE is known from pairings. So that's great: it will hide the second vector, and in particular the secret keys. And of course, it's important to hide the secret keys; otherwise, you would basically break the ElGamal encryption, you would completely destroy security. So, it seems to work. But there's an issue with that.
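For the record, the extra term the speaker leaves implicit can be written out. With additive notation, pairing the two ElGamal-style ciphertexts [x + r·a]_1 and [x + s·b]_2 through F gives (my reconstruction of the algebra; the exact vectors in the paper may be arranged differently):

```latex
[\,x + r\,a\,]_1^{\top}\, F\, [\,x + s\,b\,]_2
\;=\;
\Big[\; x^{\top} F x
\;+\;
\underbrace{r\, a^{\top} F x \;+\; s\, x^{\top} F b \;+\; r s\, a^{\top} F b}_{\text{extra term}}
\;\Big]_T
```

The extra term is exactly the inner product of (r·x, s·x, rs) with (Fᵀa, Fb, aᵀFb), two vectors of dimension 2d+1: the left one depends only on x, r, s (known to the encryptor), the right one only on the secret keys a, b and the function F (known to key generation), which is the split the paragraph above relies on.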
The issue is, essentially, that there's no such thing as public-key function-hiding inner-product FE: it has to be private-key. It's an inherent restriction. Why? Suppose you have a function-hiding inner-product FE. What does that mean? I told you briefly, but it means that if you encrypt a vector x and generate a key for a vector y, you hide both x and y, up to revealing the inner product of x and y, and essentially nothing else. But if it's a public-key scheme, then everybody can compute ciphertexts for any x, so you can recover the inner product of x and y for any x of your choice, and that means you can recover y entirely. There is no function hiding here. So the scheme has to be private-key if we want to achieve any meaningful function hiding. And in fact, this is the reason prior works, such as the works by Lin and by Ananth and Sahai, are private-key: they use function-hiding inner-product FE. So the main technical challenge we solve is: how do we get around this? How do we make the scheme public-key? The idea is that we'll use some function hiding, but not completely: our scheme will be only partially hiding the functions. And this is possible because we only care about generating ciphertexts for vectors that lie in the span of a particular matrix M. That means the public key will automatically reveal Mᵀy, since anyone can encrypt Mw and learn ⟨Mw, y⟩ = ⟨w, Mᵀy⟩. That leakage is unavoidable, but nothing else is revealed beyond it, and the part of y that remains hidden is sufficient to hide the extra terms and to hide the message. That will be sufficient for us to prove security of the overall scheme. And the reason we can afford to only generate inner-product FE encryptions for vectors that lie in some specific span is that we use slightly more structure than ElGamal: we actually use Damgård ElGamal, a tiny bit more structure, but still much less complicated than what was used before.
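The attack behind this impossibility is trivial to spell out. Below, an ideal inner-product functionality stands in for decryption (a stand-in, not any specific scheme): with a public key, anyone can "encrypt" the standard basis vectors and read the key's vector y off coordinate by coordinate.

```python
# Toy illustration of why *public-key* inner-product FE cannot hide the
# function vector y: encrypt each standard basis vector e_i and decrypt.
d = 4
y_secret = [7, 1, 8, 2]   # vector embedded in some functional secret key

def ipfe_functionality(sk_y, x):
    # models the ideal functionality: decryption reveals exactly <x, y>
    return sum(xi * yi for xi, yi in zip(x, sk_y))

sk = y_secret  # stand-in for the key; only its <., y> behaviour is used

recovered = []
for i in range(d):
    e_i = [1 if j == i else 0 for j in range(d)]  # anyone can encrypt e_i
    recovered.append(ipfe_functionality(sk, e_i))  # <e_i, y> = y_i
print(recovered)  # -> [7, 1, 8, 2]: y is fully recovered
```

The partially-hiding workaround described above blocks exactly this: honest ciphertexts are restricted to vectors of the form Mw, so only ⟨Mw, y⟩, i.e. Mᵀy, can ever be extracted this way.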
And that implies that the vectors have a special structure that we can exploit to bypass the impossibility of public-key function hiding. So, we have seen a scheme that is simulation-secure under standard assumptions for degree-2 polynomials, and it's even CCA-secure almost for free. A natural question that comes up is: can we actually get adaptive security? More generally, it would be interesting to study the classes of functions for which FE can be built from standard assumptions, thereby narrowing the gap and understanding exactly what separates us from full-fledged FE. For example, it would be interesting to build FE for degree-2 polynomials for large messages. I didn't mention it, but the prior schemes and our scheme all share the feature that the message is recovered in the exponent of a group element, so you need to brute-force a discrete logarithm, and that restricts the schemes to small messages. Handling large messages would be interesting in its own right, and it's likely to have implications for larger classes of functions. For example, a lattice-based scheme would be a good candidate, although there is a partial negative result there: function-hiding inner-product FE from lattices was shown to be subject to generic attacks, for a large class of schemes. But still, there's a lot to explore in this area. That concludes my talk. Thank you very much for your attention.