So first, let me define functional encryption, or FE. Suppose Alice wants to send some data, M, to the cloud, Bob, to delegate storage and computation on this data. But she still wants some privacy for her data. So she's going to encrypt it in such a way that Bob only learns some particular function F of M, and nothing more. You can think of F as a function that computes descriptive statistics on the encrypted data, or some SQL query, or some such thing. Because Bob only learns this particular F of M, we get some notion of privacy: in particular, Bob doesn't learn the entire message M. So this is what we want to achieve. FE achieves it using a trusted setup that generates a public key, with which Alice can encrypt the message M, and a master secret key that is used by a key generation algorithm to produce different keys for different functions. So SK_F is a key for the particular function F, with which Bob can extract F of M from the encryption of M, as I said, and nothing more. And you can think of another user, Carl, who wants to compute G of M: he will get a different key, SK_G, that allows him to compute G of M. To be a bit more precise, the security we want is resistance to collusion of secret keys. If an adversary gets the public key, but also different secret keys for different functions, the adversary should not learn anything more than what each individual key allows. So here, in this case, it only learns F of M and G of M, and nothing more; it cannot combine keys to get extra information. Formally, this is captured by indistinguishability-based security, where the adversary chooses a pair of messages, M0 and M1, and as long as the adversary only gets secret keys for functions that do not distinguish between these two messages, it should be computationally unable to distinguish an encryption of M0 from an encryption of M1.
It's also worth noting that, usually, the adversary can do everything adaptively: it first gets the public key and then chooses the messages M0 and M1 depending on the public key and the secret keys it gets. But you can also consider a restricted kind of adversary, called selective, because such an adversary must choose M0 and M1 before seeing the public key, so independently of everything. That will be useful later; it is a weaker notion than adaptive security. So now, what do we know how to do for FE? There are general feasibility results for all circuits, but they are based on strong assumptions, namely indistinguishability obfuscation for circuits. On the other hand, there is work that tries to build FE from much weaker, standard assumptions, such as DDH, but for restricted classes of functions. This is the case of ABDP, which builds functional encryption for inner products from DDH: the message is a vector, the function is also a vector of the same dimension, and the function applied to the message gives you their inner product, and nothing more. So you can compute weighted sums on encrypted data. And the ciphertext size is linear: the number of group elements is linear in the dimension n. Following this more bottom-up approach, we try to build, from standard assumptions, FE for slightly richer classes of functions. What we did was build FE for quadratic functions, essentially. The message is now a pair of vectors, x and y, and the function is a bilinear map F, so the function applied to the message gives you x transpose times F times y. The important thing is that the ciphertext size is linear in n plus m. And we do so using standard assumptions on pairing groups.
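To make the inner-product functionality concrete, here is a toy Python sketch in the spirit of the ABDP scheme just mentioned. All names and parameters are mine, the group is tiny and completely insecure, and decryption recovers the inner product in the exponent and then brute-forces a discrete log, which only works for small values:

```python
import random

# Toy parameters, completely insecure, for illustration only.
p, q = 2039, 1019          # p = 2q + 1, so Z_p^* has a subgroup of prime order q
g = 4                      # generator of that order-q subgroup

n = 3
s = [random.randrange(q) for _ in range(n)]     # master secret key
pk = [pow(g, si, p) for si in s]                # public key: h_i = g^{s_i}

def encrypt(x):
    r = random.randrange(q)
    ct0 = pow(g, r, p)
    cts = [pow(pk[i], r, p) * pow(g, x[i], p) % p for i in range(n)]
    return ct0, cts

def keygen(y):                                  # sk_y = <s, y> mod q
    return sum(si * yi for si, yi in zip(s, y)) % q

def decrypt(ct, y, sk_y, bound=1000):
    ct0, cts = ct
    # prod_i ct_i^{y_i} / ct0^{sk_y} = g^{<x, y>}
    num = 1
    for ci, yi in zip(cts, y):
        num = num * pow(ci, yi, p) % p
    target = num * pow(ct0, (-sk_y) % q, p) % p
    for v in range(bound):                      # brute-force discrete log
        if pow(g, v, p) == target:
            return v

x, y = [1, 2, 3], [3, 2, 1]
assert decrypt(encrypt(x), y, keygen(y)) == 10  # <x, y> = 10
```

The key point the sketch illustrates is that sk_y reveals only the linear combination <s, y>, so decryption yields the inner product <x, y> and nothing else about x.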
It's important that the size is linear in n plus m because, as you can see, you can already get quadratic functions from inner-product FE by simply expressing the bilinear map as a huge vector, but that would blow up the size to n times m. So this is really an efficiency improvement. I was saying that there are independent works that also build FE for quadratic functions: the works of Ananth and Sahai, and of Lin. The main difference between these works and ours is that they are private-key and we are public-key. To be fair, they also achieve a slightly stronger notion, where the secret key hides the underlying function, and that is not achievable in the public-key setting; so these are incomparable sets of results. Also, these papers do much more than just building a quadratic FE. We actually have two constructions, so let's look at the assumptions. Ananth-Sahai is based on an ad hoc assumption defined directly in the generic group model, whereas Lin is based on a standard assumption on pairings, SXDH. We built two constructions: one from standard assumptions, and one proven in the generic group model. The latter is asymptotically more efficient, and it also satisfies a stronger notion of security. At the beginning I mentioned the restricted notion of selective security: that is what all of these other constructions achieve, they are only selectively secure, whereas our construction in the generic group model is adaptively secure, which is the security you want in the end. For this talk, I'm mostly going to talk about the construction from standard assumptions, the reason being that proofs in the generic group model tend to be less intuitive. Also, one contribution of our paper is an application to predicate encryption. What is predicate encryption?
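The flattening argument just mentioned, that a quadratic function is an inner product over the tensor product of the two vectors at the cost of n times m size, can be checked concretely (a sketch with made-up values):

```python
n, m = 3, 4
x = [1, 2, 3]
y = [2, 0, 1, 5]
F = [[(i + 2 * j) % 7 for j in range(m)] for i in range(n)]  # arbitrary bilinear map

# the quadratic function: x^T F y
quad = sum(x[i] * F[i][j] * y[j] for i in range(n) for j in range(m))

# the same value as an inner product of two flattened vectors of length n*m
xy_flat = [xi * yj for xi in x for yj in y]          # message becomes x tensor y
F_flat = [F[i][j] for i in range(n) for j in range(m)]
assert len(xy_flat) == n * m                          # size blows up to n*m
assert sum(a * b for a, b in zip(xy_flat, F_flat)) == quad
```

So inner-product FE applied to the tensor product computes x^T F y, but the encrypted message has dimension n times m, which is exactly the inefficiency the talk's construction avoids.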
Predicate encryption is a particular case of functional encryption where the message contains two things, a plaintext and an attribute, which here is, as before, a pair of vectors x and y. The function outputs the plaintext if the bilinear map evaluates to zero on x and y, and nothing else: nothing is revealed about the plaintext if the value is not zero. So this is predicate encryption, fully hiding predicate encryption to be precise. And what we do is build this with ciphertext size, again, linear in n plus m, versus prior works, with which you would obtain n times m. But I'm not going to talk about predicate encryption; I'm only going to talk about functional encryption for the rest of this talk. So here is a rough, high-level view of our construction. We want to build FE for the function that takes a pair of vectors and outputs x transpose times F times y. The idea is to encrypt the pair of vectors x and y by first encrypting x and independently encrypting y, each with a linear-size encryption. Then we want to combine these two encryptions, in a way that depends on F, to obtain an encryption of F(x, y); there is also a public key, which will also depend on F. This combined value should be decryptable only if you know the secret key SK_F; otherwise, nothing should be revealed about F(x, y). The way we do this is using a pairing: a bilinear map that takes group elements from some source group generated by little g. You can pair a group element g to the a with another group element g to the b to obtain, in a target group, e(g, g) to the a times b, so the multiplication happens in the exponent. So you can do one multiplication. We'll use that to combine the two encryptions into an encryption in the target group. That's roughly the idea. Now I'll go into more detail, in three steps.
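Since everything that follows manipulates exponents, here is the bookkeeping convention sketched in Python. This is a stand-in, not a real pairing (those live on elliptic curves); I just track exponents modulo a toy prime to show the "one multiplication in the exponent" property:

```python
q = 1019  # toy prime group order (illustration only)

# represent a source-group element g^a by its exponent a;
# the pairing gives e(g^a, g^b) = e(g, g)^(a*b) in the target group
def pair(a, b):
    return (a * b) % q

a, b = 123, 456
assert pair(a, b) == pair(b, a)                               # symmetric
assert pair((a + 7) % q, b) == (pair(a, b) + pair(7, b)) % q  # bilinear
```

The later sketches reuse this convention: a dot product of exponent vectors stands for pairing component-wise and multiplying the results in the target group.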
First, I'll give you a private-key FE that is a bit simplified and only secure in the generic group model; this is for simplicity. Then I'll show you how to go from private key to public key, but still with a scheme that is too simple to be proven secure from standard assumptions, just to give you some intuition. And finally, I'll briefly show some of the techniques we use to actually get security from standard assumptions. All right, so first let me describe the private-key FE. As I said, we'll use a pairing, and I'm going to introduce some notation. These are all prime-order groups in our case, and I'm going to write bracket a, for any exponent a in Z_p, to denote little g to the a. You can generalize this notation to vectors and so on. The master secret key is just a bunch of random values r_i and s_j, one for each index i and j. That's it. The encryption of a pair of vectors x and y computes a bunch of row vectors, which are x_i concatenated with the randomness r_i from the master secret key, times a random invertible two-by-two matrix W, picked fresh at each encryption. And there is also a bunch of column vectors, which are, as you can read, W inverse multiplied by the column vector (y_j, s_j). That's the whole ciphertext. An important property is that if you pair the i-th row vector with the j-th column vector, what you get in the target group, using the pairing, is x_i y_j plus r_i s_j. And in general, for any bilinear map F over Z_p, by pairing the row vectors with the column vectors appropriately, what you get is F(x, y), which is the useful information we want, plus some blinding factor, F(r, s).
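This row/column cancellation can be checked numerically in the exponent. A toy sketch with my own names, following the exponent-bookkeeping convention (in the real scheme these products happen inside pairing groups, and pow(x, -1, q) needs Python 3.8+):

```python
import random

q = 7919  # toy prime; stands in for the exponent space of a pairing group

def inv2(W):
    # inverse of a 2x2 matrix over Z_q
    (a, b), (c, d) = W
    di = pow((a * d - b * c) % q, -1, q)
    return [[d * di % q, -b * di % q], [-c * di % q, a * di % q]]

n, m = 3, 2
x = [random.randrange(q) for _ in range(n)]
y = [random.randrange(q) for _ in range(m)]
r = [random.randrange(q) for _ in range(n)]   # msk randomness r_i
s = [random.randrange(q) for _ in range(m)]   # msk randomness s_j

while True:  # fresh random invertible W for this ciphertext
    W = [[random.randrange(q) for _ in range(2)] for _ in range(2)]
    if (W[0][0] * W[1][1] - W[0][1] * W[1][0]) % q:
        break
Wi = inv2(W)

# ciphertext: rows (x_i, r_i) * W  and  columns W^{-1} * (y_j, s_j)^T
rows = [[(x[i] * W[0][k] + r[i] * W[1][k]) % q for k in range(2)] for i in range(n)]
cols = [[(Wi[k][0] * y[j] + Wi[k][1] * s[j]) % q for k in range(2)] for j in range(m)]

def dot(u, v):  # what pairing a row with a column computes, in the target group
    return sum(a * b for a, b in zip(u, v)) % q

# pairing row i with column j: the W's cancel, leaving x_i*y_j + r_i*s_j
for i in range(n):
    for j in range(m):
        assert dot(rows[i], cols[j]) == (x[i] * y[j] + r[i] * s[j]) % q

# for any bilinear F, combining gives F(x,y) blinded by F(r,s)
F = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
f_xy = sum(x[i] * F[i][j] * y[j] for i in range(n) for j in range(m)) % q
f_rs = sum(r[i] * F[i][j] * s[j] for i in range(n) for j in range(m)) % q
combined = sum(F[i][j] * dot(rows[i], cols[j]) for i in range(n) for j in range(m)) % q
assert combined == (f_xy + f_rs) % q
```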
So if you set the secret key for F to be exactly this blinding term F(r, s), then, intuitively, you can recover F(x, y) only if you have the secret key for F. That's the scheme. The security comes from the fact that the only meaningful thing an adversary can compute, given many challenge ciphertexts, is to pair a row vector from one ciphertext with a column vector from the same ciphertext. If it tries, for example, a mix-and-match attack, pairing a row vector from one ciphertext with a column vector from another ciphertext, that is meaningless, because the vectors are masked by different matrices W. This can be formalized in the generic group model: essentially, the adversary's view only contains values of the form F(x, y) plus F(r, s), for any F of the adversary's choice. That is all the adversary can see, essentially, and you can prove that formally in the GGM. Of course, the adversary is allowed to get secret keys for some functions: if it gets SK_F, it can recover F(x, y), and that's normal. But if F is not covered by the collusion of keys the adversary gets, then, as I said, F(x, y) is computationally blinded by the factor F(r, s). So in the end, the adversary only learns F(x, y) for the functions F it queried a key for. That's the intuition for the proof. Now, this was a private-key scheme. If we want to make it a public-key scheme, we need to make the r_i and the s_j public, as group elements: the public key is the previous master secret key. But now, of course, the secret keys for all functions F are publicly computable, and that should not be. So we modify the secret key for F to live in the source group. But that alone is not enough: you can still decrypt the ciphertext using only the public key.
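Before moving on to the public-key variant, the mix-and-match claim from the security argument above can be checked concretely. A deterministic toy sketch in the exponent, with my own names and a tiny modulus: pairing a row and a column from the same ciphertext cancels W, while crossing two ciphertexts leaves W1 times W2 inverse in the middle:

```python
q = 101  # tiny toy modulus, illustration only

def inv2(W):
    (a, b), (c, d) = W
    di = pow((a * d - b * c) % q, -1, q)
    return [[d * di % q, -b * di % q], [-c * di % q, a * di % q]]

def row(x, r, W):      # (x, r) * W
    return [(x * W[0][k] + r * W[1][k]) % q for k in range(2)]

def col(y, s, Winv):   # W^{-1} * (y, s)^T
    return [(Winv[k][0] * y + Winv[k][1] * s) % q for k in range(2)]

def dot(u, v):         # what the pairing computes in the target group
    return sum(a * b for a, b in zip(u, v)) % q

x, r, y, s = 4, 7, 3, 9
W1, W2 = [[1, 2], [3, 5]], [[2, 1], [1, 1]]   # fresh W per ciphertext

# same ciphertext: W cancels, we learn x*y + r*s = 75
assert dot(row(x, r, W1), col(y, s, inv2(W1))) == (x * y + r * s) % q

# mix-and-match across ciphertexts: W1 * W2^{-1} does not cancel
assert dot(row(x, r, W1), col(y, s, inv2(W2))) != (x * y + r * s) % q
```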
So we need to actually exploit the fact that the secret keys are now in the source group. The way we do so is by adding some randomness to the encryption: we pick, fresh at each encryption, a random scalar sigma that is multiplied into one side of the ciphertext vectors, and we also include it as part of the ciphertext as a group element. Remember, the bracket notation means this is little g to the sigma. Now, when we pair the rows with the columns, what we get is, as before, F(x, y), plus F(r, s), but multiplied by this sigma. So now, intuitively, the public key becomes useless, because the public key only gives you F(r, s) in the target group, and what you need to decrypt is sigma times F(r, s): you really need the fact that the secret key for F is now in the source group, where it can be paired with the group element encoding sigma. Again, this intuition can be translated into a proof, at least in the generic group model: it's as if the adversary's view only contains the random sigma from the ciphertext and quantities of the form F(x, y) plus sigma times F(r, s), and nothing more; everything else the adversary can compute is useless, intuitively. From this, the adversary can only get F(x, y) if it knows the secret key for F, and if it doesn't, the value is blinded again. That's it for security in the generic group model. Now we want a proof of security in the standard model. One technique we use is inspired by what is called dual pairing vector spaces, introduced by Okamoto and Takashima in the context of adaptively secure attribute-based encryption in prime-order pairing groups. You don't need to know exactly what this is. All you need to know is that we are going to transform the previous scheme by turning the scalars r_i and s_j into vectors. What are these vectors?
So r_i is just a random vector, and s_j is also a random vector, of appropriate dimension (dimension two is in fact sufficient), concatenated with some zero slots; you will see why these slots are useful later. And all of this is masked with a dual basis, V and V inverse, where V is again a random invertible matrix. The impact on the secret key is simply that wherever r_i and s_j appeared before, I replace them by these vectors, and this is what I obtain. So far so good, but why do we do that? What is the magic of dual pairing vector spaces? It allows you to computationally switch one vector for another: essentially, you can add anything you want in the extra zero slots. For any x_i and y_j of your choice, you can argue that the original vectors are computationally indistinguishable from the modified ones, under a standard assumption, DLIN in our case. There are also quadratic terms, and we could show that the effect of this switch on the secret key is to add exactly what we want, F(x, y). Originally, in DPVS, there are only these vectors; there are no quadratic terms. So the technical novelty here is to carry out the DPVS proof in the presence of quadratic terms. That is the technical challenge we solve. So now let's go back to our scheme and see how this applies. I just replace r_i and s_j with vectors, and this is the scheme we get; it is very close to the scheme that's actually in the paper. Everything is blown up a bit, so the size is a bit larger. And again, intuitively, there is one part of the proof, which I'm not going to explain exactly, showing that the adversary can only extract this kind of information from the ciphertext: the only meaningful thing it can compute, again, is to pair a row vector with a column vector. Anything else is useless.
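The role of the dual basis and the zero slots can be illustrated with a small sketch, again in the exponent. The modulus and the matrix V are of my choosing (a unipotent V so it is easy to invert by hand); the point is that a row encoded with V and a column encoded with V inverse multiply to the plain dot product, and a hidden slot filled on one side is invisible as long as the dual slot on the other side is zero, which is exactly the room the proof uses:

```python
q = 7919  # toy prime modulus (illustration only)

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) % q for j in range(4)]
            for i in range(4)]

I = [[int(i == j) for j in range(4)] for i in range(4)]
N = [[0, 3, 1, 4], [0, 0, 5, 9], [0, 0, 0, 2], [0, 0, 0, 0]]  # nilpotent

# V = I + N is easy to invert: (I + N)^{-1} = I - N + N^2 - N^3
V = [[(I[i][j] + N[i][j]) % q for j in range(4)] for i in range(4)]
N2 = mat_mul(N, N)
N3 = mat_mul(N2, N)
Vinv = [[(I[i][j] - N[i][j] + N2[i][j] - N3[i][j]) % q for j in range(4)]
        for i in range(4)]
assert mat_mul(V, Vinv) == I          # V and V^{-1} form dual bases

def enc_row(u):   # u * V
    return [sum(u[k] * V[k][j] for k in range(4)) % q for j in range(4)]

def enc_col(v):   # V^{-1} * v
    return [sum(Vinv[j][k] * v[k] for k in range(4)) % q for j in range(4)]

def dot(a, b):    # what the pairing computes in the target group
    return sum(x * y for x, y in zip(a, b)) % q

r = [5, 8, 0, 0]    # random 2-dim part plus zero slots
s = [11, 6, 0, 0]
assert dot(enc_row(r), enc_col(s)) == (5 * 11 + 8 * 6) % q   # plain dot product, 103

# filling a hidden slot on one side is invisible while the dual slot is zero
r_hidden = [5, 8, 42, 0]
assert dot(enc_row(r_hidden), enc_col(s)) == 103
```

In the actual proof the switch from r to r_hidden is justified computationally (under DLIN), not by this exact cancellation, but the sketch shows why the extra slots give the reduction room to work in.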
And the challenge we also solve is to prove that from standard assumptions. So again, it's as if the adversary only gets this information; this is the adversary's view, essentially. Now, using the magic of DPVS, we can switch the secret keys the adversary gets, which, by the definition of the security game, are for functions that do not distinguish the two challenge messages, and, as I showed you before, add an offset F(x, y). So essentially, with the DLIN assumption, using DPVS, we can erase all the information about F(x, y). That's essentially the proof. To conclude, we built an FE scheme for quadratic functions, with ciphertext size linear in n plus m, from pairings. We have two schemes: one is adaptively secure, the other is selectively secure, and they are based on different assumptions; the ciphertext sizes are exact counts of source-group elements. As an open question, it would be interesting to explore what more expressive classes of functions we could build from standard assumptions, for example on pairings, but maybe something else. That concludes my talk, thank you.