Great, thank you all for that introduction. So I'm David, and I'll be telling you about some of our work on constructing functional encryption for randomized functionalities. This is joint work with Shashank Agrawal at Visa Research. So in the last 10 years or so, functional encryption has really emerged as a general paradigm for unifying different kinds of encryption schemes. As you have probably heard throughout the day, in a functional encryption scheme, keys are associated with a deterministic function F, and ciphertexts are associated with messages. The guarantee in a functional encryption scheme is that if you take a ciphertext encrypting a message M and a key for a particular function F, and you then run the decryption algorithm, what you learn is the function F evaluated on the underlying message M. More formally, a functional encryption scheme in the public-key setting consists of the following four algorithms, which I will briefly enumerate for you. There's a setup algorithm that, on input the security parameter, outputs a secret key and a public key for the underlying encryption scheme. There's a key generation algorithm that takes in the secret key and a description of a function F and produces a function key for that function. There's an encryption algorithm that takes in a message M and the public key and produces a ciphertext. And finally, and most interestingly, there's a decryption algorithm that takes in a function key for F and a ciphertext encrypting M and outputs F of M. Now, most of the existing works on functional encryption have considered deterministic functionalities, namely the input to the key generation algorithm is a description of a function that is deterministic. However, not all functions in this world are deterministic. 
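To pin down the four-algorithm interface just described, here is a toy stand-in in Python. This is emphatically not a secure construction — the "ciphertext" does not hide the message at all — and all names here are illustrative, not the paper's; it only shows the shape of setup, key generation, encryption, and decryption.

```python
# Toy, completely insecure stand-in for a public-key functional
# encryption (FE) scheme, shown only to illustrate the interface.
import os
import hmac
import hashlib

def setup(security_parameter=32):
    msk = os.urandom(security_parameter)    # master secret key
    pk = hashlib.sha256(msk).digest()       # stand-in public key
    return msk, pk

def keygen(msk, f, f_desc):
    # Function key: the function plus a tag binding its description to msk.
    tag = hmac.new(msk, f_desc.encode(), hashlib.sha256).digest()
    return (f, tag)

def encrypt(pk, m):
    # Insecure placeholder: a real scheme would hide m under pk.
    return {"pk": pk, "m": m}

def decrypt(sk_f, ct):
    # In a real FE scheme the decryptor learns only f(m), nothing more.
    f, _tag = sk_f
    return f(ct["m"])
```

Running decryption with a key for, say, `m % 7` on an encryption of 100 returns `f(100)` and nothing else about the plaintext in a real scheme.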
In fact, there are many natural examples of randomized functions, and we may actually want to define a functional encryption scheme where we can give out keys for randomized functionalities. Let me motivate this with two concrete examples where this might be useful. For the first example, consider the case of proxy re-encryption. Suppose Alice runs a mail server, and now Alice goes on vacation. Alice wants to take her incoming mail, which is encrypted under some public-key encryption scheme, let's say a public-key functional encryption scheme, and delegate some subset of her emails to her secretary, provided that they're tagged as work-related. So in particular, if the proxy sees a personal email, it should do nothing. But if it sees a work email, the proxy should re-encrypt the contents of the email under her secretary's key so her secretary can process the email while she's on vacation. If we want to solve this with a functional encryption scheme, Alice would provision her email server with a function key for doing this re-encryption process. But as we know, for a public-key encryption scheme to be semantically secure, the encryption algorithm typically needs to be randomized. So if we want to solve this naturally using a functional encryption scheme, we need functional encryption schemes that support randomized functionalities. Let me give you another example of an application that's more naturally captured by a randomized functional encryption scheme. This is the case where we have a bank that is encrypting a bunch of records, and we want to audit the records in the bank. So we have a third-party auditor, and the auditor is basically going to sample some subset of these records and check them for integrity. 
Now of course, in order for the audit to be meaningful, it better be the case that the auditor actually gets a random sample rather than something that could be fixed or maliciously influenced by the bank. So hopefully this has motivated you to think about functional encryption schemes for randomized functionalities. The natural next question to ask is whether these schemes exist at all. In a work by Goyal, Jain, Koppula and Sahai in 2015, they formally defined the notion of a functional encryption scheme for randomized functionalities, which I will often refer to as RFE for short, for randomized functional encryption. In the same work, they also showed that starting from general-purpose indistinguishability obfuscation, you can actually build general-purpose functional encryption schemes for all randomized functionalities. This is a really nice result because it basically shows that functional encryption schemes for randomized functions certainly exist, assuming IO exists. Now, if we look at what we know about functional encryption schemes for deterministic functionalities, it actually turns out that we know a lot more. We have many different classes of constructions, and in particular, a very nice line of work has shown that starting just from public-key encryption or from standard assumptions, such as the learning with errors assumption, we can actually build functional encryption schemes in a bounded-collusion setting. So in particular, we can support functional encryption for all circuits as long as the adversary only gets to see an a priori bounded number of keys. And of course, from multilinear maps or obfuscation, we can actually build functional encryption schemes secure in the standard model. 
So it turns out that if you look at the landscape of existing constructions of functional encryption, there seems to be a large gap between what we know in the deterministic setting and what we know in the randomized setting. In the deterministic setting, we have many different kinds of results and many different kinds of constructions, satisfying different security properties, different compactness properties, and so on. In the randomized setting, we really just have one defining work, which is the original work constructing randomized functional encryption schemes from IO. So a natural question to ask when you look at this picture is whether it is necessarily harder to build functional encryption schemes that support the more general class of randomized functionalities. Do we necessarily need stronger tools or stronger cryptographic assumptions in order to go from deterministic functions to randomized functions? In this work, we show that this is not the case. Our main result is that starting from any general-purpose functional encryption scheme for deterministic functionalities, coupled with some very standard, very well-studied number-theoretic assumptions, namely the decisional Diffie-Hellman (DDH) assumption and the RSA assumption, we can actually boost any functional encryption scheme for deterministic functionalities into an equivalent functional encryption scheme that supports all randomized functionalities. Another way to view this implication is that it's basically saying that randomized functional encryption is not much harder to construct than deterministic functional encryption. So in particular, if you have an application that needs, or is more naturally captured by, randomized functional encryption, you can actually start with a deterministic functional encryption scheme and generically transform it into a randomized one. 
So now let me define for you more precisely what a randomized functional encryption scheme is, and then I will tell you how our generic transformation works. Let's begin with the basic correctness definition. This is what I enumerated at the beginning of the talk, but let me just review. In a deterministic functional encryption scheme, we have a ciphertext encrypting a message M and a key for a particular deterministic function F, and the decryption operation outputs F of M. When we extend to randomized functionalities, things become a little bit more difficult, but let's start simple. Again, if I have a ciphertext encrypting a message M and a key, now for a randomized function F, the decryption algorithm is a deterministic function that should output a random draw from the output distribution of F of M. In particular, the distribution is taken over the randomness used to encrypt and the randomness used in key generation; the decryption operation itself, however, is deterministic. Things get a little more complicated because we can also consider the case where we have two different ciphertexts. Suppose we have two independent encryptions, and we apply the same function key to decrypt both. What we should get in the randomized setting is two independent draws from the output distribution. So even though we use the same key to decrypt two ciphertexts, as long as those ciphertexts are independently generated, even if they encrypt the same value, what we should obtain upon applying the decryption operation is two independent draws from the output distribution. And similarly, if we have the same ciphertext but apply two different function keys, again, if the two function keys are independently generated, even if they're for the same underlying function, we should again see two independent draws from the output distribution. So that's the correctness definition; hopefully it's clear. 
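The correctness requirement can be sketched as follows. A randomized function f is modeled as a deterministic function of the message m and an explicit random string r; honest decryptions of independently generated ciphertexts should look like independent draws of r. This is only an illustration of the requirement, not the paper's construction — `f` and `honest_draw` are hypothetical names.

```python
# Sketch of randomized-FE correctness: decryption is deterministic,
# but independent encryptions yield independent draws from f(m).
import os
import hashlib

def f(m, r):
    # Example randomized function: output depends on message and randomness.
    return hashlib.sha256(m + r).hexdigest()

def honest_draw(m):
    # Models decrypting a freshly, honestly generated ciphertext:
    # each encryption contributes fresh randomness to the draw.
    r = os.urandom(32)
    return f(m, r)
```

Two calls to `honest_draw` on the same message give two (almost surely distinct) independent draws, while `f` itself is deterministic once the randomness is fixed.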
And then for the security, I will just briefly summarize the simulation-based notions of security that we work with in our paper. The way that we usually capture security, at least in a simulation setting, is to say that the only information that's revealed by a ciphertext and a key is basically the function evaluation on the underlying message. The way we capture that is we define two distributions: a real distribution, where we have an honest party generating a key for a function F and a ciphertext for a message M and outputting the function key and the ciphertext, and an ideal distribution, where we have a simulator, some efficient algorithm. The simulator here is only given a description of the function F, so we're not hiding the function F here, and the function output on M. So nothing more about the underlying message is revealed other than the actual function evaluation, since the simulator is able to simulate the ciphertext and the key given only the function evaluation. We can also generalize and extend this to the randomized setting accordingly, so I won't get into the details there. But another caveat that often comes up when we deal with randomized functionalities is that things become very concerning if the encryptor is no longer behaving honestly. Let's consider our previous example, where we have an auditor that's trying to audit a bank. The bank is encrypting records, and the auditor is going to sample records from the encrypted database. What happens if the bank here is being adversarial? What if the bank is malicious? Well, what it could potentially try to do is choose bad randomness or construct these ciphertexts in such a way that when you apply the decryption function, you no longer see an independent draw from the underlying distribution. Because for randomized functionalities, the decryption operation should produce a sample from some distribution. 
If this distribution can be biased or skewed, then that completely compromises the integrity of the audit. So more concretely, let me give you an example. What a malicious encryptor can potentially do is construct two different ciphertexts encrypting messages m and m prime such that when you apply the decryption function using some honestly generated key, what you end up with is not two independent draws from the output distributions of f of m and f of m prime; the draws could be completely correlated, or could even use identical randomness. Both of these would subvert the integrity of the underlying audit process. The way that we capture this more formally is that in the security definition, we give the adversary access to a decryption oracle, just like we capture malicious parties in the CCA setting. So just as we believe that CCA security is the natural, or the most correct, notion for practical deployments of public-key encryption, this sort of robustness against malicious encryptors is the correct notion of security when we look at functional encryption in the randomized setting. Now, having defined the security notions, let me show you how our generic transformation works. Just to briefly review, our generic transformation starts from a functional encryption scheme for the class of deterministic functionalities and bootstraps that into a functional encryption scheme that supports all randomized functionalities. So somehow we're going to make use of the underlying deterministic functional encryption scheme. Not surprisingly, the fundamental tool we're going to rely on is de-randomization. We're going to first construct a de-randomized function where, instead of evaluating the randomized function f using uniformly sampled randomness, we're going to derive that randomness using a pseudorandom function. 
What that entails is that we're going to construct a de-randomized functionality that has a PRF key hard-coded inside it, and the PRF key will be used to derive the randomness for the actual function evaluation. Just to make it more concrete, the key generation algorithm in the randomized functional encryption scheme will begin by sampling a PRF key k and then issuing a key for the underlying deterministic functional encryption scheme for the de-randomized functionality. This produces a function key, and it might seem that we're done. Now, I wouldn't be up here telling you about this transformation if this were all we needed to do, and the problem really boils down to the fact that in the public-key setting, keys do not hide the function. Function hiding is very difficult to achieve; there are many lower bounds saying what kind of function hiding is possible in the public-key setting. What this means is that an evaluator who holds a decryption key can actually look at the decryption key and read off the bits of the PRF key. And once the key for the PRF is public, you can no longer appeal to PRF security and argue that the randomness you're using is actually hidden or looks uniform. So we need to do something more, and what we're going to do is rely on the fact that functional encryption schemes do provide us some hiding properties; in particular, we can hide things in the message. So instead of putting the PRF key entirely in the function key, we're going to split it into two pieces. One of them we're going to embed in the ciphertext, and one of them we're going to embed in the actual function key. So let me just make sure this is clear. 
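The de-randomization step above can be sketched as follows. HMAC-SHA256 stands in for the PRF here, and — as the talk explains next — simply hard-coding k into a public-key function key like this is exactly what does not work, since the key is not hidden; the sketch only shows the de-randomization itself.

```python
# Sketch of de-randomization: derive the evaluation randomness as
# PRF(k, m) instead of sampling it fresh. HMAC-SHA256 is a PRF stand-in.
import hmac
import hashlib

def prf(k, data):
    return hmac.new(k, data, hashlib.sha256).digest()

def derandomize(f, k):
    # Returns a deterministic function g with the PRF key k hard-coded.
    def g(m):
        r = prf(k, m)     # randomness derived from the message
        return f(m, r)    # f takes its randomness r explicitly
    return g
```

Because g is deterministic, evaluating it twice on the same message gives the same output, while (under PRF security, as long as k stays hidden) distinct messages get randomness that looks independent and uniform.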
On encryption, the encryptor will sample a key share, and when it encrypts, instead of encrypting the message by itself, it's going to encrypt the message together with the key share. The key generation algorithm will also sample a key share, and it will output a de-randomized function where the PRF key is combined from both the key share in the function key and the key share in the message. So there will be some operation that does the recombination. Here we actually need a stronger property from the underlying pseudorandom function, because if you look at this picture, the encryptor actually gets to influence the key that's used to de-randomize the functionality, and so we actually require pseudorandom functions secure against related-key attacks. Using RKA-secure PRFs, we can make this part of the transformation go through. Now, unfortunately, this construction is still not quite enough, for the precise reason that it does not provide robustness against malicious encryptors. The way to understand this is that the encryptor still has a lot of flexibility in constructing the ciphertext. So let's see where the encryptor can influence the ciphertext that is produced. Certainly an encryptor can choose the key share that's going to be used in the ciphertext, but this is actually not too bad, because we can just appeal to RKA security of the underlying pseudorandom function to say that this is not problematic. What does turn out to be problematic, for a fairly subtle reason, is that the encryptor can choose the randomness used to encrypt under the underlying functional encryption scheme, and this turns out to be surprisingly problematic. Let me give you an example of why this is the case. Consider an encryptor who simply chooses the same key share and encrypts twice. 
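The key-share splitting can be sketched as follows. XOR recombination is shown for concreteness — because a malicious encryptor controls its share, the paper needs a PRF secure against related-key attacks, which plain HMAC is not known to be; everything here is an illustrative stand-in, not the paper's instantiation.

```python
# Sketch of splitting the PRF key into two shares: one share lives in
# the ciphertext, the other in the function key; they are recombined
# inside the de-randomized functionality at decryption time.
import hmac
import hashlib

def combine(share_ct, share_key):
    # Toy recombination: XOR of the two 32-byte shares.
    return bytes(a ^ b for a, b in zip(share_ct, share_key))

def derandomized_eval(f, share_key, m, share_ct):
    k = combine(share_ct, share_key)              # recombined PRF key
    r = hmac.new(k, m, hashlib.sha256).digest()   # PRF stand-in
    return f(m, r)
```

Decryption is still deterministic given both shares, but the encryptor's influence on the recombined key is exactly why RKA security of the PRF is needed.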
The encryptor uses two different random values and constructs two different ciphertexts, both encrypting the same underlying plaintext pair. What happens is we now have two distinct FE ciphertexts, because they were independently generated, and yet they encrypt the same underlying message. So no matter which of the two ciphertexts we decrypt with a given function key, we always get the same output. We're never going to see two independent draws from the output distribution, even though we have two ciphertexts that look distinct. And in the ideal distribution, if we really want to capture security against dishonest encryptors, what we desire is that if we have two different ciphertexts, they should produce two independent-looking outputs. The problem really boils down to the fact that here the encryptor has too much freedom in constructing the ciphertext. So when building functional encryption schemes for randomized functionalities, there's a fine line that we have to tread, where we balance the randomness the encryptor needs for semantic security against the need to constrain a malicious encryptor from producing bad ciphertexts. The way that we're going to address this final problem is to take ideas inspired by deterministic encryption, and the key observation is that honestly generated ciphertexts actually have a lot of entropy in them. If you look at what an honestly generated ciphertext looks like, it contains the message and it contains a key share for a PRF, and because this is a key share for a PRF, it's going to be generated from a distribution with a lot of entropy. So the key idea here is that instead of having the encryptor choose the randomness used for the underlying FE encryption arbitrarily, it's instead going to derive that randomness from the message itself. In particular, the encryptor is going to derive the randomness using the key share k1. 
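The replay problem just described can be demonstrated concretely. Here the "FE ciphertext" is a toy hash that keeps its payload in the clear, just to show the effect: two ciphertexts with the same (message, key share) pair but fresh FE randomness look distinct, yet decrypt to the same value — the "two independent draws" guarantee fails. All names are illustrative.

```python
# Illustration of the replay problem: a dishonest encryptor re-encrypts
# the same (m, k1) pair under fresh FE randomness rho.
import os
import hmac
import hashlib

def toy_fe_encrypt(pk, m, k1, rho):
    # rho is the encryptor-chosen FE encryption randomness; distinct rho
    # values produce distinct-looking ciphertext bodies.
    ct_body = hashlib.sha256(pk + m + k1 + rho).digest()
    return (ct_body, m, k1)          # payload kept in the clear (toy only)

def toy_decrypt(share_key, ct):
    # Decryption depends only on (m, k1, share_key), not on rho, so both
    # "distinct" ciphertexts decrypt to the same output.
    _, m, k1 = ct
    k = bytes(a ^ b for a, b in zip(k1, share_key))
    return hmac.new(k, m, hashlib.sha256).digest()
```

This is exactly why the encryptor's freedom to pick rho must be taken away.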
This might seem problematic, because now the randomness used for encryption is correlated with the message, but with a careful choice of how we construct the scheme, we can make the security proof go through. So let me show you what the final construction looks like. The encryption algorithm, on input a message M, will first sample a key share, and when it encrypts, it's going to encrypt the pair containing the message and the key share, where the randomness used for the encryption is derived from the key share itself. Moreover, we also need to attach a proof. This resembles how you go from CPA security to CCA security by attaching a NIZK argument or a NIZK proof of knowledge. In this case, we're going to have the encryptor provide a proof that it actually generated the ciphertext in the prescribed manner; the NIZK certifies that the ciphertext was properly generated. The intuition for why this works boils down to the following. The ciphertext can now be viewed as a deterministic function of the message: once the encryptor has chosen a message and a key share, there is only one ciphertext that can be produced. So if the encryptor ever produces two distinct ciphertexts, it must be the case that either the message differs or the key share differs. And if either of these two components differs, then by security of the underlying PRF, in particular by related-key security of the underlying PRF, we can argue that the function is evaluated using randomness that looks uniformly random. Okay, so to summarize our transformation: we begin with a simulation-secure functional encryption scheme for deterministic functionalities, and combined with several ingredients that can all be instantiated using the DDH and RSA assumptions, we obtain a simulation-secure functional encryption scheme for all randomized functionalities. 
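The shape of the final encryption algorithm can be sketched as follows. The underlying FE encryption and the NIZK are both stubbed out (a hash and a placeholder string respectively), so this only captures the structure: sample a key share k1, derive the FE randomness from (k1, m) rather than choosing it, and attach a well-formedness proof.

```python
# Sketch of the final encryption algorithm: the ciphertext becomes a
# deterministic function of (m, k1). Stand-ins: HMAC for the PRF, a
# hash for the underlying FE encryption, a stub for the NIZK.
import os
import hmac
import hashlib

def prf(k, data):
    return hmac.new(k, data, hashlib.sha256).digest()

def fe_encrypt(pk, payload, r):
    # Placeholder for encryption under the deterministic-functionality FE.
    return hashlib.sha256(pk + payload + r).digest()

def rfe_encrypt_with_share(pk, m, k1):
    r = prf(k1, m)                  # randomness derived, not chosen
    ct = fe_encrypt(pk, m + k1, r)
    proof = b"nizk-stub"            # would prove ct was formed this way
    return ct, proof

def rfe_encrypt(pk, m):
    k1 = os.urandom(32)             # fresh key share per encryption
    return rfe_encrypt_with_share(pk, m, k1)
```

The point of the design shows up directly: for a fixed (m, k1) there is exactly one ciphertext the encryptor can produce, so any two distinct ciphertexts must differ in the message or the key share.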
And let me note here that our transformation preserves a lot of the underlying security properties as well as the compactness properties of the underlying scheme. Prior to this work, the state of public-key functional encryption could really be divided into two parts. We had a lot of different kinds of constructions, with different flavors of security, different flavors of compactness, and so forth, for deterministic functionalities. For randomized functionalities, we really had only one construction, based on IO. Using the generic transformation of this work, we can take all of the existing constructions of functional encryption for deterministic functionalities and bootstrap them into corresponding schemes for the class of all randomized functionalities, and as an added bonus, we get adaptive security even against malicious encryptors. So let me conclude with a few open questions. One natural question to ask is whether a more direct or more efficient construction is possible if we look at weaker functionalities. Here I have described a generic transformation that starts with a functional encryption scheme that supports all circuits and transforms it into a scheme that supports all randomized circuits. Maybe if all you care about is sampling elements from a database, or sampling a few random entries, there's a more efficient construction possible there. Another open question is whether we can do this transformation from deterministic FE to randomized FE without making additional assumptions such as DDH and RSA. I should note here that this is possible in the secret-key setting. 
In a work by Komargodski, Segev and Yogev, they showed that starting from functional encryption schemes for deterministic functionalities in the secret-key setting, you can boost that to a functional encryption scheme for all randomized functionalities without needing to make additional complexity assumptions. And finally, another technical question is whether it's possible to have a similar transformation in the case where we start with a functional encryption scheme that's secure under an indistinguishability-based notion of security, rather than the simulation-based notion of security that we require in this work. And thank you very much. I'll take questions now. [Session chair] We have time for a question. [Question] Hi. So the previous construction of randomized FE used IO, right? [Answer] That's right. [Question] So I'm wondering, why doesn't the following work: start from a deterministic FE, do the FE-to-IO transformation to get IO, and then apply the IO-based construction? [Answer] Right, so yes, that's one way you can do this transformation. [Question] And that makes no other assumptions, right? [Answer] Yes, but if you start with a bounded-collusion-secure functional encryption scheme, so something from PKE or LWE, then you can't bootstrap that to IO. So this gives a new class of randomized functional encryption schemes that are secure in the bounded-collusion setting from standard public-key-encryption-based assumptions. [Question] So your open question is how to start from a bounded-collusion FE for deterministic functions? [Answer] Sure, you can also reformulate it that way. Another way to formulate it is that the FE-to-IO construction incurs a sub-exponential loss in the security reduction. So can you do everything with only a polynomial loss in the security reduction? I think that's also still open.