Hi, everyone. I'm Joseph, and I'm going to tell you about my paper, Handling Adaptive Compromise for Practical Encryption Schemes, which is joint work with Nirvan Tyagi. In this paper, we introduce new security notions for symmetric encryption and PRFs in an adaptive compromise setting. So let's start by discussing what this setting looks like. We think of there being some number of parties communicating while accruing cryptographic secrets. An attacker observes this communication and, based on its observations, may adaptively compromise the secrets of any subset of the parties. This general setting appears regularly in cryptography for many different purposes. In particular, the examples that motivated this work consisted of adaptive compromise of symmetric encryption and PRFs in simulation-based security definitions. In such a setting, we consider the difference between a real and an ideal world. In the real world, the attacker makes queries to some cryptographic construction Pi. In the ideal world, its queries are instead answered by a simulator, which is given only some leakage about the queries. The advantage of the adversary is defined as the difference in the probability that it outputs one in these two worlds.

In our motivating examples, symmetric encryption came up as a small component of a complicated protocol, such that the attacker could interact with it in two ways. The attacker could ask for encryptions of messages under a secret key K, and later it could ask for this key to be exposed to it. In the ideal world, the leakage given to the simulator on encryption queries is only the length of the message to be encrypted. Only later, when it is asked to expose a key, is the simulator told the content of the messages. Such simulation is possible with non-committing encryption. However, by a result of Nielsen, this requires the key to be as long as all of the messages that will ever be encrypted under it, which is impractical in many settings.
Unless we are willing to work in a programmable ideal model, pseudorandom functions behave similarly. The attacker can ask for the function to be evaluated at chosen points, and then later ask for the underlying key to be exposed. In the ideal world, the responses to the evaluation queries are chosen randomly. As before, the techniques of Nielsen show that such a PRF would necessarily need long keys or have to use a programmable ideal model. Our motivating examples tended to follow a common approach: they would first fix a particular encryption scheme or PRF, and then prove the security of their higher-level protocols using this scheme in some programmable ideal model. A common scheme used in these examples worked as follows. First, we pick a PRF F that will be modeled as a random oracle. To encrypt a message M with key K, we pick a random value, feed this value through F_K, and then XOR the result with the message. To simulate encryption by this scheme, we can simply pick the ciphertexts at random. When we need to expose a key, we pick one at random and reprogram the random oracle to be consistent with our earlier encryptions.

However, this leaves us with some questions. What if we want to use a PRF that can't be thought of as a random oracle, say some block cipher, which is better modeled as an ideal cipher? What if we want to use a different mode of operation with this PRF? Following the status quo, we would need to write a completely separate proof for each choice of mode of operation, PRF, and application that we want to prove secure. Summing over these different combinations, we have a cubic number of proofs that need to be written. Moreover, each of these proofs is an ideal-model programming proof, which, while conceptually straightforward, can be quite detail-intensive.
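To make the common scheme concrete, here is a minimal sketch in Python. The talk models F as a random oracle; SHA-256 is my own stand-in for illustration, and the function names and the one-block restriction are mine, not from the paper.

```python
import hashlib
import os

def F(key: bytes, x: bytes) -> bytes:
    """The PRF F_K, modeled in the talk as a random oracle; SHA-256 stands in here."""
    return hashlib.sha256(key + x).digest()

def encrypt(key: bytes, msg: bytes) -> tuple:
    """Pick a random value R, feed it through F_K, XOR the result with the message."""
    assert len(msg) <= 32  # one PRF block, for simplicity
    r = os.urandom(32)
    pad = F(key, r)
    ct = bytes(p ^ m for p, m in zip(pad, msg))
    return r, ct

def decrypt(key: bytes, r: bytes, ct: bytes) -> bytes:
    """Recompute the pad from R and XOR it off."""
    pad = F(key, r)
    return bytes(p ^ c for p, c in zip(pad, ct))
```

Because the pad F_K(R) is (heuristically) uniform, a simulator that has not seen K can answer an encryption query with a fresh random (R, C) pair of the right length, which is exactly the simulation strategy described above.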
To exemplify this, let's categorize the proofs in the papers we cited earlier by whether they provide all the details of the programming proofs, and by the correctness of said proofs. The majority of these papers had no bugs, but did not provide a detailed analysis of the programming. All of the examples we found that did provide the details had bugs in their proofs. We also identified bugs in some papers that only sketched the details of the programming analysis. To be clear, I'm not trying to pick on these earlier papers; after all, my co-author and I were separately authors on two of them. These difficulties in writing a thorough, correct proof indicate the need for a new approach.

In this work, we provide an alternative to the status quo by introducing layers of abstraction. We introduce a target definition for PRFs, called SIM-AC-PRF security, that captures the needs of simulation security under adaptive compromise. In appropriate ideal models, PRFs can be shown to achieve this notion. We additionally introduce various target definitions for encryption schemes; modes of operation can then be shown to achieve these, assuming they use a SIM-AC-secure PRF. Finally, each application's security can be shown assuming the SIM-AC security of the underlying encryption scheme. Using this approach, only a linear number of proofs is required and, moreover, the complicated programming proofs are relegated to the lowest-level proofs, particularly those regarding the PRFs. Their details are then inherited for free by all of the higher-level proofs. Before we proceed, let me give a high-level overview of our results. First, as noted, we provide new security definitions for symmetric encryption and PRFs in adaptive compromise settings.
Then, we establish the usefulness of our new notions by using them to prove the security of several applications, including a searchable encryption scheme due to Cash et al., the BurnBox self-revocable encryption scheme of Tyagi et al., and the OPAQUE password-authenticated key exchange protocol of Jarecki et al. Finally, we provide lifting results. These show that classic results in symmetric cryptography extend to our new definitions. This includes results for modularly achieving our CCA security notion, results showing that random oracles and ideal ciphers make good PRFs, and results establishing the SIM-AC security of various classic modes of operation when they use SIM-AC-secure PRFs. This last part is derived from a general result that any mode of operation meeting a notion of extractability will achieve our CPA notion.

Let me start by discussing our security definitions. These can be viewed in the same simulation-based setting we considered earlier. Our basic CPA notion is a multi-user notion. In the real world, the adversary has access to three interfaces. Via the encryption interface, it can ask a user U to encrypt a message M. The exposure interface allows the attacker to obtain the secret key of any user it chooses. Finally, the ideal primitive interface allows the attacker to make queries to the ideal primitive. The ideal world works similarly to what we have seen before. On encryption queries, the simulator is told the length of the message. When asked to expose the key of some user, the simulator is given all of the messages that were previously queried to that user. Finally, for queries to the ideal primitive, the simulator sees the attacker's query in the clear. To obtain the CCA version of our definitions, we simply add a decryption interface. The attacker is disallowed from querying this oracle on ciphertexts that it received from the encryption interface. In the ideal world's decryption interface, the simulator is given the ciphertexts that the attacker queries in the clear.
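The two worlds of the multi-user CPA game can be sketched as follows. This is my own illustrative pseudocode-in-Python, not the paper's formalism; the class and parameter names are hypothetical, and the ideal primitive interface is omitted for brevity.

```python
import os

class RealGame:
    """Real world: each user has its own key, and encryption runs the actual scheme."""
    def __init__(self, encrypt):
        self.encrypt = encrypt
        self.keys = {}

    def enc(self, user, msg):
        key = self.keys.setdefault(user, os.urandom(32))
        return self.encrypt(key, msg)

    def expose(self, user):
        # Adaptive compromise: hand the attacker the user's actual key.
        return self.keys.setdefault(user, os.urandom(32))

class IdealGame:
    """Ideal world: the simulator sees only |M| per encryption query, and the
    queried messages only when that user's key is exposed."""
    def __init__(self, sim_enc, sim_expose):
        self.sim_enc, self.sim_expose = sim_enc, sim_expose
        self.log = {}

    def enc(self, user, msg):
        self.log.setdefault(user, []).append(msg)
        return self.sim_enc(user, len(msg))           # leakage: message length only

    def expose(self, user):
        # Only now does the simulator learn the messages it must explain.
        return self.sim_expose(user, self.log.get(user, []))
```

The adversary's advantage is then the gap between its probability of outputting one against `RealGame` versus `IdealGame`.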
For symmetric crypto, it's common to target strong notions such as key privacy, indistinguishability of ciphertexts from random bits, or authenticated encryption, which combines both confidentiality and integrity. Each of these can be captured in our setting by placing appropriate restrictions on the behavior of the simulator. Our PRF definitions work similarly: the encryption interface is replaced by an evaluation interface, and for unexposed users, random values are returned at this interface in the ideal world.

As discussed, we used our definitions to prove the security of several applications. For the searchable encryption scheme we consider, Cash et al. provided a thorough proof of non-adaptive security using standard security notions for encryption and PRFs, and a brief sketch of a proof when using a particular random-oracle-based construction. In our work, we re-proved this result using our key-private security notion for encryption and our PRF security notion. Tyagi et al. proved the security of their BurnBox self-revocable encryption scheme in the ideal encryption model, a new and very strong ideal model which they introduced specifically for this purpose. In our work, we re-prove their result under the assumption that the encryption scheme meets our new CPA notion. Finally, the OPAQUE password-authenticated key exchange protocol of Jarecki et al. is proven secure assuming that an equivocable encryption scheme is used. This is a new notion of security that they introduced for symmetric encryption in the ePrint version of their paper. In our work, we observe that this new notion is implied by our own SIM-AC-CPA notion; in particular, it is basically a single-user, one-time-use special case of our notion. If you'll allow me, I'd like to take a brief aside to discuss some philosophical points about how to interpret the results of our paper.
In the traditional motivation for ideal models, say for example the random oracle model, cryptographers have some scheme Pi and would like to understand whether it meets a security notion X when using, say, your favorite SHA hash function. They run into an issue and are unable to come up with any reasonable standard-model assumption they can make about SHA which would imply the X security of Pi. So they instead show the scheme is secure when using a random oracle in place of SHA. Then they use this as a sort of heuristic justification for believing that security will be maintained when the random oracle is instantiated with a concrete SHA hash function. If we try to apply this to our definitions, say thinking of X as being SIM-AC-CPA security, then we run into a problem: the techniques of Nielsen that we discussed earlier extend readily to rule out the possibility of any standard-model scheme achieving our notion. Okay, then what about this: if we use our scheme and the assumption that it achieves some notion X to show that some higher-level protocol Pi-prime achieves a notion Y, can we at least use our proofs to heuristically assume that Pi-prime achieves Y when the random oracle is instantiated? Unfortunately, no again. At least for the applications we considered, Nielsen is again here to ruin our fun. This leaves us with something of a conundrum: what's the point of all the results shown in our paper, if there's no corresponding standard-model assumption we can be making when we instantiate the random oracle or ideal cipher? The interpretation I would like to put forth works like this. First, note that our notions X and the higher-level notions Y that we work with in this paper are, in some sense, incredibly strong notions.
So our perspective here is that we can think of there being some weakened analogues of these definitions, X-prime and Y-prime, which are strong and give us desirable properties, but which avoid these impossibility results of Nielsen and may be achievable in the standard model. Because of the clear strength of X and Y, we can be convinced that they will imply any reasonable X-prime and Y-prime, even if we aren't quite sure yet what the right choices for X-prime and Y-prime are.

I'll finish out this talk by discussing how we showed that various PRFs and encryption schemes meet our desired notions of security. We start at the lowest level, showing that random oracles and ideal ciphers achieve our PRF notion. These proofs provide the conceptually straightforward yet detail-intensive ideal-model programming that's at the core of all of our results. The simulation strategy is fairly straightforward. Evaluation queries are given random responses. Later, when we need to expose some user U's key, we pick this key at random and then program our ideal model to be consistent with the previous evaluation results we returned. The core of analyzing this simulation strategy consists of bounding the probability of some bad events. The first of these is that the same key happens to be sampled by multiple different users. The second is that the attacker is somehow able to query the ideal primitive using a user's key before that key is exposed. And the third, which is only applicable for ideal ciphers, is that the random responses returned by evaluation are inconsistent with the ideal cipher being a permutation. Missing the first of these bad events was the source of many of the proof bugs in prior papers that we discussed earlier. To move toward proving that symmetric encryption schemes meet our new notions, let's first discuss how we lifted modular results for achieving CCA security to work with our new definitions.
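The exposure-time programming for a random-oracle PRF F_K(x) = RO(K || x) can be sketched like this. The class and method names are mine for illustration; the asserts mark the first two bad events from the analysis (key collision, and the attacker querying the oracle under an unexposed key), which with 32-byte keys occur only with negligible probability.

```python
import os

class ROPRFSimulator:
    """Simulate the SIM-AC-PRF ideal world for F_K(x) = RO(K || x) by lazily
    sampling the random oracle and programming it at exposure time."""
    def __init__(self):
        self.ro = {}      # lazily sampled, programmable random-oracle table
        self.evals = {}   # per-user evaluation transcripts
        self.keys = {}    # keys of already-exposed users

    def primitive(self, query):
        """Attacker's direct random-oracle queries, answered lazily."""
        if query not in self.ro:
            self.ro[query] = os.urandom(32)
        return self.ro[query]

    def evaluate(self, user, x):
        """Before exposure, evaluation responses are simply random."""
        tbl = self.evals.setdefault(user, {})
        if x not in tbl:
            tbl[x] = os.urandom(32)
        return tbl[x]

    def expose(self, user):
        """Pick the key now, and program the oracle to match past responses."""
        if user in self.keys:
            return self.keys[user]
        key = os.urandom(32)
        assert key not in self.keys.values()      # bad event 1: key collision
        for x, y in self.evals.get(user, {}).items():
            assert key + x not in self.ro         # bad event 2: RO queried under key
            self.ro[key + x] = y                  # program for consistency
        self.keys[user] = key
        return key
```

After exposure, direct oracle queries of the form key || x return exactly the values that were handed out at evaluation time, so the attacker's view stays consistent.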
Bellare and Namprempre showed that IND-CPA security plus integrity of ciphertexts implies IND-CCA security. We obtained the analogous result with our new versions of CPA and CCA security. To prove this result, all we needed to do was observe that integrity of ciphertexts means our simulator can respond to decryption queries simply by rejecting all ciphertexts. Next, consider the Encrypt-then-MAC construction using an encryption scheme E and a message authentication code M. Bellare and Namprempre showed that such a scheme will be IND-CCA secure assuming that E is IND-CPA secure and M is UF-CMA secure. We again obtained the analogous result using our versions of these definitions. The flow of the proof mirrors that originally used by Bellare and Namprempre. We borrow the result showing that the security of the MAC gives integrity of ciphertexts. Then we show that SIM-AC-CPA security is preserved by the Encrypt-then-MAC transformation. With these two results, we can then call back to the first result we discussed on this slide to complete the proof that CCA security is achieved.

This just leaves us with understanding how to show that schemes achieve SIM-AC-CPA security. Commonly, symmetric encryption schemes are constructed as modes of operation over some underlying PRF. Here we can think of the encryption being done by some algorithm which has only oracle access to the PRF. A typical result would say that the encryption scheme has ciphertexts which are indistinguishable from random if the underlying PRF is secure. Our goal is to provide the analogous results with our new definitions. A core part of this new security reduction involves showing how to use a simulator for the PRF to construct a simulator for the encryption scheme as a whole. As a concrete example, let's think about the encryption scheme we were looking at earlier. When F was a random oracle, we saw how to simulate for this scheme: ciphertexts were just chosen at random.
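A minimal sketch of the Encrypt-then-MAC construction, using HMAC-SHA256 as the MAC purely for illustration (the paper's results are generic over any UF-CMA MAC; the function names and the parameterization over the base scheme are my own). Note how rejecting on any tag mismatch is exactly what lets the simulator answer decryption queries by rejecting everything.

```python
import hashlib
import hmac
import os

def etm_encrypt(ke, km, msg, base_encrypt):
    """Encrypt-then-MAC: encrypt under ke, then MAC the ciphertext under km."""
    ct = base_encrypt(ke, msg)
    tag = hmac.new(km, ct, hashlib.sha256).digest()
    return ct + tag

def etm_decrypt(ke, km, blob, base_decrypt):
    """Verify the tag first; reject (return None) on any mismatch."""
    ct, tag = blob[:-32], blob[-32:]
    expected = hmac.new(km, ct, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return None  # integrity of ciphertexts: forgeries are rejected
    return base_decrypt(ke, ct)
```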
And then for key exposures, we first picked the key at random and then programmed our random oracle with that key to be consistent with the previous ciphertexts we returned. If we instead just assume that our PRF is SIM-AC-PRF secure, we have to modify the exposure part of this simulation. First, we work backwards and figure out the input-output pairs for F that would be consistent with the previous ciphertexts we returned. Then we give these pairs to the simulator for F, which returns a key. This key is what we then output and return to the attacker. We refer to this step of choosing input-output pairs as extraction. For a general security result, we introduce the notion of extractable encryption schemes. These are just schemes that come with a procedure for determining explanatory values of the PRF. For such a scheme, there's a canonical strategy we can use for simulating. First, ciphertexts are just chosen at random. Then, at exposure time, we use the extractor to obtain these explanatory values and feed them to the simulator for the PRF to obtain the key that we're going to return. Our main CPA theorem says that any extractable mode of operation is secure when using a SIM-AC-secure PRF. Formally, extractability requires three separate properties. A correctness property requires that encrypting with values which were extracted from some ciphertext C will give back that same ciphertext C. A uniformity property says that if the ciphertext is picked at random, then the extracted PRF outputs should look random. And finally, there's a security property which captures that extraction should be unlikely to give inconsistent outputs for the PRF. Typically, the first two of these properties are straightforward to verify for modes of operation. Security is typically the most interesting property.
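For the example scheme from earlier, with ciphertexts of the form C = (R, F_K(R) XOR M), extraction is just XORing the message back off. A rough sketch (my own illustrative code, not the paper's formal extractor), including the consistency check that the security property bounds:

```python
def extract(ciphertexts, messages):
    """Work backwards from ciphertexts C = (R, F_K(R) XOR M) to the PRF
    input/output pairs (R, F_K(R)) that explain them."""
    pairs = {}
    for (r, ct), msg in zip(ciphertexts, messages):
        y = bytes(c ^ m for c, m in zip(ct, msg))  # F_K(R) = C' XOR M
        if r in pairs and pairs[r] != y:
            # the 'security' property says this should be unlikely for
            # honestly generated (here: randomly simulated) ciphertexts
            raise ValueError("inconsistent extraction")
        pairs[r] = y
    return pairs
```

At exposure time, the canonical simulator hands these (R, y) pairs to the SIM-AC-PRF simulator, which returns the key given to the attacker.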
Fortunately, what we find is that the required analysis is typically already implicit in the existing security proofs for the scheme under the standard notions of PRF security and IND$ security. So, putting it together, our framework of extractable modes of operation provides a clean way of efficiently lifting known results to hold with our new SIM-AC-style notions. And with that, I have reached the end of my talk. I will leave you with this slide, which recalls the main contributions of our work. Thanks for your attention, and have a nice day.