Hi, I am Ashrujit Ghoshal, and this is joint work with Joseph Jaeger and Stefano Tessaro. Very broadly, this work asks whether we can prove time-memory trade-offs for authenticated encryption. In general, concrete security theorems attempt to prove an upper bound on the advantage of an adversary in breaking security in terms of the resources of the adversary. In most prior work, the resources considered are an adversary's time complexity and data complexity, that is, the number of queries it makes. In this work, we additionally consider the adversary's memory as a resource. This is important because the feasibility of a certain attack might be seriously affected by the amount of available memory; hence, concrete security analyses should take this into account. Even though this question is clearly fundamental, only very recently have we seen some prior work on this, and it has generally taken one of two approaches. The first gives time-memory trade-offs for symmetric encryption, while the second focuses on either giving, or proving the impossibility of, memory-tight reductions. We will see more precisely what these things are later in the talk. The former approach has mainly focused on the confidentiality of encryption, while the latter has focused on public-key cryptography. In this work, we combine elements from both lines of work for the first time. We ask whether we can lift the time-memory trade-offs for the confidentiality of encryption to the nonce-based authenticated encryption setting. Very briefly, the takeaway message from our work is: it's complicated. We provide both positive and negative results. Since this talk is about nonce-based encryption, let me quickly recall what I mean by that. A nonce-based encryption scheme consists of three algorithms. The key generation algorithm returns a key. The encryption algorithm takes as input a message and a nonce, along with the key, and returns a ciphertext.
The decryption algorithm takes as input a nonce and a ciphertext along with the key, and either returns a message or the bottom symbol indicating failure. A nonce-based encryption scheme guarantees security only if encryption is done under distinct nonces. There has been a long line of work proving the concrete security of nonce-based authenticated encryption. These works, of course, ignored the memory of the adversary. Now, the question here is: can we extend them to take memory into account? Well, the problem has already been very hard just for the case of confidentiality, without the additional property of integrity that we need for authenticated encryption. To illustrate this, here is a toy example of a nonce-based encryption scheme based on an n-bit block cipher. It just encrypts the nonce using the block cipher and XORs the result into the message. One can prove a theorem, which is a corollary of two previous works, one of which makes connections to communication complexity. It proves an upper bound on the advantage of distinguishing ciphertexts from random, which is the standard notion of confidentiality for nonce-based encryption. In particular, the advantage of an adversary can be upper bounded in terms of its memory and number of encryption queries, and the advantage of breaking the underlying block cipher as a pseudorandom permutation. The first term here shows that there is a time-memory trade-off for breaking the confidentiality of the scheme. The reason we want such trade-offs is that they tell us exactly how many queries a memory-bounded adversary needs to make to break security. For example, for adversaries with memory less than 2^(n/2), we get beyond-birthday security here. So we ask whether we can prove similar results for AE security, in particular for widely used schemes like GCM. We target the notion of AE that combines confidentiality and integrity.
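To make the toy scheme concrete, here is a minimal Python sketch. The hash-based function E below is a hypothetical stand-in for the n-bit block cipher (a real instantiation would use an actual block cipher such as AES); only the mode itself, C = E_K(N) XOR M, is what the toy scheme describes.

```python
import hashlib

N_BYTES = 16  # block size n = 128 bits

def E(key: bytes, x: bytes) -> bytes:
    # Hypothetical stand-in for the n-bit block cipher E_K: a hash-based
    # function, not a real permutation -- fine for illustrating the mode.
    return hashlib.sha256(key + x).digest()[:N_BYTES]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(key: bytes, nonce: bytes, msg: bytes) -> bytes:
    # C = E_K(N) XOR M: encrypt the nonce, XOR the result into the message.
    return xor(E(key, nonce), msg)

def decrypt(key: bytes, nonce: bytes, ct: bytes) -> bytes:
    # Decryption recomputes the same pad E_K(N) and XORs it off.
    return xor(E(key, nonce), ct)
```

Note that security of this mode hinges on nonces being distinct: reusing a nonce reuses the pad E_K(N).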
The usual approach is to prove separately a bound on INDR security, that is, indistinguishability from random ciphertexts, and a bound on CTXT security, that is, ciphertext integrity, and then combine them to show AE security. This can be cast as a concrete security theorem that upper bounds the advantage against AE security in terms of the advantages against INDR and CTXT security. The question, however, is: what does this result look like when we take memory into account? Ideally, we want a memory-tight reduction; that is, we want the memories s1 and s2 of the reductions to be very close to the memory s of the AE adversary. Unfortunately, the known reduction is not memory-tight. Let's see why. Let me be a bit more precise. The combined notion of security for authenticated encryption considers a real world where the key is generated and the adversary is given access to two oracles, one issuing encryptions of messages under a chosen nonce, and the other decrypting ciphertexts under a chosen nonce. This is compared with an ideal world where the encryption oracle returns random ciphertexts and the decryption oracle returns decryptions only if the message was previously encrypted with the same nonce. We need to upper bound the advantage of an adversary running in time t, making q queries, and using memory s in distinguishing between these two worlds. The traditional approach first introduces an intermediate world in which the adversary has access to the encryption oracle from the real world and the decryption oracle from the ideal world. It is actually easy to upper bound the advantage in distinguishing the real world from the intermediate world by invoking CTXT security, and this reduction is actually memory-tight, meaning that the resources of the CTXT adversary are almost the same as those of the AE adversary. The real problem occurs in upper bounding the advantage of distinguishing the intermediate world from the ideal world.
Here we need to assume INDR security for an amount of memory that grows with the number of queries. We shall see why that is the case next. Remember that INDR security only gives us the corresponding encryption oracle. If we want to simulate the two worlds that the AE adversary is trying to distinguish, we would need to simulate the ideal decryption oracle, which requires remembering all the ciphertexts. So the memory grows linearly in the number of queries. Well, this was one particular proof. The question is whether we can come up with a clever proof that bypasses this problem. That brings us to our contributions. In a nutshell, our results are centered around the question of whether we can make the reduction memory-tight. First of all, we show that we can indeed make this reduction memory-tight in the restricted setting of channels. This setting captures usage in protocols like TLS. We introduce memory-adaptive reductions, a new technique for giving memory-tight reductions where the memory of the reduction depends on the memory of the adversary. Secondly, we give an impossibility result for a memory-tight reduction in the most general setting of nonce-based authenticated encryption. Next, I shall introduce the channel setting and talk about the memory-adaptive reduction. The channel setting is motivated by the typical use of authenticated encryption as a means to establish a secure communication channel, as in TLS. In particular, only certain restricted adversarial interactions can occur. For example, nonces are implicit: they are used as a counter, and the receiver aborts upon the first decryption failure. One goal in particular is to ensure in-order delivery of messages. The channel setting that we use captures this exactly. A bit more precisely, a channel consists of three algorithms: state generation, sender, and receiver. The state generation algorithm generates initial sender and receiver states.
The sender algorithm takes as input the sender state and a message, and outputs the updated sender state and a ciphertext. The receiver algorithm takes as input the receiver state and a ciphertext, and returns as output the updated receiver state and either a message or the bottom symbol indicating an error. Correctness of a channel requires that if the receiver is given the ciphertexts sent by the sender in order and without modification, that is, if the receiver is in sync with the sender, then the receiver outputs the same sequence of messages that were sent. In terms of security, we want to look at the setting where the adversary is responsible for delivering messages from the sender to the receiver. Obviously, the adversary should not learn any information about what is being sent. But also, we want to prevent the adversary from delivering messages out of order. For example, if an adversary collects ciphertexts c1, c2, and c3 and delivers them out of order, security requires that the channel receiver returns bottom as soon as the first out-of-order ciphertext is received. Therefore, in addition to confidentiality, AE security for channels has a strong additional integrity requirement: as soon as messages are delivered out of order, decryption fails. Again, we shall formalize this using a distinguishing game. In this case, the real world runs the state generation algorithm to get the initial sender and receiver states. The encryption oracle inputs the queried message along with the current sender state to the sender algorithm. The sender returns an updated state and a ciphertext. The oracle outputs the ciphertext as its answer. The decryption oracle, in this case, inputs the queried ciphertext along with the current receiver state to the receiver algorithm. The receiver returns an updated state and a message. The oracle outputs the message as its answer. In the ideal world, the encryption oracle produces random ciphertexts that are only decrypted if delivered in order.
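To illustrate the stateful channel syntax and the abort-forever behavior, here is a toy Python sketch. All names are mine, and the construction (an HMAC tag over a counter and the message, with no encryption of the message itself) is purely illustrative of the interface: the counter plays the role of the implicit nonce, and the receiver permanently goes out of sync on the first failure.

```python
import hmac
import hashlib

class Sender:
    """Sender half of a toy channel: the counter is the implicit nonce."""
    def __init__(self, key: bytes):
        self.key, self.ctr = key, 0

    def send(self, msg: bytes) -> bytes:
        # Tag binds the message to its position in the stream.
        tag = hmac.new(self.key, self.ctr.to_bytes(8, "big") + msg,
                       hashlib.sha256).digest()
        self.ctr += 1
        return msg + tag  # toy: integrity only, no confidentiality

class Receiver:
    """Receiver half: aborts forever on the first verification failure."""
    def __init__(self, key: bytes):
        self.key, self.ctr, self.failed = key, 0, False

    def recv(self, ct: bytes):
        if self.failed:
            return None  # out of sync: reject everything from now on
        msg, tag = ct[:-32], ct[-32:]
        want = hmac.new(self.key, self.ctr.to_bytes(8, "big") + msg,
                        hashlib.sha256).digest()
        if not hmac.compare_digest(tag, want):
            self.failed = True  # first failure: permanently out of sync
            return None
        self.ctr += 1
        return msg
```

Delivering a ciphertext out of order makes the counter-bound tag check fail, after which every subsequent delivery, even a correct one, returns None.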
Concretely, the encryption oracle returns a random ciphertext and enqueues the message-ciphertext pair. The decryption oracle first dequeues a message-ciphertext pair. If the dequeued ciphertext matches the queried ciphertext, then it returns the corresponding message. Otherwise, it declares itself out of sync forever and returns bottom. The advantage of an adversary against AE security for channels is its advantage in distinguishing between the real and the ideal worlds. Our main theorem shows that suitably defined notions of confidentiality and ciphertext integrity, tailored to channels, imply AE security for channels in a memory-tight way. In particular, we prove an upper bound on the advantage against AE security for channels in terms of the advantages against INDR and CTXT security for channels, but this time the reduction is memory-tight. In the literature, all memory-tight reductions use a small amount of memory independent of the underlying adversary. Crucial to our result is that we use a reduction whose memory grows linearly in the memory of the adversary, and this is enough to prove memory-tightness. Let me give some intuition about how the proof of our theorem proceeds. We follow the same pattern as before. We introduce an intermediate world with the encryption oracle of the real world and the decryption oracle of the ideal world. Again, we can upper bound the distinguishing advantage between the real and the intermediate worlds using CTXT security. This is easy, and I'm not going to talk about this reduction. The core, which I shall talk about now, is the other reduction. The technical issue we face when trying to give a memory-tight reduction is that the size of the queue grows with the number of encryption queries the adversary makes. For example, if the adversary makes three encryption queries, we need to store three message-ciphertext pairs in the queue. The size of the queue shrinks only if the adversary makes a decryption query.
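The ideal-world oracles just described can be sketched as follows; this is a hypothetical Python rendering with names of my choosing, and the 16-byte ciphertext expansion is an arbitrary illustrative choice.

```python
import os
from collections import deque

class IdealChannelWorld:
    """Ideal world for channel AE security: random ciphertexts, and
    decryption succeeds only for in-order delivery."""
    def __init__(self):
        self.q = deque()          # queue of (message, ciphertext) pairs
        self.out_of_sync = False

    def enc(self, msg: bytes) -> bytes:
        ct = os.urandom(len(msg) + 16)  # fresh random ciphertext
        self.q.append((msg, ct))        # remember the pair for decryption
        return ct

    def dec(self, ct: bytes):
        if self.out_of_sync or not self.q:
            return None
        msg, expected = self.q.popleft()
        if ct == expected:
            return msg                  # in-order delivery: decrypt
        self.out_of_sync = True         # out of sync forever
        return None
```

The memory problem is visible here: `self.q` grows by one pair per encryption query, so a reduction simulating this oracle naively needs memory linear in the number of queries.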
The key idea to make the reduction memory-tight is that bounding the queue size does not change the behavior of the experiment. What do I exactly mean by that? Suppose, for example, we store at most two plaintext-ciphertext pairs in the queue. Now, if the adversary makes three encryption queries on messages m1, m2, and m3 and receives ciphertexts c1, c2, and c3, only (m1, c1) and (m2, c2) are stored in the queue. If the adversary then makes decryption queries on c1, c2, and c3, it receives m1, m2, and bottom, when it really should have received m3 instead of bottom. This seems like a problem. However, in order for the adversary to cause this problem, it has to remember all three ciphertexts c1, c2, and c3; otherwise, it could not have caused it. In general, if the memory of the adversary is not much larger than the length of the queue, the adversary will not be able to cause this problem. In particular, we can show that the adversary has a very small probability of causing this problem if we set the length of the queue to delta, which is proportional to the memory of the adversary. We show this by reduction to an information-theoretic game which captures the essence of what the adversary needs to do to cause this problem. In this game, l and delta are parameters, and it is played by a two-stage adversary. The first stage of the adversary is given as input an l-bit string r, and it outputs a state sigma of size at most s and an index i. The first i bits of the string r and the outputs of the first stage are given as inputs to the second stage of the adversary. In order to win the game, it needs to guess the next delta bits of r correctly. Setting delta to depend linearly on s, a fairly straightforward compression argument shows that the probability of an adversary winning this game is small. I shall refer you to our paper for more details of the proof and move on to the main application of our result.
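The two-stage structure of the information-theoretic game can be sketched as follows. The function names and the bit-list encoding are my own; the point is just that stage one sees all of r but may pass on only a bounded state sigma, while stage two sees only the first i bits of r plus that state, and must predict the delta bits that immediately follow position i.

```python
import secrets

def play_guessing_game(ell, delta, stage1, stage2):
    """One run of the game: sample an ell-bit string r, let stage one
    choose a bounded state sigma and an index i, then ask stage two --
    which sees only r[:i], sigma, and i -- to guess r[i:i+delta]."""
    r = [secrets.randbelow(2) for _ in range(ell)]
    sigma, i = stage1(r)          # |sigma| is required to be at most s bits
    guess = stage2(r[:i], sigma, i)
    return guess == r[i:i + delta]

# An adversary with state s >= delta bits wins trivially by copying the
# target bits into sigma; with s much smaller than delta, the compression
# argument shows the winning probability must be small.
def stage1_copy(r):
    i = 0
    return r[i:i + 4], i          # sigma = the 4 target bits themselves

def stage2_copy(prefix, sigma, i):
    return sigma
```

With delta set proportional to the adversary's memory s, no strategy can do much better than guessing, which is exactly what bounds the probability of the queue-overflow event above.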
We have shown that AE security of a channel can be reduced to its constituent INDR and CTXT security in a way that preserves memory complexity. This, of course, is only meaningful if we have channels for which we can give provable time-memory trade-offs for their INDR and CTXT security. We prove such time-memory trade-offs for GCM, one of the most widely deployed encryption schemes. CAU, an abstraction of GCM, is an encryption scheme built from an n-bit block cipher E and an almost-XOR-universal hash function H. We prove a memory-sensitive upper bound on the advantage of an adversary against AE security of the channel induced by CAU. In particular, we show that the advantage of an adversary against AE security of this channel is upper bounded in terms of the memory and the number of queries of the adversary, and the advantage of breaking the underlying block cipher as a pseudorandom permutation. The second term here shows that there is a time-memory trade-off for breaking the AE security of this channel. Since CAU is an abstraction of GCM, we have essentially shown a time-memory trade-off for a simplified version of the channel obtained by using GCM in TLS. Now I shall briefly talk about our second contribution: the impossibility result for a memory-tight reduction in the general nonce-based encryption setting. Our result is in line with prior work proving impossibilities for memory-tight reductions. I would like to explicitly note that our result rules out not only memory-tight reductions that use memory independent of the underlying adversary, but also memory-adaptive reductions like the one we gave for channels. These results provide quite a bit of evidence that some restrictions are necessary to prove AE security from INDR and CTXT security in a memory-tight way. I shall now state our theorem for the negative result.
We show that for all INDR- and CTXT-secure nonce-based encryption schemes NE, we can construct an inefficient adversary A* against AE security of NE, making q queries, that has advantage close to one. We show that for all efficient black-box reductions, if the reduction, given access to A*, uses memory in little-o of q, then its advantage against INDR security of NE is negligible. Additionally, for all efficient black-box reductions, the reduction given access to A* can achieve only negligible advantage against CTXT security of NE. The A* that we construct is low-memory: its memory grows only logarithmically in the number of queries, so this rules out even memory-adaptive reductions. There are a few caveats, however: we can prove this only for a restricted class of reductions. The first restriction is that the reduction is faithful; that is, if it answers an encryption query with a ciphertext c, it has made the same query to its own encryption oracle before and received c as the answer. This restriction is natural, and we are not aware of any reductions that violate it. The second restriction is that the reduction does not ask for encryptions with the same nonce if the adversary does not do so. This is very natural for a nonce-based encryption scheme, since the security guarantees hold only if nonces are distinct. The final restriction is that the reduction either does not rewind the adversary or is fully rewinding; that is, it can rewind the adversary at any point but can only restart it from the beginning. This restriction is milder than those in some of the prior impossibility results, where only straight-line reductions were ruled out. Handling more general rewinding strategies seems to require new proof techniques. I shall briefly talk about the basic idea behind the adversary A* that we construct. It has r challenge rounds.
In every round, it asks for the encryptions of u random messages m1 through mu, each l bits long, under distinct nonces, and it receives ciphertexts c1 through cu. It chooses an index j* among 1 through u at random and asks for the decryption of c_{j*} under the same nonce with which it had asked for the encryption of m_{j*}. If the decryption query returns an answer different from m_{j*}, it just aborts. If it did not abort in any of the r rounds, it tries to inefficiently break the scheme, which helps it distinguish the real world from the ideal world with probability close to one. The main intuition here is that a reduction using k times l bits of memory succeeds in each round with probability at most k/u. Of course, making this precise requires a lot of work; I refer you to our paper for the details of the proof. To conclude: in this work, we proved memory-sensitive bounds for AE security of channels, and subsequently time-memory trade-offs for the AE security of our TLS-like channel. We introduced a new technique for proving memory-tightness where the memory of the reduction depends on the memory of the underlying adversary. This technique might be of independent interest. Additionally, we proved the impossibility of a memory-tight reduction in the most general setting, providing some evidence that restrictions might be necessary for giving a memory-tight reduction. There are a few open problems. One is proving memory-sensitive bounds for other practical examples of channels. Another is finding new applications of the memory-adaptive reductions that we introduce in this work. The full version of our paper is available on ePrint. Thank you.