Hi, this is joint work with my advisor, Stefano Tessaro. Very broadly, this work is about a memory lower bound for a security reduction.

As we all know, cryptographic security proofs show that, given a particular computational assumption P, a scheme S is secure. This usually means that we show how to transform any adversary A that breaks the security of S into an adversary B that violates the computational assumption P, by giving an explicit reduction R. In the context of this work, we are interested in studying reductions from a concrete security perspective. Hence, we capture the power of an adversary in terms of its time complexity T and advantage epsilon. For such a reduction to be meaningful, we want it to be as tight as possible: the running time and the advantage of B must be very close to those of A.

But time is not the only important resource. Another important resource is memory, as pointed out by Auerbach et al. at CRYPTO 2017. This means that we can now enhance the picture by also taking into account the memory complexity of the adversary. This leads to an analogous definition of memory tightness, which means that B uses memory very close to that of A. Note that the memory complexity of the adversary B is the memory complexity of A plus the extra local memory complexity of the reduction R. In fact, in all known examples, a memory-tight reduction is given by a reduction whose memory complexity is small and independent of the memory of A. That is the only technique we know in the context of proving memory tightness.

Why should we care about memory tightness? Here is a small example. The graph shows for which memory and time complexities the best known algorithms succeed in breaking the discrete logarithm problem in a 4096-bit prime field. Suppose that we want security against adversaries using time up to 2^160 and memory up to 2^70. Now, if we have a memory-tight reduction to the discrete logarithm problem in a 4096-bit prime field, then we can infer that the scheme is secure. However, if the reduction is not memory tight, then we do not necessarily get any guarantees from the security proof.

Therefore, the central question is whether we can always give a reduction that is memory tight. In fact, in this work, we show that certain reductions provably cannot be made memory tight. Of course, we are not the first ones to consider this question; there has been some prior work on this in the last few years, in the form of impossibility results about generic reductions. For example, one of them shows that there is no memory-tight reduction from the multi-challenge version of UF-CMA security for signatures to the single-challenge version. Our work, on the other hand, proves an impossibility result for a concrete scheme. In particular, we look at the security of hashed ElGamal, which is widely adopted in practice. The scheme was conjectured not to have a memory-tight reduction by Auerbach et al., and this work takes a substantial step towards confirming that conjecture.

We shall now introduce the hashed ElGamal KEM. Consider a group G of prime order p with generator g. The key generation algorithm returns a public key/secret key pair such that the secret key is the discrete log of the public key with respect to the generator g. The encapsulation algorithm takes as input the public key, samples a random u from Z_p, and returns g^u as the ciphertext and the hash of pk^u as the KEM key.
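As an illustrative aside (not part of the talk), here is a minimal Python sketch of the KEM just described, including the decapsulation algorithm covered next. The parameters m, p, and g are hypothetical toy values; a real instantiation would use a large prime-order group such as an elliptic curve.

```python
import hashlib
import secrets

# Toy parameters, for illustration only (NOT secure):
# G is the subgroup of Z_m* of prime order p, with generator g.
m = 23   # modulus
p = 11   # group order (prime), p = (m - 1) / 2
g = 2    # generator of the order-p subgroup

def H(elem: int) -> bytes:
    # The hash function, modeled as a random oracle in the talk.
    return hashlib.sha256(elem.to_bytes(4, "big")).digest()

def keygen():
    sk = secrets.randbelow(p)   # secret key: the discrete log of pk
    pk = pow(g, sk, m)          # public key: g^sk
    return pk, sk

def encaps(pk: int):
    u = secrets.randbelow(p)    # random u from Z_p
    c = pow(g, u, m)            # ciphertext: g^u
    key = H(pow(pk, u, m))      # KEM key: H(pk^u) = H(g^(u*sk))
    return c, key

def decaps(sk: int, c: int) -> bytes:
    return H(pow(c, sk, m))     # H(c^sk) = H(g^(u*sk)), the same key
```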
The decapsulation algorithm takes as input the secret key and the ciphertext and returns the corresponding KEM key. The standard notion of security for KEMs is CCA security. As pointed out by Abdalla et al., this security notion for hashed ElGamal can be represented as a succinct assumption called the Oracle Diffie-Hellman assumption, or ODH. Therefore, we are just going to talk about ODH from now on.

Let me state the assumption. Consider the following game. u and v are sampled uniformly at random from Z_p. The adversary is given g^u and g^v, along with either the hash of g^(uv) or a value sampled at random from the image of the hash function. It has access to the decapsulation oracle Dv, which takes a group element y as input and returns the hash of y^v as output, unless y equals g^u, because otherwise the adversary can always find out whether it received the hash of g^(uv) or not. The ODH assumption is that the probability of the adversary correctly guessing whether it received the hash of g^(uv) is only negligibly larger than random guessing. In this talk, we shall be talking about the assumption in the random oracle model. Here, the hash function H is modeled as a random oracle, and the adversary can issue queries to this random oracle.

Abdalla et al. showed that this assumption is implied by another assumption known as the strong Diffie-Hellman assumption, or SDH. Note that the SDH assumption has also sometimes been referred to as the gap DH assumption in the literature. So let me introduce the SDH assumption. Consider the following game. u and v are sampled uniformly at random from Z_p. The adversary is given g^u and g^v. It has access to the Ov oracle, which takes as input two group elements X and Y and returns one if and only if X^v equals Y. The SDH assumption is that the probability of the adversary computing g^(uv) is negligible.

The classical reduction shows that given an adversary against ODH, we can build an adversary against SDH. However, the reduction is not memory tight. In particular, its memory grows linearly with the number of queries made by the ODH adversary. In fact, it is very important for the rest of the talk to see why this is true for the reduction. In the next few slides, I want to review this classical reduction, because it will help us better understand the main theorem of our work.

The reduction receives g^u and g^v as inputs. However, it also needs to provide a value from the image of the hash function to the ODH adversary. It samples a value K uniformly at random and provides g^u, g^v, and K to the ODH adversary. Remember that there are two types of queries the reduction needs to simulate, Dv and H. The reduction maintains two tables for storing previously answered queries. Suppose the reduction answered new Dv and H queries uniformly at random. There would then be a consistency problem if the adversary made a Dv query on y and an H query on y^v. To avoid this, the reduction uses the Ov oracle. Recall that the Ov oracle on inputs X and Y returns whether X^v equals Y.

First, we shall see how the reduction answers the Dv queries. On receiving a Dv query on y, the reduction first checks whether Dv(y) was previously answered. If it was, it returns the previous answer. Otherwise, for every x_i on which an H query was made, the reduction queries the Ov oracle on (y, x_i), which checks whether y^v equals x_i. In case any of these Ov queries returns one, the reduction returns the corresponding stored answer H(x_i).
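To make the bookkeeping concrete, here is a hypothetical Python sketch of this Dv-query handling; the handling of H queries, described next, is symmetric. The names answer_Dv, Dv_tab, and H_tab are my own; Ov(X, Y) stands for the SDH oracle returning whether X^v equals Y.

```python
import secrets

# Hypothetical sketch of the classical reduction's Dv simulation.
# Dv_tab and H_tab store every previously answered Dv and H query;
# this is exactly where the linear memory growth comes from.
def answer_Dv(y, Dv_tab, H_tab, Ov, keylen=32):
    if y in Dv_tab:                       # previously answered: replay it
        return Dv_tab[y]
    for x_i, h_val in H_tab.items():      # consistency: Dv(y) = H(y^v)
        if Ov(y, x_i):                    # does y^v equal x_i?
            Dv_tab[y] = h_val
            return h_val
    fresh = secrets.token_bytes(keylen)   # otherwise, fresh uniform answer
    Dv_tab[y] = fresh                     # ... which must be remembered
    return fresh
```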
Otherwise, it returns a value sampled uniformly at random and remembers it in its table. The reduction deals with H queries similarly. On receiving an H query on x, it first checks whether H(x) was previously answered. If it was, it returns the previous answer. Otherwise, for every y_i on which a Dv query was made, the reduction queries the Ov oracle on (y_i, x), which checks whether y_i^v equals x. In case any of these Ov queries returns one, the reduction returns the corresponding stored answer Dv(y_i). Otherwise, it returns a value sampled uniformly at random and remembers it. One minor difference here is that for every H query on x, the reduction also checks whether Ov(g^u, x) is one. If so, it returns x as the solution to the SDH problem, since then x = (g^u)^v = g^(uv).

Note that it is crucial here that the reduction remembers all the H and Dv queries made by the ODH adversary, so the memory of the reduction grows linearly with the number of queries. We are going to show that this usage of memory is inherent in a black-box reduction. We construct an inefficient adversary A* with large advantage against ODH. We show that for all efficient black-box reductions, if the reduction, given access to A*, achieves non-negligible advantage against SDH, then the memory of the reduction must grow linearly with the number of queries made by A*. This in particular shows that an efficient black-box reduction needs memory. Therefore, this excludes the common approach of proving memory tightness that we see in the literature, that is, giving a reduction whose memory is low and independent of the underlying adversary.

Okay, but now you might ask whether this result works for any possible group G. Well, that cannot be the case, because if the discrete logarithm problem is easy in the group G, then the reduction can just solve discrete log right away, and consequently SDH, without using A* at all. So here we will consider only reductions that do not exploit any specific structure of the group and work on any group. They are black box with respect to the group, which means we consider only reductions in the generic group model, where the discrete logarithm problem is hard. Many of the reductions we are aware of are exactly of this form; that is, they do not leverage any specific property of the group.

This is the theorem that we would like to prove, but there are some caveats: we need to restrict the reduction a little bit. The first restriction is that it forwards g^v. Forwarding assumptions are commonly used by other black-box impossibility results, and, more importantly, forwarding is what reductions generally do. The second one is that the reduction does not rewind A*. This no-rewinding assumption is required due to the limitations of our proof techniques. Proving our result for a reduction that rewinds A* is an important open question; we have a conjecture for this, which we shall come back to at the end of the talk. So here is the statement of our main theorem.

Next, we shall give some intuition about how we go about constructing A*. The design philosophy behind A* is that it shall force the reduction R to accomplish some memory-intensive task. A* then uses brute force to break ODH only if R succeeds in this task. So if R does not succeed, A* is efficient, and hence the combination of the reduction and A* is efficient, so it cannot have non-negligible advantage against SDH. Therefore, A* is useful to R only when R succeeds in its memory-intensive task. Next, we'll see the explicit construction of A*, sketched in code below.
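As a preview of that construction, here is a hypothetical Python sketch of A*'s query pattern. The helper names gexp and gpow and the stand-in for the brute-force ODH attack are my own abstractions, not from the talk.

```python
import secrets

# Hypothetical sketch of A*. gexp(x) stands for g^x, gpow(h, e) for h^e;
# Dv and H are the oracles answered by the reduction; p is the group
# order and g_v is the element g^v received from the reduction.
def A_star(p, k, gexp, gpow, g_v, Dv, H):
    rng = secrets.SystemRandom()
    idx = [rng.randrange(p) for _ in range(k)]     # i_1, ..., i_k
    dv_ans = [Dv(gexp(i)) for i in idx]            # Dv queries on g^(i_j)
    pi = list(range(k))
    rng.shuffle(pi)                                # random permutation pi
    # H queries on (g^v)^(i_pi(j)), which A* can compute itself
    # because it knows g^v and the exponents i_j:
    h_ans = [H(gpow(g_v, idx[pi[j]])) for j in range(k)]
    # Since Dv(y) = H(y^v), the pi(j)-th Dv answer must equal the
    # j-th H answer; this is the consistency check.
    if all(dv_ans[pi[j]] == h_ans[j] for j in range(k)):
        return 1    # stand-in for breaking ODH by brute force (elided)
    return rng.randrange(2)                        # otherwise, random bit
```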
Adversary A* samples i_1 through i_k uniformly at random from Z_p and makes Dv queries on g^(i_1) through g^(i_k). It then samples a random permutation pi on {1, ..., k} and makes H queries on (g^v)^(i_pi(1)) through (g^v)^(i_pi(k)), which it can compute since it knows g^v and the exponents. Recall that by the definition of the Dv oracle, Dv(y) equals H(y^v). So the pi(i)-th Dv query should give the same answer as the i-th H query. A* performs this consistency check. If the check succeeds, it breaks ODH by brute force; otherwise, it outputs a random bit. So the memory-intensive task here for the reduction is to answer the H queries consistently with respect to the Dv queries.

Our basic proof strategy involves looking at the reduction in its two stages. The first stage R1 answers the Dv queries and passes some state to the second stage R2, which then answers the H queries. Our strategy is to prove a lower bound on the state passed from R1 to R2, assuming R1 and R2 answer the Dv and H queries consistently. Note that R1 and R2 have access to the Ov oracle. Additionally, in the generic group model, R1, R2, and the adversary have access to the generic group oracle. We shall describe the generic group model next.

In the generic group model, all group elements are represented by labels. One way to think about the generic group model is to replace g^x by sigma(x), where sigma is a random injective map from Z_p to a set of labels. We are also given access to the generic group oracle, which allows efficient addition in the exponent. The Ov oracle in this model takes as input the labels corresponding to x and y and returns one if and only if y equals v*x. A small sketch of these interfaces follows below.

To understand the structure of our proof, it is really important to understand the relation between the queries made by the reduction in the first stage and in the second stage. In particular, we want to understand which labels the reduction remembers from the first stage and makes queries on in the second stage. We shall call these repeat queries. The flow of our proof fundamentally relies on them. We consider two cases. Here is the first one. Consider the labels that R1 got as queries from A*; denote them A_1 through A_k. Any query made by R2 on A_1 through A_k, to either the Ov oracle or the generic group oracle, is a repeat query. Here is the second case. Consider the labels that R1 received as answers from the generic group oracle; for example, C is such a label here. Any query made by R2 on such labels, to either the Ov oracle or the generic group oracle, is a repeat query.

Given that R1 and R2 answer consistently, consider two cases. The first is that there are many repeat queries. It is intuitive in this case that we can show a lower bound on the state passed from R1 to R2. The proof follows from a compression argument; however, the presence of the generic group oracle results in multiple subtleties in the proof. The other case is when there are few repeat queries. In this case, we show that we can construct an adversary against a game called the permutation game that always wins. Further, we can show that the advantage of the corresponding adversary against the permutation game is negligible, thus showing that the probability of R1 and R2 answering consistently while making few repeat queries is negligible.

To answer consistently, R2 needs to figure out pi. How can R2 get information about pi? Well, it can use the Ov oracle. But it is actually not trivial how R2 uses the Ov oracle to find out this permutation pi.
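Before we look at how R2 can learn pi, here is a hypothetical Python sketch of the generic-group interfaces mentioned above. The class name and the lazy-sampling style are my own modeling choices, and the sketch assumes queries are only made on valid labels.

```python
import secrets

# Hypothetical model of the generic group oracles described above.
class GenericGroup:
    def __init__(self, p, v):
        self.p, self.v = p, v
        self.to_label, self.to_exp = {}, {}   # sigma and its inverse

    def sigma(self, x):
        # Lazily sampled random injective map from Z_p to labels.
        x %= self.p
        if x not in self.to_label:
            lab = secrets.token_bytes(16)     # fresh random label
            self.to_label[x], self.to_exp[lab] = lab, x
        return self.to_label[x]

    def add(self, lab_a, lab_b):
        # Generic group oracle: addition in the exponent.
        a, b = self.to_exp[lab_a], self.to_exp[lab_b]
        return self.sigma(a + b)

    def Ov(self, lab_x, lab_y):
        # Ov on the labels of x and y: returns one iff y = v*x.
        x, y = self.to_exp[lab_x], self.to_exp[lab_y]
        return y == (self.v * x) % self.p
```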
Recall that R1 receives Dv queries on labels A_1 through A_k, and R2 receives H queries on labels B_1 through B_k, such that A_pi(i)^v equals B_i. Now R2 can do the simple thing: issue an Ov query on (A_i, B_j), and it will learn whether pi maps j to i. However, R2 can make much more contrived queries. It can pack several A_i's and B_j's together, raising them to different powers, and make Ov queries on the resulting combinations. In the example shown here, R2 makes a single Ov query and learns whether x_pi(i) equals y_i for all i in 1 through k.

The permutation game captures this exact setting combinatorially. In this game, a random permutation pi is sampled on {1, ..., k}. The adversary has access to an oracle O that takes as input two vectors of dimension k and returns one if permuting the first vector using pi yields the second vector. The advantage of an adversary against the permutation game is the probability that it guesses pi correctly. If (x_1, y_1) through (x_u, y_u) are the queries of an adversary A that return one, and the rank of x_1 through x_u is upper bounded by k/80, then using a compression argument we can show that the advantage of the adversary in winning the permutation game is negligible. If R1 and R2 make few repeat queries, this implies that we can construct an adversary of this form that wins whenever R1 and R2 answer consistently. This is a technical step of the proof, for which I refer you to the paper for details. A small sketch of the permutation game follows at the end.

To conclude, in this work we show an impossibility result for a scheme with a lot of algebraic structure. Our proof is very much tailored to handle the Ov oracle. This result can be bypassed in certain ways. For example, we can show that in the algebraic group model, where the adversary sends a representation of the group elements with every query, there exists a memory-tight reduction. Also, there is complementary work by Bhattacharyya that shows a memory-tight reduction for a less efficient variant of hashed ElGamal, the Cramer-Shoup variant, or if the underlying group has a pairing. This is a good example showing that our result can be bypassed for specific groups, that is, groups with pairings. But we would also like to note that typical instantiations of the scheme are on prime fields or elliptic curves for which no pairings exist.

There are many open problems. One would be to remove the restriction that the reduction does not rewind the adversary; we conjecture a lower bound of k log k for that case. Our result does not exclude a memory-adaptive reduction, that is, one that changes its memory depending on the memory of A*. To exclude such a reduction, we would need to show our result using a low-memory A*. In the paper, we suggest a low-memory A*, but analyzing the reduction for such an A* seems to require different techniques. Also, an interesting research direction would be coming up with memory lower bounds for other concrete schemes, without assumptions like the generic group model.
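Finally, for reference, here is a hypothetical Python sketch of the permutation game described above, together with the simple probing strategy from the talk, which needs about k^2 oracle queries. The class and function names are mine.

```python
import secrets

# Hypothetical sketch of the permutation game.
class PermutationGame:
    def __init__(self, k):
        self.k = k
        self.pi = list(range(k))
        secrets.SystemRandom().shuffle(self.pi)   # secret random pi

    def O(self, x, y):
        # Returns True iff permuting x by pi yields y, i.e.
        # x[pi(i)] == y[i] for all i.
        return all(x[self.pi[i]] == y[i] for i in range(self.k))

    def guess(self, pi_guess):
        # The adversary's advantage is the probability this is True.
        return pi_guess == self.pi

# The "simple thing" from the talk: probe with unit vectors, since
# O(e_i, e_j) is True iff pi(j) = i. This uses ~k^2 queries.
def naive_adversary(game, k):
    def e(i):
        return [1 if t == i else 0 for t in range(k)]
    return [next(i for i in range(k) if game.O(e(i), e(j)))
            for j in range(k)]

# Example usage:
# game = PermutationGame(8)
# assert game.guess(naive_adversary(game, 8))
```

Thank you.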