Hello everyone, welcome to the talk. My name is Rishiraj Bhattacharyya, and I am going to talk about memory-tight reductions for some key encapsulation mechanisms.

I will start with the notion of a reduction. A reduction concerns two families P and Q. You can take, for example, P to be a family of cyclic groups, whereas Q can be thought of as a family of public-key encryption schemes. Each family comes with its own security game, say the DDH game or the IND-CPA game in our examples, which should be hard to win. The reduction comes paired with a construction C: C takes an element F from P and produces C^F in Q. The reduction R ensures that if there exists any attacker A winning against C^F in the game G_Q, then the reduction, with the help of the algorithm A, can beat the function F in the game G_P. As the algorithms are probabilistic, we measure the efficiency of a reduction in terms of success probability. We say a reduction is tight if, to achieve the same success probability as A, the reduction needs to spend approximately the same time.

Interestingly, this traditional definition of tightness does not take memory into consideration. However, the memory consumption of the reduction is also a very important parameter. Consider a primitive P against which the best known attack takes 2^λ time, where λ is the security parameter, when only constant space is available; suppose, however, that when more memory is available, say 2^(λ/8), the time requirement of the adversary drops to 2^(λ/8) as well. Now consider a function F from P and the construction C^F, and suppose the adversary A breaks C^F in the game G_Q using constant space and 2^(λ/4) time. A tight reduction would imply that R wins the game G_P against F in time 2^(λ/4) as well. But if the reduction R uses more than 2^(λ/8) space, the reduction is meaningless: with that much memory, the known attack already breaks F in time 2^(λ/8), faster than the reduction. On the other hand, if the reduction really uses the same time and the same small memory as A, then the reduction breaks the security conjecture of P, that is, it beats the record of the best known algorithm for P. The issue is not imaginary: many hard problems are indeed very memory sensitive.

To remedy the situation, Auerbach, Cash, Fersch, and Kiltz (ACFK, Crypto 2017) introduced the notion of memory tightness. In this case, the reduction carries a memory parameter as well. We say a reduction is memory-tight if, in addition to the same-success-probability and same-time-complexity criteria, the reduction uses approximately the same memory as the attacker.

The natural question that arises is whether memory-tight reductions exist. In the same paper, the authors showed that a memory-tight reduction exists for the existential unforgeability of RSA Full Domain Hash. The technique formalized the idea of simulating the random oracle by a pseudorandom function, where the key of the pseudorandom function is sampled and saved by the reduction; in addition, the reduction rewinds the adversary once. Apart from this, all the known results about memory-tight reductions are either lower bounds or impossibility results.
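To make the PRF trick concrete, here is a minimal sketch of the two ways of simulating a random oracle. HMAC-SHA256 merely stands in for the PRF, and the class names are illustrative, not taken from the ACFK paper.

```python
# A standard reduction answers random oracle queries lazily from a
# table, so its memory grows with the number of queries. The ACFK
# observation: answering with a PRF under a single stored key is
# indistinguishable to the adversary (if the PRF is secure) and
# needs only constant memory. HMAC-SHA256 stands in for the PRF.
import hashlib
import hmac
import os

class TableRO:
    """Lazy-sampling simulation: memory grows with each fresh query."""
    def __init__(self) -> None:
        self.table: dict[bytes, bytes] = {}

    def query(self, x: bytes) -> bytes:
        if x not in self.table:
            self.table[x] = os.urandom(32)
        return self.table[x]

class PRFRO:
    """PRF-based simulation: the reduction stores only the PRF key."""
    def __init__(self) -> None:
        self.key = os.urandom(32)

    def query(self, x: bytes) -> bytes:
        return hmac.new(self.key, x, hashlib.sha256).digest()
```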
The starting point of this work is the open problem raised by ACFK, namely finding a memory-tight reduction for the IND-CCA security of the Hashed ElGamal KEM.

Let us informally define the notion of a KEM. There are three algorithms: key generation, encapsulation, and decapsulation. The key generation algorithm takes the security parameter and produces a public key/secret key pair. The encapsulation algorithm runs on the public key and produces a ciphertext/key pair (C, K). The decapsulation algorithm takes a candidate ciphertext and produces a candidate key. The correctness requirement says that if the ciphertext C given to the decapsulation algorithm was indeed produced by the encapsulation algorithm on a public key PK, then the decapsulation algorithm, running on the corresponding secret key SK, must produce the matching key. The security requirement says that given the ciphertext C, the key K is indistinguishable from a random element of the key space.

In this paper, we give memory-tight reductions for the IND-CCA security of Hashed ElGamal. Our reductions cover two variants, namely the Cramer-Shoup variant and the ECIES variant. For the Cramer-Shoup variant, our reduction works over all gap groups; for the ECIES variant, we require the group to have a pairing. We then extend this result to the Fujisaki-Okamoto transformations. Specifically, we consider the modular analysis from the work of Hofheinz, Hövelmanns, and Kiltz (HHK, TCC 2017), and we show memory-tight reductions for the two variants with implicit rejection, namely FO and FO_m. In addition, we could show a memory-tight reduction for a variant with explicit rejection as well, the variant HHK named QFO_m^⊥.

Now, let us recall the Hashed ElGamal KEM. There are two versions, namely the ECIES version and the Cramer-Shoup version, and there is only one, but very crucial, difference between them. The setup is the following. G is a cyclic group of order q, where q is a prime, and g is a generator of the group. The public key is g^x, a random element of G, whereas the secret key is x; in this paper, we include the generator g as part of the public key as well. The encapsulation algorithm takes g and g^x, samples a random y from Z_q^*, and computes g^y and g^(xy). It then derives the key K by hashing g^y and g^(xy), and outputs g^y as the ciphertext and K as the key. The decapsulation algorithm takes a ciphertext Y together with the secret key x; it computes Y^x and derives the key as H(Y, Y^x). In the ECIES version, everything except the key derivation is the same. The key derivation in ECIES, however, hashes only Y^x, which is g^(xy), instead of hashing both g^y and g^(xy). As we shall see, this seemingly small difference makes a huge impact.
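As a concrete reference point, here is a toy sketch of the two variants. The group parameters are deliberately tiny and insecure; any real instantiation would use a large (elliptic-curve) group.

```python
# Toy sketch of the two Hashed ElGamal variants. The group is a small
# order-11 subgroup of Z_23^*, purely for illustration.
import hashlib
import secrets

p, q, g = 23, 11, 2          # toy parameters: g generates an order-q subgroup

def H(*elems) -> bytes:      # key-derivation hash
    data = b"|".join(str(e).encode() for e in elems)
    return hashlib.sha256(data).digest()

def keygen():
    x = secrets.randbelow(q - 1) + 1         # x in Z_q^*
    return pow(g, x, p), x                   # pk = g^x, sk = x

def encaps(pk, cramer_shoup=True):
    y = secrets.randbelow(q - 1) + 1
    Y, Z = pow(g, y, p), pow(pk, y, p)       # Y = g^y, Z = g^{xy}
    K = H(Y, Z) if cramer_shoup else H(Z)    # the one crucial difference
    return Y, K                              # ciphertext, key

def decaps(sk, Y, cramer_shoup=True):
    Z = pow(Y, sk, p)                        # Z = Y^x = g^{xy}
    return H(Y, Z) if cramer_shoup else H(Z)

pk, sk = keygen()
C, K = encaps(pk)
assert decaps(sk, C) == K
```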
So first, we are going to discuss the Cramer-Shoup variant. Recall that in this variant the key derivation function hashes both g^y and g^(xy). In the usual reduction, the random oracle transcript is saved in a table; this cannot be done in the memory-tight case. The challenge is to simulate the random oracle in a memory-efficient way, maintaining consistency between the outputs of the hash oracle and the decapsulation oracle while keeping only a small state.

The question is: can we apply the PRF trick? The first attempt would be to replace the hash function H by the PRF everywhere it is used, namely to replace H(g^y, g^(xy)) by PRF_k(g^y, g^(xy)). However, this is not good: the reduction cannot simulate the decapsulation queries, as the reduction cannot find g^(xy) from g^y. How about dropping g^(xy) from the hash evaluation? This certainly helps in the decapsulation simulation. However, the adversary can then find collisions through its hash queries (any two queries sharing the same first component would collide), so the hash oracle simulation is not right. We note that things would have worked if the adversary were restricted to making only well-formed hash queries, namely queries of the form (g^y, g^(xy)). Unfortunately, we cannot restrict the adversary to such queries, so we do the next best thing. Recall that in the gap Diffie-Hellman game, the reduction is given a DDH oracle. Using that oracle, the reduction can check the well-formedness of the hash queries. The idea is to puncture the PRF on the well-formed points. That is, if the DDH oracle returns 1 on input (g^x, Y, Z), where (Y, Z) is the input of the hash query, then we puncture the PRF on that point: during the hash oracle simulation, g^(xy) is dropped and replaced by a fixed element of the group, for example g. The rest of the points are untouched. In the decapsulation simulation, the reduction can easily compute the PRF on the punctured points, and it needs to compute the PRF only on the punctured points. So this solves our problem: we can simulate the random oracle for Hashed ElGamal, Cramer-Shoup version, in a memory-tight way.
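Here is a minimal sketch of the punctured-PRF simulation just described. In the real reduction the DDH oracle is provided by the gap Diffie-Hellman game; here it is mocked with knowledge of the secret exponent purely so that the snippet runs, and the function names are illustrative.

```python
# Punctured-PRF simulation for the Cramer-Shoup variant (sketch).
# HMAC-SHA256 plays the PRF; the toy group is the same as before.
import hashlib
import hmac
import os

p, q, g = 23, 11, 2
x = 7                               # secret exponent; the reduction does NOT know x
X = pow(g, x, p)                    # g^x, part of the reduction's GDH challenge

def ddh_oracle(gx, Y, Z) -> bool:   # mock of the game's oracle: "is Z = Y^x?"
    return pow(Y, x, p) == Z % p

k = os.urandom(32)                  # the only state the reduction keeps

def prf(*elems) -> bytes:
    data = b"|".join(str(e).encode() for e in elems)
    return hmac.new(k, data, hashlib.sha256).digest()

def sim_hash(Y, Z) -> bytes:
    # Puncture on well-formed points: if Z = Y^x, drop Z and replace
    # it by the fixed group element g; leave all other points alone.
    if ddh_oracle(X, Y, Z):
        return prf(Y, g)
    return prf(Y, Z)

def sim_decaps(Y) -> bytes:
    # The reduction cannot compute Y^x, but on punctured points it
    # does not have to: the key is just the PRF at (Y, g).
    return prf(Y, g)

# Consistency check: a well-formed hash query and the decapsulation
# of the same ciphertext must agree.
y = 5
Y, Z = pow(g, y, p), pow(X, y, p)   # Z = g^{xy} = Y^x
assert sim_hash(Y, Z) == sim_decaps(Y)
```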
This idea extends directly to the transformation U of the modular Fujisaki-Okamoto analysis. Here, one constructs an IND-CCA-secure KEM from an encryption scheme that is one-way against plaintext-checking attacks (OW-PCA). The same idea works: instead of the DDH oracle, we have the plaintext-checking oracle. Thus, for the hash queries, we can puncture the PRF on the points (m', c') where c' is a valid encryption of m'. Similar to before, we replace the message m' by a fixed message m_0 on the punctured points, and the decapsulation simulation works perfectly. Note, however, that the whole thing works because the adversary cannot, through hash queries, make the reduction evaluate the PRF on the point (m_0, c) that is used to answer an invalid ciphertext c.

The generalization to U^⊥ is simpler. Here, the reduction also has a ciphertext verification oracle. The hash simulation is the same as before; for the decapsulation simulation, we can use the ciphertext verification oracle and return ⊥ in case the input is an invalid ciphertext. For U_m, the underlying encryption scheme is deterministic. Thus, there is no need to puncture the PRF: we can simulate the hash of m as the PRF evaluated on the encryption of m, and the decapsulation simulation simply runs the PRF. Note, as before, it is important that the adversary cannot evaluate the PRF on an input of its choice; otherwise, the whole thing would not work. The same idea works for U_m^⊥ as well, where the ciphertext verification oracle (CVO) is also available.

Finally, we come to the ECIES version of Hashed ElGamal. Here, unfortunately, none of the previous ideas works. The reduction simply cannot compute g^(xy) from g^y, so no variant of the map-then-PRF idea used so far applies. The challenge is to simulate, or rather produce, H(g^(xy)) in the decapsulation oracle simulation knowing only g^y. However, we observe that we just need to produce the same result: the reduction need not follow in the decapsulation oracle simulation the same procedure it followed during the hash oracle simulation. For this, a pairing fits the bill perfectly. We simulate H(g^(xy)) by evaluating the PRF on e(g^(xy), g), and during the decapsulation simulation the reduction retrieves the same value by evaluating e on (g^x, g^y), since e(g^x, g^y) = e(g^(xy), g). This way, we make the ECIES reduction memory-tight when the group admits a pairing.

So finally, we come to the big picture, the full Fujisaki-Okamoto transformations. We note that in the modular analysis of the FO transformations there is a pre-processing transformation as well, namely T, which fixes the random coins used in the encryption scheme by deriving them from the message m via a random oracle. The IND-CPA to OW-PCA reduction translates perfectly via the PRF simulation. This gives memory-tight reductions for the two variants with implicit rejection, namely FO and FO_m. However, we saw that the transformations U^⊥ and U_m^⊥ require a verification oracle. In the paper, we introduce another transformation, called V, for which we could show a memory-tight reduction from OW-PCA to OW-PCVA security.

Fine, so to conclude: we introduce the map-then-PRF approach for simulating random oracles in the memory-tight setting. This technique is quite simple, and it seems natural for key encapsulation mechanism proofs. Interestingly, as we show in the paper, it can also handle correctness errors. Many open problems remain. The foremost, in my opinion, is the case of ECIES over groups where no pairing is available. In a recent result, there has been an interesting approach by Ghoshal and Tessaro, who showed an impossibility result for straight-line reductions in the generic group model. However, the general case, where the reduction can rewind the adversary or the memory of the reduction can depend on the memory of the adversary, is still open. Similar open problems exist for the explicit-rejection variants FO^⊥ and FO_m^⊥. Thank you.
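To make the pairing trick for the ECIES variant concrete, here is a minimal sketch. Real pairings live on pairing-friendly elliptic curves; the exponent-based "pairing" below is a toy stand-in so that the snippet is self-contained, and the names are illustrative.

```python
# Pairing trick for ECIES (sketch): H(Z) is simulated as PRF_k(e(Z, g)),
# and decapsulation recomputes the same PRF input as e(g^x, g^y).
# Toy model: group elements are stored as their discrete logs mod q,
# and the "pairing" is e(g^a, g^b) = e(g, g)^{ab}. Not a real pairing.
import hashlib
import hmac
import os

q = 11                               # toy group order

class GElem:                         # represents g^a; stores the exponent a
    def __init__(self, a: int) -> None:
        self.a = a % q

def e(P: GElem, Q: GElem) -> int:    # toy pairing into Z_q (exponent of e(g,g))
    return (P.a * Q.a) % q

k = os.urandom(32)                   # the reduction's only state

def prf(t: int) -> bytes:
    return hmac.new(k, str(t).encode(), hashlib.sha256).digest()

def sim_hash(Z: GElem) -> bytes:
    # Hash oracle: simulate H(Z) as PRF_k(e(Z, g)).
    return prf(e(Z, GElem(1)))

def sim_decaps(X: GElem, Y: GElem) -> bytes:
    # Decapsulation: the reduction never sees g^{xy}, but
    # e(g^x, g^y) = e(g^{xy}, g), so the PRF input is the same.
    return prf(e(X, Y))

x, y = 7, 5
X, Y, Z = GElem(x), GElem(y), GElem(x * y)   # g^x, g^y, g^{xy}
assert sim_hash(Z) == sim_decaps(X, Y)
```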