Okay, so now that we know there's a computer scientist, there's a bit of reason to be happy, and we're moving to "Hiding in Plain Sight: Memory-Tight Proofs via Randomness Programming." Ashrujit will give the talk.

Thanks. This is joint work with Riddhi Ghosal, Joseph Jaeger, and Stefano Tessaro.

For a security reduction to be meaningful, we want it to be time- and advantage-tight, meaning that we want the guarantees for Pi and Sigma to be as closely related as possible. In the context of memory-aware reductions, we additionally take into account an adversary's memory while giving the security guarantee. This then allows us to analogously define the notion of memory-tightness.

But first we must ask ourselves: why do we even care about memory-tightness? Let's take an example. Consider the hard problem Pi to be the discrete logarithm problem in a 4096-bit prime field, and consider the two following scenarios: one where a scheme Sigma has a memory-tight reduction to Pi, and another where a scheme has a reduction to Pi, but the reduction is not memory-tight. Say we need security against adversaries running in time 2^160 and using memory 2^70. In the former case, the guarantees we need for Pi are plausible. In the latter case, on the other hand, the guarantees that we need for Pi are known to be false due to existing attacks. Therefore the latter reduction gives us no guarantees at all.

For this reason, memory-tightness has been studied quite a bit over the last several years. There have been impossibility results, as well as techniques given to make reductions memory-tight. But the landscape of results is a bit strange, because there have been examples of generic impossibility results being given and then later bypassed by considering specific schemes or settings. Even impossibility results tailored to specific schemes have been shown to be bypassed by just slightly tweaking the schemes. Further, in this work we show that the ability to give
memory-tight reductions actually depends a lot on the definitional choices we make. For this reason, we were motivated to increase the toolkit for making reductions memory-tight, and here we introduce a new class of techniques that will make several different reductions memory-tight.

When we prove that the hardness of some problem Pi implies that a scheme is secure, what we are really doing is showing that, given an adversary A that breaks the security of the scheme Sigma, we can transform it into an algorithm R^A which can solve the problem Pi. If the memory of R^A is close to the memory of A, then the reduction R is memory-tight. The main task that R needs to accomplish is to simulate the challenger for the security game of Sigma to A in a way that is indistinguishable to it. To do this, R often needs to store state, and for the reduction R to be memory-tight, this state needs to be small.

The starting point of our work is the following key observation: for certain reductions R, if R answers some query with some value a, then it needs to store some state sigma_a that it requires only if the adversary replies with a at some point in the future. For these kinds of reductions, we need to figure out a way to avoid storing sigma_a. Our main idea here is hiding in plain sight: we end up showing that we can sometimes hide sigma_a within a itself and then recover it later. Of course, this is not always possible, but sometimes a has enough redundancy that this can be done.

We give three different techniques for doing this: one where the state sigma_a is a bit and we can recover it very efficiently; a similar technique where the recovery is not efficient and adds to the running time of the reduction; and a more powerful technique which can efficiently recover a sigma_a that is more than a bit, but still of bounded length.

I'll start with an example of the first technique, efficient tagging. Let me first
tell you the story of digital signatures and memory-tightness. Unforgeability of digital signatures can be defined in two ways: one where the adversary can make only one forgery attempt, which we refer to as UF-CMA, and another where the adversary can make multiple forgery attempts, which we refer to as mUF-CMA. In the memory-unbounded setting, these two notions are equivalent. However, in the memory-restricted setting, Auerbach et al. showed that the reduction from UF-CMA to mUF-CMA cannot be both memory- and advantage-tight. Let's see why.

This reduction first forwards the verification key. Now, for every signing query it receives, it uses its own signing oracle to answer the query. When it receives a forgery query, it first has to check whether the signature sigma* is valid for the message m*, which it can do because it has the verification key. But additionally, it needs to check whether this message is fresh, meaning whether or not it was queried to the signing oracle. To do this check, one option is to simply remember all the messages on which the adversary made signing queries; this will of course make the reduction non-memory-tight. Another option is to guess whether it is fresh, and then the reduction becomes non-advantage-tight. The second option is not very important in the context of this talk. We'll show how to get around this problem using our technique of efficient tagging in more detail.
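To make the memory cost of the first option concrete, here is a minimal Python sketch (class and function names are my own, purely illustrative) of the naive bookkeeping: the reduction keeps a set of every signed message, so its state grows linearly with the number of signing queries q.

```python
# Hypothetical sketch of the naive mUF-CMA-to-UF-CMA reduction's
# freshness check. The reduction remembers every signed message, so its
# memory is Theta(q) -- exactly what makes it non-memory-tight.

class NaiveReduction:
    def __init__(self, sign_oracle):
        self.sign_oracle = sign_oracle  # the reduction's own UF-CMA signing oracle
        self.signed = set()             # grows with every query: Theta(q) memory

    def sign_query(self, m):
        self.signed.add(m)              # remember m to test freshness later
        return self.sign_oracle(m)

    def forgery_query(self, m_star, sigma_star, verify):
        # Forward the forgery only if the signature verifies AND the
        # message is fresh (was never queried to the signing oracle).
        if verify(m_star, sigma_star) and m_star not in self.signed:
            return (m_star, sigma_star)
        return None
```

The whole point of efficient tagging, coming next, is to get rid of the `self.signed` set.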
What we'll show is the following: for any digital signature scheme DS which is UF-CMA secure, we will give a generic transformation that converts it into another digital signature scheme RDS, for which mUF-CMA security has a memory- and advantage-tight reduction using efficient tagging. The signing algorithm of RDS just samples some randomness, signs the message concatenated with that randomness, and includes the randomness as part of the signature. This is a generalization of the probabilistic full-domain hash. We show that UF-CMA security of DS implies mUF-CMA security of RDS in a memory- and advantage-tight way. In concurrent and complementary work, Diemert et al. show that for a certain class of digital signature schemes DS, an enhanced form of strong UF-CMA security implies strong mUF-CMA security of RDS in a memory- and advantage-tight way.

Our main idea will be to use this randomness to hide a kind of tag that will later help us identify whether a query is fresh or not. In more detail, when the reduction receives a signing query, it will choose the randomness
r_i in a way that hides the information of whether or not the message is fresh. Later, when the reduction receives a forgery query, it will use the hidden info in the randomness to determine whether or not to output the forgery.

The way we implement this is: when the adversary queries a message m_i to be signed, the reduction computes the randomness by evaluating an injective tweakable random function F on the message m_i and the counter i. Later, during a forgery query, it checks whether the inverse of the forged message and forged randomness lies in 1 through q, where q is the total number of signing queries.

To see why this works, suppose the adversary forges a message m* with signature (sigma*, r*), and suppose this was indeed a valid forgery, i.e., (sigma*, r*) is a valid signature for m*. Then, if (m*, r*) had been queried by the reduction to its own signing oracle, it's easy to check that the inverse i will indeed lie in 1 through q, and hence the reduction will not output the forgery, like we wanted. Conversely, if (m*, r*) had not been queried by the reduction to its own signing oracle, one can show that with high probability this inverse will not lie in 1 through q, and therefore the reduction will output the forgery, again like we wanted.

Here, the additional memory required by the reduction is the memory needed to implement this function F, which is actually a large random object. We use the standard trick of replacing a large random object with a pseudorandom one, and the pseudorandom object can be implemented using little memory.
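The tagging mechanism can be sketched in Python. This is a minimal illustration under my own assumptions, not the paper's construction: a short SHA-256-based Feistel network stands in for the tweakable invertible function F (tweaked by the message), and the counter i is recovered by inverting F on the forged (message, randomness) pair.

```python
# Sketch of efficient tagging: r_i = F(key, m_i, i) is used as signing
# randomness; on a forgery (m*, r*), the reduction recovers
# i = F_inv(key, m*, r*) and withholds the forgery iff i is in [1, q].
# F here is a toy 4-round Feistel permutation on 64-bit values.

import hashlib

def _round(key: bytes, tweak: bytes, half: int, rnd: int) -> int:
    # Round function: 32-bit output derived from key, tweak, half, round index.
    h = hashlib.sha256(key + tweak + rnd.to_bytes(1, "big")
                       + half.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:4], "big")

def F(key: bytes, tweak: bytes, x: int, rounds: int = 4) -> int:
    # Tweakable permutation on 64-bit values (tweak = the message).
    left, right = x >> 32, x & 0xFFFFFFFF
    for rnd in range(rounds):
        left, right = right, left ^ _round(key, tweak, right, rnd)
    return (left << 32) | right

def F_inv(key: bytes, tweak: bytes, y: int, rounds: int = 4) -> int:
    # Run the Feistel rounds backwards to recover the input (the counter).
    left, right = y >> 32, y & 0xFFFFFFFF
    for rnd in reversed(range(rounds)):
        left, right = right ^ _round(key, tweak, left, rnd), left
    return (left << 32) | right
```

The reduction's state is just the short key: to sign the i-th query m_i it uses randomness `F(key, m_i, i)`, and on a forgery it checks whether `F_inv(key, m_star, r_star)` lands in `[1, q]`, with no per-query storage.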
For instance, the example we had in the previous slide, which would require a tweakable injective PRF, can be instantiated using a block cipher.

So we saw a technique where the recovery was efficient, namely it just required inverting this function F. Now we'll see a technique where the recovery is not efficient and leads to increased running time for the reduction. Let me first recall the common left-or-right formalization of CCA security for public-key encryption. Here the adversary has to distinguish between a left world and a right world. It chooses two messages m0 and m1, and receives an encryption of m0 in the left world and an encryption of m1 in the right. In both worlds it has access to a decryption oracle, which returns the bottom symbol if queried on the ciphertext that was returned by the encryption oracle. Of course, we can define a multi version of this definition, where the adversary can make q encryption queries instead of one. A very simple crypto-101 result shows that one-CCA implies multi-CCA using a hybrid argument. However, this reduction is also not memory-tight, and this can be a little subtle and easy to miss. Let's see why.

This reduction first gets as input the public key, which it forwards. It chooses some k uniformly at random in 1 through q, and on the i-th encryption query it answers with the encryption of the right message if i is less than k, with the encryption of the left message if i is greater than k, and otherwise uses its own encryption oracle to answer the query. Now, when it receives a decryption query on a ciphertext c, it can of course use its own decryption oracle. However, if c is the same as one of the ciphertexts returned by the encryption queries, then the reduction needs to return the bottom symbol, and the naive way to do this is to make the reduction remember all the prior c_i*'s, which makes it non-memory-tight. We will use inefficient tagging to solve this issue. Our key
idea here is: instead of sampling the random coins during encryption, we will use randomness which is completely determined by the message and some counter i. Later, when a decryption query is made on a ciphertext, the reduction will first use its own decryption oracle to get the decrypted message m, and then, to figure out whether this ciphertext c is a challenge ciphertext, it will re-encrypt the message using the randomness corresponding to the message and every counter. In more detail, the randomness during encryption is computed as the evaluation of a random function F on the message and a counter i. Later, when the reduction receives a decryption query on c, it uses its own decryption oracle to get the decrypted message m. Then it re-encrypts m with randomness F(m, i) for every i, and checks whether any of these encryptions is the same as c. If so, it returns the bottom symbol; otherwise it returns the message m.

Why does this work? If the ciphertext c was the same as some c_i*, then of course one of the re-encryptions will be the same as c, and therefore the reduction will correctly return the bottom symbol. And one can show that, with high probability, if c is not one of those c_i*'s, then none of the re-encryptions will be the same as c, and therefore the reduction will return the message, like we want.

Okay, so here we have a reduction, but it's not time-tight, because it has to iterate through all of the counters. So we must ask ourselves: are non-time-tight reductions completely useless? Well, no. Sometimes it might be better to have memory-tightness over time-tightness, because for many of the hard problems that we use in cryptography, the fastest memory-less algorithm is much slower than the fastest algorithm in general. But suppose for this case we really want a memory-tight reduction that is also time-tight. Can we do anything?
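Before moving on, the re-encryption check just described can be sketched in code. All names here are placeholders of my own, and the one-time-pad-style cipher is only a toy stand-in for the actual encryption scheme; the point is the trade-off: the loop over counters costs O(q) time but only O(1) memory.

```python
# Sketch of inefficient tagging: encryption coins are derandomized as
# r = F(prf_key, m, i); a decryption query is answered by decrypting and
# then re-encrypting under F(prf_key, m, i) for EVERY counter i to detect
# challenge ciphertexts -- no list of c_i* is ever stored.

import hashlib

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

def F(prf_key: bytes, m: bytes, i: int) -> bytes:
    # PRF standing in for the random function that derandomizes encryption.
    return H(prf_key, b"coins", m, i.to_bytes(4, "big"))

def enc(key: bytes, m: bytes, r: bytes) -> tuple:
    # Toy randomized encryption: ciphertext is (coins, m XOR pad(key, coins)).
    pad = H(key, r)[:len(m)]
    return (r, bytes(a ^ b for a, b in zip(m, pad)))

def dec(key: bytes, c: tuple) -> bytes:
    r, body = c
    pad = H(key, r)[:len(body)]
    return bytes(a ^ b for a, b in zip(body, pad))

def answer_dec_query(key: bytes, prf_key: bytes, c: tuple, q: int):
    m = dec(key, c)  # in the real reduction: its own decryption oracle
    # Re-encrypt under every counter's coins: O(q) time, O(1) memory.
    for i in range(1, q + 1):
        if enc(key, m, F(prf_key, m, i)) == c:
            return None  # bottom symbol: c is a challenge ciphertext
    return m
```
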
It turns out that if we change the definition, we can actually get a memory-tight and time-tight reduction. We will use our technique of message encoding for this result. Let me first introduce the definition of real-or-random CCA security. Here an adversary has to distinguish between a real world and an ideal world. It chooses a message and, in the real world, gets its real encryption, while in the ideal world it gets a random ciphertext in response. In both worlds it has access to a decryption oracle, which returns the message if queried on the ciphertext that was returned by the encryption oracle. Of course, we can again define a multi version of this definition, where the adversary makes q encryption queries instead of one. Again, we can show using a hybrid argument that the single version implies the multi version, and again this is not memory-tight, but the reason is a little bit different.

Here again the reduction forwards the public key and chooses a k uniformly at random in 1 through q. For the i-th encryption query, it answers with a random ciphertext if i is less than k, with the real ciphertext if i is greater than k, and otherwise uses its own encryption oracle. Now, when it receives a decryption query, keep in mind that if the ciphertext is the same as one of the challenge ciphertexts c_i*, it needs to return m_i, so it will use its own decryption oracle. However, if c is the same as a c_i* for i less than k, then c_i* was chosen uniformly at random, and there is very little chance that it would indeed be an encryption of m_i. The naive way to fix this is to make the reduction remember all the m_i and c_i* for i less than k, and of course this makes the reduction non-memory-tight.

So we will use message encoding to fix this issue. Our main idea here is: instead of sampling these c_i*'s uniformly at random for i less than k, we will encode the message m_i into c_i*, and later, when a decryption query is made, we will first
decode the ciphertext and check whether the decoded answer is of the right format. If so, we'll return the decoded message; otherwise we use the decryption oracle.

So we see an example here where, depending on the definition we use, we have a memory-tight reduction that is time-tight, while for another definition we have one that is memory-tight but not time-tight. A very important lesson is that the quality of the memory-tight reduction we can give relies a lot on our definitional choices.

In addition to this, in the paper we also show a memory-tight reduction for the authenticated-encryption security of the Encrypt-then-PRF construction, which bypasses a generic impossibility result from an earlier work. We also generalize the memory-tight reduction result for RDS that I showed, to capture a setting covering signatures as used in TLS 1.3. Further, we give a time-, memory-, and advantage-tight reduction for the mUF-CMA security of RSA-PFDH to RSA.

To conclude, I would again like to reiterate the message that our ability to give memory-tight reductions strongly couples with the definitional choices we make. Also, we should take impossibility results in the context of memory-tightness with a pinch of salt, because, as we saw, we can often bypass them. Among the important open problems in this area, one is to come up with more techniques beyond the handful we know, and another is to understand which definitions are the right ones in the memory-restricted setting. Thank you.

So, are there questions?
Okay, so one question is: have you made any progress on this last open question, that is, understanding what the right definitional choices are for memory-tight reductions?

No. We have even more examples where, if you tweak the definition in a certain way, you cannot give any memory-tight reduction at all, but we have not been able to prove that it's impossible in those cases. So one starting point could be to show a separation; we have not shown any formal separations. We have located examples where for one definition you can have memory-tightness and for another it appears hard.

Because this seems very artificial, right?

Yes, yes, that's why it's important to figure out which definitions to use.

Okay, so if there are no questions, we will move on to the next speaker.