Since I'm going to talk about the role of memory in cryptographic reductions, I will start with a short introduction to reductions. Usually, when we construct some cryptographic scheme and want to show that it is secure, we rely on cryptographic reductions. Suppose we have some hard problem P. What we do is show that any adversary breaking the security of our scheme can be turned into an algorithm solving instances of our problem. More precisely, the algorithm we construct runs the adversary as a subroutine and answers its potential oracle queries. This in particular implies that the running time of our reduction will be at least the running time of the adversary, and the same holds for the memory consumption.

Now, the tightness of a reduction can be seen as a measure of its quality. What we usually do is compare the resources used by our algorithm to the resources used by the adversary it runs as a subroutine, and what we usually compare are running time and success probability. If both time and success probability essentially stay the same, we say that the reduction is tight.

To see why the tightness of a reduction matters for giving concrete security guarantees, we will look at the time/success-probability trade-off plot for our problem P, which might for example look something like this. We have two axes indicating the success probability and the running time of an algorithm, and the plot is divided into two areas labeled "unbroken" and "broken". If a point lies somewhere in the area labeled "unbroken", this means that according to current cryptanalytic knowledge there exists no algorithm which is able to solve an instance of our problem in the corresponding time with at least the corresponding success probability.

Now suppose that we want the scheme we constructed to be secure against all adversaries satisfying some constraints on running time and success probability. By our reduction, we know that we can turn any such adversary into an algorithm solving instances of our problem P. However, if the reduction is non-tight, we might actually end up with an algorithm which runs in considerably higher time and also succeeds with lower probability. In this example we end up with an algorithm in the area for which our problem is known to be broken, so we can't conclude anything about the security of our scheme. If our reduction were tight, the situation would look different: given any adversary breaking the security of our scheme, we would be able to construct an algorithm which outperforms all currently best known algorithms for solving our problem, and in this way we gain trust in the security of our scheme.

As just discussed, when talking about tightness we usually only consider running time and success probability. However, if one talks to cryptanalysts, they will probably tell you that memory is also a very important resource, and actually the most expensive one. So in our work we consider the role of memory in cryptographic reductions, and we conclude that indeed memory can be crucial when making concrete security statements. We also start to investigate how to make reductions memory-tight, meaning how to obtain reductions that essentially preserve the memory requirements of the adversaries we start with. We do this by giving a couple of tools which can be seen as memory-efficient replacements for typical non-memory-efficient steps in reductions.
We use these tools in one concrete application, namely giving a memory-tight reduction for the RSA full-domain hash signature scheme. Finally, we asked ourselves whether there also exist reductions which cannot be both tight and memory-tight, and we found some evidence pointing in that direction by proving some lower bounds.

Now let's look at why memory in reductions matters. For this we are going to look at time-memory trade-offs. An important observation here is that for several problems which are relevant for crypto, the best known algorithms actually require high memory. Or, to phrase it differently, some problems are harder to solve if less memory is available; this in particular holds for several lattice- or coding-based problems. As we will see now, this needs to be taken into account when deriving concrete security statements, and we will look at a concrete example, namely the learning parity with noise (LPN) problem.

So far we have looked at time/success trade-off plots. When we add memory, we would actually have to look at a three-dimensional time/success/memory trade-off plot. However, to keep things two-dimensional and simpler, I will only consider algorithms having a constant success probability, so we will look at trade-off plots in this plane. The time-memory trade-off plot again has two axes; now the x-axis indicates the memory consumption of an algorithm. If a point lies in the area labeled "unbroken", this means that according to current cryptanalytic knowledge there are no algorithms known which are able to solve instances of LPN in the corresponding time and with the corresponding memory.

For LPN, this time-memory trade-off plot is essentially determined by two algorithms. The first one is the Gauss algorithm, which we see here: it is able to solve instances of LPN given enough running time, but memory essentially plays no role. The second algorithm is the BKW algorithm: it is able to solve instances of LPN in considerably less time, but this comes at the cost of an increased memory consumption. So overall the time-memory trade-off plot for LPN looks something like this, and due to some recent results by Esser et al., which will also be presented here at Crypto, it actually looks more like this.
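To make the problem behind this plot concrete, here is a minimal sketch of how LPN samples can be generated; the secret length n and noise rate tau are purely illustrative values, not parameters from the talk.

```python
import secrets
import random

def lpn_sample(s, tau):
    """Return one LPN sample (a, b) with b = <a, s> + e (mod 2), where e = 1 with probability tau."""
    a = [secrets.randbits(1) for _ in range(len(s))]
    e = 1 if random.random() < tau else 0
    b = (sum(ai & si for ai, si in zip(a, s)) + e) % 2
    return a, b

# Toy usage: a solver is given many such noisy samples and has to recover the secret s.
n, tau = 16, 0.125                       # illustrative parameters
s = [secrets.randbits(1) for _ in range(n)]
samples = [lpn_sample(s, tau) for _ in range(200)]
```

Roughly speaking, Gauss-style solvers repeatedly guess error-free subsets of samples and solve the resulting linear system, which costs a lot of time but little memory, while BKW combines large numbers of stored samples to cancel blocks of coordinates, which is faster but needs large tables; that is the trade-off the plot captures.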
Now suppose again that we constructed some scheme S and we want it to be secure against all adversaries satisfying some time and memory constraints. An observation we can make is that even if our reduction is tight, meaning that it essentially preserves the running time, it might still not be memory-tight and might actually have a much higher memory consumption. In that case we would again end up with an algorithm in the area for which LPN is known to be broken, and we can't derive anything about the security of our scheme. However, if the reduction were additionally memory-tight, meaning its memory consumption is essentially the same as the memory consumption of the underlying adversary, the situation looks different: from any algorithm breaking the security of our scheme we are able to construct an algorithm outperforming even the best currently known algorithms for solving LPN, and we gain trust in the security of our scheme.

As we have just seen, for some problems which provide particularly good time-memory trade-offs, memory can be crucial when making concrete security statements. We call these problems memory-sensitive; they include, for example, LPN as just seen, the shortest vector problem, the problem of finding a 3-collision in a hash function, and also the discrete logarithm problem over finite fields. For other problems, the best known algorithms only require small memory. In this case, while a memory-tight reduction still yields a stronger result, it is harder to measure its advantage. Problems of this type are, for example, finding a collision in a hash function, preimage resistance of hash functions, or solving the discrete logarithm problem over elliptic curves defined over prime fields.

Now I want to illustrate that typical reductions indeed often contain steps which result in our algorithms having a higher memory consumption. For this we're going to look at two examples, the first one being the simulation of a random oracle. A random oracle is an idealization of a hash function and works as follows: the adversary has access to an oracle which it may query on points, and in response it gets a uniformly distributed bit string. However, since we are modeling a hash function, querying the oracle on the same point several times will result in the same answer. In reductions this is usually simulated using lazy sampling, that is, by keeping track of a list of query/answer pairs. Whenever the adversary queries its oracle on some point, we check whether this value has already been defined; if not, we sample a fresh random bit string, store it, and then return the answer. It is important to note here that we really have to store the answers to the queries, since the adversary might query its oracle several times on the same point. In the worst case we could have an adversary which essentially does nothing but query its random oracle, and then we end up with a reduction which has to provide additional memory of the order of the running time of the underlying adversary.

The second example occurs when checking the freshness of messages in the unforgeability game for signature schemes. Here we have an adversary which has access to a signing oracle, which it may query on messages in order to obtain signatures on them. At the end of the game it outputs a forgery attempt consisting of a message and a signature, and it wins the game if the signature is valid on that message and, what I want to concentrate on here, if the message is fresh, meaning it has so far not been queried to the signing oracle. This check for freshness is usually done by simply keeping track of all messages queried to the signing oracle, so again we might end up with a reduction having to provide significant additional memory.

To give a short recap of what I have discussed so far: currently, at least in most theoretical work, the memory consumption is often ignored in reductions, and many existing reductions contain steps which result in the constructed algorithms having a highly increased memory consumption. This seems particularly problematic for memory-sensitive problems, that is, for problems which provide good time-memory trade-offs.
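To illustrate the first of these memory-heavy steps, here is a minimal sketch of lazy sampling as described above; the output length is an arbitrary illustrative choice.

```python
import secrets

class LazyRandomOracle:
    """Simulate a random oracle by remembering every answer ever given (lazy sampling)."""

    def __init__(self, out_len=32):
        self.out_len = out_len
        self.table = {}                  # grows with the number of distinct queries

    def query(self, point: bytes) -> bytes:
        if point not in self.table:
            # First query on this point: sample a fresh uniform answer and remember it,
            # so that repeated queries on the same point stay consistent.
            self.table[point] = secrets.token_bytes(self.out_len)
        return self.table[point]
```

The dictionary `table` is exactly the bookkeeping discussed above: an adversary that does nothing but make distinct queries forces the reduction to keep memory of the order of the adversary's running time.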
So how can we achieve memory tightness? To give some examples, we are going to revisit the two steps from a minute ago. The first one was the simulation of a random oracle via lazy sampling, where we had to keep track of a list of queries and answers in order to provide the adversary with a consistent simulation of the random oracle. This can be made memory-efficient by using a pseudorandom function; this is actually something which was already noticed by Bernstein in 2011 but not formally analyzed. Here we would sample a key for a pseudorandom function, and then whenever the adversary queries its random oracle on some point, we derive the answer to this query deterministically using our pseudorandom function. In this way we no longer have to store the answers in order to provide the adversary with a consistent simulation, and it is quite easy to see that any adversary which is able to distinguish this simulation from a proper random oracle would be able to break the security of our pseudorandom function.

The second example occurred when checking the freshness of messages in the unforgeability game for signature schemes. Here the adversary had access to a signing oracle and in the end output a forgery attempt consisting of a message and a signature, and we wanted to check whether the output message was indeed fresh. This can be made memory-efficient in the following way; the general idea is to use the adversary itself as memory. We still answer the signing queries in the usual way, but no longer store the corresponding messages. At the end of our reduction, when the adversary outputs its forgery attempt, we store the message and then rewind the adversary, meaning we run it a second time, providing it with the exact same random coins and answering all of its signing queries with the same answers. If we do this, the sequence of messages submitted by the adversary to the signing oracle will be the same, so in the second run, whenever the adversary queries for a signature, we can simply check whether the queried message is the message it forges on. In doing this we only have to store one message; however, this comes at the cost of potentially running the adversary twice. An important thing to note is that since we want to rewind the adversary, we actually have to store all the coins provided to the adversary and used to derive the signatures, but this can again be done in a memory-efficient way using a pseudorandom function.
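As a counterpart to the lazy-sampling sketch above, here is a minimal sketch of the PRF-based simulation; using HMAC-SHA256 as the pseudorandom function is an illustrative assumption on my part, not something fixed by the talk.

```python
import hmac
import hashlib
import secrets

class PRFRandomOracle:
    """Simulate a random oracle with constant memory: answers are derived from a short PRF key."""

    def __init__(self):
        self.key = secrets.token_bytes(32)   # the only state the reduction has to keep

    def query(self, point: bytes) -> bytes:
        # Deterministic, so repeated queries are automatically consistent; indistinguishable
        # from lazy sampling as long as the PRF (here HMAC-SHA256) is secure.
        return hmac.new(self.key, point, hashlib.sha256).digest()
```

The rewinding idea can be sketched in the same spirit. Modeling the adversary as a deterministic function of its coins and the oracle answers it receives is a toy assumption made only to keep the sketch self-contained; in the actual reduction the stored signing answers would themselves be re-derived via a pseudorandom function rather than kept in memory.

```python
def fresh_forgery_check(adversary, sign, coins):
    """Check freshness of the adversary's forgery while storing only one message.

    adversary(coins, signing_oracle) is assumed to return (message, signature) and to be
    deterministic given its coins and the answers it receives from the oracle.
    """
    answers = []                          # kept here only for brevity; a memory-tight reduction
                                          # would re-derive these from a PRF key instead

    def first_oracle(m):
        sigma = sign(m)                   # answer the query normally, but do NOT store m
        answers.append(sigma)
        return sigma

    forged_msg, _ = adversary(coins, first_oracle)   # first run: learn the forged message

    fresh = True
    idx = 0

    def second_oracle(m):
        nonlocal fresh, idx
        if m == forged_msg:               # the one stored message is all we compare against
            fresh = False
        sigma = answers[idx]              # replay exactly the same answers as in the first run
        idx += 1
        return sigma

    adversary(coins, second_oracle)       # rewind: same coins, same oracle answers
    return fresh
```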
A final thing I want to talk about is lower bounds, that is, showing that for some reductions it seems not possible to obtain a version which is both tight and memory-tight. For this we're going to look at two versions of the unforgeability game for signature schemes. The first one is just the usual unforgeability against chosen-message attacks for signature schemes (the Forge game). In this game the adversary has access to a public key and a signing oracle, which it may query on messages in order to obtain signatures on them. At the end of the game it outputs a forgery attempt consisting of a message and a signature, and it wins the game if the signature is indeed valid on this message and if the message is fresh, meaning it has not been queried to the signing oracle so far.

The second game can be seen as a multi-forgery-attempt version of the first one (the mForge game). Again the adversary has access to a public key and a signing oracle, but now it is no longer restricted to a single forgery attempt; instead it is allowed to make many, and can do so at any point in time. It wins the game if any of those forgery attempts is successful, meaning that the signature is valid and the message has so far not been queried to the signing oracle.

One could argue that this models more realistically what we expect of a signature scheme, since the restriction to a single forgery attempt seems somewhat artificial. However, we would of course like to use the simpler version, and this also seems justified since there is a tight reduction from the mForge game to the Forge game. To see that, note that a Forge adversary can itself check signatures for validity, so the reduction essentially boils down to checking whether a message is fresh, and this could for example be done by simply keeping track of all messages submitted to the signing oracle.
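Here is a minimal sketch of this message-tracking reduction; the callback-style adversary interface is a toy assumption made only so the sketch is self-contained.

```python
def forge_reduction(mforge_adversary, pk, sign, verify):
    """Turn an mForge adversary into a Forge adversary by remembering every signed message."""
    queried = set()                       # memory grows with the number of signing queries
    forgery = []                          # the single forgery attempt we will output ourselves

    def signing_oracle(m):
        queried.add(m)                    # bookkeeping needed only for the freshness check
        return sign(m)

    def forgery_attempt(m, sigma):
        # Validity can be checked locally with the public key; freshness needs `queried`.
        if not forgery and m not in queried and verify(pk, m, sigma):
            forgery.append((m, sigma))

    mforge_adversary(pk, signing_oracle, forgery_attempt)
    return forgery[0] if forgery else None
```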
However, this reduction is then not memory-tight. Another thing we could do is simply guess which of the forgery attempts of the adversary will be on a fresh message; however, then we end up with a reduction having a lower success probability. The final thing we could think of is to rewind our adversary at each point when it makes a signing query, in order to check the freshness of the message, but this results in a higher running time.

So, as we have seen, there are several trade-offs for reductions from the mForge game to the Forge game, but all of them lose a factor of the order of the number of adversarial queries in one of our resources. Since mForge is arguably the more realistic game and Forge the simpler one, we would clearly prefer a reduction which is both tight and memory-tight, so one that does well in all of these resources at once. However, this seems not to be possible, and indeed we are able to show a theorem which informally looks something like this: for a certain class of black-box reductions from mForge to Forge, it is not possible to be both tight and memory-tight.

The proof uses streaming algorithms, which are algorithms that have to process a large input but only have a small working memory, so they can only process their input as an ordered stream of small chunks, but may make several passes over this stream. The restrictions on our reductions essentially force them to behave as streaming algorithms, where the single chunks correspond to the signing and forgery queries, and in this way we are able to use a lower bound on the number of passes a streaming algorithm has to make over its input in order to compute certain functions.

To give a very short conclusion: memory in cryptographic reductions does matter, and it should be addressed when coming up with reductions. For many reductions it seems possible to add a simple fix, but there also are reductions which seem to be inherently memory-loose. Thank you.