Hello, I'm Xihu. I'm going to talk about time-memory trade-offs in symmetric cryptography. This is joint work with Wei Dai and Stefano Tessaro.

Traditionally, security is proved against adversaries that have bounded time complexity and query complexity. The goal of a security proof is, almost always, to upper bound the advantage of an adversary in terms of its time complexity t and its query complexity q. In this work we care about concrete security, so we want this bound to be as precise as possible. However, the memory complexity is another important factor in the feasibility of an attack. In fact, many cryptanalytic attacks achieve time-memory trade-offs: given decreased usage of memory, the time consumption of the attack increases, sometimes significantly. A memory-bounded setting therefore often admits better security guarantees in terms of the number of queries and the time needed to break a system. So ideally, we would like to prove an upper bound on the advantage which also depends on the memory complexity of the adversary. This is, however, not easy, and only recently have we seen progress in the memory-bounded setting.

To see what the challenges are and how memory affects security, let us take a simplified version of randomized counter-mode encryption as an example. The encryption algorithm, upon encrypting a plaintext m, first samples a random nonce r. Then it runs the PRF F with the secret key sk on input r to get a mask, and it outputs the random nonce r together with the masked plaintext as the ciphertext.

Let us recall the textbook security theorem, which ignores the memory complexity of the adversary. It upper bounds the advantage of an adversary running in time t and encrypting q messages by the PRF security advantage of F plus an additional information-theoretic term that depends only on q. The q^2/N term tells us the scheme is secure only when q is smaller than the square root of N, where N = 2^n is the size of the nonce space. This is because the randomly sampled nonces collide with good probability once q is around the square root of N, so an attack can simply record all queries until a collision is found and thereby break the scheme. This attack, however, appears to require memory, and therefore we would like to see how the upper bound changes if we take the memory of the adversary into account.

Such a bound was only proved recently by Jaeger and Tessaro, and subsequently improved by Dinur. In this bound, the information-theoretic term becomes S times q over N. This term implies a time-memory trade-off: the smaller the memory, the more messages we need to encrypt in order to distinguish. Actually, proving this result is quite involved, and to show the tightest possible bound we need techniques from communication complexity. We also know that essentially the same bound can be proved when F is a pseudorandom permutation, using a version of the PRF/PRP switching lemma that takes memory into account.
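For concreteness, here is a minimal Python sketch of the simplified randomized counter mode just described. This is my own illustration, not code from the paper: HMAC-SHA256 merely stands in for the PRF F, the nonce length determines N = 2^(8 * N_BYTES), and a single short plaintext block is assumed.

    import hmac, hashlib, secrets

    N_BYTES = 8  # nonce length in bytes; the nonce space has size N = 2^(8 * N_BYTES)

    def prf(sk: bytes, r: bytes, out_len: int) -> bytes:
        # Stand-in for the PRF F_sk; any PRF with long enough output would do.
        return hmac.new(sk, r, hashlib.sha256).digest()[:out_len]

    def encrypt(sk: bytes, m: bytes) -> tuple:
        # m is a single short block (at most 32 bytes for this stand-in PRF).
        r = secrets.token_bytes(N_BYTES)            # sample a random nonce r
        mask = prf(sk, r, len(m))                   # mask = F_sk(r)
        c = bytes(a ^ b for a, b in zip(m, mask))   # masked plaintext m XOR mask
        return r, c                                 # ciphertext = (r, c)

    def decrypt(sk: bytes, r: bytes, c: bytes) -> bytes:
        mask = prf(sk, r, len(c))
        return bytes(a ^ b for a, b in zip(c, mask))

The collision attack mentioned above simply stores the (r, c) pairs and waits for a repeated nonce, which is exactly where its memory cost comes from.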
Of course, this is about one specific construction. The question we are asking in this work is: what is the highest security a construction can achieve, in terms of the number of messages that can securely be encrypted, in a bounded-memory setting? Even further, can we obtain a trade-off that allows q to be larger than N for small memory? Note that q larger than N is impossible to achieve without bounding memory. This is interesting because it would allow us to reduce the domain size of the underlying PRF while still securely encrypting a large number of messages.

A bit more precisely, the main contribution of this paper is to prove the security of two natural symmetric encryption schemes in a memory-bounded setting. The first scheme is the sample-then-extract scheme proposed by Tessaro and Thiruvengadam, and the second is the k-XOR scheme by Bellare, Goldreich, and Krawczyk. Both schemes depend on a parameter k, which corresponds to the number of calls to the underlying PRF, and they achieve better and better security as k increases. In both settings, the trade-offs allow q to be larger than N for a small memory bound S and sufficiently large k.

In order to analyze the constructions, we use a number of techniques, some of which are novel. For example, the proof for sample-then-extract relies on a tight hybrid argument via KL divergence and a novel use of a decomposition lemma by Göös et al. For k-XOR, we leverage a connection between the security of k-XOR and the list decoding of k-XOR codes. In particular, we show a new and tight combinatorial list-decoding bound for the k-XOR code, which is of independent interest.

We start with the sample-then-extract (STE) scheme. STE can be regarded as a generalization of counter-mode encryption. First, instead of sampling one random value r, we sample k random values r1 to rk, which we are going to refer to as probes in the following. Then we evaluate the function F at the k probes, concatenate the results, and apply an extractor to obtain the mask for the plaintext. The ciphertext is composed of the k probes r1 to rk, the seed of the extractor, and the masked plaintext. Here, specifically, to get the claimed bound, we choose a two-universal hash function as the extractor. The complexity of the scheme depends on the parameter k, and we expect the security of the scheme to increase as k increases; the main challenge is to understand quantitatively how.

Previously, Tessaro and Thiruvengadam showed that the security of the scheme can go up to q close to N. However, the statement is only proven when k is quite large, even for a very small amount of memory. In our work, we show a much better result, which already gives strong guarantees for a constant number of probes k. When the plaintext block size is 1, or constant, the security bound essentially implies that at least (N/S)^k queries are needed for an attack to achieve constant advantage. For example, if the memory S is less than N^0.99, then only a constant number of probes is needed to achieve q larger than N. It is important to note that the PRF advantage can be small even if t is much larger than N, as long as the key length is larger than log t. In our paper, we also have another version of the theorem for a PRP instantiation, and a more direct construction that saves randomness per query.
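To recap the construction before going into the proof, here is a rough Python sketch of STE encryption. It is my own illustration of the description above, not the paper's exact specification: HMAC-SHA256 stands in for the PRF, and the extractor is the two-universal hash h_{a,b}(x) = ((a*x + b) mod p) mod 2^L with a random seed (a, b).

    import hmac, hashlib, secrets

    K = 4             # number of probes
    R_BYTES = 4       # probe length; the PRF domain has size N = 2^(8 * R_BYTES)
    L_BYTES = 16      # length of the mask / plaintext block
    P = 2**1279 - 1   # a Mersenne prime larger than the concatenated PRF outputs

    def prf(sk: bytes, r: bytes) -> bytes:
        return hmac.new(sk, r, hashlib.sha256).digest()  # stand-in PRF, 32-byte output

    def extract(seed: tuple, x: int) -> bytes:
        # Seeded extractor: two-universal hash h_{a,b}(x) = ((a*x + b) mod p) mod 2^(8*L_BYTES)
        a, b = seed
        return (((a * x + b) % P) % (1 << (8 * L_BYTES))).to_bytes(L_BYTES, "big")

    def encrypt(sk: bytes, m: bytes) -> tuple:
        probes = [secrets.token_bytes(R_BYTES) for _ in range(K)]    # r_1, ..., r_k
        seed = (secrets.randbelow(P - 1) + 1, secrets.randbelow(P))  # extractor seed (a, b)
        concat = b"".join(prf(sk, r) for r in probes)                # F(r_1) || ... || F(r_k)
        mask = extract(seed, int.from_bytes(concat, "big"))          # extracted L-byte mask
        c = bytes(a ^ b for a, b in zip(m, mask))                    # masked plaintext
        return probes, seed, c                                       # ciphertext

Decryption simply recomputes the mask from the probes and the seed using the same key.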
Now, to prove the main theorem, we need to show that each masked plaintext is close to an independent uniform distribution. We follow the standard approach of first replacing the PRF with a truly random function, which costs us the PRF advantage. Then the main focus is to show that the two resulting hybrids are indistinguishable in the information-theoretic sense, given q queries and an S-bit memory bound. Our first step is to cast the problem as a special case of a streaming indistinguishability setting, which we introduce now.

In the streaming model, there are two streams X and Y, each consisting of q random elements, and the goal of the streaming adversary is to distinguish them. The process is divided into q stages. Within each stage, the adversary receives a single stream element and the state from the previous stage, and it outputs a bounded-size state for the next stage. At the end, the adversary receives the final, q-th stream element and the state from stage q-1, and it makes a prediction by outputting a 0/1 value. The advantage of an adversary A in distinguishing the two streams is defined as the probability of A outputting 1 when it receives the stream X, minus the probability of A outputting 1 when it receives the stream Y.

In particular, in our case we are interested in the following two streams. Each element x_i of the stream X consists of the k probes, the randomly sampled seed, and a uniform L-bit string. Each element y_i consists of the probes, the randomly sampled seed, and the extracted mask XORed with the L-bit plaintext message. All intermediate states sigma are bounded by S bits. Here, we point out that the advantage of distinguishing our two targeted hybrids over all S-bit-bounded adversaries that ask q queries is upper bounded by the advantage of distinguishing these two streams of length q over all streaming adversaries with an S-bit state-size bound. This can be proved by a memory-tight reduction. So we only need to upper bound the advantage of S-bit-bounded adversaries in the streaming model.

In fact, we can turn this into the problem of lower bounding an appropriately defined Shannon entropy. In particular, by a hybrid argument it is enough to look at one single stage and study the distribution of y_i conditioned on the state of the adversary so far. We use the lemma shown here, which is derived by improving a hybrid argument of Jaeger and Tessaro, and it can be proved with KL divergence. Concretely, this lemma tells us that it is enough to look at the Shannon entropy of the extracted L-bit mask, conditioned on the received state sigma_{i-1}, the probes, and the seed. We would like this term to be as close to L bits as possible, to minimize the entropy loss.

Now, our goal is to prove the claim that, for each query, the Shannon entropy term is close to L bits, with an entropy loss of roughly (S/N)^k. Clearly, if we can prove this, then summing over the q stages gives the desired upper bound. In fact, we can simplify the problem a little further. Note that we can think of the state of the adversary after i-1 stages as some randomized leakage of the underlying random function. We are going to be generous and show that the desired lower bound holds even for an arbitrary S-bit leakage of the random function. So again, we are going to prove the claim that the Shannon entropy is close to L bits, with an entropy loss of roughly (S/N)^k.

To get this result, we naturally want to understand the amount of min-entropy that RF(r1), ..., RF(rk) provide to the extractor. The challenge, however, is that RF is no longer truly random given the S-bit leakage. A solution to overcome this challenge is to approximate RF, given the S-bit leakage, by a function G that is fixed on a set of size roughly S in the domain: for any value x in the set, G(x) is fixed, and for any value x outside the set, G(x) is truly random. This approximation is made formal by the decomposition lemma introduced by Göös et al.
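To make this decomposition concrete, here is a tiny illustrative sketch, my own and under the simplification of one-bit outputs, of such a bit-fixing function G: it returns fixed values on a small set of inputs and behaves as a fresh random function everywhere else.

    import secrets

    class BitFixingFunction:
        # G is fixed on a small set of inputs and truly random elsewhere,
        # with the random part sampled lazily on first use.
        def __init__(self, fixed: dict):
            self.fixed = dict(fixed)   # the (at most roughly S) inputs on which G is fixed
            self.sampled = {}          # lazily sampled truly random outputs

        def __call__(self, x: int) -> int:
            if x in self.fixed:
                return self.fixed[x]   # fixed value: contributes no fresh entropy
            if x not in self.sampled:
                self.sampled[x] = secrets.randbits(1)  # one-bit output, for simplicity
            return self.sampled[x]     # truly random value

A probe r_i then either lands in the fixed set or hits a truly random position of G, which is exactly the case distinction used next.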
Here, for simplicity, we skip the formal details and just assume we have such a G. Since the extractor is instantiated with a two-universal hash function, we apply a version of the leftover hash lemma for Shannon entropy. Notice that the negative term is an expectation taken over all possible configurations of the k probes, so we need to understand the min-entropy term for every probe configuration.

Let us take a closer look at how the choice of probes impacts the amount of min-entropy. Given that G is fixed on roughly S values, some of the random probes will hit the fixed values, and the others will hit truly random values. We call the probes hitting truly random values good probes. Since the probes hitting fixed values contribute no min-entropy, and the probes hitting truly random values each contribute an equal amount of min-entropy independently, the min-entropy grows linearly with the number of good probes. Because the number of good probes ranges from 0 to k, we have k+1 different min-entropy levels. Crucially, the higher the min-entropy, the more Shannon entropy the extractor output will have. Notice that r1 to rk are sampled independently and uniformly at random, so we can explicitly compute the probability that exactly j probes are good, which follows a binomial distribution. Now we put together all the techniques and steps we went through, and the rest is calculation. We obtain the desired lower bound for the entropy, and hence we prove the trade-off.

Now we move on to the k-XOR scheme. Slightly different from the STE construction, the k-XOR construction XORs the PRF outputs to obtain the plaintext mask after sampling the k probes, instead of applying an extractor to the concatenated PRF outputs. The k-XOR construction was first proposed by Bellare, Goldreich, and Krawczyk. They also proved a bound, which of course ignores memory and holds in the regime where the number of queries is upper bounded by N. One can also derive a suboptimal trade-off using branching-program techniques, which were recently used by Garg, Kothari, and Raz to show a time-memory trade-off for Goldreich's PRG. However, in our work we are targeting a bound that is as tight as possible.
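Before stating our theorem, here is a minimal Python sketch of the k-XOR encryption just described, again my own illustration with HMAC-SHA256 as a stand-in PRF: the mask is simply the XOR of the k PRF outputs, with no extractor and no seed.

    import hmac, hashlib, secrets

    K = 4         # number of probes
    R_BYTES = 4   # probe length; the PRF domain has size N = 2^(8 * R_BYTES)

    def prf(sk: bytes, r: bytes, out_len: int) -> bytes:
        return hmac.new(sk, r, hashlib.sha256).digest()[:out_len]  # stand-in PRF

    def encrypt(sk: bytes, m: bytes) -> tuple:
        probes = [secrets.token_bytes(R_BYTES) for _ in range(K)]   # r_1, ..., r_k
        mask = bytes(len(m))                                        # start from all zeros
        for r in probes:                                            # XOR the k PRF outputs
            mask = bytes(a ^ b for a, b in zip(mask, prf(sk, r, len(m))))
        c = bytes(a ^ b for a, b in zip(m, mask))                   # masked plaintext
        return probes, c                                            # ciphertext = (probes, c)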
For k-XOR, we prove the following theorem, which implies that (N/S)^(k/2) queries are needed for a constant-advantage attack. We also give an attack in our paper showing that this trade-off cannot be improved for very small memory S. To see how the proof works, in the following we look at the case where the message is a single bit, for simplicity. The proof starts, again, with replacing the PRF by a truly random function, giving us a PRF-advantage term in the upper bound. Our focus is then the information-theoretic term for the two resulting hybrids. As before, we can cast the whole indistinguishability analysis into the streaming model. Here we have two streams X and Y, where x_i consists of the probes and a single uniform random bit, and y_i consists of the probes and the XOR of the function evaluations at the probes. And as before, we can upper bound the advantage of distinguishing our two targeted hybrids by the advantage of distinguishing the two streams over all S-bit-bounded streaming adversaries. Here it is not worthwhile to use the tighter hybrid argument via KL divergence; we just use a conventional hybrid argument. Since we are looking at a one-bit-output random function, we can now work with a guessing advantage instead of an indistinguishability advantage.

We define the guessing advantage of a random bit B, given side information Z, to be the maximum advantage over all predictors of guessing B given Z. So, by the standard hybrid argument and the equivalence between indistinguishability and unpredictability for a single bit, we can upper bound the distinguishing advantage by the sum, over the q stages, of the guessing advantage for the i-th query given all the probes and the at most S bits of state received from the previous stage. Similarly to the STE case, we can be generous and consider an arbitrary leakage function for the S-bit leakage state, which subsumes all the possible states produced by an adversary. So we are done if we show that the guessing advantage given the probes and an arbitrary S-bit leakage is upper bounded by (S/N)^(k/2).

To prove this, it is convenient to adopt a view of the problem in terms of list decoding of k-XOR codes. For this reason, let me briefly recall what the k-XOR code is. The k-XOR code encodes the truth table of a one-bit-output function. Each position of the codeword is indexed by a choice of probes r1 to rk, and the bit at that position is defined as the XOR of the function evaluations at those probes. For example, suppose we have a function table rho with 2-bit inputs and k = 2; then the k-XOR encoding of rho is as follows: for each position, the bit equals rho(r1) XOR rho(r2).

Similarly, we can think of a predictor which, given r1 to rk and some side information z about RF, tries to predict the bit, as producing a noisy codeword for the k-XOR code. More specifically, after fixing the side information z, we can write down the prediction of the adversary for each choice of probes explicitly, and this defines the noisy codeword associated with this z. If the correctness probability of this predictor for the fixed z is at least (1 + epsilon)/2, then this is equivalent to saying that the relative distance between the k-XOR codeword of RF and the noisy codeword produced by predictor A with side information z is at most (1 - epsilon)/2.

Now we want to see how this interpretation helps us bound the guessing advantage. The intuition can be explained graphically. First, the black points are the k-XOR encodings of all possible functions. Every fixed choice of the auxiliary information z defines a noisy codeword, depicted here as a blue point. We have at most 2^S noisy codewords, since the auxiliary information is at most S bits. Now, let us look at the Hamming ball of radius r, with r chosen so that r/N^k = (1 - epsilon)/2, around each noisy codeword, and let us imagine that we draw the function RF uniformly at random. We then have two cases. First, if the encoding of the function falls outside all of the Hamming balls, then the advantage of guessing the XOR bit correctly for this particular choice of function is at most epsilon. Otherwise, if the encoding of RF lands within some Hamming ball, then the guessing advantage is no longer bounded by epsilon. However, the probability of landing inside one of the balls is upper bounded by 2^(S - N) times L_k(epsilon), where we define L_k(epsilon) as the maximum number of k-XOR codewords that are within relative distance (1 - epsilon)/2 of a ball center.
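Putting the two cases together in a single line, as a restatement of the argument just given in my own notation:

    GuessAdv  <=  epsilon  +  2^(S - N) * L_k(epsilon)

Roughly speaking, balancing epsilon against the second term, using a tight bound on the list size L_k(epsilon), is what eventually yields the (S/N)^(k/2) guessing-advantage bound we are after.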
Therefore, what we need is to upper bound the quantity L_k(epsilon), which is exactly the combinatorial list size of the k-XOR code. If we try to use results from the literature to bound this size, we find that prior works focus on approximate list decoding, and using such results we only obtain bounds that lead to suboptimal space-time trade-offs. For this reason, our main technical contribution here, which is of independent interest, is to prove a tight bound on this list size. The proof is purely a concentration bound, proved via higher-moment methods.

In conclusion, we give a better trade-off analysis for both the sample-then-extract and k-XOR constructions. For STE, we prove that roughly (N/S)^k queries are needed to break the scheme. For k-XOR, we prove a somewhat weaker lower bound of (N/S)^(k/2) queries for breaking the scheme. Both trade-offs allow the schemes to tolerate q larger than N queries in the bounded-memory setting. The result for k-XOR also complements the analysis in BGK for the case of q larger than N. Further, in our paper, we have extended the STE scheme to handle multiple plaintext blocks. We also provide a proof for STE instantiated with a PRP instead of a PRF, and a variant of STE that reduces the randomness used per query. But we still do not know whether the trade-off bound for STE is tight, even though asymptotically it looks optimal. For k-XOR, we also provide several attacks and show that the trade-off is tight in the very-small-memory case. However, it is interesting to examine whether the trade-off is tight for general memory, and whether we can improve the bound so that it matches the memoryless bound in the case where q is smaller than N. The full version of our paper can be found on ePrint. Thank you.
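As a purely illustrative back-of-the-envelope check, with my own parameter choices and using the (N/S)^k and (N/S)^(k/2) thresholds stated above, here is a small Python snippet showing how a modest number of probes already pushes the tolerable number of encryptions beyond the PRF domain size.

    N = 2 ** 64   # PRF domain size
    S = 2 ** 40   # adversary memory bound, in bits

    # Rough query thresholds for constant distinguishing advantage.
    q_ste = (N // S) ** 3          # STE with k = 3 probes:   (N/S)^k     = 2^72
    q_kxor = (N // S) ** (6 // 2)  # k-XOR with k = 6 probes: (N/S)^(k/2) = 2^72

    print(q_ste > N, q_kxor > N)   # both thresholds exceed the domain size N = 2^64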