The problem I'd like to share today is the multi-user security of double encryption. Specifically, if a block cipher E has key length k, then it is vulnerable to a naive key-recovery attack of 2^k time. A way to increase the effective key length without changing the underlying block cipher is to use the double-encryption construction. However, via the meet-in-the-middle attack, one can recover both keys in about the same 2^k time. Based on this observation, the conventional wisdom is that double encryption is useless. We challenge this long-standing belief by showing that double encryption does improve security substantially if we look from a broader angle. So let me first recall the conventional CCA notion that is used to measure the security of double encryption, before I explain what broader angle we'd like to take. So here's the CCA security notion for a block cipher Pi that is built on top of an ideal cipher E. Under this notion, an adversary is brought into either a real world or an ideal world. In the real world, the oracles implement the construction Pi and its inverse under a random secret key. In the ideal world, they instead implement an ideal random permutation pi and its inverse. In both worlds, the adversary has access to the ideal cipher E and its inverse, and the goal of the adversary is to guess which world it is in. The notion that we've just seen, however, considers only the security of a single user. In practice, an adversary typically attacks en masse, adaptively distributing its resources across multiple users. The adversary doesn't target any specific user; it is happy as long as it can compromise somebody. To model multi-user (mu) security, in the real world the oracles implement infinitely many instances of the construction Pi, all built on top of the same ideal cipher E. Likewise, in the ideal world, they implement many random permutations pi1, pi2, and so on.
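As a concrete illustration of why double encryption doesn't naively double the effective key length, here is a minimal sketch of the meet-in-the-middle attack on a toy 8-bit block cipher. The cipher, its parameters, and all names are illustrative assumptions, not the construction on the talk's slides.

```python
def E(k, x):
    """Toy 'block cipher': for each 8-bit key k, a permutation of 8-bit blocks."""
    return (x + 37 * k + 1) % 256

def E_inv(k, y):
    return (y - 37 * k - 1) % 256

def double_encrypt(k1, k2, x):
    return E(k2, E(k1, x))

def mitm(x, y):
    """Recover candidate key pairs (k1, k2) from one pair (x, y).

    A naive search over both keys costs 2^(2k); meeting in the middle costs
    about 2 * 2^k, so doubling the key does not double the effective key
    length. Extra plaintext/ciphertext pairs would filter the candidates.
    """
    forward = {}
    for k1 in range(256):                 # 2^k forward encryptions of x
        forward.setdefault(E(k1, x), []).append(k1)
    candidates = []
    for k2 in range(256):                 # 2^k backward decryptions of y
        mid = E_inv(k2, y)
        for k1 in forward.get(mid, []):   # the two tables meet in the middle
            candidates.append((k1, k2))
    return candidates
```

Every returned pair is consistent with the given plaintext/ciphertext pair, and the true key pair is always among them.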
The mu security can be generically obtained from the single-user setting by a hybrid argument, but then security degrades proportionally to the number of users. This hybrid degradation, however, can be pretty loose in some settings. So recall that, according to the conventional wisdom, double encryption is useless, but that's because we look at it from just the single-user perspective. I argue that if we instead consider multi-user security, then double encryption does improve security substantially. In particular, AES has only 64-bit security in the mu setting, due to a key-collision attack. Under this attack, the adversary first chooses random keys K1, K2, and so on, and encrypts some designated message under those keys. It then uses the encryption oracle to encrypt the same message under many users' keys. If some of the adversary's chosen keys is also some user's key, then the adversary can detect that by checking for matching entries between the two tables, and then recover that user's key. In contrast, double encryption provides a good way to preserve security in the multi-user setting: in particular, double encryption of AES has nearly 120 bits of mu security. So far, there has been no prior work analyzing the mu security of double encryption, except the naive bound by the hybrid argument. While this is already enough to show that double encryption is quite a bit better than single encryption, it is way weaker than what double encryption can potentially offer. The goal of our work is to achieve this dream bound. While we focus on double encryption, the scope of our work is much broader. We actually provide a technique for bounding information-theoretic mu security. Our method can handle many types of constructions, such as authenticated encryption, PRFs, or block ciphers, and many types of ideal primitives, such as random oracles, ideal permutations, or ideal ciphers, as long as the security notion is an indistinguishability game. We then showcase the new method on double encryption.
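The key-collision attack described above can be sketched in a few lines. The parameters are toy-sized assumptions (12-bit keys and blocks, an affine toy cipher), and one user's key is deliberately planted inside the adversary's search range so the demo deterministically finds a collision; in the real attack the adversary simply expects a hit after about 2^k / u chosen keys.

```python
import random

random.seed(7)                 # deterministic demo
N = 1 << 12                    # toy domain: 12-bit keys and blocks (assumed)

def E(k, x):
    # Toy block cipher (illustration only): for each fixed key, a shift
    # permutation of the block space; E(., m) is also injective in the key.
    return (x + 1103 * k + 5) % N

m = 0x123                      # the designated message every party enciphers

# 64 users with independent random keys; user 13's key is planted in the
# adversary's search range so this sketch is guaranteed to find a collision.
user_keys = {u: random.randrange(N) for u in range(64)}
user_keys[13] = 200
ciphertexts = {u: E(k, m) for u, k in user_keys.items()}

# The adversary enciphers m under 256 keys of its own choosing ...
table = {E(k, m): k for k in range(256)}

# ... and checks for matching entries between the two tables: a match
# reveals that user's key.
recovered = {u: table[c] for u, c in ciphertexts.items() if c in table}
```

Because the toy cipher is injective in the key for the fixed message m, every match here identifies the user's key exactly; a real attack would confirm a candidate with a second plaintext/ciphertext pair.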
The advantage formula is somewhat complex, but if the block length n is greater than the key length k, then we essentially achieve the dream bound. So here's a visualization of the bounds that you've just seen. The hybrid argument tells us that double encryption has about 80 bits of mu security, but double encryption is actually much stronger, providing about 150 bits of mu security. Thus, there's a huge gap between the security of double encryption and that of single encryption. Our proof technique, which we call almost proximity, is very general, as I mentioned earlier, but because of that, it can be overly complex in some settings. We therefore provide a simplified framework of our technique that is more restricted in scope, but hopefully improves usability substantially. This simplified treatment can handle many real-world settings, such as the Galois/Counter Mode, but unfortunately it doesn't work well with double encryption. We therefore provide another, specialized treatment of our technique that is tailored to the specific setting of double encryption. This specialization can be viewed as a generalization of our point-wise proximity technique from Crypto last year. So let me now introduce the simplified framework. Under this setting, one wants to bound the distinguishing advantage between two randomized systems, S0 and S1. Here S1 is the real system, implementing many instances of a construction Pi that is built on top of an ideal primitive. S0 is the ideal system, implementing many functions f1, f2, and so on that are sampled from some prescribed distribution, independent of each other and independent of the ideal primitive. Each system provides access to two oracles, one for construction queries and the other for primitive queries. In the context of double encryption, the first oracle is used to encrypt and decrypt via double encryption, and the second oracle provides access to the ideal cipher.
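The two systems S0 and S1 can be sketched as follows. This is an assumed toy model (8-bit domain, random functions instead of random permutations, lazy sampling), kept deliberately small; in the talk's setting the ideal instances would be random permutations and the primitive an ideal cipher.

```python
import random

class RealSystem:
    """S1: many instances of a construction Pi, all built on top of one
    shared ideal primitive (toy model, 8-bit domain)."""
    def __init__(self, pi, primitive):
        self.pi, self.primitive = pi, primitive
        self.keys = {}                               # per-user keys, sampled lazily
    def construction(self, user, x):                 # construction query for a user
        k = self.keys.setdefault(user, random.randrange(256))
        return self.pi(self.primitive, k, x)
    def primitive_query(self, x):                    # direct access to the primitive
        return self.primitive(x)

class IdealSystem:
    """S0: independent random functions f_1, f_2, ..., each independent of
    the others and of the ideal primitive."""
    def __init__(self, primitive):
        self.primitive = primitive
        self.tables = {}                             # f_i, sampled lazily point by point
    def construction(self, user, x):
        t = self.tables.setdefault(user, {})
        if x not in t:
            t[x] = random.randrange(256)
        return t[x]
    def primitive_query(self, x):
        return self.primitive(x)
```

The adversary's job is to tell the two apart while querying both oracles; note that the primitive oracle behaves identically in both worlds.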
The arguments of the queries can further encode some information to specify, say, whether it is an encryption query or a decryption one. We will use the following metrics to account for the cost of the adversary: the number q of construction queries, the number p of primitive queries, and a generic data complexity sigma of the construction queries. You might think of sigma as the total length of the construction queries, but it is much more general than that. We assume that if one makes q construction queries of complexity sigma, internally these invoke at most t·sigma primitive queries. When the adversary interacts with the two systems, its queries and answers are recorded in a transcript tau. So the advantage of the adversary is at most the statistical distance between the distributions of the transcripts that the two systems produce. To bound this statistical distance, we classify the single-user transcripts into good and bad ones. This classification, however, involves only construction queries. That is, if two transcripts have the same construction queries and answers, then either both of them are good or both of them are bad. Based on that, we then classify the mu transcripts into nice and not-nice ones. A mu transcript is nice if, for every user, the induced transcript for that user is good. After the classification, we then bound the probability that one encounters a not-nice transcript in the ideal world. This analysis is in the multi-user setting, but because we are in the ideal world, it is often simple. Now note that the statistical distance is a sum of products. If we draw rectangles whose widths are the first terms of the products and whose heights are the second terms, then the statistical distance is the area of those rectangles. Here the green area corresponds to the not-nice transcripts and the blue area corresponds to the nice ones.
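The split of the statistical distance into a not-nice part and a nice part can be written out directly. A minimal sketch, over a finite set of transcripts with explicit probability tables (an assumed toy setup; p0 plays the ideal world, p1 the real world):

```python
def statistical_distance(p0, p1):
    """SD(S0, S1): the sum over transcripts tau of max(0, p0(tau) - p1(tau))."""
    return sum(max(0.0, p0[t] - p1[t]) for t in p0)

def nice_split_bound(p0, p1, nice):
    """Replace the 'green area' (not-nice transcripts) by its total ideal
    probability, and keep the exact 'blue area' over the nice transcripts."""
    green = sum(p0[t] for t in p0 if not nice(t))            # Pr_ideal[not nice]
    blue = sum(max(0.0, p0[t] - p1[t]) for t in p0 if nice(t))
    return green + blue
```

Since each not-nice term max(0, p0 - p1) is at most p0, the split is always an upper bound on the statistical distance; the point of the framework is that the green term is easy to bound in the ideal world.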
The bound that we just obtained allows us to replace the green area by the orange rectangle. We now only need to bound the blue area using some single-user quantities. To achieve that, we consider an arbitrary good single-user transcript tau and establish a bound on the ratio of the real and ideal probabilities. This is exactly what one would do to obtain a single-user bound via the H-coefficient technique. We then factor the bound into two terms, epsilon and epsilon prime. The first one must be a superadditive function, meaning that epsilon must satisfy this technical inequality. Many common advantage formulas, such as (q^2 + sigma^2) / 2^n, are superadditive. Having obtained these single-user quantities, we now need to translate them into the multi-user setting. For simplicity, let's start with a non-adaptive adversary A, meaning that the adversary has to fix the way it distributes its resources at the very beginning. So suppose that the adversary makes q_i construction queries of complexity sigma_i on user i, and assume that any single-user adversary B has advantage at most epsilon plus epsilon prime. Then, by a hybrid argument, the advantage of A is at most a sum in which each summand is epsilon plus epsilon prime. The first argument of these functions, however, is p plus t·sigma instead of just p, because during the hybrid argument we have to simulate some construction queries, and these involve making primitive queries. When we sum over all users, because epsilon is superadditive and there are at most q users, the sum is at most epsilon plus q·epsilon prime. The argument that you've just seen, however, only works for a non-adaptive adversary. The main issue in the multi-user setting is that the adversary can adaptively distribute its resources.
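The superadditivity requirement and the collapsing of the hybrid sum can be checked numerically. A sketch under assumed concrete shapes for epsilon and epsilon prime (the function bodies are illustrative, not the talk's actual formulas):

```python
def eps(p, q, sigma, n=64):
    """A common advantage shape: (q^2 + sigma^2) / 2^n.

    It is superadditive in (q, sigma), i.e.
    eps(p, q1 + q2, s1 + s2) >= eps(p, q1, s1) + eps(p, q2, s2).
    """
    return (q * q + sigma * sigma) / 2.0 ** n

def eps_prime(p, q, sigma, n=64):
    """An assumed per-user term, nondecreasing in all of its arguments."""
    return (p + q + sigma) / 2.0 ** n

def mu_bound(p, t, users):
    """Hybrid argument for a non-adaptive adversary.

    users: a list of (q_i, sigma_i) resource allocations. Each hybrid step
    is charged eps + eps' at (p + t*sigma, q_i, sigma_i): the first
    argument grows to p + t*sigma because simulating construction queries
    costs primitive queries. Superadditivity of eps, plus the fact that
    there are at most q users, collapses the sum to
    eps(p + t*sigma, q, sigma) + q * eps_prime(p + t*sigma, q, sigma).
    """
    q = sum(qi for qi, _ in users)
    sigma = sum(si for _, si in users)
    per_user_sum = sum(eps(p + t * sigma, qi, si) + eps_prime(p + t * sigma, qi, si)
                       for qi, si in users)
    collapsed = eps(p + t * sigma, q, sigma) + q * eps_prime(p + t * sigma, q, sigma)
    return per_user_sum, collapsed
```

For any allocation of (q_i, sigma_i) across users, the per-user hybrid sum never exceeds the collapsed single-expression bound.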
To deal with that, we instead do the hybrid argument at the transcript level, because everything is fixed there, but this in turn requires the single-user bound at the transcript level as well. However, that is exactly what we got when we bounded the ratio of the real and ideal probabilities. Now recall that if the adversary is non-adaptive, then one can bound the blue area by epsilon plus q·epsilon prime. For adaptive adversaries, using the hybrid argument at the transcript level, one obtains essentially the same bound, but now there's an extra multiplicative factor of two, which is probably an artifact of our technique. The framework that you've just seen, however, doesn't work with double encryption. We therefore provide another, specialized framework of our technique to deal with that. Our goal is to obtain the mu bound using only single-user (su) quantities. To achieve that, we again classify the su transcripts into good and bad ones, but this time there's no restriction on the classification, meaning that it can involve primitive queries, and we again bound the probability that one encounters a bad su transcript in the ideal world. Having done so, we can now restrict our attention to good su transcripts and again establish a bound on the ratio of the real and ideal probabilities. We then factor this into three terms, the last of which involves a transcript-dependent quantity, Coll(tau). To get an intuition for what this means, consider this specific transcript. Here, if one makes a construction query to encrypt x and gets a string y, we draw a blue arrow from x to y. Likewise, if one makes a primitive query to encipher u under a key k1 and gets an answer v, we correspondingly draw a red arrow from u to v. Coll(tau) is simply the number of red arrows in which one of the two endpoints is hit by a blue arrow. We now need to translate these su quantities into the mu setting, by using a hybrid argument at the transcript level as before.
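Coll(tau) can be computed mechanically from the arrow picture. A small sketch, assuming a particular reading of "hit" (the red arrow's start coincides with a blue arrow's start, or its end with a blue arrow's end; the slide's picture is the authority here, so this reading is an assumption):

```python
def coll(blue, red):
    """Coll(tau): the number of red (primitive) arrows in which one of the
    two endpoints is hit by a blue (construction) arrow.

    blue: list of (x, y) construction query/answer pairs.
    red:  list of (key, u, v) primitive query/answer pairs.
    """
    blue_starts = {x for x, _ in blue}
    blue_ends = {y for _, y in blue}
    return sum(1 for _, u, v in red if u in blue_starts or v in blue_ends)
```

For instance, with one blue arrow 1 -> 9 and red arrows 1 -> 5 and 5 -> 9, both red arrows are counted, while a red arrow 2 -> 3 is not.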
Under this translation, epsilon again blows up by a factor of two, thanks to its superadditivity, and epsilon prime and epsilon star both blow up by a factor of 2q. To get an intuition for the blow-up of the last term, note that for a mu transcript obtained in the ideal world, it's very unlikely that a red arrow is hit by too many blue ones. The specific threshold here is obtained by a balls-into-bins analysis, and the blow-up is essentially twice that threshold. And here's the resulting theorem to lift the su conditions to the mu setting; the term 2^{-n} is the probability that some red arrow is hit by too many blue ones. We now apply our technique to the setting of double encryption. So let's consider an arbitrary su transcript, and extend it with the keys K1, K2. In the real world, these are the actual keys of double encryption, revealed at the end once the adversary has finished querying. In the ideal world, these are random strings, independent of anything else. If the graphical representation of the extended transcript contains some chains, as highlighted here, it is trivial to distinguish. It's therefore important to bound the probability of having chains when we extend the transcript tau in the ideal world. But the bound is inferior if there are too many red arrows hitting the same point: for example, here we have six red arrows, but there are nine paths, leading to nine possible chains. To deal with that, we define a su transcript to be bad if it has B or more red arrows hitting the same point, where the threshold B is selected so that the probability of having a bad transcript is very small. One can then obtain a bound on the ratio of the real and ideal probabilities. So, summing up, today we propose almost proximity, a powerful technique for handling multi-user bounds. When one applies it to the setting of double encryption, one realizes that double encryption does improve mu security substantially.
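The six-arrows/nine-chains example can be reproduced by counting the chains explicitly. A sketch, assuming a chain means two red arrows that compose into a blue arrow (the distinguishing event for double encryption):

```python
def count_chains(blue, red):
    """Count chains x -(k1)-> m -(k2)-> y whose two red arrows compose
    into a blue arrow x -> y."""
    by_start = {}
    for k, u, v in red:
        by_start.setdefault(u, []).append(v)
    chains = 0
    for x, y in blue:
        for m in by_start.get(x, []):      # first red arrow out of x
            chains += by_start.get(m, []).count(y)   # second red arrow into y
    return chains
```

With three red arrows from the same midpoint on each side of a single blue arrow, six red arrows yield 3 x 3 = 9 chains, which is exactly why too many red arrows hitting the same point makes the bound inferior.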
The bound is tight if the block length is greater than the key length, but for the particular case where the block length is very small compared to the key length, we cannot find any matching attack, and thus leave it as an open problem. That's it. Thank you.