Hi everyone, I'm Yusai Wu. Today, I'm going to talk about the tight security of a variant of the three-round key-alternating cipher in which all the round permutations are the same. This is joint work with Liqing Yu, my advisor Zhenfu Cao, and Xiaolei Dong, and we are all from ECNU. First, let's take a quick look at this work. It aims to study the security of r-KACSP, the key-alternating cipher in which all the round permutations are the same. To do that, we propose some general techniques for handling the dependence caused by using the same round permutation. More specifically, the techniques consist of a new representation, a type of combinatorial problem for which we also give a solution framework, and some specific tricks which are useful during the proof. Based on these new techniques, we finally obtain a tight bound for 3KACSP, and our techniques also give some evidence that a similar result holds for general r-KACSP. Okay, this is the outline of my talk. We start from the motivation of this work. Then we will review the previous work and also show what's new in our results. After that, we will give an overview of the proof. At last, we summarize this work and give some open problems. In symmetric cryptography, a typical provable-security result is like this: if the underlying components satisfy assumption A, then the resulting construction achieves a certain security level. Here, the measure of security often depends on the model we consider. This type of result can be used to measure the soundness of design rationales for different objects, such as block ciphers, hash functions and so on. First of all, note that the assumption A in such a result is closely related to its practical significance. As a result, we wish the assumption to be as close to the actual implementations as possible. The key-alternating cipher is well known as it captures the high-level structure of many SPN block ciphers, such as AES.
The following figure illustrates the r-round KAC, in which all the round keys and round permutations are chosen independently. At this point, we can ask our question: is the KAC construction good enough? First, we recall a well-known result, proved by Chen and Steinberger and later refined by Hoang and Tessaro. It says that if the round keys and round permutations are chosen independently and uniformly at random, then we get a tight security bound for r-KAC. However, for reasons of efficiency and cost, practical ciphers usually use the same round permutation and generate the round keys from a short master key through a deterministic algorithm. Obviously, the round permutations and round keys are then not independent at all. Thus, the assumption in the well-known result is too strong, and there still exists a big gap between theory and practice. To reduce this gap, we have to consider KAC with dependence. In such a construction, the underlying components are no longer independent, and it is closer to practical ciphers. Obviously, there are a lot of variants. To match practical ciphers, our ultimate goal is to minimize the KAC construction; in other words, we should find the weakest KAC variant having the same security level. In this work, we focus on a very natural variant: the KAC with a single permutation (KACSP). We can see that in KACSP, the usage of round permutations is minimized and matches practical implementations. Okay, we now move to the second part. In this part, we will review some related work and also talk about our results. For the classic KAC construction, the security bound has been settled completely. The question of the security bound of r-KAC was proposed in 2012 and was solved in 2014; we can see that the development was rather fast. By contrast, the development in the field of KAC with dependence is much slower, since the analysis usually becomes very involved when the underlying components are no longer independent.
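To make the two constructions concrete, here is a minimal Python sketch of a toy model over 8-bit blocks; the function names `kac` and `kacsp` are mine, not the paper's, and permutations are modeled as lookup tables:

```python
import random

N = 2 ** 8  # toy block size: 8-bit blocks

def random_perm(rng):
    """Sample a uniformly random permutation of {0, ..., N-1} as a lookup table."""
    table = list(range(N))
    rng.shuffle(table)
    return table

def kac(x, keys, perms):
    """Classic r-round KAC: r independent permutations and r+1 round keys."""
    for k, p in zip(keys[:-1], perms):
        x = p[x ^ k]
    return x ^ keys[-1]

def kacsp(x, keys, perm):
    """KACSP: the same permutation is reused in every round."""
    for k in keys[:-1]:
        x = perm[x ^ k]
    return x ^ keys[-1]

rng = random.Random(0)
keys = [rng.randrange(N) for _ in range(4)]   # k0..k3 for r = 3 rounds
perms = [random_perm(rng) for _ in range(3)]  # three independent permutations
p = random_perm(rng)                          # one shared permutation

# Both variants are permutations of the domain, but in kacsp the rounds
# are correlated because they all use the same table p.
assert sorted(kac(x, keys, perms) for x in range(N)) == list(range(N))
assert sorted(kacsp(x, keys, p) for x in range(N)) == list(range(N))
```

The sketch makes the dependence visible: the security proof for `kac` may sample each round permutation independently, while any argument about `kacsp` must account for every round reading the same table.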
Dunkelman et al. initiated the study of minimizing the one-round KAC (the Even-Mansour scheme), and they showed that several minimized variants provide the same level of security. After that, the best-known work was given by Chen et al. at CRYPTO 2014. They proposed several two-round KAC variants having almost the same security level. However, we still know little about higher-round KAC. In this work, we prove a tight bound for 3KACSP. Based on the findings in our proof, we also conjecture that this result holds for any r-KACSP. Compared to Chen et al.'s work, our methodology stands at a higher level of abstraction, so the results can be generalized to higher rounds. Also, we solved the three-round case, which is much more involved than the two-round cases. An important point is that the tricks used in Chen et al.'s work can be adapted to solve the two-round sub-problems inside 3KACSP; this idea is very important for general r-KACSP. For more than three rounds, the most different thing is that we have to deal with more than one sub-problem, and it is indeed a challenging task to balance all the sub-problems and get the desired bound. In addition, we find that the bound for a higher-round case has more room for adjustment. Okay, we now move to the third part. In this part, we will show you an overview of our proof. First, let's make the indistinguishability framework clear. The adversary D can interact with two permutation oracles, denoted the inner oracle and the outer oracle. There are two worlds. In either world, the inner oracle is a uniformly random permutation (URP) denoted P. In the ideal world, the outer oracle is an independent URP denoted E, while in the real world, the outer oracle is r-KACSP computed from P and a random key. In the information-theoretic setting, we can assume that D is a deterministic, computationally unbounded algorithm. We denote its numbers of queries to the inner and outer oracles by q_P and q_E, respectively.
Furthermore, we will give the actual key to D at the end of its interaction in the real world; accordingly, we give it a dummy key if it is interacting with the ideal world. The insecurity of r-KACSP against any CCA adversary making at most q_E queries and q_P queries can then be defined by the advantage formula. In our proof, we use the well-known H-coefficient method to upper-bound the advantage of any adversary. In brief, the set of attainable transcripts is divided into two parts: the good transcripts and the bad transcripts. For any good transcript, the ratio of its real-world probability to its ideal-world probability has a uniform lower bound, say 1 minus epsilon_1, and the probability of obtaining a bad transcript in the ideal world is at most epsilon_2. Then the advantage is bounded by the sum of epsilon_1 and epsilon_2. For using this method, the key point is to determine the partition. Intuitively, if the ratio is very close to 1, then the transcript should be a good one. To find that out, we use a simple fact. Consider an arbitrary transcript tau. We can define a value p(tau) for it, where p(tau) is the probability that r-KACSP extends Q_E when P is a URP extending Q_P. Then the ratio equals a constant times the value p(tau); thus, judging the ratio is equivalent to calculating the value of p(tau). Actually, our work makes an in-depth study of the value of p(tau), from which we can judge whether the transcript tau is good. To characterize the underlying problems, let's take a quick look at the new representation. In brief, we use directed edges to represent the binary relation induced by the permutation: an edge from x to y means P(x) = y. Naturally, we can also define a directed path, which consists of directed edges. For instance, this formula can be denoted as a path as follows. In this path, we have two edges, and we call x and y the source and the destination of this path, respectively. Next, let's see a special kind of path called a target path. In such a path, the source and the destination are known, but all the inner nodes are undefined. Thus, a target path has a form like this.
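In hedged notation (the symbols below are mine, chosen to match the description in the talk), the quantities just introduced fit together as follows:

```latex
% H-coefficient method: partition the attainable transcripts into good and bad.
\frac{\Pr[\mathrm{real} \Rightarrow \tau]}{\Pr[\mathrm{ideal} \Rightarrow \tau]}
   \;\ge\; 1 - \varepsilon_1 \quad \text{for every good } \tau,
\qquad
\Pr[\mathrm{ideal\ transcript\ is\ bad}] \;\le\; \varepsilon_2
\;\Longrightarrow\;
\mathbf{Adv}^{\mathrm{cca}}_{r\text{-}\mathrm{KACSP}}(D) \;\le\; \varepsilon_1 + \varepsilon_2.

% For a fixed transcript tau = (Q_E, Q_P, key), the ratio reduces to
\frac{\Pr[\mathrm{real} \Rightarrow \tau]}{\Pr[\mathrm{ideal} \Rightarrow \tau]}
   \;=\; C \cdot p(\tau),
\qquad
p(\tau) \;=\; \Pr_{P}\!\big[\, r\text{-}\mathrm{KACSP}^{P}\ \text{extends}\ Q_E
   \;\big|\; P\ \text{extends}\ Q_P \,\big],
```

so lower-bounding p(tau) for good transcripts is exactly what the rest of the proof does.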
In addition, we define a useful notion named strongly disjoint. Let Q1 and Q2 be two sets of query-answer pairs. Then we say that Q1 is strongly disjoint from Q2 if all the x_i and u_j are distinct, and all the y_i and v_j are distinct. Using the new representation, computing the value of p(tau) can be reduced to a type of combinatorial problem called completing a group of target paths. The problem is rather intuitive from its name. We are given a group of target paths built from the same construction, whose sources and destinations are fixed by the set Q1; in addition, the set of known points of P is Q2, and we know that Q1 is strongly disjoint from Q2. Then we want to know how many assignments of P can connect all the paths correctly; of course, P must extend Q2. For such a permutation P, we refer to the set of edges used for completing the paths as a core. As a warm-up, let's see a simple example. In this example, P is a permutation on a small domain, and Q1 contains two pairs, (0, 4) and (1, 0). The construction invokes P twice and maps x to the value shown. For simplicity, we let Q2 be the empty set. Thus, the two target paths we want to complete are as follows, where star_1 and star_2 are the inner nodes to be assigned. It is easy to find several assignments which complete the two paths. The first case is to let star_1 and star_2 take the two values shown; then we can complete the two paths as follows, and the core contains four edges. The second case is like this; the two paths can also be completed, and the core contains only three edges. The third case is as follows; as a result, its core contains only two edges. From this simple example, we can see that the cardinality of the core may differ. In essence, the main idea of our framework is to classify the permutations P according to the cardinality of their corresponding cores. To solve such problems, we propose a general counting framework.
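The spirit of this warm-up can be checked by brute force. Below is a minimal Python sketch; since the slide's exact construction (which involves key additions) is not reproduced here, I assume the simplest two-call construction x -> P(P(x)) on a toy domain {0, ..., 7}, so the achievable core sizes may differ from the slide's:

```python
# Target paths for Q1 = {(0,4), (1,0)} under the toy construction x -> P(P(x)):
#   0 -> *1 -> 4   and   1 -> *2 -> 0,  with inner nodes *1, *2 to be assigned.
N = 8
Q1 = [(0, 4), (1, 0)]

core_sizes = set()
for s1 in range(N):
    for s2 in range(N):
        # Edges that P must contain to complete both paths (a "core");
        # the set automatically merges shared edges.
        core = {(0, s1), (s1, 4), (1, s2), (s2, 0)}
        sources = [e[0] for e in core]
        dests = [e[1] for e in core]
        # The core must be a partial injection, i.e. extendable to a permutation.
        if len(set(sources)) == len(sources) and len(set(dests)) == len(dests):
            core_sizes.add(len(core))

print(sorted(core_sizes))  # cores of different cardinalities occur
```

For this toy construction, the assignment star_1 = 1, star_2 = 4 makes the edge (1, 4) shared between the two paths, shrinking the core from four edges to three, which is exactly the phenomenon the framework classifies by.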
For the convenience of counting, we actually only consider the cores which are strongly disjoint from Q2; it means that we will not use any edge in Q2 to connect the target paths. From the resulting inequality, we know that if we can count the number of cores of each specific cardinality m, and also know how to calculate the summation, then we get a lower bound on p(tau). Following this intuition, we propose a general framework. It takes four steps. In step one, we model the problem; that is, we determine the sets Q1 and Q2 and the construction. This task is often trivial. In step two, we need to know how to construct a core with a specific cardinality. More specifically, we complete the target paths by assigning some key points, and we control the cardinality of a core by constructing a specific number of shared edges. In the next step, we count how many cores can be constructed as in step two. During the proof, we choose a proper RoC for each assignment, and the sizes of the RoCs determine the number of cores. Roughly, a RoC is the set of elements which are suitable candidates for an assignment. At last, we calculate the summation; during the calculation, we use a tail inequality as well as a combinatorial inequality. We have said that computing p(tau) can be reduced to this type of problem. In more detail, there are three sub-problems that should be solved. The first two sub-problems are both two-round cases, with two different two-round constructions. During our proof, we solve these two problems by the general framework, with techniques adapted from Chen et al.'s work; of course, some extra difficulties have to be handled. The third one is the three-round case. This problem is completely new and much more involved than the two-round cases. Fortunately, we can still solve it with the general framework. We should point out that knowing how to solve the sub-problems individually is still far from enough.
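As a sketch of the summation in the last step (my own toy formulation, not the paper's exact bound): if we have counted, for each cardinality m, the number of cores of size m that are strongly disjoint from Q2, then a URP extending Q2 contains any fixed such core with probability 1 / ((N - |Q2|)(N - |Q2| - 1)...(N - |Q2| - m + 1)), and since distinct cores correspond to disjoint events, summing gives a lower bound on p(tau):

```python
from math import prod

def p_lower_bound(N, q2, core_counts):
    """Toy lower bound on p(tau).

    N           -- domain size of the permutation P
    q2          -- |Q2|, the number of already-fixed points of P
    core_counts -- dict {m: number of strongly disjoint cores of size m}

    A fixed core of size m, disjoint from Q2, lies inside a URP extending Q2
    with probability 1 / ((N - q2)(N - q2 - 1) ... (N - q2 - m + 1)); distinct
    cores give disjoint events, so the sum lower-bounds p(tau).
    """
    return sum(
        count * prod(1.0 / (N - q2 - i) for i in range(m))
        for m, count in core_counts.items()
    )

# Toy usage: one core of size 2 and three cores of size 3, N = 8, Q2 empty.
print(p_lower_bound(8, 0, {2: 1, 3: 3}))
```

In the real proof the core counts themselves come from the sizes of the RoCs chosen in step three, and the summation is then bounded using the tail and combinatorial inequalities mentioned above.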
More specifically, it is a bigger challenge to combine all the results to get the desired bound. To do that, we have to design all the RoCs in the different sub-problems integrally, and it is indeed a huge project. There are numerous technical specifics in the formal proof; we refer you to the paper on ePrint for more details. Okay, in the last part, we will summarize this work and also give some open problems. To the best of our knowledge, this is the first time the security of KACSP has been studied at such a high level, and we also developed some general techniques to handle the dependence caused by sharing the same permutation. Based on the new techniques, we obtain a tight bound for 3KACSP in the random permutation model. Moreover, based on the findings in this work, we conjecture that r-KACSP has the same security level as r-KAC. At the end of this talk, we leave some open problems. First, this work aims to minimize the usage of random permutations, so how to minimize the key space is still open. Secondly, following our methodology, the proof for higher-round cases would be fairly involved; thus, it is interesting to find a simpler way to analyze general r-KACSP. Thirdly, we introduced a type of combinatorial problem in this work, and such problems seem to be rather general, so we hope to find more applications for them. Okay, this is everything I wanted to say. Thank you very much for listening.