Thank you. Good morning, everyone. I'm Kev Sinckel from the Institute of Information Engineering at the Academy of Sciences. It's a great pleasure to be here with you today. My topic is new collision attacks on round-reduced Keccak, and my talk covers these aspects: a short description of Keccak, a comparison of previous work with our contribution, and the main ideas in our work. Keccak is known as the winner of the SHA-3 competition. It applies the sponge construction. The Keccak-f permutation is an iterated operation on a 5-by-5 array of 64-bit lanes. Note that the last c bits of the initial state are set to 0, so they are uncontrollable by the attacker. There are 24 rounds in the permutation, and each round consists of five steps. The state is usually illustrated as a three-dimensional array of bits, and its lower-dimensional sub-arrays are called slices, rows, lanes, and columns, as is common terminology. The effect of the theta step is to XOR each bit with the parities of two columns. The rho step is a lane-level rotation, and the pi step is a permutation on the lanes. The chi step is the only nonlinear step: it XORs each bit with a nonlinear function of two other bits in its row, so it can also be regarded as a five-bit S-box operating on each row of the state. And the iota step adds a round constant to the state. So this is the specification of these steps. Just remember, the only nonlinear operation is chi, and we write the composition of theta, rho and pi as L.
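To make the chi step concrete, here is a minimal Python sketch of chi viewed as a 5-bit S-box acting on one row. This is my own illustration, not reference code, and the bit-to-column indexing is one common convention chosen for this sketch.

```python
def chi_sbox(x):
    """Apply the Keccak chi step to one 5-bit row value x.

    Output bit i is input bit i XORed with a nonlinear (NOT-AND)
    function of the next two bits along the row.
    """
    bits = [(x >> i) & 1 for i in range(5)]
    out = [bits[i] ^ ((bits[(i + 1) % 5] ^ 1) & bits[(i + 2) % 5])
           for i in range(5)]
    return sum(b << i for i, b in enumerate(out))

# chi permutes the 32 possible row values, so the step is invertible
assert sorted(chi_sbox(x) for x in range(32)) == list(range(32))
```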
We focus on collision attacks in this work. The best previous collision attacks on the Keccak family include practical results on three-round and four-round members; here, the suffix number is the digest size. Theoretical results include four-round and five-round members. All these results were presented by Dinur, Dunkelman and Shamir in about 2012 and 2013. In our work, we find practical collisions on five-round SHAKE128, an instance of the SHA-3 standard. We also find collisions on two five-round challenge versions with smaller digest size or state size; these smaller versions were proposed by the Keccak team to promote cryptanalysis of this family. And we improve the theoretical results to five-round Keccak-224 and a six-round challenge version, with complexities below the birthday bound. We developed an extended algebraic and differential hybrid method. This method is based on a crucial observation: the S-box can be re-expressed as a linear transformation when the input values are constrained to certain affine subspaces. We also developed a dedicated strategy for searching differential trails. Now let's move on to the overview of the collision attack, including the strategy of S-box linearization and how it can be used to build a connector covering two rounds. The five-round collision attack is divided into a three-round differential that covers the last three rounds and a two-round connector that links the input difference of the differential with the initial value. Once a high-probability differential, from ΔS_I to ΔS_O, is found, we build two equation systems, E_Δ and E_M, to find message pairs such that after padding and two rounds of the permutation, the difference of the messages is exactly the target difference ΔS_I. The equation system E_Δ is built on differences; it is well-determined, and its unique solution is the difference of the two messages. The equation system E_M is built on messages; it is under-determined, and its solution space is the message space in which we will search for collisions over the last three rounds. So the key point is how to build the equation systems E_Δ and E_M. Some previously studied properties of the Keccak S-box can help. Firstly, given an input-output difference pair (δ_in, δ_out), the set of all input values that satisfy the differences is an affine subspace.
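That first S-box property can be checked exhaustively. The sketch below is my own illustration (using one conventional bit ordering for chi); over GF(2), a non-empty set is an affine subspace exactly when it is closed under the operation a ^ b ^ c.

```python
def chi_sbox(x):
    """One common bit convention for the 5-bit Keccak chi S-box."""
    bits = [(x >> i) & 1 for i in range(5)]
    return sum((bits[i] ^ ((bits[(i + 1) % 5] ^ 1) & bits[(i + 2) % 5])) << i
               for i in range(5))

def solution_set(d_in, d_out):
    """All inputs x such that the pair (x, x ^ d_in) gives output difference d_out."""
    return [x for x in range(32) if chi_sbox(x) ^ chi_sbox(x ^ d_in) == d_out]

# Every non-empty solution set is an affine subspace:
# it is closed under a ^ b ^ c.
for d_in in range(1, 32):
    for d_out in range(32):
        S = solution_set(d_in, d_out)
        assert all(a ^ b ^ c in S for a in S for b in S for c in S)
```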
Secondly, given the output difference δ_out, the set of all compatible input differences contains at least five two-dimensional affine subspaces. Based on these properties, Dinur et al. built a one-round connector to find message pairs such that, after padding and one round of the permutation, the difference of the pair is exactly the target difference. This algorithm contains a difference phase and a value phase. The aim of the difference phase is to fix the input difference to the chi layer, β₀. For each active S-box in the chi layer, instead of choosing the input difference directly, they choose an affine subspace of four candidate differences. This is a more flexible approach that avoids inconsistencies in the system. Once β₀ is given, the value phase reduces to solving linear equations to obtain the actual message pairs that lead to the target difference ΔS_I. Our idea is to extend this one-round connector to a two-round one, and the hard core of this extension is that there is a nonlinear chi layer in the first round. We tackle this problem based on some further properties of the S-box. We observed that when the input values are constrained to certain affine subspaces, the S-box is equivalent to a linear transformation. So we call such an input affine subspace a linearizable affine subspace, or LAS for short. For example, when the input values are constrained to this two-dimensional LAS V, the S-box is equivalent to this linear transformation. As LASs are to be used together with differentials, we are more interested in those with fixed input and output differences, which relates to the differential distribution table of the Keccak S-box. We observed that when the DDT value is two or four, the set satisfying the difference is an LAS. However, when the DDT value is eight, the full set does not allow linearization. Exhaustive search for LASs shows that the largest LAS is of dimension two.
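The LAS observation is easy to verify by enumeration. The sketch below is my own illustration: it lists the two-dimensional affine subspaces inside one DDT-8 solution set and tests on which of them chi acts affinely (over GF(2), chi is affine on a 2-dim affine subspace exactly when the four outputs XOR to zero).

```python
from itertools import combinations

def chi_sbox(x):
    bits = [(x >> i) & 1 for i in range(5)]
    return sum((bits[i] ^ ((bits[(i + 1) % 5] ^ 1) & bits[(i + 2) % 5])) << i
               for i in range(5))

def is_las(a, b, c, d):
    """A 2-dim affine subspace {a,b,c,d} (so a^b^c^d == 0) is an LAS
    iff the four chi outputs also XOR to zero, i.e. chi is affine on it."""
    return chi_sbox(a) ^ chi_sbox(b) ^ chi_sbox(c) ^ chi_sbox(d) == 0

def two_dim_las_in(S):
    """All 2-dim affine subspaces of S on which chi is linearizable."""
    found = []
    for a, b, c in combinations(sorted(S), 3):
        d = a ^ b ^ c
        if d in S and d > c and is_las(a, b, c, d):  # d > c: count each 4-set once
            found.append((a, b, c, d))
    return found

# The DDT-8 solution set for input/output difference (1, 1) is not an LAS
# as a whole, but it contains exactly six 2-dimensional LASs.
S = [x for x in range(32) if chi_sbox(x) ^ chi_sbox(x ^ 1) == 1]
assert len(two_dim_las_in(S)) == 6
```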
However, such a three-dimensional set contains six two-dimensional subsets that are LASs. For example, the three-dimensional subspace corresponding to input difference one and output difference one contains these six LASs. So in our two-round connector, we use this property to linearize the first chi layer. Specifically, we build the equation systems E_Δ and E_M on the X variables, which are located before the first chi layer, and initialize E_Δ and E_M according to the uncontrollable bits in the initial state. Given α₂, which is the input difference of the differential, we randomly choose a compatible β₁, invert the linear layer to get α₁, and determine β₀ by Dinur et al.'s target difference algorithm. Then we constrain X to LASs so that the first chi layer is satisfied with probability one, and at the same time the output value of the first round, Y, is linear in X. We also constrain Z to subspaces to make sure that the second chi layer is satisfied with probability one. As Z is linear in Y and Y is linear in X, the equation systems imposed on Z can be converted into systems on X. So by solving this equation system, we get message pairs that bypass the first two rounds with probability one. So far, we have talked about the algebraic part of the method. Now let's see how to search for differential trails of high probability. An n-round differential trail is a sequence of differences α₁, β₁, ..., αₙ, βₙ, and the weight of the differential from βᵢ over the chi layer is denoted by wᵢ. An n-round trail core, defined by a series of βᵢ's, is a set of n-round trails where the first round has minimum weight and, in the last round, all compatible αₙ's are considered. There are three requirements that the differential trails should satisfy. Firstly, there should be no difference in the digest part, as we are looking for collisions. Secondly, the degree of freedom of the algebraic part must be large enough for the collision search.
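Both connector phases ultimately reduce to linear algebra over GF(2). As a generic sketch (not the authors' implementation), here is an incremental Gaussian elimination that detects inconsistency and tracks the remaining degrees of freedom; packing each equation into an integer is an encoding I chose for illustration.

```python
def add_equation(basis, eq, n):
    """Add one GF(2) equation to a basis (dict: pivot bit -> reduced row).

    An equation over n variables is an (n+1)-bit int: the low n bits are
    the coefficients, and bit n is the right-hand-side constant.
    Returns False iff the system becomes inconsistent (reduces to 0 = 1).
    """
    for pivot, row in basis.items():
        if (eq >> pivot) & 1:
            eq ^= row                      # eliminate that pivot variable
    if eq == 0:                            # dependent on earlier equations
        return True
    if eq == 1 << n:                       # coefficients cancelled, RHS = 1
        return False
    pivot = (eq & -eq).bit_length() - 1    # lowest remaining coefficient bit
    basis[pivot] = eq
    return True

# Toy system over x0, x1, x2:  x0^x1 = 1,  x1^x2 = 0,  x0^x2 = 1
basis = {}
for eq in (0b1011, 0b0110, 0b1101):
    assert add_equation(basis, eq, 3)
# rank 2, so 3 - len(basis) = 1 degree of freedom remains (2 solutions)
assert len(basis) == 2
```

In the attack's terms, every LAS and subspace constraint contributes such equations to E_M, and the degree of freedom left after elimination is the size of the message space searched in the last three rounds.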
The DF here is an estimate of the degree of freedom in the two-round connector. It cannot be predicted precisely, so we require this value to be larger than the weight of the last three rounds. Note that in the last round, only the weight related to the digest part is considered. And thirdly, we want the attack to be practical, so the weight of the last three rounds must be low enough. Our search starts from lightweight β₃'s where α₃ and α₄ are in the CP-kernel. In the forward direction, we test whether there is no difference in the digest part; in the backward direction, we traverse all compatible β₂'s, and in the trail core defined by β₂, β₃, β₄, we check whether the other two requirements can be satisfied. This is a summary of the differential trail cores we obtained. We list the number of active S-boxes from the second round to the fifth round, and also the weights of the differentials from the second round to the fifth round. The sum of w₂, w₃ and w₄ decides the final complexity of the collision attack. There are many active S-boxes in the second round, but that doesn't matter, because the second round is covered by the algebraic part. We also found a trail core with one more round. So with this algebraic and differential hybrid method, we practically find collisions on three versions and obtain theoretical results on two more. We list here the searching complexity in the differential phase, the degree of freedom in the algebraic phase, and the time spent on these two phases. As long as the degree of freedom is large enough for the search, it is very likely we can launch an effective attack, and in the cases where the degree of freedom is not large enough, we can consider messages of two blocks or longer. Our experiments all completed in less than three hours. And finally, I will give some directions for future work.
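Before the outlook, a note on the weights wᵢ used above: the per-S-box weight of a transition can be read off the differential distribution table, as in this sketch (my own illustration, with the same assumed bit convention for chi).

```python
import math

def chi_sbox(x):
    bits = [(x >> i) & 1 for i in range(5)]
    return sum((bits[i] ^ ((bits[(i + 1) % 5] ^ 1) & bits[(i + 2) % 5])) << i
               for i in range(5))

# Differential distribution table of the 5-bit chi S-box.
DDT = [[0] * 32 for _ in range(32)]
for x in range(32):
    for d_in in range(32):
        DDT[d_in][chi_sbox(x) ^ chi_sbox(x ^ d_in)] += 1

def sbox_weight(d_in, d_out):
    """Weight of one S-box transition; it holds with probability 2^-w."""
    return math.log2(32 / DDT[d_in][d_out])

# A round's weight w_i is the sum over its active S-boxes, and the
# differential probability of the trail is 2 to the minus total weight.
assert DDT[1][1] == 8 and sbox_weight(1, 1) == 2.0
```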
Apparently, as long as the degree of freedom is large enough, the linearization can be extended to more rounds. Actually, with three-round connectors, my colleagues have already found practical collisions on six-round versions, and this result will be presented in the rump session later today. Secondly, the S-box linearization can also be viewed as a row-level, or five-bit-level, linear approximation. The equivalence to a linear transformation is exact on subspaces; it can also be said to be correct with some probability, and in linear cryptanalysis, bit-level linear approximations are used to construct distinguishers. So do linearizations at other levels exist, and how can we find them? This kind of linear approximation could be used in cryptanalysis of both block ciphers and hash functions. And last, could systems of higher degree work? Systems of degree two might also be applied to build connectors. Okay, that's my presentation. Thanks for your attention. Are there any questions? [Question] I have a small one. You say you have already found three-round connectors. Do you think four is possible, or do the degrees of freedom not allow that? [Answer] That result is the six-round collisions, sorry. [Question] Do you think four-round connectors are something that might happen? [Answer] That depends on the experiments, on whether the degree of freedom is large enough. [Question] Okay, so maybe you will still be able to extend it to more rounds? You'll keep on working on this? [Answer] Yes. [Question] Okay, thank you.