Thank you for introducing me. I'm Senyang Huang from Tsinghua University, China. Our topic today is conditional cube attacks on round-reduced Keccak sponge function. Here are our contents today. We will introduce the Keccak sponge function and the cube tester. Then we will describe our new model, the conditional cube tester. And then we will apply our new model to several algorithms. We start from the Keccak sponge function. The Keccak sponge function family was designed in 2007. It was selected by NIST in 2012 as the SHA-3 winner, to be the next generation of standard hash function. Since then, cryptanalysis of Keccak has attracted increasing attention, but most of the attention is on the keyless setting. For example, many results have been obtained on collision attacks, preimage attacks, second-preimage attacks, and distinguishing attacks. Besides, in the keyed mode, the security of Keccak-MAC and Keyak has also been analyzed, and most of those results are based on the cube attack and cube-attack-like techniques. Here is the construction of the Keccak sponge function; because of the different sizes of the input, there are four versions here. The round function of Keccak has five steps, but the ι step does not impact our result, so we omit it here. The first three steps are linear ones, and the χ step is the non-linear one. The degree of χ is 2, so the outputs of n rounds of Keccak have degree no more than 2^n. Previous works choose cube variables in the column parity (CP) kernel; this is a very important property of the Keccak sponge function. They use this property to control the propagation of the cube variables and prevent them from being multiplied with each other in the first round. In our work, we will also consider the propagation in the second round. And the bit-tracing method, proposed by Xiaoyun Wang, is a very powerful technique for analyzing hash functions.
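As a quick sanity check on the degree claim, here is a small Python sketch (my own toy code, not from the talk; the helper names `chi_row` and `anf_degree` are mine) that implements χ on a single 5-bit row and verifies via the Möbius transform that every output bit has algebraic degree 2:

```python
def chi_row(a):
    """Keccak's chi step restricted to one 5-bit row:
    b[i] = a[i] XOR ((NOT a[i+1]) AND a[i+2])."""
    return [a[i] ^ ((a[(i + 1) % 5] ^ 1) & a[(i + 2) % 5]) for i in range(5)]

def anf_degree(truth_table, n):
    """Algebraic degree of an n-variable Boolean function given by its
    truth table, computed via the in-place Moebius transform."""
    c = list(truth_table)
    for i in range(n):
        for mask in range(1 << n):
            if mask & (1 << i):
                c[mask] ^= c[mask ^ (1 << i)]
    return max((bin(m).count("1") for m in range(1 << n) if c[m]), default=0)

# Every output bit of chi is quadratic, which is why n rounds of Keccak
# have degree no more than 2^n over the cube variables.
for out_bit in range(5):
    tt = [chi_row([(m >> j) & 1 for j in range(5)])[out_bit] for m in range(32)]
    assert anf_degree(tt, 5) == 2
```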
Inspired by these previous works, we propose a new approach: imposing bit conditions on the inputs to control the propagation of cube variables caused by the non-linear operation χ. Here is the structure of Keccak-MAC and Keyak. We assume that the message of Keyak is of two blocks. In our work, we first propose a key recovery attack on round-reduced Keccak-MAC. You can see from the table that we improve the existing results sharply, and we have a practical attack on 6-round Keccak-MAC. The second part of our work is to analyze Keyak. We extend one more round, and we have a practical attack on 7-round Keyak. The third part of our work is to design distinguishing attacks on the Keccak sponge function. This is a much stronger result than a Keccak-f distinguisher, because a Keccak-f distinguisher works on the whole state, while our result works only on the input part. And you can see that all of our distinguishers here are practical ones. Then we will discuss the cube tester. The cube tester was proposed by Dinur et al. If we have a Boolean function f over these variables and we can write f in this form, we can sum the value of f over all possible values of the cube. This sum, which we call the cube sum, is exactly the coefficient p_t of the term t. You can see that this cube sum can be taken as a higher-order derivative of the output polynomial. The existing results work on properties of the polynomial p_t, such as a low degree or a highly unbalanced truth table. In our work, we will prove that p_t is 0 when the bit conditions are satisfied. Then we will talk about the new model. We divide the cube variables into two kinds: conditional cube variables and ordinary cube variables. We will show the idea of the bit conditions here. We set the cube variable here.
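The cube-sum definition can be illustrated with a small Python sketch (the polynomial and the helper `cube_sum` are toy examples of my own, not from the talk): summing f over the cube {x0, x1} cancels every term that does not contain both variables, leaving exactly the coefficient polynomial p_t of the term x0·x1:

```python
from itertools import product

def cube_sum(f, cube_vars, n, fixed):
    """XOR of f over all assignments to the cube variables, with the
    non-cube variables held at the given fixed values; this equals the
    coefficient polynomial p_t evaluated at those fixed bits."""
    acc = 0
    for vals in product([0, 1], repeat=len(cube_vars)):
        x = [fixed.get(i, 0) for i in range(n)]
        for var, val in zip(cube_vars, vals):
            x[var] = val
        acc ^= f(x)
    return acc

# Toy polynomial: f = x0*x1*(x2 XOR x3) XOR x2*x3 XOR x4.
f = lambda x: (x[0] & x[1] & (x[2] ^ x[3])) ^ (x[2] & x[3]) ^ x[4]

# For the cube term t = x0*x1, the cube sum recovers p_t = x2 XOR x3.
assert cube_sum(f, [0, 1], 5, {2: 1, 3: 0}) == 1
assert cube_sum(f, [0, 1], 5, {2: 1, 3: 1}) == 0
```

This is the higher-order-derivative view: the cube sum differentiates f with respect to x0 and x1.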
You can see from the picture that it diffuses to the two black bits in the first round and then to the colorful bits. Our technique is to control the diffusion of the colorful bits. The idea of our new model is to attach some bit conditions to a cube tester. You can see that the colorful bits are related to the plaintext, and if we attach some bit conditions, the colorful bits will be controlled and the diffusion will not continue. We divide the cube variables into two kinds by the conditions: if we attach some conditions to control the propagation and the multiplication of a cube variable, it is a conditional cube variable; otherwise, the variable is called an ordinary cube variable. In the next theorem, we count the numbers of conditional cube variables and ordinary cube variables. If we have this many cube variables, we can prove that the term will not appear in the output polynomials; that is, p_t from the previous slide is equal to 0. Here, we use some properties of the Keccak sponge function to construct a conditional cube tester. Before we describe the properties, we give the bitwise derivative here. We want to point out that there is an equivalence between the differential characteristic and the bitwise derivative of Boolean functions when expressing the propagation of a cube variable. The advantage of the differential characteristic is that it is very straightforward: you can see which bits the cube variable is involved in. For the latter, the advantage is that we can state the properties mathematically. Here is the first property. If we want to control the diffusion in the step χ, we have to attach two bit conditions. You can see that with these two conditions, the diffusion of v0 is controlled. And in this picture, we compare the propagation of an ordinary cube variable and a conditional cube variable.
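The first property can be checked concretely on a single row. In the sketch below (toy code of my own; reading the two bit conditions at row level as a[i-1] = 1 and a[i+1] = 0 is my interpretation of the slide), flipping the cube-variable bit a[2] normally affects several output bits of χ, but with the two conditions in place the difference stays confined to one bit:

```python
def chi_row(a):
    # chi on one 5-bit row: b[i] = a[i] XOR ((NOT a[i+1]) AND a[i+2])
    return [a[i] ^ ((a[(i + 1) % 5] ^ 1) & a[(i + 2) % 5]) for i in range(5)]

def diff_bits(a, i):
    """Output bits of chi that flip when input bit i is flipped."""
    b = list(a)
    b[i] ^= 1
    return [j for j in range(5) if chi_row(a)[j] != chi_row(b)[j]]

# Without conditions, flipping a[2] can diffuse to other output bits:
assert diff_bits([0, 0, 0, 0, 0], 2) == [0, 2]

# With the two bit conditions a[1] = 1 and a[3] = 0, the difference in
# a[2] is confined to output bit 2, whatever the remaining bits are.
for m in range(32):
    a = [(m >> j) & 1 for j in range(5)]
    a[1], a[3] = 1, 0
    assert diff_bits(a, 2) == [2]
```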
You can see that with the gray bits controlled, the conditional cube variable only affects 22 bits in the 1.5-round state. So the diffusion of a conditional cube variable takes less space than that of an ordinary cube variable, and in the next step χ there is less probability for the conditional cube variable to get multiplied with ordinary ones. We call a conditional cube variable in such a pattern a 2-2-22 one: two bits in the input state, two bits in the first-round state, and 22 bits in the 1.5-round state. We have the following two properties to describe multiplication and exclusion. The multiplication property is for two variables v0 in neighboring bits: they get multiplied in the χ step. The exclusion property says that you cannot control the propagation of these two variables simultaneously, because their bit conditions affect each other: you cannot fix a bit to be 0 and 1 at the same time. Based on these properties, we have some algorithms to determine the relationship between cube variables: we want to see whether they will get multiplied after the first round or the second round. You can see that in our algorithms we only need to compute the output differences and compare them to get the relationship. So the advantage of our three algorithms is that we can determine the relationship very efficiently, because we do not need to compute the exact representation of the Boolean functions. Then we will talk about the applications. The first application is the key recovery attack on round-reduced Keccak-MAC. Here is the general procedure of the attack. We just need to guess the values of the equivalent key bits involved in the bit conditions and calculate the cube sum. If we get a zero cube sum, the guess may be the right one; if it is not zero, we turn to another guess and compute the sum again. We analyze the complexity of the general procedure here; the attack takes this many Keccak calls, determined by the number of guessed equivalent key bits and the cube dimension.
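The multiplication property can itself be detected by cube sums, in the spirit of the relationship algorithms just described (single-row toy code of my own, not the paper's three algorithms): two variables placed in a row get multiplied by χ exactly when they sit in neighboring bits, and this shows up as a non-zero degree-2 cube sum on some output bit.

```python
from itertools import product

def chi_row(a):
    # chi on one 5-bit row: b[i] = a[i] XOR ((NOT a[i+1]) AND a[i+2])
    return [a[i] ^ ((a[(i + 1) % 5] ^ 1) & a[(i + 2) % 5]) for i in range(5)]

def multiplied(i, j):
    """True if variables placed at row positions i and j get multiplied
    by chi, detected as a non-zero cube sum over the pair on some
    output bit, for some setting of the remaining bits."""
    for base in range(32):
        a = [(base >> k) & 1 for k in range(5)]
        for out in range(5):
            s = 0
            for vi, vj in product([0, 1], repeat=2):
                b = list(a)
                b[i], b[j] = vi, vj
                s ^= chi_row(b)[out]
            if s:
                return True
    return False

# Neighbouring positions multiply; positions at distance 2 do not.
assert multiplied(0, 1) is True
assert multiplied(0, 2) is False
```

The point mirrors the talk: only output differences need to be computed and compared, never the full algebraic representation.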
Usually, when the number of conditional cube variables p grows, there will be more key bits involved in the bit conditions; the number of involved equivalent key bits grows faster than p. So in our case, we use one conditional cube variable and 2^{n+1} - 1 ordinary cube variables to construct our key recovery attack. Moreover, we choose this variable as the conditional cube variable because there are only two equivalent key bits in its bit conditions, which saves the cost of the attack further. In this algorithm, we search for the corresponding cube variables along with the conditional cube variable we have assigned: we run Algorithm 4 to get the other ordinary cube variables, derive the bit conditions, and guess the key in the attack. The next example is a simple illustration of the attack, where the key was generated randomly. You can see that when we guess the right value, we get a cube sum of 0, but for the wrong guesses, the cube sum is a random string. For 5-round Keccak-MAC, the key is recovered in 2^24 time and data. Similarly, we use one conditional cube variable and 31 ordinary cube variables to recover the full key of 6-round Keccak-MAC. We also run Algorithm 4 to get the other ordinary cube variables here. We present an instance here, and the key is generated randomly. For 7-round Keccak-MAC, we do the same thing and recover the full key with 2^72 Keccak calls. Here we come to the key recovery attack on round-reduced Keyak. This is very easy and natural, because we have a lot of output bits, so we can invert the χ step in the last round. Because of this, we can extend the former key recovery attack by one round. The only differences here are the bit conditions and the key bits we guess. Actually, this is a state recovery attack: when we get 256 bits, we can invert the first Keccak internal permutation, and we get the master key. Here is our result.
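The guess-and-check procedure can be illustrated on a deliberately tiny keyed Boolean function (entirely made up for illustration; this is not Keccak-MAC): the superpoly of the cube term v0·v1 vanishes exactly when a message bit satisfies a key-dependent bit condition, so only the right key guess yields a zero cube sum.

```python
from itertools import product

# Toy keyed function: the cube term v0*v1 has superpoly (m XOR k XOR 1),
# so the bit condition "m = k XOR 1" makes it vanish.
def f(v0, v1, m, k):
    return (v0 & v1 & (m ^ k ^ 1)) ^ (v0 & m) ^ (v1 ^ k)

def cube_sum_for_guess(guess, k):
    # Set the conditioned message bit so the bit condition holds
    # if (and only if) the guessed key bit is right.
    m = guess ^ 1
    s = 0
    for v0, v1 in product([0, 1], repeat=2):
        s ^= f(v0, v1, m, k)
    return s

secret = 1
sums = {g: cube_sum_for_guess(g, secret) for g in (0, 1)}
# Only the right guess yields a zero cube sum.
assert sums[secret] == 0 and sums[1 - secret] == 1
```

In the real attack the same filtering happens over the equivalent key bits involved in the bit conditions, with wrong guesses giving random-looking cube sums.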
Then we will talk about the distinguishing attacks on the Keccak sponge function. By Theorem 3, if we use 2^{n+1} conditional cube variables, we can construct a distinguisher for (n+2)-round Keccak sponge function. This is because these conditional cube variables will not get multiplied with each other in the first two rounds, so after another n rounds the degree over these cube variables is no more than 2^n, and we get a distinguisher from this property. Our construction of the distinguisher includes two parts: first, we find a combination of sufficiently many conditional cube variables; then, we derive the corresponding bit conditions for the chosen conditional cube variables. But how can we find a combination of sufficiently many conditional cube variables? We can take the differentials of the cube variables as toy bricks: because we have different shapes of differentials, we have different types of toy bricks, and we can take the state of Keccak as a box. The problem is then to put as many toy bricks into the big box as possible. At the same time, the bricks should keep some distance from each other, because we do not want the variables to get multiplied in the χ steps. We abstract this into a mathematical model. For every cube variable candidate, we introduce a new variable, and we derive constraints over the new variables, because there are still multiplication and exclusion relations between the conditional cube variables. We formulate this as a mixed integer linear programming (MILP) problem: we want to get enough conditional cube variables under these constraints. We generate the constraints on the cube variable candidates by this algorithm, solve the model with the Gurobi optimizer, and get the conditional cube variables as follows.
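The search can be pictured as a 0-1 program much like maximum independent set: pick as many candidates as possible while never picking two that multiply or have contradictory bit conditions. The talk's real model is solved with Gurobi; the sketch below (a toy conflict graph of my own, solved by brute force instead of MILP) just shows the shape of the problem:

```python
from itertools import combinations

# Toy instance: candidate conditional cube variables 0..5, with conflict
# pairs (multiplication or contradictory bit conditions) that forbid
# choosing both -- stand-ins for MILP constraints x_i + x_j <= 1.
candidates = range(6)
conflicts = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)}

def best_combination(candidates, conflicts):
    """Brute-force the 0-1 program: a largest set of candidates with no
    conflicting pair (feasible only for tiny instances)."""
    for r in range(len(list(candidates)), 0, -1):
        for subset in combinations(candidates, r):
            ok = all((a, b) not in conflicts and (b, a) not in conflicts
                     for a, b in combinations(subset, 2))
            if ok:
                return subset
    return ()

# On a 6-cycle of conflicts, at most three candidates can coexist.
assert best_combination(candidates, conflicts) == (0, 2, 4)
```

The real model additionally asks for at least 2^{n+1} chosen variables, which is why a solver like Gurobi is used instead of enumeration.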
The time complexity of the distinguishing attack on 6-round Keccak is 2^9 Keccak calls, and the data is the same. We can extend it by one more round: because we have more than 320 bits of output, we can invert the last χ step freely. Similarly, we find a combination for this version, and we get a 7-round distinguisher. For the other versions, the same procedure can be applied with conditional cube variables in the 2-2-22 pattern, but the searching problem there is very difficult to solve, so we turn to a better pattern: the double kernel pattern. The bitwise derivative of such a chosen variable is invariant with respect to the operation θ in the second round as well. Here you can see these differentials, which are much sparser than the former ones; with this pattern, we have fewer candidates and fewer constraints to consider. So we find a combination of 30 conditional cube variables in the double kernel pattern, associated with three conditional cube variables in the former pattern. The complexity of this distinguisher is 2^33. That is the content of our talk today. Thank you.

There is time for a very short question.

That's a very impressive improvement over previous works. But I'm surprised that all the attacks you presented for the distinguisher are completely practical in complexity, and you stopped at 8. So what happens if you go to 9? Will you get a non-practical, but still better than exhaustive search, result for the distinguisher?

I think if we want to extend it to more rounds, we need some more conditional cube variables, but it is very hard to find such a combination. If we want to go a few rounds further, we need to consider the multiplication in the third round, which is very difficult because the diffusion is a mess. Thanks.