This is the presentation of our paper "Rotational Cryptanalysis From a Differential-Linear Perspective: Practical Distinguishers for Round-Reduced FRIET, Xoodoo, and Alzette". I am Yunwen Liu, and this is joint work with Siwei Sun and Chao Li.

Differential and linear cryptanalysis are the most general methods to analyze block ciphers, and they have many variants, including the boomerang and the differential-linear attack. These combined variants can be stronger than pure differential or linear attacks, and they are especially effective when trails covering a small number of rounds have a high probability but the probability drops exponentially as the number of rounds increases. In this talk, we will focus on differential-linear cryptanalysis.

For a vectorial Boolean function, given an input difference delta and an output mask gamma, we can build a differential-linear distinguisher by a linear approximation on the output difference. The probability p of the distinguisher is defined by the fraction of right pairs. Accordingly, we have the correlation 2p minus 1, and the bias is half of the correlation.

Now the problem is how to estimate the probability given delta and gamma. Often, we split the cipher into two parts, or in some cases three parts, with an intermediate layer connecting E0 and E1. In E0, we find a differential with probability p, and in E1, we have a linear approximation with bias epsilon. Then the total bias of the distinguisher is estimated as 2 times p times epsilon squared. This is a rough estimation, and many new approaches have been proposed for a better evaluation. For instance, attacks on the ARX cipher Chaskey were proposed at Eurocrypt 2016 and Crypto 2020 using differential-linear cryptanalysis with the partitioning technique. In 2017, a theoretical formula was proved for evaluating the differential-linear bias, which requires enumerating the whole intermediate mask space. In 2019, the differential-linear connectivity table (DLCT) was proposed to better evaluate the middle layer between E0 and E1.
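As a toy illustration of these definitions, the sketch below computes the exact differential-linear correlation of a small 8-bit ARX-style function. The function F, the difference, and the mask are my own illustrative picks for this sketch, not anything from the paper.

```python
# Toy illustration of the differential-linear definitions above.
# F is a hypothetical 8-bit ARX-style function, NOT a cipher from the paper.

def F(x):
    x = (x + 0x3A) & 0xFF                    # modular addition
    x = ((x << 3) | (x >> 5)) & 0xFF         # rotate left by 3
    return x ^ 0x5C                          # xor a constant

def dl_correlation(f, delta, gamma, n_bits=8):
    """Exact correlation of <gamma, f(x) ^ f(x ^ delta)> over all inputs.
    With p = Pr[<gamma, output difference> = 0]: correlation = 2p - 1,
    and the bias is half of the correlation."""
    hits = 0
    for x in range(1 << n_bits):
        dout = f(x) ^ f(x ^ delta)
        if bin(dout & gamma).count("1") % 2 == 0:
            hits += 1
    p = hits / (1 << n_bits)
    return 2 * p - 1

# delta = 0x80 passes through the addition mod 256 with probability 1 and is
# rotated onto bit 2, so the mask 0x04 picks it up with correlation -1:
c = dl_correlation(F, delta=0x80, gamma=0x04)
print("correlation:", c, "bias:", c / 2)     # correlation: -1.0 bias: -0.5
```

Note that the sign matters: correlations, unlike differential probabilities, can be negative.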
In 2015, it was noticed that the theoretical and experimental differential-linear biases can have a huge gap due to dependency, and this problem was addressed in a very recent Crypto 2021 paper from an algebraic viewpoint. So the motivation for our paper is twofold: how can we extend the framework of differential-linear cryptanalysis, and can we improve the accuracy of the bias evaluation?

Before further discussion, I will first explain rotational cryptanalysis; we will later show its connection with differential-linear cryptanalysis. Rotational cryptanalysis was proposed in 2010 for ARX ciphers, based on a rotational propagation property of modular addition. Then Morawiecki et al. applied rotational cryptanalysis to Keccak. Follow-up works explored further applications of the technique, and several papers on rotational-XOR cryptanalysis showed that rotational cryptanalysis can be regarded as a generalization of differential cryptanalysis, where a rotational-XOR difference replaces the XOR difference. The rotational-XOR difference is defined by adding a rotation on one operand; when the rotation amount is zero, it degenerates to the ordinary XOR difference.

With the rotational-XOR difference in mind, what we do next is to generalize differential-linear cryptanalysis by replacing the XOR difference with the rotational-XOR difference. So we propose rotational differential-linear cryptanalysis, where, given a pair of rotationally related inputs x and x prime, we study a linear approximation with mask gamma on their output difference through the cipher, as in this formula, and accordingly we define the bias of such a distinguisher as the probability minus one half. Now we can see that rotational differential-linear is a generalization of differential-linear, because when the rotation amount is zero, the distinguisher becomes a differential-linear distinguisher. We will come back to this observation later. So, can we borrow the differential-linear bias computation here? We tried a first approach.
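To make the definition concrete, here is a minimal sketch of the rotational-XOR difference under one common convention (the rotation applied to the first operand); the specific numbers are illustrative, not from the paper.

```python
# One common convention for the rotational-XOR (RX) difference of a pair
# (x, x') of n-bit words under rotation amount t:
#     RX-difference = rotl(x, t) ^ x'
# Setting t = 0 recovers the ordinary XOR difference.

def rotl(x, t, n=32):
    t %= n
    return ((x << t) | (x >> (n - t))) & ((1 << n) - 1)

def rx_difference(x, x_prime, t, n=32):
    return rotl(x, t, n) ^ x_prime

x = 0x12345678
# a "rotational pair" is a pair with RX-difference zero:
assert rx_difference(x, rotl(x, 1), t=1) == 0
# with t = 0, the RX-difference is just the XOR difference:
assert rx_difference(0xA5, 0xA5 ^ 0x0F, t=0, n=8) == 0x0F
print("ok")
```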
In fact, we can borrow the previous idea from differential-linear cryptanalysis of separating a cipher into two or three parts, and find good rotational differentials and linear approximations to concatenate. Here is the detailed derivation for computing the bias. Assume that we have a good rotational differential on the first half of the cipher and a good linear approximation on the second half. It can be shown that a similar formula is obtained, but instead of the square of the bias of the linear part, we have the product of two linear biases, where one of the masks is rotated.

However, using this formula can be inaccurate for some ciphers, because it does not take the connection between the two parts into account. So we further prove a link between rotational differentials and linear approximations, and extend the previous theoretical formula on differential-linear to rotational differential-linear as well. The detailed proof can be found in the paper on eprint. This theoretical formula gives a unified view of rotational differential-linear and differential-linear. In practice, though, it still requires enumerating all intermediate linear masks, and unlike differential probabilities, correlations are signed, so an estimation over a subspace of linear masks may not reflect the real correlation over the whole space.

So we decided to take another approach, based on the previous work by Morawiecki et al. on rotational cryptanalysis with application to Keccak-f. Given a three-dimensional state A[x][y][z], where the nonlinear operation acts along the x-axis, we rotate the state along the z-axis to get a rotational pair; in other words, the input rotational difference is zero. The Keccak-f permutation without constants is invariant under such a rotation, and the aim is to find out which positions have a high probability that the output pair differs on that bit.
There are three rules to compute the probability that an output-difference bit is zero, given the probabilities that the input-difference bits are zero, covering the three operations AND, XOR, and NOT. For instance, suppose the probabilities of the input-difference bits being one are p and q for the AND operation; then the output-difference bit is one with probability (p + q − p·q)/2. With these three rules, we can predict, round by round, the probability of each pair of output bits being unequal.

We first observed that the rotational distinguisher on Keccak-f is a special case of rotational differential-linear where the output mask is one bit. Our second observation is that the probability of an output bit being unequal through a Boolean function can be predicted by the following formula, where in the summation the first term is the difference-transition probability and the second term gives the initial probability distribution of the input difference. Our third observation is about the effect that constants have on rotational pairs. Here I show an example of the effect when there are consecutive nonzero bits in the constant, like the constant 011 here. The constant addition actually introduces a new rotational difference, namely the constant C XORed with C rotated, which we call the delta-C difference. If a bit of this rotational difference is nonzero, then we should flip the rotational differential-linear probability of that specific bit of the state, and this gives the adjusted probability.

So, given an input rotational difference delta, the initial probability distribution is fully determined, and we can evaluate the round function by regarding its circuit as Boolean operations. We can thus compute the rotational differential-linear probability round by round, and find the positions of the output bits that are the most biased. Here we show an example on the Xoodoo permutation.
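The AND rule just stated can be checked exhaustively. The sketch below is my own verification code: it computes the exact probability that the output difference of an AND gate equals 1, assuming the two input-difference bits equal 1 with probabilities p and q and the underlying values are uniform and independent.

```python
# Exact check of the AND rule: if the input-difference bits are 1 with
# probabilities p and q (values uniform), then the output-difference bit of
# c = a & b is 1 with probability (p + q - p*q) / 2.

def and_rule_exact(p, q):
    total = 0.0
    for da in (0, 1):                       # difference on input a
        for db in (0, 1):                   # difference on input b
            w = (p if da else 1 - p) * (q if db else 1 - q)
            for a in (0, 1):                # uniform values, weight 1/4 each
                for b in (0, 1):
                    dc = (a & b) ^ ((a ^ da) & (b ^ db))
                    total += w * 0.25 * dc
    return total

for p, q in [(0.0, 0.0), (1.0, 0.5), (0.3, 0.7), (1.0, 1.0)]:
    assert abs(and_rule_exact(p, q) - (p + q - p * q) / 2) < 1e-12
print("AND rule confirmed")
```

The intuition matches the enumeration: whenever at least one input-difference bit is set, the output difference is 1 for exactly half of the value pairs.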
Xoodoo is a lightweight permutation designed by the Keccak team, with a state arranged as a 3-by-4 array of 32-bit lanes. One round of Xoodoo has five steps, where all steps except step four are linear. Notice that the constant addition comes before the linear layer. We can control the input rotational difference such that the difference before step four in the first round is zero, and that helps us extend the distinguisher by one round for free. So this is the input rotational difference we use. With this input rotational difference, and the rotation amount set to one bit to the left, we get a four-round rotational differential-linear distinguisher with correlation one, whose output mask has one nonzero bit, at bit 16 of the lane at position (1, 0).

Next, we go one step further and extend rotational differential-linear cryptanalysis to ARX. First, I'd like to mention that whatever works for rotational differential-linear also works for differential-linear, simply with the rotation amount set to zero. So in this talk I will speak about differential-linear on ARX, and the full discussion can be found in our paper. To get a probability propagation rule for addition modulo 2^n, we found that the dependency in the carry function cannot be ignored: if we simply apply the three rules from before plus two extra rules for the modular addition, the result is a biased estimation. So what we did, using our second observation from before, is to derive a new carry rule that takes the dependency into full consideration. Given the probabilities of the input-difference bits being zero, one can predict the probability of the carry difference being zero using this expression, and it then follows that the modular-addition rule for differential-linear propagation can be written as shown here. Let's see an example.
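To make the role of the carry dependency concrete, here is a small sketch that computes the exact per-bit differential-linear probabilities of modular addition. It is my own reformulation as a dynamic programme over the joint carry state of the two additions, not the closed-form carry rule from the paper, but it tracks the dependency exactly.

```python
# Exact per-bit differential-linear probabilities for z = (x + y) mod 2^n
# with fixed XOR input differences dx, dy, via dynamic programming over the
# JOINT carry state (c, c') of the two additions. This is a reformulation
# as an exact DP, not the paper's closed-form carry rule.

def maj(a, b, c):
    return (a & b) ^ (a & c) ^ (b & c)      # carry function of addition

def mod_add_dl_probs(dx, dy, n):
    """Return [Pr(output-difference bit i = 1)] for uniform x, y."""
    state = {(0, 0): 1.0}                   # joint law of (carry, carry')
    probs = []
    for i in range(n):
        dxi, dyi = (dx >> i) & 1, (dy >> i) & 1
        p_diff = 0.0
        nxt = {}
        for (c, cp), w in state.items():
            for xi in (0, 1):
                for yi in (0, 1):
                    wb = w * 0.25           # bits x_i, y_i are uniform
                    xip, yip = xi ^ dxi, yi ^ dyi
                    if (xi ^ yi ^ c) != (xip ^ yip ^ cp):
                        p_diff += wb
                    key = (maj(xi, yi, c), maj(xip, yip, cp))
                    nxt[key] = nxt.get(key, 0.0) + wb
        state = nxt
        probs.append(p_diff)
    return probs

# Linear in the word size, so 64-bit additions are no problem:
print(mod_add_dl_probs(7, 7, 8))
```

Because the DP carries the joint distribution of both carry bits forward, none of the bit-level dependency is lost, which is exactly what the naive bitwise application of the three rules ignores.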
Given input differences 7 and 7 to an 8-bit modular addition, the probabilities that the output-difference bits are nonzero can be computed as this table shows, and our experiments confirm the results. This is particularly efficient for modular additions of 64 bits or more, where a direct computation of the differential-linear bias would be computationally infeasible.

Another interesting thing we observed is that the differential-linear probabilities seem to show rotational behavior when the input differences are rotated. For instance, when we rotate the difference 0x01 to 0x02, the resulting probability profile is also shifted. This can be used to explain experimental results on the rotational property of the differential-linear distinguishers of SipHash, and we give a theoretical evaluation of the differential-linear distinguishers found there.

We then applied rotational differential-linear cryptanalysis and the new method for evaluating the probability to several cryptographic permutations. Here I show the application to Alzette in some more detail. Alzette is a 64-bit ARX-based permutation presented at Crypto 2020; it has two branches, each with 32 bits, and the structure contains only modular addition, rotation, and XOR. Because our propagation rule for differential-linear through modular addition contains quadratic and higher-degree terms, and the size of this permutation is not too large, we can use quadratically constrained programming in Gurobi to search for a good input difference, where our objective is to minimize the overall probability for the one-bit output masks. We observed that input differences with low or high Hamming weight tend to give better rotational differential-linear distinguishers for Alzette, and we also carried out experiments to compare the probabilities with the theoretical evaluation.
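For the 8-bit case, the table can be reproduced by brute force, and the same computation illustrates the rotational behaviour. This is my own verification sketch, using the stated differences (7, 7) and the rotation of the difference from 0x01 to 0x02.

```python
# Brute-force per-bit output-difference probabilities for 8-bit modular
# addition, for the example differences (7, 7), plus a check that rotating
# the input differences (0x01 -> 0x02) shifts the probability profile.

def dl_bit_probs(dx, dy, n=8):
    counts = [0] * n
    for x in range(1 << n):
        for y in range(1 << n):
            dz = ((x + y) ^ ((x ^ dx) + (y ^ dy))) & ((1 << n) - 1)
            for i in range(n):
                counts[i] += (dz >> i) & 1
    return [c / (1 << (2 * n)) for c in counts]

p77 = dl_bit_probs(7, 7)
print("delta (7,7):", p77)   # bit 0 never differs; bit 1 differs half the time

p1 = dl_bit_probs(0x01, 0x01)
p2 = dl_bit_probs(0x02, 0x02)
print("delta (1,1):", p1)
print("delta (2,2):", p2)    # same profile, shifted up by one bit position
```

Exhaustive enumeration like this is only feasible for small word sizes; that is precisely why a round-by-round rule is needed for 64-bit additions.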
For instance, here we take this input difference in a differential-linear setting, and the results are depicted in the following figure: the x-axis is the position of the output-difference bit, from 0 to 63, and the y-axis is the probability. The statistics show basically the same pattern for the theoretical evaluation and the experiments. An overview of all our applications can be found in this table. We found rotational differential-linear distinguishers for round-reduced FRIET, four rounds of Xoodoo, and four rounds of Alzette, and we tested the experimental probabilities to verify the distinguishers whenever possible. The distinguishers show an advantage over traditional differential or linear distinguishers, either in the number of covered rounds or in the probability.

To conclude, in this paper we proposed rotational differential-linear cryptanalysis as a generalization of differential-linear cryptanalysis, and gave a theoretical analysis of rotational differential-linear. Then a new method for computing the probability of a rotational differential-linear distinguisher was presented, which is efficient because it evaluates round by round. Finally, our technique was applied to three permutations, for which practical distinguishers are obtained. Thank you for your attention.