The first talk of this session is "Reflection Cryptanalysis of PRINCE-like Ciphers". The authors are Hadi Soleimany, Céline Blondeau, Xiaoli Yu, Wenling Wu, Kaisa Nyberg, Huiling Zhang, Lei Zhang, and Yanfeng Wang. This is a joint paper, so we get a long list of authors here. The floor goes to Hadi.

Okay, hello everybody. I'm going to talk about reflection cryptanalysis of PRINCE-like ciphers. This is the outline of the presentation. First, I will describe the PRINCE cipher. Then, I will introduce reflection distinguishers for this kind of cipher. After that, I will explain how to do key recovery, and also various classes of alpha values for the reflection attack, and finally I will conclude.

So, PRINCE is a low-latency block cipher. It was proposed at the last ASIACRYPT. It is an FX construction. The master key is divided into two parts. The first part, K0, is used as a whitening key at the beginning and at the end, and K1 is used in the core of the cipher. The last whitening key, K0', is obtained from K0 by a simple linear mapping, in this way. The cipher has an interesting property, which is called alpha-reflection. It means that instead of implementing a separate decryption algorithm, we can decrypt with the encryption algorithm by just swapping the whitening keys K0' and K0, and using K1 XOR alpha instead of K1. Independently of the value of alpha, the designers showed that PRINCE is secure against standard attacks, like differential and linear attacks. The aim of this work is to study the effect of the value of alpha on the security of PRINCE-like structures.

Before describing the cipher, and because there is another talk about PRINCE coming next, I should say that our description is a bit different from the original one. Anyway, in the middle of the cipher there is an involutive linear layer M', together with a nonlinear layer S and its inverse, S^-1.
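To make the alpha-reflection property concrete, here is a minimal sketch, not PRINCE itself: a toy FX-style cipher on 8-bit blocks whose core sandwiches an involutive layer between two additions of the key and round constants RC0 and RC1 with RC0 XOR RC1 = alpha. All concrete values (the involution, the constants, the keys) are made up for the demo.

```python
# Toy illustration of alpha-reflection (NOT PRINCE; all values are demo choices).
ALPHA = 0b10100101               # arbitrary demo constant
RC0 = 0x13
RC1 = RC0 ^ ALPHA                # round constants differ by ALPHA

def S(x):
    """A toy involutive middle layer: S(S(x)) == x."""
    return x ^ 0xFF              # bitwise complement is an involution

def core(x, k1):
    """Core: add k1 ^ RC0, apply the involution S, add k1 ^ RC1."""
    return S(x ^ k1 ^ RC0) ^ k1 ^ RC1

def encrypt(x, k0, k0p, k1):
    """FX construction: whiten with k0, run the core, whiten with k0p."""
    return core(x ^ k0, k1) ^ k0p

def decrypt(y, k0, k0p, k1):
    """Alpha-reflection: decryption = encryption with the whitening keys
    swapped and k1 replaced by k1 ^ ALPHA."""
    return encrypt(y, k0p, k0, k1 ^ ALPHA)

k0, k0p, k1 = 0x3C, 0xA7, 0x5E
for x in range(256):
    assert decrypt(encrypt(x, k0, k0p, k1), k0, k0p, k1) == x
print("alpha-reflection holds on all 256 blocks")
```

The check in the last loop is exactly the property from the slide: because the constants on the two sides of the involution differ by alpha, replacing K1 by K1 XOR alpha turns the core into its own inverse.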
There are also five rounds at the beginning and five rounds at the end. Each round at the beginning consists of adding the key K1, adding a round constant, a nonlinear layer S, and a second linear layer, which is called M. M can be obtained from M' in a simple way, as we will describe. In the last rounds, each round consists of similar elements but in inverse order: first the inverse of M, then the inverse of S, then adding the round constant and adding the key. That is the core of the cipher. As I mentioned before, to have the alpha-reflection property, the round constants must satisfy the relation in this equation. And also, as I mentioned before, there are two whitening keys, at the beginning and at the end.

In the original PRINCE, the block length is 64 bits, the specific value of alpha is the one shown here, and the nonlinear layer S is nibble-wise: each nibble is processed by the same S-box. The first linear layer M' is an involutive 64 x 64 binary matrix. It is block-diagonal and can be constructed from two smaller matrices, M0-hat and M1-hat, which are in turn built from four smaller matrices M0, M1, M2, and M3. Finally, the second linear layer M of the original PRINCE is obtained as the composition of M' with a simple permutation SR, which behaves like ShiftRows in the AES.

Now, let's look at the previous work. The reflection attack is a known kind of attack, and it has been applied to some primitives, block ciphers and hash functions, with a Feistel structure. The idea is that if, in the middle of the cipher, the left part of the state is equal to the right one, and if we also assume that the subkeys are equal, then without knowing the key, since the inputs of the round functions are equal and the functions themselves are equal, we know that the outputs of the functions are also equal.
So, if you look at the two halves of the cipher around the middle, we know that with some probability the input and output are equal; that is, the difference between input and output is zero. And if the subkeys are symmetrically equal over more rounds, we can extend the distinguisher to more rounds of the cipher. In this work, we use a probabilistic reflection, meaning we use a probabilistic scenario to extend our distinguishers: in the PRINCE cipher, the round constants are related to each other, but they are not equal, so we cannot use a deterministic reflection distinguisher. We also try to do this for an SPN cipher, not only a Feistel one.

To introduce the distinguisher, we first need this definition: for a function f from a set A to the same set, an element x is called a fixed point if the output of the function equals the input, f(x) = x. It can be proved that if f is a linear involution on n bits, then the number of fixed points is at least 2^(n/2). We use this fact. The idea is to take advantage of the fact that there are at least 2^(n/2) fixed points in the middle of the cipher, use the alpha-reflection property, and construct a reflection distinguisher.

So, let's look at the middle of the cipher. First, we introduce one distinguisher for two rounds of the middle of the cipher. With this probability, we have a fixed point here, so the difference is zero, and we can pass the nonlinear layer with the same probability. Finally, because of the relation between the round constants, the difference between here and here is alpha, with this probability. Similarly, we can introduce another distinguisher: with this probability, we have a difference alpha from here to here, and after the round-constant addition the difference is zero, so we can pass one more round.
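The fixed-point bound can be checked empirically. The construction below is my own demo, not PRINCE's M': conjugating the half-swap map by a random bit permutation gives a linear involution on n bits, and enumerating all inputs confirms it has at least 2^(n/2) fixed points.

```python
# Empirical check: a linear involution on n bits has >= 2^(n/2) fixed points.
# Demo involution (my own, not PRINCE's M'): conjugate the half-swap map
# by a random bit permutation; the result stays linear and involutive.
import random

n = 8
random.seed(1)
perm = list(range(n))
random.shuffle(perm)
inv_perm = [0] * n
for i, p in enumerate(perm):
    inv_perm[p] = i

def permute(x, p):
    """Move bit i of x to position p[i]."""
    return sum(((x >> i) & 1) << p[i] for i in range(n))

def swap_halves(x):
    """Exchange the low and high n/2-bit halves (a linear involution)."""
    h = n // 2
    lo, hi = x & ((1 << h) - 1), x >> h
    return (lo << h) | hi

def M(x):
    """Linear involution: permute bits, swap halves, permute back."""
    return permute(swap_halves(permute(x, perm)), inv_perm)

assert all(M(M(x)) == x for x in range(1 << n))          # really an involution
fixed = sum(1 for x in range(1 << n) if M(x) == x)
print(fixed)     # this particular involution has exactly 2^(n/2) = 16
assert fixed >= 2 ** (n // 2)
```

For the half-swap map the fixed points are exactly the states whose two halves are equal, which is where the 2^(n/2) count in the talk comes from; conjugation by a permutation does not change the count.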
And finally, the difference between the input and output of the four middle rounds of the cipher is alpha, with this probability. But we are still far from the whole cipher, so we have to extend the distinguisher. Also, in the case where this probability is zero, there is still a kind of distinguisher: we have an impossible differential, meaning this difference never happens. This actually happens for the original value of alpha in PRINCE.

To extend our distinguisher, assume that for the two or four middle rounds of the cipher, the difference between here and here is alpha. Now, if you look from the inside towards the input and from the inside towards the output, the round functions behave similarly to each other; there is only a difference here and here. The round constants are different, but there is still a relation between them, and the difference between them is alpha. So, the idea is to look at this cipher and, for a specific value of alpha, try to find a related-key differential characteristic for more rounds, and extend the distinguisher to more rounds.

Now, how can we do key recovery based on these distinguishers? Assume that with high probability this happens, meaning that the difference between here and here, over the 2R - 2 middle rounds of the cipher, is delta. With the same probability, we can pass the linear layers. As usual, the idea is to guess the corresponding keys in the first round and the last round and check whether the distinguisher holds or not. But to speed up the key recovery further, there are two more ideas. The first one is doing the key recovery nibble by nibble: first we guess the corresponding key nibbles here and in the last round, and check whether the value of delta holds for that specific nibble or not.
Also, in the case where delta in a specific nibble is zero, we can pass the nonlinear layer, so the effect of K1 is cancelled and it is not necessary to know the value of K1 in that specific nibble. With this idea, we can speed up the key recovery even more.

Now, based on the distinguisher and the key recovery which I explained before, we try to classify the alpha values. That is, if we choose a specific alpha, how does this choice affect the security of the cipher? To maximize the probability of the distinguisher over the external rounds, we have two ideas. The first one is a cancellation idea, and the second is that for a specific alpha we can use branch-and-bound to find the best related-key differential over more rounds.

First, the cancellation idea. Assume that over the two or four middle rounds of the cipher the difference is alpha. With this probability, after the inverse of M and the inverse of S, we still have alpha. Since the difference between the round constants is alpha, after the key addition and round-constant addition the difference is zero. So, we can pass one more round, and after the next key addition we again have alpha. This gives an iterative characteristic for four rounds of the cipher. So, by the cancellation idea, we have the iterative characteristic for four rounds, and the best alpha values we have found are listed here; for them, we can attack the full rounds of the cipher. Also, by a simple branch-and-bound algorithm over the 12 rounds of the cipher, we can find, for a specific alpha, how many rounds we can attack, and some examples of alpha values for which the full rounds can be attacked are listed here.
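The cancellation step above is just XOR arithmetic and can be sketched in a few lines. In this sketch (all values are arbitrary demo numbers), the two executions add round constants RC and RC XOR alpha together with the same key K1; a difference of alpha entering the additions comes out as zero, which is why the next nonlinear layer is passed for free.

```python
# Sketch of the round-constant cancellation idea (demo values, not PRINCE's).
ALPHA = 0xA5
K1, RC = 0x5E, 0x13

x  = 0x37                        # state in the first execution
xp = x ^ ALPHA                   # state in the second: input difference ALPHA

y  = x  ^ K1 ^ RC                # first execution adds RC
yp = xp ^ K1 ^ RC ^ ALPHA        # second execution adds RC ^ ALPHA

print(hex(y ^ yp))               # 0x0: the ALPHA difference cancels
assert y ^ yp == 0
```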
So, the observation is that if you look at this table and this table, most of the alpha values have low Hamming weight; here, for example, we have this one. And so, the question is: would it be sufficient to choose the alpha value such that most of the nibbles are active? For example, let's look at this alpha value. All of its nibbles are active, but after the inverse linear layer it has just four active nibbles. For this kind of alpha, whose image has this format, where the marked nibbles are non-active and the others take arbitrary values, we can show for six rounds of the cipher a truncated distinguisher such that, with probability 2^-32, the difference over the six rounds is this value. And there are three more classes of truncated characteristics, which are listed here. So, in total, for 2^18 values of alpha there exists a six-round truncated characteristic with probability 2^-32, and we can attack up to eight rounds of the cipher using 2^36 data complexity and about 2^88 computations.

As a conclusion: we introduced generic distinguishers for PRINCE-like ciphers, and we showed that the security of PRINCE-like ciphers depends strongly on the value of alpha. We identified classes of alpha for which 4, 6, 8, and 10 rounds of the cipher can be distinguished from a random permutation, and in the case of the weakest class, we can attack the full 12 rounds of the cipher. But for the original value of alpha, the attack works only for six rounds. So, we can conclude that the alpha value chosen by the designers is among the best ones, and that this work can be considered as a new design criterion for choosing the value of alpha in PRINCE-like ciphers. Thank you very much.

Is there any question for Hadi? Yes, I have a question. Besides PRINCE, what ciphers did you consider?
Well, actually, we tried to apply this to more SPN-like ciphers which are involutive. But at least we could not apply it in a way that works for all such ciphers, because maybe it works for some ciphers only in a weak-key scenario. One of the points is that you should find a good relation between the round constants, such that you can extend your characteristics over more rounds. And for the other block ciphers which we tried, we just could not get such a class. But in the future, we should keep trying.

Any more questions? If you look at the attack complexity against the security margin, you would multiply the data and time complexities. Yeah. Actually, our main point was just to show that the distinguisher can be converted into a key-recovery attack. You are right, it is more than 2^127. But the point is that for a specific alpha we can distinguish more rounds of the core of the cipher, while it is still secure against the linear attack, the classical differential attack, and other classical statistical attacks. So yes, maybe for key recovery it does not work that well. But if you consider only PRINCEcore, then everything is clear, no? Because there is no real security margin for PRINCEcore. Yeah, but it is proved to be secure against differential and linear attacks. Okay. And the attack complexity, would it be smaller if you consider only PRINCEcore? Of course, because one of the problems here is that the relation between K0 and K1 is not a direct relation: if you know K0, it does not mean that you have K1. Thank you.

Any more questions? If not, let's thank Hadi again.
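As a side note to the low-Hamming-weight observation earlier in the talk, the bit and nibble activity of a candidate alpha is easy to check numerically. The snippet below assumes the alpha constant from the PRINCE specification, 0xc0ac29b7c97c50dd (derived from the fractional part of pi).

```python
# Bit-level Hamming weight and nibble activity of an alpha candidate.
# The constant below is assumed to be the alpha from the PRINCE spec.
alpha = 0xC0AC29B7C97C50DD

hw = bin(alpha).count("1")                        # bit-level Hamming weight
nibbles = [(alpha >> (4 * i)) & 0xF for i in range(16)]
active = sum(1 for nb in nibbles if nb != 0)      # nonzero (active) nibbles

print(hw, active)                                 # 32 bits set, 14 active nibbles
```

A low Hamming weight or few active nibbles in alpha (or in its image under the linear layers) is exactly what makes the truncated characteristics in the talk possible.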