This is Alex Biryukov and Ivica Nikolić, from the University of Luxembourg and Nanyang Technological University in Singapore. Ivica will give the talk. In this joint work with Alex Biryukov, we have taken another look at the complementation property of DES. We have generalized this property and applied it to the analysis of block ciphers.

Let's first take a look at what this complementation property of DES is. It is a simple observation that says that if one complements, or flips, all the bits of the plaintext and the key in DES, then all the bits of the ciphertext will flip. Basically, it means that if you take any key and any plaintext for DES and produce some ciphertext, and then you complement the key and complement the plaintext, you will get the complement of the previous ciphertext. Based on this observation, we can get a few simple results, like a distinguisher on DES with only two queries. We can also reduce the exhaustive key search by a factor of two, because we only have to try one half of the full key space.

Why does it work? Well, complementation, a flip of all the bits, basically means introducing the difference in all the bits. The key schedule of DES is a simple bit permutation: all of the subkeys are actually bits of the master key. And complementation of the plaintext, again, means introducing the difference one in all the bits. So if we start with this all-ones difference in the plaintext, here and here, there is the same all-ones difference in the subkeys. In each round (here you can see only two rounds, but this actually holds for all 16 rounds), every time before the round function, after the key XOR, this difference cancels, and the zero difference goes through the F function with probability one. Then again at the output we have all ones, all ones. In the next round, again a difference in all the bits of the subkey, and again it cancels, and so on. This goes for 16 rounds. So as you can see, the probability is one, because there is no difference entering the S-boxes of DES.

This doesn't have to be only DES. Basically, it is applicable to any Feistel cipher such that the round function first XORs the subkey and then applies some transformation (so first the XOR, then some transformation), and the subkeys produced by the key schedule are permutations of the bits of the master key.
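The original property is easy to check experimentally. Here is a minimal sketch in Python, assuming the pycryptodome package is available (it provides Crypto.Cipher.DES and ignores the key's parity bits, so complementing the raw key bytes is fine):

```python
import os
from Crypto.Cipher import DES

def complement(data: bytes) -> bytes:
    """Flip every bit of a byte string."""
    return bytes(b ^ 0xFF for b in data)

key = os.urandom(8)        # 64-bit DES key (parity bits are ignored)
plaintext = os.urandom(8)  # one 64-bit block

c = DES.new(key, DES.MODE_ECB).encrypt(plaintext)
c_bar = DES.new(complement(key), DES.MODE_ECB).encrypt(complement(plaintext))

# The complementation property: E(~K, ~P) == ~E(K, P).
assert c_bar == complement(c)
```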
So let's try and see if we can generalize this property. The original property says: you have such a Feistel cipher, you take any key, you flip all the bits of the key and of the plaintext, and you produce a complemented ciphertext. What are the ideas to tweak here? First, it says "take any key", but maybe the property doesn't have to work for all the keys; in other words, we can allow a weak-key class. Second, maybe we don't have to flip all the bits; maybe we don't have to introduce the all-ones difference in the whole plaintext and in the whole key. Based on these two relaxations, we have produced another complementation property, which we call the general complementation property.

It says, basically: suppose that for the key schedule we can produce some differential that, from an input difference delta in the master key, produces alternating differences in the subkeys, meaning difference delta_1 in the first subkey, delta_2 in the second, delta_1 in the third, then delta_2, and so on, for some key (it doesn't have to hold for all keys, meaning a weak-key class). Then, if you start with the difference (delta_1, delta_2) in the plaintext, you can see that after the key XOR the difference cancels, and the zero difference goes through the F function with probability 1; then delta_2 comes here and again cancels with probability 1; and then again we have (delta_1, delta_2). So if you have these alternating differences in the subkeys, after two rounds the state difference is the same again: it is a two-round iterative differential, and it is deterministic. Everything here cancels with probability 1, as long as for some key and some delta we can produce this type of differences in the subkeys.

So basically, what it says is: if you can produce this type of alternating differences in the subkeys, then you have a weak-key class, because for any key such that adding the delta difference to it gives this difference in the subkeys, the state differential holds. So you have a weak-key class for this Feistel cipher, and you can use it as a differential distinguisher. The main problem now is how to find the differential in the key schedule. The point is, once you have a differential in the key schedule with alternating differences, the differential in the state holds with probability 1. So the problem has been reduced significantly: we only focus on the key schedule, not on the state, as long as the round function of the state goes like "first we XOR the subkey, then some transformations": only one XOR, and only at the beginning. This is for what we call classical Feistels, where the subkey is XORed.

For modular Feistels, where the subkeys are modularly added, a similar observation holds, but now we have to take into account the differential in the state as well. So now you don't want to produce just any alternating differences in the subkeys; you want these differences to have low Hamming weight, so that you pay as little probability as possible in the state, because now you have to pay the probability that the two differences cancel after the modular addition. So this is for modular Feistels. Again the problem is how to find the differential for the key schedule, but this time with low Hamming weight of the alternating differences in the subkeys.
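Before the applications, here is a runnable toy sketch in Python of the classical (XOR) case. The round function is invented; the point is purely structural: with subkey differences alternating d1, d2 and plaintext difference (d2, d1), the differential propagates through any even number of rounds with probability 1. The original DES property is the special case d1 = d2 = all ones.

```python
import os

M32 = (1 << 32) - 1

def g(x: int) -> int:
    """Key-independent part of the round function (arbitrary choice)."""
    x = (x * 0x9E3779B9) & M32
    return ((x << 7) | (x >> 25)) & M32

def F(x: int, k: int) -> int:
    return g(x ^ k)  # the subkey XOR comes first; this is what matters

def feistel(l: int, r: int, subkeys) -> tuple:
    for k in subkeys:
        l, r = r, l ^ F(r, k)
    return l, r

d1, d2 = 0xDEADBEEF, 0x12345678  # any two fixed differences
subkeys = [int.from_bytes(os.urandom(4), "big") for _ in range(16)]
# Related key: the subkey differences alternate d1, d2, d1, d2, ...
subkeys2 = [k ^ (d1 if i % 2 == 0 else d2) for i, k in enumerate(subkeys)]

l = int.from_bytes(os.urandom(4), "big")
r = int.from_bytes(os.urandom(4), "big")
c1 = feistel(l, r, subkeys)
# The right half of the plaintext difference matches the first subkey
# difference, so the XOR cancels before g is ever applied.
c2 = feistel(l ^ d2, r ^ d1, subkeys2)

# After an even number of rounds the difference is (d2, d1) again,
# with probability 1: no difference ever enters g.
assert (c1[0] ^ c2[0], c1[1] ^ c2[1]) == (d2, d1)
```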
So we can try to apply these properties. Our first application is to Camellia, specifically Camellia-128. Camellia is a Japanese CRYPTREC standard. The version we are analyzing has 128-bit keys and a 128-bit state. Camellia has 18 rounds, plus two additional layers (the FL/FL^-1 layers) after rounds 6 and 12, and we are analyzing Camellia without these additional layers. This is not very unusual, because probably half of the results on Camellia are without these two layers.

The key schedule of Camellia is fairly simple. It is composed of four rounds of a Feistel plus some additional rotations. So how does the key schedule work? Again, if we want to complement a Feistel cipher, we only have to pay attention to the key schedule, as long as the round transformation of the Feistel is an XOR of the subkey followed by some transformation. So we focus only on the key schedule. You have the master key K_L, and then with four rounds of a Feistel (the same Feistel that is used in the state) another key, K_A, is produced from it. All of the subkeys are actually rotations of these two keys, either K_L or K_A, but the rotations are by various amounts.

So, if we want to produce a good differential that gives alternating subkey differences, we want these differences to be invariant under the rotations. That means the only choice is the difference in all the bits. So in the master key K_L we have to choose the all-ones difference, and after the four rounds of the Feistel, K_A again has to have the difference in all the bits. Then, after all the rotations, all the subkeys will again have the difference in all the bits.

The basic idea, of course, is to go with characteristics from the all-ones difference to the all-ones difference (1...1 to 1...1). That means all S-boxes are active in the first round, all are active in the fourth round, and something happens in between. But if we go this way, the probability of such a characteristic is really low, because we have a lot of active S-boxes, maybe 32 active S-boxes, and then we cannot use the characteristic. That's why we switch to differentials, hoping to get a better probability, because we actually only need the difference in the subkeys; we don't care about the intermediate differences in these four rounds of the Feistel. So we switch to differentials: we count the number of characteristics that go from all-ones to all-ones (in the master key we have the difference in all the bits, and in the intermediate key we have the difference in all the bits), we find a lower bound on the probability of each such characteristic, and then we sum up. That way we obtain a lower bound on the probability of this differential. And we get that the probability of this differential is 2^-128, which is unsurprising. That's not actually our main result, but the process by which we obtained this probability is used later in the analysis.

So there are 2^128 keys and the probability of this differential is 2^-128, meaning there is one good key that has this property. Of course, this one good key is not enough to claim an attack on the cipher: the weak-key class is too small. That's why we switch to hash functions. We use Camellia in a hash function framework, in Davies-Meyer mode. Here the analysis above comes in handy: since the probability is 2^-128, we can find this one good key with that many computations. Another good thing is that Camellia has pre-whitening and post-whitening keys. So once we have this good key, for any chaining value the pre-whitening is going to introduce the required difference in all the bits; after the 18 rounds we again get the difference in all the bits; and then the post-whitening is going to cancel it. So for any chaining value we are going to get a collision with this probability, meaning we can produce collisions for the hash function, for any chaining value, with this complexity.

Of course, this alone is not good enough, because in the generic case one can produce collisions with only 2^64 computations. But we can switch to differential multicollisions (and we can, because we have a fixed difference in the message) and produce as many collisions as we want with this type of complexity, whereas in the generic case the complexity would rise up to 2^128.
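To make the Davies-Meyer argument concrete, here is a runnable toy model in Python. It is emphatically not Camellia: it is an invented 64-bit Feistel whose subkeys and whitening keys are plain rotations of the master key halves, so in the toy every key is "weak" and the all-ones key differential holds with probability 1, whereas in Camellia finding the one good key costs 2^128 work. The mechanism, though, is the same: pre-whitening injects the all-ones state difference, the rounds preserve it, and post-whitening cancels it.

```python
import os

M32 = (1 << 32) - 1
M64 = (1 << 64) - 1

def rotl(x: int, r: int) -> int:
    return ((x << r) | (x >> (32 - r))) & M32

def g(x: int) -> int:
    x = (x * 0x9E3779B9) & M32
    return ((x << 7) | (x >> 25)) & M32

def toy_cipher(key: int, block: int) -> int:
    """64-bit toy Feistel. Subkeys and whitening keys are rotations of
    the master key halves, so complementing the key complements all of
    them, which is the structural property being exploited."""
    a, b = key >> 32, key & M32
    subkeys = [rotl(a if i % 2 else b, i % 31 + 1) for i in range(16)]
    kw1 = (rotl(a, 3) << 32) | rotl(b, 3)  # pre-whitening key
    kw2 = (rotl(a, 9) << 32) | rotl(b, 9)  # post-whitening key
    x = block ^ kw1
    l, r = x >> 32, x & M32
    for k in subkeys:
        l, r = r, l ^ g(r ^ k)
    return ((l << 32) | r) ^ kw2

def davies_meyer(message: int, chain: int) -> int:
    """Compression function: the message plays the role of the key."""
    return toy_cipher(message, chain) ^ chain

h = int.from_bytes(os.urandom(8), "big")  # ANY chaining value works
m = int.from_bytes(os.urandom(8), "big")

# Pre-whitening injects the all-ones difference, the rounds keep it
# with probability 1, post-whitening cancels it, so the two distinct
# messages m and ~m collide under every chaining value.
assert davies_meyer(m, h) == davies_meyer(m ^ M64, h)
```

Rerunning the final assert with fresh values of h shows the "for any chaining value" part of the claim.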
Somebody might say this is kind of strange: we are trying to find one message that follows a certain differential, so why don't we use the rebound attack? The thing is, we cannot use the rebound attack, because we need a precise difference at the input and at the output. If we use the rebound in the middle, we can pass the middle rounds for free, but then at the output you have no idea what kind of difference you are going to get; you just know that the active S-boxes will have some difference. That's why we cannot use the rebound attack and have to go with this more complicated approach. So that's our result on Camellia.

Now let's switch to GOST. GOST is a Russian encryption standard. It has a 64-bit state and 256-bit keys, it has 32 rounds, and it has practically no key schedule, which makes it really fast in hardware. Let's take a look. It is a modular Feistel, so the subkeys are modularly added to the state; that's why we have to pay some probability for the differential in the state. Now the key schedule: there are eight master key words, each 32 bits. For the 32 round subkeys: the first eight subkeys are just the master key words; the second eight are the same; the third eight are the same; and the last eight are the master key words in reversed order. Basically, this means that we can build a probability-1 differential in the key schedule for any input difference in the master key words.

So of course it is tempting to try to use the complementation property. But we are not the first to notice this: there are previous results, and this property has been exploited in earlier attacks. Researchers have found a related-key distinguisher with only two queries, so you can distinguish GOST in the related-key model with only two queries. You can also recover some key bits if you put a difference in a single bit of the master key words and of the state. And there are single-key attacks as well. But the thing is, all the previous attacks that recover the full key have impractical complexity; they are far from practical.

For our attack, we are going to present a full key recovery attack on full GOST. We are going to use three facts: the key schedule is very simple, so the probability of the key-schedule differential is 1; if you have one active bit coming into the state and one active bit in the subkey, they cancel with probability 2^-1; and, very importantly, if you know the input of the round and you know that the bits of the subkey and the state have cancelled, then you can find that bit of the key.

We have three steps in our key recovery attack. What we do first is prepare related-key pairs. We have some unknown master key. We produce 31 key pairs, where we take the master key and XOR in a one-bit difference, for each of the 31 less significant bits. For each of these 31 pairs, we encrypt 2^32 plaintext pairs with the same difference. Why 2^32? Because we have probability 2^-1 per round, so after 32 rounds we can expect that for each of the 31 related-key pairs there is one good ciphertext pair that has the difference in a certain position. So we do this for each of the bits: for the least significant, the second least significant, and so on, we collect the ciphertext pairs that have the right difference.
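Two of these building blocks are easy to see in code. This is a sketch, assuming the textbook description of GOST's key schedule (eight 32-bit words used three times in order and once reversed), together with an empirical check of the 2^-1 cancellation probability in the 32-bit modular addition:

```python
import random

def gost_subkeys(k: list) -> list:
    """k: the eight 32-bit master key words. The 32 round subkeys are
    three passes over the words in order, then one pass in reverse."""
    return k * 3 + k[::-1]

def cancel_rate(bit: int, trials: int = 100_000) -> float:
    """Fraction of random (x, k) pairs for which flipping `bit` in
    both x and k leaves (x + k) mod 2^32 unchanged."""
    d, hits = 1 << bit, 0
    for _ in range(trials):
        x, k = random.getrandbits(32), random.getrandbits(32)
        if ((x ^ d) + (k ^ d)) & 0xFFFFFFFF == (x + k) & 0xFFFFFFFF:
            hits += 1
    return hits / trials

print(cancel_rate(5))   # ~0.5 for any of the 31 less significant bits
print(cancel_rate(31))  # 1.0: the carry out of the top bit is dropped
```

The top bit always cancels, so it leaks no information; that is exactly why the attack flips only the 31 less significant bits and, as described next, has to guess the most significant bit of each subkey.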
Then we have something that we call the domino effect. First you recover, bit by bit, the subkey of the first round. Once you have recovered the whole subkey, you move to the next round. You only have to recover the subkeys of the first eight rounds, because those are exactly the master key words. So what do we do? Take, for example, the least significant bit. We know the input, because we know the plaintext, so we know its least significant bit. We know that the bits have cancelled, because we have the right ciphertext difference. So we can determine the least significant bit of the subkey. Then we move to the next bit: having recovered the least significant bit, we recover the second, and so on up to the 31st bit. So we can recover all 31 less significant bits of the first subkey, and the last bit we just guess. Now we have the full subkey of the first round; we have the plaintext, so we can compute the value of the state after the first round.

Then we move to the next round and repeat the process. Again we have 31 pairs of plaintexts and corresponding ciphertexts; again we recover the least significant bit, the second, and so on up to the 31st bit; and again we guess only the most significant bit. Then we compute the value of the state after the next round. We use the same fact all the time: if the bits have cancelled and you know the input state, you can recover the key bit. This way we can recover all the subkey bits of the first eight rounds, while guessing only eight bits in total, one most significant bit per round.

So, our framework. It is a related-key attack. We have 31 related-key pairs, and we need 2^32 plaintext pairs for each of them, so the data complexity is 2^38. The time complexity is 2^38 to generate the ciphertext pairs that have the right difference, plus 2^8 for guessing the most significant bits. And the result is a full 256-bit key recovery, recovery of the whole key of GOST. As you can see, this complexity is practical. We have checked our key recovery experimentally, and it works: we needed around one day on a single machine to recover a full 256-bit random key.
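Here is a sketch of one domino step, with illustrative names that are not from any real GOST implementation: a certified cancellation at bit i reveals the subkey bit as the complement of the known input bit, and 31 recovered bits plus one guessed bit give both candidates for the 32-bit subkey.

```python
def recover_key_bit(x: int, i: int) -> int:
    """Flips at bit i of x and k cancel in (x + k) mod 2^32 exactly
    when bits i of x and k differ, so once a good pair certifies the
    cancellation, the key bit is the complement of the input bit."""
    return ((x >> i) & 1) ^ 1

def recover_subkey(good_inputs: list) -> tuple:
    """good_inputs[i] is the known round input from the good pair for
    bit i (i = 0..30). Returns both candidates for the subkey, since
    the most significant bit carries no information and is guessed."""
    k = 0
    for i, x in enumerate(good_inputs):
        k |= recover_key_bit(x, i) << i
    return k, k | (1 << 31)
```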
To conclude: the general complementation property can be used as a simple way of finding related-key differentials for Feistel ciphers, for ciphers where the round function is first an XOR of the subkey and then some transformation. Why? Because you only have to focus on the key schedule and can completely forget about the state. We still see proposals for block ciphers whose papers say: "we have run some advanced search for related-key differentials and found the best differential after 14 rounds; we could not explore more rounds because it was infeasible". They miss the fact that you can do this simple check and immediately get a much better lower bound on the number of rounds that can be attacked with related keys. Also, as you can see, the number of rounds doesn't matter when we deal with classical Feistels: Camellia could have a million rounds and would still be attacked. The technique is applicable to generalized Feistels and balanced Feistels as well; you just have to play more with the differences. And of course this is only one way of finding related-key differentials, it does not exhaust all possibilities, so please don't use it to prove any resistance of ciphers against related-key attacks. And stay tuned for our rump session talk, where we are going to show how to complement full-round Feistels. Thank you.

Moderator: Thank you very much for your very interesting results. The presentation is now open for discussion. Are there any comments or questions?

Q: For your attack on Camellia, you said that the key-schedule differential has probability 2^-128, right?
A: Yes.
Q: So you expect that there is one key there. But if there is some glitch, there might not be any key at all.
A: If you assume it is a Markov cipher; under that assumption, of course. It's just like a normal differential: whatever you assume for a normal differential, the same holds here. And because there are constants in the key schedule, we can assume it behaves like a Markov cipher.
Q: Of course. It's just that it's so close to the bound, so if you're off by just a little bit...
A: I agree.
Q: May I make a comment and ask a question? You have shown a result on the Davies-Meyer mode of Camellia with a 128-bit key, without the FL/FL^-1 layers, right?
A: That's right, without the nonlinear layers.
Q: Okay. So have you examined the complexity for the full version?
A: Actually, we can pass one of the nonlinear layers: for exactly one chaining value we can find a collision. Only one, not a second one. So it's not going to work for two layers; for the original version the problem is going to remain.
Q: Thank you very much.
Moderator: Any other comments or questions? Okay, so let's move on.