OK, thank you for the introduction. My name is Daniel, and as mentioned, this is joint work with Maria Eichlseder. I'm going to be talking about the clustering of related-tweak characteristics and its application to MANTIS6.

First, a short overview of this presentation. I'm going to give you some context and present the target cipher, MANTIS. Then I'm going to talk about the differential attack strategy that we employ, go a little deeper into the probability calculation for these clustered differentials, and at the end show you some experiments and their results verifying our claims.

So first, MANTIS is a tweakable block cipher. In contrast to normal block ciphers like AES and DES, which take a message and a key and produce a ciphertext, tweakable block ciphers have an additional tweak input. This tweak can be used, for example, to give some context to the encryption. A well-known use case is memory encryption, where the tweak can essentially be set to the memory address, which helps us, for example, break determinism in the encryption process. We can also use the tweak to build more advanced modes of encryption, whereas with traditional block ciphers we always have to build some mode of operation around the cipher to, for example, get rid of this determinism. Examples of tweakable block ciphers include SKINNY, MANTIS, and QARMA. One way to construct such tweakable block ciphers is the TWEAKEY framework. At the bottom, this is essentially an SPN, with a round function that gets iterated many times.
Additionally, we have the tweakey schedule, which takes the tweak input and the key and applies some linear functions to produce a round tweakey for each round, which then gets added to the current state during encryption. This framework has some very interesting properties. Most of the time, the tweakey schedule is linear, and the attacker can control the tweak and exploit these properties for related-tweak attacks, which is useful for us. This gives the attacker much more freedom and can improve existing attacks.

We are going to focus on differential cryptanalysis. In differential cryptanalysis, we look at a differential trail: an input difference between a pair of messages leads, with some probability, to an output difference in some state. We can then, for example, guess the last round key, compute backward, and verify whether the resulting state corresponds to the trail; using the probability calculation, we can eliminate wrong key guesses. We can also do this in the context of tweakable block ciphers, where we can additionally use the freedom we have in the tweak to introduce differences and cancel out differences along the differential trail.

So let's talk about MANTIS. MANTIS was introduced at CRYPTO 2016. It is a tweakable block cipher that also aims for very low latency. There are two official variants: MANTIS5 with 12 rounds and MANTIS7 with 16 rounds. In addition to using the TWEAKEY framework, MANTIS is a reflection cipher: the alpha-reflection property lets the same circuit be used for encryption and decryption, and it also induces symmetry between the two halves of the encryption process, which we can exploit.
The round function of MANTIS uses very lightweight operations: a 4-bit involutive S-box taken from Midori, the add-key and add-constant steps, a very fast permutation of the state cells, and a very lightweight near-MDS matrix for the MixColumns step. There is also a tweak permutation, used in the tweakey schedule to permute the cells of the tweak state.

Now I'm going to present the attack strategy, where we allow input differences both in the message and in the tweak, and essentially use the freedom we gain by also allowing differences in the tweak state to improve the differential characteristics. This strategy was already applied to MANTIS5 at FSE 2017. In this picture of MANTIS5, showing essentially the whole cipher, we see a differential trail where we fix a specific difference per state cell. Each state cell represents a nibble, and the red color means that we allow a single fixed difference in that cell for this trail. This concrete trail is a differential characteristic with a probability of 2^-72, which is almost optimal for MANTIS5 but still not good enough: we can't use this trail directly to build a differential attack. If we go in the other direction and allow every possible difference, we arrive at a truncated differential characteristic, where we only care about whether a state cell is active or not, that is, whether there is a difference or whether both cells are equal in the two messages. This of course means that the S-box transitions hold with probability 1, but then we have other constraints, for example in the MixColumns step.
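As a minimal illustration (not the authors' tool), the 4-bit involutive S-box mentioned above, Midori's Sb0, and its difference distribution table (DDT) can be computed as follows:

```python
# Illustrative sketch: the MANTIS S-box (Midori's Sb0) and its DDT.
SBOX = [0xC, 0xA, 0xD, 0x3, 0xE, 0xB, 0xF, 0x7,
        0x8, 0x9, 0x1, 0x5, 0x0, 0x2, 0x4, 0x6]

# The S-box is involutive: applying it twice gives the identity.
assert all(SBOX[SBOX[x]] == x for x in range(16))

def ddt(sbox):
    """DDT[a][b] = number of inputs x with sbox(x) ^ sbox(x ^ a) == b."""
    table = [[0] * 16 for _ in range(16)]
    for a in range(16):
        for x in range(16):
            table[a][sbox[x] ^ sbox[x ^ a]] += 1
    return table

DDT = ddt(SBOX)
# Each DDT entry t corresponds to a transition probability of t / 16.
```

This small table is the basic building block for all the probability calculations in the rest of the talk.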
Overall, the probability that a given message pair with this truncated input difference follows the truncated differential characteristic is about 2^-100, so also not that great. One of the ideas was to mix these two approaches and first restrict the input difference in the tweak, which we as attackers can freely choose. If we fix this tweak difference to a single value, we can then propagate the constraints induced by this single difference to the other state cells. The result looks somewhat like this: we have many more colors, and these colors essentially represent how many different differences are allowed in each state cell. As we can see, this cluster of differentials, as we call it, has a much higher probability than both the single differential characteristic and the full truncated differential characteristic: the cluster has a probability of about 2^-39. Then, using some clever matching of message pairs in the input phase, we can generate enough message pairs that we expect one of them to follow this characteristic, and we can do that with a much lower data complexity: to generate a single pair following this cluster, we need about 2^25 chosen message pairs. This attack strategy for MANTIS5 is very practical; it was even concretely implemented and took a single hour on a standard laptop to recover the secret key for MANTIS5.

However, most of the work in this previous strategy for MANTIS5, namely finding the differential cluster and calculating its probability, was done by hand. This is one of the areas we try to improve: how can we find these clusters more automatically, and how can we then evaluate their probability?
Let's look at a short example. For a single differential characteristic, it is very easy to calculate the probability: we just go to the difference distribution table and look up the transition probabilities from, for example, one S-box step to the next. We can also verify, based on the properties of the MixColumns layer, that this transition holds with probability 1 and that the S-box transitions here each hold with probability 2^-4. But if we now allow more than a single difference per cell, the picture changes a little. Here, the green set denotes all differences that are reachable in one evaluation of the S-box from the starting input difference A. From the difference A, we can reach four differences in the DDT: A, F, D, and 5. If we allow all of those, the transition probability for the first S-box step becomes 1. But then, because we now have multiple allowed differences in the green cells, we lose some probability in the MixColumns step: for these specific MixColumns transitions to work, the probability is 2^-2, and we have the constraint that these two input cells need to have the same input difference. Later, we have the case where we want to go back from a state with multiple allowed differences to a single difference, and the question is how to calculate that probability, and how to calculate it if we allow more than one difference in this last state here. The previous approach was to compute this transition probability based on the DDT and the actual sets of differences that are allowed.
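The reachable set described above can be checked directly. A minimal sketch, assuming the Midori Sb0 S-box used in MANTIS:

```python
# Sketch: which output differences can input difference 0xA reach through
# one S-box evaluation, and with what probability? Illustrative only.
SBOX = [0xC, 0xA, 0xD, 0x3, 0xE, 0xB, 0xF, 0x7,
        0x8, 0x9, 0x1, 0x5, 0x0, 0x2, 0x4, 0x6]

counts = {}
for x in range(16):
    d_out = SBOX[x] ^ SBOX[x ^ 0xA]
    counts[d_out] = counts.get(d_out, 0) + 1
probs = {d: c / 16 for d, c in counts.items()}

print(sorted(probs))        # [5, 10, 13, 15], i.e. the set {5, A, D, F}
print(probs[0xA])           # 0.25, i.e. 2^-2 for each single transition
print(sum(probs.values()))  # 1.0: allowing the whole set makes this step free
```

Each individual transition A → {A, F, D, 5} costs 2^-2, but the transition into the whole set happens with probability 1, which is exactly the trade-off the clustering exploits.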
As I said, for a single input difference mapping to the set of all its possible output differences, the probability is of course 1. For mapping the set {A, F, D, 5} to the same set {A, F, D, 5}, we essentially go to the DDT and sum up the probabilities of each possible transition: the probability that A maps to A, A to F, A to D, and so on, accounting for all of these transitions. Here we can see that the transition from the set {A, F, D, 5} back to the same set has probability 2^-1. Allowing these multiple differences in these states improves the overall probability of the differential trail by a lot. This is how it was computed in the FSE 2017 paper.

But during the experiments and the actual implementation of that attack, we noticed that the concrete success rate does not correspond that well to this calculated probability: in practice, the attack was more successful, with more pairs than expected following the differential cluster, by more than a factor of 2. One assumption made here was a uniform distribution over the allowed differences, and if we take care and actually verify this, it turns out not to hold for most of the states. So what we did was extend the calculation with probability distribution vectors. Here we can see, for example, that we start with the single difference A and, based on the difference distribution table, move to a state with a uniform distribution over these four differences. But in the next step, we see that A is much more likely to be reached than the other three differences, so we get this skewed distribution.
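The set-to-set probability under the uniform assumption can be sketched as follows (illustrative code, not the FSE 2017 implementation; it assumes the Midori Sb0 S-box):

```python
# Sketch of the set-based calculation: the probability that a difference
# drawn *uniformly* from {A, F, D, 5} maps back into the same set.
SBOX = [0xC, 0xA, 0xD, 0x3, 0xE, 0xB, 0xF, 0x7,
        0x8, 0x9, 0x1, 0x5, 0x0, 0x2, 0x4, 0x6]
ALLOWED = {0x5, 0xA, 0xD, 0xF}

total = 0.0
for d_in in ALLOWED:                      # uniform choice of input difference
    for x in range(16):                   # uniform choice of input value
        if SBOX[x] ^ SBOX[x ^ d_in] in ALLOWED:
            total += 1 / 16
p = total / len(ALLOWED)
print(p)  # 0.5, i.e. the set-to-set transition holds with probability 2^-1
```

This reproduces the 2^-1 figure from the talk, but only under the uniformity assumption that the improved calculation later drops.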
This continues throughout the S-box layers, and we can then calculate the probability more accurately. In the second part here, we get a much better probability for our cluster, because this transition probability is now much higher thanks to the weighted vector, in which the difference A appears with much higher weight.

We then took this improved probability calculation and automated the whole process in a tool chain. First, we search for a promising truncated characteristic; we do that with a MILP model, but you could also use, for example, a SAT solver. Then we fix a promising tweak difference and propagate our constraints throughout the cipher: cell-wise for the S-box step, the key addition, and the PermuteCells layer, and column-wise for MixColumns. At the end, we calculate the probability and the data complexity automatically. Because the process is automated, we can now look at many more of these differential clusters, and we had a whole bunch of them. We then found a promising one that actually allowed us to attack more rounds of MANTIS. The tool is public, so if you want, you can go to GitHub and play around with it.

As I said, we looked at a whole bunch of clusters, and we found one for MANTIS6 that allowed us to mount a key-recovery attack on MANTIS6. Here you can see part of this cluster; one nice feature of the tool is that it also produces these pretty pictures for you. The average probability of this cluster, that is, the probability that a message pair conforming to the input differences follows this cluster and conforms to the output differences, is about 2^-67.7. Again, using careful matching of chosen input messages, we can get the data complexity down to about 2^46.7 per solution.
The overall key-recovery attack has a data complexity of 2^55.1 and a time complexity of about 2^55.5, which gives a data-time product of 2^110. This is much lower than the generic data-time product for an attack, which is about 2^126. The key-recovery attack itself is actually not that trivial, because we have to do it in multiple phases: we first guess some parts of the first round key and apply a filter, guess parts of the last round key and apply a filter, then take this combined information, guess some bits of the combined round keys of rounds 2 and 11, and apply another filter. We store all of these key guesses and intersect the key guesses from multiple iterations. The detailed process for this key-recovery attack can be found in the paper. Another nice feature is that we use the same probability calculation to calculate the probability of our filters, so that we can more accurately predict how many wrong key guesses will be filtered during the key recovery. We do that with the same methodology, computing backward starting from the ciphertext.

We did a lot of experiments to verify that this attack is practical, that it actually works with this complexity, and that the probability we calculated for this differential cluster is accurate. What you see here are experiments verifying the probability of these characteristics, for random keys and for fixed keys. Our experiments follow the theoretical estimate very closely; all three lines are essentially on top of each other. Toward the end, we get into a regime of very small sample sizes, because these probabilities of course get very small, so we had to do a lot of iterations to get pairs that follow the trail up to that point.
We verified this using an equivalent of 2^55 message pairs. I say an equivalent because 2^55 message pairs is quite a lot; in the paper, we describe a method to deterministically skip the first S-box layer and save a factor of about 2^10, so we only needed to produce 2^45 message pairs, which was actually doable. We verified the characteristic up to about round 10, where the number of solutions following it was quite low, about six; these are the empty points at the end.

We also verified that the attack method described in the paper has a good success probability. One important property for our attack is that each of our iterations contains at least one message pair that follows the characteristic, and we verified that this happens. We checked this at the MixColumns layer of round 9, where the green line follows the expected distribution, a binomial distribution, very closely. For the inner rounds, for example round 5, it is not the case that each iteration has at least one conforming message pair, because at that point the solutions are very much clustered together: several iterations have lots of solutions, and several iterations have none. But this evens out over the whole cipher, so for the later rounds we can see that the counts approach the binomial distribution quite nicely. So we are pretty confident that our approach works, and based on this, we calculated the success probability of the attack and, from that, the concrete parameters for the data and time complexity. OK, so let's come to a conclusion.
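The binomial check above can be sketched in a few lines. Illustrative only, with made-up round numbers: if each iteration generates n candidate pairs and each pair follows the cluster independently with probability p, the number of conforming pairs per iteration is Binomial(n, p), so an iteration contains at least one conforming pair with probability 1 - (1 - p)^n.

```python
import math

def p_at_least_one(n, p):
    # 1 - (1 - p)^n, computed stably for tiny p via log1p/expm1.
    return -math.expm1(n * math.log1p(-p))

# Hypothetical numbers, not the exact attack parameters: with a per-pair
# probability of 2^-46.7 and about 2^48 pairs per iteration, most
# iterations contain at least one conforming pair.
print(p_at_least_one(2 ** 48, 2.0 ** -46.7))
```

Comparing the empirical fraction of iterations with at least one conforming pair against this binomial prediction is what the green line in the round-9 plot shows.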
To conclude: we presented clustered related-tweak differentials and a general method to find and evaluate these clusters, together with an improved probability calculation for them. Using this and the automated tool chain, we found a cluster for MANTIS6 and used it to mount a key-recovery attack, and we did extensive experiments to verify the validity of the attack and the methods. Thank you for your attention, and if there are any questions, I would be happy to answer them.

Do you have an idea why this method works so well, especially on MANTIS? What are the criteria for it to work?

Yes. We also tried to apply it to other ciphers, but it works especially well for MANTIS because of the interplay between the concrete S-box and the linear layer. The S-box has the property that there are high-probability sets of differences that map to each other, and the linear layer is so lightweight that we can propagate these sets through the cipher without them changing much. So it is essentially the interplay between the concrete linear layer and the concrete S-box used in MANTIS that works very well together for this attack.

Especially in regard to the remark you just made about the interaction of the S-box layer and the linear layer, have you tried to apply these results to QARMA?

Yes, we have tried to apply these results to QARMA, and we also observed some clustering effects there, but they are not as severe as for MANTIS. We found some clusters, but no high-probability clusters that allow a key-recovery attack.

Which version could you attack?

QARMA4 is easily possible, yes.
If you allow the full data and time complexity, yes. For QARMA5, we did not manage to mount an attack within the full data and time complexity: we have a differential cluster, but its probability is not high enough. Thank you. Thank you, Christopher.

OK, thanks for the nice talk. You mentioned that in the practical evaluation of the MANTIS5 attack, it turned out that the attack was actually better than expected?

Yes, it is definitely better than expected, because the 2017 paper used the previous method for the probability calculation, so the probability of the cluster was estimated to be lower than it actually is in practice. If we apply the new probability calculation to the old cluster, we essentially recover the factor of 2 observed in the practical attack, and the estimate corresponds more closely to the concrete attack.

Because in the old paper, I remember that the data complexity was slightly higher than that.

Yes; compared to the old attack, we also changed the way we generate the initial structures, to combat the clustering effects we observed in the final picture. For the last rounds, we often have iterations where multiple pairs follow the characteristic and more iterations where no pair follows the characteristic, so the conforming pairs are not uniformly distributed over the iterations. For the MANTIS5 attack, we tried to combat that by taking more different starting points for the initial message-generation phase.

OK, thank you. So let's thank the speaker again.