The last talk of the session is about cryptanalysis of round-reduced SKINNY. The authors are Sadegh Sadeghi, Tahereh Mohammadi and Nasour Bagheri, but none of the authors could be here, so Hadi is going to present the paper.

Thanks for the introduction. As I already mentioned, one of the authors wanted to come but, for some unexpected reason, unfortunately she couldn't, and very recently I was asked to present the paper. We actually only spoke at the airport when I was coming, so I hope that I can cover all of the details. Anyway, this presentation is about cryptanalysis of SKINNY. First I'll give a description of SKINNY, then I'll present a new zero-correlation linear cryptanalysis of SKINNY, which is the best zero-correlation characteristic that has been found for SKINNY. Then I'll discuss the MILP model for SKINNY-64 and the search for related-tweakey characteristics, and finally I'll conclude with the results.

It's good to be the last talk: a lot of the background has already been introduced in previous talks. SKINNY was introduced at CRYPTO 2016. It has different variants based on the length of the block and the length of the tweakey: n denotes the block size and t the tweakey size, where t can be n, 2n or 3n; respectively, we call them the TK1, TK2 and TK3 versions of SKINNY. The two main versions have block size n = 64 and n = 128. As usual for an AES-like cipher, we present the whole state as a four-by-four square of cells. Each round consists of five operations: first the S-boxes are applied, then round constants are added, then the tweakey is XORed into the first two rows, then we have a ShiftRows like in the AES, and then a MixColumns that is a simple XOR between the cells. The tweakey itself is updated like this: each time it passes through a cell permutation, and then the first two rows are added to the state.
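As an aside not in the talk, the tweakey update just described can be sketched in a few lines of Python for one 16-cell tweakey word. The permutation P_T and the TK2 LFSR below follow my reading of the SKINNY specification; treat the exact constants as assumptions of this sketch.

```python
# Sketch of one round of the SKINNY tweakey schedule on a 16-cell word.
# P_T and the TK2 LFSR are assumed from the SKINNY specification.

# Tweakey cell permutation: cell i of the new state is cell P_T[i] of the old.
P_T = [9, 15, 8, 13, 10, 14, 12, 11, 0, 1, 2, 3, 4, 5, 6, 7]

def lfsr_tk2(x):
    """TK2 LFSR on a 4-bit cell: (x3,x2,x1,x0) -> (x2,x1,x0, x3 ^ x2)."""
    x3, x2 = (x >> 3) & 1, (x >> 2) & 1
    return ((x << 1) & 0xF) | (x3 ^ x2)

def update_tweakey(cells, use_lfsr):
    """One tweakey-schedule round: permute the 16 cells, then (for TK2/TK3
    only) apply the LFSR to the first two rows, i.e. cells 0..7."""
    permuted = [cells[P_T[i]] for i in range(16)]
    if use_lfsr:
        permuted = [lfsr_tk2(c) if i < 8 else c for i, c in enumerate(permuted)]
    return permuted
```

For TK1 one would call `update_tweakey(cells, False)`, matching the remark that TK1 has no LFSR.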
If we have TK2, we also have an LFSR: first we apply the permutation, and then the cells in the first two rows also pass through an LFSR. For TK1, when the tweakey size equals the block size, there is no LFSR.

Zero-correlation linear cryptanalysis was introduced in previous work. We have a linear approximation like this, and for it we consider the probability that this term equals zero. The correlation is then twice that probability minus one, and if the correlation is exactly zero for a cipher, we have a distinguisher for the cipher.

For SKINNY we can easily find a zero-correlation characteristic. It's depicted here: if we have, for example, one active cell here — a non-zero mask for the input and a non-zero mask just for this output cell — then we can simply see it's not possible, because in the middle, if we look at the first cell here and here, from one direction the internal mask value should be zero while from the other it should be non-zero. So we have a zero-correlation characteristic for nine rounds, and it's very straightforward to obtain. But if we want to add one more round, extending backwards or forwards in the middle of the cipher to get a ten-round zero-correlation characteristic, then at first glance there is no contradiction between this state and this state. What is done in this paper is that the authors observed that if we consider these two cells, they should have zero masks; going into the details of the rounds, if we don't want a contradiction, they must be zero. After ShiftRows they are in the same column, and after applying MixColumns this cell should have a non-zero value.
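To make the correlation definition concrete, here is a small Python sketch (mine, not from the talk) that computes cor(a, b) = 2·Pr[a·x ⊕ b·S(x) = 0] − 1 over a single 4-bit S-box; a cipher-wide approximation with correlation exactly zero is what gives the distinguisher. The S-box table is the SKINNY-64 S-box as I recall it from the specification, so treat it as an assumption.

```python
# Linear correlation of the approximation a.x ^ b.S(x) over a 4-bit S-box.
# The table below is assumed to be the SKINNY-64 S-box.

SBOX = [0xC, 0x6, 0x9, 0x0, 0x1, 0xA, 0x2, 0xB,
        0x3, 0x8, 0x5, 0xD, 0x4, 0xE, 0x7, 0xF]

def parity(x):
    """Parity of the bits of x, i.e. the inner product with an all-ones mask."""
    return bin(x).count("1") & 1

def correlation(a, b, sbox=SBOX):
    """cor(a, b) = 2 * Pr[ a.x ^ b.S(x) = 0 ] - 1 over all 16 inputs."""
    zeros = sum(1 for x in range(16)
                if parity(a & x) ^ parity(b & sbox[x]) == 0)
    return 2 * zeros / 16 - 1
```

Because the S-box is a permutation, any approximation with exactly one of the masks zero is balanced, so its correlation is exactly zero.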
And if we compare this state to this state, we can see there is a contradiction. So by this nice trick we can extend the zero-correlation characteristic for SKINNY by one round. The authors then utilize these zero-correlation characteristics to mount key-recovery attacks on different versions of SKINNY, and they could attack fourteen and eighteen rounds for these variants.

In what follows I focus on related-tweakey impossible differentials. First let me recall the use of MILP models for finding impossible differential characteristics. In 2011, Mouha et al. showed that the problem of finding an optimal differential characteristic or linear trail can be converted into an optimization problem modelled as MILP. In MILP we have a set of inequalities and an objective function, and we want to find a solution such that this function attains its minimum value.

So how can we model the search for impossible differentials with MILP? It's simple: we consider all of the bits before and after the S-boxes. We denote the bits of the state before the S-box layer by x_{i,0} to x_{i,63}, and similarly the output bits by y_{i,0} to y_{i,63}. Each bit variable is zero if there is no difference between the states on that bit, and one if there is a difference. To describe active or passive S-boxes we need another variable a_j for each S-box — we have four-bit-to-four-bit S-boxes — which is one if the S-box is active and zero otherwise. To describe this in a MILP model we have these equations, and our aim is to find the solution minimizing this summation. Then, as is usually done, they consider the differential behaviour of the S-box and all of the transitions whose probability is larger than zero, meaning the differences delta-x that can propagate through the S-box to delta-y.
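The last step above — keeping only the S-box transitions with non-zero probability — can be sketched as follows (my illustration, not the paper's code); the MILP inequalities are then derived from exactly this set of (Δx, Δy) pairs, for example via a convex-hull computation. The 4-bit S-box table is an assumption, as before.

```python
# Enumerate the difference transitions dx -> dy with DDT entry > 0; only
# these pairs are feasible assignments for the MILP bit variables around
# an S-box. The S-box table is assumed (my recollection of SKINNY-64's).

SBOX = [0xC, 0x6, 0x9, 0x0, 0x1, 0xA, 0x2, 0xB,
        0x3, 0x8, 0x5, 0xD, 0x4, 0xE, 0x7, 0xF]

def possible_transitions(sbox):
    """All (dx, dy) with Pr[S(x) ^ S(x ^ dx) = dy] > 0."""
    ddt = [[0] * 16 for _ in range(16)]
    for x in range(16):
        for dx in range(16):
            ddt[dx][sbox[x] ^ sbox[x ^ dx]] += 1
    return {(dx, dy) for dx in range(16) for dy in range(16) if ddt[dx][dy] > 0}
```

For a bijective S-box, dx = 0 propagates only to dy = 0 and vice versa, which is exactly what the activity variable a_j encodes.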
And then, instead of considering the whole space, we can compute the H-representation of the convex hull of these transitions with SageMath. For XOR, it's well known that it can be modelled by one equation using one additional dummy variable d: a + b + c = 2d, which is a description of a XOR b = c. This excludes the impossible events: for example, (a, b, c) cannot be (0, 0, 1), because zero XOR zero is not one.

This model was first proposed, I think, by Cui et al. to find impossible differential characteristics or zero-correlation linear trails with MILP, but it was applicable only to S-boxes of small size. In follow-up work, Sasaki and Todo proposed another method that is also applicable to larger S-boxes, like 8-bit ones. The technique is to fix the input and output differences and let the MILP solver search whether or not there is a propagation from the input difference to the output difference; if it is infeasible, it concludes that this is an impossible characteristic.

The authors apply this technique to SKINNY, considering different scenarios: we can consider the input value, or the state after the S-box layer; of course we should take the tweakey into account, and we also have the output. The results are shown in this table. The best result for SKINNY TK1 is 13 rounds, and for TK2 it is 15 rounds, with these differences. So based on these results they could find characteristics that are one round longer than all of the well-known previous works in the impossible-differential setting. For example, this is the picture of the 13-round characteristic they found for SKINNY-n-n, with a contradiction in the middle. Maybe the colour is not very clear here, but they consider a fixed value here and a fixed value here, and they show that over 13 rounds it's impossible.
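The XOR constraint just described can be checked by brute force; this small sketch (mine, not the paper's) enumerates the binary triples the encoding accepts and confirms they are exactly the valid XOR propagations, including the exclusion of (0, 0, 1).

```python
def xor_feasible():
    """Binary triples (a, b, c) for which some binary dummy d satisfies
    the MILP encoding a + b + c = 2*d of the XOR relation c = a ^ b."""
    return {(a, b, c)
            for a in (0, 1) for b in (0, 1) for c in (0, 1)
            if any(a + b + c == 2 * d for d in (0, 1))}
```

The equation works because a + b + c is even exactly when c = a ⊕ b, and the dummy d absorbs the even sum.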
For 15 rounds it's a bit more complicated. They consider these states, and what they found is a bit more involved: m, n, p, q cannot take arbitrary values — they have to satisfy these equations for the characteristic to be impossible. All of the values are listed here, so in total we have 15 different impossible characteristics. But when you fix the input difference, the output difference cannot be that very specific value, and for key recovery this causes some problems that I will discuss later. This 15-round characteristic was then used to attack 23 rounds of SKINNY-n-2n.

For key recovery, as usual, we have a nice framework: we place the impossible differential in the middle, and we add some rounds at the beginning and some rounds at the end. Then we guess the involved key material in the first and last rounds, and based on the filtering probability from both ends we can compute the time complexity of the attack. But as I mentioned, the 15-round impossible differential characteristic that has been used is very specific: a specific input difference of the characteristic corresponds to a very specific output difference, with 15 different values in total. So instead of running 15 different attacks in parallel, forwards and backwards, they consider a general structure, and whenever they reach a state like this they save the value for the guessed key. Finally, they do the same in the backward direction, compare for which specific values they have the impossible event, and can then of course eliminate the wrong keys.

So these are the results for key recovery. If you look at them, the time complexity is slightly better than previous attacks, but it is mainly the memory that is notably decreased.
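The counting behind this framework is the standard impossible-differential key-elimination argument (my sketch, not the paper's exact complexity analysis): a wrong key survives each analysed pair with probability 1 − p, where p is the probability that a pair suggests that key, so the expected number of surviving wrong keys drops exponentially in the number of pairs. The function name and its inputs below are illustrative, not from the paper.

```python
import math

def surviving_keys_log2(log2_keys, log2_pairs, log2_hit_prob):
    """log2 of the expected number of wrong keys left after filtering:
    2^log2_keys candidates, 2^log2_pairs pairs, each pair eliminating a
    given wrong key with probability p = 2^log2_hit_prob."""
    p = 2.0 ** log2_hit_prob
    n = 2.0 ** log2_pairs
    # Each wrong key survives all n pairs with probability (1 - p)^n.
    return log2_keys + n * math.log2(1.0 - p)
```

For example, with 2^48 candidate keys and a single pair that eliminates a wrong key with probability 1/2, one expects about 2^47 survivors; more pairs shrink the set exponentially.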
So from 2^124 the memory is decreased to 2^78, which is notably less than the previous one; here, from 2^56 it's decreased to 2^49; and, if I can find it here, 2^112 is decreased to 2^97. So the memory is decreased notably. Sorry again that I didn't go through all of the details — I didn't have enough time to prepare myself — but thank you anyway for your attention.

So I guess you'll be willing to answer questions? — Yeah, at least I can try. — Any questions? Okay, so let's thank the speaker again. Enjoy your coffee.