Hey everyone, my name is Asaf Rosemarin and I'm going to present a joint work with my advisor Nathan Keller: Mind the Middle Layer, the HADES design strategy revisited. Before we dive into the details, I'll start with a short summary. As you could probably guess from the name, our results concern the middle layer of the HADES design, which is a combination of SPN rounds and partial SPN rounds; we call the partial rounds the middle layer, as you see in the figure. It's okay if you don't fully understand this for now, we'll get back to it later. Specifically, we consider two instantiations of the HADES design, which are Poseidon and Starkad. For Poseidon, we show how an analysis of the middle layer can be used to increase the security guarantee against statistical attacks. As you can see in the figure, for some of the Poseidon variants the middle layer provides even better security against statistical attacks than the full rounds. For example, for the t = 6 variant, the full rounds ensure only 28 active S-boxes, while the middle layer ensures 32. Again, it's okay if you don't fully understand what this means for now, we will get back to it in more detail. These results mean that the same level of security against this type of attack can be achieved while using fewer full rounds, which can result in a significant speed-up of the cipher.
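To make the round structure concrete, here is a minimal Python sketch of a HADES-like permutation. Everything in it (the tiny field GF(17), the cubing S-box, the toy matrix, the round counts) is an illustrative placeholder of my own choosing, not the actual Poseidon or Starkad parameters; the point is only the shape: full rounds apply the S-box to every block, partial rounds to a single block.

```python
# Toy HADES-like permutation over the tiny field GF(17).
# All parameters here are illustrative placeholders, NOT real Poseidon/Starkad values.
P = 17
T = 4                                   # number of blocks in the state

SBOX = lambda v: pow(v, 3, P)           # a power-map S-box (x^3 is a bijection mod 17)
# A toy invertible matrix (I + all-ones); not MDS, just a stand-in linear layer.
M = [[2 if i == j else 1 for j in range(T)] for i in range(T)]

def mat_mul(state):
    return [sum(M[i][j] * state[j] for j in range(T)) % P for i in range(T)]

def full_round(state, rc):
    state = [(v + rc) % P for v in state]   # key / round-constant addition
    state = [SBOX(v) for v in state]        # S-box applied to EVERY block
    return mat_mul(state)                   # linear layer

def partial_round(state, rc):
    state = [(v + rc) % P for v in state]
    state[0] = SBOX(state[0])               # S-box applied to ONE block only
    return mat_mul(state)

def hades(state, r_full=4, r_partial=6):
    rc = 1
    for _ in range(r_full):      # first full rounds: statistical security
        state = full_round(state, rc); rc += 1
    for _ in range(r_partial):   # cheap middle partial rounds: algebraic degree
        state = partial_round(state, rc); rc += 1
    for _ in range(r_full):      # last full rounds
        state = full_round(state, rc); rc += 1
    return state
```

A partial round costs one S-box instead of t, which is why the middle layer is cheap in settings like MPC where the S-boxes dominate the cost.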
In Starkad, however, this was not the case. We show that for some of the Starkad variants there exists a huge invariant subspace which doesn't activate any S-boxes in the partial rounds. What we mean by invariant here is that the linear transformation used in Starkad maps this subspace to itself. What this means is that for every difference delta from our invariant subspace V, and every two inputs x and y with this difference, the difference of the outputs is determined completely by delta. Notice that the difference here is XOR, as we work over a binary field. This is obviously an undesirable property of the cipher. As we'll see later in the presentation, we got this result from an analysis of Cauchy matrices, which are the family of matrices the authors of Starkad chose to use. In a paper by Beyne et al., which appeared at Crypto 2020, the authors show how to attack Starkad assuming the Starkad matrix satisfies some conditions. We show that these conditions hold for some of the variants, which breaks the security guarantee of the cipher for some of the parameters.

Now let's get into the details. Before explaining the rationale of the HADES design, let's go over SPN and partial SPN designs. SPN, which stands for substitution-permutation network, is a very widely used construction, used in the AES for example, in which every encryption round consists of key addition, a linear operation, and non-linear operations called S-boxes, which are applied to each block of the cipher separately. Partial SPN is a newer design in which only some of the state goes through an S-box in each round. In the HADES design, the first and last few rounds are full SPN rounds, while the middle layer consists of partial SPN rounds. The idea behind the HADES design is to use the full rounds to ensure security against statistical attacks, since there are known methods to do this, as was done in the AES, and to use the middle partial rounds to ensure security against algebraic attacks, since even a partial round with one S-box, as is the case in the HADES design, has the same algebraic degree as a full round. This design is meant for cases in which the main computational bottleneck is the S-boxes, and the linear operations take negligible time in comparison, like in MPC. HADES is a generic design and not a specific cipher, so let's talk about its instantiations, namely Starkad and Poseidon. Each of Poseidon and Starkad can be parameterized; we denote by t the number of blocks in a variant, which is also the number of S-boxes in a full round. In all of the variants, only one block out of t goes through an S-box in the partial rounds. The number of full rounds before and after the middle partial rounds is always four, while the number of partial rounds in the middle changes between the variants. The matrix chosen for the linear transformation is an MDS matrix. What this means for us is essentially that in every two adjacent full rounds at least t+1 S-boxes are active; we'll see in the next slide exactly what's meant. So the security guarantee of Poseidon and Starkad is 4 times (t+1) active S-boxes in each characteristic.

In the previous slide I talked about active S-boxes as a measure of security against differential and linear cryptanalysis. To better understand what this means, I'll give a little background about differential cryptanalysis. In differential cryptanalysis we start with two inputs with a known difference and try to keep track of the difference at every stage of the encryption. In our case we're interested only in SPN and partial SPN ciphers, which consist of: linear operations, in which, by linearity, if the input difference was delta then the output difference must be M times delta; key addition, which obviously does not change the difference; and S-boxes. For S-boxes, if the input difference is zero it will remain zero, as a zero difference means the inputs were the same, so the outputs will be the same as well. Otherwise, we cannot know the output difference for sure, and in that case we say that the S-box is active.
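These three propagation rules can be checked in a few lines of Python. The 4-bit S-box (PRESENT's) and the bit-rotation linear map are my own illustrative choices, not components of Poseidon or Starkad; they just demonstrate how differences behave in each piece of an SPN round.

```python
# Difference propagation through the pieces of an SPN round (toy, 4-bit).
# The S-box is PRESENT's 4-bit S-box, used here only as an arbitrary example.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def linear(x):
    # A GF(2)-linear map: left-rotation of the 4 bits by one position.
    return ((x << 1) | (x >> 3)) & 0xF

delta = 0b0101

# 1) Linear layer: the output difference is fully determined by delta.
diffs = {linear(x) ^ linear(x ^ delta) for x in range(16)}
assert len(diffs) == 1                    # always the same output difference

# 2) Key addition: the key cancels in the XOR, so the difference is unchanged.
k = 0b1011
assert all(((x ^ k) ^ ((x ^ delta) ^ k)) == delta for x in range(16))

# 3) S-box: a zero input difference stays zero ...
assert all(SBOX[x] ^ SBOX[x ^ 0] == 0 for x in range(16))
# ... but a nonzero difference makes the S-box "active": the output
# difference now depends on the actual input value, not only on delta.
out = {SBOX[x] ^ SBOX[x ^ delta] for x in range(16)}
print(len(out) > 1)
```

Because an active S-box only transitions to each output difference with some probability, every active S-box multiplies down the probability of the whole characteristic.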
Even if we would like to know the output difference of an active S-box, we can only know it with a certain probability. So if we lower bound the number of active S-boxes, it means that the probability of every characteristic will be very low. Now that we understand the terminology, let's get back to our result on Poseidon. To get a lower bound on the number of active S-boxes in the partial rounds, we use an automated tool presented in a paper from 2015. Let's look at some examples. For t = 4, the lower bound on the active S-boxes in the full rounds is 20, and for the partial rounds it is at least 36. For t = 6, it is 28 active S-boxes for the full rounds and 32 for the partial rounds. Notice that for all of the variants presented in the table, except for the t = 16 one, the partial rounds provide better security than the full rounds. As the main purpose of the full rounds is to ensure security against statistical attacks, and we've seen that the partial rounds can provide the same level of security, a possible application would be to reduce or even completely remove the full rounds. This could cause a massive speed-up of the cipher. However, after considering our results, the designers of Poseidon decided not to currently reduce the number of full rounds, as they may still provide resiliency against yet-unknown attacks.

When we started analyzing Starkad, we tried to use the exact same method we used for Poseidon and get a lower bound using the automated tool. We were very surprised to see that the tool could not provide any lower bound. Instead, for the variant with 24 blocks, we found a subspace of dimension 18 which does not activate any S-box in the whole middle layer. As it turns out, this subspace is an invariant subspace. What this means is that differences that start in the subspace will always stay in the same subspace, and so will never activate any S-box, regardless of how many partial rounds there are. After we found this, we naturally wanted to understand why it happened, so we started studying the family of matrices Starkad uses, which are Cauchy matrices. Specifically, the (i, j) entry of the matrix is the inverse of x_i + y_j, where the x_i and y_j are just consecutive integers representing elements in the field of size 2^n.

Our results on Starkad are the following three theorems. The first is for the special case in which the number of blocks t is a power of 2, say 2^k, in which case we prove that there exists an invariant subspace that does not activate any S-box in the partial rounds, and its dimension is at least t − 2, which is very high. The second theorem handles the general case, in which the number of blocks is s times 2^k for an odd s, in which case we prove a lower bound of t − (k+1)·s on the dimension. Notice that if we evaluate it at s = 1, we get that the dimension is at least t − k − 1, so the second theorem does not imply the first theorem. The third theorem, which appears in the paper as a conjecture (however, we proved it since submitting the paper, so it's now a theorem), states that the dimension of the invariant subspace in the general case is at least t − 2s. This is very neat, as in this case, if we evaluate it at s = 1, we get the first theorem, so the last theorem implies both the first and the second theorems. Notice also that for the t = 24 variant, which we looked at earlier, if we evaluate this bound we get that the dimension of the invariant subspace is at least 18, which matches exactly what we found in practice.

The way we prove the three theorems we just saw is by studying the minimal polynomial of the matrix. As it turns out, proving an upper bound of d on the degree of the minimal polynomial of M immediately translates to a lower bound of t − d on the dimension of the invariant subspace. The reason is that all higher powers of M are spanned by M^0 up to M^(d−1). We can easily see this by using division with remainder of X^N by the minimal polynomial m(X): we write X^N = q(X)·m(X) + r(X). Now, if we evaluate both sides of the equation at the matrix M, since the evaluation of the minimal polynomial at the matrix is 0, we get that M^N is equal to r(M), which is of course spanned by powers of M smaller than d, as its degree is less than d. The condition for some input difference v not to activate any S-box is that the first block of v is 0, so that the first S-box is not activated; then the first block of M times v also has to be 0, so that the second S-box is not activated, and so on. So for every round i we get a linear constraint on M^i times v. As we just saw, the powers of M span a space of dimension at most d, so these are only d linear constraints, and thus the dimension of the solution space must be at least t − d. If we recall the three theorems from the previous slide: the first claims a dimension of at least t − 2, so we need to upper bound the degree of the minimal polynomial by 2; the second claims a dimension of at least t − (k+1)·s, which means we need to upper bound the degree of the minimal polynomial by (k+1)·s; the third claims a dimension of at least t − 2s, so we need to show that the degree is at most 2s.

To prove these results, we study a class of matrices which we named special matrices. Special matrices are a class of square matrices whose sizes are powers of 2, defined using a recursive definition: every 1-by-1 matrix is special, and bigger special matrices are symmetric 2-by-2 block matrices whose blocks are smaller special matrices. From this definition, it's pretty clear why the size of a special matrix must be a power of 2. The special matrices are a commutative subring of the ring of matrices. What this means is that if we add two special matrices together we will get a special matrix, if we multiply two special matrices together we will still get a special matrix, and special matrices commute with each other. The interesting stuff happens when the special matrices are over rings of characteristic 2, such as binary fields, which is the case in Starkad. Notice that it's important to generalize from binary fields to rings of characteristic 2, as matrices with polynomials as entries arise when analyzing the characteristic polynomial, which is related to the minimal polynomial. From now on, we'll only talk about special matrices over such rings. The first property we're going to discuss is that every special matrix M has a single eigenvalue, which we denote by λ(M). The eigenvalue is additive, and the determinant is also additive, which is pretty unusual. Finally, M squared is a scalar matrix, and the scalar is the square of the eigenvalue of M. So why should we care about special matrices? As it turns out, when t is a power of 2, the Starkad matrix is a special matrix. Notice, however, that this is not true in general for Cauchy matrices; there are way more Cauchy matrices than there are special matrices. While not every Cauchy matrix is a special matrix, every special matrix is a Cauchy matrix, so the special matrices are a subclass of the Cauchy matrices. The reason why the Cauchy matrices used by Starkad are special matrices is the specific choice of the sequences x_i and y_j as consecutive integers starting from 0, which gives the matrix this special structure. Using what we've seen earlier, as M squared is a scalar matrix, the minimal polynomial is of degree at most 2, and we get a lower bound of t − 2 on the dimension of the invariant subspace, which proves the first theorem. In the general case, where t is equal to s times 2^k, the Starkad matrix is a block matrix with s-by-s blocks, each of which is a special matrix of size 2^k by 2^k. We're going to use that fact to prove the second and third theorems. As the Starkad matrix in the general case is a block matrix of special matrices, we proceed to study this type of matrices. What we prove is the following: take such a matrix M and replace each block with its unique eigenvalue λ.
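Here is a small sanity check of the mechanism behind the first theorem. I assume the consecutive-integer sequences are x_i = i and y_j = t + j (one natural reading of the talk's description; the real Starkad parameters may differ), and I work over GF(2^8) with the AES reduction polynomial purely for convenience; the argument only uses characteristic 2, not the field size.

```python
# Sanity check: for t a power of 2, a Cauchy matrix built from consecutive
# integers squares to a scalar matrix over a binary field.
# Field: GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1 (my choice
# for illustration; Starkad's real field is larger, the argument is the same).
POLY = 0x11B

def gf_mul(a, b):
    # Russian-peasant multiplication in GF(2^8) with on-the-fly reduction.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= POLY
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    while e:
        if e & 1:
            r = gf_mul(r, a)
        a = gf_mul(a, a)
        e >>= 1
    return r

def gf_inv(a):
    return gf_pow(a, 254)      # a^(2^8 - 2) = a^(-1) for nonzero a

t = 4                          # a power of 2
# Cauchy matrix: entry (i, j) = 1 / (x_i + y_j) with x_i = i, y_j = t + j;
# in a binary field, "+" is XOR.
M = [[gf_inv(i ^ (t + j)) for j in range(t)] for i in range(t)]

# Compute M^2 in GF(2^8).
M2 = [[0] * t for _ in range(t)]
for i in range(t):
    for j in range(t):
        acc = 0
        for k in range(t):
            acc ^= gf_mul(M[i][k], M[k][j])
        M2[i][j] = acc

is_scalar = all(M2[i][j] == 0 for i in range(t) for j in range(t) if i != j) \
            and len({M2[i][i] for i in range(t)}) == 1
print(is_scalar)   # True: M^2 is scalar, so the minimal polynomial has degree <= 2
```

Why it works: with these sequences, x_i + y_j = i XOR (t + j) = t + (i XOR j) for i, j < t, so the entry depends only on i XOR j; in the product M², each off-diagonal sum pairs up its terms, and in characteristic 2 each pair cancels.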
What we get is an s-by-s matrix; we denote its characteristic polynomial by q, which is of degree s. What we prove is that the minimal polynomial of M divides q^(k+1), which is of degree (k+1)·s, and this gives us the bound of the second theorem. The conjecture is that the minimal polynomial also divides q squared, which would give us the bound of the third theorem.

Some of you may remember that I promised in the summary that the invariant subspace can be used to enable an algebraic attack on Starkad. While it's obvious how such an invariant subspace is useful when attacking the cipher using differential or linear attacks, it may not be so clear how to use it for other purposes. Remember that not activating any S-box means that if we take two inputs x and y with difference equal to some delta from the subspace, then the difference between the outputs of the middle layer on x and y, f(x) and f(y), is equal to L(delta), where L is some power of the round matrix M. If we choose some x from the subspace and y = 0, we get that f(x), the output after the middle layer, is equal to L(x) plus f(0), which is a constant. So what we got is that the middle layer acts like an affine transformation for inputs from the subspace, which has algebraic degree 1. As the main purpose of the middle layer is to increase the algebraic degree, this invariant subspace is obviously bad news. We didn't use the invariant subspace ourselves to attack the cipher, but left it for future work. In parallel research by Beyne et al., which was published at Crypto 2020, it was discovered that if, hypothetically, the Starkad matrix has a small multiplicative order d, then it is possible to mount an algebraic preimage attack on Starkad. From their proof, it is fairly easy to see that the exact same attack holds for matrices whose minimal polynomial has degree at most d, which is exactly what we proved. So the invariant subspaces can be used to attack Starkad, and the attack even breaks the security guarantee for some choices of parameters.

As we've seen with Starkad, a bad choice of the matrix can lead to the existence of big invariant subspaces that pass through the entire middle layer without activating a single S-box. In contrast to Poseidon, in which the middle layer can dramatically increase the security against differential and linear attacks, in Starkad the middle layer does not increase the security with respect to these attacks at all, and even algebraic attacks can be launched against the cipher. Most importantly, preventing the invariant subspace is really easy: all we need to do is to choose any t which is not divisible by four. Following our results, the authors of Poseidon and Starkad recommended to use only Poseidon, and, if using Starkad anyway, to use it only with an odd t. The lesson from the talk is to not disregard the middle layer in the security analysis. As we've seen, the middle layer can both boost the security guarantee of the cipher and be very dangerous when the matrix is not chosen properly, so it is very important to always take it into account in the security analysis. Thank you very much for listening, I hope you enjoyed the talk. If you did, you're welcome to view the paper to see all the proofs in detail. And that's it, thank you very much.