So, this is the presentation of the work titled "Cryptanalysis of Masked Ciphers: A Not So Random Idea". This is joint work by Tim Beyne, myself, Siemen Dhooghe, and Zhenda Zhang. In short, this work is about side-channel analysis, masking, and probing security. We take a masked symmetric primitive and verify whether or not it is secure against higher-order side-channel attacks, and we propose a security analysis based on linear cryptanalysis.

Important to note is that we look at bounded-query security. So we are leaving behind the world of perfect security, and we will see further on that this brings with it multiple efficiency gains. Those who know threshold implementations, introduced by Nikova and co-authors in 2006, know that higher-order threshold implementations had certain composability problems; in this work, via our security analysis method, we will see how to create and verify higher-order secure threshold implementations. An important distinction with other known models is that our security analysis includes randomness generation: if there is any RNG in the masking, we can include it in the analysis.

We will also see the importance of certain cryptanalytic properties. The cipher we mask matters: its diffusion layers determine which linear activity patterns can occur, which in turn affects the security, and the non-linearity of the masked S-box will also be of importance.

To start the work, we first explain what threshold implementations are. Threshold implementations are in fact masked circuits, where we have an input to that circuit.
That input is first put through an encoder function, so the input is shared using randomness, and is then put through the masked functions. At the end of the computation, all the shares are decoded again. The masked functions of a threshold implementation adhere to three properties: correctness, non-completeness, and uniformity. Correctness simply means that the masked function indeed implements a masking of the particular function F you want to protect. Non-completeness means that each masked Boolean function only operates on a subset of the shares; F1, for example, only operates on the first and third shares, but not on the second. Uniformity means that the masked function is balanced for each particular choice of the input secret.

In this work, we consider the glitch-extended probing model. That means we work in the probing model, and we also take care of the glitches that can occur on the platform. This adversary model is defined by saying an adversary can pick a threshold number of masked functions to observe; in this work, that threshold will typically be equal to two. What do we mean by observing a function? In this case, it means observing all the inputs of that function. So the adversary here can view only two masked functions of the entire masked cipher in total. This indeed captures glitches, since the input and output of each masked function in a threshold implementation are registered.

On the previous slide, we explained the adversarial capabilities. On this slide, we explain the security model, which is a bounded-query left-or-right security model. In this model, an adversary chooses two secret inputs, K0 and K1; these are typically the plaintext and the key of a block cipher. The adversary sends these to the challenger.
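As a brief aside, the three properties just described can be checked exhaustively for a minimal toy example. The sketch below uses a first-order, 3-share masked XOR (share-wise XOR of the input shares); this is an illustration of the definitions only, not the second-order 7-share construction used in this work.

```python
from itertools import product
from collections import Counter

def share(x, r1, r2):
    # 3-share Boolean masking: x = x1 ^ x2 ^ x3
    return (r1, r2, x ^ r1 ^ r2)

def masked_xor(a_sh, b_sh):
    # share-wise XOR: c_i = a_i ^ b_i, so each output share
    # only touches share i of each input (non-completeness)
    return tuple(ai ^ bi for ai, bi in zip(a_sh, b_sh))

def check(a, b):
    counts = Counter()
    for r in product((0, 1), repeat=4):
        a_sh = share(a, r[0], r[1])
        b_sh = share(b, r[2], r[3])
        c_sh = masked_xor(a_sh, b_sh)
        # correctness: the shares decode to a ^ b
        assert c_sh[0] ^ c_sh[1] ^ c_sh[2] == a ^ b
        counts[c_sh] += 1
    # uniformity: for this fixed secret, every valid output
    # sharing occurs equally often over all input sharings
    return len(counts) == 4 and set(counts.values()) == {4}

assert all(check(a, b) for a, b in product((0, 1), repeat=2))
print("correct, non-complete, uniform")
```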
The challenger picks a uniformly random bit b and sends back an oracle O_b. This oracle is the masked circuit with one of the secrets, either K0 or K1, baked into it. The adversary can then query this oracle using probing queries. So, using the adversarial capability we discussed on the last slide, the adversary can query that oracle, let's say q times, and the oracle gives back all the inputs to the probed masked functions. It is important to note that for every query the adversary makes, the encoder function uses new randomness to remask the secret input, and that this encoder function cannot be probed. After q queries, the adversary needs to guess the random bit b, that is, guess which of the secrets K0 or K1 was used. The advantage of the adversary is defined as the probability of guessing the correct bit minus the false positive rate.

We then come to the main part of our work. On the previous slide, we saw that the adversary gets back a bunch of probed values. What we want to prove now is that these probed values either do not relate to any secret, or are fully entropic, so that what the adversary sees is only uniform randomness. This entropy of the probed values can be bounded in terms of the non-trivial Fourier coefficients of their distribution; that is the reduction we make. The question then is: how do we bound these Fourier coefficients? In fact, we know how, because this is done using standard linear cryptanalysis. There are some small changes, namely that we need to do linear cryptanalysis over a quotient space for fixed secrets, but once we have adapted the definition of a correlation matrix accordingly, we are back in the realm of standard linear cryptanalysis.

Our work is best explained via a case study, in which we consider a second-order masked LED. The round function of LED is shown on the slide.
LED should be seen as something close to the AES, in the sense that its diffusion layers also consist of a ShiftRows and a MixColumns operation. However, the S-box is different: LED uses the PRESENT S-box and not the AES S-box. Our sharing requires 664 bits of randomness, which includes the masking of the plaintext and the key. In total, we use 7 shares per state bit, to guarantee second-order non-completeness and uniformity of the quadratically decomposed PRESENT S-box, and 3 shares per key bit, which is lower because the key schedule of LED is linear.

We do our security analysis in three major steps. The first is an S-box-level, or round-level, analysis. This follows standard side-channel literature: we take one round of the masked LED and show that any two probes placed in this one round do not give any information on any secret value. In fact, we show that this is perfectly secure. The novelty of our work lies in the analysis of nearby rounds and of distant rounds. First, we show that when probing nearby rounds, say the first and the second round of the masked LED, there are in fact zero-correlation approximations between the probed values, which means that the adversary views pure randomness: the distribution of the adversary's observations follows the uniform distribution. In other words, a few rounds of LED are still perfectly secure. When probing distant rounds, say the first and the fourth round of the masked LED, the adversary might be able to distinguish the probed values from a uniform distribution. However, the correlation between those observations is exceedingly low, which we use to show that the adversary needs to collect a significant number of probed values, that is, make a significant number of queries, to distinguish this distribution from uniform.
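Returning briefly to the randomness figure above, here is one plausible accounting of the 664 bits. This is my reconstruction, under the assumptions that the key is 128 bits (LED-128), that masking a secret bit into s shares consumes s - 1 fresh random bits, and that the extra S-box randomness is one 4-bit cell, itself masked over the same 7 shares.

```python
STATE_BITS = 64      # LED state size
KEY_BITS = 128       # assuming the LED-128 variant
STATE_SHARES = 7     # shares per state bit (second-order non-completeness)
KEY_SHARES = 3       # shares per key bit (linear key schedule)

# masking a secret bit into s shares consumes s - 1 fresh random bits
state_rand = STATE_BITS * (STATE_SHARES - 1)   # 384 bits
key_rand = KEY_BITS * (KEY_SHARES - 1)         # 256 bits
# assumed: one extra 4-bit cell of S-box randomness, shared over 7 shares
sbox_rand = 4 * (STATE_SHARES - 1)             # 24 bits

total = state_rand + key_rand + sbox_rand
print(total)  # 664
```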
We start our analysis with the S-box-level, or round-level, analysis. We take one round of the masked LED, with some details of the sharing as follows. The PRESENT S-box S is divided into two quadratic functions, S1 and S2, and each of S1 and S2 is implemented such that it is correct, second-order non-complete, and uniform. We show, using standard literature, that one round of the masked LED is perfectly secure. Namely, if we probe two instances of the same component, two S1s or two S2s, then the security simply comes down to the uniformity and non-completeness of S1, so we can show that the observations are independent of any secret variable; the same holds for probing two S2s. One can also probe one S1 and one S2. In this case, the probes might give information on a secret value, and so we need to add randomness in between. We add this randomness in a simple way: we simply pick some random values r̄ and add them to the S-box. In fact, we show that we can add the same randomness to each S-box, so we only need to sample one cell of randomness and add it to every cell in the state. What is more, we will show later on that this randomness can be reused every round. So in total, we are only adding one cell of randomness for the entire masked LED cipher. If we add randomness as shown on the previous slide, then we can show that one round of the masked LED is indeed perfectly secure.

We now analyze what happens if an adversary places probes in two S-boxes from different rounds. First, we look at what happens if those rounds are nearby, say the first and second, or first and third, rounds of the masked LED. In this analysis, we assume that the key is constant, so we analyze, with our linear cryptanalysis, the key schedule separately from the state function. In our main theorem, this is indicated by a good labeling of wires.
More information on this can be found in our paper directly. Now let's consider the activity pattern of the masked LED. Remember that LED's diffusion layers are similar to those of the AES: we have a ShiftRows operation and a MixColumns operation. Assume now that the adversary has probed an S-box, for example the top-left S-box in the state of round i. We consider this probe as the input mask of a trail, and we follow the activation pattern from there. Following ShiftRows, the top row of the state remains unaltered, so the active cell is still at the top left. But when going through MixColumns, we know that the linear branch number of MixColumns is 5, so MixColumns activates the entire column of the state. Going again through ShiftRows, we find the typical ShiftRows pattern, and MixColumns will then activate the entire state.

Now let's look at what happens if the adversary places a second probe, say in round i+1. For ease in this presentation, we just assume that the adversary probes only an S-box, so one cell of the state. Say the adversary has probed this cell of the state at round i+1 with its second probe. Given that cell as output mask and P1 as input mask, we find that there is a mismatch: a zero-correlation approximation between those masks, which means that the values from probe P1 and the values from probe P2 are in fact not correlated; they are purely uniform random. The same happens for any other cell the adversary can probe in round i+1, or even in round i+2, and while it is not shown on the slide, the same applies to round i+3 as well. So if you place probe P2 here, then again you find a zero-correlation approximation. In short, we show that the masked LED is still perfectly secure for nearby rounds.
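The worst-case activity propagation just described can be reproduced with a small sketch (illustrative Python, not code from the paper): a single active cell, pushed through ShiftRows and a MixColumns layer with linear branch number 5, activates a full column after one round and the entire state after two.

```python
def shift_rows(act):
    # AES/LED-style ShiftRows on a 4x4 activity pattern:
    # row i is rotated left by i positions
    return [row[i:] + row[:i] for i, row in enumerate(act)]

def mix_columns(act):
    # branch number 5: one active input cell in a column
    # forces the whole output column active (worst case)
    out = [[False] * 4 for _ in range(4)]
    for c in range(4):
        if any(act[r][c] for r in range(4)):
            for r in range(4):
                out[r][c] = True
    return out

state = [[False] * 4 for _ in range(4)]
state[0][0] = True                       # probe P1 on the top-left S-box
state = mix_columns(shift_rows(state))   # after round i: full column active
assert sum(map(sum, state)) == 4
state = mix_columns(shift_rows(state))   # after round i+1: full state active
assert sum(map(sum, state)) == 16
```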
The story changes for distant rounds. For distant rounds, say the adversary probes round 1 and round 4 of the masked LED; then there are trails possible between input mask P1 and output mask P2. This means that the observations from the two probes are in fact correlated: they do not follow a uniform distribution, and that information might give information about a secret value. However, notice that this trail passes through a total of 24 shared S-boxes: here, for example, it passes 4 S-boxes, here another 16, and here another 4. And in fact, we show that our shared S-box has maximum absolute correlation 2 to the minus 3, so the masked S-box actually has, let's say, good non-linearity. This shared S-box was simply the direct balanced sharing, using 7 shares, of the quadratically decomposed PRESENT S-box, but we were lucky that this S-box indeed shows good non-linear properties. Because the S-box is non-linear, and because we pass 24 S-boxes on the way from P1 to P2, we can show that the joint distribution of the values observed through P1 and P2 is close to a uniform distribution. In fact, the trail from the previous slide was the best trail you can find through the masked LED, and that determines the advantage of the adversary: we show that the advantage of a second-order probing adversary on the masked LED grows with the square root of the number of queries, and that you would need 2 to the 120 probing queries in order to gain an advantage equal to 1. This advantage, we claim, is sufficiently low to say that this second-order masked LED would be secure in practice.

We conclude our work. We have shown that we can use linear cryptanalysis to analyze the probing security of masked primitives. This is novel, as normally simulation-based methods are used.
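As a recap of the quantitative step behind the distant-round argument, the trail correlation is bounded by the standard piling-up estimate over the trail (notation mine): with 24 shared S-boxes on the trail, each with maximum absolute correlation $2^{-3}$,

```latex
\left|\operatorname{cor}(P_1 \rightarrow P_2)\right|
  \;\le\; \prod_{i=1}^{24} \left|c_i\right|
  \;\le\; \left(2^{-3}\right)^{24}
  \;=\; 2^{-72},
```

where $c_i$ denotes the correlation contributed by the $i$-th shared S-box on the trail.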
As a practical application, we have shown that our masked LED requires fewer than 700 bits of randomness. Typical second-order masked primitives require on the order of 10,000 bits of fresh randomness. So we show that you do not need to add randomness beyond the initial masking of the plaintext and key for second- or higher-order security, which is something that had not been shown before.

Interestingly enough, this also has an effect on the symmetric-key design world, as some primitives are easier to secure than others. For example, we specifically chose the LED cipher because its AES-like diffusion layers are quite good. However, the AES S-box currently has no known uniform sharing, which makes the analysis of a masked AES much more difficult. One can overcome this, for example, by using the changing-of-the-guards approach, but this also changes the diffusion pattern of the masked cipher. Consider also ciphers like PRESENT, which are known to be more susceptible to linear cryptanalysis: we see that in the masked version, we cannot easily use PRESENT's cryptographic properties to reduce randomness. So PRESENT is prone to requiring more randomness when masked at higher orders.

Our work also allows for some future work, which I would like to briefly mention. We found that sharings can have a new property: the masking of a function has its own non-linearity. As said before, the masked S-box we chose for LED was the direct balanced sharing, the most straightforward uniform sharing. It would be interesting to find good bounds relating the cryptanalytic properties of an S-box to those of its masking, and to find S-boxes that, when masked, show much higher non-linearity. Our PRESENT S-box, the LED S-box, was a 4-bit S-box over 7 shares, so its sharing is a 28-bit S-box.
A maximum absolute correlation of 2 to the minus 3 is actually quite high for such a big S-box, so it would be interesting to see whether we can do better. We have applied our work to the probing security model; it would be interesting to see whether the analysis can be applied to other models, and I will leave it as a challenge whether it could be applied to, say, the random probing model. In the design we have shown, we do not use an RNG; our RNG is in fact the trivial one, since the randomness is static and simply reused every round, so the RNG is the identity function in this case. It would be interesting to investigate more intricate RNG designs. Here we find that the RNG in fact changes the linear cryptanalysis of the cipher, so an interesting effect is that for different ciphers, different RNGs would work better for the masking, and it would be interesting to investigate this further. With that, I will leave the work. Thank you for your attention.