Hello everybody, my name is Abdul Rahman Taleb. I'm an intern at CryptoExperts, a company in France, and today I'll be presenting our work, joint with Sonia Belaïd, Jean-Sébastien Coron, Emmanuel Prouff and Matthieu Rivain, on the random probing security of masking schemes against side-channel attacks. Most cryptographic algorithms are secure when an adversary only has access to the inputs and outputs of the algorithm. However, when implemented on hardware, these algorithms become vulnerable to so-called side-channel attacks, where an adversary can also observe the physical leakage of the device, such as power consumption or execution time, and exploit its dependence on the secret values of the algorithm. To protect implementations on embedded devices, many countermeasures have been developed. The most widely used of them is the masking countermeasure, where the idea is to split a sensitive variable x into n values that we call shares: the first n-1 shares are generated uniformly at random from the underlying group, and the last one is the combination of these random elements and the secret value. The idea is to break the dependence of the secret on any set of n-1 shares, making it harder to recover the secret as the number of shares grows. To reason about the security of masking schemes in theory, the community introduced what we call leakage models. In general, an algorithm is said to be secure in a leakage model if the leaking variables do not reveal any information about the secrets. There are three main models, which vary in terms of convenience for security proofs and closeness to real physical leakage. The first one is the probing model, where we consider that during an execution there is a fixed number t of leaking variables. The second one is the random probing model, where each variable leaks with a probability p.
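As a concrete illustration of the masking idea just described, here is a minimal Python sketch (the function names and the choice of the group Z_q are my own, not from the talk): the first n-1 shares are drawn uniformly and the last one is chosen so the shares recombine to the secret.

```python
import secrets

def share(x, n, q=256):
    """Additive masking: split x into n shares over Z_q. The first n-1
    shares are uniform; the last makes the shares sum back to x."""
    shares = [secrets.randbelow(q) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % q)
    return shares

def unshare(shares, q=256):
    """Recombine the shares to recover the masked value."""
    return sum(shares) % q
```

Any n-1 of these shares are jointly uniform, so by themselves they carry no information about x.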
And the last one is the noisy leakage model, where all variables leak with some noise. In fact, as a model gets closer to describing the reality of leakage, it becomes less convenient for proofs. Until now, the most widely used one has been the probing model, but it has recently been challenged since it does not capture the reality of physical leakage very well, and so researchers have been trying to establish security properties in the other models. There is actually a reduction, established in 2014, that allows one to move from one model to another. It states that if an algorithm is secure in the probing model, then it is also secure in the random probing one; but in this part of the implication, the tolerated leakage probability decreases as the size of the algorithm grows, so it is not a very tight reduction. On the other hand, the second part of the reduction, from random probing to noisy leakage, is tighter and more convenient to use. This is why our work reasons in the random probing model: it is closer to the noisy leakage model while remaining more convenient for security proofs. Some constructions already exist in the random probing model. Two of them are based on expander graphs, but they are not very simple to use, and the tolerated leakage probability is not made explicit by the authors. A more recent work from 2018 is based on secure multi-party computation protocols; it tolerates a leakage probability of about 2^-25 with a complexity polynomial in the security parameter, but this polynomial is not made explicit by the authors. Our work is another construction in the random probing model, and our contributions are threefold. We first introduce a new automatic formal verification tool, which we call VRAPS, that, given a small circuit or algorithm, computes the parameters for which it is random probing secure.
We also provide new composition and expansion properties that make small circuits composable into a globally random probing secure circuit, achieving arbitrary security levels under certain conditions, and we analyze the complexity of this strategy. Finally, we exhibit an instantiation with a new construction from base gadgets that fulfill the expansion property, as verified by our tool, and we show that for a given security parameter κ, we are able to achieve a complexity in κ^7.5 while tolerating a leakage probability of about 2^-8. So what is this random probing security that I keep mentioning? For our definitions, we consider circuits that are directed graphs, as in the example on the left, whose edges are the variables, which we call wires and which each leak with probability p, and whose vertices are the operation gates. We can have addition and multiplication gates, random gates that generate random values, and copy gates that output two fresh copies of an input variable. In our model, we do not consider leakage on output wires, since when composing several circuits, the output wires of one circuit are just the input wires of the next. So in the random probing model, since each wire leaks independently with probability p, we formalize this by the notion of a set of leaking wires W. This set can be thought of as the output of a random process in which each wire of the circuit is added to W with probability p, independently of the other wires. Such a circuit is said to be (p, ε)-random probing secure with respect to the following events: for a given set W of leaking wires, either the leaking values are independent of the secret inputs, meaning they can be simulated without knowledge of the secret values, in which case we say we have a simulation success; or we have a simulation failure, and we bound the probability of a failure event by a function ε.
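The random process behind the leaking-wire set W can be sketched as follows (an illustrative snippet, not the paper's tool; the seed parameter is just for reproducibility):

```python
import random

def sample_leaking_wires(num_wires, p, seed=None):
    """Random probing model: each wire is added to the leaking set W
    independently with probability p."""
    rng = random.Random(seed)
    return {i for i in range(num_wires) if rng.random() < p}
```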
So to determine the random probing security of a circuit, we need to compute the probability of this failure event. Observe that, since each wire leaks independently with probability p, the probability of obtaining a given set W of leaking wires of size i is p^i (1-p)^(s-i), where s is the total number of wires in the circuit. Since ε is the probability of a failure event, we can express it as a function of p: ε is the sum, over all sets W for which there is a simulation failure, of their probability of occurrence — the sum of the probabilities of all the failure events. We can also express ε slightly differently by grouping the failure sets W by size: if we denote by c_i the number of sets W of size i for which there is a failure event, then ε is simply the sum, over all sizes i from 1 to s, of the coefficient c_i times the corresponding probability p^i (1-p)^(s-i). Using this expression, we can exhibit an algorithm that, given a circuit of s wires, computes the coefficients c_i to get the value of ε in terms of p. What our algorithm VRAPS does is compute the coefficients c_1 through c_s of ε: it iterates over all sizes i from 1 to s, enumerates the corresponding sets W of size i, and applies a set of rules inspired by maskVerif, a verification tool in the probing model from 2015, to determine whether each set is independent of the secret inputs. The coefficient c_i is then the number of failures detected at size i. Of course, this algorithm is only practical for small circuits: as the size of the circuit grows, computing these coefficients becomes exponentially hard.
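The coefficient-based expression of ε, together with a brute-force version of the enumeration, can be sketched like this (the `fails` predicate stands in for the maskVerif-style simulation rules; this is an illustration, not the actual VRAPS code):

```python
from itertools import combinations

def compute_coefficients(num_wires, fails):
    """c_i = number of size-i wire sets W whose simulation fails,
    for i = 1 .. s (brute-force enumeration)."""
    coeffs = []
    for i in range(1, num_wires + 1):
        coeffs.append(sum(1 for W in combinations(range(num_wires), i)
                          if fails(set(W))))
    return coeffs

def epsilon(coeffs, p):
    """epsilon(p) = sum_i c_i * p^i * (1-p)^(s-i), with s the wire count."""
    s = len(coeffs)
    return sum(c * p ** i * (1 - p) ** (s - i)
               for i, c in enumerate(coeffs, start=1))
```

As a toy check: in a 3-wire circuit where simulation fails exactly when wires 0 and 1 both leak, the coefficients are (0, 1, 1) and ε(p) collapses to p².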
But testing the security of small circuits is interesting in itself, especially when we deduce global security through composition properties, as we do in our work. In the paper, we also offer a couple more optimizations that increase the size of the circuits we can feed to our algorithm with a reasonable execution time. As I just said, in our work we also try to achieve global random probing security from small composable circuits. We reason on small n-share circuits that we call gadgets and that compute a given functionality; for example, an n-share addition gadget outputs the sum of its two n-share inputs. We show that, given such gadgets for each basic gate operation that are (t, p, ε)-random probing composable, a circuit composing them is globally random probing secure with a failure probability multiplied by the size of the circuit. This is because each gadget in the circuit fails with probability ε, so a global failure occurs when at least one of the composed gadgets fails. Concerning the random gate, note that an n-sharing of a random value is simply n independent random values, which is why we do not need a dedicated gadget for randomness: it trivially satisfies the property. The definition of random probing composability is a refinement of random probing security. We still consider a set W of leaking wires for the simulation success and failure, but in addition we also consider any set of t or fewer output wires, and these two sets should be simulated from at most t shares of each input. This additional constraint gives us an invariant for composition: for each gadget, we can achieve a perfect simulation of the leaking wires plus t shares of each output sharing from at most t shares of each input sharing.
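The composition bound just mentioned is essentially a union bound over the gadgets; a minimal sketch (my own helper, not from the paper):

```python
def global_failure_bound(num_gadgets, eps):
    """Composition: the composed circuit can only fail if at least one of
    its gadgets fails, so the global failure probability is at most
    (number of gadgets) * eps, capped at 1."""
    return min(1.0, num_gadgets * eps)
```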
And since, when composing circuits, internal output and input wires are directly connected, this invariant gives us the global random probing security of the resulting circuit. In addition to composing gadgets for global random probing security, we also exhibit a property that allows composable gadgets to be expanded to achieve arbitrary security levels. This is a revisited version of the multi-party computation approach from 2018. The expansion strategy consists in taking n-share gadgets for the basic operations that are random probing expandable, and using them to replace a base circuit, where each wire leaks with probability p, by an expanded circuit where each gate is replaced by the corresponding gadget and each wire by n wires carrying a sharing of the original wire. In effect, this replaces the leakage probability p of a wire in the original circuit by the failure probability ε of the expanded gadget's simulation: if a simulation fails, one needs the full input sharing to simulate the gadget, which corresponds to leaking the corresponding wire value in the base circuit. This strategy can be applied recursively to the base circuit, amplifying the security level until we reach the desired one, and it works as long as ε is strictly smaller than p. To do this, the gadgets must satisfy what we call random probing expandability, which strengthens random probing composability. As before, since we are still composing gadgets, we keep the same conditions on the t output wires and t input shares. But since in the base circuit each input wire of a gate leaks independently, a gadget should have a failure probability that is independent for each input, which is why our failure event probabilities are ε on the first input, ε on the second input, and ε² on both.
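The amplification effect of the expansion strategy can be illustrated by iterating the failure function on the base leakage probability. Here `eps` is a hypothetical failure function of amplification order 3/2 (the constant 10 is made up for illustration); the key point is that iterating drives the probability down whenever eps(p) < p.

```python
def expand(eps, p, levels):
    """Each expansion level replaces the wire leakage probability by the
    gadget failure probability; iterating amplifies security as long as
    eps(p) < p."""
    for _ in range(levels):
        p = eps(p)
    return p

# Hypothetical failure function of amplification order 3/2 (constant made up).
eps = lambda p: 10 * p ** 1.5
```

For example, with p = 2^-8 ≈ 0.0039 we get eps(p) ≈ 0.0024 < p, so every further level strictly decreases the failure probability.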
Now, in addition, in case of a failure in a composed gadget, we should still be able to have a perfect simulation in the gadgets directly connected to it in the circuit. A necessary condition for this is to be able to choose at least one set of n-1 output wires for which the simulation succeeds along with the set W, since having access to n-1 shares plus the original wire value corresponds to the base-case simulation. This way, in case of a failure event in a composed gadget, we can still produce a simulation of the resulting circuit with the right number of input shares. So under this condition, we are allowed to choose any set of n-1 output wires and still have at least one simulation success. Using this definition, we show that for any n-share gadget that is random probing expandable, the gadget expanded down to the gate level is also expandable, with a failure function amplified at each expansion level, as desired. We also show that such an expandable gadget is composable with failure function 2ε, which is not hard to see, since expandability is just composability with some additional conditions on the simulation. Finally, combining these two results, we see that using expandable gadgets and the expansion strategy on a base circuit, we can achieve random probing security and choose the number of expansions to reach an arbitrary security level. To verify the composability and expandability of small gadgets, we added some modifications to our verification tool VRAPS so that it computes the necessary parameters for each property. The difference with plain random probing security verification is that we must also consider output wires in the simulation, and count a failure when a simulation needs more than t shares of the inputs. With these conditions in mind, we included the verification of these two properties in our tool.
In our work, we instantiate the expansion strategy with a construction of three-share gadgets that we show to be random probing expandable. The first one is a three-share copy gadget that outputs two fresh copies v and w of the variable x. For each of the copies, we use three random values that are added twice to the output shares in a circular order: for example, for the first output, we add r0, r1, r2 to the three output shares, and then r1, r2, r0. In this way, the output shares are randomized, and their combination still gives the original value x; the same goes for the second output. For the addition gadget, we use almost the same idea for each of the inputs x and y, but we change the order in which the random values are added to the outputs: as you can see, the column of randoms r1, r2, r0 does not directly follow the column r0, r1, r2 as in the copy gadget. This mixes up the random elements so that intermediate variables become more independent from one another. Finally, the multiplication gadget starts by refreshing the inputs x and y using the exact same principle as the above gadgets, and then performs the products of shares to compute the final result. The combination of the share products is done while adding random values between each combination, again to break dependencies between intermediate variables. Feeding these gadgets to our verification tool VRAPS, we verify that they are random probing expandable with parameter t = 1, a failure function ε whose smallest exponent of p is 3/2, and a tolerated leakage probability of about 2^-8. I will use this construction as an example to reason about the complexity of the expansion strategy.
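The three-share copy gadget just described can be sketched over GF(2) as follows (my own reconstruction from the description above, not the paper's code): each random appears exactly twice per output, so the randoms cancel and each output recombines to x.

```python
import secrets

def xor_sum(shares):
    """Recombine boolean shares by XOR."""
    acc = 0
    for s in shares:
        acc ^= s
    return acc

def copy_gadget(x_shares):
    """Sketch of the 3-share copy gadget: each output uses three fresh
    randoms added twice in circular order, v_i = x_i ^ r_i ^ r_{(i+1) % 3},
    and similarly for w with independent randoms."""
    n = len(x_shares)
    r = [secrets.randbits(1) for _ in range(n)]
    s = [secrets.randbits(1) for _ in range(n)]
    v = [x_shares[i] ^ r[i] ^ r[(i + 1) % n] for i in range(n)]
    w = [x_shares[i] ^ s[i] ^ s[(i + 1) % n] for i in range(n)]
    return v, w
```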
Now, to determine the complexity of the expansion strategy, we use linear algebra, representing each circuit as a vector counting its gates for each basic operation: addition, copy, multiplication, and random. Given the gadgets used for the strategy, we construct a matrix in which each column is the gate-count vector of the corresponding gadget; the last column corresponds to the random gate, which only generates n random values, so for three-share gadgets it has a single 3 in the last position. Using the eigendecomposition of this matrix, we can see that when expanding a circuit, the gate vector of the resulting circuit is the matrix raised to the power of the expansion level, multiplied by the vector of the original circuit. So the complexity of this operation strongly depends on the eigenvalues of the matrix. Another observation is that multiplication gates occur only in the multiplication gadget, so the third row is zero except in the third position. Using this observation, one can check that the eigenvalues of the matrix are exactly the eigenvalues of the submatrix M_AC for additions and copies, the number of multiplications in the multiplication gadget, and the number of randoms in the last column. The complexity of the compiled circuit is then expressed in terms of the maximum of these eigenvalues raised to the power of the expansion level k. Now, take a security parameter κ and a failure function ε of amplification order d, which is the smallest exponent of p appearing in the function. To achieve the security level, we need the expanded failure probability to be at most 2 to the power of minus the security parameter. We can then express the complexity as κ^e, where e is a function of the maximum eigenvalue and the amplification order.
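The mechanics of this complexity argument can be sketched in a few lines. The gate-count evolution is just repeated matrix-vector multiplication, and the exponent e follows from two facts: the gate count grows like (max eigenvalue)^k, and reaching failure probability 2^-κ takes about log_d(κ) levels, giving e = log(max eigenvalue)/log(d). The matrix entries below are hypothetical placeholders, not the paper's exact gadget sizes, except for the structural point made in the talk that multiplications (third row) appear only in the multiplication gadget.

```python
import math

def mat_vec(M, v):
    """Multiply a square matrix (list of rows) by a vector."""
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

def expanded_gate_counts(M, counts, k):
    """After k expansion levels, the (add, copy, mult, random) gate-count
    vector of the expanded circuit is M^k times the base circuit's vector."""
    for _ in range(k):
        counts = mat_vec(M, counts)
    return counts

def complexity_exponent(max_eigenvalue, amplification_order):
    """Complexity ~ max_eig^k with k ~ log_d(kappa), hence
    complexity ~ kappa^(log(max_eig) / log(d))."""
    return math.log(max_eigenvalue) / math.log(amplification_order)

# Columns: addition gadget, copy gadget, multiplication gadget, random gate.
# Entries are illustrative placeholders.
M = [
    [6, 4, 20, 0],   # additions
    [3, 5, 15, 0],   # copies
    [0, 0, 21, 0],   # multiplications: only in the mult gadget
    [3, 6, 11, 3],   # randoms
]
```

With the talk's numbers, complexity_exponent(21, 1.5) is about 7.5, matching the κ^7.5 stated next.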
Now, in our three-share construction, we get e = 7.5, since our maximum eigenvalue is 21 and the amplification order, as computed by our tool, is 3/2. This is how we reason about the complexity of the expansion strategy. Comparing our strategy with the expansion approach from 2018: first, what we call random probing expandability is equivalently called composable security in their work. We achieve this property by constructing gadgets that satisfy the necessary conditions, while their work is based on secure multi-party computation protocols. For the instantiation, we construct three-share gadgets that we show to be random probing expandable, while they instantiate their strategy with an already existing protocol due to Maurer from 2006. Complexity-wise, we achieve a complexity of κ^7.5, and translating their analysis into our setting, they achieve a complexity of κ^7.87. While these two complexities are close, in terms of tolerated leakage probability we can tolerate up to 2^-8 while they tolerate p = 2^-25; so for an almost equal complexity, we can tolerate a much higher leakage than theirs. To conclude, we provide a public SageMath implementation of our verification tool for users to test on small circuits. We introduced new notions of composition and expansion for achieving arbitrary global random probing security levels, and we instantiated these properties with a concrete construction that, as shown using our tool, tolerates a certain leakage probability with a complexity polynomial in the security parameter.
And last but not least, we also provide a full implementation of the expansion strategy that operates on small circuits, as well as an implementation of the AES algorithm in C that uses the expanded gadgets, as a concrete example of the expansion procedure. For future work, we aim to look further into trade-offs between complexity and tolerated leakage probability, and into generic constructions that satisfy the introduced properties for any number of shares rather than a fixed one. This is the end of my presentation, and I would like to thank you all for your attention.