Hi everyone, my name is Nicolas Bordes, and this talk is about joint work with Pierre Karpman called "Fast Verification of Masking Schemes in Characteristic 2". So first, a little bit of context. We want to run crypto implementations on observable devices, and more specifically we want to do secure finite field multiplication in the presence of leakage. Such operations are often used in the non-linear components of symmetric crypto, for example in S-boxes, and in such components the inputs and outputs are usually secret. So we want to protect the implementation against an attacker that can observe the device and learn information during the computation — the so-called side-channel attacks.

To do so, one possible countermeasure is masking, which splits A, B and C into shares using a secret sharing scheme; for example, we can use an additive one. Here the original value X is split into d + 1 shares, the X_i, such that the first d shares are taken uniformly at random, and the last one, X_d, is computed so that the sum of all the shares equals X. We then want to compute the operation, here the multiplication, over the shared operands to obtain a shared result, while ensuring that no information can be gained on A, B or C during the computation.

Here is a first attempt at doing so for the multiplication: we define C_k as the sum over j of all the A_k B_j. Here is the corresponding circuit for d = 1. The problem is that any single C_k actually reveals information about B: for example, if C_1 here is non-zero, then we are sure that B itself is non-zero. So intuitively this strategy is not secure, and we will now see how to formalize this in the probing security model. First, some quick definitions. What we call a gadget for a given function F is in fact a circuit that works not on the inputs of F directly, but on a sharing of the inputs.
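To make this concrete, here is a minimal Python sketch (with made-up helper names, not the speakers' code) of additive sharing over GF(2) and of the naive multiplication just described: each output share C_k = Σ_j A_k B_j collapses to A_k · B, so a single wire can leak information about the unmasked B.

```python
import random

def share(x, d):
    """Split bit x into d+1 shares whose XOR equals x (additive sharing over GF(2))."""
    shares = [random.randint(0, 1) for _ in range(d)]
    last = x
    for s in shares:
        last ^= s
    return shares + [last]

def unshare(shares):
    """Recombine a sharing: XOR all the shares together."""
    acc = 0
    for s in shares:
        acc ^= s
    return acc

def naive_mult(a_shares, b_shares):
    """Insecure first attempt: C_k = XOR_j (A_k & B_j).
    Functionally correct (the C_k XOR to A*B), but since
    C_k = A_k & (XOR_j B_j) = A_k & B, observing a single wire
    C_k = 1 proves that the secret B is 1."""
    c = []
    for a_k in a_shares:
        c_k = 0
        for b_j in b_shares:
            c_k ^= a_k & b_j
        c.append(c_k)
    return c
```

Running this at d = 1 confirms both the correctness of the recombined product and the leak: whenever some C_k equals 1, B is necessarily 1.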
And this circuit also outputs a shared version of the result. The circuit itself is described with arithmetic gates, plus some special gates providing additional uniform random values during the computation. This equality just states that the circuit actually computes the right function F on the shared variables. On such a gadget we can place probes, which are just mappings from a given wire to the value it takes during the execution of the gadget.

In 2003, Ishai, Sahai, and Wagner introduced the concept of d-privacy. d-privacy is a property of a gadget: a gadget is said to be d-private when, for every set of at most d probes on this gadget, the joint distribution of these probes is independent of the unmasked values on which the gadget is evaluated. In the same paper, they designed d-private masking schemes for any d, with complexity quadratic in the order of the masking. This complexity is both in the number of sums and products used, and in the number of additional random masks used during the computation to make sure the gadget is secure.

Here is an example of such a multiplication gadget, at order d = 1. First we have the sharing of A and the sharing of B. We take the tensor product between these two to obtain all the A_i B_j, and here we see that R_0 is an additional random value used to ensure that the circuit is d-private. As output we have C_0, C_1, which is a valid sharing of the product of A and B. Here is another multiplication gadget, this time secure at order d = 3 — it is 3-private, from Belaïd et al. in 2017. As before, we take the tensor product first, and then we use additional random masks; here there are four of them, R_0 to R_3, to secure the gadget. We see that the complexity of such circuits grows quadratically in d, the order of the masking.
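As an illustration of the d = 1 gadget just shown, here is a hedged Python sketch of an ISW-style multiplication over GF(2): one fresh random mask r (the R_0 on the slide) refreshes the cross terms so that the two output shares still XOR to A·B. This is a sketch in the spirit of the scheme, not the speakers' implementation.

```python
import random

def isw_mult_d1(a_shares, b_shares):
    """d = 1 multiplication gadget over GF(2) in the style of
    Ishai-Sahai-Wagner: inputs are 2-share sharings of A and B,
    output is a 2-share sharing of A*B using one random mask."""
    a0, a1 = a_shares
    b0, b1 = b_shares
    r = random.randint(0, 1)      # fresh uniform mask (R_0)
    c0 = (a0 & b0) ^ r            # first output share
    # Order matters for probing security: r is XORed in before
    # the cross products a0*b1 and a1*b0 are accumulated.
    c1 = (a1 & b1) ^ ((r ^ (a0 & b1)) ^ (a1 & b0))
    return [c0, c1]
```

Correctness is easy to check exhaustively: c0 XOR c1 = a0 b0 + a1 b1 + a0 b1 + a1 b0 = (a0 + a1)(b0 + b1) over GF(2).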
The main problem with d-privacy is that it is not composable: if we have two d-private circuits, their composition is not necessarily d-private, so we can't chain gadgets. In 2016, Barthe et al. introduced new composable models called non-interference and strong non-interference, which are based on the simulatability property. Simulatability is a property of a given set of probes, here P: this set of probes is said to be t-simulatable if, for a fixed input sharing of the X, the distribution induced on this set of probes by the additional random masks can be simulated perfectly with the knowledge of at most t shares of each input.

From this simulatability property, we get non-interference and strong non-interference. We say that a gadget is d-NI if and only if any set of at most d probes is d-simulatable. For strong non-interference it is a little different, because we split the probes into two sets: some probes on the internal wires of the circuit, and some probes on the output wires only. We say that a gadget is d-SNI if and only if any set of at most d probes, with d_1 probes on the internal wires and d_2 probes on the output wires, is d_1-simulatable.

What is useful about these security models is that, first, they imply d-privacy; and most importantly, under some independence hypotheses, they compose: for instance, the composition of a d-SNI gadget with another d-SNI gadget is itself d-SNI. Thanks to this composition property, we can compose small gadgets to build a bigger secure circuit. But now we need to check the security of a given gadget in these models. For example, if we go back to this gadget, we saw that it is d-private; in fact, it is also d-NI.
But to prove that this gadget is d-NI in the absence of a generic proof, we need to check, for every set of at most d probes, that the set is d-simulatable. The problem is that the number of sets of probes grows exponentially in the number of wires. So to check a given gadget or masking scheme when we don't have a generic proof, we want, first, an easy-to-check condition for the simulatability of a given set of probes — and if we want gadgets over F_2, this easy-to-check condition must be valid over F_2. Then, once we have such a condition, we want to enumerate efficiently over all the subsets of probes, and for this to be efficient we also want the set of possible probes to be as small as possible, to mitigate the exponential growth of the number of subsets. I will talk about d-NI here, but we also want to check the d-SNI property, and we can also think of extending the verification to a more hardware-oriented model called the robust probing model; we will not show that here, but it is covered in our article.

So, first, the easy-to-check condition. In 2017, Belaïd et al. introduced a condition that applies to bilinear probes. Bilinear probes are just probes that can be expressed as a sum of some A_i B_j, some A_i, some B_j, some additional random masks R_i, plus possibly a constant. In the masking schemes we will look at, all the probes are bilinear, so everything is fine. A set of probes is said to satisfy this condition if and only if there exists a linear combination of those probes that can be expressed only with some A_i B_j, some A_i, some B_j, plus some constant — so no additional random mask appears in this expression. Additionally, we want all the rows of this block matrix to be non-zero, or all the columns of this block matrix to be non-zero. Why do we want that?
We want that because no zero row in this block matrix means that the linear combination functionally depends on all d + 1 shares of A, and likewise, no zero column means that it functionally depends on all d + 1 shares of B. From this condition we can state a theorem: if a set of probes satisfies the condition, then it is not d-simulatable — and if it is not d-simulatable, then it is an attack against the d-NI property. The converse holds under a constraint: if the set of probes is not d-simulatable and the size of the finite field is strictly greater than d + 1, then we are sure that it satisfies the condition. As a corollary, if the finite field is sufficiently big, and no set of at most d probes satisfies the condition, then we are sure the gadget is d-NI. From this corollary we get a direct algorithm to check the d-NI property of a given gadget.

The problem is that if we want to check gadgets that work over F_2, this constraint is not met, and we cannot use this theorem and corollary. So we state a slightly different condition, the same as the previous one except that instead of requiring all the rows to be non-zero, we require at least l + 1 rows to be non-zero, or l + 1 columns, where l is the number of probes in our set. That is, we take a set of l probes, and if we can find a linear combination that does not depend on any additional random value R and that functionally depends on more shares than the number of probes in the set, then the set is said to satisfy this new condition. As before, we have a theorem and a corollary: if a set of l probes satisfies the condition, then it is not l-simulatable; and if a set of probes is not d-simulatable, then there exists a smaller set, included in the first one, that satisfies the condition.
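As a toy illustration of this modified condition (a simplified encoding for exposition, not the paper's actual representation), each bilinear probe can be stored as a GF(2) coefficient map over the monomials a_i b_j, a_i, b_j and r_k, and for a small set of l probes one can brute-force every non-empty linear combination, looking for one that eliminates all random masks while involving at least l + 1 shares of A or of B:

```python
def satisfies_condition(probes):
    """probes: list of dicts mapping monomials to GF(2) coefficients.
    Monomials are tuples: ('ab', i, j) for a_i*b_j, ('a', i) for a_i,
    ('b', j) for b_j, ('r', k) for the random mask r_k.
    Returns True iff some non-empty linear combination of the probes
    cancels every random mask and functionally depends on at least
    l+1 shares of A, or at least l+1 shares of B (l = #probes)."""
    l = len(probes)
    for mask in range(1, 1 << l):             # every non-empty combination
        comb = {}
        for i in range(l):
            if (mask >> i) & 1:
                for m, c in probes[i].items():
                    comb[m] = comb.get(m, 0) ^ c
        support = {m for m, c in comb.items() if c}
        if any(m[0] == 'r' for m in support):
            continue                          # a random mask survives
        a_shares = {m[1] for m in support if m[0] in ('ab', 'a')}
        b_shares = ({m[2] for m in support if m[0] == 'ab'}
                    | {m[1] for m in support if m[0] == 'b'})
        if len(a_shares) >= l + 1 or len(b_shares) >= l + 1:
            return True                       # attack: not l-simulatable
    return False
```

For instance, the single naive-gadget probe C_0 = a_0 b_0 + a_0 b_1 satisfies the condition (one probe, but both shares of B involved), while a properly masked probe a_0 b_0 + r_0 does not.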
And as a corollary, if no set of at most d probes satisfies the condition, then the gadget is d-NI. Thanks to this corollary, we can design an algorithm that goes over every set of at most d probes and checks whether it satisfies the new condition; if, at the end, no set of probes satisfies it, then we have proven the d-NI property of the gadget. That is what we do, and to do it efficiently, we rewrite the condition in terms of the weights of indicator matrices — matrices that indicate the functional dependence on the A_i B_j, the A_i, the B_j, the additional randoms R_i, and so on. We use vectorized instructions in the implementation to compute these matrices and weights very efficiently, and we use combination Gray codes to enumerate efficiently over all the subsets of probes — going from one subset to the next with the least computation possible. At peak performance on a single thread at 2.6 GHz, we are able to check around 200 million subsets per second. And because we use combination Gray codes, we can easily parallelize the computation by splitting the space of all subsets of at most d probes across threads.

As a result of our contributions, we have a new condition for d-NI and also d-SNI security over small fields — we are no longer constrained by the size of the field. We also have a new algorithm to check the d-NI and d-SNI properties that is correct over F_2, and this algorithm is implemented as a publicly available tool to check the security of gadgets. This tool improves verification performance by three orders of magnitude: for example, a d-SNI verification that took 13 days on four threads with the state-of-the-art tool took less than 10 minutes on a single thread with ours.
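The enumeration idea can be sketched with a toy "revolving door" combination Gray code (a simple recursive construction for illustration, not the paper's optimized enumeration): consecutive k-subsets differ by removing one element and adding another, so per-subset state such as the indicator matrices can be updated incrementally rather than recomputed from scratch.

```python
def revolving_door(n, k):
    """All k-element subsets of {0, ..., n-1}, ordered so that
    consecutive subsets differ by exactly one swap (remove one
    element, add another) -- a combination Gray code."""
    if k == 0:
        return [[]]
    if k == n:
        return [list(range(n))]
    head = revolving_door(n - 1, k)
    tail = [s + [n - 1] for s in reversed(revolving_door(n - 1, k - 1))]
    return head + tail
```

With each probe's dependencies stored as bitmasks, every swap then costs a couple of XORs, and the "weights" of the condition become popcounts — cheap operations that vector units handle well, which is what makes throughputs in the hundreds of millions of subsets per second achievable.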
With our tool, we were able to verify the NI and SNI properties of concrete masking schemes up to order d = 11, where they were previously verified only up to order 7 and lower. We were also able to disprove a conjecture by Barthe et al. on the security of a generic transformation of NI gadgets into SNI gadgets: in fact, the generic transformation is correct up to a given order, above which it is no longer correct. We then used our tool to design new masking schemes and verify them straight away, and thanks to that we obtained a masking scheme at order d = 7 that uses 17% fewer additional random masks. If you want to read more about it, the full version of our article is available on ePrint; that version has additional examples and figures. The implementation of our tool is publicly available on my GitHub, so you can check that out too. Thanks for listening.