Another model, called the probing model, which Sebastian talked about, was introduced in 2003 by Ishai, Sahai, and Wagner. And luckily for us, these two models are really related. So I'm going to show you the probing model. It's very simple, and very similar to the noisy leakage model, but there you get a much stronger attacker, who actually probes values. So instead of getting noisy information about what's going on in the card, he gets the true value that is being manipulated: pure information. So it's much simpler to apprehend. And luckily for us, there was a work two years ago by Duc, Dziembowski, and Faust that showed that if you have an implementation that is secure in this model, then it's also secure in the noisy leakage model for a certain amount of noise. So if you prove something in this model, which is very convenient to work in, you are also saying something relevant in the other model, which is more practical. That being said, we are going to focus on the probing model in this work. So the key idea in the probing model, when you want to prevent an attacker from getting information about your sensitive value, is that you mask your sensitive value. So to do so, you perform a masking: for security against an adversary that makes d observations, you split your sensitive data into d+1 random shares. That's good for protecting your sensitive value x, but as a cryptographer you also want to compute on this value x. You want to compute a function f(x), and you want this operation to remain secure. So you want d-privacy of this computation: you want to compute f(x) from the shares in such a way that no adversary can, once again, get information about what's going on.
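To make the masking step concrete, here is a minimal Python sketch of Boolean sharing as just described (the function names are mine, not the paper's): a secret bit is split into d+1 shares whose XOR equals the secret, so any d of them are uniform and independent of it.

```python
import secrets

def share(x, d):
    """Split a secret bit x into d+1 Boolean shares:
    d uniformly random bits, plus one final share chosen so that
    the XOR of all d+1 shares equals x."""
    shares = [secrets.randbits(1) for _ in range(d)]
    last = x
    for s in shares:
        last ^= s
    return shares + [last]

def unshare(shares):
    """Recombine: XOR all the shares back together."""
    x = 0
    for s in shares:
        x ^= s
    return x
```

An adversary probing at most d of these wires sees only uniform, independent bits; only the full set of d+1 shares determines x.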
Formally, what you want is that for two different inputs x and y, the distributions of the two sets of observations made by an attacker are the same. So how is this achieved in the state of the art? Through private circuits. Private circuits are just circuits that implement certain functions from one set to another. The idea is that you take your input x and put it into an encoder, a small function, a small circuit if you want, together with some randomness, and you get an output, which you plug into a big circuit, which is also fed with randomness. Then you put the result of that into a decoder, which gives you the evaluation of your function f at this point. What you ask of the private circuit is, obviously, to be correct, that is, for any randomness you want to get the correct evaluation of your function; and you want it to be d-private, of course, it's in the name. So in this paper we looked at a particular function, and at the private circuits that implement it: simply the AND of two bits. Why is that? Because the XOR is very, very well studied in the literature, and the AND is the next step. As you may know, every Boolean function can be expressed with XORs and ANDs. So if we know how to construct private circuits for XOR and private circuits for AND, then we have a good shot at constructing private circuits for bigger Boolean functions. In the state of the art, all private circuits that compute this function are of this form: the encoder is simply a Boolean sharing of a and a Boolean sharing of b; then you compute all the cross products between the vector of shares of a and the vector of shares of b; and then you plug these cross products into a circuit whose particularity is that it consists only of XOR gates.
So you're only XORing your cross products with some randomness, and once you have done that, you have the outputs of the circuit, and you just have to XOR them to get the product of a and b. So we looked at this circuit and at what's going on. You have these inputs, these outputs, and also a lot of randomness, and the question that came to our minds rather quickly was: how much randomness is needed to achieve the security of such a scheme? This is a very natural question, because in cryptography you use randomness everywhere. You use it in your keys, you use it in your RSA prime factors, you use it when you want to sign something, and you ask a lot of it: you ask it to be uniformly distributed, you ask it to be independent. So you are dreaming a bit about perfect randomness, but where does it come from? Where does randomness come from? In the real world, you want to capture physical unpredictability. What is that? For example, you want to capture the toggling of gates, or maybe quantum phenomena that you cannot predict. In practice this means you need special hardware that can do that, which is costly; the accumulation of entropy can be slow; and you can even get biased or uneven distributions. Sometimes this means you even have to post-process your randomness, for example by running it through an AES afterwards, which is even more costly and even slower. My point here is that randomness should be considered a resource, like space and time. So with that in mind, we have the idea that randomness is costly, and we still want a private circuit. How do we reconcile these two goals?
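The kind of circuit described above, cross products of the shares XORed with fresh randomness, can be sketched in Python as the classic ISW multiplication gadget (a common textbook presentation; the helper names are mine). It consumes d(d+1)/2 random bits for d+1 shares.

```python
import secrets

def isw_and(a_shares, b_shares):
    """ISW-style multiplication: given (d+1)-sharings of bits a and b,
    return a (d+1)-sharing of a AND b.

    Uses one fresh random bit r[i][j] per pair i < j, so
    d*(d+1)/2 random bits in total."""
    n = len(a_shares)
    r = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            r[i][j] = secrets.randbits(1)
            # The "reflected" term: r[j][i] = (r[i][j] ^ a_i*b_j) ^ a_j*b_i,
            # computed in this bracketing so each intermediate stays masked.
            r[j][i] = (r[i][j] ^ (a_shares[i] & b_shares[j])) ^ \
                      (a_shares[j] & b_shares[i])
    # Output share i: the diagonal cross product XORed with its row of r.
    c = []
    for i in range(n):
        ci = a_shares[i] & b_shares[i]
        for j in range(n):
            if j != i:
                ci ^= r[i][j]
        c.append(ci)
    return c
```

XORing all output shares cancels every r[i][j] pair and leaves the XOR over all cross products a_i·b_j, which equals a·b.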
So we looked at this question of how much randomness is needed, and we looked into the state of the art. Surprisingly, the best construction we could find was in the seminal paper of Ishai, Sahai, and Wagner, where they introduced the model. They looked at it and proposed a construction that uses about d²/2 random bits. After that, there have been improvements to the schemes, of course, but in time or in space, never in randomness complexity. So in this work we tried to tackle this issue. We asked ourselves what is needed to compute such circuits, and what we can really achieve in practice and in theory. These are the main contributions of the paper; I won't be able to talk about all of them. The first one is that we looked at our problem and characterized it in terms of linear algebra: we proved that the problem of d-privacy can be seen as a linear algebra problem. Then we proposed an upper bound which is quasi-linear in the order of security, and we also proved a lower bound which is linear in the order of security. Then, for very practical d, we proposed some constructions: private circuits that are actually secure and d-private for small d. So we actually reach our lower bound for small d, with implementations that can be used in real life. Then we have a generic construction for larger d, if you want to go higher, and this halves the ISW randomness cost, that is, the cost of randomness in the state of the art. We also have some results on the composition of these circuits, which means that you can plug together several circuits computing the AND to make them compute a bigger function, like maybe an S-box or a whole AES. And finally, we proposed an automatic tool that looks at constructions and tries to find flaws in them.
So I'm going to sweep through the main propositions and try to give you an idea of the techniques involved. The first observation you can make when you see this circuit is that any value being manipulated here, any intermediate value, any probe an attacker can make, has this form, obviously because we are only performing XORs. Any probed value is a XOR of some of the cross products and of certain random bits. And you can write it in matrix form: it looks like a vector a, times a matrix M, times a vector b, where the matrix M just tells you "I'm picking this share and this share and this share"; XORed with a scalar product of a selection vector s with the vector of randoms r, where s tells you "I'm picking this random, this random, and this random". So any intermediate value can be written like that, and trivially any sum of such values can be written like that too, because you are only XORing them. This is the observation on which we based our algebraic characterization. The characterization is based on a condition we defined, called Condition 1, and we showed that this condition is very tightly related to the privacy of the circuit. The condition reads as follows: a set of probes satisfies the condition if and only if, when you sum them, the randoms disappear, so you are left with a times M times b and no more randoms, and the all-one vector lies in the row space or the column space of your matrix M. And what we were able to show with this condition is an equivalence between the d-privacy of our circuit C and the nonexistence of a set of at most d probes that satisfies it. In other words, if you have a set of probes that satisfies this condition, you have an attack on your circuit.
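Condition 1 can be checked mechanically with GF(2) linear algebra. Here is a small Python sketch (helper names are mine) of the "all-one vector in the row or column space of M" part, via Gaussian elimination over GF(2):

```python
def solvable_gf2(M, target):
    """Return True iff M x = target has a solution over GF(2),
    i.e. iff `target` lies in the column space of M.
    M is a list of 0/1 rows; target a 0/1 list of length len(M)."""
    rows = [row[:] + [t] for row, t in zip(M, target)]  # augmented matrix
    ncols = len(M[0])
    pivot = 0
    for col in range(ncols):
        # Find a pivot row with a 1 in this column.
        for r in range(pivot, len(rows)):
            if rows[r][col]:
                rows[pivot], rows[r] = rows[r], rows[pivot]
                break
        else:
            continue
        # Eliminate this column from every other row (XOR = addition mod 2).
        for r in range(len(rows)):
            if r != pivot and rows[r][col]:
                rows[r] = [x ^ y for x, y in zip(rows[r], rows[pivot])]
        pivot += 1
    # Inconsistent iff some row reduced to [0 ... 0 | 1].
    return all(any(row[:-1]) or not row[-1] for row in rows)

def condition_one(M):
    """The linear-algebra part of Condition 1 (randomness already
    cancelled): the all-one vector must lie in the row space or the
    column space of M."""
    ones_col = [1] * len(M)       # target for M x = 1 (column space)
    ones_row = [1] * len(M[0])    # target for M^T y = 1 (row space)
    Mt = [list(col) for col in zip(*M)]
    return solvable_gf2(M, ones_col) or solvable_gf2(Mt, ones_row)
```

For instance, a matrix whose columns cannot combine to the all-one vector (and likewise for its rows) fails the condition, so no attack is extracted from that probe set.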
And conversely, all the attacks are sets of probes of this form that satisfy Condition 1. So let me give you a bit of insight into the sketch of proof, at least in one direction. Assume you have a set of probes that satisfies this condition: you got rid of your randomness, and the all-one vector is in the column space of M. First of all, since the randomness is gone, what you want to know is whether you can retrieve some information about a or about b. The first observation, since the all-one vector is in the column space of M, is that there exists a particular vector b' such that M times b' is the all-one vector, obviously. Now look at what happens in your sum of probes. If you compute a times M times b' when M times b' is the all-one vector, you retrieve the XOR of the shares of a, that is, the secret value a. Otherwise, when M times b' is not the all-one vector, you retrieve some randomness. That's what we show here: the probability that your sum of probes equals your secret value is one half when M times b' is not the all-one vector, and one when it is the all-one vector. So this shows a bias in the distribution of this variable. In particular, it shows that the probability of your sum of probes being equal to a is higher than the probability of it being equal to 1 XOR a. A bias means that your secret is not private, so you have an attack. The other direction is much more complicated; we have it in the appendix of our paper, and I won't be able to go through it right now. So we have this algebraic characterization, and next we looked at the upper bound. What's in the state of the art? The randomness complexity of ISW, which is quadratic in d. But we don't really need a quadratic complexity.
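This bias argument can be checked empirically. The sketch below (my own illustrative setup, not the paper's code) applies a 0/1 coefficient vector to a fresh Boolean sharing of a secret bit: the all-one vector, corresponding to the case M b' = all-ones, recovers the secret with probability 1, while any proper non-empty selection agrees with the secret only about half the time.

```python
import secrets

def xor_bits(bits):
    """XOR of an iterable of bits."""
    acc = 0
    for b in bits:
        acc ^= b
    return acc

def empirical_probe_rate(coeffs, x, trials=4000):
    """Estimate Pr[probe sum == x], where the probe sum XORs the
    shares selected by the 0/1 vector `coeffs` over a fresh
    len(coeffs)-share Boolean sharing of the secret bit x."""
    d = len(coeffs) - 1
    hits = 0
    for _ in range(trials):
        shares = [secrets.randbits(1) for _ in range(d)]
        shares.append(x ^ xor_bits(shares))          # XOR of shares == x
        probe = xor_bits(s for s, c in zip(shares, coeffs) if c)
        hits += (probe == x)
    return hits / trials
```

Selecting all shares gives rate 1.0 (a full break); selecting a strict subset gives a rate near 0.5, i.e. no information.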
Actually, what we were able to show is that there exists a d-private circuit for multiplication that requires only O(d log d) random bits. Once again, I'm going to give you a sketch of proof. Sadly, it is a probabilistic argument, so we were not able to actually construct such circuits. The idea is that you take a class of algorithms defined as follows: you take a number of random bits r, and you construct random linear combinations of these bits, with coefficients alpha in {0, 1}; the class consists of the algorithms obtained for the different choices of alpha. And the construction is as follows: you take all your cross products, which I'm going to color to make it simpler, and in between you insert your random linear combinations, like so. Of course, to ensure correctness, you want the last combination to be equal to the sum of the others, so that all the randoms cancel. So you have this construction, and what we were able to show is that for r large enough, the probability that one of these algorithms is secure is nonzero. Since this probability is nonzero, there exists an algorithm in the class that is d-private. So we have security for some algorithm, but we don't know how to construct it explicitly. Let me now talk about lower bounds. Our results there are rather intuitive, but not quite trivial to show. We proved that linear randomness is needed to achieve d-privacy: for d at least 2, you need at least d random bits, and for d at least 3, you need at least d+1 random bits. We were able to construct algorithms that actually reach these bounds up to d equals 4, I think, and we proved them secure using EasyCrypt. Speaking of which, we actually built an automatic tool for finding attacks on such constructions. Why is that?
Because when we were working on our constructions, we tried to prove them using EasyCrypt a lot of the time, and for large orders of security we couldn't get an answer, a quick answer. So what we did is build ourselves an automatic tool taking advantage of the algebraic characterization defined in our paper. This means that our tool is obviously very specific, contrary to EasyCrypt, which can deal with a much larger problem set; but being specific, it is much more efficient on our problems. We based it on our algebraic characterization, and it relies on coding theory, more particularly on information set decoding. Obviously it is not perfectly sound: it is a randomized, probabilistic approach, so it finds flaws very efficiently, but it is not able to prove the security of your algorithm. However, it is much faster than an EasyCrypt-based approach, and it was very useful during our work to discard some flawed constructions that we thought of. So here is a summary of our results. Here is the ISW, the state-of-the-art, randomness complexity, with our lower bound in orange. Then we built a new construction that halves the ISW randomness cost, which I didn't describe in this talk but which is described at great length in the paper. And we also have a construction for small d that actually reaches the lower bound we defined. So thank you for your attention. Time for a question.

Question: Thank you for your presentation. In the algebraic characterization, you don't take the inputs into account. Is there a reason for that?

Answer: Well, actually, we focused on this type of circuit to make our proof, which is why we don't take a and b as inputs in our algebraic characterization. But we can also do it with a and b; it's in the paper, actually. So if you read on, we have a much stronger algebraic characterization, which comes later, and which is actually used to prove the converse direction of this property.
All right, we have a three-minute track-switch break. Let's thank both speakers in this session.