So, thank you again. Hi again. In this talk I will show you how to improve the efficiency of private circuits by recycling some of the randomness inside them. I thought it would be fitting to start with a really concrete example of recycling, which is the recycling of some of my previous slides. So, we are again in the context of side-channel attacks, and we are again in the context of the t-probing model. The t-probing model, for the ones who were not here before, means that the adversary can access up to t values in the circuit. The required property is the existence of a simulator, which can simulate the adversary's view without accessing the circuit. An important countermeasure against such attacks is masking, which consists in randomizing the circuit by splitting each value into n random shares. During this talk I will only consider Boolean masking, where the encoding and the decoding are represented by the blocks shown here. So, private circuits have an encoder and a decoder at the two ends of the execution, which split the inputs and recombine the outputs of the circuit. Internally we have some linear gadgets, which operate component-wise and don't use randomness, and some non-linear gadgets, like the multiplication, which make a really heavy use of randomness. Current multiplication schemes can use O(n^2) random bits, where n is the number of shares. So generating randomness is one of the most costly tasks in the execution of a masked circuit, and indeed a lot of research has been done recently in order to reduce the amount of randomness needed. But all this research so far focused on reducing the randomness needed by one single multiplication scheme; of course, we still need fresh randomness for each execution of a different multiplication gadget.
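To make this concrete, here is a minimal Python sketch of Boolean masking together with the classical ISW-style multiplication (a standard scheme, not the one presented in this talk), showing where the O(n^2) random bits come from: one fresh bit per pair of share indices.

```python
import secrets
from functools import reduce
from operator import xor

def encode(x, n):
    # n-1 uniformly random shares; the last share fixes the XOR to x
    s = [secrets.randbits(1) for _ in range(n - 1)]
    s.append(reduce(xor, s, x))
    return s

def decode(s):
    return reduce(xor, s)

def isw_mult(a, b):
    # ISW-style multiplication: n(n-1)/2 fresh random bits, i.e. O(n^2)
    n = len(a)
    c = [a[i] & b[i] for i in range(n)]
    rand_used = 0
    for i in range(n):
        for j in range(i + 1, n):
            r = secrets.randbits(1)
            rand_used += 1
            c[i] ^= r
            c[j] ^= r ^ (a[i] & b[j]) ^ (a[j] & b[i])
    return c, rand_used

c, used = isw_mult(encode(1, 4), encode(1, 4))
assert decode(c) == 1 and used == 6   # 4*3/2 fresh random bits
```

The cross terms a[i]&b[j] are blinded pairwise, which is exactly why the randomness cost grows quadratically in the number of shares.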
There are also some theoretical lower bounds on the amount of randomness needed for t-privacy in a multiplication scheme. So we tried to open up a new research direction, which is to reuse randomness between different gadgets. In this talk I will start with a new security definition, which we call t-SCR, t-security with common randomness. I will give you some examples and a composability result about this property. Then I will show you a new multiplication scheme which satisfies this property, an application to the AES S-box, and a particular case for order one. But first, let me give you more details about the t-probing model. In particular, two are the main properties that a gadget should satisfy. The first one is t-NI, t-non-interference. In the t-NI property, the adversary can observe up to t probes, so t wires, of a gadget, and in order to show that a gadget is t-NI, we have to show the existence of a simulator which simulates the adversary's view with only up to t shares of each input. The drawback of this definition is that it doesn't give any guarantee on the secure composition of such gadgets: in some cases the composition can be secure, but in some cases it cannot. A stronger definition is t-SNI, t-strong non-interference, which actually guarantees secure composability between t-SNI gadgets; this means that the output of a t-SNI gadget can be the input of a second t-SNI gadget. The property that we require is the following: we have t1 probes on the internal wires and t2 probes on the output wires, and we have to guarantee that the simulation of all the probes, both the internal ones and the ones on the output wires, can be done by using at most t1 of the input shares. This gives a kind of independence between the number of output probes and the number of input shares needed.
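The simulator requirement can be checked exhaustively at first order on a toy gadget. The sketch below (my own illustration, not the talk's formal proof technique) enumerates every single probed wire of a 2-share refresh and verifies that its distribution is the same whatever the secret, which is what makes the probe simulatable without access to the circuit.

```python
from itertools import product
from collections import Counter

def refresh_wires(x1, x2, r):
    # every wire of a toy 2-share refresh y = (x1^r, x2^r)
    return [x1, x2, r, x1 ^ r, x2 ^ r]

def wire_dists(secret):
    # distribution of each wire over the encoding randomness x1
    # and the fresh bit r, for a fixed secret
    dists = [Counter() for _ in range(5)]
    for x1, r in product((0, 1), repeat=2):
        x2 = x1 ^ secret          # (x1, x2) is an encoding of the secret
        for i, w in enumerate(refresh_wires(x1, x2, r)):
            dists[i][w] += 1
    return dists

# every single probe has a secret-independent distribution,
# so a simulator can answer it with a coin flip
assert wire_dists(0) == wire_dists(1)
```

This brute force only scales to toy sizes and only covers single probes (first order); proving t-NI or t-SNI in general needs the simulator arguments the talk refers to.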
And that's why the outputs can then be used as inputs of the next gadget. Our security definition is a bit different from the previous ones, because it actually involves a number of gadgets: we will impose a kind of global simulation requirement on a set of gadgets. Here I have an example for just two gadgets which internally use exactly the same randomness. So let's consider two gadgets which internally use the same randomness R, and t probes, where t1 are on the first gadget and t2 are on the second gadget. Saying that this set of gadgets is t-SCR means requiring the existence of a simulator which can simulate the adversary's view on G1 by using at most t1 shares of A and B, and the view on G2 by using at most t2 shares of A' and B'. Now, a composition result. We can group the gadgets into some blocks, or regions, such that the gadgets inside a block share the same randomness; here in the picture, whenever I use the same color, it means that internally the gadgets use the same vector of randomness. In this case, we can claim that the entire blocks are t-SCR with respect to each other. This means that we can take a circuit C, where each of the gadgets has fresh randomness (that is what happens now), and divide it into some blocks that internally reuse the same randomness, and this will still be secure, provided these blocks are t-SCR and also t-SNI, because we also want them to be composable. So far it was pretty easy, but we need to see whether a multiplication scheme and a refresh scheme with this property actually exist. I'll show you now how we designed our multiplication scheme and which properties we think are needed in order to guarantee t-SCR. The first property is t-th order non-completeness without fresh randomness. We saw this concept before, from threshold implementations; here it is slightly different.
We will require that any combination of up to t of the output values, without considering the randomness inside, is independent of at least one of the input shares. The intuition behind this requirement is that the worst case, which is actually the best case for the attacker, is to target only two gadgets, because in this case he concentrates all his power on them. He could spread his probes over more blocks with common randomness, but his best case is to put half of them on one gadget and half on the second one. And even if he can cancel out all the randomness inside, since we have this t-th order non-completeness, he still cannot recover the entire secret. Actually, this non-completeness needs a small strengthening when we go to odd orders t, because in this case the adversary can have roughly t/2 probes on one gadget and t/2 plus one on the other, and we need that this extra probe still doesn't break the non-completeness, in a sense. The second property that we want to achieve is t-SNI: we want our schemes to be composable. This is reached by placing the randomness in a strategic way. One of these ways is to ensure that each of the output shares contains t random bits (here we have an example for t equal to two, where each output share contains the random bits r1 and r2), and each of these random bits appears a second time on a different output share; here we have for example r1 again on c3 and r2 on c2. This trick is useful because in this way we can simulate each of the output shares as a uniformly random value. Lastly, we require independence of the inputs. This means that we want the inputs of the two gadgets, of the multiple gadgets sharing the same randomness, to be mutually independent encodings. In this way, when we probe some wires, the information that we get about the input shares of one gadget is independent of the one that we get from the other gadget. Here we have an example for t equal to four.
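The placement trick can be illustrated with a small sketch. The assignment below is a hypothetical placement for t = 2 and n = 3 (the talk's actual scheme may place the bits differently): each output share carries two random bits, each random bit appears on exactly two shares, so every share can be simulated as uniform while all randomness cancels in the XOR of the outputs, preserving correctness.

```python
import secrets
from functools import reduce
from operator import xor

# p: partial products already compressed to n = 3 shares
p = [secrets.randbits(1) for _ in range(3)]

r = [secrets.randbits(1) for _ in range(3)]   # r1, r2, r3
c = [p[0] ^ r[0] ^ r[1],    # each share blinded by t = 2 random bits
     p[1] ^ r[1] ^ r[2],    # ... and each random bit reused exactly
     p[2] ^ r[2] ^ r[0]]    # ... once on a different share

# correctness: every random bit appears twice, so it cancels
assert reduce(xor, c) == reduce(xor, p)
```

Because no random bit appears more than twice, summing any t = 2 of the shares still leaves at least one blinding bit alive, which is what the simulation argument exploits.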
I know it's a lot of letters and a lot of numbers, but I really wanted to show this case because it's really illustrative. Actually, guaranteeing non-completeness without randomness is not such a trivial task. Up to order three, we can achieve it with t+1 shares, but from order four on, we have to increase the number of shares; for example, for order four we need up to seven shares. Now I'll show you briefly why the property that I showed before works. So let's consider two gadgets, with two probes on the first one and two probes on the second one. For example, let's take an adversary who observes c1 and c3 on the first one and c1', c3' on the second one. These are the probed values of the gadgets, and you can see that they contain exactly the same random values. So a smart thing to do now for the adversary is to sum up the probes that he has: these from the first gadget and these from the second gadget. In this way he now has a view completely independent of any randomness; he doesn't have any randomness anymore hiding the secret. But thanks to the non-completeness, we can actually simulate this view with at most n-1 of the shares, so he cannot recover the secret. An interesting but particular case is the one of parallel multiplications. Let's consider for example a circuit which is only composed of parallel multiplications with independent inputs. There we can reuse the randomness everywhere, so with this method we can actually mask the entire circuit with only a fixed amount of randomness, independently of the size of the circuit. But this is of course a really particular case; in particular, it is not so common to have multiplication gadgets with independent inputs. So what happens if two multiplications don't have independent inputs?
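The cancellation attack and why non-completeness defeats it can be sketched in a few lines. This is a toy model of my own (the probed wires are hypothetical, not the actual gadget's): two gadgets share the random bits r1, r2, the adversary XORs his four probes, the randomness vanishes, yet the resulting expression only touches shares 0 and 2 of each input, so one share of each secret stays out of reach.

```python
import secrets

def gadget_probes(x, r1, r2):
    # hypothetical probed wires c1, c3 of a 3-share gadget
    return x[0] ^ r1, x[2] ^ r2

r1, r2 = secrets.randbits(1), secrets.randbits(1)
a = [secrets.randbits(1) for _ in range(3)]   # shares of secret a
b = [secrets.randbits(1) for _ in range(3)]   # independent shares of secret b

c1, c3 = gadget_probes(a, r1, r2)             # probes on gadget 1
d1, d3 = gadget_probes(b, r1, r2)             # probes on gadget 2, same randomness

view = c1 ^ c3 ^ d1 ^ d3
# r1 and r2 each appear twice in the sum, so they cancel; what is left
# involves only 2 of the 3 shares of each input, hence is simulatable
assert view == a[0] ^ a[2] ^ b[0] ^ b[2]
```

This is exactly the non-completeness guarantee: even a randomness-free combination of the probes never depends on a full sharing.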
Actually, we overlooked this property a bit in some of our previous works: in that paper we thought it was sufficient for the inputs to be outputs of a t-SCR gadget. But we figured out that this is actually not sufficient, so we need to create independence. And unfortunately the only way we found (but probably there are others) is to add some randomness. With this trick we have a gadget creating independence. Here you see an example for t equal to 2, which actually uses less randomness than a normal refresh: it uses n-1 random bits for n shares. But this is a dangerous gadget, because it is only t-NI, so we cannot always compose it; we have to be really careful when we use it. What we obtain is that, while before the outputs of a multiplication scheme would not be independent, after this gadget they are independent. So we can use these outputs as inputs of a multiplication, and in this way the inputs will be independent. Let's see now how to build a refresh scheme which is t-SCR. It's much easier than for a multiplication, because as we saw before, the refresh operates component-wise: each output depends on only one input share. So the t-th order non-completeness is really easily satisfied, and for the t-SNI we can just take the same distribution of the randomness as in the multiplication scheme, and we will still achieve t-SNI. An example for t equal to 4 is this one. Probably you don't remember exactly these numbers, but they are exactly the same random values as before, in the same positions as before. And it requires t times n, so O(tn), random elements. Let's see now how we can apply this to the AES S-box. We chose to study this case because we have around 200 of these S-boxes in AES: around 20 parallel computations per each of the 10 rounds.
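A standard way to get a refresh with n-1 random bits is the simple additive refresh sketched below; it matches the randomness count mentioned for the independence gadget, though the talk's actual construction may differ. As the talk warns, this kind of gadget is only t-NI, not t-SNI, so it must be composed with care.

```python
import secrets
from functools import reduce
from operator import xor

def simple_refresh(x):
    # independence-creating refresh sketch: n-1 fresh random bits,
    # the last share absorbs all of them, so the encoding is preserved
    n = len(x)
    r = [secrets.randbits(1) for _ in range(n - 1)]
    y = [x[i] ^ r[i] for i in range(n - 1)]
    y.append(x[n - 1] ^ reduce(xor, r))
    return y

x = [1, 0, 1]
assert reduce(xor, simple_refresh(x)) == reduce(xor, x)  # same encoded value
```

After this gadget the output sharing is a fresh, independent encoding of the same value, which is what the next multiplication needs as input.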
We also have to remember that all the outputs of one round will be inputs of the next round. And the idea is to use, for all these 200 blocks, always the same randomness. So let's see, step by step, what we can do with the tools that we have so far. First of all, we have to check the dependencies, in order to see whether we can reuse the randomness or not. And I remind you once more that I will use the same color for gadgets using the same randomness. For the refresh, we saw it was really easy: we don't need independence there, because each output depends on just one input share, so we can easily reuse randomness here. For the first multiplications, the inputs are independent, so we can reuse randomness between these multiplications. But now, by reusing it, we make all the following wires dependent between these blocks. So now we need to use our trick, placing the independence gadget, and to check that this is actually possible, that it is actually composable. And in this case it is; now we can reuse randomness among these multiplications too. But we still have a problem, because here we again want independent outputs, since these will be the inputs of the next round. For the first round the inputs here are naturally independent, but for the second one we need to create independence. And this time we cannot use our trick, because if we put the independence gadget here, we would not have composable gadgets anymore. So unfortunately the only way to obtain this is to use totally fresh randomness here, that is, multiplications that don't share randomness anymore; and of course in that case we don't need the independence gadget anymore. But in this way we obtain blocks that are t-SCR, and t-SNI so composable, and secure with common randomness. And we can use exactly the same randomness for all of these 200 blocks that we have.
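The resulting randomness schedule can be sketched structurally. In the toy below (the fold_in gadget is a placeholder of my own, not the actual refresh or multiplication) one shared vector is drawn once and reused by every one of the 200 blocks, while the layer that must produce independent outputs draws fresh bits per block.

```python
import secrets
from functools import reduce
from operator import xor

def fold_in(shares, rand):
    # placeholder gadget: XOR a randomness vector into the sharing;
    # the last share absorbs everything, so the encoding is preserved
    out = [s ^ r for s, r in zip(shares, rand)]
    out[-1] ^= reduce(xor, rand)
    return out

n = 3
shared = [secrets.randbits(1) for _ in range(n)]   # drawn once, reused by all blocks
blocks = [[secrets.randbits(1) for _ in range(n)] for _ in range(200)]

out = []
for state in blocks:
    state = fold_in(state, shared)                   # common-randomness layer
    fresh = [secrets.randbits(1) for _ in range(n)]  # per-block fresh layer
    state = fold_in(state, fresh)
    out.append(state)

# every block still encodes its original value
assert all(reduce(xor, o) == reduce(xor, b) for o, b in zip(out, blocks))
```

The point of the schedule is the cost model: the shared layer is paid once for all 200 blocks, and only the independence-producing layer still costs fresh randomness per block.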
Finally, a special case for first order. I don't have much time to show you the algorithms for this one as well, so I will only give the idea, which is to use, for first order, so in the case where only one observation is possible, just two random bits, alternately. So we can mask any circuit, of any size, with only two random bits. The idea is to modify all the algorithms, even the linear ones, the refresh and the multiplication, in such a way that every time they inject only one fresh random bit and cancel out the previous randomness. In this way all the values on the wires of the circuit will depend on only one random bit, and they always have a fixed form. This is an example: these orange values depend on R1 and these green ones on R2, and we use R1 and R2 alternately, and the adversary still learns nothing with one probe. Here is some performance evaluation. On the left column we have the costs, in calls to the random generator, without reusing common randomness, and these are the ones reusing common randomness; and this is n, so this is how the number of shares increases with the order. We can see that we have pretty much an increase in O(t^2), and at every order up to the order 11 that we show here, we always have a lower cost in calls to the random generator. So we have this improvement, but of course there is a trade-off, because sometimes we also have more computations. In conclusion, I showed that reusing randomness among gadgets can be possible under certain conditions, and this can open up a new research direction in order to mitigate one of the main drawbacks of masking schemes. And still a lot of work can be done in this direction: for example, finding a more efficient t-SCR multiplication scheme, or different techniques in order to produce independence. Thank you for your attention.
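The first-order mask-switching step can be sketched as follows (an illustration of the alternation idea, not the talk's actual algorithms): the new bit is XORed in before the old one is removed, so no intermediate wire ever carries the raw value, and every wire depends on exactly one of the two bits.

```python
import secrets

def switch_mask(w, r_old, r_new):
    # switch a wire from mask r_old to mask r_new at first order
    w ^= r_new   # now masked by both bits: still safe against one probe
    w ^= r_old   # old mask cancelled: masked by r_new only
    return w

r1, r2 = secrets.randbits(1), secrets.randbits(1)
secret = 1
w = secret ^ r1              # "orange" wires: masked by r1
w = switch_mask(w, r1, r2)   # "green" wires: masked by r2
assert w == secret ^ r2
```

Alternating r1 and r2 this way lets the whole circuit run on two random bits, at the cost of the extra cancellation operations mentioned in the performance discussion.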