So good morning. My name is Lauren. Thank you for coming to the first talk. I'm going to be presenting the paper Consolidating Security Notions in Hardware Masking. So we have a problem, because adversaries can extract secrets from our devices by looking at power consumption or electromagnetic radiation. One of the solutions that the community has come up with is to mask our implementations. But the question is: how do we know that masking provides security against side-channel analysis? For this we rely on proofs, where we make an abstract model of the adversary's powers. A very popular adversary model in this context is the probing model by Ishai, Sahai and Wagner. It basically assumes that the adversary can probe any d intermediates of the circuit, and these probes give exact information, instantaneous and stable, independently of any other intermediates in your calculation. The reason this model is so popular is that security in this model has been proven to imply security in other, more realistic models, like the bounded moment security model. And basically we assume that if our masked implementation is secure in the d-probing model, then it provides security against an adversary performing d-th order DPA. So what do we do when we're masking? Clearly we need to make sure that no combination of d intermediates reveals anything about the secrets. How do we do that? We split our secrets into at least d + 1 shares, and we have to find a way to implement our cryptographic functions such that at no point are there d intermediates that reveal anything about the secrets. This is a little bit more tricky in hardware, because we have this effect called glitches, which is a temporary change in a signal before it stabilizes at the intended value.
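As a toy illustration of that sharing step (a sketch of my own, not code from the paper): a secret is split into d + 1 Boolean shares, where d shares are drawn at random and the last one is chosen so that everything XORs back to the secret.

```python
import secrets

def share(x: int, d: int, nbits: int = 8) -> list[int]:
    """Split secret x into d + 1 Boolean (XOR) shares."""
    shares = [secrets.randbits(nbits) for _ in range(d)]
    last = x
    for s in shares:
        last ^= s  # choose the last share so all shares XOR to x
    return shares + [last]

def unshare(shares: list[int]) -> int:
    """Recombine: XOR all shares back together."""
    x = 0
    for s in shares:
        x ^= s
    return x
```

Any d of the d + 1 shares are uniformly random and jointly independent of x; only all of them together reveal it.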
So during this glitch, instead of computing the function that we want to compute, the wire is computing, let's say, a glitch function. And it's very difficult to predict what this glitch function is going to be, because it depends not only on your circuit, but also on your place and route, your platform, even the temperature of the execution environment. So it's pretty much impossible to model glitches. Instead, we're going to use an adversary model that's quite high level. You can see here our system: it consists of combinational logic clouds and sequential logic blocks. The sequential blocks are memory blocks that stop glitches from propagating, so we can assume that all the wires coming out of those blocks are stable. Now we have the d-probing model, but we assume that when the adversary probes a wire, say this wire, he could get more information because of glitches happening in this combinational logic. And to make sure we account for the worst possible glitch that could happen, we're going to assume that the adversary sees all of the wires going into the cloud. Similarly, if he probes this wire, then he also sees these wires. We call this the glitch-extended probe: it consists of all the stable wires that are required to calculate the normal probe. By assuming that the adversary has this information, we account for the worst possible glitch. And this is a very easy way of looking at things, because it's very high level. It's independent of the platform. You don't even have to know what is going on inside these combinational clouds; you only have to know which wires are going where. So what is this talk about? It's about one very small, simple formula, which says that the mutual information of something and something has to be zero.
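To make the glitch-extended probe concrete, here is a minimal sketch (my own representation, not the paper's tooling) that walks a toy netlist backwards from a probed wire to the stable wires feeding its combinational cone:

```python
def glitch_extend(wire, drivers, stable):
    """Glitch-extended probe of `wire`: the set of stable wires
    (register outputs / primary inputs) needed to compute it."""
    if wire in stable:
        return {wire}
    ext = set()
    for w in drivers[wire]:  # walk back through the combinational cloud
        ext |= glitch_extend(w, drivers, stable)
    return ext

# Toy cloud: t = a AND b, y = t XOR c, with a, b, c stable.
drivers = {"t": ["a", "b"], "y": ["t", "c"]}
stable = {"a", "b", "c"}
```

Probing y therefore hands the adversary a, b and c, the worst case any glitch inside the cloud could produce.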
So my promise for this presentation is that with this very simple formula, you can prove security in a lot of different models, whether with or without glitches, composable or not, with any type of masking. You can even use it if you have non-uniform sharings, and you can use various leakage functions. In the paper we have proofs for everything I'm going to say, but I don't want to put proofs in the presentation, so I'll refer you to the paper for those. Instead, I'm going to tell you a story. The story of this paper starts with the paper that I presented last year at CHES in Amsterdam. There I presented a multiplicative masking of AES in hardware. And like with any masking, you want to be able to prove that it is secure. This was very difficult for us, because there are a lot of verification tools in the literature, and we tried a lot of them, but we couldn't use any of them, for two reasons. The first is that we're using multiplicative masking, and a lot of the tools assume Boolean masking. The second is that we have a block that reuses a lot of randomness, which also was not very compatible with the tools at that point. As a result, we had to create a very long appendix of manual proofs. I can tell you making those was not the highlight of my PhD, and I don't think reading them is very enjoyable either. But the biggest problem with manual proofs is that they leave a lot of room for human error. I'm not saying I made an error, but we wanted some more assurance, so we wanted to use a tool. One of my co-authors had a tool, presented at FSE 2016, for flaw detection, and it works like this.
You take a software implementation of your scheme, and you make simulated traces that have one time sample for every single intermediate in your calculation. You collect these simulated traces, and then, as you would do with real power traces, you perform TVLA for leakage detection. In this case it's really flaw detection, because the traces are absolutely noiseless, so flaws cannot be hidden by noise like in the real world. It's a very effective flaw detection tool. Also, if you want to verify higher-order security, then just as with real power traces you can combine different time samples in the trace with a centered product and perform TVLA on that. The only problem with this tool was that it was made for software, and we had to verify security in the presence of glitches. So we figured, okay, let's replace all the normal probes with glitch-extended probes, make simulated traces in this way, and do the same thing, TVLA. And that works. But you cannot use this for higher orders, because we realized that you cannot combine two glitch-extended probes in a product: you lose information. And we couldn't find any compression operation that compresses two glitch-extended probes into one. So we decided that the only way to comprehensively test glitch security was to concatenate the glitch-extended probes. Say you want to verify second-order security. Then you create a different kind of simulated trace, where for every pair of intermediates in your calculation you create a time sample that consists simply of the two glitch-extended probes concatenated. You make simulated traces in that way, and then for every time sample we look at the probability distributions, and we make sure, with a chi-squared test, that the probability distribution is the same for different secrets.
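The noiseless check at the end can be sketched as an exhaustive comparison of probe distributions per secret (a toy of my own, not the FSE 2016 tool): a first-order sharing of a one-bit secret x into (r, x XOR r), where a single share has a secret-independent distribution but the concatenation of both shares does not.

```python
from collections import Counter

def probe_distribution(secret, probe, rand_bits=1):
    """Exact distribution of probe(secret, r) over all randomness r
    (exhaustive enumeration, hence noiseless)."""
    counts = Counter(probe(secret, r) for r in range(1 << rand_bits))
    total = sum(counts.values())
    return {v: c / total for v, c in counts.items()}

def secret_independent(probe, secret_bits=1, rand_bits=1):
    """True iff the probe's distribution is identical for every secret."""
    dists = [probe_distribution(s, probe, rand_bits)
             for s in range(1 << secret_bits)]
    return all(d == dists[0] for d in dists)

single_share = lambda x, r: r            # one probe: looks uniform
both_shares = lambda x, r: (r, x ^ r)    # concatenated probes: reveal x
```

With glitch-extended probes the probed values are just bigger tuples, but the test itself is unchanged.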
So essentially what we were testing is that the mutual information of the glitch-extended probes and the secret is zero. Here you have that little equation I was talking about. At this point we realized that normal probing security, without glitches, was already defined this way back in 2010: there was a definition that said, given any d wires, you want to make sure that the mutual information of those d wires and the secret is equal to zero. And the only thing we did to extend this with glitches was to replace each of the normal wires with its glitch-extended probe and make sure that the mutual information of those glitch-extended probes with the secret is equal to zero. So we have here the same equation, the same formula, used for probing security with or without glitches. Now we're going to look at different security notions in the field and integrate them into this same framework. Since we're talking about hardware masked implementations, we have to talk about threshold implementations, because they were the first provably secure masking scheme in the presence of glitches. They achieve this with two properties. The first is non-completeness, which says that any block computing an output share must be independent of at least one input share. That means that whatever glitch happens in this block, it cannot reveal anything about the input share that it is independent of. And if you also have uniformity, then not knowing this input share means you don't know anything about the secret. That's how they proved security in the presence of glitches, and how we can now again prove it with the new formulation. But this relationship only holds for first order.
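Written out, the formula in question looks roughly like this (my reconstruction of the notation; the paper's symbols may differ slightly):

```latex
\[
  \operatorname{MI}\bigl(p_1, \ldots, p_d ;\; X\bigr) = 0
\]
```

Here $X$ is the secret and the condition must hold for every choice of $d$ probes $p_1, \ldots, p_d$. Without glitches each $p_i$ is a single wire; with glitches each $p_i$ is replaced by its glitch-extended probe, i.e. the set of stable wires feeding that wire's combinational cone.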
At CRYPTO 2015 it was shown that non-completeness and uniformity are not sufficient for higher-order security, which is interesting, because still, in the last four years, you can find a lot of masking papers that talk mostly about non-completeness and uniformity. I'm going to leave the rest of that to the third talk of this session. So what can we say about these properties now? First of all, non-completeness is a very useful property. Even though it's not sufficient, and you cannot really use it to prove security of your scheme, it is necessary: we can show, also with this new formulation, that glitch-extended probing security implies non-completeness, which we kind of already knew. So it's a necessary condition for your security, and it's also easy to verify; there's a very efficient verification tool by Victor Arribas that checks non-completeness. Uniformity, on the other hand, is not only not sufficient but also not necessary. We demonstrate in this work that you can make a probing-secure sharing without having uniform sharings. Uniformity is very useful if you are trying to create a first-order masking; then it's a nice rule of thumb that when you have uniformity and non-completeness, you know you're going to have probing security. But if you're trying to prove the security of a higher-order masking scheme, then uniformity is neither sufficient nor necessary, and it's actually even more difficult to verify than this newer formulation. Another thing to keep in mind about these two properties is that they have a somewhat univariate nature, and to this day the definitions have not really been extended for multivariate security. Also, if you want to check them, you need a lot of knowledge about your circuit: you need to know which wire belongs to which share, which share belongs to which variable, et cetera. This is all information that you don't need in order to verify the mutual information formulation.
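Non-completeness is indeed cheap to check mechanically. A minimal sketch (mine, not Victor Arribas' tool): represent each combinational block by the set of input shares it touches, and verify that every combination of d blocks jointly misses at least one share.

```python
from itertools import combinations

def non_complete(block_inputs, all_shares, d=1):
    """d-th order non-completeness: any d blocks together must be
    independent of at least one input share."""
    for blocks in combinations(block_inputs, d):
        used = set().union(*(block_inputs[b] for b in blocks))
        if used >= all_shares:  # no input share is missing
            return False
    return True

# Toy 3-share AND gate: block f_i omits share index i of both inputs.
shares = {"x0", "x1", "x2", "y0", "y1", "y2"}
blocks = {"f0": {"x1", "x2", "y1", "y2"},
          "f1": {"x0", "x2", "y0", "y2"},
          "f2": {"x0", "x1", "y0", "y1"}}
```

Note that this toy sharing is first-order non-complete, but any two blocks together already see every share, which is consistent with the point that first-order constructions do not automatically give higher-order security.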
So that's probing security, but probing security is not composable, and for this reason there's another line of work on non-interference and strong non-interference, which are notions of composability. Another thing about these notions is that they can make the verification of probing security more efficient. This is done in the maskVerif tool of Gilles Barthe, Sonia Belaïd and others, which is a very efficient and nice tool. So what are these notions? They basically say that if your adversary is probing intermediates of your circuit and outputs of your circuit, you want to make sure that these probes can be simulated using a set of input shares that is bounded in size by the number of probes. Whether or not you count the output probes is what makes the difference between non-interference and strong non-interference. These notions are stronger than d-probing security. Originally they were introduced without glitches, and then last year at CHES there was the work on the robust probing model, which extended these definitions with physical defects such as glitches. Now, at that point the definitions came without a mathematical formulation, so what we wanted to do in this work is unite them with this mutual information framework. In the paper we show that checking these things means checking this formulation, which again checks the mutual information of your set of probes, this time not with a secret but with a number of shares. This maybe looks a little bit complicated, so let me give you an example. Suppose that you're only probing outputs and you want to verify SNI. That means the set S is empty, and the equation becomes that the mutual information of your probes and the entire input sharing is equal to zero.
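As a rough sketch of how that reads in the mutual information framework (my reconstruction; see the paper for the exact statement): the probes must be independent of the input sharing once some small set S of input shares is taken out.

```latex
\[
  \exists\, S \text{ with } |S| \le t:\quad
  \operatorname{MI}\bigl(\mathcal{P};\; \bar{X} \setminus S\bigr) = 0
\]
```

Here $\mathcal{P}$ is the set of probes and $\bar{X}$ the input sharing; for non-interference the bound $t$ counts all probes, for strong non-interference only the internal ones. In the output-probes-only SNI example, $t = 0$ forces $S = \emptyset$, giving $\operatorname{MI}(\mathcal{P}; \bar{X}) = 0$.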
Here you can see the subtle difference from probing security: in that case you had independence from the secret, and in this case you have independence from the input sharing, and that's exactly what gives you composability. And finally, now that we have this, it's very easy to also get these definitions with glitches, because all we have to do is replace the normal probes with glitch-extended probes. And why stop there? In the last years a lot of people have noticed that there is a gap between theory and practice. For example, last year at CHES, Thomas De Cnudde presented a work about coupling, which basically showed that the independent leakage assumption does not always hold. Also, in software masking, a lot of people have noticed that the CPU can introduce extra leakage into implementations that are considered theoretically secure. So what does this mean? It means that our theory is incomplete. Again I'll refer to the robust probing model paper from last year, which started to extend our theory by defining new kinds of probes: not only glitch-extended probes, but also extended probes for memory transitions or coupling. And the point is that we can keep using the same mutual information framework with these new extended probes, and that way we can test security in a whole bunch of new models. In the future, if we can find new definitions for probes that accurately reflect how the leakage is happening, then we can keep using the same framework and get security in those models. And the best part is, if we have tools that check this formula (for example, the maskVerif tool implicitly verifies this), we can keep using the same tools, because we only have to replace our current probes with these extended probes.
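As a quick taste of a non-glitch extension (my own toy, not from the paper): a transition-extended probe on a register reveals the old and the new value together. So a register that first holds a random share r and is then overwritten with x XOR r leaks x through the pair, even though each value alone is uniformly distributed.

```python
from collections import Counter

def transition_probe_dist(secret):
    """Distribution of the (old, new) register pair over the random
    share r, as seen by a transition-extended probe."""
    return Counter((r, secret ^ r) for r in (0, 1))

def standard_probe_dist(secret):
    """Distribution of the new value alone (a standard probe)."""
    return Counter(secret ^ r for r in (0, 1))
```

The standard probe's distribution is the same for both secrets, but the transition-extended one is not; the verification formula is untouched, only the probe definition changed.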
So that brings me back to the slide from the beginning, with all the advantages of this one simple equation. It can be used to check probing security or composable security, with or without glitches, or in other models. You can also apply a leakage function to the probes other than the identity function; you could apply a Hamming weight leakage function, for example. And then I'm going to end with a conclusion, which is basically the definition of consolidating. Consolidating is a popular word in the titles of crypto papers, but in this context that's not the only reason we chose it. There are two definitions. The first is quite obvious: to join together, or to unite. We wanted to unite all these different security concepts in the literature into one framework, with or without glitches, et cetera. The second meaning is to make more strong, or to solidify, and with this work we want to strengthen our understanding of how to bring physical defects into our theoretical models. Thank you. Any questions? Please come to the mic.

Q: Hi, thanks for the talk. Could you go back to the slide where you explain the simulated traces with glitch-extended probes? Could you explain how you get this value, like 9A31F5?

A: It's just a toy example, but here in the normal software model we have a variable B1, which is 9A, and then 31 and F5, so we've got F5 and 31 here. In the glitch-extended version, B1 and M1 are combined, because they are used to compute one variable, so we get 31 and 9A as a glitch-extended probe. And then we still have this F5 here, which is a single normal probe. For verification you just concatenate these things, and that gives you this value.

Q: Can I ask what kind of leakage model you're using?

A: If you mean the identity model or the Hamming weight model, is that what you're referring to?

Q: If you do it this way, if you think about this 9A31F5, it's actually a linear model, because you're just concatenating everything together.

A: It's not actually linear, because you're verifying the probability distribution of this thing. You're checking the joint probability distribution, which covers any function you could compute with these values as inputs. If you took a non-linear function of 9A, 31 and F5, and that function leaked, then the distribution of this entire big variable would also leak.

Q: Oh yeah, right. Thank you.

Chair: Any other question? One question.

Q: How do you efficiently compute the mutual information?

A: How do you efficiently compute the mutual information? We've actually never computed it; we check it by looking at distributions. In a tool like this you can compute either exact probability distributions or experimental ones, depending on whether you're going over the inputs exhaustively or drawing them randomly. If they are experimental, we use a chi-squared test to check that the mutual information is approximately zero; otherwise it's just exact equality of the probability distributions.

Chair: I think we don't have time for another question, so I hope we can take it offline.

Q: When you consider mutual information among more than two variables, you can get strange things happening. For example, if I take the XOR of a secret key and a plaintext, both chosen at random, and I consider the conditional entropy of the secret key given the plaintext, that's still full entropy. If I separately consider the conditional entropy of the key given the ciphertext, that's also full entropy. But if I know the plaintext and the ciphertext, I've revealed the secret key completely. So I'm not quite sure about your definition of a set of probes and the secret: are you considering that you know all the values of the probes simultaneously, or are you looking at them just one by one?
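The questioner's example can be checked concretely, and it is exactly why the definition looks at all probes jointly rather than one by one. A small demonstration of my own:

```python
from itertools import product
from collections import Counter

# One-time-pad toy: c = k ^ p with k, p independent uniform bits.
samples = [(k, p, k ^ p) for k, p in product((0, 1), repeat=2)]

def dist_of_k(condition):
    """Distribution of the key k over samples matching `condition`."""
    c = Counter(k for (k, p, ct) in samples if condition(k, p, ct))
    total = sum(c.values())
    return {v: n / total for v, n in c.items()}
```

Knowing the plaintext alone, or the ciphertext alone, leaves the key uniform; knowing both pins it down completely, so only the joint distribution of the probes detects the leak.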
A: Sorry, I didn't catch every part of your question.

Chair: Sorry, we are running behind schedule; can we take the question offline? Okay, we're taking a break.