session, the first talk is on circuit compilers with logarithmic leakage rate, by Marcin Andrychowicz, Stefan Dziembowski, and Sebastian Faust. And Sebastian will give the talk. OK, thanks a lot for the introduction. So this is joint work with Marcin and Stefan, and I will talk, essentially, about the black box model first. So in the black box model, we have essentially an adversary, and what he can do is attack the black box. And like all models in cryptography, the black box model makes some restrictions on what the adversary can do. What we usually assume in the black box model is that there is some source of perfect randomness from which we, for example, sample the secret key. So we have this key that comes from this perfect source of randomness. This perfect randomness is also used when the algorithm is executed; for example, when we do some encryption, we need some perfect source of randomness. And then there is this adversary. The second assumption, which is also why this model is called the black box model, is that the adversary can interact with the algorithm only over some well-defined interface. You can think of this, for example, as an encryption scheme: the adversary provides some input message M, and what he gets back is a ciphertext. Other types of interfaces are possible, of course. And it's called the black box model because whatever happens inside this box here, the adversary has absolutely no knowledge about it. In particular, he doesn't have any information about the secret key. Most security proofs are done in this black box model: we show that it's impossible for an adversary to break the algorithm, as long as the key is perfectly random and the attacker only has access to the algorithm over the interface.
So the question is, is this unbreakable? Of course, it turns out that in many cases it's not, because of the environments in which this algorithm is used. For example, when we implement it on some smart card, then the adversary can move from this world here, where he lives, to the outside world, to the non-black box world, let's say. And then he can try to exploit weaknesses of the implementation. There are many examples. One important example, which is also the most important for this talk, is where adversaries try to break smart cards: they measure some side channel information and try to break the algorithm. And these side channel attacks are typically much, much more efficient than traditional attacks against the algorithm. So what is the problem? Essentially, the adversary moves outside of this black box world to exploit weaknesses of the implementation. He can, for example, measure side channel information that is emitted when the device executes, and that's usually called the leakage. There are many examples, and maybe some of you have seen the talk of Emmanuel yesterday. So I'll give a very high-level idea of just one of them, the power analysis attack. Essentially, one source of this leakage is the power consumption of the device. Take RSA decryption, for example: it computes C to the D mod N, and it does this using a square-and-multiply algorithm when you implement it on some real-world device. And then the adversary, when he gets physical access to this device, can measure the power consumption of the device, so he can, for example, get this power trace here.
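To make the attack concrete, here is a minimal sketch (my own illustration, not code from the talk) of left-to-right square-and-multiply, where the sequence of squarings and multiplications mirrors the bits of the secret exponent, which is exactly the structure a power trace can expose:

```python
def square_and_multiply(c, d, n):
    """Left-to-right square-and-multiply for c^d mod n.

    The recorded trace ("S" for square, "M" for multiply) depends directly
    on the bits of the secret exponent d; if squaring and multiplication
    have distinguishable power consumption, the trace reveals d.
    """
    result = 1
    trace = []  # stand-in for the observable operation sequence
    for bit in bin(d)[2:]:  # most significant bit first
        result = (result * result) % n  # square for every bit
        trace.append("S")
        if bit == "1":
            result = (result * c) % n   # multiply only on 1-bits
            trace.append("M")
    return result, trace

m, trace = square_and_multiply(5, 0b1011, 101)
assert m == pow(5, 0b1011, 101)
# An "M" right after an "S" marks a 1-bit, so the trace spells out d.
```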
And if the power consumption is different for squaring and multiplication, and since whether we execute a squaring or a multiplication depends on the bit representation of the secret key, the adversary can reveal the secret just by looking at this power trace, okay? So many countermeasures have been proposed. There are two classes of countermeasures: physical countermeasures and algorithmic countermeasures. I will only talk about the algorithmic countermeasures, and there are also two variants there. We could try to design specific schemes that protect against these side channel attacks, so we can design a leakage-resilient PRF or public key encryption or signature scheme, or we can aim for something much more ambitious, namely trying to protect arbitrary computation, okay? So this is a general protection mechanism, and this is also the focus of this talk: we want to protect arbitrary computation. So we need some way to model arbitrary computation, and we do this as a circuit. The circuit has some operations, like a multiplication operation and an addition operation; there are some wires in this circuit, and these wires carry elements of a finite field, okay? So this is how we model arbitrary computation. This could be, for example, an AES circuit: it has some secret key here, some input there, and produces some output. The most famous general protection mechanism is the so-called masking schemes, and there's a lot of work on this, starting with the work of Chari et al. from CRYPTO '99, and in the last years there has been much more work on this topic, trying to get more efficient schemes or schemes that provide higher resilience against certain types of attacks. Essentially, the idea of these schemes is as follows.
So you have some description as a circuit, and we transform it into a new, protected circuit description that hopefully has better security against realistic side channel attacks. This concept was formalized by ISW as a circuit compiler, and they also introduced the leakage model. This is the work that inspired many of these follow-up works. Essentially, the circuit compiler takes as input some arbitrary circuit, for example an AES circuit, and produces a protected circuit description that hopefully gives better resilience against, say, power analysis attacks. Then we can implement this protected circuit description on some real-world device, like a smart card, and the hope now is that if the adversary obtains some leakage, he cannot break the scheme anymore. So the leakage model that was considered by ISW is the so-called t-probing model, and the nice thing about this t-probing model is two things. First, it's very simple: it's very simple to describe and also to argue about security in this model, and there are even, by now, automatic tools that check the security of these masking schemes in this model. And second, it's also quite realistic, because it models some relevant side channel attacks. So what does this t-probing model say? It says essentially that the adversary can learn some of the intermediate values that are produced by the circuit. For example, here he could learn some bit of the secret key, or here some output of the addition gate, and the only restriction we impose is that he is bounded to learn at most t intermediate values. And security of the masking scheme essentially guarantees that as long as the adversary is restricted to learning only t intermediate values, he learns nothing about the values on which we actually compute.
And typically we aim here for perfect security, so we want to have a perfect simulator that can simulate the view of the adversary. This is typically what's done. And there are two variants: one talks about absolute leakage and the other about relative leakage. They were both introduced in the work of ISW, and I want to show you the difference between these two models now. The first one, the absolute leakage model, puts a bound on the total number t of probes in the circuit, so that's a total restriction on what the adversary can learn. Consider the following example: we have a small circuit with 300 wires and a big circuit with 30,000 wires, but t stays the same in both cases. So in both cases, even though the second circuit is much larger, the adversary can only learn three intermediate values. The main disadvantage of this approach is that t stays the same even as the circuit gets larger and larger. And this is in particular a problem for these compilers, because these compilers usually first blow up the size of the circuit by a large factor, by the security parameter or a polynomial in the security parameter, so the protected circuit becomes larger. The other model looks at the fraction of leaking wires; it was also considered in the ISW work. Essentially, it introduces this parameter alpha, which is t divided by the size of the circuit, where t is the number of probes and the size is the number of wires, or essentially the number of gates. And in this case, if you have the small circuit, the adversary can learn, say, three wires, so we have a leakage rate of 1%, while here, in the big circuit, the adversary can learn 300 wires, so we still have a leakage rate of 1%. So we want the leakage rate to stay the same as the size of the circuit increases; this is at least the final goal.
We're not going to fully achieve it yet, but it's the final goal. So the state of the art for perfect security was ISW, which achieves alpha of approximately one divided by n, where n is the security parameter, and this "approximately" means that there will usually be some small constant in this one divided by n. This was the state of the art, and what we achieve in our result is essentially the following. We aim to maximize this rate alpha. We have a construction for affine circuits, where the circuits only contain addition and multiplication by constants, which actually achieves the optimal rate, so alpha equal to one divided by some constant c. The main ingredient to achieve this is an asymptotically optimal refreshing scheme. And we also have results for arbitrary circuits, where the circuit can now also contain, for example, multiplication, and there we achieve a rate of one divided by log n, where n is the security parameter. The rest of the talk is structured as follows: I will talk a bit more about how these masking schemes work, then about affine circuits, and also about how to lift the result to work for all circuits. So what are the ingredients of such a masking scheme? There are essentially three ingredients that we always need to get a secure scheme. The first one is an encoding function; this is also where the parameter n comes from. This encoding function maps an element of the field to a vector over this field. Essentially, you can think of this as a linear secret sharing scheme, an n-out-of-n secret sharing scheme, where the secret element b from the field is encoded by a vector b1 to bn that is uniformly distributed over F to the n, such that the sum of the shares is equal to b. So this is just a normal, simple additive secret sharing scheme, and as long as you only learn n minus one shares, you have no knowledge about the secret b. So this first ingredient is kind of trivial.
The first main difficulty is to actually come up with operations that compute on these encodings. So we need these kinds of gadgets, as they are called by ISW. Such a gadget takes as input an encoding of some secret value a and an encoding of some secret value b, and produces an encoding of a plus b. And it has to be done in such a way that it preserves the security of the encoding. For addition this is actually simple: because the secret sharing is linear, we can just do a component-wise addition. The third ingredient is a method of composition. We need some way, if we have a large circuit, to compose all these simple gadgets such that security is preserved. Usually this composition is done by a gate-by-gadget replacement strategy: you have some unprotected circuit that you want to compile, there are many gates in it, and each of these gates is replaced by some protected gadget. So let's take these ingredients and look at a circuit that is composed of many gates. We get some huge circuit which computes on encodings. It has some protected addition gates here, some protected multiplication gates, and maybe a protected multiplication-by-constant gate, which we need for our affine circuits. And there's some secret state K1 to Kn, K1 prime to Kn prime; maybe this would be an abstract view of the AES circuit, so there's some secret there. And there's also some input encoding and some output decoding. And you see these fat wires here; these fat wires contain encodings. So, for example, this fat wire here would contain the encoding of A plus K. So the first question: we want to achieve a good rate.
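The encoding and the addition gadget just described can be sketched in a few lines of Python. This is only an illustration under my own choices, not the paper's construction: the field modulus and parameters are arbitrary.

```python
import secrets

P = 2**61 - 1  # an illustrative prime field modulus (my choice, not from the talk)

def encode(b, n):
    """Additive n-out-of-n sharing: n-1 uniform shares, last share fixes the sum.
    Any n-1 shares are uniformly distributed and reveal nothing about b."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((b - sum(shares)) % P)
    return shares

def decode(shares):
    """Recover the secret as the sum of all shares."""
    return sum(shares) % P

def add_gadget(a_shares, b_shares):
    """Addition gadget: the sharing is linear, so adding the encodings
    component-wise yields an encoding of a + b."""
    return [(x + y) % P for x, y in zip(a_shares, b_shares)]

a, b = 17, 25
assert decode(add_gadget(encode(a, 8), encode(b, 8))) == (a + b) % P
```

Note that, as the talk points out later, this addition gadget is deterministic, which is exactly why composition additionally needs refreshing.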
So the number of probes should grow with the size of the circuit, and in particular, since n is the security parameter, we could look at circuits that are much larger than n. In this case, of course, we want t to be larger than n. But is this possible? It turns out that it's of course not possible: if the adversary can target one encoding here by putting all his probes on it, he can essentially learn the entire secret, and we cannot have a perfect simulation. So ISW, to this end, introduced some restriction on what the adversary can do: they introduced the so-called t-region probing model. This model essentially structures the computation into so-called regions; each region is represented by some of the gadgets, and the adversary can place t probes inside each region. So if the gadgets are large enough, then we can still hope for a very good rate. And the main ingredient to get a composed circuit that is secure in the end is some method of refreshing. Why do we need this refreshing routine? Because, as you saw, the addition gate I showed you was completely deterministic, and if you have a large number of addition gates and the adversary can probe t wires in each of them, then at some point he can still recover, for example, the entire secret key. So we need some way to pump new randomness into the encodings, and that's essentially done by this refreshing algorithm. At a high level, the refreshing algorithm works like this: we have some input encoding K1 to Kn and produce some output encoding H1 to Hn, and internally it uses some randomness. The first requirement we want is correctness: if the input was an encoding of K, then the output should also be an encoding of K. And we will of course use the randomness so that the output encoding is independent of the input encoding.
The second requirement is the security requirement. We want that, when we allow the adversary to place t probes in each execution of the refreshing, where t has to be smaller than n over two, he should not learn anything about the secret K that was encoded here. Why n over two? Because the adversary can probe some fraction of the output encoding here and some other fraction of the shares there; if t were equal to n over two, he could again learn the entire encoding and recover the secret. That's why we need t smaller than n over two. And the main ingredient to build such a scheme is an encoding-of-zero sampler; this is where the randomness is used. We generate an encoding of zero by sampling R1 to Rn, and then we just add it to the input encoding. The security of the refreshing follows essentially from the security of this encoding-of-zero sampler. So what does security of an encoding-of-zero sampler say? Essentially, we have this sampler that produces a distribution R1 to Rn which is an encoding of zero. The adversary can place some probes inside it, and what we want in the end is that a large fraction of the outputs is still independent of the internal probes. A simple way to build such a sampler is to sample R1 to Rn minus one uniformly at random and then compute Rn as the negative sum of these shares. Unfortunately, this simple encoding-of-zero sampler doesn't satisfy the property I just described; moreover, if you instantiate a refreshing scheme with this sampler, the scheme actually becomes insecure. And the best secure refreshing that achieves perfect security is due to ISW; it has O of n squared size and a leakage rate of alpha approximately one divided by n. So how can we improve this?
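The refreshing structure can be sketched as follows, here instantiated with the simple (and, as just noted, not probing-secure) encoding-of-zero sampler. Again this is my own illustration in Python with an arbitrary modulus, showing only the correctness property, not a secure construction:

```python
import secrets

P = 2**61 - 1  # illustrative prime modulus (my choice)

def naive_zero_encoding(n):
    """Sample R1..R_{n-1} uniformly; set R_n so that the shares sum to zero.
    As stated in the talk, this simple sampler is NOT probing-secure."""
    r = [secrets.randbelow(P) for _ in range(n - 1)]
    r.append((-sum(r)) % P)
    return r

def refresh(shares):
    """Re-randomize an encoding by adding a fresh encoding of zero.
    Correctness: the encoded value (the sum of the shares) is preserved."""
    zeros = naive_zero_encoding(len(shares))
    return [(k + z) % P for k, z in zip(shares, zeros)]

k = [3, 14, 15, 92, 65]                    # some encoding of sum(k) mod P
assert sum(refresh(k)) % P == sum(k) % P   # same encoded value, fresh shares
```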
Essentially, we improve this by viewing the encoding-of-zero sampler as a graph, where the outputs are represented by these vertices here. So we have R1, R2, R3, R4, R5; these are the outputs produced by the encoding-of-zero sampler, and the edges are the internal random values that are used for the sampling. So for example, here this would be the simple encoding-of-zero sampler, where we sample A, B, C, D uniformly at random, and then R1 becomes essentially the sum of all these values, and R2, R3, R4, and R5 are the negatives of these random values. And then it's clear that when we add all these values together, we get an encoding of zero. The ISW encoding-of-zero sampler can essentially be viewed as a fully connected graph; that's also why its complexity is O of n squared. Now, probing corresponds to removing edges in this graph: when the adversary learns, for example, B and C here, then we remove these edges from the graph. So recall what we wanted from the encoding-of-zero sampler: we want a large fraction of the Ri to be independent of the probes, which in this graph view corresponds to the graph, after removing some of the edges, still having a large connected component, and for security we need this connected component to contain at least n over two of the vertices. So this is the connected component here, in this case after probing many of the internal random values. Now, to get a good encoding-of-zero sampler, for security we need a graph that is initially very highly connected, and to have an efficient sampler we need a sparse graph. So we can use an expander graph, and this then gives us a good encoding-of-zero sampler. So what is the result that we finally get?
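The graph view can be sketched like this: every edge carries a fresh random value that is added at one endpoint and subtracted at the other, so each edge contributes zero to the total and the outputs always form an encoding of zero, for any choice of edge set. This is only an illustration of the correspondence (run here on a star and a cycle, not on an actual expander, and it is not the paper's construction):

```python
import secrets

P = 2**61 - 1  # illustrative prime modulus (my choice)

def graph_zero_encoding(n, edges):
    """Graph-based encoding-of-zero sampler, in the talk's graph view:
    each edge (u, v) carries a fresh random value added at vertex u and
    subtracted at vertex v, so the n outputs always sum to zero. Probing
    an edge's random value corresponds to removing that edge; security
    asks that a large connected component survives, which is why a
    sparse, highly connected graph (an expander) is the right choice."""
    out = [0] * n
    for (u, v) in edges:
        r = secrets.randbelow(P)
        out[u] = (out[u] + r) % P
        out[v] = (out[v] - r) % P
    return out

# The star graph centered at vertex 0 is exactly the simple sampler
# from the slide (R1 = A + B + C + D, the rest are the negatives):
star = [(0, v) for v in range(1, 5)]
assert sum(graph_zero_encoding(5, star)) % P == 0

# A cycle uses the same number of edges but spreads them more evenly:
cycle = [(i, (i + 1) % 5) for i in range(5)]
assert sum(graph_zero_encoding(5, cycle)) % P == 0
```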
We show that there exists an explicit construction of refreshing with size and randomness complexity O of n, and with security against a constant fraction of probes; it can be instantiated with some expanders, like the Margulis expander graph. And what we get in total is then a constant probing rate: since the size is O of n and the number of probes can be a constant fraction of that, we get a rate of one divided by a constant. Then, for affine circuits, we can combine this with the addition gates: the addition gate has complexity O of n, so in total we get a circuit where each gadget has size O of n and in each gadget we can probe a constant fraction, so overall a constant fraction of probes is tolerated. Now for the multiplication: ISW achieves a one divided by n leakage rate, and what we do to improve this is to use some techniques from multi-party computation. Instead of computing on the simple additive secret sharing, we compute on a threshold secret sharing, a t-out-of-n secret sharing, for example Shamir secret sharing, where essentially a is represented by shares a1 to an that are different points on a polynomial of degree t smaller than n over two. In the following, I denote this by this kind of notation, which is due to the paper of Damgård, Ishai, and Krøigaard, I think, and which essentially says that there are n shares and they lie on a polynomial of degree t. And in the work of Andrychowicz et al., it was shown that if we have a leak-free component here that samples for us a Shamir, or threshold, secret sharing of a random value R lying on a polynomial of degree t, and a random sharing of the same value R lying on a polynomial of degree two t, and if we can generate these somehow in a secure way, then we can actually achieve an optimal leakage rate of one divided by c for the multiplication.
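For reference, a t-out-of-n Shamir sharing can be sketched as follows. This is the standard textbook scheme in Python, not the paper's multiplication gadget; the modulus, evaluation points, and parameters are arbitrary illustrative choices:

```python
import secrets

P = 2**61 - 1  # illustrative prime modulus (my choice)

def shamir_share(secret, t, n):
    """Shamir sharing: shares are evaluations at x = 1..n of a random
    degree-t polynomial f over GF(P) with f(0) = secret. Any t shares
    reveal nothing; any t+1 shares determine the secret."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def shamir_reconstruct(points):
    """Lagrange interpolation at x = 0, given any t+1 shares."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = shamir_share(1234, t=2, n=7)
assert shamir_reconstruct(shares[:3]) == 1234  # any t+1 = 3 shares suffice
```

The point of moving to a threshold sharing, as the talk explains, is that multiplying share-wise doubles the polynomial degree, which is where the degree-2t sharing of the same random value comes in.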
So what we observe now is that we can actually eliminate this leak-free component, by observing that this sampling is actually an affine circuit, so it can be described by an affine circuit, and then we can use our affine compiler to transform it into a larger circuit that computes on encodings of these Shamir, or threshold, secret sharings. What we get is essentially an output encoding of this R sub n comma t and R sub n comma two t. The one difficulty is to continue the computation: this actually achieves the optimal probing rate security, but to continue the computation we need to decode these, we need to peel off one layer of encoding. We have some kind of circuit that does this, but this is the place where we lose the factor of one divided by log n, and it's still an open question how to get this to a constant as well. We can then show that the whole circuit achieves security for rate one divided by log n. To summarize, since I don't have any time left, I guess: for affine circuits we achieve security with an optimal leakage rate of one divided by c for some constant c; for arbitrary circuits we achieve security with a one divided by log n leakage rate; and the refreshing achieves size and randomness complexity O of n and tolerates a number of probes of O of n. This may even be interesting for other settings: for composition, we usually also need refreshing to protect against certain types of attacks, so we can maybe use this to get more efficient schemes. And it also has a nice implication for the very important work of Prouff and Rivain on the noisy leakage model: because we have this kind of rate here, we can actually show that in the end we achieve security for an optimal noise rate of one divided by some constant c. That concludes my talk, thanks a lot. I think in order to stay on schedule, we'll have to take questions offline, because we have to be synchronized with the other room. Thank you. But anyway, we have to change this thing, so
they can ask me questions. All right.