Hi, I am Gianluca Brian and I am presenting "The Mother of All Leakages: How to Simulate Noisy Leakages via Bounded Leakage (Almost) for Free". This is joint work with Antonio Faonio, Maciej Obremski, João Ribeiro, Mark Simkin, Maciej Skórski and Daniele Venturi.

The security analysis of cryptographic primitives typically relies on the assumption that the underlying secrets are uniformly random in the eyes of the attacker. In reality, however, this assumption may be false due to the presence of so-called side-channel attacks. More formally, suppose that we have a random variable X: for example, it may be a secret key, the private randomness of some algorithm, or in general something which should be kept private. A leakage function for the variable X is defined as a function f which takes as input a sample from the set X and outputs some binary string. Clearly, it is impossible to achieve leakage resilience against arbitrary functions because, for instance, the attacker could choose the identity function, thus obtaining the secret in full. Because of this, we need to define some restrictions, for example on the function family from which the attacker can choose the leakage function.

Before introducing the leakage models, we need to talk about min-entropy. Intuitively, the min-entropy measures how hard it is to guess the outcome of the random variable X. Given two random variables, we adopt the definition of conditional average min-entropy of Dodis et al. because, although slightly counter-intuitive at first, it better captures the notion of residual entropy after learning the outcome of Z. Indeed, consider an example which compares this notion with a possibly more intuitive one. In this case, the random variable X is such that if the first bit is 0, then all the bits are 0; otherwise, the other bits are uniformly random.
Then with probability 1/2 it is possible to fully predict the outcome of X, and with probability 1/2 the outcome of X is completely random except for the first bit. Therefore, the average of the residual min-entropies after learning the first bit is (n-1)/2, roughly n/2. However, an attacker has probability at least 1/2 of guessing the outcome of X just by looking at the first bit, and the notion of conditional average min-entropy fully captures this setting by saying that the min-entropy of X conditioned on B is, on average, approximately that of one bit; in other words, guessing the outcome of X is not harder than guessing the outcome of a single random bit.

Now we are ready to introduce the various leakage models. The simplest model we could think of is that of bounded leakage, in which the adversary computes a function of X which outputs at most l arbitrary bits. Thanks to its simplicity and versatility, this model, introduced for the first time by Dziembowski and Pietrzak, has been widely used in cryptographic constructions that remain secure even in the presence of leakage. However, in real-world side-channel attacks, the leakage obtained by the adversary is far from being bounded. For instance, the power trace of a physical implementation of AES typically consists of several megabytes of information, which is much larger than the length of the secret key. This motivates a more general notion of noisy leakage, where there is no upper bound on the length of the leakage, but instead we assume that the leakage is somewhat noisy, in the sense that it does not reveal too much information about the secret X. More formally, the first and simplest model of noisy leakage is the one in which conditioning on the leakage does not decrease the conditional average min-entropy of X by too much. In our work, we refer to this model as min-entropy noisy leakage.
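The min-entropy example above can be checked numerically. Here is a small Python sketch of Dodis et al.'s definition applied to that distribution; the function name and the choice n = 10 are my own for illustration:

```python
import math

def avg_cond_min_entropy(joint):
    """Conditional average min-entropy (Dodis et al.):
    H~_inf(X|Z) = -log2 E_z[ max_x Pr[X = x | Z = z] ].
    joint maps z -> {x: Pr[X = x, Z = z]}."""
    total = 0.0
    for z, dist in joint.items():
        pz = sum(dist.values())                      # Pr[Z = z]
        total += pz * max(p / pz for p in dist.values())
    return -math.log2(total)

n = 10
# X over n bits: if the first bit B is 0 then X = 0^n; otherwise the
# remaining n-1 bits are uniformly random.
joint = {
    0: {"0" * n: 0.5},  # B = 0: X is fully determined
    1: {"1" + format(x, f"0{n-1}b"): 0.5 / 2 ** (n - 1) for x in range(2 ** (n - 1))},
}

print(avg_cond_min_entropy(joint))  # ~ 1 bit: guessing X given B is about as hard as one coin flip
print(0.5 * 0 + 0.5 * (n - 1))      # 4.5 = (n-1)/2: the naive "average of residual entropies"
```

The gap between the two printed values is exactly the counter-intuitive point made in the talk: the naive average suggests about n/2 bits of residual entropy, while the conditional average min-entropy correctly reports about one bit.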
A slightly weaker model replaces the random variable X, which may not be uniform, with the uniform distribution U. In our work, we refer to this model as uniform noisy leakage. Another variant of noisy leakage is the one in which we consider the statistical distance between the pair (X, Z) and the pair (X, Z'), where Z' is a random variable distributed exactly like Z but independent of the outcome of X. In particular, we ask for the statistical distance to be bounded by some leakage parameter, and we refer to this model as statistical-distance noisy leakage. Finally, we consider the mutual information between the random variable X and its leakage Z, and we ask that this mutual information be bounded by some leakage parameter. We refer to this model as mutual-information noisy leakage.

Actually, there are several other models which depend on further restrictions on the relation between X and Z, or on the leakage function family. However, in our paper we mainly consider these four models, since they are the most commonly used. In addition, we give a full picture of the separations between all these leakage families. In particular, we show an example of why min-entropy noisy leakage and uniform noisy leakage are not the same thing, and we prove two other separations. First, we prove that statistical-distance noisy and min-entropy noisy leakages are two really different models, and then we prove that, although every mutual-information noisy leakage is also a statistical-distance noisy leakage, the opposite is false.

To make things clearer, here is a diagram of all the leakage models taken into consideration. The first one is the simplest one, bounded leakage. Then we have uniform noisy leakage, which contains bounded leakage: indeed, leaking l bits from the uniform random variable decreases its min-entropy by at most l. Then we have the slightly more powerful notion of min-entropy noisy leakage, in which the random variable is not required to be uniform.
Then we have statistical-distance noisy leakage, which, as said before, we prove to be quite different from min-entropy noisy leakage. And finally, mutual-information noisy leakage, which is contained in statistical-distance noisy leakage, although we prove that they are not the same.

Motivated by this situation, we consider the following question: can we reduce noisy leakage resilience, which is more powerful, to bounded leakage resilience, which is simpler to achieve, in a general way, thus obtaining the best of both worlds? Our main result is a positive answer to this question. And of course, we solve it by defining yet another leakage model, which we call dense leakage. The plan is to show that the other models of leakage are indeed dense leakages, and then to prove that dense leakage is simulatable from bounded leakage.

Before stating the definition of dense leakage, we need to define further relations between two distributions. In particular, we define the notion of density of distributions. Formally, we say that a distribution P is delta-dense in P' if, for every z in the support, it holds that P(z) is at most 1/delta times P'(z). We also define the relaxed notion of approximate density, which informally says that P is delta-dense in P' with high probability over the choice of z. Now we are ready to define dense leakage. Given a leakage function f, we say that f is a dense leakage for X if, with high probability over the choice of x, the distribution of the leakage conditioned on X = x is delta-dense in the unconditioned distribution of the leakage. Using the relaxed notion of approximate density instead, we require that, with high probability over the choice of x, the conditional distribution be approximately delta-dense in the unconditioned one. The notion of density allows for approximating distributions with other distributions.
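The density condition just stated is easy to check mechanically for finite distributions. Here is a small Python sketch (the helper name and the toy distributions are my own) that computes the largest delta for which P is delta-dense in P':

```python
def density(p, p_prime):
    """Largest delta such that p is delta-dense in p_prime,
    i.e. p(z) <= (1/delta) * p_prime(z) for every z in the support of p.
    Distributions are given as {value: probability} dicts."""
    delta = 1.0  # delta can never exceed 1, attained when p == p_prime
    for z, pz in p.items():
        if pz == 0:
            continue
        if p_prime.get(z, 0) == 0:
            return 0.0  # p puts mass where p_prime does not: no positive delta works
        delta = min(delta, p_prime[z] / pz)
    return delta

# Toy example (my own numbers): a point mass vs the uniform distribution on 4 values.
uniform = {z: 0.25 for z in range(4)}
point = {0: 1.0}
print(density(point, uniform))    # 0.25: a point mass is (1/4)-dense in uniform over 4 values
print(density(uniform, uniform))  # 1.0: every distribution is 1-dense in itself
```

The point-mass example already hints at the prefix-leakage example discussed later: conditioning a uniform leakage down to a single value over 2^l possibilities yields density exactly 2^(-l).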
In particular, if P is delta-dense in P' and we have access to a certain number of independent samples from P', then we can run the rejection sampling algorithm to obtain a value which is a good approximation of a sample from the distribution P. How does rejection sampling work? First of all, we draw s independent samples Z_1, ..., Z_s from P', and we initially set the index i to bot. Then, for each j, we define the bit B_j to be 1 with probability delta times P(Z_j) over P'(Z_j), and we set B_j to 0 otherwise. Now, if B_j is equal to 1, we update i to be equal to j and we halt the loop; otherwise, we keep running the algorithm until we run out of samples. Finally, we return the index i, which is the index of the candidate sample we are using to approximate P, or bot if the algorithm failed to produce such a candidate. It is easy to see that the distribution P~ from which the outcome Z_i is sampled is close to the original distribution P.

To get more comfortable with the notion of dense leakage, here is an example of a 2^(-l)-dense leakage function for the uniform distribution over {0,1}^n. I copied the definition of dense leakage in this slide for convenience; in particular, the function simply outputs the first l bits of its input. First of all, we fix an input x, and then we parse x as z concatenated with y, where z is the l-bit prefix and y is the remainder of the string. Then we study the distribution of the leakage conditioned on X being equal to x. In particular, for every l-bit string z', the probability of the random variable Z being equal to z' is exactly one if z' is the l-bit prefix of x, and it is exactly zero otherwise. On the other hand, without conditioning on X = x, the random variable Z is uniform over l-bit strings, and therefore, for each l-bit string z', Z has probability 2^(-l) of being equal to z'.
Since we want the distribution of Z conditioned on X = x to be dense in the unconditioned distribution of Z, we have to choose delta to be at most 2^(-l). Actually, this value suffices for the conditional distribution to be dense in the unconditioned one, so we just proved that f is a delta-dense leakage for X, for delta = 2^(-l). A little spoiler here: this example not only shows how to prove that a function is delta-dense, but also shows what the role of the parameter delta is. I'll say more on this when stating our simulation theorem, but before talking about that, I'll show a limitation for the case of min-entropy noisy leakage, which also helps to understand some of the challenges behind our goal of studying the relations between the leakage models.

First of all, we consider the notion of semi-flatness. A distribution is semi-flat if, intuitively, it is not too far from being uniform. Here is an example of a non-semi-flat distribution. Of course it depends on the parameters, but this distribution clearly is not semi-flat for too small values of alpha. In particular, we have an initial spike of high probability, and the distribution is uniform everywhere else. Consider now the following leakage function, which is random on the spike and quite regular on the uniform part: you can think of f as the function which, on the uniform part, outputs the l-bit prefix of x. Now let's see what happens when we leak from X, obtaining for example the value 3. The problem here is that the spike decreases the min-entropy of X quite a lot, while the distribution of X on the uniform part still has quite high min-entropy. As a consequence, a function decreasing the min-entropy of X by l actually requires leaking way more than l bits in order to achieve a small simulation error.
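As a toy numerical illustration of this limitation (the spike mass 1/2 and n = 16 are my own choices): a single spike caps the min-entropy of the whole distribution at one bit, even though the flat part alone still has almost full min-entropy.

```python
import math

n = 16
# A "non-semi-flat" distribution over 2^n points:
# mass 1/2 concentrated on a single spike, uniform mass everywhere else.
spike_mass = 0.5
flat_mass = 0.5 / (2 ** n - 1)

# Min-entropy is determined by the most likely point, i.e. the spike.
h_inf = -math.log2(max(spike_mass, flat_mass))
print(h_inf)  # 1.0: the spike caps the min-entropy of X at a single bit

# Conditioned on avoiding the spike, X is uniform over the remaining 2^n - 1 points.
h_inf_flat = -math.log2(flat_mass / 0.5)
print(h_inf_flat)  # ~ n: the flat part alone still has almost full min-entropy
```

So a leakage function may decrease the min-entropy of X by only l while still carrying nearly n bits of information about the flat part, which is why the semi-flatness parameter has to enter the bounds.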
For this reason, when talking about min-entropy noisy leakage, we have to factor in the alpha-semi-flatness of the distribution we are trying to approximate, and this increases the amount of leakage required.

Let's move now to the main result of the paper. First, we define the following simulation paradigm: we say that the leakage family F is simulatable from another leakage family G if, for each leakage function in F, there exists a simulator which is able to approximate the leakage, with small simulation error, using one leakage query chosen from the family G. With this paradigm in mind, we prove the following result: for a random variable X, the family of delta-dense leakages is simulatable with small error from the family of l-bounded leakages, where l is defined as the logarithm of 1/delta plus some additional terms which are actually very small in most applications. The proof is quite simple and consists of constructing, for a given dense leakage function, a simulator which applies rejection sampling to approximate the leakage. In particular, first the simulator draws a certain number of independent samples from the distribution of the unconditioned leakage. Then it constructs a leakage function which, upon input x, runs the rejection sampling algorithm to sample from the distribution of the leakage conditioned on the value x, and outputs the resulting index i. Finally, the simulator outputs the i-th sample. The bounded leakage is only needed to obtain the index i, and its length depends only on the parameter delta. As I said before, ignoring all the other terms, which are negligible in almost all situations, we can see that the dense leakage parameter delta is strongly related to the bounded leakage parameter l. Here is the diagram from before, in which we also use red arrows to denote that dense leakage is simulatable from bounded leakage with a slight simulation error.
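The simulator just described boils down to the rejection sampling loop from before. Here is a minimal Python sketch; the distributions, the 1% failure target, and the helper `draw` are my own illustrative choices:

```python
import math, random

def draw(dist):
    """Sample once from a finite distribution given as {value: probability}."""
    r, acc = random.random(), 0.0
    for z, pz in dist.items():
        acc += pz
        if r < acc:
            return z
    return z  # guard against floating-point round-off

def simulate_index(p, p_prime, delta, s):
    """Rejection sampling as in the talk: draw s i.i.d. samples from the
    unconditioned distribution p_prime, then accept sample z_j with
    probability delta * p(z_j) / p_prime(z_j) (at most 1, since p is
    delta-dense in p_prime). Returns (index, samples); index is None (bot)
    on failure. Leaking the index costs only about log2(s) bits."""
    samples = [draw(p_prime) for _ in range(s)]
    for j, z in enumerate(samples):
        if random.random() < delta * p[z] / p_prime[z]:
            return j, samples
    return None, samples

# Toy example (my own numbers): p plays the role of the leakage conditioned
# on x, p_prime the unconditioned leakage; p is delta-dense in p_prime.
p_prime = {z: 0.25 for z in range(4)}
p = {0: 0.7, 1: 0.1, 2: 0.1, 3: 0.1}
delta = min(p_prime[z] / p[z] for z in p)   # largest admissible delta
s = math.ceil(math.log(1 / 0.01) / delta)   # each round accepts w.p. delta, so
                                            # s rounds fail w.p. (1-delta)^s <~ 1%
i, samples = simulate_index(p, p_prime, delta, s)
if i is not None:
    print("simulated leakage:", samples[i])  # distributed (close to) p
```

Note how the bound on l emerges: the index fits in about log2(s) = log2(1/delta) plus a small additive term coming from the number of retries needed to keep the failure probability low.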
There are many applications of our result. One of them is about lower bounds in communication complexity, for instance when considering bounded collusion protocols. In a bounded collusion protocol (BCP) there are several parties, each party holds some input x_i, and the goal is to evaluate some Boolean function of all the inputs. To achieve this goal, an interactive protocol is defined in which, in each round, a certain subset of the parties is chosen to cooperate and output some partial result. The usual definition of bounded collusion protocols requires that the length of the transcript output by the colluding parties in each round is bounded by some parameter l. We define two variants of this notion, both without limitations on the length of the transcript. The first one is in the noisy model, in which the partial results output by the parties in each round are uniform noisy leakages of their inputs, and the second one is in the dense model, in which the partial results output by the parties in each round are dense leakages of their inputs. By studying these new notions we obtain two major results. The first one is a new lower bound in communication complexity: in particular, to evaluate certain Boolean functions, the transcript not only needs to be sufficiently long, but also needs to contain a large amount of information about the inputs. The second result, which is also a consequence of the first one, is the possibility of lifting the security of cryptographic primitives, whose leakage resilience is modeled as a bounded-communication BCP, to the more general setting of noisy leakage or dense leakage.

In conclusion, we first introduced the notion of dense leakage, which captures many existing notions of noisy leakage. Then we have shown that a single query of dense leakage can be simulated in the information-theoretic setting using a single query of bounded leakage, therefore allowing us also to simulate noisy leakage with bounded leakage.
Finally, we gave some applications in leakage-resilient cryptography, not only in the setting of bounded collusion protocols, but also in the leakage resilience of linear secret sharing, and essentially in any cryptographic primitive with bounded leakage resilience in the information-theoretic setting, such as forward-secure storage, leakage-resilient storage, leakage-resilient non-malleable codes and many others.

Unfortunately, in the computational setting things may be a bit more difficult, and this brings us to the first open problem: can we make our simulator efficient for certain families of noisy leakage? Although our simulation may be inefficient, we give an idea of an application in the computational setting by lifting bounded leakage to noisy leakage for passively secure multi-party computation in the common reference string model, and for a concrete construction of leakage-resilient one-way functions in the floppy model. Another problem we leave open is whether other families of noisy leakage which we have not considered here fall within the class of dense leakage, thus achieving simulatability through bounded leakage. Then, we leave open the problem of improving our results for bounded collusion protocols. In particular, we achieve that each round of bounded communication can be used to simulate a round of noisy communication or dense communication. However, since concatenating the outputs of bounded leakage leads to a bounded transcript, we wonder whether the same happens with noisy leakage or dense leakage. In other words, is it possible to generalize the results on BCPs so that it is the final transcript that is a noisy leakage or dense leakage of the inputs, instead of the single communication rounds? Finally, and more in general, is it possible to extend our results to the setting of multiple leakage queries? The problem here is that, once the result of the first leakage query is obtained, the second query itself may depend on such leakage, and this makes the reduction much harder.
For instance, in the setting of uniform noisy leakage, the first leakage query leaks some information from the uniform random variable, which is then not uniform anymore. Can we overcome these problems? If you're interested, I invite you to take a look at our paper to see our results and open problems in more detail. That's it for now, and thank you very much for your attention.