Alright, so let me announce the next talk. The next talk is "Generic side-channel distinguishers: improvements and limitations" by Nicolas Veyrat-Charvillon and François-Xavier Standaert, and I guess Nicolas will give the talk, well, start 15 seconds early. So this is some work we did on generic side-channel distinguishers, trying to improve over the existing mutual information analysis, and it led us to consider some of the limitations of such distinguishers. This is something we did in the context of standard DPA attacks, which are used in order to evaluate the actual security of physical implementations. So what happens is that during the computation, in a known-plaintext context, the device will use parts of the key in its operations, and the intermediate values will depend on only a small subset of the key bits, usually 8 bits for the AES. This leads to leakage, because during the computation you have storage, you have switching activity and such, which transpires into the physical world. For example, your leakage value here can be a power measurement: it can leak into the power channel or EM radiations. And here you can see, for example, that the power consumption will depend on the Hamming weight, so you have 0, 1, 2, 3, 4, and you can try to exploit this dependency in order to recover the key. So how does the adversary do that? He tries a sub-key, so he has a hypothesis; he predicts what would be computed during the cryptographic operation, models how this value should leak into the physical medium, and then tries to relate his predictions to his observations. There is some kind of dependency test, and the key hypothesis that relates best to the actual observations should be the correct one. So the two main ingredients here are the model, that is, how the value being computed impacts the medium, and then the dependency test between observations and predictions.
So for the leakage model you have two adversarial scenarios. The first one is the profiled case. Basically it says that the adversary can, before the actual attack, control the device and feed it values, plaintexts and keys, the goal being to perform a preliminary estimation of the density of the leakage. For that you have different kinds of estimations, using Gaussian distributions, mixture models and such. In the other case the adversary is not so powerful: we just give him a device with a fixed secret key and he has to try to recover it. So what is needed now is an assumption on the distribution of the leakages, usually based on his intuition about how the device should leak. A typical example is the Hamming weight model that was shown before, but you have more advanced hypotheses depending on the technology. Once you've chosen your leakage model, the second ingredient is choosing the dependency test, because different kinds of tests will be able to detect more or less complicated scenarios. For example, a dependency test can use only univariate dependencies, such as Pearson correlation, or multivariate ones. You can also exploit different moments of your distributions: maybe only the mean or the variance, or skewness and higher moments. And you may only be able to detect some specific kinds of dependencies between observations and predictions: linear dependencies, monotonic ones, or any kind of dependency from an information-theoretic point of view. So given all these choices, you have a kind of trade-off between efficiency and genericity. For the usual side-channel distinguisher, which is Pearson correlation, what you do is assume a Hamming weight model, and you will be able to detect linear dependencies between observations and predictions. So it's not very generic, it's a rather specific case, but it's very efficient because it's univariate and only estimates means.
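The correlation attack just described can be sketched in a few lines. This is a minimal simulation, not the speakers' code: the S-box here is a stand-in byte permutation rather than the real AES table, and the device's leakage is simulated as a noisy Hamming weight.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the AES S-box: any fixed byte permutation will do for this sketch.
SBOX = rng.permutation(256).astype(np.uint8)

def hamming_weight(x):
    """Number of set bits in each byte."""
    return np.unpackbits(np.asarray(x, dtype=np.uint8)[:, None], axis=1).sum(axis=1)

def cpa(plaintexts, traces):
    """For each sub-key guess, predict the Hamming weight of the S-box
    output and score the guess by Pearson correlation with the traces."""
    scores = np.empty(256)
    for k in range(256):
        pred = hamming_weight(SBOX[plaintexts ^ k])
        scores[k] = abs(np.corrcoef(pred, traces)[0, 1])
    return int(scores.argmax())

# Simulated device: leakage = Hamming weight of the S-box output, plus noise.
key = 0x2A
plaintexts = rng.integers(0, 256, size=2000, dtype=np.uint8)
traces = hamming_weight(SBOX[plaintexts ^ key]) + rng.normal(0, 0.5, size=2000)
recovered = cpa(plaintexts, traces)
```

Because the simulated leakage matches the Hamming weight model exactly, the correct sub-key stands out after a few hundred traces; this is the "specific but efficient" end of the trade-off.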
So it's very fast when the assumptions hold. And you have different trade-offs, going all the way to the most generic distinguisher known in the literature, which is mutual information estimation. This one is multivariate, it can exploit any moments because it estimates densities in a non-parametric way, usually, and it detects any kind of dependency. So it's the other end of the spectrum from a genericity point of view. Now it has some difficulties, because since you have to estimate densities, this is a statistical problem, so usually you have to choose some parameters which should relate to how the device leaks, which is not something that you always know. For example, if you want to perform a histogram estimation of this density, you have to choose a number of bins, and various other ways of estimating densities require you to set some kind of parameters. This is sometimes difficult to optimize. So the questions that we try to answer are: first, is it possible to design a generic side-channel distinguisher which is efficient, or better, and also free of parameters, so that you don't have to optimize things in order to ensure that your attack works? And once you have such a generic distinguisher, is it possible to evaluate the resilience of a device experimentally using only non-profiled distinguishers? So our contributions are: first, we propose a new distinguisher, which is based on copulas in order to reduce the leakage space, so that it simplifies the problem of modeling the leakage function; we use a dimension reduction based on spacings; and in the end we apply a uniformity test which is non-parametric, so we get rid of the pesky parameters. After that, we perform some empirical evaluations in order to show that this generic test first works, and is efficient.
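To make the parameter issue concrete, here is what a histogram-based mutual information attack might look like; the number of bins `n_bins` is exactly the kind of parameter the talk complains about, since the best value depends on how the device leaks. Again a simulated sketch with a stand-in S-box and Hamming weight leakage, not the actual evaluation code.

```python
import numpy as np

rng = np.random.default_rng(0)
SBOX = rng.permutation(256).astype(np.uint8)  # stand-in byte permutation

def hamming_weight(x):
    return np.unpackbits(np.asarray(x, dtype=np.uint8)[:, None], axis=1).sum(axis=1)

def mutual_information(pred, traces, n_bins):
    """Histogram estimate of I(prediction; leakage). The prediction is a
    byte Hamming weight in {0..8}; the leakage axis is cut into n_bins bins."""
    joint, _, _ = np.histogram2d(pred, traces, bins=[np.arange(10) - 0.5, n_bins])
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0  # only sum over non-empty cells
    return float((p_xy[nz] * np.log2(p_xy[nz] / (p_x * p_y)[nz])).sum())

def mia(plaintexts, traces, n_bins=8):
    """MIA: the key guess maximizing estimated mutual information."""
    scores = [mutual_information(hamming_weight(SBOX[plaintexts ^ k]), traces, n_bins)
              for k in range(256)]
    return int(np.argmax(scores))

key = 0x2A
plaintexts = rng.integers(0, 256, size=3000, dtype=np.uint8)
traces = hamming_weight(SBOX[plaintexts ^ key]) + rng.normal(0, 0.5, size=3000)
recovered = mia(plaintexts, traces)
```

With too few bins the estimator misses fine dependencies; with too many, the estimation noise on wrong keys grows. That tuning burden is what the proposed parameter-free test is meant to remove.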
And in the end, we tried some advanced scenarios showing that maybe, well, at least in these cases, you need to profile the attacks in order to ensure that your device is secure. So first, this new distinguisher. It consists of three main steps. First, we simplify the leakage space. Then we sample a value, a distance between samples, which has a specific shape when the samples are actually uniform, so for wrong key hypotheses. And in the end, we use this sampled distribution in order to differentiate the correct key from all the wrong hypotheses. What this looks like is the following. The goal of your side-channel attack is to detect when your key hypothesis allows you to separate the pink and the blue distributions. So you try to sort the samples and do something meaningful. The idea is that for a wrong key hypothesis, you just perform a random sampling, so it shouldn't be different from the marginal distribution. And the statistical issue here is that the marginal distribution can be complicated and hard to model. So what we do first is project the samples through the empirical cumulative distribution function, which has this very nice property that it's easy to estimate, it's just a sorting of the samples, and everything falls back onto the uniform distribution, a very simple distribution. Now the attack is that if you make a wrong key hypothesis, your sorting of the samples, according to your leakage model's values, should look like a uniform distribution. And if you have the correct key, you do something meaningful, which is, well, the only criterion is: different from uniform. So we sample the distance between samples, and this should behave like uniform spacings. You have a theoretical distribution of how actually uniform values behave, and you can see that for a wrong key hypothesis, they do tend to behave this way. A nice thing also, compared to mutual information, is that all samples contribute to this estimation.
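A rough sketch of the univariate version of this idea, in the same simulated Hamming weight setting: transform the traces through their empirical CDF, split them by predicted value, and score each key hypothesis by how far the resulting sub-samples deviate from uniform. For simplicity this sketch measures the deviation with a Kolmogorov-Smirnov-style distance rather than the smoothed spacing distribution of the talk, but like it, it needs no parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
SBOX = rng.permutation(256).astype(np.uint8)  # stand-in byte permutation

def hamming_weight(x):
    return np.unpackbits(np.asarray(x, dtype=np.uint8)[:, None], axis=1).sum(axis=1)

def ecdf_transform(x):
    """Step 1: project samples through the empirical CDF (normalized ranks),
    so the marginal becomes uniform on (0, 1) whatever its original shape."""
    ranks = np.argsort(np.argsort(x))
    return (ranks + 1) / (len(x) + 1)

def uniformity_deviation(u):
    """Maximum gap between a sorted sub-sample and the uniform CDF."""
    s = np.sort(u)
    grid = (np.arange(len(s)) + 1.0) / (len(s) + 1)
    return np.max(np.abs(s - grid))

def spacing_style_attack(plaintexts, traces):
    u = ecdf_transform(traces)
    scores = np.empty(256)
    for k in range(256):
        pred = hamming_weight(SBOX[plaintexts ^ k])
        # A wrong key partitions u at random: every class stays uniform.
        # The correct key groups similar leakages: classes deviate from uniform.
        scores[k] = sum(np.sum(pred == v) * uniformity_deviation(u[pred == v])
                        for v in np.unique(pred))
    return int(scores.argmax())

key = 0x2A
plaintexts = rng.integers(0, 256, size=2000, dtype=np.uint8)
traces = hamming_weight(SBOX[plaintexts ^ key]) + rng.normal(0, 0.5, size=2000)
recovered = spacing_style_attack(plaintexts, traces)
```

Note that there is no bin count or kernel bandwidth anywhere: the only operations are sorting and rank comparisons, which is the point of the design.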
This is unlike mutual information, where you need to perform separate estimations for each conditional distribution. So once you've done that, it's just a matter of smoothing these densities and comparing them, in order to look for the distribution which is furthest from the theoretical one, and this one gives your correct key. So there are no parameters to be set here; it's quite straightforward. Now, when you have more than one dimension (here the example is a masking countermeasure, so you only have bivariate dependencies that you can exploit to recover the key), what we do in the first step is a copula reduction, meaning that you apply this empirical CDF transform in each dimension independently. The nice property is that any dependency in the original space is preserved once you project. So now you're only working between zero and one, a regular space, easy to manage. The second step is the same as before: you sample the distance between pairs of samples, and this is where you reduce the dimension, because this distance is just a real value, so it's one-dimensional. You sample the distribution of this distance for the correct key and for the wrong keys, and once again you see that for the wrong keys it should behave as for uniform samples. So you have a theoretical distribution, which is the dashed green one here, corresponding to perfectly uniform samples, and the wrong key hypotheses are very close to it. So you can differentiate between key hypotheses and just take the one that diverges most, and this is how you perform your side-channel attack. So this is nice, but we led some experiments in order to validate this distinguisher, ranging from a very simple case using Hamming weight leakage, on to actual experiments with current technologies such as 65 nm, and we tried to extrapolate what will happen in the next generations, so simulations of how a dual-rail implementation in 65 nm should behave. First, the very simple case.
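Before the experiments, here is how the first two steps of the multivariate version might look. This is an illustrative sketch, not the speakers' implementation: it applies the per-dimension empirical CDF (the copula reduction) to simulated bivariate leakages, then reduces to one dimension by sampling distances between random pairs of points; dependent leakages give a distance distribution that visibly differs from the one of independent uniform points in the square.

```python
import numpy as np

rng = np.random.default_rng(0)

def copula_transform(samples):
    """Copula reduction: apply the empirical CDF in each dimension
    independently, mapping the leakage into the unit square while
    preserving any dependency between the dimensions."""
    u = np.empty_like(samples, dtype=float)
    for j in range(samples.shape[1]):
        ranks = np.argsort(np.argsort(samples[:, j]))
        u[:, j] = (ranks + 1) / (samples.shape[0] + 1)
    return u

def pair_distances(u, n_pairs=20000):
    """Dimension reduction: Euclidean distances between random pairs of
    points. For a wrong key this behaves like distances between
    independent uniform points in the square."""
    i = rng.integers(0, len(u), n_pairs)
    j = rng.integers(0, len(u), n_pairs)
    return np.linalg.norm(u[i] - u[j], axis=1)

n = 5000
# Simulated bivariate leakage with a strong dependency between dimensions ...
z = rng.normal(size=n)
dependent = np.column_stack([z + 0.2 * rng.normal(size=n),
                             z + 0.2 * rng.normal(size=n)])
# ... versus two independent leakage dimensions (the wrong-key situation).
independent = rng.normal(size=(n, 2))

d_dep = pair_distances(copula_transform(dependent))
d_ind = pair_distances(copula_transform(independent))
# Dependent points cluster near the diagonal of the unit square, so their
# pairwise distances are shorter on average than for independent points.
```

The transform itself is just a double argsort per dimension, so whatever the original marginals look like, both coordinates end up exactly uniform; only the dependency structure survives, which is what the distinguisher then tests.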
So it's a univariate dependency with Hamming weight leakage, rather simple. And we compared our distinguisher with the correlation coefficient, which is very efficient in this case because it's tuned specifically for this kind of leakage: we feed it exactly what it needs, and it's very fast. But we see that generic tests such as mutual information and the test we propose also work, which is a good first step. They're less efficient, because they're less specific to this particular leakage function. Now we move on to a slightly more challenging case. The leakage function is still rather simple, but now the dependencies are bivariate, so correlation and least squares are not applicable as such; you need a distinguisher that can exploit multivariate dependencies. We compared our distinguisher to mutual information, and both work, but this new test is more efficient, mainly because of the way it uses all the samples to estimate just one distribution. So it's slightly more efficient in this case than MIA; plus, there are no parameters to be set, so it's simpler to use. Now we move on to actual measurements taken from CMOS 65 nanometers. This kind of technology leaks in a more complex way, so the leakage function is not so easy to infer. We tried to feed the attack a 7-bit model, which is as close to the identity model as you can get for an AES; you can't actually use an 8-bit model. And what happens here is that the attacks don't seem to work. It's not because the device is protected; it's because the leakage model here is too far from reality. So we had to lead some kind of profiling beforehand in order to infer a relevant model, and now you see that the attack using a relevant model actually works. So the device was not secure; it's just that the leakage model, even before the dependency test, was hard to infer using just engineering intuition. And now we move on to simulations of a dual-rail implementation.
So a very complicated leakage function, which is highly non-linear. We see that the usual distinguishers, so correlation and least-squares regression, don't work at all, because they exploit linear dependencies which in this case don't exist. And the generic distinguishers do work, as long as the leakage model is relevant. So we can exploit this kind of non-linear leakage using generic distinguishers; this is one of the cases where we need generic distinguishers rather than specific ones. And now an even more complicated case: we still have a non-linear leakage, but on top of that we added a masking countermeasure, so it's a bivariate dependency. And even using a relevant model, the generic distinguishers now cannot attack successfully, because of the interactions between the two dimensions. So it looks like these non-profiled attacks cannot work, but still the device is not protected, because in this case we also led a profiled attack, so we had to profile the device beforehand in order to lead it, and it works. So the device is definitely not protected, it's not safe, but in this case we need profiling in order to evaluate its security. So our conclusions. First, in non-profiled side-channel attacks you have a trade-off between efficiency and genericity, because if you know that there are some simple dependencies, these are easy to exploit, so you can lead very fast attacks using, for example, correlation; but in the case of generic, more complex leakages, you need a generic distinguisher, so we provide a new generic test which is free of parameters and rather efficient.
But for security evaluations, a safer way of leading the evaluation is to do profiling and lead profiled attacks, because whereas dependency tests can be generic, your leakage models so far are not, so you cannot perform an actual evaluation using just a generic test. This is in line with the evaluation framework for side-channel attacks proposed at Eurocrypt 2009. And now an open question is: in practice, can we achieve these non-linear leakages, and is the non-linearity of the leakage function something that could be used as a design criterion? That's it. Thank you. Any questions?