Good evening. I will present our paper "Mode-Level vs. Implementation-Level Physical Security in Symmetric Cryptography: A Practical Guide Through the Leakage-Resistance Jungle". This is joint work with colleagues from UCLouvain in Belgium, the CNRS in France and Shandong University in China. For the introduction, I would like to start with a few words of motivation, describe our goals and begin with the question "Why a leakage-resistance jungle?" For this, I would first like to discuss the differences and similarities between mathematical and physical security reductions. In a mathematical security reduction, we have a full-fledged scheme, and we want to reduce it to some mathematical assumptions. For example, here, we have an encryption scheme based on a block cipher, and we want to reduce it to the assumption that the block cipher is an ideal cipher, or a pseudo-random permutation. As a result, the mathematical assumption can be seen as a specification of requirements that symmetric designs have to ensure. Therefore, we have a trade-off between the efficiency of the mode and the efficiency of the primitive. For example, instantiating an ideal cipher may require a few more rounds of your symmetric primitive than instantiating a pseudo-random permutation. Yet, the extent of the trade-off is in general quite small, and performance will typically vary by factors ranging between 2 and 4. In the case of physical security reductions, the main idea is the same: we want to reduce the security of a full-fledged scheme to a combination of mathematical and physical assumptions.
The first difference here is that while we have a few mathematical assumptions that are well understood and established, we have a wider zoo of physical assumptions, all of them coming with pros and cons, and current analyses usually combine several physical assumptions in order to gain insights about the security of the modes. For the rest, the physical assumption is again a specification of requirements that implementations must satisfy, and therefore we again have a trade-off between the efficiency of the modes and the efficiency of the primitives. Typically, protected implementations will require expensive countermeasures. The important point here is that the extent of the trade-off will be much larger: the performance of protected implementations can decrease by factors ranging between 2 and 100. Therefore, we have a stronger incentive to have a finer granularity in the modes and to use exactly what you need. For example, if you need to prevent leakage in encryption only, you could design a more efficient mode than if you also need to prevent leakage in decryption. A second reason why physical security analyses are difficult relates to the observation of Micali and Reyzin that unpredictability with leakage is easier to capture and guarantee than indistinguishability with leakage. Applied to the case of authenticated encryption schemes, it means that integrity guarantees are easier to reach than confidentiality guarantees. For example, it can happen that you can guarantee integrity with an unbounded leakage of all the ephemeral values of a scheme, which is something that is not going to work for confidentiality. As a result, we have a stronger incentive to use composite definitions rather than all-in-one definitions à la Rogaway and Shrimpton, which are more convenient in the black-box world.
So if you look at the current state of the art, you will find a zoo of definitions, which in our opinion is unavoidable, and a zoo of assumptions, which could probably be improved with further research. The combination of those zoos is what we call the leakage-resistance jungle, and the goal of this paper is to translate this complex state of the art into concrete guidelines for implementers. That is, we try to use formal security analyses in order to help hardware engineers design more secure implementations. The main tool that we will use for this purpose is a simplifying framework in three parts. First, we will identify relevant steps in authenticated encryption schemes. Second, we will simplify the assumptions zoo. And finally, we will simplify the definitions zoo. Starting with the steps of an AE scheme and taking the example of an inner-keyed sponge, we will consider four steps: first, an optional key generation function; then, the bulk computation that processes the message blocks; third, the tag generation function; and finally the verification. Note that we do not consider associated data, which does not have much impact on leakage analyses. As already mentioned, the goal of a formal analysis is to reduce the mode to some assumptions, ideally weak assumptions and in a tight manner. All these physical assumptions can be viewed as sufficient conditions of security, and they are expressed at quite different abstraction levels. In this respect, the observation we make in the paper is that if you translate these assumptions into necessary design goals, you can reduce them to a few well-known attacks as would be investigated by security evaluation labs. More precisely, we will sometimes need DPA security, for example against the long-term key or the tag, which means that we need to prevent attacks where the adversary can observe the leakage of the primitive for many inputs.
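To make the four steps concrete, here is a minimal Python sketch of this structure. It uses a toy hash-based PRF in place of the real primitives; all function names are illustrative and this is of course not a secure construction:

```python
import hashlib

def prf(k: bytes, x: bytes) -> bytes:
    # Toy keyed function (NOT secure cryptography): stands in for the
    # primitives (block cipher / permutation) of a real AE scheme.
    return hashlib.sha256(k + b"|" + x).digest()

def ae_encrypt(key: bytes, nonce: bytes, msg: bytes):
    # Toy restriction: messages up to 32 bytes (one keystream block).
    k0 = prf(key, b"kgf" + nonce)             # 1. key generation (KGF)
    keystream = prf(k0, b"bulk")[: len(msg)]  # 2. bulk computation
    ct = bytes(m ^ s for m, s in zip(msg, keystream))
    tag = prf(key, b"tgf" + nonce + ct)       # 3. tag generation (TGF)
    return ct, tag

def ae_decrypt(key: bytes, nonce: bytes, ct: bytes, tag: bytes):
    if prf(key, b"tgf" + nonce + ct) != tag:  # 4. verification
        return None
    k0 = prf(key, b"kgf" + nonce)
    keystream = prf(k0, b"bulk")[: len(ct)]
    return bytes(c ^ s for c, s in zip(ct, keystream))
```

The point of separating the steps is that, as discussed next, each of them can require a different level of side-channel protection.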
We will sometimes need SPA security, for example against ephemeral secrets, which means that we need to prevent attacks where the adversary can observe the leakage of the primitive for a few inputs only. We will sometimes need one-block confidentiality, which means that we need to prevent attacks where the adversary directly targets one single message block. And finally, we will sometimes even tolerate unbounded leakages. In order to illustrate that using a weaker physical assumption is practically relevant, the table now contains approximate performance overheads. We see that DPA security, which can be obtained via higher-order masking or shuffling, implies overhead factors from 5 to 100. SPA security, which can be obtained via parallel implementations or noise, implies overhead factors from 1 to 5. One-block confidentiality is not reported because it is less explored so far. And unbounded leakages, of course, come for free. We now move to the simplification of the definitions zoo. For this, we observe that for confidentiality we can have CPA security or CCA security. For integrity, we can have plaintext integrity or ciphertext integrity. For leakage, we can have it in encryption only or in encryption and decryption. For the nonce, we can have nonce misuse resistance or resilience. And for leakage, we can have leakage resistance or resilience. By resistance, we mean that we aim to maintain the security guarantees even in the presence of nonce misuse or leakage. By resilience, we mean that the security guarantees vanish in the presence of nonce misuse or leakage, but they are restored as soon as nonce misuse or leakage are removed from the adversary's capabilities. The result of all these options is represented by the two cubes at the bottom of the slide. They represent all the composite definitions that we can have. Now, the simplification we propose is the following one. Grade-0 designs have no mode-level leakage resistance.
Grade-1a designs ensure ciphertext integrity with leakage in encryption and CCA security with leakage in encryption, for example thanks to key evolution. Grade-1b designs ensure ciphertext integrity with misuse resistance and leakage in encryption and decryption (CIML2), for example thanks to strengthening the key generation function and the tag generation function. Grade-2 designs combine CIML2 with CCA security with misuse resilience and leakage in encryption, for example by combining the two previous ideas. And finally, grade-3 designs add CCA security with decryption leakage, for example by using two passes. The next step of our outline is to show that we can apply this taxonomy to existing AE schemes and to illustrate the trade-off between mode-level and implementation-level physical security. We start with OCB-Pyjamask, which is a grade-0 design. In this case we see that even for the lowest security target, which is CCA security and ciphertext integrity with leakage in encryption only and no misuse, we always need to protect all the block cipher calls with strong DPA protections. This is not a bad thing in itself, because the mode is quite efficient otherwise. But it shows that side-channel security will only depend on implementation-level countermeasures. Other ciphers provide the same type of guarantees. OCB-AES is interesting to mention because it recalls something that we do not cover, namely the fact that certain ciphers like Pyjamask are going to be easier to protect with implementation-level countermeasures like masking than other ciphers, for example the AES. Moving to a mode with slightly better leakage resistance, we have PHOTON-Beetle, which is a grade-1a design. What is interesting here is that with leakage in encryption only and no misuse, the bulk of the computation only has to be protected against SPA. And this is because every message is always going to be encrypted with a fresh key generated by the key generation function.
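The grade-1a idea, that the bulk only needs SPA protection because it runs on fresh, evolving ephemeral keys, can be sketched as follows. This is a hypothetical toy with sha256 standing in for the sponge permutation, not the actual PHOTON-Beetle mode:

```python
import hashlib

def p(x: bytes) -> bytes:
    # Toy stand-in for the sponge permutation (not a real permutation).
    return hashlib.sha256(x).digest()

def encrypt_leveled(key: bytes, nonce: bytes, msg_blocks):
    # KGF: only this step touches the long-term key, so only it needs
    # DPA protection; with unique nonces it always runs on fresh inputs.
    k = p(key + nonce)                  # ephemeral key k0
    ct = []
    for m in msg_blocks:                # bulk: SPA protection suffices,
        s = p(k + b"out")[: len(m)]     # since each ephemeral key is
        ct.append(bytes(a ^ b for a, b in zip(m, s)))
        k = p(k + b"upd")               # used for a few inputs only
    return ct
```

Because the bulk is a stream of xors, applying the same function to the ciphertext blocks recovers the message.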
Therefore, such designs are calling for so-called leveled implementations, where different security levels are used for different parts of the implementation and therefore lead to different levels of performance. By contrast, this does not work anymore if you add misuse to the adversary's capabilities. The reason is that the adversary can now fix the ephemeral key, use multiple message blocks in order to recover this ephemeral key thanks to DPA, and invert the permutation. So we are back in a situation where we need DPA security everywhere, and this applies more or less to every inner-keyed sponge design. In order to improve this, grade-2 designs like Ascon essentially strengthen the key generation function and the tag generation function in order to make them non-invertible. This does not change anything in case you have no misuse, but it means that even when you have misuse, the fact that you can recover an ephemeral secret does not lead to the long-term secret, and as a result, for confidentiality it is enough to protect the bulk of the computation against SPA only. Yet this is not sufficient if you want to also have confidentiality in front of decryption leakage. The reason is that even though the ephemeral key does not lead to the long-term secret, recovering it is sufficient to recover the message in full. As a result, here again we need to have DPA protections everywhere in the design. Interestingly, integrity guarantees have even weaker requirements in the case of Ascon. Namely, ciphertext integrity with leakage in encryption only and no misuse can be guaranteed even if you leak all the ephemeral values of the bulk computation in full. This shows the interest of composite definitions, and this guarantee is even maintained if you have misuse resistance.
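A minimal sketch of the strengthening idea, with sha256 as a toy stand-in for the permutation: the long-term key is injected both before and after the permutation (in the spirit of Ascon's finalization), so an adversary who recovers the internal state still cannot work back from the tag to the key. This is an illustrative assumption-laden toy, not Ascon itself:

```python
import hashlib

def p(x: bytes) -> bytes:
    # Toy stand-in for the permutation.
    return hashlib.sha256(x).digest()

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def strengthened_tgf(key: bytes, state: bytes) -> bytes:
    # Key whitening before AND after the permutation makes the tag
    # generation non-invertible from the adversary's viewpoint: the
    # ephemeral `state` alone does not determine the key.
    out = p(xor(state, key))         # permutation over key-whitened state
    return xor(out[:16], key[:16])   # truncated, key-whitened tag
```

Here both `key` and `state` are assumed to be 32 bytes; the 16-byte truncation mirrors the usual 128-bit tag length.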
Eventually, in order to reach the highest integrity guarantee, which is ciphertext integrity with misuse resistance and leakage in encryption and decryption, one additionally has to protect the tag verification against DPA. Otherwise, the adversary can exploit the leakage of the verification in order to mount forgeries. This shows that key recovery security is not enough in order to analyze authenticated encryption schemes against leakage, and it applies to other ciphers like ACE, Gibbon, SPIX or WAGE. As an alternative grade-2 design, we mention Spook. Spook pushes the leveled-implementation concept one step further by using two primitives, namely a tweakable block cipher (TBC) and a permutation. The rationale is that the smaller size of the TBC should make it easier to mask. For most of the analysis, Spook provides similar security guarantees as Ascon. The only difference is CIML2, where the fact that we have a TBC allows us to perform a tag verification based on the inverse of the TBC, which is secure even with unbounded leakage. We cross the last mile by describing grade-3 designs like TEDT. They provide strong confidentiality guarantees in front of decryption leakage by leveraging a second pass. This guarantees that only well-formed messages are going to leak, leaving the adversary with the possibility to perform an SPA against the ephemeral secrets of the bulk computation. Here again we have an alternative candidate called ISAP. The differences between ISAP and TEDT are that ISAP is based on permutations while TEDT relies on tweakable block ciphers, and ISAP instantiates its key generation function and tag generation function with a re-keying scheme. The goal of the re-keying is to provide DPA security thanks to multiple executions of a permutation that is only secure against SPA.
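The inverse-based tag verification can be illustrated with a toy tweakable block cipher, here a simple xor-pad model chosen only so that encryption and decryption are trivially inverses of each other; the real Spook TBC is of course a proper cipher:

```python
import hashlib

def pad(k: bytes, tweak: bytes) -> bytes:
    # Toy keyed pad; stands in for the TBC's internal computation.
    return hashlib.sha256(k + b"|" + tweak).digest()[:16]

def tbc_enc(k: bytes, tweak: bytes, x: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, pad(k, tweak)))

tbc_dec = tbc_enc  # the xor construction is its own inverse

def verify_forward(k, tweak, u, tag):
    # Recomputes the secret tag and compares: the comparison handles a
    # secret value, so this path needs masking / DPA protection.
    return tbc_enc(k, tweak, u) == tag

def verify_inverse(k, tweak, u, tag):
    # Inverts the adversary-supplied candidate tag and compares with u:
    # even full leakage of this comparison does not help forging.
    return tbc_dec(k, tweak, tag) == u
```

Both functions accept exactly the same (u, tag) pairs; the difference is purely in which values the leaking comparison manipulates.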
OK, so the previous discussion showed that different modes of operation can leverage different design tricks in order to improve security against leakage, and we now briefly discuss the concrete impact of these different design tricks. The main question is probably whether leveled implementations, where different parts of a design have different levels of security against physical attacks, can bring performance gains over uniformly protected implementations. For this purpose, we analyzed the energy performance in hardware of a leveled implementation of Spook and a uniform implementation of OCB. The main conclusions are that the overheads of leveled implementations are limited for short messages, and the gains can be very significant for long messages, especially when the physical security of the key generation function and the tag generation function increases because of higher-order masking schemes. We also analyzed other design tricks. For example, we looked at the key generation function and the tag generation function implemented with a sponge or a tweakable block cipher. We conclude that tweakable block ciphers gain interest when high side-channel security is needed. We also looked at the tag verification implemented in the forward way and masked, or implemented in the unprotected way with the inverse TBC. There we concluded that the overheads are quite similar and anyway limited compared to the cost of the key generation function and the tag generation function. And finally, we discussed the instantiation of the key generation function and the tag generation function with fresh re-keying or a masked tweakable block cipher. There, the short summary is that they aim at different goals. For fresh re-keying, the goal is to reduce DPA security to SPA security. It is therefore easy to implement, but all the overheads are primitive-based and they will always be there. We also show in the paper that SPA security can sometimes be broken on small unprotected devices.
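As an illustration of how re-keying reduces DPA to SPA, here is a hypothetical sketch in the spirit of ISAP's re-keying function, again with sha256 as a toy permutation: the nonce is absorbed one bit at a time, so each secret intermediate state is only ever processed with two possible inputs, which keeps the per-call attack surface at the SPA level:

```python
import hashlib

def rekey_bitwise(key: bytes, nonce: bytes) -> bytes:
    # Illustrative sketch only (not the real ISAP function). Absorbing
    # the nonce bit by bit means an adversary can collect at most two
    # traces per secret state, so DPA degenerates into SPA.
    state = key
    for byte in nonce:
        for i in range(8):
            bit = (byte >> i) & 1
            state = hashlib.sha256(state + bytes([bit])).digest()
    return state  # fresh session key for this nonce
```

The cost of this trick is visible in the sketch itself: one permutation call per nonce bit, which is the primitive-based overhead mentioned above.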
Masking rather aims at high security, which will require expertise. But the advantage is that the implementation overheads are more flexible, because you can choose the security level that you want in encryption and decryption. So this finally leads us to the conclusions and open problems of the paper. In this respect, one important conclusion for cryptographic research is that there is no single right definition of leakage-resistant authenticated encryption. We rather have a kind of continuous trade-off between mode-level and implementation-level leakage resistance. And as the security required by an application increases, whether in terms of security target or quantitative security level, we know that leveraging mode-level guarantees is going to gain more and more interest. This leads to various open questions, for example which mode is best in which context, and this of course requires having concrete security evaluations and implementation results in order to be able to compare things very rigorously. Another question is whether we could find improved candidates for the various grades that we introduced. A third question is about finer-grained analyses and whether other security targets could be interesting to capture certain applications. And finally, of course, bridging the gaps between theory and practice, and in particular finding better physical assumptions. And with this, I thank you for your attention and will be happy to answer questions.