Hello everyone and welcome to my video presentation about the family of authenticated encryption schemes called ISAP v2.0. My name is Christoph Dobraunig and this is joint work with Maria Eichlseder, Stefan Mangard, Florian Mendel, Bart Mennink, Robert Primas and Thomas Unterluggauer. This is the first video that I am recording, but nevertheless don't forget to like the video and subscribe to the channel if possible. Let us start with the motivation behind the design of ISAP v2.0. Today we live in a connected world consisting of devices performing cryptographic tasks, and the number of these devices grows every day. Many of them operate in environments where an attacker can have physical access, which means that cryptographic security alone is often not enough. In these cases, implementations of cryptographic algorithms also have to withstand side-channel and fault attacks. So what do we do? One take on this problem is to consider it purely a problem of the implementation and provide protection solely on the implementation level. The alternative is to design modes and primitives with the threat of implementation attacks in mind. Such modes can provide increased resistance against implementation attacks, while carefully designed primitives can ease protection on the implementation level. Our goal when designing ISAP v2.0 was to provide an excellent out-of-the-box experience for engineers who implement ISAP v2.0 in order to harden their devices against implementation attacks. We designed the mode of ISAP v2.0 to provide increased resistance against several classes of side-channel and fault attacks, such as differential power analysis (DPA), statistical ineffective fault attacks (SIFA) and differential fault attacks (DFA). The increased resistance against fault attacks is the main change compared to the original version of ISAP. We instantiate this mode with permutations that allow for efficient implementation-level countermeasures.
We do not design new permutations for this; instead we base our instances on excellent, already existing permutations: the 400-bit variant of the Keccak permutation as specified in FIPS 202, and Ascon's permutation. Now let us have a look at the mode. The mode is inspired by a concept with the name fresh re-keying. The idea here is that fresh nonces can be used to derive session keys, which works fine for encryption. To provide additional security against side-channel attackers during decryption, we use a two-pass scheme, namely Encrypt-then-MAC: we perform the verification before the decryption starts, so that during decryption we never decrypt different ciphertexts with the same nonce. This counteracts plaintext recovery via side-channel attacks. Since we rely on the principles of fresh re-keying, I will give a little background. Simplified, the goal of fresh re-keying is to provide cheaper protection against certain implementation attacks, like DPA, than masking cryptographic primitives directly. For instance, in this scenario, an RFID tag communicates with a reader; you see the tag on the left side and the reader on the right. Here we want to protect just one block cipher call. RFID tags should be cheap, mass-produced devices, so we want cheap protection on the tag side, while for the reader we can afford masking. To mount a DPA, an attacker needs to observe the processing of many different inputs under one key. So if we use a new session key for every encryption, a DPA on the block cipher is not possible. On the tag side, the re-keying function G derives the session key from an on-chip generated nonce and the static master key. The DPA problem is thus shifted to the re-keying function, where it can hopefully be solved cheaply. Since the reader has no influence on the nonce generation, a new session key cannot be guaranteed for every decryption on the reader side.
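The fresh re-keying idea on the tag side can be sketched in a few lines of Python. Note that `sha3_256` is only a hypothetical stand-in for the re-keying function G, which in practice is a dedicated, DPA-hardened primitive:

```python
import hashlib


def rekey(master_key: bytes, nonce: bytes) -> bytes:
    # Placeholder for the re-keying function G: derive a fresh 128-bit
    # session key from the static master key and a fresh nonce.
    # (sha3_256 is a stand-in, not ISAP's actual G.)
    return hashlib.sha3_256(master_key + nonce).digest()[:16]


# Tag side: a fresh on-chip nonce per encryption means the block cipher
# never processes two inputs under the same session key, which removes
# the many-traces-per-key precondition of a DPA.
master_key = bytes(16)
k1 = rekey(master_key, b"nonce-1")
k2 = rekey(master_key, b"nonce-2")
assert k1 != k2  # distinct nonces yield distinct session keys
```

The DPA-relevant point is that the static master key only ever meets the nonce inside G, so only G needs DPA hardening, not the block cipher.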
So, for the reader too, the block cipher has to be protected against DPA. In the communication setting there exists a solution to this problem: as you can see here, we have two-party communication, and in this picture both parties contribute to the nonce, which means that the selection of the session key k* cannot be solely influenced by an outside attacker. But what do we do if this is not an option? Let us assume I just want to store some data off-chip. For encryption I do not have a problem: as we've seen before, I can always use a fresh nonce. But what happens during decryption? How can I ensure that the nonce and the data I read are not manipulated, so that I never decrypt different inputs under the same key? One solution to this problem is to do the verification before the decryption. In this picture you see the following: at the top, a sponge-based hash function, which hashes the ciphertext to a hash value y. This hash value goes into our re-keying function G, which derives the session key kA*. At the bottom, you see a sponge-based MAC function; in this concrete case, one where the key is not added at the beginning, but at the end of the absorbing phase of the sponge, and the key that is added is the session key kA*. In such an Encrypt-then-MAC scheme, we can easily base the re-keying function on the data we want to verify, and we do not have to worry about the side-channel protection of this data, since it is assumed to be publicly known anyway; in our case, this is the ciphertext. So we can hash the data without worrying about side-channel attacks and base the re-keying function on the hash value y. If the data changes, this value y changes as well. If we now use the derived key with a suffix-keyed sponge MAC, we actually see that we do the hashing twice. So, to simplify, we can get rid of this double hashing.
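The verify-before-decrypt flow can be sketched as follows. All components here are placeholders, not ISAP's actual primitives: `sha3_256` stands in for the sponge hash and the re-keying function G, HMAC for the sponge-based MAC, and a toy counter-mode keystream for the sponge-based stream encryption:

```python
import hashlib
import hmac


def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy counter-mode keystream; ISAP uses a sponge-based stream cipher.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha3_256(key + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:n]


def mac(master_key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    # Hash the public ciphertext to y, derive a session key k* from y,
    # then compute the tag under k* (Encrypt-then-MAC).
    y = hashlib.sha3_256(nonce + ciphertext).digest()
    k_star = hashlib.sha3_256(master_key + y).digest()[:16]
    return hmac.new(k_star, nonce + ciphertext, hashlib.sha3_256).digest()[:16]


def encrypt(master_key: bytes, nonce: bytes, plaintext: bytes):
    ks = keystream(master_key, nonce, len(plaintext))
    ct = bytes(p ^ k for p, k in zip(plaintext, ks))
    return ct, mac(master_key, nonce, ct)


def verify_then_decrypt(master_key: bytes, nonce: bytes,
                        ciphertext: bytes, tag: bytes):
    # Verification happens BEFORE decryption: a manipulated nonce or
    # ciphertext is rejected here, so the decryption below never processes
    # two different ciphertexts under the same nonce.
    if not hmac.compare_digest(mac(master_key, nonce, ciphertext), tag):
        return None
    ks = keystream(master_key, nonce, len(ciphertext))
    return bytes(c ^ k for c, k in zip(ciphertext, ks))


key = bytes(16)
ct, tag = encrypt(key, b"nonce-1", b"store me off-chip")
assert verify_then_decrypt(key, b"nonce-1", ct, tag) == b"store me off-chip"
```

Because the session key k* is derived from the hash y of the (public) ciphertext, any manipulation of the stored data changes y, and verification fails before decryption ever runs.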
And we finally end up with the following construction, where the last XOR of the suffix-keyed sponge MAC is replaced with the re-keying function G. The security of the suffix-keyed sponge is studied in another paper presented at this conference, so if you want to learn more about it, please watch that video. Now the question is: how do we absorb our inputs into this function, how do we instantiate our re-keying function G? We have chosen a re-keying function based on sponges where the rate is reduced to a minimum. For the concrete instances of ISAP, we only absorb one bit of the value y we have seen before. So we end up with a construction related to the classical GGM construction. An attacker aiming to learn the secret state of this sponge can only observe the leakage of two different inputs per permutation call. So if we assume that DPA attacks relying on just two inputs are infeasible, then this re-keying function should withstand DPA. On the fault-attack side, the mixing of the key with y could also be attacked using statistical ineffective fault attacks. Similar to DPA, SIFA also needs the mixing of many known inputs with a static secret. Here too, the number of known inputs per static secret is reduced to two by this construction, which drastically increases the resistance against SIFA. For encryption, we use the re-keying function from the previous slide to absorb the nonce bitwise, which you can see here on the left side, to produce a session key. This session key, together with the nonce absorbed all at once this time, initializes a sponge-based stream encryption. If we compare the en- and decryption of ISAP v2.0 with those of ISAP v1.0, we see that the notation changes, but also that ISAP v1.0 does not make this strict separation between re-keying and sponge-based stream encryption.
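The 1-bit-rate absorption can be sketched like this. The `toy_permutation` below is only a placeholder for the 320-bit Ascon or 400-bit Keccak permutation (a hash is not a permutation); it merely illustrates the GGM-style data flow:

```python
import hashlib


def toy_permutation(state: bytes) -> bytes:
    # Placeholder for Ascon-p / Keccak-p[400]; this sha3 call is NOT a
    # permutation, it only models "state goes in, state comes out".
    return hashlib.sha3_256(state).digest()


def rekey_bitwise(master_key: bytes, y_bits) -> bytes:
    # Initialize the state from the master key, then absorb y one bit at
    # a time. Between two permutation calls only a single bit can differ,
    # so an attacker observes leakage for at most two distinct inputs per
    # call -- the GGM-style tree walk described above.
    state = toy_permutation(master_key)
    for bit in y_bits:
        state = bytearray(state)
        state[0] ^= bit & 1            # 1-bit rate: flip one outer bit
        state = toy_permutation(bytes(state))
    return state[:16]                  # squeeze the session key


k_a = rekey_bitwise(bytes(16), [0, 1, 1, 0])
k_b = rekey_bitwise(bytes(16), [0, 1, 1, 1])
assert k_a != k_b  # different y bits walk different branches of the tree
```

Each absorbed bit selects one of two successor states, exactly as a GGM tree selects one of two children per level; that binary fan-out is what caps the attacker at two observable inputs per permutation call.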
The strict separation in ISAP v2.0 that you can see here was introduced to make a differential fault attack harder. So let's look at ISAP v1.0 again and assume that we can get the plaintext after decryption, and that we can induce faults during repeated decryption queries. This would allow us to mount a DFA on the squeezing of the key stream to recover the secret inner part, which in the case of ISAP v1.0 would allow an attacker to compute backwards and reveal the master key. The cut we make in ISAP v2.0 prevents the recovery of the master key from a single recovered state. For ISAP v2.0 we also provide a security proof in the leakage-resilient setting. For this proof we assume non-adaptive bounded leakage: we assume that a call to the permutation leaks lambda bits of its input plus output, and that repeated calls with the same input leak the same information. The proof is a combination of results on the leakage resilience of the duplex construction and the suffix-keyed sponge, since each component of ISAP can be considered either a duplex or a suffix-keyed sponge. Here you can see the bound, which is quite complex, so for full details we refer to the paper. However, what this quite complex bound nicely shows, marked here in red, is the direct effect of the leakage lambda on the advantage an attacker has. Hence, to limit the attacker's advantage, it is advisable to keep the leakage small. Next, let us look at our parameter sets in the concrete case of ISAP v2.0. We specify in total four instances: two based on Ascon's permutation and two based on Keccak's 400-bit permutation. All these instances have in common that the rate during absorption in the re-keying functions is reduced to one bit, while in the other phases it is 144 bits for Keccak and 64 bits for Ascon.
What you can also see here is that the parameter sets ending with an "A" reduce the number of rounds compared to the conservative parameter sets, which basically leads to a reduction of the security margin. In the case of the re-keying function, the number of rounds is even reduced to one. What we do here is comparable to what the CAESAR candidate Ketje does, although Ketje even uses a larger rate for absorbing and additionally squeezes output. So we decided to follow this approach, assuming that implementations will only leak a small amount of the internal state per absorbed bit. Hence, a side-channel attacker would have to combine this information across several rounds to recover the internal state, which is quite large (either 320 or 400 bits), and this is quite hard, as Ketje has demonstrated. So finally, let us talk about implementing ISAP. Implementations are still required to protect against some implementation attacks, especially attacks like simple power analysis and template attacks, where some form of hiding might be required. Clearly, the concrete countermeasures and their cost depend on the platform; I would personally assume that an 8-bit microcontroller implementation requires more sophisticated implementation-level countermeasures than a round-based hardware implementation. In addition, the tag comparison also requires protection. For instance, it requires protection if there is the threat of fault attacks that can simply skip it, which is probably true for all authenticated encryption schemes that do some sort of comparison, because otherwise it is possible to get forgeries through.
So if side-channel attacks are a threat, then the comparison needs to be protected against recovering the correctly computed tag. This can either be done by masking the comparison, or the comparison of the tags can be performed by applying one permutation call to the values to compare and then comparing the truncated outputs. So let us conclude. Side-channel and fault attacks become more and more of a threat, and in such scenarios ISAP not only eases the protection of implementations, it is also actually very efficient. So visit our website for more information. As I would be quite surprised to get questions, I want to say thank you and end the video here.
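The second comparison countermeasure mentioned above might look roughly like this; again `sha3_256` is only a stand-in for one call of the public permutation, and this is a sketch of the general idea, not ISAP's specified procedure:

```python
import hashlib
import hmac


def compare_tags(tag_computed: bytes, tag_received: bytes) -> bool:
    # Instead of comparing the secret, correctly computed tag directly,
    # send both values through one permutation call and compare truncated
    # outputs: a side channel on the comparison then only exposes the
    # permuted, truncated values, not the correct tag itself.
    a = hashlib.sha3_256(tag_computed).digest()[:8]
    b = hashlib.sha3_256(tag_received).digest()[:8]
    # Constant-time comparison avoids timing leakage on top of that.
    return hmac.compare_digest(a, b)


assert compare_tags(b"\x01" * 16, b"\x01" * 16)
assert not compare_tags(b"\x01" * 16, b"\x02" * 16)
```

The truncation matters: even if an attacker fully recovers the compared values from the side channel, the truncated permutation outputs do not let them reconstruct the correct tag needed for a forgery.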