Welcome to my presentation "Beyond Birthday Bound Secure Fresh Re-keying: Application to Authenticated Encryption". Many cryptographic schemes that we use nowadays are based on a block cipher. Typical examples are counter mode encryption, CBC encryption, and the OCB authenticated encryption schemes. But these schemes often evaluate a secret key repeatedly: they use the secret key many times during the lifetime of the key. To illustrate this, let's look at a typical construction, the ΘCB authenticated encryption scheme. ΘCB is the generalization of OCB1, OCB2, and OCB3. It is based on a tweakable block cipher, and this tweakable block cipher is evaluated many times, always with the same key, but with a tweak that is different for every evaluation. The associated data and a message checksum get processed to obtain the tag, and the message blocks get encrypted independently to obtain the ciphertexts. For now it is not important how ΘCB works exactly and why it is secure. What is important is that the same key is used for every tweakable block cipher evaluation. While in a black-box setting this might not be a problem, if you consider this evaluation in a leaky environment, one might be able to obtain leakage on the key. And if the key is used many, many times, you get repeated evaluation of the key and hence repeated leakage on the key. One must make sure that this is not a problem; one must protect the key against this kind of leakage. One way of protection is implementation protection. The idea is that you take a very simple, very lightweight scheme on which you put protection on top; such protection would for instance be masking or hiding. While the scheme itself is very efficient, this makes the countermeasure typically design-specific and sometimes rather expensive. 
Alternatively, you can invest a bit more in the mode: you get a heavier mode, but then you do not need to put as much protection on the side, so you get protection by design. This is sometimes less efficient; in fact the older solutions, many block-cipher-based solutions, were less efficient, but nowadays we have seen some quite efficient solutions, mostly permutation-based ones. But there is also a method in the middle that combines the best of both approaches: it uses leakage resilience where that is needed and implementation protection where that is needed. This approach is known as re-keying, and it can be seen as a method in the middle; it is also known as a leveled implementation. In more detail, the idea of parallel fresh re-keying is that you minimize the use of the key material, and only the evaluations that involve the key are strongly protected. This means that strong protection is only needed for cryptographically light building blocks. In detail, a re-keying scheme is typically put on top of a block cipher: the message goes in and the ciphertext comes out, but instead of feeding the key to the block cipher, you feed it a sub-key, which is derived from the key and a nonce. The re-keying function typically needs strong protection, against for instance DPA, because it evaluates the key repeatedly; but it turns out that this function does not need to be cryptographically strong. The core, in turn, must be cryptographically strong because it deals with the message, but it only needs lighter protection, against for instance SPA, because it does not use the key but only a sub-key, which is used once or twice. The idea of parallel fresh re-keying was formalized by Abdalla and Bellare in 2000. 
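The re-keying structure just described, a strongly protected sub-key derivation feeding a lightly protected block cipher, can be sketched in code. This is a minimal illustrative sketch, not any concrete scheme from the talk: both the "block cipher" and the derivation function are SHAKE-based stand-ins so the example is self-contained; in practice one would use a real block cipher such as AES and a proper re-keying function.

```python
import hashlib

BLOCK = 16  # 128-bit blocks

def _toy_blockcipher(subkey: bytes, block: bytes) -> bytes:
    # Stand-in for a real block cipher such as AES; NOT a real PRP,
    # just an illustrative keyed function for this sketch.
    return hashlib.shake_128(subkey + block).digest(BLOCK)

def rekeying_function(key: bytes, nonce: bytes) -> bytes:
    # g(k, R): derives a fresh sub-key from the master key and a nonce.
    # This is the only component that touches the master key, so it is
    # the only one that needs strong (e.g., DPA) protection.
    return hashlib.shake_128(b"rekey" + key + nonce).digest(BLOCK)

def rekeyed_encrypt(key: bytes, nonce: bytes, message: bytes) -> bytes:
    # Each nonce yields a fresh sub-key, used only once or twice,
    # so the block cipher itself needs only light (e.g., SPA) protection.
    subkey = rekeying_function(key, nonce)
    return _toy_blockcipher(subkey, message)
```

The point of the split is visible in the code: only `rekeying_function` ever sees `key`, while the core cipher only ever sees short-lived sub-keys.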
Abdalla and Bellare looked at the construction here on the right: the message goes in and the ciphertext comes out, and the sub-key is derived from a PRF evaluated on the key and the nonce. It took a couple of years before follow-up work came, namely the work of Medwed et al. in 2010. They looked at minimized fresh re-keying: instead of a random function, you take a universal hash function for the sub-key derivation. Notably, it really resembles the idea of Abdalla and Bellare, although Medwed et al. did not notice this connection. Four years later, Dobraunig et al. mounted a key recovery attack on the scheme in birthday-bound complexity, and they presented two remedies, which I call DKM+1 and DKM+2, named after the authors. DKM+2 is particularly interesting: it is a construction that uses a tweakable block cipher, and the nonce goes into the re-keying function but also, as tweak, into the tweakable block cipher. But if you think about re-keying in a broader sense, and if you think about tweakable block ciphers in a broader sense, you might conclude that they somehow share ideas. Intuitively, a re-keying scheme is really comparable to a tweakable block cipher: the interface is the same and the goal is the same; both can be used to do block encryption. So the idea of re-keying reminds a bit of tweakable block ciphers. There is one subtle difference, and this difference is that in a re-keying scheme a tweak change, that is, a nonce change, results in a different key to the underlying block cipher. In tweakable block ciphers this is originally not the case, but there have been schemes where this is the case, and these are known as tweak-rekeyable tweakable block ciphers. 
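The multiplication-based sub-key derivation in the style of Medwed et al. can be made concrete: the universal hash is a multiplication in GF(2^128). The sketch below uses the standard reduction polynomial x^128 + x^7 + x^2 + x + 1 and plain Python integers; it is for illustration only and is not constant-time, so it is itself a side-channel target.

```python
# Sub-key derivation via multiplication in GF(2^128), as in
# multiplication-based fresh re-keying (illustrative sketch).
POLY = (1 << 128) | 0x87  # x^128 + x^7 + x^2 + x + 1

def gf128_mul(a: int, b: int) -> int:
    # Carry-less (XOR) schoolbook multiplication...
    acc = 0
    while b:
        if b & 1:
            acc ^= a
        a <<= 1
        b >>= 1
    # ...followed by reduction of the up-to-255-bit product mod POLY.
    for i in range(acc.bit_length() - 1, 127, -1):
        if (acc >> i) & 1:
            acc ^= POLY << (i - 128)
    return acc

def subkey(key: int, nonce: int) -> int:
    # k* = k (x) R in GF(2^128): a universal hash, not a PRF, which is
    # why birthday-bound key-recovery attacks on the scheme exist.
    return gf128_mul(key, nonce)
```

Because multiplication is linear over GF(2), sub-keys for related nonces are algebraically related, which is exactly the structure the attack of Dobraunig et al. exploits.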
So in a tweak-rekeyable tweakable block cipher, one takes a normal block cipher and builds a tweakable block cipher on top of it that changes the key to the block cipher whenever the tweak changes. The term already dates back a couple of years, to around 2015-2017. But it turns out that although the field of re-keying was not that developed, the field of tweakable block ciphers, and in particular the field of tweak-rekeyable tweakable block ciphers, has a vast literature. Already in 2009, Minematsu presented a tweak-rekeyable tweakable block cipher. The scheme might remind you of an earlier scheme, namely the one of Abdalla and Bellare. It is really a comparable scheme, but the design goal is different: Abdalla and Bellare tried to build a random function, whereas Minematsu targeted an invertible primitive. Notably, from this perspective, the scheme of Medwed et al. can be seen to be a closer match to the one of Minematsu than to the one of Abdalla and Bellare, even though they did not draw this parallel. In 2015, I introduced two beyond-birthday-bound secure tweak-rekeyable tweakable block ciphers, Men1 and Men2. The idea is that the key and the tweak get added to obtain the sub-key, which from a leakage perspective is of course not a very good idea in general, but in the black-box setting this is fine; the key and the tweak also get mixed to obtain the masking of the input and the output of the block cipher. Wang et al. patched a small gap in the Men2 construction and also generalized it to 32 schemes, which I will not depict. Naito presented a tweak-rekeyable scheme that targeted authenticated encryption. And eventually, Jha et al. introduced XHX, a generalized construction where you have a block cipher and a universal hash function that, on input of the key and the tweak, generates three sub-keys: one that goes into the block cipher and two that go into the masks. 
And what we see, basically, is that the knowledge on tweak-rekeyable tweakable block ciphers is quite solid. Tweak-rekeyable tweakable block ciphers have received quite some attention, but re-keying solutions not so much. This leads to an interesting question: can we use this knowledge on tweak-rekeyable tweakable block ciphers to improve our solutions in re-keying? This is the question that I analyzed in this work, and I present three solutions based on tweak-rekeyable tweakable block ciphers. The first one I call R1; it is a derivative of Men1, although it is not the same. It takes the block cipher, and the sub-key is derived using a universal hash function U, which also gets multiplied with the nonce R. It has the restriction that κ, the key size of the block cipher, must be equal to the nonce size ρ, which must be equal to the block size n, but in this case it achieves 2n/3-bit security. The second construction is R2. It is not based on Men2, but rather on Wang et al.'s construction number 12, which proved better suited for this. It also takes κ = ρ = n. It is not identical to that construction, but is adapted to suit the re-keying purpose; still, I managed to prove n-bit security of this construction. And finally we have R3, which is an adaptation, or basically a simplification, of XHX. Recall that XHX uses a universal hash function that generates three sub-keys; R3 only generates two sub-keys, a U that goes into the block cipher and a V that goes into the masking. This one achieves n-bit security if the key size is equal to the block size. Finally, it also covers permutation-based constructions: if κ = 0, you get birthday-bound security. In the paper I give a full justification of why I selected these three schemes as inspiration, and I also give proofs for these constructions. I will not go into the details of the proofs, but rather focus on the applications in this presentation. 
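To make the shape of R3 concrete, here is a minimal sketch: a universal hash derives two sub-keys (U, V) from the key and the nonce; U keys the block cipher and V masks its input and output, in XEX style. Both the hash and the block cipher below are SHAKE-based placeholders assumed only for illustration; they are not the actual primitives or the exact specification from the paper.

```python
import hashlib

BLOCK = 16  # n = 128 bits

def _toy_blockcipher(key: bytes, block: bytes) -> bytes:
    # Stand-in for a conventional block cipher; illustrative only.
    return hashlib.shake_128(key + block).digest(BLOCK)

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def r3_encrypt(key: bytes, nonce: bytes, message: bytes) -> bytes:
    # (U, V) <- h(k, R): only this derivation touches the master key,
    # so only it needs strong side-channel protection.
    h = hashlib.shake_128(key + nonce).digest(2 * BLOCK)
    u, v = h[:BLOCK], h[BLOCK:]
    # C = E_U(M xor V) xor V -- a masked block cipher call, as in
    # XHX reduced from three sub-keys to two.
    return _xor(_toy_blockcipher(u, _xor(message, v)), v)
```

A fresh nonce changes both the block cipher key U and the mask V, which is precisely the tweak-rekeyable behaviour discussed above.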
But before doing so, let me first make a comparison. For simplicity, take κ = ρ = n, with F a random function, Ẽ a tweakable block cipher, and E a block cipher. And for the sake of counting, assume that the universal hash function is as expensive as one finite field multiplication. Then these are the numbers: I split the cost into the sub-key cost, which needs strong (for instance DPA) protection, and the core cost, which needs weak (for instance SPA) protection. Note that here the universal hash of R2 costs 2 because it uses a universal hash function that generates two sub-keys. Now compare, for instance, R2 with DKM+1. R2 and DKM+1 are equally expensive: the sub-key function is equally expensive, the core is equally expensive, the key size is equal, and the state size is equally high, but R2 achieves optimal security. If you compare R3 to DKM+2, you see that it is a bit different. DKM+2 uses a dedicated tweakable block cipher in the core, whereas R3 uses a normal, conventional block cipher; but R3 also needs additional sub-key generation, and then it gets a larger key. Not necessarily, but if you take finite field multiplication it gets a 2n-bit key. The security is the same. So basically what we see is that it is a trade-off between which primitive you can or want to use and which scheme is more suited. And the cool thing about these solutions is that they are not just re-keying schemes on their own, to be put on top of a block cipher: you can actually use them as tweakable block ciphers, noting that a re-keying scheme and a tweakable block cipher are basically the same. And this is particularly interesting in light of the fact that the tweakable block cipher is a very popular primitive for mode design. 
To give you an example: in the CAESAR competition, which ended a couple of years ago, 18 out of 57 schemes were based on a tweakable block cipher. Also, many MAC functions, like ZMAC, are based on a tweakable block cipher. And here I depicted the typical example ΘCB again; let me quickly recap it from the first slide. We have a key that goes into all tweakable block cipher calls; the message blocks get processed independently; and the associated data and a checksum get processed to obtain the tag. It is a very popular and very widely used design, although not necessarily in this exact shape. For example, OCB3, or also OCB2, but most notably OCB3, is ΘCB instantiated with the XEX tweakable block cipher, where XEX is a tweakable block cipher built on top of a normal block cipher. Deoxys-I, for example, uses ΘCB but instantiates it with the dedicated Deoxys-BC tweakable block cipher. But one can just as well instantiate the scheme with a re-keying scheme, and this brings me to a typical example: ΘCB instantiated with R3. This is just ΘCB where the tweakable block cipher is instantiated with the R3 re-keying scheme, which means that the tweaks that were input to the tweakable block ciphers now serve as the nonce of the re-keying scheme. And this gives some nice features. It achieves n-bit security, it makes a normal amount of lightly protected block cipher calls, and it makes strongly protected universal hash function calls; by design it is easier to protect against side-channel attacks, because in R3 only the universal hash function touches the master key. And it is particularly interesting to see what happens if you instantiate the scheme with a permutation, so the key size of the underlying primitive is 0: then you get birthday-bound security, but based on a random permutation. If you compare now, for instance, to OCB3: OCB3 only gets birthday-bound security. However, this is security in the standard model, and it turns out that it is impossible to prove optimal security in the standard model if you base the scheme on a tweakable block cipher built from a block cipher. 
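The way the ΘCB shape composes with re-keying can be sketched as follows: each message block is encrypted independently under a distinct (nonce, counter) tweak, the tweak feeds the sub-key derivation, and the tag is computed from a plaintext checksum. This is a toy, encrypt-only sketch with SHAKE stand-ins for the primitives; it is not the actual ΘCB or R3 specification (real ΘCB in particular processes the associated data and derives the tag differently).

```python
import hashlib

BLOCK = 16

def _rekeyed_block(key: bytes, tweak: bytes, block: bytes) -> bytes:
    # Tweakable-block-cipher interface realised by re-keying: the
    # (nonce, counter) tweak freshens the sub-key on every call.
    sub = hashlib.shake_128(key + tweak).digest(BLOCK)
    return hashlib.shake_128(sub + block).digest(BLOCK)  # toy cipher

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def thetacb_style_encrypt(key, nonce, blocks):
    # Blocks are encrypted independently (hence parallelisable), each
    # under a distinct tweak; the tag comes from a plaintext checksum
    # encrypted under a separate, domain-separated tweak.
    cts, checksum = [], bytes(BLOCK)
    for i, m in enumerate(blocks):
        cts.append(_rekeyed_block(key, nonce + i.to_bytes(8, "big"), m))
        checksum = _xor(checksum, m)
    tag = _rekeyed_block(key, nonce + b"\xff" * 8, checksum)
    return cts, tag
```

Note that equal plaintext blocks at different positions encrypt differently, because every position gets its own tweak and hence its own fresh sub-key.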
So it only gets n/2-bit security, but in the standard model. On the downside, it makes strongly protected block cipher calls, so you need to strongly protect the block cipher evaluations. If you compare it to DTE, which is Digest, Tag and Encrypt by Berti et al., I would like to mention that a comparison is hard because they have a different goal: they target nonce misuse resistance. It is a different approach, and this also leads to a different type of scheme. But it is still interesting to count the cost. Here we look at the variant where the hash function of DTE is based on a block cipher, so for instance Merkle-Damgård based on Davies-Meyer. Then you get a normal amount of unprotected calls, but you also need lightly protected calls and strongly protected calls. So you have a different goal and different primitives that need to be protected, and this also gives a different setting: most notably, DTE is sequential whereas ΘCB is parallel. To conclude, what we have observed is that two different fields basically consider the same problem, and you can use solutions from one of them to obtain solutions for the other. A bit more on the re-keying approach: I already mentioned that, for simplicity, we look at finite field multiplication for the universal hash. But it has been shown that multiplication is not strong enough: one can mount side-channel attacks on this multiplication. So it is interesting to see what other solutions we can take for the sub-key generation. It is interesting to note that ISAP, a submission to the NIST lightweight competition, uses a sponge for re-keying, and this appears to be a very solid approach. This concludes my presentation; thank you for your attention.