Hello everyone, I am Chun Guo. This work is joint with Francesco, Olivier, Thomas, and FX. In this paper, we design a new authenticated encryption mode, TEDT, for tweakable block ciphers. Using a tweakable block cipher with 128-bit keys, 128-bit tweaks, and 128-bit blocks, like SKINNY or Deoxys, the mode TEDT achieves, first, full leakage resilience, or mode-level side-channel security: an implementation of this mode ensures side-channel security without full side-channel protection. Second, it offers so-called nonce misuse resilience: the confidentiality of messages encrypted with fresh nonces is always ensured, even if other nonces get repeated. Third, it offers high multi-user security: with such a configuration, it ensures 114-bit security with up to 2^26 users, which is nearly optimal for 128-bit keys. For this, the mode uses a public key PK, here, of 127 bits; we will discuss this later. Fourth, it supports leveled implementation: we only need to add side-channel protections to two tweakable block cipher calls, here and here, to achieve strong side-channel security, and we can leave all the other calls unprotected. We can prove that this implementation ensures strong side-channel security, while the energy cost is significantly lower than full protection. Finally, for better efficiency, it supports online encryption and efficient handling of static and incremental associated data.

For the remainder of this talk, we will first review some background. We first recall the notion of AEAD in brief. It is a single-primitive scheme for both confidentiality and authenticity. It has an encryption procedure that maps a key K, a nonce N, associated data A, and a message M to a ciphertext C. Correspondingly, it has a decryption procedure that takes a key K, a nonce N, associated data A, and a ciphertext C as inputs. If the inputs pass the integrity check, the decryption outputs the corresponding plaintext M; otherwise it outputs the failure symbol ⊥. As for cryptographic properties, an AEAD shall ensure both confidentiality and integrity of the message M, and integrity of the associated data A.

As another piece of background, we recall side-channel attacks. When an AEAD is implemented and deployed, the implementation may leak information about its internal state via side channels, most notably power consumption. There are two approaches to exploiting power measurements. The first is simple power analysis, SPA. It takes advantage of the leakage resulting from a single input or message provided for encryption, with measurements possibly repeated multiple times in order to remove the measurement noise. The second and more powerful one is differential power analysis, DPA. It exploits the leakage resulting from the same secret processing multiple distinct inputs, together with the data dependency of the power consumption, and it reduces the computational secrecy of that secret at a rate exponential in the number of distinct inputs. The data requirement of SPA is much lower, but successful SPA key recovery is also much harder to achieve. To resist DPA, one should apply side-channel countermeasures like masking, shuffling, and hiding, but these are of course not for free. For example, software masking blows up the cycle cost by a factor of well more than two. This price is inevitable for a single protected cipher call, but what about the many calls made in one encryption, as we have seen before?
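To make this AEAD syntax concrete before moving on, here is a minimal Python sketch of the interface: enc maps (K, N, A, M) to C, and dec maps (K, N, A, C) to either M or None, standing for the failure symbol ⊥. The SHA-256 keystream and the HMAC tag are toy stand-ins of my choosing, for illustration only; this is not TEDT and not a secure construction.

    import hashlib, hmac

    def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
        # Toy keystream: hash(key || nonce || counter), block by block.
        out, ctr = b"", 0
        while len(out) < length:
            out += hashlib.sha256(key + nonce + ctr.to_bytes(4, "big")).digest()
            ctr += 1
        return out[:length]

    def enc(key: bytes, nonce: bytes, ad: bytes, msg: bytes) -> bytes:
        body = bytes(m ^ k for m, k in zip(msg, _keystream(key, nonce, len(msg))))
        # The tag also authenticates the associated data.
        tag = hmac.new(key, nonce + ad + body, hashlib.sha256).digest()
        return body + tag

    def dec(key: bytes, nonce: bytes, ad: bytes, ct: bytes):
        body, tag = ct[:-32], ct[-32:]
        exp = hmac.new(key, nonce + ad + body, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, exp):
            return None  # the failure symbol: integrity check failed
        return bytes(c ^ k for c, k in zip(body, _keystream(key, nonce, len(body))))

    c = enc(b"k" * 16, b"n" * 12, b"header", b"hello")
    assert dec(b"k" * 16, b"n" * 12, b"header", c) == b"hello"
    assert dec(b"k" * 16, b"n" * 12, b"tampered", c) is None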
With the cost of such countermeasures as the motivation, the community has sought another approach: excluding DPA by the design of the protocols or modes. We present two examples here. On the left is the leakage-resilient PRG from FOCS'08; on the right is a PRG from CCS'10. The basic idea is to keep a frequently updated state, so as to avoid a single secret processing multiple inputs and leaking many traces that could be combined by a DPA. In this way, the possibility of DPA is ruled out at the level of the mode design. We will revisit these ideas later.

Let's then see the details of the TEDT mode. The starting point is a block cipher mode, EDT, from ToSC 2017. Its ideas can be summarized in five points. First, it uses re-keying to defend against side-channel key recovery; this is clear from the figure, as the key of the block cipher is changed after encrypting every single block. Second, it minimizes the operations manipulating the message to a single XOR, which maximizes confidentiality in the presence of leakage. Third, it uses an Encrypt-then-MAC composition: first encrypt, then use a MAC to generate a tag. The decryption is then of a verify-first, decrypt-second style, which prevents invalid ciphertexts from being processed and leaking. Fourth, the MAC has a hash-then-SPRP structure, which avoids harmful leakage while compressing the ciphertext. Fifth, the verification is designed to use the inverse of the SPRP: during decryption, the integrity check of EDT tests equality here. It first computes forward from the IV and the ciphertext to obtain the intermediate hash value h, and then computes backward from the user-specified tag using the inverse, checking whether the two values match here. This avoids leaking the correct tag for the specified ciphertext and thus avoids trivial forgeries.

EDT has one main shortcoming, however: its concrete security bound is weak. Even without leakage, the mode can be broken with birthday complexity. This means 64-bit security for 128-bit block ciphers, and even 32-bit security for 64-bit lightweight ciphers, which limits its practicality.

Now we discuss what we do. We first plug the associated data AD into the scheme; notice that EDT does not handle AD. This step is quite easy, because it just takes the AD as a part of the hash input, so the AD gets authenticated by the tag. We then turn the hash-then-SPRP into a hash-then-TBC to improve the authenticity security bound. A tweakable block cipher can absorb, for example, the 2n-bit digest of a double-block-length hash function. So here we use a double-block-length hash function to produce a 2n-bit hash digest and use the tweakable block cipher to seal this 2n-bit digest. The hash security is thereby increased from n/2 bits to 2n/2 = n bits, and this solves the security problem around the integrity check. We can still check integrity via the inverse of the TBC: we compute forward from here to obtain V and W, and then use W to invert the user-specified tag and check whether the result matches V. So the trick against decryption leakage is preserved.

The above increases authenticity security. To increase confidentiality security, we use GCM-style counters to replace the constants 0 and 1 in EDT. With this, collisions between the internal keys no longer immediately result in a confidentiality loss, and as we prove later in the paper, this does improve the confidentiality security to optimal.
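To illustrate the inverse-based verification trick just described, here is a toy Python sketch. The four-round Feistel tbc_enc/tbc_dec is a stand-in of my own for a tweakable block cipher (not Deoxys-BC), and v, w denote the two halves of the 2n-bit hash digest; the hashing itself is omitted. The point is only the control flow: the verifier inverts the received tag instead of recomputing the correct one.

    import hashlib

    def _round(key: bytes, tweak: bytes, i: int, half: bytes) -> bytes:
        return hashlib.sha256(key + tweak + bytes([i]) + half).digest()[:8]

    def tbc_enc(key: bytes, tweak: bytes, block: bytes) -> bytes:
        # Toy 4-round Feistel standing in for a tweakable block cipher.
        l, r = block[:8], block[8:]
        for i in range(4):
            l, r = r, bytes(a ^ b for a, b in zip(l, _round(key, tweak, i, r)))
        return l + r

    def tbc_dec(key: bytes, tweak: bytes, block: bytes) -> bytes:
        # Inverse of tbc_enc: undo the Feistel rounds in reverse order.
        l, r = block[:8], block[8:]
        for i in reversed(range(4)):
            l, r = bytes(a ^ b for a, b in zip(r, _round(key, tweak, i, l))), l
        return l + r

    def verify_then_decrypt_check(key: bytes, v: bytes, w: bytes, tag: bytes) -> bool:
        # Instead of recomputing the correct tag (which an invalid query could
        # then learn from leakage), invert the received tag under tweak w and
        # compare with v; decryption proceeds only if this returns True.
        return tbc_dec(key, w, tag) == v

    # Toy usage: v, w would be the 2n-bit hash of (public key, nonce, AD, ciphertext).
    key, v, w = b"K" * 16, b"V" * 16, b"W" * 16
    tag = tbc_enc(key, w, v)  # tag generation at the end of encryption
    assert verify_then_decrypt_check(key, v, w, tag)
    assert not verify_then_decrypt_check(key, v, w, b"\x00" * 16)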
To further bring the mode to practical use, we need to concretize the hashing. For this, we surveyed existing double-block-length hash functions and identified Hirose's hash as the one with the best efficiency and security. This hash has a nice feature: each pair of (tweakable) block cipher calls uses the same tweakey, so the tweakey schedule only needs to be run once for each pair of calls, and this saves computation. With this hashing, and with a somewhat sophisticated analysis, we show that the hash is sufficient for our proposal.

Finally, as we have an additional n-bit input in the tweak, we propose to use it for another key to improve multi-user security. The idea is that if we can use different tweaks to separate the instances of different users, then having multiple users will not increase the advantage of the adversary. For the encryption part, we can simply insert such a tweak here, in the key derivation call, and in these calls; note that previously these inputs were just zero. For the hash-then-TBC MAC, we can append the tweak to the hash input to achieve the separation. As we prove, this turns out to be basically enough. To separate the key derivation and the tag generation calls we use one bit, so only n-1 bits are used for separating users.

But then the question is how to ensure that the numerous users actually use different tweaks. This might be possible for multiple user sessions on a single computer, but it will not be possible for the many sessions in a country or in the world. Facing this, we propose to use random tweaks for the separation. Random tweaks may collide, but it is far less likely that both the key and the tweak collide. So this seems safe, and as we prove, it is actually safe. The tweak is then public randomness, and we therefore call it a public key.

So far we have introduced the design; let us now say more about the technical details, namely the leakage assumptions and their interpretations. For confidentiality and for integrity, we use different assumptions. This is as expected: as first explored in the TCC'04 paper, leakage integrity is much easier to achieve than leakage confidentiality. For confidentiality, we need to assume leak-free initialization and finalization. This means we assume that the key derivation function and the tag generation function, which you saw previously, are strongly protected and secure enough that they can be seen as leak-free tweakable block cipher executions. We also need to assume that it is sufficiently hard to recover the secret data using SPA. As you see, this two-call structure can be seen as a basic component of the TEDT mode, and we have to assume that after the secret s1 has been used by the first TBC call and the second TBC call, and the adversary has obtained the corresponding two leakage traces, it still cannot recover the secret s1 using SPA. So this is hardness of SPA key recovery. For integrity, the assumption is much weaker: we just need to assume leak-free initialization and finalization, and nothing else. This means all the other components may leak in full. With these assumptions, we can prove leakage confidentiality and leakage integrity. It would not be easy to digest the concrete bounds and theorems in such a short time, so I will only talk about their interpretations.
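Returning to the hash: as a rough illustration of what such a double-block-length construction looks like, here is a sketch in the spirit of Hirose's compression function. The hash-based stand-in E for the tweakable block cipher, the padding, and the IV are my own illustrative choices, not TEDT's exact specification. Note how both calls of each pair share the same tweakey kt, and how the public, user-separating tweak can simply be absorbed as part of the hashed input.

    import hashlib

    N = 16  # toy block size in bytes (n = 128 bits)

    def E(keytweak: bytes, x: bytes) -> bytes:
        # Stand-in for a TBC call keyed with a 2n-bit tweakey; a real
        # instantiation would use a TBC such as Deoxys-BC, not a hash.
        return hashlib.sha256(keytweak + x).digest()[:N]

    def xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    CONST = b"\x01" + b"\x00" * (N - 1)  # nonzero constant separating the two calls

    def dbl_hash(data: bytes, public_tweak: bytes = b"") -> bytes:
        # Absorb the public (user-separating) tweak together with the data,
        # then iterate a Hirose-style 2n-bit compression function.
        msg = public_tweak + data
        msg += b"\x80" + b"\x00" * (-(len(msg) + 1) % N)  # simple padding
        g, h = b"\x00" * N, b"\xff" * N                   # arbitrary IV
        for i in range(0, len(msg), N):
            kt = h + msg[i:i + N]  # both calls of the pair share this tweakey
            g, h = xor(E(kt, g), g), xor(E(kt, xor(g, CONST)), xor(g, CONST))
        return g + h  # 2n-bit digest, split into (V, W) for the TBC-based MAC

    print(dbl_hash(b"nonce | AD | ciphertext", b"example-PK").hex())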
The interpretations are, first: if simple power analysis is hard, then the internal states are safe, and then the scheme is safe, roughly speaking. We should note that the assumption that SPA is hard can be experimentally verified; in fact, its concrete hardness can be assessed by concrete attacks. Second, the message encryption is extended in a security-preserving manner. Let's look at this structure again. As we know, if during encryption the leakage reveals a single bit of the message, then message confidentiality is, in theory, completely broken. What we achieve here is that if simple power analysis is hard, then the scheme extends an encryption of a single message block to an encryption of multiple message blocks in a security-preserving manner. That is, if the encryption of a single message block is secure enough, then the encryption of multiple blocks is also secure enough. I would say this is the best possible security: it is a domain extension of the encryption in a security-preserving manner.

Now let's turn to the performance discussion. For illustration, we compare a uniformly protected implementation of OCB with a leveled implementation of TEDT. For TEDT we need a concrete instance, and we use Deoxys-BC as the tweakable block cipher; for OCB we just use AES. As the round function of Deoxys is the same as that of AES, this comparison is reasonably fair. The situation of Deoxys-TEDT is as follows. It achieves the strongest leakage security in the sense of this paper, but it is rate 1/4, so its design is less efficient than OCB. Regarding the cost of Deoxys-BC, one Deoxys-BC-256 call costs about 1.4 to 1.6 AES-128 calls. So a leveled implementation of Deoxys-TEDT can be viewed as costing 2 masked Deoxys-BC calls plus 4l unprotected AES-equivalent calls, for an l-block message. As for AES-OCB, it has no mode-level leakage security at all, but on the other hand it is efficient: it is rate 1 and only costs l+2 AES calls. However, if you want side-channel security, you have to pay for l+2 masked AES calls, as shown in this figure. For Deoxys-TEDT, as mentioned before, we use the leveled implementation: we only apply side-channel protection to the key derivation and the tag generation, only two calls, and all the other calls remain unprotected. For AES-OCB, of course, we cannot use such an approach and have to protect everything, so the entire figure would be in black.

With the above preparation, we obtain these figures for the performance comparison. Our main conclusions are twofold. First, already starting from the minimum of two shares, Deoxys-TEDT compares favorably to AES-OCB. This holds independently of the message size, and it shows that the leveled implementation is much better for side-channel-protected implementations. Second, the factor of gain approximately converges towards (l+2)/2 as the number of shares increases, which matches the theory. We expect similar energy gains in hardware.

We then compare with some other leakage-resilient modes. The first target is our starting point, EDT. As mentioned before, the shortcomings of EDT are that it suffers from multi-user security degradation, that it is not fully specified, especially its hash function, and that it does not have as good black-box provable security, which limits its use. Another earlier proposal of ours is FEMALE. The FEMALE mode achieves nonce misuse resistance when leakage is absent, so it has stronger security, but it is less efficient: it requires three passes, so it is rather impractical.
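As a reading aid for this cost comparison, here is a small Python cost model. The quadratic masking overhead, the overhead constant, and the Deoxys/AES ratio are illustrative assumptions, not measurements from the paper.

    def masked_cost(d: int, overhead: float = 10.0) -> float:
        # Crude assumption: a masked cipher call with d shares costs about
        # overhead * d^2 unprotected calls (real software overheads vary widely).
        return overhead * d * d

    def leveled_tedt_cost(l: int, d: int, deoxys_ratio: float = 1.5) -> float:
        # 2 protected Deoxys-BC-256 calls (~1.4-1.6x AES each) + 4*l unprotected calls.
        return 2 * deoxys_ratio * masked_cost(d) + 4 * l

    def uniform_ocb_cost(l: int, d: int) -> float:
        # (l + 2) protected AES calls.
        return (l + 2) * masked_cost(d)

    l = 64  # message blocks
    for d in (2, 4, 8, 16):
        gain = uniform_ocb_cost(l, d) / leveled_tedt_cost(l, d)
        print(f"shares={d:2d}  cost gain of leveled TEDT ~ {gain:.1f}")
    # As the masking cost dominates, the gain in this simplified model tends to
    # (l + 2) / (2 * deoxys_ratio), i.e. roughly the (l + 2)/2 order of magnitude
    # quoted in the talk.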
Finally, as sponge-based designs, there are ISAP and TEDT-Sponge, the latter being another design of ours. Among them, ISAP and TEDT-Sponge use an Encrypt-then-MAC structure very close to ours; as mentioned before, this is because Encrypt-then-MAC has more resilience to decryption leakage. Compared to ours, ISAP and TEDT-Sponge require fewer primitive calls for message encryption: to process each message block, they make only a single permutation call to encrypt the block and another permutation call to hash it, so two permutation calls, whereas TEDT requires four tweakable block cipher calls. On the other hand, TEDT is forward secure, so it also has its advantages. The one-pass modes, such as Ascon and Spook, are more efficient because they are one-pass, but they also only ensure a weaker form of leakage confidentiality. For the details, we refer to the relevant papers and their citations. So the choice depends on the context: whether you prefer efficiency or stronger leakage security.

Finally, there is a comparison with Barwell et al.'s paper and design. Basically, they also propose an Encrypt-then-MAC design. Their design is based on the classical CFB mode instantiated with a leakage-resilient pairing-based PRF. The performance of the pairing-based PRF is comparable to a mode uniformly protected by high-order masking. Their design also verifies the MAC first, so the integrity check always comes before decryption, and this avoids some leakage. Finally, their work uses different leakage security models. We refer to their work and our previous work for a more detailed comparison.

Let me now conclude. We propose TEDT, a new AEAD mode for tweakable block ciphers. It achieves full leakage resistance, nonce misuse resilience, and high multi-user security. It supports leveled implementations for high side-channel security at low cost, and it is online with efficient handling of static and incremental AD.

We finally present some discussions. The first is about the TBC. One should be careful with the TBC used in the TEDT mode: since the TBC is also used to instantiate the hash, it must offer something like chosen-key or chosen-tweak security. SKINNY and Deoxys-BC could be good candidates, but the block-cipher-based constructions LRW1, LRW2, and XEX will not be sufficient for this purpose. The second is about the public key we use. One can of course use a longer secret key instead, for better multi-user security, but public keys can be easier to generate and easier to transfer: to generate a public key you can just pick a random string, and to transfer it you can just send it in clear to the other endpoint. If, on the other hand, you want to generate and transfer longer secret keys, a key agreement protocol may be needed. The advantage of the longer-secret-key approach is that it is more classical and perhaps easier for practitioners to accept. But this depends; let's see which will be the final winner. That's all, thank you for your attention.