Hi everyone, this is Chun from Shandong University. This talk is about the leakage resilience of the duplex construction and of duplex-based authenticated encryption. It is joint work with Olivier, Thomas and François-Xavier. We start with the sponge construction in this picture. Here the function F is a keyless cryptographic function, in most cases a cryptographically strong permutation. The sponge construction starts from some initial value and iteratively calls the function F. Each time, a message block of r bits is absorbed into one part of the state, while the other c bits of the state are not touched. After an iteration, the r-bit part of the state becomes the output. So the sponge construction absorbs the message and produces a digest. It turns out to be indifferentiable from a random oracle, with variable input length and variable output length. So it is a multi-purpose cryptographic object usable in many settings, for example for cryptographic hashing, for MACs and for pseudorandom generation. Based on this structure, the designers further proposed the duplex construction. Each duplex call can absorb an input and produce an output, and the output can be used as a keystream. This enables one-pass authenticated encryption designs. For example Ascon, one of the CAESAR final portfolio winners, is duplex-based. This is the structure. It is one-pass: the AD is injected here, and the message blocks are absorbed while being encrypted by the duplex outputs, and they in turn affect the computation. So the final state depends on all of them, on the AD and on the messages, and it can be used as a message authentication code. Now a question arises from masking duplex-based designs to defend against side-channel attacks, or differential power analysis. The duplex seems to offer some sort of leakage resilience. Is this true? For example, the authors of this reference designed a masked implementation of Ascon. They only masked the functions marked in red and left the others unprotected.
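The duplex mode of operation just described can be sketched as a stream encryption. This is a minimal sketch with a toy mixing function standing in for the cryptographically strong permutation, and a hypothetical key/nonce layout; it is not a real design.

```python
B, R, C = 64, 16, 48                     # toy state / rate / capacity bit sizes
MASK = (1 << B) - 1

def perm(s):
    # Toy bijective mixer (xorshift, odd multiply, rotate), NOT cryptographic.
    for k in (0xBF58476D1CE4E5B9, 0x94D049BB133111EB, 0x9E3779B97F4A7C15):
        s = (s ^ (s >> 29)) & MASK
        s = (s * k) & MASK               # k is odd, so invertible mod 2^64
        s = ((s << 17) | (s >> (B - 17))) & MASK
    return s

def duplex_encrypt(key, nonce, blocks):
    s = perm((key << 32) | nonce)        # keyed initialization (toy layout)
    cipher = []
    for m in blocks:                     # each duplex call squeezes r keystream
        z = s >> C                       # bits, then absorbs a block
        c = m ^ z
        cipher.append(c)
        s = perm(s ^ (c << C))           # XORing c into the rate absorbs m
    return cipher

def duplex_decrypt(key, nonce, blocks):
    s = perm((key << 32) | nonce)
    msg = []
    for c in blocks:                     # decryption mirrors encryption
        z = s >> C
        msg.append(c ^ z)
        s = perm(s ^ (c << C))
    return msg
```

Note how the c-bit capacity part is never output directly: it is the secret internal state that keeps evolving across calls.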
The idea is that these internal secret state values keep evolving, so it is infeasible to collect multiple power traces of computations on the same secret, and thus they can just be left unprotected. Similar comments can be found in earlier papers. For example, the designers of Keccak mentioned that side-channel security could rely on the evolving states. So ultimately, is this true? Regarding this question, we have our first result: the leakage resilience of the duplex construction. In detail, we consider this duplex-based stream cipher, or stream encryption. Our security model is leakage eavesdropper security: the advantage of distinguishing the encryptions of two messages M0 and M1 should be somewhat limited, even if the leakages of the encryptions are given. Of course, we need some assumptions on the side-channel leakages to ensure security. For this, we assume that the internal secret state values are unpredictable given the leakages, or non-invertible given the leakages. Note that this is a minimal assumption on the leakages, because otherwise, if the leakages give away the secret, there is no secret at all. In more detail, we define a secret-recovery advantage as follows. For the main setting, we take the c-bit capacity part of the state as the challenge secret and view it as a uniformly distributed challenge. This challenge is involved in two computations. One of the computations is obvious: the subsequent permutation call takes it as a part of its input. The other computation is the previous permutation call, which produced it as a part of its output. Of course, this also leaks information about the value. We view the leakage function of the permutation circuit as a combination of two somewhat independent functions: one of them is the input part L_in, and the other is the output part L_out.
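This secret-recovery game can be sketched as follows. The Hamming-weight leakage and the naive adversary below are hypothetical stand-ins for L_out, L_in and a real attacker, chosen only to make the experiment runnable.

```python
import secrets

C_BYTES = 6                                    # toy 48-bit capacity secret

def hw(b):
    return bin(b).count("1")

# Hypothetical leakages: the Hamming weight of each secret byte, as seen by
# the call that produced the secret (L_out) and the call consuming it (L_in).
def L_out(y): return [hw(x) for x in y]
def L_in(y):  return [hw(x) for x in y]

def recovery_advantage(adversary, trials=200):
    # Empirical probability of recovering a uniform challenge secret from its
    # two leakages; the unpredictability assumption says this stays small.
    wins = 0
    for _ in range(trials):
        y = secrets.token_bytes(C_BYTES)
        wins += adversary(L_out(y), L_in(y)) == y
    return wins / trials

def naive_adversary(lo, li):
    # Toy strategy: output the smallest byte matching each observed weight.
    return bytes(min(v for v in range(256) if hw(v) == w) for w in lo)
```

Running `recovery_advantage` with a device-accurate leakage model in place of the toy one is essentially how the assumption could be measured in practice.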
By this, the real-world side-channel observations correspond to the adversary getting the output leakage of the previous permutation call and the input leakage of the subsequent permutation call, and trying to recover the secret state from these two leakages. As only two leakages can be exploited, the success probability of this side-channel secret recovery, the defined attack advantage, should be very small, and security can be based on it. We also model π as a random permutation and work in the random permutation model. This random permutation model seems unavoidable for analyzing permutation-based constructions. Besides, we assume that the advantage of distinguishing the encryptions of two different single message blocks is somewhat limited. With these assumptions, we use a reduction to show that, first, all the internal state values remain somewhat pseudorandom; otherwise, the eavesdropper adversary could recover some of the secret state from the side-channel leakages. This reduction is embedded in a classical H-coefficient-based argument, and it helps bound the probability of the so-called bad transcripts in the H-coefficient argument. Then we use another reduction to show that the advantage of distinguishing the encryptions of two long messages reduces to the advantage of distinguishing the encryptions of two single-block messages. This follows the CCS 2015 paper of our group. Combining the two eventually proves the final result. It means that duplex-based stream encryption is, in some sense, a security-preserving domain extension of single-block leakage-resilient encryption. We know that in the classical black-box setting, the security of the keyed sponge or duplex can indeed be based on less ideal assumptions. In detail, we can add a sequence of key-XOR actions and write the keyed duplex in such a form. In the black-box setting, this representation is equivalent to the original one.
As you can see, these key XORs cancel each other, so the state values are unchanged. Then, instead of modeling π as a random permutation, we can assume that this block cipher, the partially keyed Even-Mansour cipher built upon the permutation π, is a secure PRP. This assumption is less ideal than the random permutation model. But this equivalent representation is not possible in the leakage setting, because how could we handle the leakages of XOR actions that do not exist in reality? In all, we did not find a standard-assumption-based argument for the leakage resilience of the duplex, and it seems this could be an open question. Let us compare our leakage-resilience result with two concurrent works. First, this reference assumes that the secret state still has enough entropy after being leaked. Second, this one assumes the classical bounded leakage functions. Both of them give rise to simple and concise theory and analyses. For ours, as mentioned, unpredictable leakage is a minimal assumption, and more importantly, we believe it is close to practice; see the discussion in this paper. Also, the unpredictability assumption can be verified in practice: we can just run a side-channel state-recovery attack on the device and measure the success probability to verify the assumption. So anyway, one side is concise theory, while the other is practical relevance; the approaches are complementary. We hope all of these approaches can help push forward the research direction of practice-oriented leakage resilience. We next go a step further and consider designing duplex-based authenticated encryption. We can use the Encrypt-then-MAC composition as in this picture. This gives rise to a two-pass design. It has strong integrity against encryption and decryption leakages, even with nonce misuse. It also has CCA security against encryption and decryption leakages.
This is because the decryption starts by hashing the AD and the ciphertext and checking the integrity. If the integrity check does not pass, the decryption immediately stops. So the decryption leakages only leak a little information about the key here, and no information about the message leaks during such a failed decryption. And this key can be protected by masking. The protection is light, because the hashing part is keyless and does not need to be protected. So this is a two-pass design with mode-level side-channel security, or with leakage resilience. But on the efficiency side, we may prefer one-pass designs. It has been known that one pass is not good for resisting decryption leakages. So the question is: if we insist on this more efficient approach, what can be achieved? Regarding this, we note that the duplex can have two roles in such a construction. First, when the internal state has not been recovered, the duplex functions as a standard one-pass AE, as in this picture. Then, when the internal state has been fully recovered from side-channel leakages, the duplex collapses to keyless cryptographic hashing, the same as the sponge construction. So there are still some cryptographic properties that can be used in this construction, and we can play with the hash digest. With these observations, we come up with the following design. We use a tweakable block cipher (TBC) to absorb the hash digest (U, V). The AE key is only used by the tweakable block cipher, so we only need to mask the tweakable block cipher; the other parts can just be left unprotected. This can greatly reduce the energy consumption of the implementation. As the hash digest absorbed by the TBC can be 2n bits, this overcomes the birthday-bound integrity issue. And the use of a TBC enables a meet-in-the-middle-style integrity check. In detail, given N, A and C, we compute along this direction, in this flow, to reach the intermediate values U and V.
Then we use V as the tweak and use the user-supplied tag Z to compute along the inverse direction to reach the value U*. Then we check whether U equals U* for the integrity check. By this, even if the integrity-checking action leaks something, it only leaks useless values: the hash digest U is ultimately not secret, while the inverse of the tweakable block cipher, U*, is a pseudorandom value that is also useless to the adversary. This follows our previous CHES design TEDT with respect to its leakage security. So some security against decryption leakage remains achievable, even with only one pass. That is, ciphertext integrity against nonce misuse and decryption leakage; this model is named CIML2 in our related papers. The reason is that the adversary could feed a fixed nonce to the decryption oracle and recover the internal states via leakages. Indeed, this is the case. But then the AE collapses to a hash-then-MAC scheme: as we mentioned before, the duplex becomes a hash, and the second TBC becomes a MAC function. So it becomes a hash-then-MAC scheme, and integrity remains ensured as long as the tweakable block cipher remains secure. On the downside, CCA security decreases: the one-pass AE only tolerates encryption leakages, and only ensures security for messages encrypted with fresh nonces. Here, decryption leakage is harmful because, if decryption leaks, then given N, the corresponding AD A and the ciphertext C, we can feed the decryption oracle with the same nonce N and different AD blocks A1 and recover the first duplex state by DPA. This is feasible. Then it is easy to compute the message behind the ciphertext C and fully break confidentiality. About security with fresh nonces: if a nonce N is reused in encryption, then obviously the messages encrypted with N are no longer secret.
But this won't affect another nonce N*, because N and N* correspond to different initial states B and B*, and the recovery of B won't affect B*. B* can only be affected by the reuse of its own nonce N*. So if N* is never reused, then B* remains secret and safe, and the messages encrypted with N* remain secure even if some other nonce N is reused. We also follow our TEDT design and include a public key for better multi-user security. Let me elaborate. The multi-user model considers the setting where the mode is deployed at scale, with many instances and many user keys in the system. Crucially, the adversarial goal is not to break one specific instance: the adversary is satisfied with breaking any one of the u instances in the entire system. To have better security in such scenarios, we use an unsecret key pk as a tweak. In the system, pk is uniformly distributed, but we don't need it to be kept secret, so it is public randomness, or a "public key". This avoids the generic attack with complexity 2^k/u, where k is the bit-length of the secret key; in other words, it avoids the so-called multi-user security degradation. Finally, we also use domain-separation bits to distinguish whether the last block of the AD or of the message is full or not, and to mark the border between processing A and M. Please see our paper for more details on the security analysis and further discussion. Interestingly, to some extent our design resembles Ascon and GIBBON: we all use the duplex as the main processing part, and we all use keyed functions for initialization and finalization. But Ascon and GIBBON are purely permutation-based, while we use one more primitive, the tweakable block cipher, for better resistance to decryption leakages. However, we remark that in the field of side-channel security, it is not uncommon to use different crypto objects, for example for efficient re-keying.
Besides Ascon and GIBBON, other designs with mode-level side-channel security include our previous CHES proposal TEDT and the earlier FSE proposal ISAP, by another group. Both of them are Encrypt-then-MAC two-pass compositions. For example, see here: this is the ISAP AEAD; this is the encryption pass of ISAP, which is mostly a duplex-based stream cipher; and this is the MAC part. Both designs employ the plain keyless hash-then-MAC authentication. The reason is obvious: the keyless hash does not need to be protected, so the whole protection can be lighter than for a classical AE. These two-pass designs are more resilient to leakages, as explored in previous works; on the other hand, one of the main purposes of our work is to investigate what can be had on the more efficient one-pass side. So the different designs, with different numbers of passes and different emphases, are just complementary and are probably suitable for different use cases. We also discuss the comparison of our multi-user design with existing ones. The protocol TLS 1.3 uses GCM for encryption. The designers were aware of the threat of parallel attacks on multiple users. As a countermeasure, TLS proposed to use randomized nonces, and the resulting GCM variant was called RGCM, randomized GCM, by Bellare and Tackmann. This nonce-randomization technique also helps separate the encryptions of different users. So the underlying ideas are similar. About our random-public-key idea, its advantage is the modest requirement on randomness: the random public key is chosen at the setup phase, once and for all, and then all the messages are processed using the same random string. The drawback is that it needs new designs, and most of the new designs have to be based on primitives such as tweakable block ciphers and strong cryptographic permutations.
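The different randomness requirements of the two approaches can be made concrete with a short sketch. The helpers and parameter sizes below are hypothetical illustrations, not the code of RGCM or of our mode.

```python
import hashlib
import secrets

# (a) Random public key: drawn once at setup and then used (e.g., as a tweak)
#     for every message; it need not be kept secret.
pk = secrets.token_bytes(16)

# (b) Nonce randomization: fresh randomness for every single encryption.
def fresh_nonce(n_len=12):
    return secrets.token_bytes(n_len)

# (b') Trading computation for randomness: derive per-message pseudorandom
#      nonces from one random seed via a hash modeled as a random oracle.
seed = secrets.token_bytes(16)

def derived_nonce(i, n_len=12):
    # message counter i selects a distinct pseudorandom nonce
    return hashlib.sha256(seed + i.to_bytes(8, "big")).digest()[:n_len]
```

With (a) the per-message randomness cost is zero; with (b) it is one fresh random nonce per message; and (b') replaces that randomness by one hash call per message.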
This should be compared with the more classical block-cipher-based designs. The nonce-randomization technique, on the other hand, almost uses the underlying AE as a black box and, in some sense, amplifies multi-user security generically. The disadvantage is that it needs more randomness, namely a new random nonce for every encryption, so it could be costly. To reduce this cost, one could use a random oracle to derive new pseudorandom nonces for new messages, but this trades computational complexity for randomness complexity. So, all in all, the two approaches, the random-public-key approach and the nonce-randomization approach, are just complementary. And as real-world scenarios are complicated, maybe both of them can find their places. In the end, let's have a summary. We established the leakage resilience of the duplex construction, and we showed that the minimal leakage assumption of unpredictability suffices. Based on this, we designed the AE mode TETSponge. It is one-pass, it is online, and it employs the inverse of the tweakable block cipher for less decryption leakage, that is, for more mode-level side-channel security during decryption. Using an n-bit tweakable block cipher, it has beyond-n/2-bit multi-user security; see the three lines for some details. With respect to n, the block size of the tweakable block cipher, the concrete security is almost optimal: it is 2^n/n^2. Of course, the classical 2^(c/2) term remains, but as c can be much larger than n, it won't be the bottleneck, and the concrete security can be good. Concretely, taking n as 128 and c as 256, the mode ensures about 2^115 security, and this is sufficient for the NIST call for lightweight AE proposals. The mode was instantiated by our group as the AEAD algorithm Spook and submitted to the NIST competition; please see its website for more information. That's all. Thank you for your attention.