And the next talk is about the full-state keyed duplex with built-in multi-user support, by Joan Daemen, Bart Mennink, and Gilles Van Assche. OK, thanks for the introduction. The sponge function is a popular design of a hash function. It was introduced by Bertoni et al. in 2007. And it has now been used for SHA-3, for extendable-output functions, and for many lightweight hash functions. And it is particularly popular due to its conceptual design. So whereas in earlier hash functions you use a block cipher in a certain mode of operation, here you just use a single permutation, one permutation over a large state. And the state is then split into an inner part of c bits and an outer part of r bits. And the idea of the sponge is that the message is absorbed into the outer part. So the message is padded, and then it is XORed into the outer part, interleaved with evaluations of the permutation. And then the output also is squeezed from the outer part. The inner part is left untouched. So you have a c-bit part that is left untouched, and this c-bit part ensures the security of the design. And Bertoni et al. proved that if this permutation is assumed to be ideal, the sponge function behaves like a random oracle up to 2^(c/2) evaluations. So if an attacker cannot make much more than 2^(c/2) evaluations of the permutation, she cannot break the security. This is just hashing. But in many applications, we need a keyed function, a MAC function, or encryption. And you can use a hash function for a MAC: you can use HMAC. But in many cases, HMAC is rather inefficient, because if you have a short message, you have to evaluate the hash function twice. A beautiful aspect of SHA-3, of the sponge, is that you do not need HMAC. Instead, you can just concatenate the key and the message, and you have a PRF. So this is what we call the keyed sponge. So it gets as input the key and the message.
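Before moving on to the keyed variants, the absorb/squeeze flow just described can be sketched in a few lines of Python. This is a toy illustration only, not SHA-3: the "permutation" is a stand-in built from hashlib so the example runs (a real sponge uses a dedicated permutation such as Keccak-f), and the rate/capacity sizes are arbitrary assumptions.

```python
# Toy sponge construction sketch (illustrative only, NOT SHA-3).
# Assumptions: a 200-bit state split into an outer part of r = 64 bits
# and an inner part of c = 136 bits, with a hashlib stand-in for the
# public permutation.
import hashlib

R_BYTES = 8    # outer part ("rate"): message blocks are absorbed here
C_BYTES = 17   # inner part ("capacity"): never touched by the message
STATE_BYTES = R_BYTES + C_BYTES

def permutation(state: bytes) -> bytes:
    # Stand-in for a public permutation over the full state.
    return hashlib.shake_128(state).digest(STATE_BYTES)

def sponge_hash(message: bytes, out_len: int) -> bytes:
    # Pad so the message length becomes a multiple of r (10*1-style padding).
    padded = message + b"\x01" + b"\x00" * ((-len(message) - 2) % R_BYTES) + b"\x80"
    state = bytes(STATE_BYTES)
    # Absorbing phase: XOR each block into the outer part, then permute.
    for i in range(0, len(padded), R_BYTES):
        block = padded[i:i + R_BYTES]
        outer = bytes(a ^ b for a, b in zip(state[:R_BYTES], block))
        state = permutation(outer + state[R_BYTES:])
    # Squeezing phase: read the outer part, permute, repeat.
    out = b""
    while len(out) < out_len:
        out += state[:R_BYTES]
        state = permutation(state)
    return out[:out_len]
```

Note how the inner C_BYTES of the state are never written by the message and never read by the output; that untouched part is exactly what the 2^(c/2) security argument rests on.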
It concatenates the key and the message, and you have a PRF. And that's the keyed sponge. You can use it for message authentication: the PRF gives message authentication. You can also use it for keystream generation. So instead of the message, you take a nonce, and you have an output of variable length; you can use this output as a keystream and encrypt your data with it. This is the keyed sponge. But in many applications, you don't just need authentication or encryption; you want to have both. In this setting, ideally, you don't use the keyed sponge but its sibling, and the sibling is the keyed duplex. So the keyed duplex is used for authenticated encryption. There are many CAESAR submissions that follow the duplex design. And I will go into a bit more detail on the history of these two designs, starting with the keyed sponge. So here we see the keyed sponge. It has the key concatenated with the message. It was introduced by Bertoni et al. in 2011. And in 2015, we analyzed it. We formalized it and called it the outer-keyed sponge. Why outer-keyed? Because the key goes into the outer part, and because there is also an inner-keyed sponge. In the inner-keyed sponge, the key goes into the inner part, and that is the way you initialize the secret state. This scheme appeared before in Chang et al. 2012. We formalized it in 2015. And Naito and Yasuda analyzed these two schemes and derived improved bounds. However, at some point, we noticed that you can improve the scheme. You have the capacity that ensures secrecy. And extraction should leave this part untouched, because if the attacker learns the entire state, the scheme is broken. But for absorption, you have a key, and the state is secret. And there is no point in keeping this part untouched for absorption.
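The two uses of the keyed sponge just mentioned, MAC and keystream generation, can be sketched as follows. SHAKE128 from hashlib stands in for a generic sponge here, and the key/nonce/tag sizes are illustrative assumptions, not parameters from the talk.

```python
# Keyed-sponge sketch: concatenate the key with the input and run the
# sponge.  hashlib's SHAKE128 stands in for a generic sponge with
# variable-length output; sizes are illustrative assumptions.
import hashlib

def keyed_sponge_prf(key: bytes, message: bytes, tag_len: int = 16) -> bytes:
    # Outer-keyed sponge as a PRF / MAC: absorb key || message, squeeze a tag.
    return hashlib.shake_128(key + message).digest(tag_len)

def keyed_sponge_encrypt(key: bytes, nonce: bytes, plaintext: bytes) -> bytes:
    # Keystream mode: absorb key || nonce, squeeze a keystream, XOR it in.
    keystream = hashlib.shake_128(key + nonce).digest(len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, keystream))
```

Decryption is the same XOR with the same key and nonce. Note that a single sponge evaluation suffices, which is exactly the efficiency gain over HMAC for short messages.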
In more detail, we have the full-state keyed sponge, in which case the message is absorbed into the entire state. The first appearance of this is the MonkeyDuplex by the sponge people, by Bertoni et al., though without any formalization; it was just a mention of the scheme. Gaži et al. analyzed this in the case where you just have one block, so without the other ones, which significantly simplifies the analysis. And in 2015, I, together with Reyhanitabar and Vizár, introduced, formalized, and analyzed the scheme. And the interesting aspect is that all three of these schemes achieve approximately the same level of security, even though this one is more efficient. And that shows the beauty of the full-state sponge. Now for the duplex. This is the unkeyed duplex; it's a bit different. So you have an initialization state with an inner part and an outer part, again. You can think of the duplex as a sequential evaluation of small sponges: a sponge where you absorb some data, transform the state, and extract some output. So absorb, transform, extract; absorb, transform, extract. And those are duplexing calls. And this is the plain duplex from Bertoni et al. in 2011. You can key it again by concatenating the key with the first block. In this case, you make the state secret, and then you can run the duplex sequentially. You can use this for encryption: here, you can input a message and output the ciphertext of the message. You can also just input nothing and take the output as a tag. And this way, you can get authenticated encryption. But again, there is no point in keeping the inner part untouched. So in 2015, again, we introduced the full-state keyed duplex. And it has full-state absorption, so the message goes into the entire state. The output is extracted from the outer part. So you extract z_0, z_1, z_2, where each z_i is at most r bits.
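The sequence of duplexing calls described above (absorb, transform, extract, repeated on one evolving state) can be sketched as a small stateful object. Again the permutation is a hashlib stand-in so the example runs, and the sizes and the keyed initialization are illustrative assumptions, not the exact scheme from the talk.

```python
# Keyed-duplex sketch: a stateful object whose duplexing calls each
# absorb a block into the outer part, apply the permutation, and
# extract up to r bytes of output.  The permutation is a hashlib
# stand-in; sizes are illustrative.
import hashlib

R_BYTES, C_BYTES = 8, 17
STATE_BYTES = R_BYTES + C_BYTES

def permutation(state: bytes) -> bytes:
    return hashlib.shake_128(state).digest(STATE_BYTES)

class KeyedDuplex:
    def __init__(self, key: bytes):
        # Keyed initialization: derive the starting state from the key,
        # which makes the whole state secret.
        self.state = permutation(key.ljust(STATE_BYTES, b"\x00"))

    def duplexing(self, sigma: bytes, z_len: int) -> bytes:
        # Absorb sigma (<= r bytes) into the outer part, transform,
        # then extract z_len <= r bytes.
        assert len(sigma) <= R_BYTES and z_len <= R_BYTES
        block = sigma.ljust(R_BYTES, b"\x00")
        outer = bytes(a ^ b for a, b in zip(self.state[:R_BYTES], block))
        self.state = permutation(outer + self.state[R_BYTES:])
        return self.state[:z_len]
```

Authenticated encryption then chains such calls: XOR each message block with the previously extracted output to form ciphertext, absorb as you go, and a final call with empty input yields the tag.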
So you still leave the inner part untouched for extraction, but you absorb over the entire state. And again, the schemes are equally secure, and this one is more efficient. Now, looking in more detail at this scheme: what we proved two years ago was roughly this bound. It's a very simplified form of the bound. We proved that the scheme is secure as long as this term is less than 1. So this is μN/2^k + M²/2^c. Here k is the key size, and c is the size of the inner part, the capacity. The key is always smaller than the inner part, so k ≤ c. And M is the data complexity, the online complexity, which corresponds to the number of queries the bad guy makes to the scheme. And N is the number of calls the attacker can make to the primitive, the random permutation. μ is some magical term; it's called the multiplicity. Intuitively, the multiplicity considers the maximum multi-collision on the outer part of the state: if you focus on the outer part of the state, μ counts the maximum multi-collision on that part. Intuitively, μ is at most 2M. At that point, we thought we were done. I mean, this is the most efficient way of doing the duplex; the bound looks clean and nice and very secure. However, a second thought reveals that it could be improved a bit. So first of all, we see μN/2^k. What is μ? We don't know, but it is at most 2M. So the first term is of the order MN/2^k. That's birthday-bound security only in the key size. And that's quite counterintuitive, because if you look at the scheme, the key only appears in the first block. So you should not be able to use the full data complexity, the entire online complexity, to break the scheme, to recover the key. So birthday-bound security in the key size is somewhat counterintuitive here. There are also some other minor limitations in the scheme.
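The "birthday bound in the key size" remark can be checked with back-of-the-envelope arithmetic on the simplified bound μN/2^k + M²/2^c with μ ≤ 2M. The helper below just compares exponents in log2; the parameter values are illustrative, not from a concrete scheme.

```python
# Evaluate the dominating exponent of the simplified old bound
#   mu*N/2^k + M^2/2^c  with  mu <= 2M.
# The scheme is secure while this exponent stays below 0.

def old_bound_log2(log2_M: int, log2_N: int, k: int, c: int) -> int:
    term_key = 1 + log2_M + log2_N - k   # log2 of 2M * N / 2^k
    term_cap = 2 * log2_M - c            # log2 of M^2 / 2^c
    return max(term_key, term_cap)

# Example (illustrative): k = 128, c = 256.  An attacker with M = 2^64
# data and N = 2^64 primitive calls already pushes the key term to
# 2^(1+64+64-128) = 2, so the bound is vacuous: M and N together only
# need to reach the birthday bound in the key size.
```

With smaller data, say M = 2^32, the same N = 2^64 leaves the exponent at -31, i.e. the bound is still meaningful; this is exactly why the M·N interaction in the key term is the suspicious part.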
So first of all, you have the dominating term μN/2^k. But instead, you would expect something like μN/2^c, because that is what corresponds to a collision between a construction call and a primitive call in the inner part of the state. Second, the multiplicity μ is only known a posteriori. It's a parameter that depends on the randomness of the primitive, the randomness of the scheme, and on the randomness, the coins, the choices of the adversary. And in fact, it should be computed; it should not stay in the bound. And this is a weird thing, this multiplicity. It's kind of an artifact from an earlier paper in the Bertoni et al. sequence. Third, the scheme is not analyzed in a multi-user setting. And as you've seen in the previous talk, that's a very interesting, popular, and also practical setting in which to analyze the scheme. So it should be analyzed in a multi-user setting. And finally, it only has a very limited measurement of the adversarial strength: we just have data complexity and time complexity. But if we use the scheme for encryption, we have, for instance, an encryption scheme where the nonce can be reused, or an encryption scheme where the nonce cannot be reused. Those are two different cases, and they're not covered by the bound. So it's not clear how the scheme behaves in these settings. And inspired by this, we generalized the scheme, and we came up with this new duplex. For some reason, we still call it the full-state keyed duplex. So there are some conceptual differences. The first one is that it has multi-user security by design. So instead of a single key, it gets as input a key array, a list of keys. And on input, there is an index δ which specifies which key in the array to take. And this sounds like a very small difference, because usually, in security models, you have to stretch the model to multi-user security.
Now you can just use the single model, analyze this scheme, and you have multi-user security, because you use a key array. And in fact, in the final bound, it becomes visible how relations among the keys influence the security. So we use, for instance, the min-entropy of this key array, and this term appears in the bound. So this kind of simplifies the analysis: you do not need to adapt the security model to every different type of key array. The initial state is the concatenation of the δ-th key with some IV, which may or may not be a nonce. We have, well, this is a small improvement, full-state absorption: the σ goes over the entire state. We don't need to pad it anymore, so that, well, saves one bit. One bit is almost no bit, but it still simplifies the scheme a bit. We did a rephasing. This is kind of a weird one; it's not really an improvement, but it suits the analysis. So in the original duplex, we had absorption, transformation, extraction; absorption, transformation, extraction. So absorption of σ, transformation, extraction of z. Now we look at the scheme as transformation, extraction, absorption. And the reason to do this is that we look at different duplexing calls. So this is the initialization call, followed by extraction and absorption. This is one duplexing call, and this is a different duplexing call. And there is a difference, namely that the outer part is overwritten or is not overwritten. And the reason why we use these two is to cover, for instance, the case of release of unverified plaintext. If you use an authenticated encryption scheme for which decryption or verification outputs the message before the tag is actually verified, you have release of unverified plaintext, and this call covers that setting. So it allows us to analyze more refined adversarial strength. And now we derive the following security bound.
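The generalized interface just described (a key array indexed by δ, full-state absorption, rephased calls, and the overwrite versus XOR distinction) can be sketched as below. This is my own toy rendering under stated assumptions: the permutation is a hashlib stand-in, the sizes are arbitrary, and the exact way the overwriting call combines σ with the state is a simplification of the paper's definition.

```python
# Sketch of a generalized full-state keyed duplex: key array with index
# delta (multi-user by design), full-state absorption, and rephased
# duplexing calls (transform, extract, absorb), with or without
# overwriting the outer part.  Illustrative toy, not the exact scheme.
import hashlib

R_BYTES, C_BYTES = 8, 17
STATE_BYTES = R_BYTES + C_BYTES

def permutation(state: bytes) -> bytes:
    return hashlib.shake_128(state).digest(STATE_BYTES)

class FullStateKeyedDuplex:
    def __init__(self, keys):
        self.keys = keys  # key array: multi-user security by design

    def init(self, delta: int, iv: bytes):
        # Initial state is keys[delta] || IV (padded to the state size).
        self.state = (self.keys[delta] + iv).ljust(STATE_BYTES, b"\x00")

    def duplexing(self, sigma: bytes, z_len: int, overwrite: bool) -> bytes:
        # Rephased call: transform, extract z_len <= r bytes, then
        # absorb sigma over the FULL state.
        assert z_len <= R_BYTES and len(sigma) <= STATE_BYTES
        self.state = permutation(self.state)
        z = self.state[:z_len]
        block = sigma.ljust(STATE_BYTES, b"\x00")
        if overwrite:
            # Overwrite the outer part (models e.g. decryption with
            # release of unverified plaintext); XOR into the rest.
            self.state = block[:R_BYTES] + bytes(
                a ^ b for a, b in zip(self.state[R_BYTES:], block[R_BYTES:]))
        else:
            self.state = bytes(a ^ b for a, b in zip(self.state, block))
        return z
```

The point of the δ index is visible in the interface: the security game never needs restating for multiple users, because user selection is part of a single object's input.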
It looks a little bit more complex, because we cover many more adversaries. So we have q_IV·N/2^k; let me first consider this fraction and then the second one. So now we have q_IV·N/2^k, where q_IV is the maximum number of initialization queries for the same IV. Recall that in the old bound, we had μN/2^k, where μ is at most 2M. In this case, you have q_IV·N/2^k, where q_IV is the maximum number of initialization queries with the same IV, and it is upper bounded by the number of keys in the key array, because every IV can be used at most u times if there are u different keys. We do not count duplicate queries: if you make the same query with the same IV and the same δ, it only counts once. So q_IV is at most u, and in the single-key setting q_IV is at most one, and you get N/2^k rather than μN/2^k. Now for the second term. In the old bound, it was MN/2^c. Now we get (L + Ω + ν_{r,c,M})·N/2^c. Here, L is the number of queries with a repeated path. With a repeated path, we mean that if you make two queries with the same σ_0, σ_1, σ_2, σ_3, σ_4, and then a different σ_5, then you have two queries with the same path that afterwards go in different directions, and L counts the number of queries for which the path already appeared. Ω is the number of queries with an overwritten outer part, so the number of duplexing calls of this shape, which corresponds to release of unverified plaintext. And ν_{r,c,M} is a multi-collision coefficient: instead of μ, which was based on the adversarial strength, this is just a term based on r, c, and M. And it's important to note that ν_{r,c,M} is often a very small constant, and in many cases L and Ω are dominating here.
So a little bit more on this ν_{r,c,M}. It's essentially a balls-and-bins problem. We have M balls thrown into 2^r bins, and ν_{r,c,M} has a bit of a technical definition: it is the smallest value x such that the probability that the fullest bin holds at least x balls is at most x/2^c. It's a rather complex definition, but let me explain it in a bit more detail. Suppose you take a very high value of x. In that case the probability will be very low, because if x is very high, the probability that there is a bin with more than x balls is very low. So if x is very high, this term will be very low, and x/2^c will be comparatively high, so there is a huge gap between the probability and the term on the right-hand side. On the other hand, if x gets very small, the probability gets very high, and the right-hand side gets very low, so at some point you no longer have less-or-equal, but the first term is bigger than the second term. And ν searches for the smallest value x for which you still have this inequality, or at least an upper bound on this value x, because as x increases, the probability decreases. We did some analysis. First of all, we simplified this probability a little, and then we also derived a proper, easy-to-use upper bound on this ν_{r,c,M}, and we made some computations, for the case r + c = 256. It doesn't really matter what r and c are, because the picture shifts with r, as you see on the bottom line, where we have 2^r. The black line is a computation of this probability, and we have two lemmas, Lemma 4 and Lemma 5 (it doesn't matter that they're called Lemma 4 and Lemma 5), that upper bound this value x, and these lemmas can be used in the final bound.
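The search for the smallest such x can be made concrete. The sketch below bounds the balls-and-bins probability with the standard union bound Pr[fullest bin ≥ x] ≤ 2^r · C(M, x) / 2^(rx); using this particular bound is an assumption of the sketch, not the paper's exact Lemma 4 or Lemma 5.

```python
# Estimate the multi-collision coefficient nu_{r,c,M}: M balls into
# 2^r bins, nu is the smallest x such that
#   Pr[fullest bin holds >= x balls] <= x / 2^c.
# We upper bound the probability by the union bound
#   2^r * C(M, x) / 2^(r*x)     (an assumption of this sketch).
from math import comb

def nu_upper_bound(r: int, c: int, log2_M: int, x_max: int = 1000) -> int:
    M = 2 ** log2_M
    for x in range(1, x_max + 1):
        # Exact integer comparison, avoiding floats:
        #   2^r * C(M, x) / 2^(r*x) <= x / 2^c
        #   <=>  2^(r + c) * C(M, x) <= x * 2^(r*x)
        if 2 ** (r + c) * comb(M, x) <= x * 2 ** (r * x):
            return x
    raise ValueError("no x found below x_max")

# E.g. for r = c = 128 and M = 2^64 balls, the search terminates at a
# single-digit constant, illustrating why nu is "often a very small
# constant" in the bound.
```

Note the monotonicity argument from the talk is what makes the linear search valid: once the inequality holds for some x, it keeps holding for all larger x.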
So I have some very nice slides on the proof idea, but I will skip them, because I would rather go into the applications. One of the applications is the full-state keyed sponge, because there is a relation between the duplex and the sponge: you can exchange the bounds. If you have a bound for the sponge, you can use it for the duplex, and the other way around. You can now use our bound for the sponge. So here we have the sponge. Indeed, if you have a duplex, and in each call you either do not absorb any data or do not extract any data, then you get a sponge. So in this case, the first duplexing call doesn't extract any data, the second one doesn't extract data, doesn't extract data, doesn't extract data; this one doesn't absorb data. And in this way you can build a sponge from the duplex. This is a general case where we have a key array, with multi-key security. Overwrites are possible, so we look at a general case, and we do not have a nonce restriction. This means that L and Ω can be arbitrarily large, at most M. The ν term, which is often constant, is negligible in the bound. q_IV is at most u, the number of users, the number of keys. So you get uN/2^k + MN/2^c, which improves the bound of two years ago. Now for authenticated encryption, we can look at two different cases. First the nonce-violating case, where we de facto do not have a nonce. Also we consider an arbitrary number of overwrites, so L and Ω are at most M, as general as possible. The ν term is negligible, so we get a bound of a similar form. I didn't replace q_IV by u, but we get a similar bound.
Now for the nonce-respecting setting. In the nonce-respecting setting, with no release of unverified plaintext, nonce-respecting means that you never have a repeated path: every time you use a different nonce, so all paths are fresh, so L is zero, and Ω is also zero. In this case, in the second part of the bound, the dominating term is ν, and recall that ν is often close to a constant. And if we consider the single-key setting, q_IV is one, so you get N/2^k plus a constant times N/2^c, which is a very strong bound. And to show the strength of these bounds, we looked at the CAESAR competition. The third round has four sponge-based schemes: Ketje, Ascon, NORX, and Keyak. They all have different parameters, of course. The most important column is c, the capacity, which guarantees the security. And of course, in the bounds I've shown you, I left out some details, but you can believe me that if you put some reasonable upper bound on the online complexity, say we refresh the key as soon as the data complexity reaches a certain threshold, then we can consider the case where the online complexity is at most 2^a, and we get security as long as the offline complexity is at most this term in the nonce-violating setting, which in practice is close to 2^(c/2). So for instance, for Keyak, you have c = 256, and in the nonce-violating setting the security approaches 128 bits, often a bit higher. But in the nonce-respecting setting, you actually get 255 bits of security. And this perfectly matches the view here that ν is close to a constant: if you take a c-bit key, then you get N/2^c + N/2^c, so this is close to c bits of security. And that's also reflected in the computation of the bounds.
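The two security levels just quoted can be reproduced from the simplified forms of the bound. The helpers below work in log2 and are deliberately rough: they keep only the dominating term of each case, which is an assumption of this sketch (the paper's exact constants differ slightly).

```python
# Rough security levels from the simplified bounds, in log2.

def nonce_violating_security(c: int, a: int) -> int:
    # Dominating term M*N/2^c with online complexity M <= 2^a:
    # secure while N < 2^(c - a).
    return c - a

def nonce_respecting_security(k: int, c: int, log2_nu: int = 1) -> int:
    # Dominating terms N/2^k + nu*N/2^c with nu a small constant:
    # secure while N < 2^(min(k, c - log2(nu))).
    return min(k, c - log2_nu)

# Keyak-like parameters (illustrative): c = 256, a c-bit key, online
# complexity capped at 2^128.  Nonce-violating gives ~128 bits;
# nonce-respecting gives ~255 bits, matching the numbers in the talk.
```

The gap between the two numbers is the whole point of the refined bound: L and Ω vanish for a nonce-respecting adversary, so the capacity term loses its factor M.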
So let me conclude. The new full-state keyed duplex is a versatile primitive. It's a more general primitive that covers many more settings. It covers multi-user security by design. It also covers more potential adversaries: adversaries in the nonce-respecting setting, the nonce-violating setting, the RUP setting, the non-RUP setting, et cetera. It makes life easier for the sponge designer. And I copied this from someone else's slide; I hope he didn't mind. The scheme has already been used in practice, and it could also be used for the other schemes to improve their efficiency and security. And I also think this multi-collision analysis could be of further interest. That concludes my talk, so I would like to thank you for your attention. One question about the two variants, where it's kind of overwriting and non-overwriting, I don't know what the words for it are. Does this mean that basically your analysis holds no matter whether we do overwrite, no overwrite, or mixed, or anything? Yes, the attacker can choose. Like on each query we can choose which one we use? Yes, each query. So if you look at, well, in this picture we had a duplexing call. The attacker can say: I initialize the state, or I duplex the state, and in this case it gives a σ. In our setting, it can choose: I initialize the state, I duplex the state with overwrite, or I duplex the state without overwrite. So the attacker can choose what it wants. Okay, thank you. Okay, let's thank the speaker then. Thank you. Thank you.