Welcome. This talk presents a security analysis of TNT, the Tweak-aNd-Tweak construction. My name is Eik List, and this is joint work with Chun Guo, Jian Guo, and Ling Song. Tweakable block ciphers augment classical block ciphers with a public tweak input. In modes, the tweak can serve security, since it allows separating multiple domains, or efficiency, since it allows processing more input material. Nowadays, many dedicated tweakable block ciphers exist, such as CRAFT, Deoxys-BC, or SKINNY. Nevertheless, generic constructions from classical block ciphers are still relevant. Such constructions were proposed already alongside tweakable block ciphers themselves by Liskov et al. in 2002. They proposed, for example, the LRW2 construction, which hashes the tweak with a universal hash function and XORs the result to the input and output of a classical block cipher to transform it into a tweakable one. A similar proposal is XEX by Rogaway. The problem of this approach is that it is indistinguishable only up to the birthday bound of 2^(n/2) queries, where n is the state size of the primitive. To surpass this bound, other works proposed cascade designs. In 2012, Landecker et al. proposed CLRW2, the cascade of two LRW2 instances with independent hash functions. They showed that their construction has at least 2^(2n/3) security. In the subsequent year, Lampe and Seurin generalized this to an r-round construction and showed that it asymptotically achieves almost optimal security. At TCC 2018, Mennink revisited the cascade of two LRW2 instances with an information-theoretic distinguisher of complexity about √n · 2^(3n/4) queries. From the other angle, Jha and Nandi recently gave a new version of the security proof following the mirror theory by Patarin, and they showed a lower bound of 2^(3n/4) security for the cascade design.
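To make the LRW2 construction concrete, here is a minimal toy sketch in Python. This is our own illustration, not code from the talk: the 8-bit block size, the seeded random permutation standing in for a keyed block cipher, and the GF(2^8) hash key are all toy choices.

```python
import random

# Toy sketch of LRW2 (illustrative only):
#   E~_K(T, M) = E_K(M ^ h(T)) ^ h(T),
# with h an XOR-universal hash, here multiplication by a secret key in GF(2^8).

def gf_mul(a, b, poly=0x11B):
    """Multiplication in GF(2^8) modulo the AES polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def make_perm(seed):
    """A random 8-bit permutation as a stand-in for a keyed block cipher."""
    rng = random.Random(seed)
    p = list(range(256))
    rng.shuffle(p)
    return p

def lrw2_encrypt(perm, hash_key, tweak, msg):
    mask = gf_mul(hash_key, tweak)   # h(T), XORed before and after E_K
    return perm[msg ^ mask] ^ mask

perm = make_perm(1)
# For any fixed tweak, LRW2 permutes the message space:
cts = {lrw2_encrypt(perm, 0x53, 0x07, m) for m in range(256)}
assert len(cts) == 256
```

The same masking value is applied before and after the cipher call, which is exactly the structure whose cascade (CLRW2) is analyzed below.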
A downside of those designs is the presence of the hash function that must be implemented in addition to the block cipher. To circumvent that, Bao et al. proposed at EUROCRYPT 2020 a design that needs no tweak schedule at all but simply XORs the tweak in between three subsequent block-cipher invocations. Actually, this is an extension of another proposal by Liskov et al., the LRW1 construction; here, Bao et al. proposed a variant with three block-cipher calls. They showed in their work that this construction is secure for at least up to 2^(2n/3) queries, where secure means indistinguishable from a tweakable random permutation. They also proposed an instantiation with round-reduced AES for every block cipher: TNT-AES, where every block-cipher call is instantiated with six rounds of the AES. They showed that more than five rounds are necessary in the middle, since they had a boomerang distinguisher. The question that arose, however, was whether one could tighten the gap that still existed between having no attacks on the generic construction and having a proof of 2^(2n/3) queries. Two perspectives are relevant here. On one hand, the adversarial perspective: do there exist distinguishers on the generic construction with fewer than 2^n queries? On the other hand, a constructive perspective is interesting: can one improve the lower bound on the security of the construction from 2^(2n/3) to more queries? In this work, we wanted to close this gap from both sides. We found that the construction shares strong similarities with the two-round cascaded LRW design. We found that a variant of the distinguisher by Mennink seemed to work also on generic TNT, and we had to reduce its complexity so that it is not only an information-theoretic distinguisher.
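The generic TNT construction described above can be sketched in a few lines. Again a toy illustration of ours, not code from the talk: three independent seeded random permutations over 8-bit blocks stand in for three independently keyed block-cipher calls.

```python
import random

# Minimal sketch of generic TNT over a toy 8-bit block:
#   C = pi3(pi2(pi1(M) ^ T) ^ T)
# The tweak is simply XORed between the three permutation calls;
# no tweak schedule or hash function is needed.

def rand_perm(seed):
    rng = random.Random(seed)
    p = list(range(256))
    rng.shuffle(p)
    return p

def invert(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return inv

pi1, pi2, pi3 = (rand_perm(s) for s in (1, 2, 3))
ip1, ip2, ip3 = invert(pi1), invert(pi2), invert(pi3)

def tnt_encrypt(tweak, msg):
    return pi3[pi2[pi1[msg] ^ tweak] ^ tweak]

def tnt_decrypt(tweak, ct):
    return ip1[ip2[ip3[ct] ^ tweak] ^ tweak]

# Round-trip sanity check over several tweaks:
assert all(tnt_decrypt(t, tnt_encrypt(t, m)) == m
           for t in (0, 1, 255) for m in range(256))
```

Note how the ciphertext leaves the last permutation unmasked; this missing final masking is exactly the structural difference from CLRW2 that the proof section has to handle.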
Moreover, we found that the proof by Jha and Nandi, who recently showed the 2^(3n/4) security bound for CLRW2, could be transformed into a forward-only, but still useful, security result on TNT. Since using TNT only in the forward direction suffices for many modes, we could thereby provide first steps to close the security gap to somewhere between 2^(3n/4) and a factor of √n above it. In the following, we consider our distinguishers on TNT. Before that, we would like to have a closer look at Mennink's distinguisher on CLRW2. He fixed two tweaks, T0 and T1, and composed two sets of about 2^(3n/4) messages each. Moreover, he fixed some threshold. The core of his work was to look for quartets of message-tweak tuples. In his distinguisher, he first fixed T0 and, for all queries, created messages M_{0,i} and encrypted them to ciphertexts C_{0,i}. Similarly, for the tweak T1, he formed messages M_{1,j} and encrypted them to C_{1,j}. He stored them in a list and at the end tried to find difference relations. Let's have a conceptual look at that. With a probability of 2^(-n), two states will collide after the first hash-function addition to the state. This can happen only if both the messages and the tweaks differ. Say we have a second pair, also with different messages and different tweaks. Again, they collide with probability 2^(-n). If this happens, then clearly those pairs will also collide after the call of the first permutation. Again with probability 2^(-n), it may happen that they form a new collision, since the second hash-function addition can produce one. Now, Mennink's observation was: if this happens for one pair in such a quartet, the corresponding second pair will also collide with probability 1. And when this happens, the collision of course propagates through the second permutation call. So, for every possible difference of both hash-function outputs, he counted the number of quartets.
That is, he counted how many quartets existed such that both ciphertext pairs had the same difference. He went over all differences, counted the number of quartets, and if the count exceeded some threshold, his distinguisher returned 1; otherwise it returned 0. The probability for such a quartet, as said, is about 2^(-3n). Those quartets occur not only because of the structure of the construction but also just at random. However, for the real construction, he could observe twice the number of quartets compared to an ideal tweakable permutation. Therefore, he needed roughly √n · 2^(3n/4) queries to detect the difference. Since he needed to iterate over all differences, the distinguisher needed more time than queries and therefore was only information-theoretic. What we asked was: can his distinguisher in principle be adapted to TNT, and can it be improved? We consider two variants that we call the cross-road and the parallel-road distinguisher, respectively. The illustrations may give a hint why we call them that way. Both work in a similar but slightly different manner. The cross-road distinguisher works as follows. Similar to Mennink's approach, we use tweak-message tuples. In contrast to his approach, we use two messages and, for each, collect a set of tweaks that we store in lists. For our approach, we use two hash tables: a list where ciphertexts are stored and a list where difference counters are stored. In the cross-road distinguisher, for each of q iterations for M0, we derive a tweak from some tweak-construction function of the current index, encrypt the tuple to its ciphertext, and store the tweak at the index of the ciphertext in a list. Next, we repeat the same procedure for M1: we construct the tweaks T_{1,j}, encrypt them, and then run a procedure to find quartet collisions. What happens conceptually? Assume we have a quartet of four queries forming two pairs: two of them start from M0, two of them from M1.
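Mennink's counting step can be sketched compactly. This is our own illustrative re-implementation, not his code: given the ciphertexts obtained under T0 and under T1, we count, for every difference d, the cross-set ciphertext pairs with that difference; a quartet is any two distinct cross-set pairs sharing the same difference.

```python
from collections import Counter

def count_quartets(cts0, cts1):
    """Count pairs-of-pairs (quartets) across two ciphertext sets that
    share the same ciphertext difference."""
    diffs = Counter(a ^ b for a in cts0 for b in cts1)
    # two distinct cross-set pairs with the same difference form a quartet
    return sum(m * (m - 1) // 2 for m in diffs.values())

def mennink_style_decision(cts0, cts1, threshold):
    """Return 1 ('real') if the quartet count exceeds the threshold."""
    return 1 if count_quartets(cts0, cts1) > threshold else 0

# Tiny example: differences {0: 2 pairs, 1: 2 pairs} -> 1 + 1 quartets
assert count_quartets([0, 1], [0, 1]) == 2
```

The loop over all 2^n differences is what makes this variant expensive in time, which is why the cross-road and parallel-road variants below replace it with hash-table lookups.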
Clearly, pairs that share the same message will collide after the first permutation call. Now assume that one of the pairs collides after the first tweak addition, which happens with probability 2^(-n). If both pairs have the same tweak difference, that is, T_{0,i} ⊕ T_{1,j} equals T_{0,k} ⊕ T_{1,l}, then with another probability of 2^(-n), the second pair will also collide after the first tweak addition. Clearly, these collisions propagate with probability 1 through the second permutation call. Again with probability 2^(-n), it may hold that one of those pairs collides also after the second tweak addition. Since we already paid for those pairs to have a tweak-difference sum of zero, meaning both tweak differences are the same, it is implied that the second pair will also collide after the second tweak addition. Then we will have two matching ciphertext pairs at the end, with total probability 2^(-3n). In our collision-finding procedure, we look up the second ciphertext. If we find a previous query with that ciphertext, we derive the tweak difference and look up in the difference table whether there is also a previous pair that had the same tweak difference and already collided. If so, we increase the counter of colliding quartets. If this number of quartet collisions exceeds a certain threshold, we return 1, and 0 otherwise. The parallel-road distinguisher works similarly. However, in contrast to the previous one, where we wanted collisions in the ciphertexts between different messages, the parallel-road distinguisher wants collisions in the ciphertexts for the same messages. Conceptually, since we start from the same messages, pairs will again collide after the first permutation call. With probability 2^(-n), a pair with different messages will collide after the first tweak addition.
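The cross-road bookkeeping just described can be sketched as follows. This is our own toy re-implementation; `enc(tweak, msg)` stands for the encryption oracle, and the helper names are ours.

```python
def crossroad_count(enc, m0, m1, tweaks0, tweaks1):
    """Count quartets: M0/M1 ciphertext collisions whose tweak difference
    was already seen for an earlier colliding pair."""
    by_ct = {}       # ciphertext -> tweaks of the M0 queries mapping to it
    diff_seen = {}   # tweak difference -> colliding M0/M1 pairs seen so far
    quartets = 0
    for t in tweaks0:
        by_ct.setdefault(enc(t, m0), []).append(t)
    for t in tweaks1:
        for t0 in by_ct.get(enc(t, m1), []):
            d = t ^ t0                       # tweak difference of the new pair
            quartets += diff_seen.get(d, 0)  # earlier pair, same difference
            diff_seen[d] = diff_seen.get(d, 0) + 1
    return quartets

# With a degenerate toy "cipher", every M1 query collides with exactly one
# M0 query at tweak difference m0 ^ m1, so quartets accumulate as 0+1+...+15:
toy_enc = lambda t, m: t ^ m
assert crossroad_count(toy_enc, 0, 1, list(range(16)), list(range(16))) == 120
```

Unlike the Mennink-style variant, no loop over all possible differences is needed; each new collision is charged against the difference table in constant time.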
If both pairs have the same tweak difference, then with probability 2^(-n) the second pair will also collide after the first tweak addition, and clearly also after the second permutation call. Again, we pay a probability of 2^(-n) that one of those pairs collides after the second tweak addition, and since we already paid at that point that they have the same tweak differences, it is implied that the second pair also collides; what we obtain are two colliding ciphertext pairs. In our algorithm, we encrypt M0 under all the tweaks to their ciphertexts and store the tweaks in a list indexed by the ciphertext. Since we want pairs from the same message, we can already use the collisions from this list, derive the tweak difference, and add the number of pairs that collided with that tweak difference at the corresponding tweak-difference index in the table D. Then we run the same procedure for M1: we encrypt the tuples and look them up in a fresh list that tracks only the colliding ciphertexts for tuples with M1. We again derive the tweak difference, and if we already found a non-zero number of colliding pairs from M0 with the same tweak difference, we add the number of quartets obtained to our collision count. At the end, we again output whether the collision count exceeded the threshold or not. The previous algorithms still needed lists of 2^n elements to store all possible tweak or ciphertext values, and these were the memory bottleneck of the algorithms. As an improvement, we suggest smaller lists that store only q elements. Assume we have q = 2^(3n/4) tweaks. What we can do is split the ciphertext that shall be stored into a larger part, which still defines the index where to store the element, and a smaller part.
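Analogously, the parallel-road bookkeeping can be sketched as follows, again as our own toy re-implementation with an assumed oracle `enc(tweak, msg)`: here we count ciphertext collisions within each message's query set and match the tweak differences of colliding pairs across the two sets.

```python
def parallelroad_count(enc, m0, m1, tweaks):
    """Count quartets built from same-message ciphertext collisions whose
    tweak differences match across the M0 and M1 query sets."""
    def colliding_pairs(m):
        by_ct, pairs = {}, {}
        for t in tweaks:
            c = enc(t, m)
            for t_prev in by_ct.get(c, []):
                d = t ^ t_prev               # tweak difference of the pair
                pairs[d] = pairs.get(d, 0) + 1
            by_ct.setdefault(c, []).append(t)
        return pairs

    p0, p1 = colliding_pairs(m0), colliding_pairs(m1)
    # every M0 pair combines with every M1 pair of the same tweak difference
    return sum(cnt * p1.get(d, 0) for d, cnt in p0.items())

# Toy "cipher" that ignores the low tweak bit, forcing 4 colliding pairs at
# tweak difference 1 in each set -> 4 * 4 quartets:
toy_enc = lambda t, m: (t >> 1) ^ m
assert parallelroad_count(toy_enc, 0, 1, list(range(8))) == 16
```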
Instead of a list of 2^n elements, we now have only q entries, and we store in these lists not only the tweaks but also the smaller part of the ciphertext; so we have a list of small sub-lists. If we have a collision, we retrieve the tweak and the smaller part of the previous ciphertext and compare the smaller part as well. Only if both the larger and the smaller part match do we know that a full ciphertext collision occurred, and only then do we derive the tweak difference. The same applies to the second list, where we previously stored the number of pairs with a certain tweak difference. Now we store not the full tweak difference and not only a counter, but a sub-list of the remaining bits that could not be used as part of the index. So this is no longer a plain counter but a small sub-list of the remaining parts of the tweak differences. For the pairs encrypted from M1, we follow a similar procedure. We encrypt them as before in the parallel-road distinguisher and split the ciphertext into its smaller and larger part. We look up the larger part first and retrieve the smaller part and the tweak of previous queries that encrypted to that ciphertext index. Only if the larger and the smaller parts of the ciphertexts match do we derive the tweak difference, split it up, and look in our difference table for all pairs that were encrypted from M0 and had that tweak difference. We then retrieve the smaller part of the tweak difference and compare that as well; only if the full tweak difference matches do we have a quartet, and only in that case do we increase the collision counter. While both distinguishers are very similar, they differ in a significant aspect. The cross-road distinguisher searches for collisions between ciphertexts that arose from different messages.
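The split-index memory reduction can be sketched in a few lines. The bit widths below are our own toy choice for an 8-bit ciphertext, purely for illustration: the table is indexed by the high part of the ciphertext only, and each entry stores the low part together with the tweak, so a full collision is confirmed by comparing the stored low part as well.

```python
HI_SHIFT = 4     # toy split of an 8-bit ciphertext: 4 high bits index ...
LO_MASK = 0x0F   # ... and 4 low bits are stored alongside the tweak

def insert_ct(table, ct, tweak):
    hi, lo = ct >> HI_SHIFT, ct & LO_MASK
    table.setdefault(hi, []).append((lo, tweak))

def matching_tweaks(table, ct):
    """Tweaks of previous queries whose FULL ciphertext equals ct."""
    hi, lo = ct >> HI_SHIFT, ct & LO_MASK
    # full collision only if high (index) AND low (stored) parts match
    return [t for (lo2, t) in table.get(hi, []) if lo2 == lo]

table = {}
insert_ct(table, 0xAB, 7)
assert matching_tweaks(table, 0xAB) == [7]  # same bucket, low part matches
assert matching_tweaks(table, 0xAC) == []   # same bucket, low part differs
```

With q = 2^(3n/4) queries, indexing by 3n/4 bits and storing the remaining n/4 bits per entry shrinks the tables from 2^n slots to q entries, as described above.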
Say we have a set of 2^t tweaks per message. After the first tweak addition, we can combine 2^t · 2^t pairs, which, multiplied with the collision probability of 2^(-n), gives roughly 2^(2t-n) colliding pairs. From those, we can form quartets that collide again with probability 2^(-n) and obtain approximately 2^(4t-3n-1) correct quartets. In addition, pairs can also collide randomly. We have the same number of randomly colliding pairs from different messages, and among these we can form the same number of random quartets as we had correct quartets before. This means that for the real construction we expect about twice the number of quartets as for a random one. For the parallel-road distinguisher, we have the same number of pairs that collide in the middle, and by combining them into quartets we obtain the same number of correct quartets as for the cross-road distinguisher. However, here we can form only about 2^(2t-1) pairs from the same message, which collide in the ciphertext with probability 2^(-n), and only among those pairs can we form random quartets, which is now smaller than before, namely by half. This means that for the parallel-road distinguisher we expect about three times the number of quartets for the real construction as for a random one. To get at least an intuition of whether our distinguishers can work, we implemented them with a small variant of PRESENT by Leander, with state sizes of 16, 20, and 24 bits. The ideal counterpart was a random function, and we used 1,000 random keys, two random messages, and 2^t tweaks per experiment. We can see that for the cross-road distinguisher we obtained the expected number of quartets, about twice as many for the real construction as for an ideal one.
For the parallel-road distinguisher, we expected roughly the same number of distinguishing quartets but only half the number of random quartets as before; our experiments matched this intuition quite well. TNT-AES, as said before, is an instantiation of TNT with six-round AES for every block-cipher call. We wanted to study its security at an earlier stage, where each call has five rounds. Here, we found that we could use an impossible-differential attack when one of the outer layers has five rounds. The core idea is simple. Our previously shown distinguishers work if we can find message pairs such that their difference can be cancelled by their distinct tweaks after the first call of the permutation. We choose a tweak-difference space, and with every message we associate a set of tweaks such that their differences lie in that space. Here, this tweak-difference space consists of tweaks that are inactive in one anti-diagonal before MixColumns is applied at the end of round 5. Now assume we have a correct message pair; we choose the messages to be active in the first diagonal. If their difference after the first permutation lies in that tweak-difference space, there will be two tweaks such that the differences cancel. Only in that case do the previous distinguishers apply, so we have to choose enough messages, and enough tweaks per message, to build and find such quartets. We choose the tweak-difference space so that it is the output-difference space of our impossible differential. In that case, correct message pairs cannot encrypt to the start difference of this impossible differential. Therefore, once we have identified correct message pairs, we can discard all first-round keys of the first diagonal that would have led to encrypting the message pair to only a single active byte after the first round. If we have enough message pairs, we can hopefully discard enough key candidates so that only few survive.
Our goal is to reduce the number of candidates for the first diagonal. Since we have a probability of 2^(-8) of having only a single active byte, and since there are four cells that may be active, a key candidate is filtered accordingly. We need roughly 2^26.5 correct message pairs to reduce the candidates to only a few. The probability that the difference of a message pair lies in that space is roughly 2^(-32), which means that we need about 2^26.5 · 2^32 ≈ 2^58.5 message pairs, and we can build them from two sets of 2^29.25 messages. Since we have a smaller space of tweaks, the probability of a quartet for an incorrect message pair is roughly 2^(-354); in contrast, a quartet from a correct message pair occurs with a probability in the order of 2^(-320). Following an analysis similar to Mennink's, under a normal-distribution assumption for our difference counters, we need roughly 2^83.3 tweaks per message to be able to distinguish correct from incorrect message pairs. Multiplying this by our 2 · 2^29.25 messages, we arrive at about 2^113.6 chosen tweak-message queries, and the encryption of this data dominated the memory and time complexity. Since the attack is infeasible to implement, also with five-round AES, we still wanted an intuition of what goes on and therefore had to consider a scaled-down variant. There exists small-scale AES by Cid et al. Still, with complexities in the order of 2^(3n/4), we would have needed roughly 2^50 operations, since we also had to consider multiple random keys. So we had to scale small AES down further, to a 3×3 version with a 36-bit state, and we considered the cross-road distinguisher. Our goal was: are we able to correctly identify correct message pairs with our approach? What we wanted to see is the number of quartets for message pairs whose difference lies in that tweak-difference space.
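The complexity arithmetic above can be checked with simple exponent bookkeeping. This is purely our own sanity-check of the numbers stated in the talk, no cryptography involved.

```python
import math

# Back-of-the-envelope check of the data complexity (exponents in log2):
log_correct_pairs = 26.5    # 2^26.5 correct message pairs needed
log_pair_filter = 32.0      # a pair is correct with probability ~2^-32
log_pairs = log_correct_pairs + log_pair_filter   # 2^58.5 pairs in total
log_msgs_per_set = log_pairs / 2                  # two sets of 2^29.25 messages
assert log_msgs_per_set == 29.25

log_tweaks_per_msg = 83.3   # tweaks needed per message to separate the counters
log_data = math.log2(2 * 2**log_msgs_per_set * 2**log_tweaks_per_msg)
assert abs(log_data - 113.55) < 0.01              # ~2^113.6 chosen queries
```

Two sets of 2^x messages yield 2^(2x) cross pairs, which is why halving the pair exponent gives the per-set message count.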
And what is the number of quartets built for message pairs that do not have a difference after π1 that lies in the space? What we found matched our expectation. In addition, we saw a huge gap between the number of quartets for message pairs with the desired difference after π1 and the number for message pairs without it. So we know that the security of TNT is upper bounded by at most √n · 2^(3n/4) queries. Next, we considered a constructive approach to find a lower bound for its security. Recently, there was interesting related work by Jha and Nandi, who showed 3n/4-bit security for CLRW2. We found that we could consider the forward direction only. In CLRW2, the outer hash-function calls and the addition of the message act like an ε-almost-universal hash function. In TNT, we observed that the first permutation call together with the tweak addition also acts like an ε-almost-universal hash function. We noted that we could rewrite the internal variables of TNT to be very similar to the processing of CLRW2. Only at the end, no hash-function call happens, so the ciphertext is not masked; to model this, we let the ideal oracle also choose a random first permutation out of the set of all n-bit permutations. In their proof, Jha and Nandi defined two sets of bad events: one set that concerned the inputs and outputs of the hash functions, and one set where the ideal world could not sample variables as in the real world in the very center of the construction. The former bad events are the core difference between the proof by Jha and Nandi for CLRW2 and our proof for TNT. The sampling strategy and, later, the analysis of good transcripts are basically the same for both. TNT does not have hash functions as such; therefore, we considered what we call bad hash equivalents. Those are the bad events that also occurred in the proof by Jha and Nandi.
However, note that in our model of the ideal world, the first permutation call also comes from a random permutation. Jha and Nandi defined in total seven such bad events. The first is a simultaneous collision for two queries, with a collision in X and a collision in U. Here, this cannot occur: for a collision in X, we need different tweaks and different messages, and if we have different values of T and equal values of X, then the queries cannot collide at the end in U. By the same argument, the probability of the second event, a simultaneous collision in X and in T, is also zero. Similarly, we cannot have a collision in U and a collision in T at the same time. Next, consider two pairs of queries, (i, j) and (k, l), both of which collide in X, where two of them, j and k, also collide in U at the end. This is bounded in Jha and Nandi's work by the probability for alternating collisions, which is q² · ε^1.5. Since our ε is quite small, as we have a random permutation, we can bound this by q² / 2^(1.5n). Bad event 5 is a similar but flipped event, where two pairs of queries each collide in U and one query among each also collides in X. By the same argument as for the previous bad event, we obtain the same bound there. What remained were multi-collisions in X and multi-collisions in U. Those could be bounded, as by Jha and Nandi, using a Markov-bound argument, and for a suitable choice of the multi-collision parameter we obtained 16·q⁴ / 2^(3n) for each of those events. Over all bad hash-equivalent events, we therefore obtain a bound in the order of q² / 2^(1.5n) + q⁴ / 2^(3n). For bad sampling, Jha and Nandi defined a transcript graph, a graph representation of the queries and the relations between X and U for all queries.
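The bad-event bounds collected above can be sanity-checked numerically. This is our own illustration of why the sum q²/2^(1.5n) + q⁴/2^(3n) corresponds to 3n/4-bit security: both terms reach Θ(1) exactly at q = 2^(3n/4).

```python
import math

# Both dominating terms of the bound become constant at q = 2^(3n/4):
n = 128
q = 2.0 ** (3 * n / 4)
assert math.isclose(q**2 / 2.0**(1.5 * n), 1.0)
assert math.isclose(q**4 / 2.0**(3 * n), 1.0)

# At fewer queries (e.g. the birthday bound q = 2^(n/2)), both terms
# are still tiny, so the adversarial advantage stays negligible:
q_small = 2.0 ** (n / 2)
assert q_small**2 / 2.0**(1.5 * n) < 2.0**(-60)
assert q_small**4 / 2.0**(3 * n) < 2.0**(-60)
```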
What they defined were the interesting components; all further components that could occur had negligible probabilities. They grouped the components into five types and defined sets of components for the individual types. The ideal-world oracle then tries to sample Y and V consistently, and if this is not possible for a transcript, this defines a bad event, where the queries are treated depending on which components they belong to. Here, we can use the same sampling strategy and obtain the same bound as for CLRW2, which is in the order of q⁴ / 2^(3n). Similarly, the analysis of good transcripts is very similar to that for CLRW2, and we get basically the same bound. In sum, if we consider not ideal independent permutations but independently secure block ciphers, by summing up we arrive at a bound in the order of q² / 2^(1.5n) + q⁴ / 2^(3n), which corresponds to 2^(3n/4) TPRP security. To summarize, we considered the security of TNT from a constructive and from an attacker's perspective. From the constructive view, we could derive a bound of 3n/4-bit security when considering TNT in the forward direction only. From the view of attacks, we showed that Mennink's attack is also applicable to TNT, so at most √n · 2^(3n/4) queries are needed to distinguish it from random. For TNT-AES, we further showed that the outer layers need more than five rounds, since otherwise we could mount an impossible-differential attack. Combined with the analysis by the authors of TNT that considered five inner rounds, we now know that six rounds of AES per call are at least a lower bound for a secure version. We emphasize that our work does not violate the security claim of TNT that it provides at least 2^(2n/3) queries of security. Neither did we violate the claim that TNT-AES with six rounds in each instance is a secure cipher.
We'd like to emphasize that we stand on the shoulders of others' great works. Our contribution is to identify the structural similarity between CLRW2 and TNT, which allowed us to adapt Mennink's distinguisher and the proof methods by Jha and Nandi for CLRW2. An interesting future work would be to fully close the gap from the constructive perspective and show 3n/4-bit security for both the plaintext and the ciphertext direction. We found that the analysis of how to sample the values in the middle in the ideal world is far from trivial and has to be solved for that purpose. So far on this, thank you very much for your attention.