OK, so we're getting ready now for the second talk of this session. The title is "Efficient and Optimally Secure Key-Length Extension for Block Ciphers via Randomized Cascading," and the authors are Peter Gaži and Stefano Tessaro. Peter will be giving the talk.

Thank you very much. So the topic of our paper is the question of key-length extension for block ciphers, and therefore I will start by recalling the notion of a block cipher and introducing the problem of key-length extension before presenting our results. Block ciphers are well-known and widely used cryptographic primitives. We can see a block cipher as a mapping that takes n bits of plaintext and kappa bits of a secret key and outputs n bits of ciphertext, in such a way that for every fixed key, the mapping obtained is a permutation on n-bit strings, so that it is decryptable, of course. But for most of the applications that use block ciphers, the security guarantee that we actually require is that the block cipher be a pseudorandom permutation, which means that the block cipher, when used with a secret, randomly chosen key, is indistinguishable from a random permutation. More precisely, we consider a distinguisher that is placed in one of two different settings: either it is interacting with the block cipher used with a random secret key that is hidden from the distinguisher, or it is interacting with a randomly chosen permutation over the plaintext domain of the block cipher. The distinguisher is allowed to ask both forward and backward queries to each of these permutations, and after some such interaction it is expected to output a single bit representing its guess whether it is in the left or in the right world. Then we can, of course, define the PRP advantage of this distinguisher against the block cipher, which is the difference between the probabilities that the distinguisher outputs one in these two worlds.
And we can define the PRP security of the block cipher in question as the amount of resources that a distinguisher needs to achieve a constant distinguishing advantage between these two settings, constant meaning, for example, one half. We still need to make more precise what we mean by resources, and which resources we care about, and I will do this shortly. But before that, let me make a simple observation about the security of block ciphers. If a block cipher is to be a good pseudorandom permutation, it is essential that its key length is sufficient. This is because the distinguisher can always mount the following attack. If he is given an oracle, which is either the random permutation or the block cipher, he can issue a single query to this oracle, let's say on the all-zero input, and then try to encrypt the same all-zero value using the block cipher E with all possible keys on his own, comparing the results to the answer obtained in the first step. If a match occurs, this suggests that the key used in that encryption is actually the key being used by the block cipher, and that the distinguisher is talking to the block cipher, so he is in the right scenario, not the left one. This can easily be verified by the distinguisher using a different input. The whole attack requires only two to the kappa evaluations of E. Since it allows the distinguisher to succeed in his distinguishing task, this gives an upper bound of two to the kappa on the PRP security of the block cipher E, which can be a problem for block ciphers such as DES that have an unacceptably short key length, only 56 bits, but apart from that do not show serious structural weaknesses. It is also worth mentioning that this attack is generic, meaning that it can be mounted on any block cipher, even one that is flawlessly designed. So it upper-bounds the PRP security of any block cipher.
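To make the brute-force attack concrete, here is a minimal Python sketch with toy parameters. The "ideal" cipher is simulated by seeding a PRNG per key to get a consistent random permutation; all names and sizes here (ideal_cipher, brute_force_distinguisher, 8-bit blocks and keys) are illustrative assumptions, not from the paper.

```python
import random

# Toy parameters; a real cipher would use e.g. n = 64, kappa = 56 for DES.
N_BITS, K_BITS = 8, 8

def ideal_cipher(key, x):
    """Toy stand-in for an ideal block cipher: a random-looking permutation
    per key, derived deterministically from the key so repeated queries
    are consistent."""
    perm = list(range(2 ** N_BITS))
    random.Random(key).shuffle(perm)
    return perm[x]

def brute_force_distinguisher(oracle):
    """Generic key-search attack from the talk: two oracle queries plus at
    most 2 * 2^K_BITS cipher evaluations. A candidate key matching the
    first query is confirmed on a second input. Returns True if the
    oracle looks like E(k, .) for some key k."""
    y0, y1 = oracle(0), oracle(1)
    for k in range(2 ** K_BITS):
        if ideal_cipher(k, 0) == y0 and ideal_cipher(k, 1) == y1:
            return True
    return False
```

When the oracle really is the cipher under some hidden key, the search is guaranteed to find a consistent key, which is exactly why the cost two to the kappa upper-bounds the PRP security of any block cipher.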
Motivated by this trivial brute-force attack, the question that we address is that of key-length extension. If we are given a block cipher E, our goal is to come up with a construction that uses this block cipher and results, again, in a block cipher with the same block length, but with a key length greater than the previous one, and does so in such a way that the security also increases with respect to generic attacks, meaning that the best generic attack on this construction E' requires more than two to the kappa queries or evaluations. So it requires more effort than the brute-force attack on the underlying block cipher E that I described. To capture these generic attacks, the model that we use is the ideal-cipher model: we model the underlying block cipher E as being ideal, and therefore providing an independent, uniformly randomly chosen permutation for each of the keys. The actual formulation of the key-length extension problem in the ideal-cipher model looks as follows. We have a distinguisher that is, again, in one of two worlds. In the first world, he is allowed to query a random permutation; in the second world, he can query the key-extension construction. Apart from that, he is also allowed in both worlds to query the ideal block cipher E, which in the right world is the same block cipher that underlies our construction. He can query this block cipher in both directions, meaning he is allowed to issue encryption and decryption queries under an arbitrary key of his choice. We then define the complexity of such a distinguisher as the total number of queries it issues during its interaction in the experiment. Now we can make the PRP security definition that I mentioned before more precise by asking how many such queries the distinguisher actually needs to achieve a constant distinguishing advantage between the two settings described above.
So this is the key-length extension problem. Before I move to our results in this area, I would like to mention two existing approaches to key-length extension. The first one is cascading. This is based on the simple idea that if you have a block cipher, you can apply it multiple times with independently chosen keys, and what you get is encryption by a new block cipher with a greater key length. The trivial application of this approach leads to double encryption, but it is well known that double encryption is susceptible to the meet-in-the-middle attack, which I will now briefly review. Given an oracle which is either the double encryption or a random permutation, the distinguisher can, again, query this oracle on a value, say all zeros. Then he can use his block cipher oracle to encrypt this all-zero value under all possible keys, obtaining all possible intermediate values, denoted by u in the figure. Then he can do the same from the other side: he can take the value y obtained in the first step and try to decrypt it under all possible keys, obtaining another set of possible intermediate values. Then he just tries to find a match between these two sets of values. If such a match is found, it suggests that the corresponding keys were the keys used in the double encryption, and that the distinguisher is actually talking to the double encryption and not to a random permutation. This can, again, be verified using other values. And we can see that this attack only requires about two to the kappa evaluations of the block cipher and its inverse. So double encryption does not provide any significant security increase, which leaves us with triple encryption as the shortest cascade from which we can expect a reasonable security increase.
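The meet-in-the-middle attack just described can be sketched in a few lines of Python, again with a toy seeded-PRNG cipher and illustrative 8-bit sizes (all names here are assumptions for the sketch, not from the paper):

```python
import random

N_BITS, K_BITS = 8, 8  # toy sizes

def E(key, x):
    """Toy ideal cipher: one deterministic random permutation per key."""
    perm = list(range(2 ** N_BITS))
    random.Random(key).shuffle(perm)
    return perm[x]

def E_inv(key, y):
    """Inverse direction of the same toy cipher (decryption query)."""
    perm = list(range(2 ** N_BITS))
    random.Random(key).shuffle(perm)
    return perm.index(y)

def meet_in_the_middle(oracle):
    """Recover a key pair consistent with double encryption, using about
    2 * 2^K_BITS cipher evaluations instead of 2^(2 * K_BITS)."""
    y0 = oracle(0)
    mid = {}  # intermediate value u -> all keys k1 with E(k1, 0) == u
    for k1 in range(2 ** K_BITS):
        mid.setdefault(E(k1, 0), []).append(k1)
    for k2 in range(2 ** K_BITS):
        u = E_inv(k2, y0)  # decrypt y0 from the other side
        for k1 in mid.get(u, []):
            # verify the candidate pair on a second plaintext
            if E(k2, E(k1, 1)) == oracle(1):
                return k1, k2
    return None
```

The recovered pair need not equal the original keys, but it is consistent with the oracle on both tested inputs, which already suffices for the distinguishing task.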
And indeed, it was shown by Bellare and Rogaway that triple encryption is secure up to two to the kappa plus the minimum of n over two and kappa over two queries, also in the ideal-cipher model. There is also an upper bound on the security for the case of triple DES used with independent keys, which was given by Lucks. And it was also observed by Ueli Maurer and myself that for longer cascades, the security improves further with the length of the cascade if the block cipher has a smaller key length than message length. So this was the first approach, cascading. The second approach that I want to mention, because it will be useful later, is key whitening, which is, for example, used in the DESX construction proposed by Rivest. It is based on the idea of using additional keys to hide the inputs and outputs of the block cipher by a simple XOR operation. This construction was also studied by Even and Mansour, as discussed in the previous talk, and it was shown to be secure up to two to the kappa plus n over two queries in the ideal-cipher model by Kilian and Rogaway. This proof also covers the case where the two keys used for the whitening steps are the same, so there is just a single whitening key. From what we have seen so far, none of the efficient constructions that we mentioned is secure beyond two to the kappa plus the minimum of n over two and kappa over two queries. And if we want to achieve security beyond the basic level of two to the maximum of kappa and n, we have to pay a price in terms of efficiency, because we have to use triple encryption, and that requires three block cipher queries per single invocation. So the question that we actually ask is: what can be achieved with more efficient constructions? And we look at constructions that issue at most two queries to the underlying block cipher per invocation.
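The key-whitening idea is simple enough to state in one line of code. Below is a minimal sketch in the DESX style, reusing the toy seeded-PRNG cipher from before; the function name desx_like and the 8-bit block size are illustrative assumptions, not the actual DESX specification:

```python
import random

N_BITS = 8  # toy block size

def E(key, x):
    """Toy ideal cipher: one deterministic random permutation per key."""
    perm = list(range(2 ** N_BITS))
    random.Random(key).shuffle(perm)
    return perm[x]

def desx_like(k, z1, z2, x):
    """Key whitening: XOR extra key material into the input and output of
    a single cipher call. The key length grows from kappa to kappa + 2n,
    or kappa + n in the single-whitening-key variant z1 == z2."""
    return E(k, x ^ z1) ^ z2
```

Since XOR with a fixed value is a bijection, the whitened map is still a permutation for every fixed key triple, so decryption remains possible.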
So let me now address this question by first showing what we cannot achieve, describing some generic attacks that we give in the paper. First, we focus our attention on one-query constructions, which issue only a single query to the underlying block cipher per invocation. We show that none of these constructions can be expected to provide a reasonable security increase, because they cannot be secure beyond the bound of two to the maximum of kappa and n. I will briefly sketch the attack, but only for the special case of constructions that are injective, in the sense that for a fixed key, distinct inputs to the construction imply distinct queries to the underlying block cipher. This is a special case, but in the paper we prove the statement for all one-query constructions. In such a setting, the distinguisher can first issue roughly two to the (n plus kappa) over two random distinct queries to the underlying block cipher. In the second step, he issues the same number of queries to the oracle O, which is either the construction or a random permutation. And in the third step, he does not issue any queries to the oracles at all; he just performs some computation. More precisely, he tries to evaluate the construction by himself on all the values that were queried in the second step, on all these y values. Of course, he can evaluate the construction only if he knows the answer to the query that the construction would ask the underlying block cipher, and the only way he could have learned this value is in the first step, by querying the block cipher itself. But based on the number of queries asked in the first and second steps and on the injectivity property that we assume, we can expect that for at least one y value this will be possible, and the distinguisher will be able to evaluate the construction. He will try to do this for all possible keys k'.
And of course, if he manages to evaluate the construction, he will compare the result with the value obtained in the second step. If he is actually talking to the construction, this check will succeed when the right key is used; but if he is talking to a random permutation, the check will most probably fail. So this allows him to distinguish. It takes some more technical work to show that constructions that do not issue injective queries do no better in this respect, and combining these two claims, we arrive at the above statement for all one-query constructions. This, of course, leads to questions about two-query constructions: what can we say about them? Here we also give an attack, an upper bound on security. We show that a wide and very natural class of two-query constructions can achieve at most two to the kappa plus n over two query security. This is, again, the class of constructions with injective queries, meaning that for a fixed key, distinct inputs to the construction imply distinct first queries, and distinct outputs from the first query imply distinct second queries by the construction. But if we look at this bound, we see that it already allows for a significant security improvement, and therefore we can look for two-query constructions that achieve it. This is what we actually provide as the main result of our paper. So let me finally get to presenting our construction. As you can see, it is very simple. It uses the paradigm that was discussed also in the last talk: it is just double encryption with two additional whitening steps, one taking place before the first encryption and one between the two encryptions. We use the same key z for both of the whitening steps, and for the encryption steps we derive the second key from the first key by a simple publicly known operation, like a single bit flip.
So we need an n-bit key for the whitening steps and a kappa-bit key for the encryption steps, meaning that the total key length of our construction is kappa plus n. The main result of our paper is that this double-XOR cascade, as we call the construction, is secure up to two to the kappa plus n over two queries. So it meets the upper bound given by the generic attack on two-query constructions that I presented earlier, and in this sense it is optimal within this class. I will not give the whole security proof, of course, but I will mention some important points. Let me recall that the initial setting we are considering has a distinguisher with access to an ideal block cipher and to either a random permutation or our construction using that block cipher. The goal of our proof is to show that the distinguishing advantage of this distinguisher must be small if he issues significantly fewer than two to the kappa plus n over two queries. We do this by reducing to a simpler combinatorial problem and showing that problem to be hard, which is, again, related to the previous talk. The new problem that we consider is the problem of distinguishing only permutations; it does not involve block ciphers at all. The setting looks as follows. We have a distinguisher who is given access either to two permutations that are independent and uniformly randomly chosen, or to two permutations that are randomly chosen but correlated in such a way that they satisfy this equation for a random secret value z, meaning that if they were used in our construction in place of the encryption steps, and z were the whitening key, then the whole construction would result in the identity. And we show that distinguishing these two settings is hard for fewer than two to the n over two queries.
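The double-XOR cascade described above can be sketched directly, once more over the toy seeded-PRNG cipher; the names double_xor_cascade and derived_key and the 8-bit sizes are illustrative assumptions for this sketch:

```python
import random

N_BITS, K_BITS = 8, 8  # toy sizes

def E(key, x):
    """Toy ideal cipher: one deterministic random permutation per key."""
    perm = list(range(2 ** N_BITS))
    random.Random(key).shuffle(perm)
    return perm[x]

def derived_key(k):
    """Publicly known key derivation for the second encryption step,
    here a single bit flip as suggested in the talk."""
    return k ^ 1

def double_xor_cascade(k, z, x):
    """Double encryption with whitening by the same n-bit key z before the
    first call and between the two calls; total key length kappa + n,
    at two block cipher calls per invocation."""
    u = E(k, x ^ z)
    return E(derived_key(k), u ^ z)
```

For every fixed key pair (k, z) the result is again a permutation on n-bit strings, so the construction is itself a block cipher with key length kappa plus n.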
And we also show that this actually implies the claimed bound on the security of our construction. Before I get to some final remarks and conclusions, let me mention one last thing about the design of our construction. One might be tempted to consider even simpler constructions using just a single whitening step, either before the encryption steps, in between, or after the two encryption steps; or one might want to use the two whitening steps in a more symmetric way, meaning before and after both encryptions. But it turns out that all of these constructions can be attacked in two to the maximum of kappa and n queries, and therefore do not provide the security increase that we achieve with our construction. So let me summarize the contribution of our paper. We presented a new key-length extending construction for block ciphers that is more efficient than triple encryption, because it requires only two block cipher calls per invocation, and that is also more secure than triple encryption for certain parameters of kappa and n; here you can compare the two bounds. We also give generic attacks that support the optimality of our construction within its class. These attacks show that any one-query construction is insecure beyond two to the maximum of kappa and n queries, and that no two-query construction from the class of injective constructions can be secure beyond two to the kappa plus n over two, which is the bound achieved by our construction. So this is our contribution. Thank you very much.

We do have a little bit of time for a question or two.

[Audience question:] You have two types of queries, and either one of them is costing you a unit of query. It might be worth clearly distinguishing between the two, because one of them is really related to the amount of data available and the other is related to the amount of time you apply in your attack.
So how would your results change if you tried to distinguish between the two types of queries in your bounds? Well, it is true that these are two different types of queries that represent two different things. Actually, some of the results that I mentioned do make a distinction between these queries, for example the result for DESX. But we took the more common approach of considering only the sum of the queries as the complexity measure for the distinguisher, which is the usual way. Typically, if the bound is two to the kappa plus n over two, for example, then in the attacks you need to ask two to the n queries of the more complicated type that you mentioned. So there would be a more fine-grained formulation of our results, but we give the sum of the queries.

[Second question inaudible.]

Well, since we are working in the ideal-cipher model, we don't need any additional assumptions about the block cipher. This is sufficient because in the ideal-cipher model, the permutations for the different keys are independently random. In practice, if you used a real block cipher in this construction, you would need some degree of security against related-key attacks, but in a quite mild form. Of course, you could also use this construction with independent keys for the two encryption steps; the whole proof would go through exactly the same way, only you would have a larger key length and the same security. So you could use independent keys if you wanted, but in the ideal-cipher model you don't need to.

Okay, so let's thank Peter.