So for the last 40 years, our community has developed hundreds of block ciphers, but if we look at all those designs, 99% of them follow a very simple design idea, namely: take a relatively simple round function and iterate it over and over again with some key schedule. And we have two basic ways to choose the round function. We can either use regular networks, namely SP networks (even though I'll not use the fact that there are S-boxes and linear mappings; what matters is that you apply an invertible operation in each round), or we can use a Feistel structure, in which you divide the input into two parts, not necessarily equal, apply a not necessarily invertible function to one part, and then XOR the result into the other side. This is basically what we have been doing for the last 40 years, and the fundamental question, in my opinion, is to try to really understand how much security we gain by iterating a good round function over and over again. So I think that after 40 years of analysis, we should have reached some conclusions about how security accumulates as you increase the number of rounds. In the case of indistinguishability, the information-theoretic analysis, Jacques Patarin has done a great job in trying to understand, as you increase the number of Feistel rounds, how close the result gets to a random permutation.

But if I have to summarize the situation: you know that Gauss in his famous book wrote that prime numbers are such fundamental objects that the dignity of mathematics requires that we understand them: understand their distribution, understand how to distinguish primes from non-primes, and so on. So, paraphrasing Gauss, I would say that the dignity of cryptography demands that we understand what happens when we concatenate several rounds on top of each other, assuming that each round by itself has no weaknesses.

Now, there were some recent surprises in developing generic attacks on such generic structures, attacks which do not look at all into the internal properties of the round function. At Crypto 2012, the same team of researchers presented improved generic attacks on all regular networks (those which are not Feistel) with at least four rounds. So any scheme with four or more rounds suddenly had better attacks than previously known. And what I'm going to present today is a similar kind of improvement, where we use new techniques to develop improved generic attacks on all Feistel networks with at least five rounds. So altogether, since most of our designs have more than four or five rounds, we are getting new improved attacks on the majority of structures when we ignore internal properties.

The new generic attacks are actually so efficient that they even improve the best-known concrete attacks on a variety of cryptosystems. Here are some examples. DEAL-256 is a design with eight rounds in which each round function is extremely strong: it's a full DES. This was one of the proposals submitted to the AES competition. Because the round function is a full DES, we can think about it as a kind of random function. And by using our new techniques, which work generically, we can improve the memory complexity of the best-known attack on DEAL-256 by a large factor of 2 to the power of 56. Another example: in the case of CAST-128, we can reduce the memory complexity again, by a large factor of 2 to the 47, and no better attack is currently known.

So the basic cryptanalytic problem is the following. You are given a bunch of known plaintext-ciphertext pairs.
All the attacks I'm going to describe are not in the chosen-plaintext but in the known-plaintext model. And typically, we are going to talk about a small number of plaintext-ciphertext pairs. The goal is to find a key that maps all the plaintexts into the corresponding ciphertexts. We can assume that the plaintexts and ciphertexts are n bits long, that there are r rounds, and that each round gets an independent n-bit round key. Therefore, the total number of key bits is r times n. Information theoretically, if everything is random, you expect that after seeing r corresponding plaintext-ciphertext pairs, the rn bits of the key are likely to be uniquely defined. Most of the attacks I'm going to look at will deal with this minimal amount of data, and later on I'll mention what happens when you are given more data.

Also, a small remark: I'm not talking about key-alternating schemes, in which you XOR a key, apply a keyless round function, XOR a key, apply a round function, and so on. I'm talking about the most general case, in which, in each round, you are mixing together in an arbitrary way the n-bit key with the n-bit input. So we can think about the problem as an execution matrix, where vertically I think about applying the r rounds. Here is the first plaintext: I apply the first round, then apply the second round to the result, and so on, until I get the first ciphertext. And here is the second given plaintext. All the columns are independent of each other, because I'm starting from arbitrary known plaintexts with no relationship between them. And all the rows are also independent, because the first key, the second key, the third key, and so on are all assumed to be unrelated n-bit round keys. There is no key schedule here.

Now, if you are talking about a single round, there's nothing better you can do than exhaustive search, and it's easy to show that this is optimal. So the time is 2 to the n (you have to go over all possible keys for a single round) and the memory is constant. Once we go to two rounds, something better than exhaustive search can already be achieved. This was a very nice idea that Diffie and Hellman invented in the late 70s and published in 1981: the meet-in-the-middle attack on double encryption. The result is that double encryption with two independent n-bit keys can be broken with the same time as a single encryption, but at the cost of enlarging the memory to 2 to the n as well.

I'll very quickly show you the basic idea of meet-in-the-middle. We need two plaintext-ciphertext pairs because there are two keys, so you are given (p1, c1) and (p2, c2), but let's ignore the second pair for the time being. I take p1, encrypt it under all possible keys k1, and I get 2 to the n possible suggestions for the value in the middle, x. Then I keep this table, but sorted not according to the order in which I generated it (which was going over all the k1's) but in the order of this intermediate value x, so I can quickly search for any desired value of x in this table of size 2 to the n. Now I take the first ciphertext and decrypt it under all the possible k2's. For each one, I get a suggestion for x, which I search for in the table, and therefore I get a suggestion for a corresponding k1, usually unique, sometimes a few. So for each k2 you get one, or maybe a small number, of corresponding k1's. You take the suggested (k1, k2) pairs and check their validity against the second plaintext-ciphertext pair, and only one of the 2 to the n suggested combinations is likely to survive this second test.
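To make this concrete, here is a minimal sketch of meet-in-the-middle on double encryption in Python. The 8-bit block size and the toy_encrypt/toy_decrypt keyed permutation are illustrative assumptions standing in for a real n-bit cipher, not anything discussed in the talk.

```python
import random

N = 8                      # toy block/key size in bits; a real cipher would use a much larger n
MASK = (1 << N) - 1

def toy_encrypt(k, x):
    # An arbitrary keyed permutation standing in for one full encryption.
    x = (x + k) & MASK
    x = ((x << 3) | (x >> (N - 3))) & MASK   # rotate left by 3
    return x ^ k

def toy_decrypt(k, y):
    # Exact inverse of toy_encrypt under the same key.
    y ^= k
    y = ((y >> 3) | (y << (N - 3))) & MASK   # rotate right by 3
    return (y - k) & MASK

def mitm_double(p1, c1, p2, c2):
    # Phase 1: encrypt p1 under every k1 and index the 2^N middle values x.
    table = {}
    for k1 in range(1 << N):
        table.setdefault(toy_encrypt(k1, p1), []).append(k1)
    # Phase 2: decrypt c1 under every k2, look the suggested x up in the
    # table, and filter the surviving (k1, k2) pairs with the second pair.
    for k2 in range(1 << N):
        for k1 in table.get(toy_decrypt(k2, c1), []):
            if toy_encrypt(k2, toy_encrypt(k1, p2)) == c2:
                return k1, k2   # may be a pair equivalent to the planted one
    return None

# Usage: plant two random keys, generate two known pairs, recover the keys.
k1, k2 = random.randrange(1 << N), random.randrange(1 << N)
p1, p2 = random.sample(range(1 << N), 2)
c1 = toy_encrypt(k2, toy_encrypt(k1, p1))
c2 = toy_encrypt(k2, toy_encrypt(k1, p2))
print(mitm_double(p1, c1, p2, c2), "planted:", (k1, k2))
```

Both phases cost 2 to the n operations and the table holds 2 to the n entries, matching the time and memory stated above.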
So this is a well-known technique, nothing new here. The time, as I said before, is 2 to the n, because you separately encrypt at the top under all possibilities and separately decrypt at the bottom, and the memory is the size of the table, 2 to the n.

Now, what happens when you go to three rounds? Motivated by three-key Triple DES, people had looked very, very extensively at the security of triple encryption with independent keys. Unfortunately, the only idea that people came up with is: guess the first key, which reduces the triple encryption to a double encryption, and then do meet-in-the-middle. So basically you increase the time from 2 to the n to 2 to the 2n and keep the memory the same, because for each guess you can reuse the memory, and the product of time and memory is still the same as the number of possible keys.

Okay, so it looked as if we fully understood one, two, and three rounds, and then the surprise in 2012 was that as soon as we get to four rounds, suddenly there are more efficient algorithms that you can apply. So we shouldn't have stopped at three. In meet-in-the-middle attacks, time times memory is equal to the number of keys, so this can be viewed simply as a trade-off: if you want to decrease the time by a certain multiplicative factor, you have to increase the memory complexity by the same factor. You just shift from time to memory. And until recently, most people believed that you can split the cipher at various places and do all kinds of things, but that basically time times memory should be equal to the number of keys in generic attacks. But in 2012 we showed that there's a much richer theory behind it, and in fact I'm now going to start by recalling the simple proof that four-round encryption can be broken with the same time and memory complexities as breaking three-round encryption, by using something which we called a dissection technique.

So what's the difference between meet-in-the-middle and dissection techniques? In meet-in-the-middle, I'm guessing keys at the top and keys at the bottom, I'm encrypting from the top towards the middle and decrypting from the bottom towards the middle, and I never know what the value in the middle is. I'm using it only as a filtering condition: whatever it is, it should be the same whether you come from the top or from the bottom. In dissection attacks you use a different idea: you guess what the value in the middle is, not all the way through, but some of the values in the execution matrix. What does this enable you to do? It enables you to partition the problem into a known-plaintext problem with a reduced number of rounds at the top, and another partial known-plaintext problem at the bottom. So now it's a completely different kind of recursion. If I have to think about the effect of guessing a key versus guessing a value: if I guess, for example, the first key, I reduce the problem from r rounds to r-1 rounds, but now I know all the plaintexts and all the ciphertexts of an (r-1)-round cryptosystem.
If I guess a value somewhere in the execution matrix, then I get a partition into two partial problems: let's say t rounds at the top and r-t rounds at the bottom, and in each part you know some of the plaintexts and some of the ciphertexts. So it's a different idea. Very quickly, in dissection attacks, for four rounds you guess the middle value in the first plaintext-to-ciphertext execution. This enables you to solve the two halves with meet-in-the-middle, get some suggestions, and then verify them by looking at the second plaintext-ciphertext pair. You'll see more examples once I get to the new attacks, so I'll skip the details; you can see them in the paper, and there is a toy code sketch below. The total time is 2 to the 2n and the total memory is 2 to the n, which is the same as for triple encryption. So I went from triple encryption to quadruple encryption without increasing either the time or the memory complexity.

Dealing with a larger number of encryptions: if I go to five rounds, the best thing we know how to do is to guess the fifth round key and solve the remaining four rounds with dissection. For six rounds, the best thing we know how to do is guess two keys and then again solve the remaining four rounds by dissection. So it looks as if it was a one-time lucky event that going from three to four rounds we didn't increase the complexity, but in fact, attacking seven rounds turns out to have the same time and memory complexity as attacking six rounds. So again I'm saving, and the savings start to accumulate, because once I'm attacking seven rounds, I have the saving from going from six to seven and also the saving from going from three to four. So things get better and better.

The attack on seven rounds is based on dividing the cipher into unequal parts, three rounds at the top and four at the bottom, and guessing two values in the execution matrix. Now consider the number of possible triplets of keys at the top once two of the intermediate values are known: there were 2 to the 3n possible key triplets, but I'm imposing 2n bits of conditions, so only 2 to the n possible triplets remain, small enough to store in a table of size 2 to the n. Then, at the bottom, I make recursive use of the four-round dissection technique, which requires guessing another value in the middle. Skipping the analysis, what you get is the following: for seven rounds, the time is 2 to the 4n and the memory is 2 to the n, so the product is 2 to the 5n instead of the number of keys, which is 2 to the 7n, because I saved twice, once going from three to four and once going from six to seven.

So there is a whole magic sequence of r's for which such a saving happens: when you go from three to four rounds, from six to seven, from ten to eleven, and then at sixteen, twenty-two, and twenty-nine. These are the places where our recursion saves, and you accumulate the savings. So this was a very strange property.
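Before switching to Feistel structures, here is the promised toy sketch of four-round dissection, under the same illustrative assumptions as before (an 8-bit toy round standing in for a real n-bit round; all names are hypothetical). Guessing the middle value x of the first execution splits the problem into two independent two-round meet-in-the-middle problems, and the roughly 2 to the n suggestions from each side are matched on the middle value they imply for the second execution.

```python
import random

N = 8
MASK = (1 << N) - 1

def rnd(k, x):
    # One toy keyed round: invertible for a fixed key, otherwise arbitrary.
    x = (x + k) & MASK
    return (((x << 3) | (x >> (N - 3))) & MASK) ^ k

def rnd_inv(k, y):
    y ^= k
    y = ((y >> 3) | (y << (N - 3))) & MASK
    return (y - k) & MASK

def enc4(keys, x):
    for k in keys:
        x = rnd(k, x)
    return x

def two_round_mitm(p, c):
    # All (ka, kb) with rnd(kb, rnd(ka, p)) == c; about 2^N of the 2^(2N)
    # candidate pairs survive the N-bit matching condition.
    table = {}
    for ka in range(1 << N):
        table.setdefault(rnd(ka, p), []).append(ka)
    return [(ka, kb) for kb in range(1 << N)
            for ka in table.get(rnd_inv(kb, c), [])]

def dissect4(pairs):
    (p1, c1), (p2, c2) = pairs[0], pairs[1]
    for x in range(1 << N):                   # guess middle value of execution 1
        top = {}                              # index top halves by the middle
        for k1, k2 in two_round_mitm(p1, x):  # value they imply for execution 2
            top.setdefault(rnd(k2, rnd(k1, p2)), []).append((k1, k2))
        for k3, k4 in two_round_mitm(x, c1):  # bottom halves of execution 1
            x2 = rnd_inv(k3, rnd_inv(k4, c2))
            for k1, k2 in top.get(x2, []):
                keys = (k1, k2, k3, k4)
                # Final filter with the remaining pairs; the survivor maps
                # every given plaintext to its ciphertext (for this toy
                # cipher it may be a key equivalent to the planted one).
                if all(enc4(keys, p) == c for p, c in pairs[2:]):
                    return keys
    return None

# Usage: plant four random round keys and give the attack four known pairs.
keys = [random.randrange(1 << N) for _ in range(4)]
pairs = [(p, enc4(keys, p)) for p in random.sample(range(1 << N), 4)]
print(dissect4(pairs), "planted:", tuple(keys))
```

The outer guess contributes a factor of 2 to the n and the inner meet-in-the-middle another 2 to the n, so the time is 2 to the 2n, while no table ever holds more than about 2 to the n entries, matching the complexities quoted above.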
But now let's switch to Feistel structures. In order to compare the meet-in-the-middle and dissection approaches on Feistel structures, we have to understand why meet-in-the-middle is more effective on Feistel than on regular networks. For the sake of convenience, we assume that the round function has only half-size inputs: n/2-bit inputs, n/2-bit outputs, and also n/2-bit round keys, so that guessing a key and guessing a value cost the same; otherwise it's harder to compare. So I can always consider a pair of Feistel rounds, each with an n/2-bit round key, as one regular round with a full n-bit key.

Now let's look at why meet-in-the-middle handles an odd number of rounds more efficiently: you can totally ignore the middle round. Why? I guess the first three round keys, so I can encrypt from the top and compute both halves of the state just above the middle round. I guess three round keys from the bottom and again compute both halves of the state just below the middle round. Due to the Feistel structure, these two states must agree on half the state, and therefore I don't have to guess anything or do anything about the key of the middle round. So I get a middle round for free. That's why meet-in-the-middle is better for an odd number of rounds than it would normally be. And this is where we were stuck for three years, between 2012 and 2015, because our dissection didn't have the same kind of improvement where you could skip the middle round.

So let me summarize the two complexities. Meet-in-the-middle attacks on seven rounds require time 2 to the 1.5n (that's how much you need to guess three n/2-bit keys) and memory 2 to the 1.5n, so time times memory for meet-in-the-middle on seven rounds is 2 to the 3n. For dissection, I can think of seven Feistel rounds as four regular rounds, each one being two Feistel rounds, so the time for a four-round attack is 2 to the 2n and the memory is 2 to the n; again, time times memory is 2 to the 3n. So there is no improvement of dissection over meet-in-the-middle for an odd number of rounds. However, here are the results of our improved attack, quoting the previous ones: meet-in-the-middle had time 2 to the 1.5n and memory 2 to the 1.5n, dissection had 2 to the 2n and 2 to the n, and the product was 2 to the 3n in both cases. Our new attack gets the best of both worlds: time 2 to the 1.5n, which is the minimum of the two, and memory 2 to the n, which is the minimum of the two.
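The free middle round can be checked concretely. The following sketch, again under toy assumptions (4-bit halves, so n = 8, and a made-up round function F), does not run the full 2 to the 1.5n search; it only verifies the filtering condition that such a search would match on: the half-state reached from the top after round three equals the half-state reached from the bottom after round four, with the middle round key never used.

```python
import random

H = 4                       # half-block size n/2, so n = 8 in this toy example
HMASK = (1 << H) - 1

def F(k, x):
    # Toy keyed round function; it need not be invertible in a Feistel network.
    return ((x * 5 + k) ^ (x >> 1)) & HMASK

def feistel_round(k, L, R):
    return R, L ^ F(k, R)             # (L', R') = (R, L xor F(k, R))

def feistel_round_inv(k, L, R):
    return R ^ F(k, L), L             # undo one round: new R is the old L

def encrypt(keys, L, R):
    for k in keys:
        L, R = feistel_round(k, L, R)
    return L, R

# Seven independent n/2-bit round keys and one known plaintext-ciphertext pair.
keys = [random.randrange(1 << H) for _ in range(7)]
P = (random.randrange(1 << H), random.randrange(1 << H))
C = encrypt(keys, *P)

# Top direction: with K1..K3 guessed, encrypt three rounds to get (L3, R3).
L3, R3 = encrypt(keys[:3], *P)

# Bottom direction: with K5..K7 guessed, decrypt three rounds to get the
# state (L4, R4) just after round four. Note that keys[3] (= K4) is unused.
L4, R4 = C
for k in reversed(keys[4:]):
    L4, R4 = feistel_round_inv(k, L4, R4)

# The Feistel relation L4 == R3 holds without ever touching the middle key.
assert L4 == R3
print("half-state match without K4:", L4 == R3)
```

In the actual attack you would enumerate the 2 to the 1.5n guesses for (K1, K2, K3), store the resulting states in a table, enumerate the 2 to the 1.5n guesses for (K5, K6, K7), and match on this n/2-bit equality; the remaining half of the state then constrains K4 instead of requiring a guess for it.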
Okay, I think I'm going to skip quite a large number of slides now and just give you the highlights of the basic idea. We loop over each possible n/2-bit value of R3, which sits about halfway through the 7-round encryption. For each such value, we prepare a table consisting of triples (L2, R2, K3), which has been precomputed in a way that I don't have time to explain, and the size of the table is 2 to the n. Then I perform a more sophisticated meet-in-the-middle combined with dissection, and I get the improved result that I mentioned before.

Okay, here is the magic sequence that we now get for Feistel structures: as I increase the number of rounds again and again, I reach points at which I'm suddenly not paying any added complexity for the added Feistel round. So alongside the original magic sequence for regular SP networks, for Feistel networks our new techniques give us improvements starting at 5 rounds (that's why I only get improvements from 5 onwards) and then at 10, 15, 22, and 29: a different sequence.

Now, skipping the multi-collision details, let me show you a table which summarizes our improvements. For 5 rounds, the minimum for which we get an improvement, meet-in-the-middle could give you time 2 to the n and memory 2 to the n, because you guess two n/2-bit round keys from the top and two from the bottom separately; if you use a slightly different partitioning, you get 2 to the 1.5n and 2 to the 0.5n. In both cases the product is 2 to the 2n. Our new attack has time 2 to the n and memory 2 to the 0.5n, so the product is 2 to the 1.5n. For 7 rounds, I already showed you how we get a product of 2 to the 2.5n, whereas all the previous attacks had a product of 2 to the 3n. For 8 rounds, the product for meet-in-the-middle is 2 to the 3.5n; with dissection I could get 2 to the 3n, and with the new technique it is also 2 to the 3n, but if you don't insist on the smallest possible amount of memory and are willing to use more of it, you can reduce the time from 2 to the 2n to 2 to the 1.75n and get something better than before. For 15 rounds, the product is 2 to the 7n for meet-in-the-middle and 2 to the 6n for dissection, while the new attack gives 2 to the 5.5n, and if you are willing to use a little bit more memory, you still get the 5.5 here.

Conclusions and open problems. We presented several new generic cryptanalytic algorithms which generalize the dissection attacks from 2012. We had to make changes when moving from regular networks to Feistel networks, because we had to fight the fact that meet-in-the-middle is slightly more efficient for Feistel. And this allowed us to improve the best known concrete (not just generic) attacks on several block ciphers, like DEAL and CAST, especially those with relatively few rounds of relatively strong round functions. The main open problem: while we keep improving and finding new attacks, we have no idea how to prove the optimality of our attacks. So it would be extremely interesting to look not at the information-theoretic question of distinguishability from a random oracle, but at the cryptanalytic problem of finding the keys, that is, key-recovery attacks. Can you show that generic key-recovery attacks on a given number of rounds can never be better than a certain complexity? Thank you very much.