Okay, the second talk of the session is Dismantling the AUT64 Automotive Cipher, and the talk is given by Christopher Hicks. Thanks, Emmanuel. So yeah, I'm presenting Dismantling the AUT64 Automotive Cipher, which is work that I did with Flavio Garcia and David Oswald. The structure of the presentation is that first I'm going to introduce the automotive context in which AUT64 is used, then I'm going to describe the cipher, present the cryptanalysis work that we did, and then conclude with the practical implications. So AUT64 is an immobiliser solution, and an immobiliser is an authentication system which is designed to prevent vehicle hot-wiring. The way in which it does this is that a transponder, a passive device, is embedded in your car key, and a coil is placed around the ignition barrel in your vehicle. When the transponder is brought sufficiently close to the ignition, the transponder is powered up, and an authentication protocol proceeds between the transponder and the immobiliser box. Specifically, we looked at an Atmel transponder chip and an immobiliser box from a Mazda. AUT64 is also used as a remote keyless entry system: these are the press-button-to-unlock-vehicle-door systems, and that is where AUT64 was first discovered, in 2016. So AUT64 is a proprietary cipher, and so the first thing that we had to do was to recover the implementation. To do this, we recovered the firmware from the immobiliser box, loaded it up into IDA, and then, by cross-referencing the patents and data sheets, we were able to recover all of the subroutines. What we found is a 64-bit block cipher with a 120-bit key. So to begin with, we were quite surprised, and we thought we might have found quite a secure design. I'm sure many of you have seen recently that the Tesla Model S has been using a cipher with a 40-bit key for some of its keyless entry systems.
And what also makes AUT64 quite unusual and quite interesting is that it has an unbalanced Feistel network structure. Classically, Feistel networks are balanced, iterative, round-based designs where, in each round, half of the state is changed by the output of the round function, whereas in an unbalanced design, some proportion other than half is changed. Also quite unusually, rather than the security of AUT64 resting only on the secrecy of the key, it also rests on the structure of the key. It operates for either 8 or 24 rounds, depending on a bit which is flipped in the transponder. And until now, there's been no in-depth cryptanalysis or study of an immobiliser implementation, and that's what I'm presenting today. So this is the AUT64 block cipher. It takes as input eight bytes, and then a byte permutation is applied. The permuted bytes are then input to a round function f, which outputs one byte in each round. The round function comprises a compression function g, which takes as input eight bytes and outputs a single byte, and then a small substitution-permutation network, which has one S-box at both the input and the output, and a bitwise permutation in between the two. The compression function looks like this. There are two main properties: it operates nibble-wise, and there are three lookup tables. First of all, each byte in the input is divided into its upper and lower four bits. Then the first two lookup tables prescribe permutations of the compression function key part; that's T-U and T-L in this diagram. For each nibble in the input, a nibble from the key is appended and used as input to the third lookup table, T-offset. T-offset outputs nibbles, and an XOR sum of values from T-offset is computed as the output of the function. An AUT64 key has three components.
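To make the structure concrete, here is a minimal Python sketch of an unbalanced Feistel network of the kind just described, in which only one byte of the eight-byte state is changed per round. The byte permutation and the round function below are invented placeholders, not the real key-dependent AUT64 components:

```python
# Minimal unbalanced Feistel sketch. BYTE_PERM and round_function are
# invented stand-ins, NOT the real AUT64 key-dependent components.

BYTE_PERM = [1, 2, 3, 4, 5, 6, 7, 0]  # hypothetical cyclic byte permutation

def round_function(state):
    # Stand-in for f = SPN(g(state)): compresses eight bytes into one.
    x = 0
    for b in state:
        x ^= b
    return (x * 167 + 13) & 0xFF  # toy mixing only

def encrypt(block, rounds=8):
    state = list(block)
    for _ in range(rounds):
        state = [state[i] for i in BYTE_PERM]  # byte permutation
        state[7] = round_function(state)       # one byte changes per round
    return state
```

In the real cipher both the permutation and f are derived from the key, and the same structure is run for either 8 or 24 rounds.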
It has a bit string, the compression function key, which is 32 bits. It has a permutation key part, which describes an eight-element permutation, and it has a substitution key part, which describes a 4-by-4 S-box. This gives us a nominal 120-bit key size. In terms of the dependence of AUT64 on the structure of the key, both the byte permutation in the Feistel network and the bit permutation in the substitution-permutation network are defined by the permutation key part, and the S-boxes in the round function are defined by the substitution key part. And of course, because substitutions and permutations have structure, they're not just random bit strings. The actual entropy in a key is reduced from the 120 bits to only around 91.5, based on the possible combinations of permutations and substitutions. And if we think about this a little bit more, the first thing that you realise is that the byte permutation has to be cyclic. This is because the permutation mixes the output from the round function in the subsequent round with the other bytes, and so if the permutation weren't cyclic, then there would actually be bytes of the plaintext present in the ciphertext, even after an arbitrary number of rounds. We might also want our S-boxes to be resistant to linear and differential cryptanalysis, and a result by Saarinen indicates that there are around 2 to the 40 of these, out of the 2 to the 44 total S-box space. And for a reason we'll see in just a few slides, the compression function key should not contain any nibbles with the value zero, and so perhaps the total entropy of an AUT64 key is just 83 bits. But this is still too much for us to brute-force; it's still quite secure. So we proceeded to do some cryptanalysis to see if we could weaken the cipher, and we focused on a chosen-plaintext cryptanalysis where what we were trying to do was just distinguish the output of 8-round AUT64 from that of a random permutation.
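The entropy figures above can be reproduced with a short calculation. This is a sketch of the counting only; the 2 to the 40 figure for good S-boxes is the Saarinen result quoted in the talk:

```python
from math import factorial, log2

# Nominal key size: 32-bit compression key, 24 bits encoding an
# 8-element permutation, 64 bits encoding a 4x4 S-box (16 nibbles).
nominal = 32 + 24 + 64  # 120 bits

# Counting only valid structures: 8! permutations, 16! S-boxes.
structured = 32 + log2(factorial(8)) + log2(factorial(16))  # ~91.5 bits

# Adding the constraints from the talk: cyclic byte permutations only
# (7! of them), ~2^40 good S-boxes, eight nonzero key nibbles (15 each).
constrained = 8 * log2(15) + log2(factorial(7)) + 40  # ~83.6 bits
```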
And the first thing that we realised is that, in the first round, if all of the bytes in our chosen plaintext have the same value, then we can nullify the byte permutation, and so we can have quite tight control over the input to the round function in the first round. So if there were any cryptographic weakness in the round function, we'd be able to learn the output from the first round and distinguish it from the other bytes in the ciphertext. In terms of the round function, what we would hope is that the output was uniformly random, and we would also hope that the output from the compression function was uniformly random. But because the S-box is a 4-by-4 component, it's necessary that the S-boxes operate nibble-wise on the upper and lower four bits of the byte output from the compression function, and so this function is only going to behave uniformly and at random if the compression function outputs bytes uniformly and at random. Unfortunately, what we found is that it doesn't do this, and the main reason for this is the nibble-wise operation and the property of the XOR sum that computes the output. Each nibble in the input has a nibble from the key appended to it, and this is used to select values from the T-offset lookup table: the key nibble selects a row, and the input nibble selects a column. So in the case where all of the input nibbles have the same value, we fix one column in this table and the key nibble selects rows.
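The effect of fixing one column can be illustrated with a toy model of the compression function. The table and key values below are randomly invented, but the structure follows the description: TU and TL are both permutations of the same key nibbles, so with a fixed column both XOR sums run over the same multiset of table entries, and the output byte comes out symmetric:

```python
import random

random.seed(0)
# Invented toy values -- not the real AUT64 tables or key schedule.
T_OFFSET = [[random.randrange(16) for _ in range(16)] for _ in range(16)]
KEY = [random.randrange(1, 16) for _ in range(8)]  # eight nonzero key nibbles
TU = random.sample(range(8), 8)  # permutation of key positions (upper nibbles)
TL = random.sample(range(8), 8)  # permutation of key positions (lower nibbles)

def compress(block):
    hi = lo = 0
    for i, byte in enumerate(block):
        # Key nibble selects the row of T_OFFSET, input nibble the column.
        hi ^= T_OFFSET[KEY[TU[i]]][byte >> 4]
        lo ^= T_OFFSET[KEY[TL[i]]][byte & 0xF]
    return (hi << 4) | lo

# With every input nibble equal, the column is fixed, both sums cover the
# same multiset of rows, and XOR commutativity makes the output symmetric.
for v in range(16):
    out = compress([v * 0x11] * 8)
    assert out >> 4 == out & 0xF
```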
Now, because the key schedule just prescribes a permutation of the key part, what's going to happen in this scenario is that we will compute a sum of the same set of values for both the upper and the lower nibble. The order in which the sum is computed will differ, but because XOR is commutative, the output byte will be symmetric: the upper nibble will be equal to the lower nibble. So this forms the basis of a divide-and-conquer attack, where what we can do is force the output of the compression function to always be symmetric. We build a set of chosen plaintexts where each plaintext has the property that all of the nibbles have the same value, so all zeros, or all ones, all the way through to 15. What will happen is that the compression function will output a symmetric byte, the S-box will operate nibble-wise, and we will still have a symmetric byte. The bitwise permutation will typically remove the symmetry, but not in the case where all of the bits input to the bitwise permutation are one, or all of the bits are zero, because no matter how these are mixed it will have the same value. And so, for at least two out of the 16 plaintexts, we will have a symmetric byte in the output from the first round, and this forms the basis of a probabilistic attack that allows us to distinguish the output of the first round from the other bytes. So this is what 16 ciphertexts might look like corresponding to these chosen plaintexts, and you can see that in this case the fifth column features eight symmetric bytes, and so we know that this is the output from the first round. Once we do this, we learn immediately one element from the permutation, which corresponds to the position of the first-round output after eight rounds. And in the average case where there are two symmetric bytes, we also learn nearly two elements from the S-box: if there are only two, then the inputs which caused them were all zeros or all ones. And in the paper we show how we can make this attack
non-probabilistic. So this gives us a remaining entropy, or uncertainty in the key value, of still around 77 bits. But fortunately, learning this element from the permutation that corresponds to the first round of encryption forms the basis of a much more significant attack, in which we actually reduce the security to just 2 to the 51 encryptions to recover the 120-bit key. The way in which this works is that we set all of the input nibbles to zero except for one target nibble, which we assign each of the possible values from 0 to 15. The T-offset table will output a zero for each of the input nibbles which have the value zero, and only four bits of the compression function key will be used to compute the output under these attack conditions. So essentially we just brute-force the remaining key space, which is the uncertainty that we have in the S-box, the bitwise permutation, and just 15 possible values for one nibble of the compression function key. And so that concludes the cryptanalysis part of this talk: we found an attack which can recover the 120-bit key of AUT64 using just 2 to the 51 encryptions. In terms of the implementation, what we found is that although the default in the Atmel transponder is 8 rounds of AUT64, they had in fact implemented 24 rounds, and so they were clearly conscious of security when they designed the system. They've combined it with a bespoke challenge-response protocol, in which an ID code is first transmitted from the transponder to the base station of the immobiliser box. A nonce is generated, encoded using a proprietary stream cipher, which is deterministic, and then both the transponder and the immobiliser box compute the AUT64 encryption of the nonce. The result is compared, and authentication succeeds if it matches. Unfortunately, we also found some very weak key management.
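The challenge-response flow just described can be sketched as follows. The primitives here are placeholders (a hash standing in for AUT64 and a toy XOR stream), so this shows only the message flow, not the real cryptography:

```python
import hashlib
import os

def aut64_like(key, block):
    # Placeholder block encryption -- NOT the real AUT64.
    return hashlib.sha256(key + block).digest()[:8]

def stream_encode(nonce):
    # Toy stand-in for the deterministic proprietary stream cipher;
    # an XOR stream is its own inverse, so decoding reuses this function.
    return bytes(b ^ 0x5A for b in nonce)

def authenticate(transponder_key, immobiliser_key):
    # 1. The transponder transmits its ID code (omitted here), which the
    #    immobiliser uses to look up the expected key.
    # 2. The immobiliser generates a nonce and sends it encoded.
    nonce = os.urandom(8)
    challenge = stream_encode(nonce)
    # 3. The transponder decodes the challenge and encrypts the nonce;
    #    the immobiliser does the same and compares the results.
    response = aut64_like(transponder_key, stream_encode(challenge))
    expected = aut64_like(immobiliser_key, nonce)
    return response == expected

key = bytes(15)  # a 120-bit key, as in AUT64
assert authenticate(key, key)  # matching keys authenticate
```

Since the stream cipher is deterministic, the security of the exchange rests entirely on the block cipher and, as the next part shows, on how its keys are managed.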
In fact, the compression function key is derived deterministically from the ID code of the transponder, and the patent prescribes that for the permutation key part there are just 16 values, assigned to each automotive manufacturer. And finally, we found some evidence that the substitution key part can be fixed across different vehicles. So, to conclude, we've shown that 8-round AUT64 is certainly not a secure block cipher, and that we can recover the 120-bit key in fewer than 2 to the 51 encryptions. 8-round AUT64 with a known compression function key, for instance because it's derived from the ID code of the transponder, can be broken within milliseconds. This is because once we know the output of the compression function and we know the output of the encryption, the final ciphertext value, you can very quickly attack the small substitution-permutation network, and for most key values there's very little entropy in that. And finally, 24-round AUT64 is more secure; we'd expect with a block cipher that the more rounds we apply, the more security we tend to get, but it's broken in practice owing to weak key management. Thank you. Thank you, Christopher. Question, comment, no? Maybe one short question: are you aware of some rationale behind the design of this block cipher? As for the rationale behind it, I think that it's designed to be bijective, which is not apparent from the design of the protocol, so it clearly was designed for more uses than just this. And that's evident from the T-offset lookup table, which is actually symmetric about the descending diagonal, and it's why the key nibbles of the compression function key part can never be zero: if they are, then all input nibbles are encoded to the value zero, and you lose the information that allows you to invert the function when you run it backwards.
But beyond that, it seems to be just a sort of relatively classical combination of confusion and diffusion, but it's ineffective when it's used in eight rounds. But it seems that there are weak keys in this kind of design; not all possible keys behave the same with respect to security. Yeah, certainly the permutation key part is quite constrained; it certainly needs to be cyclic. I think that choosing S-boxes at random tends to produce relatively reasonable S-boxes; you can design AES with key-dependent S-boxes and it can be secure. And did you see a document explaining the classical attacks against the block cipher, for instance differential or linear cryptanalysis, or things like that? There is no rationale about the security of this block cipher against classical attacks? Well, we decided not to use linear or differential cryptanalysis, actually, because of the dependence on the key. If we did that, we'd be exploiting, say, the linearity in the S-box, but that would be key-defined, and so it wouldn't generalise well to all of the keys. Thank you. No other question? Okay, we thank the speaker again.