Hello and welcome to my talk about deck-based wide block cipher modes and an exposition of the blinded keyed hashing model. My name is Aldo Gunsing and this is joint work with Joan Daemen and Bart Mennink.

A very commonly used primitive in symmetric crypto is of course the block cipher. It encrypts some plaintext into some ciphertext under a secret key, but it has a fixed block size, like 128 bits for example. If we want to encrypt a variable-sized message we need some kind of mode of operation, for example counter mode. But these modes require a nonce, which has to be stored somewhere. Very often we have some place to store it, but in some situations, like disk encryption for example, we do not really have the space to store it.

What we can do instead is design a wide block cipher. A wide block cipher is simply a block cipher with a variable block size. We do not really need a nonce for this, as ideally every part of the output depends on every part of the input. But if we encrypt the same plaintext twice we do get the same ciphertext, so we do leak some information.

What we can also add is a tweak. Then we get a tweakable wide block cipher: a normal wide block cipher that also takes a tweak as an additional input. The tweak behaves a bit like the secret key, in that the ciphertext completely changes when we change the tweak, but in contrast to the secret key the tweak can be public. This is useful for disk encryption, for example, where we can use the sector number as the tweak. It means that when the same file is stored multiple times on the disk, the copies do not encrypt to the same ciphertext, so we do not leak the fact that the same file is repeated.

What we do is build two tweakable wide block ciphers based on two kinds of primitives.
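To make the interface concrete, here is a minimal Python sketch of a deterministic, length-preserving, tweakable encryption API, with the sector number as the tweak. This is a toy stand-in (an HMAC keystream XORed into the plaintext), not a real wide block cipher: flipping one plaintext bit flips only one ciphertext bit, so it lacks the full-diffusion property described above. All names are illustrative.

```python
import hmac
import hashlib

def _keystream(key: bytes, tweak: bytes, n: int) -> bytes:
    # Derive an n-byte keystream from (key, tweak) via HMAC in counter mode.
    out = b""
    ctr = 0
    while len(out) < n:
        out += hmac.new(key, tweak + ctr.to_bytes(4, "big"), hashlib.sha256).digest()
        ctr += 1
    return out[:n]

def encrypt(key: bytes, tweak: bytes, plaintext: bytes) -> bytes:
    # Length-preserving and deterministic per (key, tweak, plaintext):
    # no nonce is stored anywhere.
    ks = _keystream(key, tweak, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

# For this XOR-based toy, decryption is the same operation as encryption.
decrypt = encrypt
```

Using the sector number as the tweak, two identical sectors encrypt to different ciphertexts, while re-encrypting the same sector is deterministic.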
The first one is the doubly extendable cryptographic keyed function, or deck function for short. Its input can be of any size and it also outputs an arbitrarily long string. The second primitive is the keyed hash function. Its input can also be of any size, but its output has a fixed size. In contrast to block ciphers, these primitives do not have to be invertible, which allows for more flexible designs and faster primitives.

The first construction we have is double-decker. It is a generalization of the Farfalle-WBC wide block cipher by Bertoni et al. It looks a bit complicated at first, but if you merge the left two lanes and the right two lanes together, you can see that this is just a four-round Feistel network. The topmost and the bottommost functions do not write to the whole lane, which is why we split the lanes. Those functions are the keyed hash functions, and the functions on the inside are deck functions, hence the name. The outer lanes have a fixed size and the inner lanes have a variable size, so the keyed hash functions indeed write to just a fixed-size lane, and the bulk of the data sits in the inner lanes. If you look closely you can see that those inner lanes are only processed three times, so this is actually more efficient than a plain four-round Feistel network, because the bulk of the data is processed less often.

We also have docked double-decker. This is a variant of double-decker where two lanes are merged together and some input is moved around, so it has one lane less. Again the outer lanes have a fixed size and the inner lane has a variable size, so the bulk of the data is in the inner lane, which is only processed three times. What is also interesting in this case is that the deck functions only get a fixed-size input.
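As a rough structural sketch of double-decker, a four-round Feistel network where the keyed hashes write only to the fixed-size outer lanes and the deck functions write to whole halves could look as follows. The lane splitting and round ordering here follow my reading of the description above, not the paper's exact specification; HMAC-SHA256 and HMAC in counter mode are toy stand-ins for the keyed hash and deck function, and the tweak is fed into the deck-function inputs as described later in the talk.

```python
import hmac
import hashlib

def H(key: bytes, x: bytes) -> bytes:
    # Toy keyed hash: arbitrary-length input, fixed 32-byte output.
    return hmac.new(key, x, hashlib.sha256).digest()

def F(key: bytes, x: bytes, n: int) -> bytes:
    # Toy deck function: arbitrary-length input, n-byte output
    # (HMAC in counter mode as the expanding part).
    out, i = b"", 0
    while len(out) < n:
        out += hmac.new(key, x + i.to_bytes(4, "big"), hashlib.sha256).digest()
        i += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

FIX = 32  # size of the fixed outer lanes

def double_decker_enc(keys, tweak: bytes, pt: bytes) -> bytes:
    k1, k2, k3, k4 = keys
    L, R = pt[:len(pt) // 2], pt[len(pt) // 2:]  # the two Feistel halves
    # Round 1: keyed hash of the right half, written only into the
    # fixed-size outer part of the left half.
    L = xor(L[:FIX], H(k1, R)) + L[FIX:]
    # Round 2: deck function of the left half (tweak in the input),
    # written to the whole right half.
    R = xor(R, F(k2, tweak + L, len(R)))
    # Round 3: deck function of the right half, written to the whole left half.
    L = xor(L, F(k3, tweak + R, len(L)))
    # Round 4: keyed hash of the left half, written only into the
    # fixed-size outer part of the right half.
    R = R[:-FIX] + xor(R[-FIX:], H(k4, L))
    return L + R

def double_decker_dec(keys, tweak: bytes, ct: bytes) -> bytes:
    # Undo the four rounds in reverse order; no primitive is inverted.
    k1, k2, k3, k4 = keys
    L, R = ct[:len(ct) // 2], ct[len(ct) // 2:]
    R = R[:-FIX] + xor(R[-FIX:], H(k4, L))
    L = xor(L, F(k3, tweak + R, len(L)))
    R = xor(R, F(k2, tweak + L, len(R)))
    L = xor(L[:FIX], H(k1, R)) + L[FIX:]
    return L + R
```

Note that the inner (variable-size) parts of the halves are only touched three times each, matching the efficiency remark above, and that neither H nor F is ever inverted.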
So conceptually they become stream ciphers, which are much more commonly used, and this variant is more suitable for that setting.

Now I will talk a bit more about the security model of the double-deckers. A very commonly used security property of keyed hash functions is the notion of epsilon-XOR-universality. This property means that the probability of getting a specific difference between the outputs is small: we require that the probability of getting a difference y is at most some small epsilon. However, this property only considers a single query pair. When we look at Q queries instead, the bound simply becomes Q choose 2 times epsilon. But this epsilon is the worst-case bound over all query pairs with some difference, and for some functions not all of these pairs get close to the worst case, so this bound can be a very bad estimate.

Instead we consider the blinded keyed hashing model, or BKH for short, where we want to estimate those multiple queries more precisely. It is defined by the following setup: a keyed hash function h is BKH secure if the following two worlds are indistinguishable. On the left-hand side we have the real world. Here we have some input x which we pass through our keyed hash function, and then we apply a difference delta to it. But instead of giving the output directly, we first pass it through a random oracle. In the ideal world we simply give the two inputs to a random oracle. Passing the output through a random oracle in the real world is what makes this interesting: the distinguisher cannot directly see the output of the keyed hash function, so it gets very little information. In fact it learns nothing beyond whether there was a collision or not. A very good example of this is Xoofff.
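The two worlds can be sketched as lazily sampled oracles. This is a toy model with hypothetical names: the real world hashes x, applies the difference delta, and feeds the blinded result to a random oracle; the ideal world feeds (x, delta) straight to its own random oracle. The usage below illustrates the point made above: the only thing the real world leaks is whether two blinded outputs collide.

```python
import hmac
import hashlib
import secrets

def bkh_oracles(h, key: bytes):
    """Build the two BKH worlds for a keyed hash h (toy sketch).

    Real world:  (x, delta) -> RO(h(key, x) XOR delta)
    Ideal world: (x, delta) -> RO(x, delta)

    Both random oracles are lazily sampled lookup tables.
    """
    ro_real, ro_ideal = {}, {}

    def real(x: bytes, delta: bytes) -> bytes:
        v = bytes(a ^ b for a, b in zip(h(key, x), delta))
        if v not in ro_real:
            ro_real[v] = secrets.token_bytes(16)
        return ro_real[v]

    def ideal(x: bytes, delta: bytes) -> bytes:
        if (x, delta) not in ro_ideal:
            ro_ideal[(x, delta)] = secrets.token_bytes(16)
        return ro_ideal[(x, delta)]

    return real, ideal
```

In the real world, two queries give equal answers exactly when h(key, x) XOR delta collides with h(key, x') XOR delta'; in the ideal world distinct queries are independent, so collisions are the only observable difference.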
For a single query pair, Xoofff claims a security bound of 2 to the power minus 127, and this holds in both the XOR-universality model and the BKH model. But if we extend this to q queries, with XOR universality we can only get a bound of q choose 2 times 2 to the power minus 127. Xoofff, however, has a dedicated claim for the BKH model which is much better, namely one of q times 2 to the power minus 128. This makes a very significant difference: when we use Xoofff as an XOR-universal function we only get a claimed security level of 64 bits, but in the BKH model this improves to 128 bits.

However, we cannot directly apply the BKH model to our construction, so we have to do a bit of work to reduce our construction to the BKH model. With the XOR-universality definition this reduction is trivial, which is of course also the reason why it is so commonly used. But we are able to do the reduction, and we show that the double-deckers are secure when the keyed hash function is BKH secure and the deck functions are PRF secure.

Furthermore, the double-deckers have another interesting security property. Because we apply the tweak to the deck functions instead of the keyed hash functions, the bound on our keyed hash function becomes tweak-separated. Since the deck functions behave independently for different tweaks, we do not have to worry about collisions in the keyed hash function when the tweaks are different. This significantly improves our security bounds in some settings. For now we will consider just a normal epsilon-XOR-universal keyed hash function h, but instead of looking at all Q queries together, we separate the queries based on the tweak: we make Q_W queries with some tweak W.
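The 64-bit versus 128-bit comparison is just arithmetic on the two q-query bounds quoted above; a quick sketch:

```python
from math import log2

# Claimed single-pair bound: 2^-127 (in both models).
# q-query bounds as stated in the talk:
#   XOR universality:  C(q, 2) * 2^-127  (roughly q^2 * 2^-128)
#   dedicated BKH:     q * 2^-128

def xor_universal_bound(q: int) -> float:
    return q * (q - 1) / 2 * 2.0 ** -127

def bkh_bound(q: int) -> float:
    return q * 2.0 ** -128
```

At q = 2^64 queries the XOR-universality bound is already around 1 (so only 64 bits of security), while the BKH bound is still about 2^-64, consistent with 128 bits of security.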
Now we will look at the security loss from our keyed hash function in three different scenarios. The first one is just the general case with a general bound, but to understand it better we also look at two extreme cases: one where we use a single tweak every time, and the other extreme where we do not repeat the tweak at all, so we use a different tweak for every call. The naive security bound in all three cases is just the usual Q choose 2 times epsilon, but with tweak separation we can do better. In general we get Q_W choose 2 times epsilon, summed over all tweaks W. If we use just one tweak this makes no difference at all: we again get the usual Q choose 2 times epsilon. But if we have no tweak repetitions at all, the bound actually becomes zero, so it does not even matter which keyed hash function we use. And in a case in between, with limited tweak reuse, we get a bound in between those.

A good example of this is disk encryption on SSDs. The double-deckers are very suitable for disk encryption, because disks are divided into sectors: we can set the block size equal to the sector size and use the physical sector number as our tweak. Moreover, on SSDs the sectors have a limited lifetime, because they get damaged every time data is written to them. If we look at some specific SSDs, we see that every sector can be written at most around 500 times. Plugging those numbers into our previous bounds gives the following result: without any tweak separation the construction is secure when 2 to the power 74 times epsilon is significantly smaller than 1, but with tweak separation this improves to the requirement that 2 to the power 46 times epsilon is significantly smaller than 1. This means we might get away with a faster keyed hash function if we have this tweak separation.
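The two quoted figures can be reproduced by straightforward counting. The talk does not state the number of sectors, so the disk size below is a hypothetical parameter: 2^29 sectors (at 4 KiB per sector that is a 2 TiB drive), each written at most 500 times, which lands close to the quoted orders of magnitude. A useful sanity check falls out of the algebra: the gap between the two bounds is roughly a factor equal to the number of sectors.

```python
from math import comb, log2

def unseparated_loss_log2(sectors: int, writes_per_sector: int) -> float:
    # Naive bound: C(Q, 2) over all Q = sectors * writes queries together.
    q = sectors * writes_per_sector
    return log2(comb(q, 2))

def separated_loss_log2(sectors: int, writes_per_sector: int) -> float:
    # Tweak-separated bound: sum over tweaks of C(Q_W, 2); every sector
    # (tweak) gets the same number of writes here.
    return log2(sectors * comb(writes_per_sector, 2))
```

With 2^29 sectors and 500 writes each, the unseparated loss is about 2^75 times epsilon and the separated loss about 2^46 times epsilon, in line with the 2^74 versus 2^46 quoted in the talk (the exact exponent depends on the assumed disk size).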
But we have to note that we only get this guarantee when we use the physical sector number as the tweak, not the logical one. So this is mostly applicable when the encryption is implemented directly in the firmware of the SSD.

Now I want to compare our construction to a previous one, Adiantum. Adiantum was presented here at FSE last year, and the goal of its designers was to minimize the number of block cipher calls, because those are very slow on low-end Android devices. For this they basically used a three-round Feistel network, but because that is not secure on its own they added an extra block cipher call in between. This block cipher call is only applied to a very small part of the data; the bulk of the data is on the left-hand side, which is not affected by the block cipher call at all.

Our docked double-decker construction is actually very similar to Adiantum. If we start at the bottom, you can see that the bottom two functions are called in exactly the same way in both Adiantum and docked double-decker. But it is different for the block cipher call: because docked double-decker splits the left lane into two parts, we are able to lay the block cipher call flat. This means we do not need the inverse of the block cipher at all, so we can simply replace it with a deck function. And because we laid it flat, the top keyed hash function has to be swapped as well, of course.

So, concluding. First of all, we introduced the two double-deckers, two tweakable wide block ciphers based on deck functions and keyed hash functions. We also introduced a new security model for keyed hashes, BKH, which is a generalization of the more commonly used XOR universality, and we were able to use this model to prove better bounds for some specific keyed hashes like Xoofff. Finally, our usage of the tweak improves security in situations where tweak reuse is limited.
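The reason laying the call flat removes the need for an inverse is the standard Feistel argument: in a Feistel-style round the primitive is only ever evaluated in the forward direction, even during decryption, so a non-invertible keyed function works just as well as a block cipher. A minimal sketch, with an HMAC-based stand-in for the deck function:

```python
import hmac
import hashlib

def f(key: bytes, x: bytes, n: int) -> bytes:
    # Non-invertible keyed function standing in for a deck function.
    return hmac.new(key, x, hashlib.sha256).digest()[:n]

def round_enc(key: bytes, L: bytes, R: bytes):
    # "Laid flat" call: R is masked with f(L); L passes through unchanged.
    return L, bytes(r ^ k for r, k in zip(R, f(key, L, len(R))))

def round_dec(key: bytes, L: bytes, R: bytes):
    # Decryption re-derives the same mask from the unchanged L;
    # f itself is never inverted.
    return L, bytes(r ^ k for r, k in zip(R, f(key, L, len(R))))
```

An in-line block cipher call, by contrast, would have to be undone with the cipher's inverse during decryption, which is exactly the requirement the flat layout avoids.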
So that's the end of my talk. Thank you for your attention.