[Chair] So, the next talk is about tweakable block ciphers with asymptotically optimal security. The authors are Rodolphe Lampe and Yannick Seurin, and Rodolphe will give us the talk.

[Speaker] Hello. This is a joint work with Yannick Seurin. It's a security proof about the indistinguishability of tweakable block ciphers, and I'm going to explain what that means. A tweakable block cipher is a block cipher to which you add a tweak: instead of having just a key space and a domain, you add an extra parameter, the tweak. This can be useful for disk encryption, for example. Tweakable block ciphers were introduced in 2002 by Liskov, Rivest, and Wagner. In the following, we consider only constructions based on an existing block cipher.

The original construction is just one round: you have a block cipher and a family of hash functions, and the tweak is the variable T. The hash functions satisfy a property that I'm going to explain on the next slide. This construction is secure up to 2^{l/2} queries against CCA attacks. And this is the property for the hash functions: they need not be truly random, but almost XOR-universal. If you fix x, x', and y, then you have the following inequality.

Last year, Landecker, Shrimpton, and Terashima proposed the following construction: the same construction with one more round, together with a proof of security up to 2^{2l/3} queries. So the natural question is: what happens with more rounds? If you have r rounds, what is the bound? That's the subject of the paper. We analyzed the security of the previous scheme, and here is what we proved: you have the following bound, and you can see the r and the r+1. For example, for one round you recover the q^2 over 2^l bound, so you find the same bound as before, and the same holds for two rounds. And so on: asymptotically in the number of rounds, you go up to the information-theoretic bound, which is 2^l queries.
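To make the chained construction concrete, here is a toy Python sketch (my own illustration, not from the talk): an 8-bit "block cipher" modeled by a random permutation table, the XOR-universal hash family h_k(t) = k·t in GF(2^8), and r chained LRW rounds. All names (`gf_mul`, `make_round`, `clrw_encrypt`) and the tiny parameters are illustrative assumptions, not the paper's.

```python
import random

# Toy parameters: 8-bit blocks instead of a real cipher width.
N = 8
SIZE = 1 << N

def gf_mul(a, b):
    """Multiply a and b in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1.
    h_k(t) = gf_mul(k, t) is a genuinely XOR-universal hash family."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def make_round(rng):
    """One round's key material: a random permutation table standing in for
    the block cipher, and a nonzero hash key k."""
    perm = list(range(SIZE))
    rng.shuffle(perm)
    k = rng.randrange(1, SIZE)
    return perm, k

def clrw_encrypt(rounds, t, x):
    """r-round chained LRW: each round computes E_i(x XOR h_i(t)) XOR h_i(t)."""
    for perm, k in rounds:
        mask = gf_mul(k, t)
        x = perm[x ^ mask] ^ mask
    return x

def clrw_decrypt(rounds, t, y):
    """Invert the rounds in reverse order (perm.index inverts the toy cipher)."""
    for perm, k in reversed(rounds):
        mask = gf_mul(k, t)
        y = perm.index(y ^ mask) ^ mask
    return y

rng = random.Random(1)
rounds = [make_round(rng) for _ in range(3)]   # r = 3 rounds
```

For each fixed tweak t this is a permutation of the block space, and changing t re-masks every round, which is exactly the structure the security proof analyzes.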
So, this is the NCPA security, and we are optimal for NCPA security. The second inequality is the CCA security; for CCA security we are close to the optimum, but it's not tight yet, and it's an open problem. So, I'm going to explain the proof. You have two worlds. In the real world, the queries of the attacker go through the scheme. On the right, you have the ideal world, where π is a tweakable random permutation: it means that for every tweak T, π(T, ·) is a random permutation. You have these two worlds, and you want to compute the statistical distance between their outputs. If you bound this statistical distance, you know the security of the tweakable block cipher.

The first thing we do is divide the problem. We add intermediate worlds so that the security is easier to analyze, because so far these two worlds look quite different. First, we consider the world where we change the block cipher: we replace it by random permutations, one for each round. Then a third world, to be compared with the one on the right, where we still have random permutations but, and this is the interesting part, we change the queries: now we take uniformly random queries.

Let's look at the statistical distances. Between the real world and the first intermediate one, where we replace the block cipher by random permutations, the statistical distance is upper bounded by the security of the block cipher; that's the term I put there. And then comes the very nice idea behind the coupling, a technique brought into cryptography a few years ago. The beautiful idea of the coupling is to move the randomness from the permutations to the inputs. In the ideal world, the outputs are uniformly random because π is uniformly random; but in that third world, the outputs are uniformly random because the inputs are uniformly random.
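The world-by-world decomposition rests on the triangle inequality for statistical (total variation) distance. Here is a minimal Python sketch (my own illustration with made-up toy distributions, not the talk's actual transcript distributions):

```python
def tv_distance(p, q):
    """Total variation (statistical) distance between two distributions
    given as dicts mapping outcomes to probabilities."""
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Toy output distributions over three possible transcripts, standing in for
# the real world, one intermediate (hybrid) world, and the ideal world.
real   = {"a": 0.50, "b": 0.30, "c": 0.20}
hybrid = {"a": 0.45, "b": 0.35, "c": 0.20}
ideal  = {"a": 0.40, "b": 0.35, "c": 0.25}

d_total = tv_distance(real, ideal)
d_step1 = tv_distance(real, hybrid)
d_step2 = tv_distance(hybrid, ideal)

# The hybrid argument: the total distance is at most the sum of the
# distances between adjacent worlds.
assert d_total <= d_step1 + d_step2 + 1e-12
```

Bounding each adjacent pair separately, as the proof does, therefore bounds the distinguishing advantage between the two end worlds.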
So, the idea is to move the randomness of π to the randomness of the inputs, and that way we know these two worlds are the same: the outputs have the same distribution, so the statistical distance is zero. It remains to compute the statistical distance between the two middle worlds, and as you can see, they are very close, at least in this description: it's the same scheme, only the inputs change. I call them world q and world 0, and I put them there and there.

Now we keep dividing the problem. We introduce many intermediate worlds, and in each one we change one input. In world q, you have the q queries of the attacker; in world 0, you have q random inputs. The idea is just to interpolate between these two worlds by taking the first ℓ inputs to be the attacker's queries, followed by random queries. So, to distinguish world q from world 0, we are going to distinguish between adjacent worlds. It means what we do now is analyze world ℓ+1 versus world ℓ. As you can see, they are very, very close: the only difference is the (ℓ+1)-th query, which is x_{ℓ+1} there and u_{ℓ+1} there. And now we are going to use the coupling technique, which comes from probability theory, where it is very important for analyzing the mixing of Markov chains. It's the same technique we used in our Asiacrypt 2012 paper, where we studied the iterated Even-Mansour construction.

First, just a point of notation: it was π_1 on both sides, but they are not necessarily the same, so I change the notation to π' and h' on the right. Now I'm going to explain the coupling technique. Say you have two distributions, μ and ν. A coupling is just a joint distribution whose marginal distributions are μ and ν. It means exactly that: the coupling is λ over the product space, and you have the following equalities.
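In symbols (my reconstruction of the slide, writing μ and ν for the two distributions over a set Ω and λ for the coupling), the defining equalities are:

```latex
% lambda is a coupling of mu and nu: a distribution on Omega x Omega
% whose marginals are mu and nu.
\sum_{y \in \Omega} \lambda(x, y) = \mu(x)
\qquad \text{and} \qquad
\sum_{x \in \Omega} \lambda(x, y) = \nu(y).
```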
The following lemma explains why the coupling is useful: for any coupling, you have the following inequality. Let me explain. You have two random variables, X and Y, distributed according to the coupling, which means X follows μ and Y follows ν. Then you can bound the statistical distance between μ and ν by the probability that the two random variables differ. In theory it may look complicated, but in practice it's not.

I'm going to give an example. Say you have two coins: the first comes up heads with probability p1, and the second comes up heads with probability p2, say with p1 ≤ p2. You want to compute the advantage in distinguishing the two coins. You could make the usual direct argument, but here we are going to use the coupling. The idea is to correlate the two distributions. How do we do it? I'm going to say that every time the first coin comes up heads, the second one also comes up heads. It means that with probability p1, both coins show heads; with probability p2 − p1, one shows tails and the other heads; and with probability 1 − p2, both show tails. So, really the idea is to correlate the two distributions, and using the previous lemma you can compute the statistical distance between the outputs: X is the outcome of the first coin and Y the outcome of the second, and from the table you see that the two coins differ with probability p2 − p1. This gives you that the advantage is upper bounded by p2 − p1.

That's the technique we are going to use to distinguish the two worlds. So, let's go back to them. Where is the randomness? You have π_1, h_1, ..., π_r, h_r on the left, the same on the right, and the random inputs u. There is a lot of randomness.
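The two-coin example can be checked exactly in a few lines of Python (my own sketch; the joint table is the coupling described above, for p1 ≤ p2):

```python
# Coupling of two biased coins: whenever coin 1 shows heads, coin 2 also
# shows heads. The joint table below is an exact distribution.
p1, p2 = 0.3, 0.7
joint = {
    ("H", "H"): p1,        # both heads
    ("T", "H"): p2 - p1,   # coin 1 tails, coin 2 heads
    ("T", "T"): 1 - p2,    # both tails
}

# Check the marginals: X must be Bernoulli(p1), Y must be Bernoulli(p2).
pr_x_heads = sum(pr for (x, _), pr in joint.items() if x == "H")
pr_y_heads = sum(pr for (_, y), pr in joint.items() if y == "H")
assert abs(pr_x_heads - p1) < 1e-12 and abs(pr_y_heads - p2) < 1e-12

# Coupling lemma: statistical distance <= Pr[X != Y], which here is p2 - p1.
pr_differ = sum(pr for (x, y), pr in joint.items() if x != y)
assert abs(pr_differ - (p2 - p1)) < 1e-12
```

Since the true statistical distance between Bernoulli(p1) and Bernoulli(p2) is exactly |p2 − p1|, this particular coupling is tight.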
The idea is to correlate the randomness in the first world and the second world so that the outputs are the same, because right now they are not necessarily the same. After that, if we know the probability that the outputs differ, then we know the statistical distance between the two worlds. The first thing we do is pick h_1 to h_r at random, and then we set h'_1 to h'_r equal to them. It's a very strong correlation: now we have the same hash functions on the left and on the right.

Now, we might be tempted to take the same permutations as well. It's a good idea for the first inputs: if π'_1 = π_1, then since you have x_1 on both sides, you will get the same outputs. The problem is the last query, x_{ℓ+1} versus u_{ℓ+1}: you're not sure you will get the same output, because you're not sure the permutation sees the same input. So we don't make such a strong correlation, but something close: for the first ℓ inputs, every time you have to use π', you choose the same randomness for π' as you used for π, and you do the same for each round. That way you get the same outputs. Concretely: you have x_1 here, you XOR it with h_1(t_1), and you get an input to the permutation. On the other side, you have the same input; it's not the same permutation, but just for this input you choose the same output. You correlate the randomness, and you do this for each round and for the first ℓ queries. That way, you know that the first ℓ queries give the same outputs; that's what I put there: y_1 to y_ℓ are the same. It means we have successfully coupled the first ℓ queries.

It remains to couple the (ℓ+1)-th query; this is the tough part. The idea is this: suppose the input to π_1 for query ℓ+1 doesn't collide with a previous query.
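The round-by-round correlation of π and π' can be sketched with lazily sampled permutations (a toy illustration of the coupling idea; `LazyPerm` and the `forced` parameter are my own names, not the paper's): whenever the right-world permutation is still undefined at the queried point, it reuses the output the left world just drew.

```python
import random

class LazyPerm:
    """A random permutation of {0, ..., size-1}, sampled lazily:
    each fresh input gets a uniformly random unused output."""
    def __init__(self, size, rng):
        self.size, self.rng = size, rng
        self.table = {}     # defined input -> output
        self.used = set()   # outputs already assigned

    def query(self, x, forced=None):
        if x in self.table:
            return self.table[x]
        if forced is not None and forced not in self.used:
            y = forced      # coupled choice: reuse the other world's output
        else:
            free = [y for y in range(self.size) if y not in self.used]
            y = self.rng.choice(free)
        self.table[x] = y
        self.used.add(y)
        return y

rng = random.Random(7)
p  = LazyPerm(16, rng)   # a round permutation in the left world
pp = LazyPerm(16, rng)   # the corresponding permutation in the right world

# Couple the first queries: when both permutations are fresh at the queried
# input, force p' to output whatever p output, so the transcripts agree.
for x in [3, 9, 12]:
    y = p.query(x)
    assert pp.query(x, forced=y) == y
```

This is exactly the "same randomness for the same input" correlation: marginally each permutation is still uniform, but jointly the coupled queries always agree.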
And if at the same time the corresponding input in the right world doesn't collide either, then you have some freedom in choosing the two outputs, and the idea is to choose the same randomness so that the two systems couple at that round. That's this equation: if x_{ℓ+1} ⊕ h_1(t_{ℓ+1}) is not already defined as an input on the left, and the same thing holds in the right world, then you choose the same randomness and you make the outputs equal. And in the further rounds, you make them equal with the same technique.

Now we have to compute when coupling is not possible. It's not possible when there is a collision in the left world or a collision in the right world; these are the following equations. To have a collision, you need one of the following equalities, and this is exactly where the property of the hash functions comes in: you can upper bound the probability of these two events using the almost-XOR-universality, the property you remember from before. So we can bound the probability of such equalities; this is given by this expression. And that's it for one round.

Then, for multiple rounds, because we chose independent round functions, we just multiply the probabilities of failing to couple over all the rounds: instead of 2ℓε, we get (2ℓε)^r. This is the probability of an error while coupling, and it upper bounds the statistical distance between adjacent worlds; summing gives the term I call this one. And this ends the proof, and we have the following result.

We have to notice that the proof gives NCPA security, not CCA, because we choose the queries in advance in order to compute the collision probabilities. Right now we don't know how to do it directly with CCA queries, because we don't know the probability of a collision there. But we have a trick to obtain the CCA security: the idea is to compose two NCPA-secure tweakable block ciphers.
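To see the shape of the resulting bound (my own toy numbers; `ncpa_bound` is an illustration of the summation described above, not the paper's exact constants), one can sum the per-hybrid coupling-failure probability (2ℓε)^r over the hybrids ℓ = 0, ..., q−1, with ε = 1/2^l for an optimal AXU family on l-bit blocks:

```python
def ncpa_bound(q, n, r):
    """Sum of the coupling-failure probabilities (2*l*eps)^r over the
    q hybrid worlds, with eps = 1/2^n. Illustrative shape only."""
    eps = 1.0 / 2 ** n
    return sum((2 * l * eps) ** r for l in range(q))

n = 32
for r in (1, 2, 3):
    # The query budget where the bound reaches order 1 grows roughly
    # like 2^{r*n/(r+1)}, matching the claimed security exponent.
    q_star = int(2 ** (r * n / (r + 1)))
    print(r, ncpa_bound(q_star // 16, n, r), ncpa_bound(q_star, n, r))
```

Well below 2^{rn/(r+1)} queries the sum is negligible, and it reaches order 1 only around that threshold, which is the asymptotic optimality claim of the talk.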
When you compose two such ciphers, it yields a CCA-secure tweakable block cipher. This is a new result: we already had such a composition result for ordinary block ciphers, and we prove the analogue for tweakable block ciphers, with the condition that the two ciphers have to use the same tweak. Applying this result to the previous construction gives the CCA security. The formula changes, and as you can see, the construction is secure up to 2^{rl/(r+2)} queries. So it's not yet optimal, and the open question is to prove that it's secure up to 2^{rl/(r+1)}. This is exactly the same open problem that we have for the iterated Even-Mansour construction, which we presented at Asiacrypt last year. Thank you.

[Question] In this scheme, you need many keys for the underlying block ciphers and hash functions: every block cipher is independent and the hash functions are independent. I wonder if you have considered reducing the number of keys.

[Answer] I think it's possible, but the problem is that we wouldn't know how to compute the probability of not coupling over all rounds. We use the fact that the probability of not coupling over all rounds is the product of the probabilities of not coupling at each round. Because the round functions are independent, this product is easy to compute, but if there is some dependence, it's not easy to compute such a thing. Perhaps it's an invitation for you to analyze such an object; it would be interesting.

[Chair] Other questions? Thank you very much.