Thank you for being here. In this talk, I will present joint work by Chun Guo, Thomas Peters, François-Xavier Standaert, and myself. We will talk about integrity in the presence of leakage, and we will achieve it without any idealized assumption. To start, what is integrity? As usual, we have Alice, who wants to send a message to Bob via an insecure channel, and she wants Bob to be sure that the message really comes from her and not from someone else. To do this, she uses a message authentication code (MAC): from a message, she uses the MAC to compute a tag, then she sends this message-tag pair to Bob, who verifies it and decides either "good, this is from Alice" or "bad, this is not from Alice". This is a message authentication code, and it is an example of a cryptographic scheme. Eve can intercept these messages, so she can see message-tag pairs; but since these cryptographic algorithms are implemented on an electronic device, Eve may be able to obtain additional information by having physical access to it. For example, she can measure physical quantities produced during the computation, such as the power consumption, the running time, and the electromagnetic radiation. These are the so-called side channels, and by analyzing them she can obtain critical information, in some cases even the full key. We say that via side channels, some information is leaked to her. As an example of the problem, consider this simple Hash-then-BC MAC: we start with a message, we hash it, and the hash is passed as the input of a block cipher, whose output is the tag. To verify, we recompute the tag τ̃ and compare it with the tag provided with the message, and we decide whether the tag is valid or not.
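To make the data flow concrete, here is a minimal sketch of this Hash-then-BC MAC with the naive recompute-and-compare verification. All names are illustrative, and `toy_blockcipher` is only a stand-in (HMAC-SHA-256) for a real block cipher call, not the construction from the paper:

```python
import hashlib
import hmac

def toy_blockcipher(key: bytes, block: bytes) -> bytes:
    # Placeholder for a real block cipher call E_k(x); HMAC-SHA-256 is
    # used here only to get a deterministic keyed function for the sketch.
    return hmac.new(key, block, hashlib.sha256).digest()

def tag_gen(key: bytes, message: bytes) -> bytes:
    h = hashlib.sha256(message).digest()   # hash the message
    return toy_blockcipher(key, h)         # tag = E_k(H(m))

def naive_verify(key: bytes, message: bytes, tag: bytes) -> bool:
    tau_tilde = tag_gen(key, message)      # recompute the candidate tag tau~
    return hmac.compare_digest(tau_tilde, tag)
```

As the talk explains next, this naive verification is exactly what becomes dangerous with leakage: the recomputed τ̃ may itself leak.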
We start by observing that if, via side channels, we are able to recover the key, we can clearly do everything, since all the security of the scheme rests on the secrecy of the key. But there is more: in verification, τ̃ itself may leak via side channels, and obtaining τ̃ means that you are able to forge, at least for that particular message. Usually, when we put a side-channel countermeasure on a primitive like this block cipher, we protect the key, not the output; so it is problematic if some outputs are leaked. Thus, cryptographic devices are not black boxes. To address this problem, leakage-resilient cryptography has been introduced. Its goal is to provide security — in our case, integrity — in the presence of leakage. In particular, we want a trade-off: on the one hand, assumptions, at least the physical assumptions, that are weak; on the other hand, constructions that are still efficient, because an inefficient construction that achieves some security is not so interesting. To show the problems with strong physical assumptions — in particular, we do not want them to be idealized — consider a tweakable block cipher. We may make the idealized hypothesis that it is leak-free, meaning that the leakage gives no additional information. But first, if we want to use it in a real-or-random game, it is very difficult to simulate the leakage of an ideal object: in the real-or-random game, we have to distinguish between the tweakable block cipher with its leakage and an ideal tweakable permutation with its leakage, so we have to produce this ideal leakage, which is very difficult. Moreover, how can an evaluation laboratory check whether a device is leak-free? Finally, the security bounds provided by such schemes may be misunderstood. Thus, the leak-free model has some problems.
On the other hand, the leak-free model is very convenient, because in a construction with many primitives where only some of them are leak-free, it tells us what needs to be protected and what is less important to protect. Moreover, in the leak-free model we can have efficient constructions. But in this talk, since we want to avoid idealized hypotheses, we will use strong unpredictability with leakage, which I will explain later. We provide three message authentication codes. The first uses one TBC call and a collision-resistant hash function; it is secure, and it is even beyond-birthday secure when the TBC uses long tweaks, that is, tweaks of twice the size of the input. Then we say: perhaps we do not have a TBC with long tweaks, but we still want to be beyond-birthday secure. So we build LR-MAC2, which uses two TBC calls and a collision-resistant hash function; but to obtain beyond-birthday security, we additionally need half-collision resistance, which I will detail later. Finally, we study a much more efficient construction, which uses one TBC call and only a collision-resistant hash function, but to prove its security we need a really strong assumption on the hash function. In this talk, I will start with the background: the integrity definition we aim for, how we model the leakage, and the security definition for the TBC I just mentioned. Then I will present Hash-then-TBC, a construction which was introduced before and has been studied at Inscrypt 2019. Then I will move to our constructions and the results we have obtained: the first uses a fixed input, the second uses two calls to a TBC with block-size tweaks, and for the third we need to add an additional hypothesis on the hash function. So now we want to define the security we aim for with our message authentication codes.
For us, it will be strong unforgeability with leakage. We start by observing that our adversary can model the leakage of the device. She can do tag-generation queries: she gives a message as input and sees as output the tag and the leakage of this computation. Moreover, she can do verification queries: she gives as input a message and a tag, and she obtains either "valid" or "invalid", and in addition she receives the leakage of this computation. Her goal is to produce a fresh and valid message-tag pair, even though she can model the leakage of the device she is facing and she receives the leakage of tag-generation and verification queries. A forgery is a message-tag pair which is fresh — meaning it was not obtained as an answer to a tag-generation query — and valid, meaning that when you verify it, you obtain "valid". This is a very nice definition, but we still need to model the leakage. We start by observing that cryptographic schemes usually use many primitives, so we use the unbounded leakage model, which was introduced at FSE 2017. The idea of the authors is to divide primitives into two types. First, unprotected primitives, drawn in white: for these, we suppose that all inputs and outputs leak, including the secret ones. Note that we use the green color for inputs and outputs of the scheme, not of a primitive, and the orange color for values which are computed during the computation, or which are secret inputs of unprotected primitives. Then we have the strongly protected primitives: for these, we suppose that everything leaks except the secret inputs. In particular, their outputs leak. We use the red color to denote the key of a strongly protected primitive.
This gives a nice pictorial view of the unbounded leakage model: the adversary receives as leakage the orange values, so in the end what she knows is all the green values and all the orange values. The orange is what is given by the leakage. This is a very strong model. The question now is: how can we define security for the strongly protected primitives? Since our strongly protected primitive is a tweakable block cipher, it will be strong unpredictability with leakage. Here the adversary is able to model the leakage of the tweakable block cipher, and she can query an evaluation of the TBC on a tweak and an input, obtaining the output and its leakage; she can also query the inverse of the primitive — this is why it is "strong" — obtaining the inverse of x under tweak tw together with the leakage of this inverse computation. We want it to be hard for the adversary to produce a fresh and valid triple of tweak, input, and output: fresh means never obtained from these queries, and valid means that evaluating the TBC on tweak tw* and input x* gives z*. The nice thing about this definition is that it can be checked by an evaluation laboratory. You can say: this is my implementation of my tweakable block cipher; put in a random key and try to produce a fresh and valid triple. The laboratory can run all the attacks it knows, and it can conclude, for example, "we think the security of this primitive holds up to around 2^70 queries", which is a nice result — something real and concrete. Now I will show that there are already constructions which are secure when the TBC is modeled as strongly unpredictable with leakage, but they assume that the hash function is a random oracle. One of them is Hash-then-TBC, which was studied at Inscrypt 2019.
We have the message, which is hashed; then one half of the hash is used as the tweak of the TBC and the other half as the input, and the tag is the output of this tweakable block cipher. Clearly, in our leakage model we cannot do naive verification — that is, recompute the tag τ̃ and then compare it with the tag provided by the other side — because in our model τ̃ would be leaked. The adversary could then say "thank you, τ̃ is the right one" and turn her attempt (m, τ) into a forgery (m, τ̃). The idea, which was used at FSE 2017, is to use the inverse of the tweakable block cipher for verification: again we recompute the hash, but instead of recomputing the tag, we compute the inverse of the tag under tweak v, and we check whether this value ũ is equal to the first half of the hash. The problem is that, if we only assume that F is strongly unpredictable, there may be some bad interaction between the TBC and the hash function. So what they do is use a random oracle to bound the probability that the adversary obtains u as the inverse of some tag under tweak v, and then finds a message whose hash is (u, v). The random oracle allows the proof to keep track of all the hash queries the adversary made, which is very helpful. Having given the background, I now want to move to our results. The first is LR-MAC1. We started thinking: there may be bad interactions between the hash function and the tweakable block cipher — can we avoid this? The idea of LR-MAC1 is: we take the message, we hash it, and we use the hash as the tweak. Then we use a fixed input, in our case 0^n, and we compute the tag from these two inputs with the TBC.
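The inverse-based verification for Hash-then-TBC can be sketched as follows. The XOR-based `toy_tbc` is a deliberately insecure placeholder tweakable permutation (being an XOR, it is its own inverse); it is here only to show how verification inverts the tag and compares ũ against the first half of the hash, never recomputing τ̃:

```python
import hashlib

def toy_tbc(key: bytes, tweak: bytes, x: bytes) -> bytes:
    # Toy tweakable permutation: XOR with a key- and tweak-dependent pad.
    # Enough to show the data flow, NOT a secure TBC.
    pad = hashlib.sha256(key + tweak).digest()[:len(x)]
    return bytes(a ^ b for a, b in zip(x, pad))

toy_tbc_inv = toy_tbc  # XOR-based, so F^{-1} = F in this toy

def htbc_tag(key: bytes, message: bytes) -> bytes:
    h = hashlib.sha256(message).digest()
    u, v = h[:16], h[16:]          # first half = input, second half = tweak
    return toy_tbc(key, v, u)      # tag = F_k(v, u)

def htbc_verify(key: bytes, message: bytes, tag: bytes) -> bool:
    h = hashlib.sha256(message).digest()
    u, v = h[:16], h[16:]
    u_tilde = toy_tbc_inv(key, v, tag)   # invert the tag instead of
    return u_tilde == u                  # recomputing tau~
```

The design point is that the value computed (and possibly leaked) during verification is ũ, not a valid tag, so leaking it does not hand the adversary a forgery.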
The idea is to use the hash value only as the tweak of the TBC, and to use a fixed input for the TBC. In verification, again we use inversion: we take the message, we recompute the hash, then we invert the TBC on input τ and tweak h, obtaining a value x̃, and we compare it with 0^n. The point is that if x̃ is not 0^n, it cannot be reused for a forgery, because no forgery can start from an input of the TBC which is not 0^n. This will be the idea of our proof, which I will come back to later. But first, for those of you who are more interested, let me give the security bound: it is the collision resistance of the hash, plus the strong unpredictability advantage times the number of verification queries plus one — it is q_V + 1, where the extra one is due to the finalization step of the strong unpredictability game. The hypotheses are that F is a strongly unpredictable TBC with leakage, and that H is a collision-resistant hash function — very simple, basic assumptions for these two primitives. In particular, for a good hash function, the collision probability should be governed by the birthday bound: it should be bounded by the square of the number of queries the adversary can do, divided by the size of the output space of the hash function. The unpredictability term depends on F being implemented in a secure way. As a concrete instantiation we propose Deoxys-TBC-384 together with the hash function SHA-256. The advantage is that this Deoxys is a tweakable block cipher with a 128-bit input and a 256-bit tweak, so this term gives a security which is near 2^128 — perhaps a bit less, because SHA-256 is not ideal — but well beyond the birthday bound, which is a nice result.
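Under the same toy-TBC caveat as before (an XOR-based placeholder, not a real TBC such as Deoxys), LR-MAC1 as just described can be sketched like this: the full hash is the tweak, the TBC input is fixed to 0^n, and verification inverts the tag and compares with 0^n:

```python
import hashlib

N = 16  # toy block size in bytes (stands in for n bits)

def toy_tbc(key: bytes, tweak: bytes, x: bytes) -> bytes:
    # Insecure placeholder tweakable permutation, its own inverse.
    pad = hashlib.sha256(key + tweak).digest()[:len(x)]
    return bytes(a ^ b for a, b in zip(x, pad))

toy_tbc_inv = toy_tbc

def lrmac1_tag(key: bytes, message: bytes) -> bytes:
    h = hashlib.sha256(message).digest()    # full hash used as the tweak
    return toy_tbc(key, h, b"\x00" * N)     # tag = F_k(h, 0^n)

def lrmac1_verify(key: bytes, message: bytes, tag: bytes) -> bool:
    h = hashlib.sha256(message).digest()
    x_tilde = toy_tbc_inv(key, h, tag)      # x~ = F_k^{-1}(h, tag)
    return x_tilde == b"\x00" * N           # accept iff x~ = 0^n
```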
For those of you who are interested, I will now sketch the proof, which as you will see is very simple. We consider a fresh and valid verification query (M_i, τ_i), and we consider its hash h_i. We have two cases: either h_i was also obtained as the hash of a message during a tag-generation query — that is, there is a tag-generation query on input M_j such that h_j = H(M_j) = h_i — or this has not happened. In the first case, either M_j = M_i, so the verification query is not fresh, or we have found a collision for the hash function. In the second case, the triple with 0^n as input, h_i as tweak, and τ_i as output is a fresh and valid triple for the TBC; so we have found a prediction for the TBC, that is, an attack which invalidates the strong unpredictability with leakage of the TBC. Then we move to LR-MAC2, where the problem is the following: we have a very good result, beyond birthday, if we use a tweakable block cipher with long tweaks; but if we do not have one, are we doomed to stay below the birthday bound? No — we can go beyond it using two calls to the tweakable block cipher. The key idea is to use the same tweak for the two calls and two different keys. Again, we compute the hash of the message and we divide it into two halves: the second half is used as the tweak for both calls, and the first half is used as the input of the first TBC; we obtain an output y, which is passed to the second TBC to obtain the tag. In verification, again we recompute the hash, then we compute y again; but instead of recomputing τ̃, we invert the tag through the second TBC and we check whether the ỹ obtained there is equal to the y obtained from the first call to the TBC.
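A sketch of LR-MAC2 under the same caveat (the XOR-based `toy_tbc` is an insecure placeholder): two TBC calls under two keys share the tweak v, and verification inverts only the second call:

```python
import hashlib

def toy_tbc(key: bytes, tweak: bytes, x: bytes) -> bytes:
    # Insecure placeholder tweakable permutation, its own inverse.
    pad = hashlib.sha256(key + tweak).digest()[:len(x)]
    return bytes(a ^ b for a, b in zip(x, pad))

toy_tbc_inv = toy_tbc

def lrmac2_tag(k1: bytes, k2: bytes, message: bytes) -> bytes:
    h = hashlib.sha256(message).digest()
    u, v = h[:16], h[16:]              # v: shared tweak, u: first input
    y = toy_tbc(k1, v, u)              # first call, key k1
    return toy_tbc(k2, v, y)           # second call, key k2, same tweak v

def lrmac2_verify(k1: bytes, k2: bytes, message: bytes, tag: bytes) -> bool:
    h = hashlib.sha256(message).digest()
    u, v = h[:16], h[16:]
    y = toy_tbc(k1, v, u)              # recompute y going forwards
    y_tilde = toy_tbc_inv(k2, v, tag)  # invert the tag through call two
    return y_tilde == y                # accept iff y~ = y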
The security we obtain is: the collision resistance of the full hash function, plus μ-collisions on the lower half v, plus the strong unpredictability term. In particular, we want the TBC to be strongly unpredictable with leakage, and the hash function to be collision resistant, but also μ-lower-half collision resistant, which means that it is hard to find μ messages M_1, ..., M_μ such that the lower halves of their hashes are all the same. Note that for a reasonably good hash function — not ideal, just reasonably good — if the lower half of the hash output is 128 bits, a moderate μ is enough: for example, with μ = 128 we may reasonably have security up to around 2^100 queries, and with such a μ this term is not very bad. The third result: we wanted to study Hash-then-TBC a bit more in detail. I remind you: to generate a tag, we compute the hash of the message, and then we use v as the tweak and u as the input, and with the TBC we obtain the tag. In verification, we invert the tag and we compare whether the output of this inversion is equal to the first half of the hash. The idea is to suppose that the hash function has a set of weak points in its codomain, for which it is easy to find preimages; and we assume that a hash function is acceptable if there exists a polynomial-time adversary that outputs this set. This assumption is not idealized, but it is very strong. On the other hand, HTBC is very efficient. So here a very strong assumption on the hash function leads to a very secure and very efficient construction. This is part of the trade-off.
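To illustrate what a lower-half collision is, here is a toy brute-force search on a deliberately tiny 16-bit "lower half" (just the last two bytes of SHA-256, so a birthday-style collision appears after a few hundred tries); with the real 128-bit half this search would be infeasible:

```python
import hashlib
from itertools import count

def lower_half_16(message: bytes) -> bytes:
    # Toy "lower half": last 2 bytes of SHA-256. A real instantiation
    # would use a 128-bit half, making collisions hard to find.
    return hashlib.sha256(message).digest()[-2:]

def find_lower_half_collision():
    seen = {}
    for i in count():
        m = str(i).encode()
        lh = lower_half_16(m)
        if lh in seen:
            return seen[lh], m    # two distinct messages, same lower half
        seen[lh] = m
```

Finding μ messages sharing one lower half (a μ-collision) is correspondingly harder than finding two, which is why allowing a modest μ relaxes the assumption without giving up much security.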
To conclude, we have provided three new leakage-resilient message authentication codes, which range from the pragmatic to the more theoretical. Their security is based on a minimal physical assumption — strong unpredictability with leakage — and on black-box, non-idealized assumptions for the hash function. This gives security proofs and bounds in the standard model, and thus concrete requirements for implementers. As future work, we leave the problem of instantiating them, comparing their performances, and extending them to authenticated encryption. I thank you for your attention, and my co-authors and I are more than happy to answer your questions. Thank you again.