Hello, everyone. Welcome to the last session of Eurocrypt. This is Information-Theoretic Security 2, and we have an exciting lineup of three papers. First up is Authentication in the Bounded Storage Model by Willy Quach, Yevgeniy Dodis, and Daniel Wichs. Willy will give the talk. He'll be online, so go ahead and share your screen. Yeah, is the audio good? Can everybody hear me? Yes, yes. Cool. So I'm going to talk about Authentication in the Bounded Storage Model. This is joint work with Yevgeniy Dodis and Daniel Wichs. To motivate the setting we consider, let's take a step back to the most basic setting in cryptography: two parties, Alice and Bob, want to communicate with each other, but they are under the looming threat of an adversary, Eve, who might want to disrupt what they're doing. What Alice and Bob want is some way to make their communication secure, with several nice properties; for instance, they would like the communication to be secret and authentic. So, potentially using some shared secret key, they will execute a cryptographic protocol to achieve those properties. However, it turns out that in general this is not possible. In particular, if we don't put any kind of assumption on the adversary, then Shannon tells us it is impossible in general. More precisely, there will be strong restrictions on how good such schemes can be, in the sense that you won't be able to use the key many times. For instance, with a one-time pad you can only use the key once and then throw it away. It also tells you that public-key agreement is impossible, so cryptography won't help you generate a fresh key afterwards. So one option is to consider a restricted class of adversaries.
The standard kind of adversaries we consider in cryptography are computationally efficient ones, and by that we usually mean adversaries that run in polynomial time. However, this makes all of the security properties that we love and prove rely on computational hardness assumptions. For instance, it at least requires that P be different from NP to prove security. There is this quote, "cryptographers seldom sleep well," which I believe is attributed to Micali, and essentially what it means is that if it turned out that P were equal to NP, then all of the proofs we worked so hard on and the protocols we built would be completely thrown in the trash. Okay, so if you'd like to sleep better, one alternative is to consider the bounded storage model, introduced by Maurer in '92. The bounded storage model puts a different kind of restriction on the adversary: instead of limiting the adversary's running time, we limit the adversary's storage, and that's the only restriction we make. As far as we're concerned, the adversary could run for an infinite amount of time; we're fine with that. Again, as is common in cryptography, we would like to prove security even when the adversary has much more computational power than the honest users, so here we assume the adversary has much more memory than the honest users, Alice and Bob. Quite surprisingly, this is a useful restriction to put on the adversary Eve, in the sense that it allows us to build schemes and prove their security unconditionally, without relying on any computational or complexity-theoretic assumptions. Okay. To give a brief intuition on why this is helpful: for instance, Alice and Bob can try to exchange messages with each other.
And Eve will try to remember some information about what she sees from the communication. But in the bounded storage model, Alice and Bob will talk so much that Eve won't be able to store everything, and that will allow Alice and Bob to, for instance, transmit information securely. So that's the very rough intuition of what will be going on. To be a bit more formal, we model our honest users as streaming algorithms, which is our way to formalize the fact that the honest users can generate messages that are way, way longer than what they can actually store themselves. They can generate the bits of their stream one by one, so Alice can stream messages to Bob, and the total length of a message can be very large. The restriction we put is that generating that stream should be efficient: Alice and Bob should be able to do it using low memory. In terms of security, we consider an adversary whose memory is much bigger than Alice and Bob's, and that's the only restriction we put on the adversary; as far as we're concerned, Eve could run for infinite time. Okay, so that's our setting. What can we do in this model? It turns out, as I alluded to earlier, that we can actually build schemes that are unconditionally secure in this model, which is what makes it comparatively better than standard schemes that use computational assumptions. But the schemes are also reusable, which is what makes it better than, say, one-time pads and other information-theoretically secure schemes. And it turns out that we can do most of what you can think of: symmetric encryption, and, quite surprisingly, at least surprisingly to me, even public-key encryption and key agreement, unconditionally, without relying on any assumptions. You can even do slightly fancier crypto.
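The streaming model just described can be illustrated with a toy sketch. This is purely hypothetical and not from the paper: `alice_stream` stands in for an honest sender that emits a stream far longer than its own memory, and `bounded_observer` stands in for an adversary that can only retain a fixed-size buffer of what passes by.

```python
# Toy illustration of the streaming / bounded-storage setting (hypothetical,
# not the paper's construction): the sender generates a very long stream bit
# by bit using constant memory, while a bounded observer can only remember a
# fixed number of bits of it.
from collections import deque

def alice_stream(seed, length):
    """Yield `length` bits one at a time using O(1) memory (a toy LCG)."""
    state = seed
    for _ in range(length):
        state = (1103515245 * state + 12345) % (1 << 31)
        yield (state >> 16) & 1

def bounded_observer(stream, memory_bits):
    """Eve keeps only the last `memory_bits` bits of everything she sees."""
    buf = deque(maxlen=memory_bits)
    for bit in stream:
        buf.append(bit)
    return list(buf)

# The stream is a million bits long, but the observer retains only 128 bits.
remembered = bounded_observer(alice_stream(seed=42, length=1_000_000), 128)
assert len(remembered) == 128
```

The point of the sketch is just the asymmetry: generating the stream costs the sender constant memory, while no observer with a 128-bit buffer can reconstruct the million-bit transcript.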
For instance, oblivious transfer, or MPC with a dishonest majority. Somewhat similarly to the standard computational setting, the way we measure the security of a scheme is by measuring how much power it takes the adversary to break it. In the computational setting we would measure the time the adversary needs to break the scheme; here we measure the memory the adversary needs to break our constructions. And these are the parameters we get: in the symmetric case, the adversary can have up to exponentially more memory than what the honest users have, and for the public-key setting, what we have is essentially a quadratic gap. Just to make it clear, with the intuition I gave earlier: if Eve is allowed up to exponential memory, the honest users will have to stream exponentially many bits in order to fool her. If instead you want the honest users to run in polynomial time, what you need to do is set the memory of the adversary to be a fixed polynomial. Okay. So the question we ask is: what about authentication? Can we do authentication in the bounded storage model? Yes, that's what we show, by giving several constructions. The first construction is in the symmetric setting, where tags are long, and the construction allows adversaries to have up to exponentially more memory than the honest users. We also show, quite surprisingly, another construction where the tags are actually short and fit in the honest users' memory directly, but at the cost of supporting only a much smaller gap between the memory of the adversary and the memory of the honest users. Lastly, we show how to build signatures, the public-key analogue.
Again, the appeal of all these constructions is that security is unconditional, so we don't need to rely on any computational assumptions, and the schemes are reusable: there is no small bound on the number of times the honest users can sign messages. Okay, so let me talk a bit more about our constructions. Our setting will first be the symmetric setting, where you have two parties, Alice and Bob, with memory N, who both share a secret key SK. Alice wants to authenticate messages to Bob by streaming some potentially large authentication information. After receiving that authentication information, Bob should be convinced that the message was authenticated by Alice, and Alice should be able to authenticate many messages; correctness states that this holds. For security, what we really want is security where, again, Alice can do this many times. What that corresponds to is that Eve can look at what Alice transmits to Bob, store some small information that fits in her memory, and do that many, many times, so she can store information about many different tags as long as it all fits in her memory. Then she tries to output an authentication of a message that was not authenticated by Alice, and she wins if Bob accepts. Security says that she will not win. So that's what we consider. To be more precise, we actually also want to handle active adversaries, which would capture Eves that act as a man in the middle between Alice and Bob. We have to model that in a slightly different way, because Eve doesn't necessarily have the memory to store the whole stream. What we want to capture is that Eve may be able to modify the authentication on the fly in order to produce her forgeries.
The way we formalize that is to have a stream between Alice and Eve, where Alice sends authentications to Eve, and a stream between Eve and Bob, which Eve can use to provide authentications to Bob and see whether Bob accepts or rejects. So that's our definition of a MAC in the bounded storage model. Now, how do we build such objects? Our idea is pretty simple. We have information-theoretic MACs, but they are only usable a single time. And we also have encryption in the bounded storage model. So we'll just combine both: we'll encrypt information-theoretic MACs using a symmetric encryption in the bounded storage model. Why does that help? The intuition is that this prevents two kinds of fundamental attacks that Eve can mount. The first: if Eve just passively observes the authentications provided by Alice, then by security of the encryption, she won't be able to infer anything about the underlying MAC. On the other hand, if Eve only sees authentications one by one from Alice, tries to modify them on the fly, and sends them to Bob, then one-time security of the MAC should be enough to argue security. We have to be careful to do that in a way that actually preserves security of the encryption, but it turns out we can show, with a bit of work, that all Eve can do is essentially one of these two attacks. What we obtain in the end is a construction of a MAC where the memory we allow the adversary is exponential in the memory of the honest users, but the size of the tags is large; both features are essentially inherited from the symmetric encryption. So that's our construction for long tags. That raises a natural question. What I just described is a construction where tags don't even fit in Eve's memory, so Alice and Bob need to stream a lot of information just to authenticate a single message. Is that inherent?
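The one-time information-theoretic MACs mentioned above can be sketched concretely. This is a minimal, standard instance (polynomial evaluation over a prime field), not necessarily the paper's exact choice; the names `keygen`, `tag`, and `verify` are illustrative.

```python
# Sketch of a one-time information-theoretic MAC: tag = a*m + b mod p,
# with uniform key (a, b). A forger who sees a single (message, tag) pair
# guesses a valid tag for any other message with probability about 1/p,
# regardless of its computational power -- but security breaks down if the
# same key is ever used twice, which is why the paper encrypts these tags.
import secrets

P = (1 << 127) - 1  # a Mersenne prime, used as the field modulus

def keygen():
    return (secrets.randbelow(P), secrets.randbelow(P))  # (a, b)

def tag(key, m):
    a, b = key
    return (a * (m % P) + b) % P

def verify(key, m, t):
    return tag(key, m) == t

key = keygen()
t = tag(key, 42)
assert verify(key, 42, t)
assert not verify(key, 43, t)  # a modified message invalidates the tag
```

Note that with two tagged messages under the same key, an attacker can solve for (a, b) by linear algebra, which matches the "one-time" caveat in the talk.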
In particular, can we hope that the authentication tags directly fit in the honest users' memory? The appeal is that, if that were the case, Alice could just generate tags on her own, in her own local memory, and send them directly to Bob, so in terms of the honest execution there wouldn't be any need for a stream at all. The issue is that, for security, because of information-theoretic bounds, if Eve were able to see enough tags she would be able to break security, so the gap has to be quite narrow. But up to that restriction, we could hope this is possible, and we show that it actually is, quite surprisingly. The main tool we use to build a MAC where the tags directly fit in the honest users' memory is a lower bound proven by Raz, which states that learning parities is hard when the distinguisher's memory is bounded. Another way to phrase the bound, which will be convenient for us, is to consider the following function: it has some s hard-coded, takes some a_i, and outputs a_i along with the inner product <a_i, s>, where everything is a vector and the last part is a bit. What Raz says is that this is a weak PRF that is unconditionally secure as long as the memory of the adversary, the distinguisher, is at most roughly quadratic. And if you think about it, this is essentially tight. So how do we use that to build a MAC? Well, if you stare at it, this actually looks a lot like LPN without noise. So we turn to constructions of MACs from LPN, and it turns out there is a generic abstraction saying that key-homomorphic weak PRFs generically imply MACs. And if you look at the function just above, it is actually a key-homomorphic weak PRF. The issue is that the generic construction is only secure in the standard sense, so we actually have to work a bit to make it work in the bounded-memory setting. But it still works in the end.
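The inner-product function and the key homomorphism it relies on can be checked in a few lines. This is a toy sketch with illustrative parameters; it says nothing about Raz's memory bound itself, only about the algebraic structure the talk points to.

```python
# Toy sketch of the inner-product weak PRF f_s(a) = <a, s> mod 2 (the output
# on a random input a is the pair (a, <a, s>)). Over GF(2) it is
# key-homomorphic: f_{s XOR s'}(a) = f_s(a) XOR f_{s'}(a), which is the
# property the generic weak-PRF-to-MAC abstraction uses. N is illustrative.
import secrets

N = 64  # dimension of the secret and input vectors; purely illustrative

def rand_vec():
    return [secrets.randbelow(2) for _ in range(N)]

def f(s, a):
    # inner product over GF(2)
    return sum(si & ai for si, ai in zip(s, a)) % 2

s1, s2 = rand_vec(), rand_vec()
a = rand_vec()
s_xor = [x ^ y for x, y in zip(s1, s2)]
# key homomorphism: evaluating under the XOR of keys
# equals the XOR of the evaluations
assert f(s_xor, a) == f(s1, a) ^ f(s2, a)
```

As the talk notes, this is exactly LPN samples without noise; what makes it usable here is that security comes from the distinguisher's bounded memory rather than from a hardness assumption.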
And what we get is a MAC with short tags where the gap is quadratic. Okay. And last, I'll talk about signatures. What is a signature in our setting? We have a signer, Alice, who streams a large, potentially very long verification key to the world. Essentially, the verification key should be public information that everybody can read; it will be potentially long, but it will be authenticated. The reason we require this information to be authenticated is similar to the standard setting, where the verification key is assumed to be distributed authentically. While streaming this huge key, Alice should be able to keep for herself some small signing key that fits in her memory. On the other hand, any receiving party looking at this huge stream can compress it down to some small verification digest, and then the stream doesn't exist anymore; it goes away. Alice should then be able, given her signing key, to sign messages for Bob, who can verify using his digest. So that's the syntax we want. And we want, again, reusability: Alice should be able to do this many, many times. Quite importantly, the main feature we want out of this signature is that it should be a public-key primitive: anybody looking at the stream should be able to derive some digest. So if Bob-prime comes along instead of Bob and looks at the stream, he should be able to derive a potentially different digest and also verify the signatures. In terms of security, Eve should be able to look at this public information and store some of it in her memory, as much as fits, and then essentially mount the same man-in-the-middle attack we considered in the symmetric-key setting. So now the question is: how do we build such signatures? Suppose, for a moment, that using her first message, Alice and Bob could magically agree on some common secret key.
Then, in the second phase, it would suffice for them to generate MACs, using, say, the bounded-storage MACs I described earlier, to compute those signatures. But the problem is that the verification key is public. In particular, if we hope that Alice and Bob can agree on a key, then any Bob-prime should also be able to derive the same key, and in particular the adversary should be able to do that as well. So the question we ask is: can we actually build some kind of meaningful key agreement using a single unidirectional message? Maybe surprisingly, we show that this is possible, using what we call a set key agreement, where Alice streams a long public key and stores for herself a set of keys, while Bob, on the receiving end, is able to derive a subset of the keys computed by Alice. In terms of security, what we want is that if any Eve comes along and stores some information about the stream, then Alice and Bob will share some key that looks uniform to Eve. This is a bit subtle to define; in particular, the index of the key that looks uniform to Eve actually depends on Eve. So Alice and Bob, who don't even know whether such an Eve exists, will not be able to determine which index is the good one. But still, we build such an object, and it is enough in our setting, because after performing this set key agreement, Alice can just authenticate all of her messages using the symmetric MAC, and then security holds by security of the MAC under this special secret key. So, to conclude: what we do in this paper is consider authentication in the bounded storage model, a model where instead of restricting the adversary's runtime, we restrict the adversary's memory. And quite surprisingly, this enables security that is unconditional, so not relying on any computational assumptions, but also reusable.
And we provide several constructions of authentication in that setting: a symmetric-key version with long tags that allows exponential adversary memory; one with short tags, where tags actually fit directly in the users' memory, but at the cost of a smaller gap with the adversary; and then we also build a signature scheme. Yeah, that's all I had to say. Thanks for your attention. Alright, thanks for the talk. I guess we're running a little bit tight on time, so maybe we have time for one quick question. Alright, I guess I had one quick question. In the signature model, is there an assumption that Eve can't tamper with the stream during the key setup phase? Yeah, yeah. We don't allow Eve to tamper with it, and that's similar to the standard setting, where we don't allow forgers to tamper with the verification key of a signature. Alright. Okay, I guess we'll go ahead and move on to the next talk then. Thanks. Alright, thanks.