Welcome to my talk on overcoming impossibility results in composable security using interval-wise guarantees. This is joint work with Ueli Maurer. The main question of this work is how to best define security. There are essentially two approaches to defining security. The first one, game-based security, leads to very simple and minimal security definitions. The downside, however, is that those security definitions do not always immediately link to a real-world execution of the protocol. Moreover, if we build a cryptographic protocol from building blocks, then at each step we have to prove an explicit reduction to the security game of the underlying primitive. On the other hand, there are so-called composable or simulation-based security notions. As they compare an execution of the real protocol to the one of an idealized protocol, such a mismatch between the security notion and an execution of the protocol cannot exist. Moreover, as they come with a composition theorem, we can easily build modular protocols without having to prove reductions at every step. The downside, however, is that those security notions are very strong and thus lead to less efficient schemes or, in many cases, even impossibility results. A large class of such impossibility results is caused by the so-called simulator commitment problem. So let us consider an example of the commitment problem. Let's say Alice wants to confidentially send a message to Bob. She might use a symmetric encryption scheme to achieve this goal. In a composable security notion, we would compare the real-world execution of the protocol to Alice and Bob just using an idealized resource, such as a secure channel, in the ideal world. And the commitment problem appears whenever we assume that the symmetric key Alice and Bob share might not just stay secure, but may at some point actually leak to Eve.
A bit more concretely, in order to prove the protocol secure, we would have to show that for any interactive environment, the two worlds are computationally indistinguishable. The environment might input any message M of its choice and then receive the ciphertext C at Eve's interface. In the ideal world, the same needs to happen, so it is the simulator's job to come up with a ciphertext that looks computationally indistinguishable. As long as the encryption scheme is IND-CPA secure, the simulator can easily do so by just encrypting an arbitrary message of the appropriate length. However, the simulator is now committed to a ciphertext C that does not depend on the actual message. So if in the real world the key at some point actually leaks to Eve, then Eve can obviously decrypt the ciphertext to obtain the message M. Hence we need to reflect that in the ideal world. As the message is at this point obviously no longer confidential, a naive attempt might be to just hand out the message M to the simulator as well. However, it turns out that for most encryption schemes, the simulator will still not be able to come up with a key K that is not only correctly distributed, but also makes the ciphertext C decrypt to this message M. In fact, in order to be able to do so, one needs a so-called non-committing encryption scheme. Such schemes are, however, not only more expensive, but often also require additional setup. This raises the question: isn't there a better way to define confidentiality composably than saying there is this ideal channel with a simulator attached? Because in the end, in the game-based world we don't seem to have a problem.
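To make the commitment problem concrete, here is a minimal Python sketch. It uses a toy, hypothetical XOR stream cipher (the keystream is just a SHA-256 hash of the key, purely for illustration): the simulator commits to a ciphertext of an all-zero message, and once the key leaks, no key exists that makes that committed ciphertext decrypt to the real message.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    # Toy PRF-style keystream: first n bytes of SHA-256(key).
    return hashlib.sha256(key).digest()[:n]

def enc(key: bytes, msg: bytes) -> bytes:
    return bytes(m ^ k for m, k in zip(msg, keystream(key, len(msg))))

dec = enc  # XOR cipher: decryption is the same operation

# Real world: Alice encrypts the environment's message M under key K.
real_key, msg = b"\x2a", b"hunt"
real_ct = enc(real_key, msg)

# Ideal world: the simulator only learns |M|, so it commits to a
# ciphertext of an all-zero message under a key of its own choice.
sim_key = b"\x07"
sim_ct = enc(sim_key, b"\x00" * len(msg))

# Now the key leaks. In the real world, Eve decrypts and obtains M.
assert dec(real_key, real_ct) == msg

# The simulator, however, is stuck: it would need a key K' that makes
# its committed ciphertext decrypt to M. For a committing scheme such
# a key (almost surely) does not exist -- brute force over all 1-byte
# keys finds none.
equivocating = [bytes([k]) for k in range(256)
                if dec(bytes([k]), sim_ct) == msg]
print(equivocating)  # almost certainly the empty list
```

A non-committing scheme is precisely one where such an equivocating key can always be produced; as the talk notes, those schemes are more expensive and often need extra setup.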
Moreover, if we say that this scheme is secure if the key never leaks, then why should it suddenly be insecure if the key can leak a thousand years from now? That would seem to indicate that Eve already learns something about the message now. So rather than having to always use a non-committing encryption scheme, it seems that it is the simulation-based security notion that is just too strict. In particular, we can observe that we don't have any problem simulating until the key is exposed, and we don't have any problem afterwards; we just somehow cannot simulate past this event. Yet giving the simulator only the message length to begin with, and asking it to come up with a fake ciphertext, is essentially only something we do as a proxy to formalize confidentiality until this event. I should mention that there have been a number of approaches to overcome the commitment problem. For instance, a large body of work considered allowing for super-polynomial simulators. However, even with a super-polynomial simulator, one typically still needs less efficient schemes and additional setup. Another line of work considered so-called non-information oracles, which essentially embed a game-based notion in a composable framework. So instead of saying that we have a channel that first leaks only the length and, only after the key got revealed, leaks the full message, one would say we have a channel that leaks a ciphertext, where the ciphertext is subject to a certain game-based security notion. This, however, means that if we want to use this channel, we still need to prove a reduction to this embedded game, which somewhat hinders modularity. So in this work, we essentially want to explore whether we cannot circumvent the commitment problem by simply making two separate statements: one about confidentiality until the key leaks, and one about the remaining guarantees afterwards.
And I should stress here that the goal is really to express the guarantees of our regular schemes, which we have used with game-based security for a long time and which have led to many interesting constructions. So we do not want to come up with yet another stronger notion that requires less efficient schemes or additional setup, and we definitely do not want to just give up and fall back to game-based security notions, as this would mean we have to prove reductions at every single step and cannot benefit from composition. So the remaining open question is essentially how such a notion, which makes separate security statements for different time intervals, could look in a composable framework. To answer this question, let me do a quick detour and look at how the constructive cryptography framework uses specifications as its main objects. In the constructive cryptography framework by Maurer and Renner, one always makes statements about so-called resources, which are essentially the analogue of functionalities in the UC framework. Traditionally, one might want to show that our real-world resource is computationally indistinguishable from some kind of ideal-world resource. However, we can also make a different, more general statement: we can show that our real-world resource is contained in the set of all resources that have some kind of desired property. Such a set we simply call a specification. A bit more generally, instead of looking at a single real-world resource, we can also assume just a specification, saying that maybe we don't know about certain aspects of the real world. The main statement of the constructive cryptography framework then becomes one of specification abstraction. So we start with this assumed specification, which might look very scary, and about which we might not know whether all the resources contained therein have the guarantees we need.
What we do is abstract it by a larger specification. Of course, this only makes sense if this larger specification is defined and described in a way that makes it more obvious that all the resources contained in it have the guarantees we need and want. Making such a specification abstraction statement the core of constructive cryptography was introduced by Maurer and Renner in 2016; if you look at earlier papers, they might still use a notion that is much more similar to the simulation-based one of the UC framework. What this abstraction statement really gives us is flexibility. On the one hand, it provides a meta-framework, with this kind of statement already leading to a lot of nice properties. On the other hand, it does not fix as many aspects as the traditional UC statement you might know; in particular, it does not fix what kind of specifications we are allowed to consider. So let's quickly look at the nice properties this kind of statement has. First, we are still making a statement about the real-world resource being contained in an ideal specification, which means there are no forgotten attacks: there is no mismatch between the security definition and the real-world execution. Second, the transitivity of the subset relation gives us a very basic composition result, meaning that afterwards we can forget about the complicated specification and just continue proving results about the one that is easier to deal with, rest assured that in the end we can plug the two statements together and get our desired result. Finally, and very importantly for this work, this view gives us a very natural way to formalize multiple guarantees: we can simply make two statements, that the resource is contained in this specification and in that specification, and if it is contained in the intersection, we know that it has both guarantees.
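The specification view above can be illustrated with a small sketch. This is a hypothetical toy model, not the paper's formalism: a "resource" is just a dictionary of properties, a specification is the set of resources satisfying a predicate, and the conjunction of two guarantees is the intersection of the two sets.

```python
# Specifications modeled as predicates over resources (toy model: a
# "resource" here is just a dict of properties).
def spec_confidential(res):
    # "Eve learns at most the message length."
    return res["leakage"] in ("length", "nothing")

def spec_authentic(res):
    # "Eve cannot modify messages in transit."
    return not res["malleable"]

def intersect(*specs):
    # Conjunction of guarantees = intersection of the sets they define.
    return lambda res: all(s(res) for s in specs)

secure_channel = intersect(spec_confidential, spec_authentic)

real_resource = {"leakage": "length", "malleable": False}
broken_resource = {"leakage": "everything", "malleable": False}

print(secure_channel(real_resource))    # True: in both specifications
print(secure_channel(broken_resource))  # False: confidentiality fails
```

The point of the construction is exactly this: membership in the intersection certifies both guarantees at once, without the two specifications having to be described the same way.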
I should also mention at this point that the standard simulation-based notion is in the end just a special case: we can consider the real-world resource as our assumed specification, and the ideal-world resource with the simulator attached as the ideal-world specification. This really shows that decomposing the ideal-world specification into what we actually want, the resource, here the confidential channel, and the simulator is just a way to define a specification such that it is easy to see that messages are confidential. The simulator is, in this view, no inherent aspect of our security notion; it is just a means to an end, and this will become very important in our notion. What we also have to deal with is that traditionally one would say the two worlds are computationally indistinguishable, whereas the abstraction statement is an absolute one. So what we do instead is enlarge this green elliptic specification to a slightly bigger one, such that the real-world specification is then actually contained in it. And how do we do that? We simply map it to the specification consisting of all systems that are computationally indistinguishable from the original specification. Such a mapping we call a relaxation. In particular, one can show that this computational relaxation has nice properties and interacts nicely with applying a protocol. Using those nice properties, one can then derive the UC composition theorem, or the old constructive cryptography one, essentially as a syntactic derivation rule. Going back a few slides, we can then prove a subset statement about the green elliptic specification and later transform this statement into one about the red specification, such that we can plug them together nicely using the subset relation. So with this set up, let's go back and see how we can formalize such interval-wise guarantees within a composable framework.
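The relaxation idea can also be sketched in the toy model. Here, purely as an illustrative stand-in, a "system" is a named output distribution and distinguishing advantage is total-variation distance; the relaxation maps a specification to the larger set of all systems within advantage epsilon of some member.

```python
# Toy model: dists[name] is the output distribution of a "system"
# (a hypothetical stand-in for an interactive system).
dists = {
    "ideal":  {"0": 0.5, "1": 0.5},
    "real":   {"0": 0.49, "1": 0.51},
    "broken": {"0": 1.0},
}

def tv_distance(p, q):
    # Total-variation distance, standing in for distinguishing advantage.
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

def eps_relaxation(spec, eps):
    # Map a specification (a set of system names) to the larger set of
    # all systems within distance eps of some member of the set.
    return {t for t in dists
            if any(tv_distance(dists[t], dists[s]) <= eps for s in spec)}

# The real system is not in the ideal specification itself, but it is
# in its epsilon-relaxation; the broken system is in neither.
print(sorted(eps_relaxation({"ideal"}, 0.02)))  # ['ideal', 'real']
print(sorted(eps_relaxation({"ideal"}, 0.0)))   # ['ideal']
```

Because the relaxation only ever enlarges the set, subset statements proved about the original specification carry over, which is what lets composition be applied before the relaxations are accounted for at the end.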
As a quick hint, we actually do so using relaxations, because formalizing this as a relaxation will allow us to do exactly the same thing: we can, in separate steps, forget about the relaxations and only at the end apply them and put everything nicely together. To recall, the problem we had with the naive idea of a channel that first leaks only the length and only afterwards the full message was that we could not simulate past the key-exposure event. So we wanted to make two separate statements, and this is exactly our starting point. We start with this over-idealized channel we were unable to achieve, and now, for both intervals, we suitably relax this overly idealized resource. For the first interval, we essentially want to say that it should behave like this channel, but we don't care about what happens after the key got exposed. This is actually not so difficult to formalize: for instance, we could say it is the set of all resources T that behave like this channel if all the systems just shut down at the moment the key is leaked, or if the environment has to stop once the key is leaked; it doesn't really matter, and I refer to the paper for the details of how exactly we did it in this work. A bit more interesting is how we formalize a relaxation that says we don't care about what happens before the key got leaked. In particular, we want the simulator to only have to work from the moment the key got leaked, at which point it learns the full message and hence can trivially come up with a ciphertext by just encrypting the message. Here we rely on the extension of the constructive cryptography framework called constructive cryptography with events that was introduced last year, and we formalize this kind of relaxation only for so-called external events.
For instance, here in the channel, whether the key leaked or not is not something decided by the confidential channel itself, but, in the ideal world, something decided by the environment. So what we can do within that extension of constructive cryptography, of which I have an example here, is the following. In this extension, there is a global event history on which all the resources can depend. In particular, the events noted there are not necessarily triggered by the actual resource; it could also be the environment triggering them. For instance, we can say that the confidentiality of a channel depends on whether the key leaked or not, where this event is something that is just up to the environment to decide. And this allows us to say that we only look at the channel in the world where this event already initially happened. Putting it all together, we see that for each time interval we can formalize the respective guarantee as one specification, and we can then intersect all of them to get the conjunction of all the guarantees. I should really point out that while each of those specifications will probably be described using a simulator, those simulators do not have to be the same one, and this is really what allows us to avoid the commitment problem. To see that this is not something bad, we have to take the specification-based view, in which having a simulator is not inherent to a simulation-based notion but really just a way to define specifications. So why should separate specifications need to share the same simulator? On a more technical level, we then formalize those interval-wise guarantees as relaxations of the same overly idealized resource, and we did it in a way that let us prove nice properties about these relaxations, such that we get all the composition rules we would like to have.
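A toy sketch of such an event-dependent channel may help (class and event names are illustrative, not the paper's formalism): the event history is global, the "key_leaked" event may be triggered by the environment, and the over-idealized channel leaks only the length before the event and the full message afterwards.

```python
class EventHistory:
    # Global event history on which all resources can depend; events
    # may be triggered by resources or by the environment itself.
    def __init__(self):
        self.events = set()
    def trigger(self, name):
        self.events.add(name)
    def happened(self, name):
        return name in self.events

class Channel:
    # Over-idealized confidential channel whose leakage depends on an
    # external event that the channel itself never triggers.
    def __init__(self, history):
        self.history = history
        self.msg = None
    def send(self, msg):
        self.msg = msg
    def eve_observe(self):
        if self.history.happened("key_leaked"):
            return self.msg       # after the event: full message
        return len(self.msg)      # before the event: only the length

hist = EventHistory()
ch = Channel(hist)
ch.send("attack at dawn")
print(ch.eve_observe())   # 14 -- only the length leaks
hist.trigger("key_leaked")  # the environment decides the key leaks
print(ch.eve_observe())   # 'attack at dawn'
```

The two interval-wise guarantees are then two relaxations of this one resource: one that ignores everything after "key_leaked" has happened, and one that considers only the worlds where it initially already happened, each allowed its own simulator.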
For the details, I refer you to the paper, but let me repeat once more what is really nice: in every subsequent statement, we can just assume we have this overly idealized resource and forget about all the nitty-gritty details. Only when we do the overall bookkeeping do we have to apply all the relaxations and see what we really got in the end. In our work, we also looked at some additional examples, such as identity-based encryption, for which it has been proven by Hofheinz, Matt, and Maurer that the UC simulation-based security notion is impossible to achieve in the standard model. Using our interval-wise guarantees, however, we were able to come up with a composable notion that is not only achievable, but actually turned out to be equivalent to the standard IND-CPA notion. As a second example, we looked at coin tossing via commitments. Obviously, we could just take a UC commitment, but then we would need setup. What we were able to do here was come up with a formalization of commitments that does not have to rely on setup, such that the coin-tossing protocol can then just be analyzed using the composition rules, and we don't have to prove any explicit reduction. So to conclude this talk, we took the specification-based approach of the constructive cryptography framework and used the flexibility it gives us. In particular, we defined so-called interval-wise specifications, which are built from smaller building blocks in a way that interacts nicely with other aspects of the framework, such that in the end we get strong composition rules, and moreover it allows us to circumvent the simulator commitment problem. Thank you for your attention.