The main contribution of this work is a definitional framework for reasoning about key reuse in a very broad way. Key reuse occurs when the same secret is used in multiple ways, either within a cryptosystem or among different applications. This practice is risky, as we all know, and it's usually discouraged. But key reuse can crop up in unexpected ways that make it difficult to enforce key separation all the time. Recognizing this fact, prior works have looked at particular settings where key reuse is safe. One of the most important and influential papers in this space was by Haber and Pinkas, who introduced the notion of joint security for signing and encryption. They formalized conditions under which it's safe to use the same key pair for both of these primitives. They start with the standard notions of security for each and augment them to account for certain forms of key reuse. So let's take a look at how they defined security for signature schemes. The usual experiment starts by running the key generator and handing the public key to the adversary. You'll notice here that I've drawn the signing algorithm as an object that the game makes queries to for performing computations that involve the secret key. Once the adversary has the public key, we execute it with access to the oracles defined by the game. One of these oracles allows the adversary to sign a message of its choosing; the game relays these queries to the signing algorithm and relays the responses back to the adversary. The black arrows here represent the oracle access that objects have in the experiment. Eventually, the adversary halts and outputs a forgery attempt, and it wins if the forged signature is valid on a message it never queried. Now suppose there's an encryption algorithm that uses the same key generator.
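The experiment just described can be sketched in code. This is a minimal, illustrative sketch of the *shape* of the unforgeability game only: the "scheme" here is a keyed MAC standing in for a real public-key signature, and all names (`keygen`, `uf_cma_experiment`, `replay_adversary`) are my own, not the paper's formal syntax.

```python
import hmac, hashlib, os

# Toy stand-in "signature" scheme (a keyed MAC) used only to illustrate
# the structure of the UF-CMA experiment; HMAC is not a real public-key
# signature, and verification here needs the secret key.
def keygen():
    sk = os.urandom(32)
    pk = hashlib.sha256(sk).hexdigest()  # placeholder "public" part
    return pk, sk

def sign(sk, msg):
    return hmac.new(sk, msg, hashlib.sha256).digest()

def verify(sk, msg, sig):
    return hmac.compare_digest(sign(sk, msg), sig)

def uf_cma_experiment(adversary):
    pk, sk = keygen()
    queried = set()
    def sign_oracle(msg):          # the game relays signing queries
        queried.add(msg)
        return sign(sk, msg)
    msg, sig = adversary(pk, sign_oracle)
    # The adversary wins iff the forgery verifies on a message it
    # never submitted to the signing oracle.
    return verify(sk, msg, sig) and msg not in queried

# A trivial adversary that just replays an oracle answer loses:
def replay_adversary(pk, sign_oracle):
    return b"hello", sign_oracle(b"hello")

assert uf_cma_experiment(replay_adversary) == False
```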
What we ask is: what happens to the security of the signature scheme when the adversary also has access to a decryption oracle that uses the same key? Depending on the particular algorithms in question, this capability can lead to attacks that break the intended security of the signature scheme. On the other hand, many combinations are provably safe in this setting. Haber and Pinkas proved, for example, that in the random oracle model it's safe to use RSA-PSS and RSA-OAEP with the same key pair. This type of key reuse, signing and encryption, is pretty common. For example, a TLS server might use the same RSA key pair both for key encapsulation and for signing Diffie-Hellman key exchanges. But the potential scope of key reuse is much larger in modern systems, where applications interact in complex ways that prior work just hasn't envisioned. So it's natural to consider how we might generalize the setting of Haber and Pinkas. For example, what about UF-CMA security in the presence of other key operations? Of course, we can consider security properties other than UF-CMA, and we can generalize further by replacing the signature scheme with a generic primitive or protocol. So suppose that the scheme pi is defined in terms of calls to some underlying object. We'll call this object the interface, and it defines the set of key operations that can be performed in an experiment. In an exposed interface attack, the adversary is given access to this interface and may use it in its attack against the game. This models key reuse attacks at various levels of abstraction. For example, if the game is UF-CMA and the scheme is the signing algorithm, the interface might expose lower-level key operations used by the signing algorithm and possibly by other systems.
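The exposed-interface setting just sketched can be made concrete with a toy example. This is an illustrative sketch under my own naming, not the paper's formal syntax: the "interface" bundles all key operations on one secret, the game uses it for the application under attack, and the adversary additionally gets direct access to it.

```python
import hmac, hashlib, os

# Minimal sketch of the exposed-interface setting. A keyed MAC stands in
# for a generic secret-key operation; the class and function names are
# illustrative only.
class Interface:
    """Holds the secret and exposes key operations, never the key itself."""
    def __init__(self):
        self._sk = os.urandom(32)
    def mac(self, msg):                      # stand-in "key operation"
        return hmac.new(self._sk, msg, hashlib.sha256).digest()

def game(adversary):
    iface = Interface()
    tag = iface.mac(b"application message")  # the game's intended use
    # Exposed interface attack: the adversary queries the same interface
    # directly and wins if it reproduces the game's tag without seeing it.
    guess = adversary(iface)
    return hmac.compare_digest(guess, tag)

# With unrestricted access, key reuse is trivially fatal -- which is why
# trivial attacks must be excluded from the model:
assert game(lambda iface: iface.mac(b"application message")) == True
```

The final line previews the point made later in the talk: without some restriction, such as context separation, direct interface access lets the adversary win trivially.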
For example, if pi is RSA-PSS, then the interface might expose textbook RSA. On the other hand, the interface might expose higher-level primitive operations, such as signing or decryption; the scheme might be a protocol that uses these in some way, and the game might capture the security property intended by this higher-level protocol. So the model captures a very broad class of key reuse attacks. Let me now illustrate this with a couple of concrete examples. The first is TPM. TPM is a specification for on-chip crypto processors that provide a variety of security features used in trusted computing applications. One such feature is remote attestation of a host's state, which is useful for things like digital rights management. TPM supports several protocols for remote attestation, many of which use the same key pair. This of course raises the question of whether these protocols are jointly secure. Suppose we want to study the security of a particular protocol when the adversary can simultaneously execute other protocols, all using the same key. As it turns out, each of these protocols is implemented by making calls to the TPM's API. The API exposes low-level key operations that are common to all of the protocols that TPM intends to support. When a remote server requests attestation of the host's state, the host makes calls to this API in order to execute the protocol with the server. This means that when we're analyzing such a scheme, we might conservatively model these protocols' execution by giving the adversary direct access to the TPM. This is what we call an exposed interface attack, and here it's modeling something very realistic: the purpose of the TPM is to provide a hardware boundary for cryptographic keys, so that even if the adversary has control of the host, it still doesn't have direct access to the keys.
So when we're modeling the security of these protocols, we want to consider what happens when the adversary has compromised the host. In 2013, Acar et al. pointed out that this API can be used as a static Diffie-Hellman oracle for the secret key. This significantly reduces the concrete security of TPM's applications, and it also violates some important privacy properties of these protocols. Our second example of an exposed interface attack involves TLS. In version 1.2 and below, it's possible to negotiate custom parameters for both classic Diffie-Hellman and elliptic curve Diffie-Hellman key exchanges. Which mode is negotiated and which parameters are used depends on how the client and server are configured. Different configurations lead to different protocol variants, all of which might use the same long-term secrets. So let's explicitly model the interface that exposes the server's secret key for use in TLS. The key is used for signing in some variants and decryption in others, but the protocol variants we're interested in both use signing for authentication. Here's where things get interesting. In TLS 1.2, the signature doesn't cover the name of the negotiated cipher suite. The cipher suite, by the way, is a string that indicates what type of key exchange is being performed, what primitives are being used, and so on. In particular, it indicates whether a classic DH or elliptic curve DH group has been negotiated. This leads to an interesting cross-protocol attack discovered by Mavrogiannopoulos et al. in 2012. What makes their attack possible is this lack of binding of the signed key exchange message to the negotiated cipher suite, meaning an ECDH key exchange message might be misinterpreted by the client as pertaining to a classic Diffie-Hellman key exchange. Under the right conditions, this can lead to the client leaking bits of the shared secret.
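The static Diffie-Hellman oracle mentioned above for TPM is easy to see in miniature. The sketch below uses a tiny toy group and illustrative numbers only; the point is structural: an API that applies the fixed secret exponent to a caller-chosen group element *is* a static DH oracle for that key.

```python
# Toy multiplicative group Z_101^*, in which 2 generates the full group.
# All parameters here are illustrative, not TPM's actual ones.
p, g = 101, 2
sk = 37                # the long-term secret behind the hardware boundary
pk = pow(g, sk, p)

def exposed_api(elem):
    """Low-level key operation: elem^sk mod p, on ANY caller input."""
    return pow(elem, sk, p)

# The adversary never sees sk, but for any h of its choosing it learns
# h^sk -- exactly a static DH oracle on sk:
h = 5
assert exposed_api(h) == pow(h, sk, p)

# In this toy group the key can even be brute-forced; in real groups the
# issue is the concrete-security loss, since a static DH oracle enables
# attacks cheaper than a generic discrete-log computation.
recovered = next(x for x in range(1, p) if pow(g, x, p) == pk)
assert recovered == sk
```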
In general, key reuse can lead to cross-protocol attacks when secret key operations performed in the context of one protocol can be consumed in another context. As before, we might model cross-protocol attacks against TLS by giving the adversary direct access to the underlying interface, in this case one that exposes signing and decryption operations on the server's secret key. But this attack model is much too strong, because the adversary can use it to trivially impersonate the server. So we'll need some way of excluding these trivial attacks. Jumping ahead just a little bit: our syntax for interfaces takes as input a context string that is meant to uniquely identify the calling application, and in our security experiment we'll require that none of the adversary's interface queries use the same context as the application under attack. I'll return to this idea shortly. But first, let me return to an earlier slide and fill in some of the details of our main security notion. First of all, secret key operations in this experiment are specified by an interface, and we model key reuse attacks against the scheme by giving the adversary direct access to this interface. Now, notice that the only purpose the scheme serves in this experiment is to define how the game interacts with the interface. So for our purposes, it'll be simpler to drop the scheme from the formal definition and allow it to be defined by the game itself. The new experiment now involves just three objects: the adversary, an interface, and a game. To reiterate, the interface specifies how keys are generated and what operations can be performed with them, and the game specifies how the interface is used in a particular application, as well as the intended goal of that application. We call this setting security under exposed interface attack. Proving security in this setting will often require a property that we call context separability.
Loosely, an interface is context separable if key operations can be bound to the context in which they're used. Our syntax for interfaces makes this explicit: among other things, an interface takes as input a context string that's meant to uniquely identify the calling application. Our security experiment involves a distinguished context string, which we call the game context, and we require that the adversary's interface queries never use the game context. We call this requirement context separation. In our experiment, we allow the game context to be chosen by the adversary. So here's the main point: for context separable interfaces, we can design applications so that, as long as context separation is enforced, the adversary can't use the interface to attack the application. I should remark that we didn't invent this idea. Context separability is actually a design pattern that's apparent in a number of cryptographic standards. Let me give one example to illustrate. One of the systems that we looked at in our paper is the EdDSA signature algorithm. EdDSA is a variant of Schnorr that offers several advantages, one being that it's deterministic. I won't belabor the details of the scheme, but the point I want to make is that, like most digital signature schemes, EdDSA isn't context separable on its own, simply because its syntax doesn't surface an explicit context string. However, the RFC that standardizes EdDSA specifies variants of the algorithm that do just this. For these, a context string is provided as input to the signing and verifying operations, and the input of each hash computation is prefixed by the context. For certain classes of games, enforcing context separation then ensures that signatures the adversary obtains by interacting with the interface can't be used in an attack against the application. For example, suppose the application is some key exchange protocol that uses EdDSA for authentication.
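The context-prefixing idea can be sketched with a toy keyed tag in place of a real signature. This is illustrative only: EdDSA's context variants prefix the context into the hash inputs in the same spirit, but the scheme below is not EdDSA, and the names are my own.

```python
import hmac, hashlib, os

# Toy "context separable" signing: the context is bound into every tag.
sk = os.urandom(32)

def sign_ctx(ctx: bytes, msg: bytes) -> bytes:
    # Length-prefix the context so distinct (ctx, msg) pairs can't collide.
    data = len(ctx).to_bytes(1, "big") + ctx + msg
    return hmac.new(sk, data, hashlib.sha256).digest()

def verify_ctx(ctx: bytes, msg: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign_ctx(ctx, msg), tag)

# A tag produced under one context is useless in another: interface
# queries made under the context "other-app" can't forge for the game
# context "game". This is what context separation buys us.
tag = sign_ctx(b"other-app", b"hello")
assert verify_ctx(b"other-app", b"hello", tag)
assert not verify_ctx(b"game", b"hello", tag)
```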
As long as the protocol's signing and verifying operations always use the game context, context separation prevents the adversary from trivially impersonating the server. So codifying context separation into our experiment allows us to prove security under exposed interface attack for large classes of applications. Let me now show you how. Our main technical contribution is a composition theorem that relates security under exposed interface attack to security in the usual setting, in which the adversary has no direct access to the interface and the interface is used only for the intended application. Our key insight is fairly simple: if the adversary's interaction with the interface can be simulated given only knowledge of the public key, then interacting with the interface doesn't assist it in its attack against the application. We formalize this idea with the GapOne experiment, which asks an adversary D to distinguish between two worlds. In the real world, D is given access to the interface and the oracles defined by the game, as usual. In the fake world, the adversary's interface queries are answered instead by a simulator. Intuitively, if there's an efficient simulator such that no reasonable adversary can distinguish the output of the interface from that of the simulator, even while playing the game as usual, then any computation that involves the interface can be performed without it. So here's the informal statement of our main result: if an interface I and game G are GapOne secure, and G is secure in the usual setting, then G is also secure under exposed interface attack. So basically, to rule out exposed interface attacks, it suffices to prove that the interface and game are GapOne secure.
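The real/fake world structure of the GapOne experiment can be sketched as follows. This is a bare-bones illustration under my own naming: the toy interface exponentiates by the secret key, and the deliberately bad simulator just guesses, to show why finding a good simulator is the whole game.

```python
import random

# Tiny prime-order setup (p=23, q=11, g=2); parameters are illustrative.
rng = random.Random(0)
p, q, g = 23, 11, 2
x = rng.randrange(1, q)
pk = pow(g, x, p)

def real_interface(elem):      # keyed operation: elem^x mod p
    return pow(elem, x, p)

def simulator(elem):           # knows only pk; here it can only guess
    return rng.randrange(1, p)

def gap_one_experiment(distinguisher, world):
    oracle = real_interface if world == "real" else simulator
    return distinguisher(pk, oracle)

# A distinguisher that checks a known relation tells the worlds apart
# whenever the simulator's guess misses, so THIS simulator fails GapOne;
# proving GapOne security means exhibiting a simulator no distinguisher
# can catch (possibly under restrictions on the game).
def D(pk, oracle):
    return oracle(g) == pk     # in the real world, g^x == pk always

assert gap_one_experiment(D, "real") == True
gap_one_experiment(D, "fake")  # almost always False for this simulator
```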
Okay, returning to EdDSA: we prove that the context separable variants of this signature algorithm are secure for any game whose signing and verification operations always use the game context. This generality is the key advantage of context separability. Our theorem precisely specifies the conditions on applications under which enforcing context separation is sufficient to rule out exposed interface attacks. Besides context separability, what we need to prove GapOne security is a way to efficiently simulate signatures output by the interface using only the public key. In fact, there's a well-known technique for doing just that when we model the hash function as a random oracle, but I'll leave the details to the paper. EdDSA is an example of an operation exposed by what we call a discrete logarithm interface. The public key of a discrete logarithm interface is a point in a finite cyclic group, and the security of such an interface's applications is predicated on the hardness of computing discrete logarithms in that group. In our paper, we looked at various operations that a discrete logarithm interface might expose, some lower level and some higher level. Unfortunately, I don't have very much time at all to talk about these, so I'll just quickly go through our main application, which was to the Noise protocol framework. For those who don't know, the Noise protocol framework is used for designing and specifying authenticated key exchange protocols using just three primitives: a Diffie-Hellman function, an AEAD scheme, and a hash function. Noise can be thought of as a large set of partially specified protocols called handshake patterns; each handshake pattern has its own security properties, and each is useful in a different context.
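The well-known random-oracle simulation technique alluded to above is, for Schnorr-style signatures, the standard one: pick the response and challenge first, derive the commitment, and program the oracle. The sketch below uses a toy group and a dict as the programmable random oracle; parameters and names are illustrative, and this is plain Schnorr rather than EdDSA itself.

```python
import random

# Toy Schnorr-style setup in a prime-order subgroup: p=23, q=11, g=2
# (2 has order 11 mod 23). Illustrative parameters only.
p, q, g = 23, 11, 2
rng = random.Random(1)

oracle_table = {}                      # the programmable "random oracle"
def H(R, m):
    if (R, m) not in oracle_table:
        oracle_table[(R, m)] = rng.randrange(q)
    return oracle_table[(R, m)]

def verify(pk, m, R, z):
    c = H(R, m)
    return pow(g, z, p) == (R * pow(pk, c, p)) % p

def simulate_signature(pk, m):
    """Produce a valid-looking (R, z) from pk alone by programming H."""
    z = rng.randrange(q)
    c = rng.randrange(q)
    R = (pow(g, z, p) * pow(pk, q - c, p)) % p   # R = g^z * pk^(-c)
    assert (R, m) not in oracle_table            # programming must be fresh
    oracle_table[(R, m)] = c
    return R, z

x = 7                     # the secret key -- never used by the simulator
pk = pow(g, x, p)
R, z = simulate_signature(pk, "hello")
assert verify(pk, "hello", R, z)
```

The verification equation holds by construction: g^z = R * pk^c whenever R = g^z * pk^(-c) and H(R, m) is programmed to c.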
Rather than explicitly defining each handshake pattern, Noise specifies how messages are processed in valid Noise protocols, which in turn determines the set of valid patterns. This manner of specifying protocols makes it possible to reason about key reuse in Noise in a very general way. Static Diffie-Hellman is the primary means of authentication in Noise. What we did in our paper was model the message processing rules as a discrete logarithm interface that exposes a host's static secret for use in handshake protocols. The interface specifies how to consume inbound messages as well as how to produce outbound messages, and it also specifies how to update the host's state as a side effect. The interface is also context separable: Noise uses a context string that is meant to bind protocol messages to the handshake pattern in which they're being produced or consumed. This allows us to prove GapOne security of the interface with respect to a large class of games, which implies joint security of all the handshake patterns, that is, protocols, that our interface supports. Our proof required tweaking the processing rules slightly in order to provide context separability for a wider set of applications, in particular those that meet the restrictions imposed by the theorem. As for EdDSA, we need only restrict the game's use of the interface so that key operations are always bound to the game's context. Our interface doesn't support all Noise protocols, because some would give rise to GapOne attacks. In light of this, our work leaves the security of key reuse in Noise an open question. Still, if you're interested in key reuse in Noise, I think you should read our paper, because we found some interesting stuff. And that's all I've got; my time's up. Thank you very much. Any questions? I have time for questions. So you applied your result to discrete-log-based schemes, right?
Is there any problem using it elsewhere? We eventually landed on Noise as an application because it seemed perfect for this setting: the way Noise specifies protocols lets you reason about these kinds of properties in a very simple, generic way. But you could of course apply this to other sets of primitives; it would apply to RSA or lattice-based schemes and things like that. And it would be applicable in the standard model too? Oh yes, yes. Thank you. If there are no more questions, then let's thank the speaker again. And see you at dinner.