My name is Chris Patton. I'm a recent graduate of the University of Florida, where I was advised by Tom Shrimpton. In this talk, I'll present our latest paper, called "Quantifying the Security Cost of Migrating Protocols to Practice." These days, the standard of security for a cryptographic protocol is that breaking it should be as hard as invalidating some well-studied computational assumption. This provides a strong guarantee in theory, but in practice the provable security methodology has some important limitations. One of them is that, for a given system, it succeeds at eliminating attacks only to the extent that the system's formal specification accurately captures its real-world behavior. So the subject of this paper is the task of reconciling discrepancies between the real system and its supporting analysis.

Let me start off with a simple motivating example. The latest version of TLS, TLS 1.3, is often lauded as a success story for provable security. What sets 1.3 apart from earlier versions is that security proofs were pursued during the standardization process. In that spirit, an early draft of 1.3 was based on a protocol called OPTLS. OPTLS has a security proof, and it was designed to meet the needs of the new standard. Even so, significant changes were made to OPTLS in the process of translating it into the draft of TLS 1.3. And this is typical of the standardization process: the protocol's original designers won't be able to foresee all of the practical considerations that the standard needs to address. So changes are often necessary, but it does raise an interesting question: what, if any, of the provable security that supported the original protocol is inherited by the final standard?
Our goal in this work is to answer a more general version of this question. Starting with some reference system for which security is known, we want to infer the security of the real system derived from it. Here, the reference system refers to the formal specification from the original analysis, and the real system is obtained by revising the spec in order to account for some observed behavior, like the details of the standard. I'll refer to this process as translation. Generally speaking, translation is the revision of a system's specification in order to account for something not dealt with in the original analysis, that is, in the reference system. And because changing the spec can result in an attack, it's important to thoroughly evaluate its impact on security.

Suppose we have a security proof for some reference system Y. When we translate Y into a new system X, the hope is that X is at least as secure as Y; that is, the translation doesn't result in a new attack. The way we verify this is to repeat for X the same analysis that's already been done for Y. That is, suppose we know that breaking Y is as hard as solving some problem P. What we want to show is that this reduction from the hardness of P to the security of Y can be modified to account for the translation. But this task bears a significant analytical cost, one that might make a rigorous treatment of this problem prohibitive, because generating a fresh proof requires a certain amount of work, and depending on how different the real system is from the reference system, this task is likely to be highly redundant. To take an analogy, imagine we want to find a route to the top of a mountain.
We might choose to start from scratch. But if someone else has already found a path to a point close to where we want to go, then we might save ourselves a good deal of trouble by starting our pathfinding expedition from this previously discovered point. And so it is with provable security: the work required to prove that the real system is secure might significantly overlap with the work required to prove that the reference system is secure. So what we're going to do in this work is characterize the conditions under which it's possible to prove the real system secure by appealing directly to what's already been established for the reference system. In other words, what we want is a black-box reduction from the security of Y to the security of X. The main contribution of this work is a formal framework in which this kind of argument can be made precise. We call it the translation framework. It's based on the notion of indifferentiability, which by now is a fairly well-known idea, particularly as a design goal for cryptographic hash functions. But the notion is much more general than this one application; here I'm going to demonstrate, for the first time, its use for studying cryptographic protocols. Now, if you've never heard of this concept of indifferentiability, don't worry.
I'm going to define it in a little while. I'll start off by describing the case study that motivated this application of indifferentiability and the more general translation framework. One of my favorite types of protocols is password-authenticated key exchange, or PAKE for short. The goal of PAKE is to establish a key between a client and a server who initially share only a password. Here I've drawn a PAKE protocol with just two messages, the first sent from the client to the server and the second sent from the server to the client. Having exchanged these messages, the parties can then derive a shared key, which we'll call the session key. The security goal for PAKE is that the session key should be indistinguishable from a random key, even to an adversary who observes all network traffic and can actively interfere in the protocol's execution. And more to the point, this should hold even when the password has relatively low entropy. Of course, if the entropy is too low, then there's no hope of achieving security at all, because the adversary can always try to guess the password while impersonating one party to the other. Since this is inevitable, PAKEs are designed to ensure that the adversary can't do any better than this simple online guessing attack.

Now, PAKEs are wonderfully versatile objects that find a lot of useful applications. They tend to be on the simple side, often having just two messages, and simplicity is a real asset here, because the lower the communication complexity, the easier it is to integrate the PAKE into applications. In particular, there's been a lot of recent interest in integrating a PAKE into TLS. This would allow using passwords alongside, or even in lieu of, traditional certificates, so that TLS can be used without relying solely on the web public-key infrastructure for authentication. Okay, so let's think about how we might do this, how we might integrate a PAKE into the TLS protocol. Here we see a simplified view of the TLS 1.3
handshake. The client and server exchange three flows before they derive a session key. In the first flow, the client sends its Diffie-Hellman key share to the server; in the second flow, the server sends its own key share to the client, as well as another value, which I'll call the server's confirmation message; and in the last flow, the client sends its own confirmation message to the server. Each of these flows corresponds to a sequence of actual handshake messages in the TLS protocol, but as we'll see later, the contents and semantics of these messages aren't relevant to the problem at hand. For our purposes, all that really matters is that the key-exchange messages are sent in the clear, whereas the confirmation messages depend on keys derived from the shared secret. I'll return to this idea in a little while.

Now, those familiar with TLS will notice that there are any number of ways we might integrate a PAKE into the handshake, but the general principle is that we would standardize a protocol extension for this purpose. Many extensions in TLS have the same basic structure: a client requests usage of a particular extension by sending what's called an extension request to the server, and if the server agrees to use it, then it sends an extension response in reply. The request and response might carry data that's relevant to the extension. So we can specify an extension in which the request carries the first PAKE message and the response carries the second. Then we would use the PAKE shared secret to derive keys, rather than the usual Diffie-Hellman key exchange. Now, this is one approach to integrating a PAKE into TLS. It's not the only one, of course, but it has some nice features: it works with any PAKE that has just two messages, and therefore appears to have relatively low implementation complexity. But what about security?
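To make the three-flow pattern concrete, here is a toy sketch in Python. The group, the labels, and the `derive_keys` helper are my own stand-ins for illustration, not TLS 1.3's actual X25519 key exchange or HKDF-based key schedule; the point is only the shape of the flows: key shares in the clear, then confirmation MACs keyed from the shared secret.

```python
import hashlib
import hmac
import secrets

# Toy parameters: far too small for real use; illustration only.
P = 2**64 - 59           # a prime modulus
G = 2                    # generator

def keygen():
    x = secrets.randbelow(P - 2) + 1          # ephemeral secret exponent
    return x, pow(G, x, P)                    # (secret, public key share)

def derive_keys(shared: int, transcript: bytes) -> dict:
    # Derive the session key and one confirmation key per party.
    prk = hashlib.sha256(shared.to_bytes(8, "big") + transcript).digest()
    return {label: hmac.new(prk, label, hashlib.sha256).digest()
            for label in (b"session", b"server confirm", b"client confirm")}

# Flow 1 (client -> server): client's key share, sent in the clear.
c_sec, c_share = keygen()

# Flow 2 (server -> client): server's key share plus its confirmation MAC.
s_sec, s_share = keygen()
transcript = c_share.to_bytes(8, "big") + s_share.to_bytes(8, "big")
s_keys = derive_keys(pow(c_share, s_sec, P), transcript)
s_confirm = hmac.new(s_keys[b"server confirm"], transcript, hashlib.sha256).digest()

# Flow 3 (client -> server): client verifies the MAC, then sends its own.
c_keys = derive_keys(pow(s_share, c_sec, P), transcript)
assert hmac.compare_digest(
    s_confirm,
    hmac.new(c_keys[b"server confirm"], transcript, hashlib.sha256).digest())
c_confirm = hmac.new(c_keys[b"client confirm"], transcript, hashlib.sha256).digest()

assert c_keys[b"session"] == s_keys[b"session"]   # both sides agree on the key
```

Note how the confirmation messages, unlike the key shares, can only be computed from the shared secret; this is the one property of the flows that matters for the analysis below.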
The hope is that, at the very least, this usage of the PAKE should be no less secure than the PAKE itself. In other words, whatever security property has been proven for the PAKE should also hold for our extension. Okay, so now we need a definition of security. Here I'll provide a brief overview of an appropriate model for authenticated key exchange. I'm going to define it in terms of two objects: the execution environment, which formalizes the interaction of the adversary with honest parties in the protocol, and the game, which formalizes the adversary's objective in attacking the protocol. Now, if you're familiar with game-based models for authenticated key exchange, then this decoupling of the components of the security experiment might seem a bit unusual, but it's going to help us get to where we're eventually going, which is to use indifferentiability to formalize secure translation.

We model an active network attacker by having the adversary be responsible for relaying all traffic in the network. It's also responsible for initiating each session, that is, each instance of a party running the protocol, presumably with an instance of some other party. The attacker is empowered to corrupt a session by revealing its ephemeral key, which consists of the random coins consumed by that session, for example to generate the party's key share. It can also corrupt a party by revealing that party's static key, that is, the long-term secret used by the party across multiple sessions. So in our case, a client's static key would be its password. When a session completes successfully, it outputs a session key, which the attacker is also capable of revealing. And recall that the goal of the adversary is to distinguish some session key from a random key. To formalize this, the experiment defines an oracle that executes the following simple procedure: it flips a coin, and if the coin comes up heads, then the oracle returns a key output by some test session chosen by the adversary. Otherwise, it chooses a key at
random and returns that instead. The adversary wins if it correctly guesses the outcome of the coin flip based on the output of this oracle. We also require that the test session is fresh, in the sense that computing the real test session key should be non-trivial for the adversary.

We're going to write this experiment in terms of three objects: the adversary A, the game G that formalizes this key-indistinguishability property, and the system X that formalizes the execution environment. X specifies two distinct interfaces, one that's called by the game and another that's called by the adversary. The second interface lets the adversary interact with honest parties, including initializing new sessions, sending messages to sessions, and revealing ephemeral and static keys. The first interface is used by the game in order to obtain session keys, as well as other information that might be used to determine whether the test session is fresh. The adversary makes queries to the game in order to reveal session keys, including for the test session, and to make its guess of the challenge bit. The experiment begins by running the adversary, and once it halts, the game outputs a bit that indicates whether the adversary's attack was successful. What we're interested in is a protocol's concrete security, that is, the probability that an adversary with a given runtime and query complexity correctly guesses the challenge bit. Our goal for the design of our PAKE extension is that its concrete security should be roughly the same as the concrete security of the underlying PAKE. More formally, we want to show that for every adversary A that breaks the PAKE extension with some probability, there exists an adversary B that breaks the PAKE itself with essentially the same probability, and that has a similar runtime and query complexity. Now, it turns out that we can construct such a B whenever the execution of the PAKE extension is indifferentiable from the execution of the PAKE. So let me now clarify what this
means. In particular, I want to show you how to use the standard notion of indifferentiability to reason about secure translation. And what we're going to find is that it has some limitations that we'll need to overcome in the translation framework.

The usual notion of indifferentiability is expressed in terms of an adversary D, which we'll call the differentiator, and a pair of systems X and Y. X is thought of as the real system, and we'll think of Y as the reference system for which security is known. Now, D is given black-box access to one of these systems, and its task is to determine which it's interacting with. It outputs one if it believes it's interacting with the real system, and what we're interested in is the probability that it outputs one in the real experiment versus in the reference experiment. This formalizes its advantage in discerning which system it's interacting with. In the reference experiment, the adversary's access to the system's second interface is mediated by a simulator S, as shown here. The simulator's job is to trick the adversary into thinking it's interacting with the real system when in fact it's running in the reference experiment. So the system's second interface, intuitively, is meant to formalize what changes from the reference experiment to the real one. Going back to our model for authenticated key exchange, recall that the second interface lets the adversary execute sessions of the protocol. So in this setting, the simulator's job is to make the execution of the base PAKE look like the execution of the PAKE extension. In general, we say that X is indifferentiable from Y if for every efficient D there exists an efficient S such that D's advantage in differentiating the real experiment from the reference experiment is small. In other words, to prove indifferentiability, what we want to do is construct the simulator S such that the differentiator D is unable to tell which experiment it finds itself in. If we can do
this, then there's a simple lemma that allows us to relate the security of the real system to the security of the reference system. In particular, for every efficient attacker A and simulator S, we can construct an efficient attacker B and differentiator D such that A's success probability is bounded above by B's success probability plus D's differentiating advantage. In other words, if the real system is indifferentiable from the reference system, then we can transform an attack against the real system into an attack against the reference system. So for any game G: if Y is secure in the sense formalized by G, and X is indifferentiable from Y, then X is also secure in the sense formalized by G. And this is true for any game that's compatible with our security experiment, which is great, because we now have a way to infer security of the real system from what's already been established for the reference system. All we have to do is prove indifferentiability. So we're at the top of the mountain, right? Not quite. This notion, which I'll remind you is the usual notion of indifferentiability, is not yet suitable for a formal treatment of secure translation. Indifferentiability is normally used to reason about relatively simple objects, like hash functions or block ciphers. The objects we're interested in, authenticated key exchange for example, are more complex than this, so to deal with this complexity we'll need to generalize the notion a bit further.

For starters, our notion of indifferentiability pertains to a more general security experiment. Here we see the experiment that we had before, involving the same old system X, game G, and adversary A. In the translation framework, we're going to embellish this experiment in three key ways. The first thing we'll do is add another object, which we'll call the resources. Resources are going to model things that might be used to prove security, but aren't essential to defining security.
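In symbols (my notation, an approximation of the paper's statement rather than a quotation of it), the composition lemma mentioned a moment ago says that for every efficient attacker $A$ and simulator $S$ there exist an attacker $B$ and differentiator $D$, each about as efficient as $A$, such that:

```latex
\mathbf{Adv}^{G}_{X}(A) \;\le\; \mathbf{Adv}^{G}_{Y}(B) \;+\; \mathbf{Adv}^{\mathrm{indiff}}_{X,Y}(D)
```

Here the left-hand side is $A$'s probability of breaking the real system $X$ in the sense of game $G$, the first term on the right is $B$'s probability of breaking the reference system $Y$ in the same sense, and the second term is $D$'s advantage in differentiating $X$ from $Y$ with respect to $S$.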
So, for example, resources are used to formalize the random oracle model in our experiment. They can be used for a variety of things, but in general they're things in the experiment that the system and adversary share access to, like the random oracle in the random oracle model.

Next, the experiment is parameterized by a function psi, called the transcript predicate, which is used to determine whether the adversary's attack is valid, based on the sequence of queries it makes to the game and system. The adversary now wins if it wins the game and its attack is valid according to the transcript predicate. This is going to be useful for enforcing restrictions on the execution environment when proving indifferentiability. For example, in an analysis of a particular authenticated key exchange protocol, we might need to restrict the adversary from revealing an ephemeral key, and this is what the transcript predicate is useful for.

The last generalization we'll make is intended to simplify the analysis of standards for real-world protocols like TLS. Now, TLS specifies a complex protocol, and most of the details are not necessarily relevant to what we might want to prove about it. The partially specified protocol methodology of Rogaway and Stegers provides a nice way to account for these details without needing to specify them exhaustively, and we're going to incorporate their core idea into our framework. Their strategy is to divide a protocol specification into two components: the protocol core, which formalizes the elements of the standard that are essential to the security goal, and the specification details, which capture everything else. The protocol core is defined in terms of calls to a specification-details oracle, which in the security experiment is answered by the adversary. That means the adversary participates in the computation of the protocol. This formalizes an unusually strong attack model, but one that yields a rigorous treatment of the standard
itself, without exhaustively specifying all of the details.

Okay, let's define a notion of indifferentiability that's suitable for this experiment. Remember that what we're interested in is an adversary's advantage in differentiating some real system X from some reference system Y. As usual, D's access to the second interface of Y is mediated by a simulator S, and likewise D's access to the resources will also be mediated by the simulator. This means that the resources might also change from the reference experiment to the real one, which will be a useful feature of the model, in particular for our case study. We say that X with shared resources R is psi-indifferentiable from Y with shared resources Q if for every efficient psi-differentiator D there exists an efficient simulator S such that D can't differentiate the real experiment from the reference experiment, except perhaps with small probability. For this generalized notion, which we'll call shared-resource indifferentiability, it's easy to prove an analog of the lemma that we had before, which says that if the real system X with R is psi-indifferentiable from the reference system Y with Q, then breaking X with R is essentially as hard as breaking Y with Q, in the sense formalized by any game G and transcript predicate psi.

That's a bit of a mouthful, so let me quickly summarize what's going on. The translation framework is basically a security model that allows us to express a wide range of security goals for a wide range of systems. And for this model, we've defined a generalized notion of indifferentiability, called shared-resource indifferentiability, that suffices to show that a given translation is secure, without resorting to a fresh proof. Now, there's much more to the translation framework than I have time to get into, so I encourage you to check out our paper if you'd like to learn more. In the meantime, in the last few minutes I have left,
let me turn back to our case study of designing a PAKE extension for TLS. What we did specifically in this paper is show how to securely instantiate the TLS extension we discussed earlier with a particular PAKE protocol. That protocol is SPAKE2, first proposed and analyzed by Abdalla and Pointcheval back in 2005. Now, SPAKE2 is an exceptionally simple scheme, making it, I think, well suited for this application. Still, in translating it into the extension, it was useful to make some changes to it. In particular, we changed key derivation slightly, in order to ease the task of integrating the shared secret into the TLS key schedule, and we also changed the manner in which the public parameters are chosen. Again, I don't have time to go into the details, so please check out the paper.

What we show is that breaking the extension is at least as hard as breaking SPAKE2 itself. This is true for any security goal that can be formalized as a game in our experiment, and it's true in particular for the security goal targeted by Abdalla and Pointcheval in their original analysis. We need only assume that the adversary doesn't reveal ephemeral keys, as do Abdalla and Pointcheval. Our strategy is to prove that the execution of the extension is indifferentiable from the execution of the original protocol. The proof is quite interesting, but unfortunately I don't have time for the details, so let me just tell you about the most interesting part. It was really dealing with the details of TLS. We formalize the extension as a partially specified protocol, which allows us to easily handle most of the specification details without needing to specify them exhaustively. But one part we have to account for explicitly is the derivation of the keys that are used to compute the confirmation messages. Simulating this computation with our simulator results in a modest loss of concrete security. So let me just wrap this up.
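For reference, the basic SPAKE2 message flow described above can be sketched as follows. This is a toy illustration with deliberately tiny, insecure parameters and a simplified key-derivation hash of my own choosing; it is not the modified variant used in the extension, where key derivation and parameter generation differ.

```python
import hashlib
import secrets

# Toy group: p = 2q + 1 with q = 11; g, M, N are quadratic residues mod p,
# hence generators of the order-q subgroup. Hopelessly insecure parameters,
# for illustration only; real SPAKE2 uses a large prime-order group.
p, q = 23, 11
g, M, N = 4, 9, 13
pw = 7                       # the shared password, mapped into Z_q

# Client message: X = g^x * M^pw
x = secrets.randbelow(q)
X = pow(g, x, p) * pow(M, pw, p) % p

# Server message: Y = g^y * N^pw
y = secrets.randbelow(q)
Y = pow(g, y, p) * pow(N, pw, p) % p

# Each side strips the password blinding, then raises to its own exponent;
# both arrive at the Diffie-Hellman value g^(xy).
K_client = pow(Y * pow(N, (-pw) % q, p) % p, x, p)
K_server = pow(X * pow(M, (-pw) % q, p) % p, y, p)
assert K_client == K_server == pow(g, x * y % q, p)

# Session key: a hash of the transcript, password, and shared group element
# (a simplification of the actual key-derivation step).
sk = hashlib.sha256(f"{X},{Y},{pw},{K_client}".encode()).digest()
```

An adversary who doesn't know pw can't strip the blinding factors M^pw and N^pw, so its best option is to guess a password and run the protocol, which is exactly the online guessing attack that PAKE security permits.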
The provable security methodology has profoundly shaped the practice of cryptography, so much so that a proof is often a prerequisite to standardization of a protocol. Yet this process of standardization can create discrepancies between the real system and its formal specification, resulting in a gap between what's known about the system and its real-world security. The translation framework allows a rigorous treatment of these discrepancies without resorting to a fresh proof. In particular, to show that a translation is secure, it suffices to prove that the real system is shared-resource indifferentiable from the reference system. This approach can significantly reduce the analytical effort required to evaluate the impact of translation, as we demonstrate with our case study. Well, that's it. Thanks for listening, and I hope you enjoy the rest of the virtual conference.