All right, everyone. Welcome to the last session of the conference. How's everyone doing? Everyone holding up? Good, good. All right, so welcome to When the Going Gets Tough, Get TUF Going. I'm Riaz, and I work on the security team at Docker. Today, I'd like to share some of our work that we feel is particularly relevant to securing Linux package managers and software updates in general. So first, I'll start with the construction of a package manager and explain that if you're using just TLS and GPG, you're going to have a really bad time. Then I'll introduce TUF, which is shorthand for The Update Framework, a full-fledged framework for delivering secure updates. We'll dive into what it does under the hood and see how it's being used today and how it can be further integrated. And lastly, we'll end with the idea of hermetic builds, something we've been dreaming about at Docker, and TUF brings us really close to delivering them. I want to save the best for last at the end of the talk. But first, before we get too deep into what TUF is and how it's being used, I want to start with a question. Let's take a step back and ask: where does software come from? The reason I pose this question is because when many of us think about buggy software and software security, we're thinking about SQL injections, we're thinking about cross-site scripting bugs. But really, you have some buggy software, you got that buggy software somehow, and it's somehow running on your computer. And as many of us know, a computer is a general-purpose machine. Take this Apple laptop: I don't get it with all of the software it will ever need pre-installed when I buy it. I have to go acquire, install, and update my software. I can't just send it back to Apple and say, hey, I need the new Keynote, when can you ship it back to me? I need to download that.
And this process of acquiring, updating, and installing software presents us with interesting security challenges, because at some point we have to trust somebody to send us the piece of software that we want to install. And we need to break down the types of guarantees we need to establish that trust. So let's think of this in a different setting first. Imagine that you're a soldier. You follow orders, and if you do your job well, you'll get paid by the end of the month. As you can imagine, you wouldn't necessarily obey any orders that some random person on the street gives you, right? They could be trying to trick you or have you carry out an illegitimate order. It's vitally important that you can guarantee that your orders come from someone that you trust, someone in your chain of command. This guarantee that you can trust the provenance of your orders is known as authenticity. Likewise, you also don't want to execute orders if they've been tampered with en route, no matter how trustworthy the messenger is. You want some kind of guarantee that from issuance of the order to delivery, there was no tampering with the actual order contents themselves. This is referred to as data integrity. And lastly, you don't want to execute orders if they're out of date, since they may no longer be relevant and, in fact, may actually be actively harmful. So you want some kind of expiration date or timestamp, after which you understand that the order is no longer valid and should not be executed. You want a freshness guarantee. And all of these guarantees translate over to computer speak. When you want to install a package foo from any package manager, you want your computer to execute relevant, accurate, and trustworthy orders, so you can trust the package foo itself. So now that we've thought about the soldier analogy, let's think about how we can get the same guarantees in a strawman package manager. One idea is that we can use TLS, right?
We can use TLS to download our packages when we're installing and updating our software. For example, apt clients can be configured to use TLS; you must also configure the mirror to support it. And many other package managers do this. As you may already know, TLS is short for Transport Layer Security, and it provides us with authenticity and integrity guarantees for data over a network. If you think back to our soldier analogy, that covers our messenger character. But this is only the transport layer. So we clearly trust the messenger, which is our transport layer, and the data the messenger carries. But what do we know about the origin of our orders? Where, why, and how did the messenger get this data in the first place? And importantly, who is the author? Are they mal-intentioned? So while using TLS may offer authenticity and integrity guarantees over the wire, over the messenger, our transport layer, we don't really know anything about our content layer. Who's our author here? How do we trust this data that the original author published? In computer speak, what we're wondering is: how did the package manager even get the package that we're installing in the first place? Well, someone on the internet set up the repository, and then someone, maybe the same person or someone else, at some point uploaded the package that you're installing. And in many existing package managers, permissions to upload or delete packages are granted to user accounts, and these user accounts can be compromised. For example, on apt you have Personal Package Archives, PPAs. So how can you be sure that the package you installed wasn't uploaded by some untrusted third party who might have compromised one of those accounts? So perhaps digital signing with GPG can help us. apt and yum both have support for uploading GPG signatures to accompany packages.
However, while GPG signatures do provide us with stronger authenticity and integrity guarantees, they're just a cryptographic primitive, a building block that provides authenticity and integrity properties over a bunch of bytes without any additional context. So even assuming that a key and a signature can be reliably and accurately associated with a particular person, all you know is that that person signed, at some point, for some reason, over some bytes. You know neither the reason, nor that that person should have been able to sign those bytes in the first place. And this leaves us with some challenges in a software updating system. One such challenge, for example, is that GPG alone does not provide enough context to protect against freeze and rollback attacks. An indefinite freeze attack occurs when attackers are able to freeze clients by sending them the same version of metadata over and over again. So you can imagine you have version 7, and you keep on getting metadata for version 7, 7, 7, over and over again. A rollback attack is where you're actually tricked into installing an older version, so you may roll back from version 7 to version 6. These are both essentially replay attacks. In both cases, because the GPG signature over the bytes of the message or the package alone does not provide any expiry or version context, the client is vulnerable and can be starved of fresh updates. In addition, in the case of GPG, it's quite difficult to recover from a lost key. Unless you properly set up subkeys ahead of time, it's quite difficult to do, and even so, losing a master GPG key is fatal to the security of the system. Signing keys may be compromised or lost, so systems must be designed to be flexible and recoverable from key compromise. In the example of our soldier, our general, our author, might pass on. Who should take his or her place? How do we facilitate the transition?
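To make the freeze and rollback problem concrete, here's a minimal sketch in Python of the version and expiry checks that a raw signature alone doesn't give you. The function name and fields are invented for illustration and don't correspond to any real library's API.

```python
from datetime import datetime, timezone

def check_freshness(new_version: int, trusted_version: int, expires: datetime) -> None:
    """Illustrative checks a client needs beyond signature validity."""
    # Rollback: never accept metadata older than what we already trust.
    if new_version < trusted_version:
        raise ValueError(f"rollback: version {new_version} < trusted {trusted_version}")
    # Freeze: expired metadata means we may be starved of fresh updates.
    if expires <= datetime.now(timezone.utc):
        raise ValueError("metadata expired: possible freeze attack")

# Accepting version 8 when we already trust version 7 is fine,
# but being served version 6 again would raise a rollback error.
check_freshness(8, 7, datetime(2099, 1, 1, tzinfo=timezone.utc))
```

Note that a GPG signature over the bytes would validate identically in both cases; it's these extra version and expiry fields, signed along with the content, that let the client reject the replay.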
In the United States, for example, we have an election coming up in November, and the United States president is the commander-in-chief. How do we, in software updating systems, provide a similar transition between generals and administrators? As such, GPG alone does not provide a very strong story for survival of key compromise. And lastly, there are some orders that are so important, you might want more than one signature; you might want a quorum of signatures. For example, launching a nuclear missile. In the United States Air Force, there's the two-man concept. You might have seen it in movies, where you need two people, two authorized operators, to each retrieve their own key, simultaneously place them in locks which are far enough apart that one person can't reach both, and turn the keys simultaneously in order to authorize a nuclear missile launch. And in addition to that, in many cases you also require another two operators in a separate location to do the exact same procedure. So again, a very important decision, but we have a threshold of people authorizing that decision. With GPG, by default you can very easily append signatures, but you have to do some wiring up yourself to get this thresholding property. So let's add survival of key compromise and thresholding to our desired guarantees. If you couple together TLS and GPG alone, they're essentially just two building blocks, and if you really care about these guarantees, then in the best-case scenario you achieve some level of authenticity and integrity, but you don't have any built-in freshness. Likewise, you have survival of key compromise through subkeys, but you need some extra work for thresholding beyond just appending signatures. And of course, GPG isn't considered the easiest software to use.
If you're familiar with the papers Why Johnny Can't Encrypt, Why Johnny Still Can't Encrypt, and, even last year, Why Johnny Still, Still Can't Encrypt, on understanding the usability of PGP (the last two were about email), even so, GPG is not considered the easiest system to use when managing keys and dealing with subkeys, let alone if you want to get thresholding to work; that's going to require some effort. Ultimately, what we've learned so far is that secure software distribution is a pretty tough problem, and in order to better understand the intent behind updates, and fend off all the problems we've identified so far and many, many more, we're going to need a signing framework instead of just building blocks. And so, from the talk title, since the going has started to get tough, it's just about time to get TUF going. I'd like to introduce TUF, The Update Framework, which is a security framework designed to secure content distribution, for any kind of content. So for instance, application updates: we use TUF for Docker images, and we could also use TUF for Linux packages, as we'll see later in the talk. TUF has a pretty interesting history, as it actually originated from the Tor Project. It's based on Thandy, which is the update system for Tor that was released in 2008. And Thandy is particularly interesting because it was refined by fire, right? Tor needed a secure update system that could withstand nation-state actors while still ensuring anonymity over its network. And I think the best way to capture how amazing Thandy is, is to read from the blog post from when Thandy was released, in the words of Nick Mathewson of the Tor Project. He says: I'm especially happy with Thandy's security architecture. We assume an adversary who can operate compromised mirrors and who can possibly compromise the main repository.
At worst, such an adversary can DoS users' updates in a way that users can detect. Unlike lots of other software update tools, we're immune to rollback attacks, we can detect frozen mirrors, and we can even handle key compromise relatively gracefully. Most encouragingly, the fact that Thandy is both decently secure and well-specified has garnered us some attention from some mysterious security researchers. And those researchers were the folks that put together The Update Framework. So today, Professor Justin Cappos and a bunch of other folks at NYU maintain The Update Framework, which is the successor to Thandy. What The Update Framework does over Thandy is generalize it: Thandy was just for Tor, just one application, while The Update Framework is for any application, any kind of content. Today, there's a living spec document for The Update Framework; it's all open source, and you can check it out in your own time. In addition, concepts and iterations from The Update Framework have been published in peer-reviewed academic papers at some top-notch conferences, for example at CCS, the ACM Conference on Computer and Communications Security, and NSDI, the USENIX Symposium on Networked Systems Design and Implementation. So I've bored you enough with the history. You may be wondering: okay, I understand where it came from, but what is TUF? TUF specifies a signed metadata structure. Rather than signing the actual packages themselves, like source tarballs, you're signing contextual information about them. As such, a TUF repository consists of a linked set of signed metadata files that together describe a collection of distributions that you trust. The packages themselves are separate from the TUF repository and can live anywhere: in a CDN, on a mirror, on disk, any combination, it doesn't matter.
The way a TUF-enabled package manager would work is that you would first download the TUF repository metadata, which I'll draw as these scrolls. You would validate and verify the signatures and other metadata for the repository, and then get checksums and sizes and be able to properly identify which package you want to install. So let's dive into exactly what's in each of these scrolls. In the set of metadata, we have five roles: root, timestamp, snapshot, targets, and delegations. Each role in our repository provides us with a certain set of guarantees for our trust. We'll use these icons throughout the talk to keep track of them. Let's start with the root. The metadata in the root file specifies the public keys for all the other roles, so it anchors trust for the keys of every other role in our repository. And, self-referentially, it specifies its own keys: the root file has its own root key specified, and it's signed by that root key. Moreover, I added this little pin because the first time you ever see a root for a TUF repository, you pin that root as the anchor of trust, similar to how Chrome pins certificates the first time it sees them. And as we mentioned earlier, we're interested in thresholding: it's possible to have multiple keys per role. You can imagine that you have five root keys and require a threshold of three, a majority, in order for the root to be valid. A particularly motivating example: you can imagine that in the Apple versus FBI case, if the FBI were to coerce Apple engineers in San Francisco to sign a back door into iOS, they would have one signature. But could they get signatures from collaborators who could be in China, or in Switzerland, or other countries? TUF has no restriction on where the keys are located, because signing happens client-side. And since the root is the anchor of trust for the repository, it's very important to keep that key very safe.
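As a rough illustration of what the root role conveys, here's a hedged sketch in Python. The field names and key IDs are made up for clarity (the real layout is defined in the TUF specification), but the idea of per-role key lists with a signing threshold is the same.

```python
# Illustrative root metadata: public keys and a threshold for every role.
# Five root keys with a threshold of three gives us the majority quorum
# described above. Key IDs here are placeholders, not real key digests.
root = {
    "roles": {
        "root":      {"keyids": ["k1", "k2", "k3", "k4", "k5"], "threshold": 3},
        "timestamp": {"keyids": ["k6"], "threshold": 1},
        "snapshot":  {"keyids": ["k7"], "threshold": 1},
        "targets":   {"keyids": ["k8"], "threshold": 1},
    },
}

def meets_threshold(role: str, valid_sig_keyids: set) -> bool:
    """Count distinct authorized keys that produced a valid signature."""
    info = root["roles"][role]
    return len(valid_sig_keyids & set(info["keyids"])) >= info["threshold"]

# Three of the five authorized root keys signed: the quorum is met.
assert meets_threshold("root", {"k1", "k3", "k5"})
# One authorized key plus one unknown key: not enough.
assert not meets_threshold("root", {"k1", "k9"})
```

Note how signatures from unauthorized keys simply don't count toward the threshold, which is exactly the property that makes a single coerced signer insufficient.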
It's recommended to back it up in a bank vault, and when you do need to bring it online, you can use a YubiKey or a Nitrokey, some kind of hardware signing module: when you actually need to use the key, plug it into your computer, sign, and bring it back offline. So with the root, we understand how we manage keys in TUF. But now you may be wondering: okay, I have keys, where do I get packages from? That is the job of the targets role; that was the bullseye icon. The metadata in the targets file lists all packages for a TUF repository. In this targets metadata, we have a mapping from package name, a human-readable name, to hash, and that hash allows us to verify the package itself. And as we signed the root metadata with the root key, we sign the targets metadata file with the targets key. In addition, the targets metadata can also specify collaborators. So we could have two collaborators, for example Alice and Bob. We specify their public keys, so we have Alice's public key and Bob's public key, and we can scope the packages for each collaborator. For example, imagine that Alice is the release captain for a Java package and Bob is the release captain for OpenSSL. We can restrict the signing capability for Alice to only packages prefixed by java, and Bob to only packages prefixed by openssl. In doing so, we actually create two delegation roles (remember the heart from our scrolls?): one for Java and Alice, and one for OpenSSL and Bob. And basically, as we did in the targets file, we again list the package name to hash translation: Java for Alice, OpenSSL for Bob. And as before, we sign with our delegation keys, so the OpenSSL delegation is signed by Bob and the Java one is signed by Alice.
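The scoping just described can be sketched like this. The role names, key IDs, and path patterns below are hypothetical, but the idea is the same: a delegation only covers packages matching its declared prefixes, so Bob's key is simply never consulted for a Java package.

```python
# Hedged sketch of scoped delegations: the targets role authorizes Alice
# only for packages under "java/" and Bob only under "openssl/".
from fnmatch import fnmatch

delegations = [
    {"name": "java-role",    "keyids": ["alice-key"], "paths": ["java/*"]},
    {"name": "openssl-role", "keyids": ["bob-key"],   "paths": ["openssl/*"]},
]

def authorized_role(package: str):
    """Return the first delegation whose path pattern covers the package."""
    for d in delegations:
        if any(fnmatch(package, pattern) for pattern in d["paths"]):
            return d["name"]
    return None  # no delegation covers this package

assert authorized_role("java/jdk-8") == "java-role"
assert authorized_role("openssl/1.0.2") == "openssl-role"
```

If Bob tried to sign a Java package, the client would find that no delegation authorizes his key for that path, so the signature would be rejected regardless of its cryptographic validity.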
And so now, instead of listing all the packages in just the targets file itself, you can have a tree: you specify collaborators in your targets, maybe some packages, but by specifying collaborators you also get a tree of delegation roles. And this tree can be arbitrarily nested. For example, imagine that Charlie is Alice's co-worker and is responsible for packaging the JDK for Java. Alice can further subdelegate to Charlie, allowing him to sign only Java JDK packages, while keeping the Java JRE for herself. And of course, as with the root, we can also have thresholding here. Imagine that, like many projects, you have an LGTM threshold: you want two of three maintainers for a package to say, this release is good, thumbs up, ship it. You can imagine we have multiple collaborators for OpenSSL, and Bob, Daniela, and Edward each say LGTM, these releases are good, I'll sign off. So a TUF package manager grants fine-grained control over which collaborators can sign for which packages. It requires that a collaborator's key be explicitly authorized by a TUF repository administrator, via the root or targets metadata, and it provides extra context around the signing primitive. And because this requires no extra tooling or opt-in, it's all by default, this provides better authenticity with very nice ease of use. Moreover, because a collaborator is required to sign package hashes, along with the exact package names, with their non-root key, in their own metadata as opposed to a global file, this provides easy-to-use integrity guarantees for packages by default. And additionally, as we saw for both the root and delegations, you can threshold across any of the roles, so we have very nice thresholding guarantees. We've unrolled some of these scrolls so far, and I want to take a step back and see what we have in the picture. In this larger view, we have our root, and we have our targets, which points to an arbitrarily nested delegations tree.
So let's check out the snapshot next. I chose the camera for the snapshot because the way you should think of it is that it gives you a valid picture of a TUF repository. To do that, it points to hashes of every metadata file, and those hashes are over the signed metadata files. So for example, the hash for the root is the hash of the actual signed file, including all the signatures, including a quorum of threshold signatures if we had one. And so if there's any change to any one of these files, for example we added a new collaborator, a new delegation role, or a new key to the root file, you would also update the picture, our snapshot. You can imagine, as you're unrolling these scrolls, you want to draw links between these roles: the snapshot links to your root, your targets, and your delegations, in the same way that your targets can link to arbitrary delegations. This gives you integrity guarantees over almost every single metadata file, except for one: the timestamp. The way you should think of the timestamp is that if our snapshot gives us a picture of the repository, the timestamp tells us what's the latest picture that I trust. To do that, in the same way that the snapshot links to other metadata files by hash, the timestamp links to the snapshot by hash. In addition, I've kind of been putting this off until now, but you may have noticed that every scroll has had this expiry at the bottom. Each piece of metadata has its own expiry, and when we verify a TUF repository, we're not only verifying signatures, we're also verifying that the metadata itself has not expired. In the case of the timestamp, it's very natural to have a very small expiry window, because you always want to make sure you have a constant view of the latest picture.
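The hash links between these scrolls can be sketched in a few lines. This isn't the real TUF wire format, just an illustration of how the timestamp commits to the snapshot, and the snapshot commits to everything else, so tampering anywhere breaks the chain.

```python
# Minimal sketch of the hash chain between metadata files. The dicts stand
# in for signed metadata; a real repository hashes the signed file bytes.
import hashlib
import json

def digest(obj) -> str:
    """Hash a metadata object deterministically (stand-in for file hashing)."""
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

root_md    = {"role": "root", "version": 1}
targets_md = {"role": "targets", "version": 4}
snapshot   = {"hashes": {"root": digest(root_md), "targets": digest(targets_md)}}
timestamp  = {"snapshot_hash": digest(snapshot)}

# A client that trusts the timestamp can walk and verify the whole chain.
assert timestamp["snapshot_hash"] == digest(snapshot)
assert snapshot["hashes"]["targets"] == digest(targets_md)

# Any tampering with targets immediately breaks the chain.
tampered = dict(targets_md, version=99)
assert snapshot["hashes"]["targets"] != digest(tampered)
```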
And so if the timestamp were to expire, what would happen is that, since this piece of metadata would be expired and invalidated, we actually would not have a link to a snapshot; we wouldn't be able to get a snapshot at all. This is a pretty strong guarantee. It's a little different for delegations, though, because with delegations we also have that tree: the targets-to-delegations tree is similar to how the timestamp links to the snapshot, and we could have individual delegations, such as the OpenSSL delegation, that expire. In that case, if the OpenSSL delegation was expired but not the Java delegations, just the OpenSSL packages would be considered invalid. So if you were to list all the valid packages in our TUF package manager, you would see all the Java packages, but the OpenSSL ones would be considered invalid while that delegation was expired. This gives us really nice freshness guarantees, even when we're in an offline mode. And it's interesting as well because you have fine-grained freshness controls: we can have different expiries for different metadata files. So, thumbs up. If we take a step back again, we now have our full picture of a TUF repository: a timestamp pointing to a snapshot by hash, and the snapshot pointing by hash to our root, targets, and delegations. So now that we have a full picture of how this is all linked together, let's run through a single update flow of how we might download a TUF repository, check that it's valid, and figure out which package to download. Imagine we're looking for one of the newest versions of OpenSSL, and imagine that we've also seen this repository before, so we have a pinned root. Again, since it's not the first time we're downloading from this repository, we have the pinned root, just how Chrome pins certificates. The first thing we do is ask for the latest picture: we ask for the timestamp.
We download the timestamp, verify that it's not expired, and verify that its signature is correct against the timestamp key, which we get from our pinned root. From that timestamp, if it's valid, we have a hash for a snapshot. Now, what's interesting is that an attacker could have compromised our database or our mirror and tried to put in a snapshot with a higher version than the valid one. But that's not going to matter, because the timestamp references the snapshot by hash. So even if there's some modified metadata in the CDN or server that's holding our TUF repository, we're still going to look for that snapshot by hash and discard any content that doesn't match. So we get our snapshot. Again, check that it's not expired, check the signatures, and assume it's good. We now have hashes for our root and targets. In this case, since we have our root on disk, we check that the hash is the same as the one on disk and verify that root. We're going to assume the root didn't change, so it checks out. And we also check our targets file. Then, from our targets file, we want to look for OpenSSL, right? If we imagine the tree we had before, we had two delegations, and we could potentially have more. All we need to do now is look for that single delegation role, which we have a reference to by hash, and, as before, check the expiry and check the signatures. In that delegation, we have our name-to-hash mapping for the package, and now we can retrieve that package from some CDN, some mirror, some disk. And so that's our full flow. So that's one flow; how does this work out over a long lifetime? You're going to be running a package manager for months and years, so how does this play out in the long run, and how do we keep our guarantees? We're going to use this graph to run through a lifetime, with each of the expiries represented by these bars.
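The flow just described can be condensed into a sketch like the following. The in-memory repo, hash values, and helper names are all invented, and signature verification is elided, but the order of checks (timestamp first, then snapshot by hash, then targets and the relevant delegation) mirrors the walkthrough.

```python
from datetime import datetime, timezone

FUTURE = datetime(2099, 1, 1, tzinfo=timezone.utc)

# Stand-in for a repository: metadata files addressed by the hashes that
# other metadata files reference them by. Real metadata is signed JSON.
repo = {
    "timestamp": {"role": "timestamp", "expires": FUTURE, "snapshot_hash": "h-snap"},
    "h-snap": {"role": "snapshot", "expires": FUTURE,
               "hashes": {"targets": "h-targets", "openssl-role": "h-openssl"}},
    "h-targets": {"role": "targets", "expires": FUTURE},
    "h-openssl": {"role": "openssl-role", "expires": FUTURE,
                  "packages": {"openssl/1.0.2": "sha256:abc123"}},
}

def checked(md):
    """Stand-in for 'verify expiry and signatures' (signatures elided here)."""
    if md["expires"] <= datetime.now(timezone.utc):
        raise ValueError(f"{md['role']} expired")
    return md

def expected_hash(package: str) -> str:
    ts = checked(repo["timestamp"])                        # 1. latest picture
    snap = checked(repo[ts["snapshot_hash"]])              # 2. snapshot, located by hash
    checked(repo[snap["hashes"]["targets"]])               # 3. targets (root pinned earlier)
    deleg = checked(repo[snap["hashes"]["openssl-role"]])  # 4. the relevant delegation
    return deleg["packages"][package]                      # 5. hash to fetch and verify

assert expected_hash("openssl/1.0.2") == "sha256:abc123"
```

The package bytes themselves can then come from any untrusted CDN or mirror, because the client verifies them against the hash it pulled out of the signed delegation.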
As we went over earlier, the timestamp should have the shortest expiry, because you always want the freshest look at your repository. It's followed by the snapshot, since it references every single other piece of metadata. Targets and delegations, you might not be signing in packages very often, so those last longer. And the root should last for a very long time, because ideally you're not changing the keys for the other roles around too much, since you're keeping your keys safe. So assuming that, what could happen is that we don't actually change anything for a while. You can imagine that we don't sign in any new packages, maybe we don't have any new releases. So within 24 hours, maybe, our timestamp expires. And that's totally okay, because it's important to differentiate between software that's decommissioned and software that's just old but still supported. Right, so in the case that we still want software to be considered valid, that it's just an older version, we can just sign a new timestamp. That timestamp can still reference the same snapshot, which in turn references the same packages, right? Totally valid; you can just sign a new timestamp. You can imagine this goes on for a while; maybe we're not signing much, and our snapshot expires. Again, no problem: we can just sign a new snapshot to point to our software. We then also need a new timestamp to point to the latest picture, the latest snapshot. And then we can keep on going. Okay, we've gone over what happens if you're not doing any publishing, but what if we want to publish something? Say we're going to cut a new version of OpenSSL. We're going to sign in a new hash, a new name-to-hash mapping, in our targets or delegation file. Now that the picture of the repository has changed, we need a new snapshot, and, to make sure we get the latest picture, a new timestamp.
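One way to picture this cadence: shorter expiry windows higher up the chain, and a signing event at one level forcing fresh metadata at every level above it. The durations below are our own assumptions for illustration, not anything mandated by the spec.

```python
from datetime import timedelta

# Assumed expiry windows, shortest at the top of the chain (illustrative only).
expiry_window = {
    "timestamp": timedelta(days=1),
    "snapshot": timedelta(days=7),
    "targets": timedelta(days=90),
    "root": timedelta(days=365),
}

SIGNING_CHAIN = ["targets", "snapshot", "timestamp"]

def roles_to_resign(changed_role: str):
    """Signing at one level forces fresh metadata at every level above it.
    (A root change similarly cascades into a new snapshot and timestamp.)"""
    return SIGNING_CHAIN[SIGNING_CHAIN.index(changed_role):]

# Publishing a new package (a targets change) requires three new signatures:
assert roles_to_resign("targets") == ["targets", "snapshot", "timestamp"]
# Just refreshing freshness only needs a new timestamp:
assert roles_to_resign("timestamp") == ["timestamp"]
```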
You can imagine this goes on for a while: you have this chain of targets to snapshot to timestamp, and whichever level you're going to sign at, just targets, snapshot, or timestamp, you update accordingly. Okay, situation normal, we're doing pretty good. But what happens if things aren't going so well? What happens if you think something's been compromised, like a key's been compromised? I think one big takeaway from TUF, and it's very important, is that it takes into account that compromise is not if, it's when, right? While never being compromised is a beautiful dream, it's impossible to keep everything perfectly safe. You have to do the best that you can and react quickly, and luckily the TUF spec writers took this into account. So it's not an end-of-the-world event when a key is compromised. How do we recover? If you remember, back to our root: it serves as the anchor of trust for our repository. The root file specifies all the keys for the rest of the repository, and all clients anchor their trust on this root key. So as long as the root key is secure, we can very easily create new keys and sign them into the repo. For example, say our snapshot key was compromised. It's specified in our root file, so all we need to do is generate a new key and sign it into our root metadata file. We need the root key to come online to sign in this new snapshot key, and then we re-sign the snapshot metadata with our new key. Clients will immediately trust this, because they anchor their trust in the root key. If we think back to how this would look in our flow: as before, we'd first sign the new snapshot key into our root, then sign a new snapshot, so we have a new picture, and then sign a new timestamp to point at that new picture, the latest picture with our new snapshot key.
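The recovery step can be sketched as follows. The metadata structures are simplified stand-ins, but the point carries over: only the root has to sign to swap in a new snapshot key, and clients that pinned the root key trust the change immediately.

```python
# Sketch of recovering from a compromised snapshot key. As long as the root
# key is safe, we generate a new key and sign it into the root metadata.
root = {
    "version": 1,
    "roles": {"snapshot": {"keyids": ["snap-key-old"], "threshold": 1}},
}

def rotate_snapshot_key(root_md: dict, new_keyid: str) -> dict:
    """Replace the snapshot keyids and bump the root version.
    The root key must then sign this new root metadata."""
    return {
        "version": root_md["version"] + 1,
        "roles": {**root_md["roles"],
                  "snapshot": {"keyids": [new_keyid], "threshold": 1}},
    }
    # ...after which we re-sign the snapshot with the new key
    # and publish a fresh timestamp pointing at the new picture.

new_root = rotate_snapshot_key(root, "snap-key-new")
assert new_root["roles"]["snapshot"]["keyids"] == ["snap-key-new"]
assert "snap-key-old" not in new_root["roles"]["snapshot"]["keyids"]
```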
So TUF provides better authenticity, even if you're man-in-the-middled or a mirror is compromised entirely, and it also provides survivability of key compromise. And so here we have our TUF-enabled package manager. The bars aren't exactly to scale, it's just a general more-or-less, but as you can see, across the board we have very strong guarantees for what we've been going after here. And as you may have noticed, I've been putting off ease of use, but we'll get back to that in just a little bit. One additional property that we haven't gone over, which I wanted to mention we do get with TUF, is a notion of auditability. Everyone who installs from a TUF repo has the same copy of the repository, all the metadata files, and all of these clients can detect when keys change or when packages change. This gives users auditing capabilities; it's very reminiscent of Certificate Transparency, right? So this all seems really nice. Can we just put TUF into all the package managers right now, call it a day, and go home? Well, there are some questions, right? I was talking about a lot of roles and a lot of keys, and it seems intimidating to map this over onto a real existing package manager. An example you might be thinking of is: okay, who gets the root key? The root key is our most important anchor of trust. Does the package manager maintainer own the root key, or should the package maintainers themselves, the people who are releasing OpenSSL or Java, have a root key? We'll go over a construction where the package manager maintainers have the root key; both are possible. There are actually two PEPs proposed for PyPI that cover both designs, and we'll go over the root key being held by the package manager maintainers later in the talk. So, how can we start using TUF?
Just because it's not currently integrated into a Linux package manager doesn't mean you can't start using it, because at Docker we've been working on a project called Notary. Notary is a separate project from Docker itself, so it doesn't only work on Docker images; it can work over any kind of data, and it's our own opinionated implementation of The Update Framework. You can check it out; it's under active development, though we've cut a few releases, and we've also integrated parts of Notary into Docker itself. So I wanted to get back to this ease-of-use thing, because I've been putting it off for a while, and I think the best way to show you how easy TUF and Notary are to use is a demo. So let's start the video, and I'll explain as it walks through. There we go. I'm going to initialize a Notary repository; Notary is just a command-line interface, so I do notary init linuxcon-demo. I already have a root key on disk that I want to use, so it asks me to decrypt it, because all keys are encrypted on disk. It also generates a new targets key and a new snapshot key and asks me to encrypt them with a passphrase on disk. Then I do a list; list shows me all the packages I've signed in. Right now I haven't initialized anything; I need to actually publish for the Notary server to get the initial set of data. And then I list, and I have no targets present, right? Okay, we have nothing in the repository, so let's try adding something. I want to add this file, hellolinuxcon. All I need to do is notary add, the repository name, linuxcon-demo, the target name, which is like the package name, hellolinuxcon, and the file that I want to use. You can also specify a hash directly, but in this case Notary just takes the hash of the hellolinuxcon file. So it stages it, and then we can publish.
And when I publish, it'll ask for my targets key, because I'm signing a package into my targets file, and also my snapshot key, because I've changed the picture of the repository. So now when we do a list, we'll see the hellolinuxcon target package, as well as the digest — the checksum — and the size. And it's signed into the targets role directly, because I haven't set up delegations, to keep it simple. So, okay. You can imagine that maybe I'm in the business of distributing fun Docker scripts on my website. So maybe we can curl from my website directly, do a notary verify, which will check that it's actually signed into my repository, and then pipe it to sh. And what you'll see is that under the hood, notary verify is going through the full update flow that we saw earlier, with all the metadata files. And if that succeeds, it'll just pass through the content that the curl retrieved. So if I try doing a notary verify on some other script — or maybe it's the same script, but it's been modified, so I uploaded this evil.sh — let's see what happens. Notary verify will tell us: hey, I don't know what this package is, it doesn't match the checksum for the target that you told me to verify against, which was hellolinuxcon in my repository. And similarly, if I try verifying as a target that doesn't exist — so I try verifying against a nonexistent hellolinuxcon — notary verify will tell me that there's no trust data for this package. So even if I'm doing curl to sh, right, with Notary in the middle we're able to achieve the full TUF guarantees. So it's super easy to use. And moreover, I talked about managing subkeys and rotating keys in GPG being hard, but in Notary and in TUF it's super easy. All I'm going to do is notary key rotate, the repository name, and the key role. It's asking for a new key passphrase because it's going to encrypt it on disk. I'm changing the root file, and I'm changing the keys.
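The verify gate in the `curl | notary verify | sh` pipeline can be sketched like this. This is a simplified stand-in — the real client also walks the full root/timestamp/snapshot/targets update flow rather than trusting a local dict — but it shows both failure modes from the demo:

```python
import hashlib

def notary_verify(trusted_targets: dict, name: str, content: bytes) -> bytes:
    """Sketch of the verify gate: content only passes through to the
    shell if it matches the signed targets entry for the given name."""
    entry = trusted_targets.get(name)
    if entry is None:
        # the "no trust data for this package" case
        raise LookupError(f"no trust data for target {name!r}")
    if hashlib.sha256(content).hexdigest() != entry["sha256"]:
        # the evil.sh / tampered-script case
        raise ValueError(f"content does not match checksum for {name!r}")
    return content  # safe to pipe onward to sh

script = b"echo hello\n"
trusted = {"hellolinuxcon": {"sha256": hashlib.sha256(script).hexdigest()}}
notary_verify(trusted, "hellolinuxcon", script)          # passes through
# notary_verify(trusted, "hellolinuxcon", b"rm -rf /")   # would raise ValueError
# notary_verify(trusted, "nonexistent", script)          # would raise LookupError
```

The design point is that the checksum check happens between the download and the shell, so a compromised web server alone can't get unverified bytes executed.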
And because I changed the root file, I changed the picture of the repository, so I'm going to sign a new snapshot. So let me prove to you that I actually rotated the keys. I'm going to do an add and publish — I'm adding another target, othertarget, to my repository. When I do a notary publish, we're going to see that the key ID matches the rotated key. So I type in that passphrase, and I also type in the snapshot passphrase, so I've signed this in and have a new picture. Publish, and when I list, I see both targets: the original hellolinuxcon and this new other target. So as you can see, it's super easy to use, and I encourage you to play with it. And we have a very quick Docker Compose setup that gets you started with Notary in a couple of minutes — it spins up a server, you have the client, and you're up and running. And I think even today it was interesting: I saw that Cloudflare was working on using Notary in a secure secrets bootstrapper. So people are using it, which is exciting. I encourage you to check it out at github.com/docker/notary. So as I mentioned before, we've implemented TUF in Notary but also in Docker. If you export DOCKER_CONTENT_TRUST=1, Docker will enter content trust mode, where it's actually using Notary under the hood — the same implementation of TUF. And it's kind of interesting how we put together Docker Content Trust in the context of our key hierarchy. I wanted to explain it to you to motivate how we might do the same for a Linux package manager. So we're getting back to this hierarchy of roles with root as the anchor — and by the way, you can use YubiKeys in both Notary and content trust, so maybe with a YubiKey at the root. Docker Content Trust scopes trust per repository, with the root at the repository level and images as targets. It could have had organizations at the root, but that would have meant that any private images within an organization would have been leaked.
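The key rotation I just demonstrated boils down to a metadata change that every client can observe. Here is a toy sketch of that effect — the field names are hypothetical, and the HMAC here is just a stand-in for the public-key signatures Notary really uses:

```python
import hashlib, hmac, json, os

def sign(key: bytes, payload: bytes) -> str:
    # Toy HMAC stand-in for the real public-key signing notary does.
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def rotate_root_key(root_meta: dict, new_key_id: str, new_key: bytes):
    """Sketch of the effect of `notary key rotate <repo> root`: bump the
    metadata version, swap in the new key ID, and re-sign, so every
    client sees the rotation on its next update."""
    meta = dict(root_meta, version=root_meta["version"] + 1,
                root_key_id=new_key_id)
    payload = json.dumps(meta, sort_keys=True).encode()
    return meta, sign(new_key, payload)

old_meta = {"version": 1, "root_key_id": "key-old"}
new_meta, signature = rotate_root_key(old_meta, "key-new", os.urandom(32))
print(new_meta)  # {'version': 2, 'root_key_id': 'key-new'}
```

Because the rotated key ID lives in signed, versioned metadata rather than in an out-of-band keyring, clients get the new key and proof of the change in one update, which is what makes rotation so much less painful than the GPG subkey dance.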
And so you would have known, from the metadata, the existence of all the images, even if they were private. So we do it at a per-repository level, and this way Docker users have control of their own root keys. And so I picked Alpine, but all official images on Docker Hub — so Ubuntu, Nginx, MySQL — are also signed with content trust. So with the repository at the root, the targets role and optional delegations have a mapping from Docker tags to manifest hashes. And the manifest hashes are content-addressable, referring to a Docker manifest, and that manifest has hashes for each of the layers. So you have a Merkle tree, with Notary and content trust doing the name resolution to get the head of that Merkle tree: you get the manifest, and that manifest, in a content-addressable way, binds all the layers. So we've talked a lot about how content trust uses TUF. How might a package manager that exists today, or a new one, use TUF? Here are some design goals that I picked — they could change, but we'll use them for this thought exercise. You want a root of trust in the package manager maintainers, right? So the maintainers of apt, the maintainers of yum — and ideally with thresholding, because there is probably more than one maintainer. You want freshness guarantees. You want a signed index of all the packages in the package manager, and you want signed package targets — for Java, OpenSSL, individual packages — by the package maintainers. And again, this is name-to-hash resolution, and ideally with thresholding, so we can require, say, two LGTMs before something is signed into the repository. So if you think about these guarantees, they actually map really nicely onto TUF. We can have the maintainers with multiple YubiKeys — multiple root keys — at the root level. The timestamp key and role provide freshness. The snapshot is our entire picture of all the packages, so we have a signed index of our packages.
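The tag-to-manifest-to-layers chain I described for content trust can be sketched in a few lines. This is a simplification — real Docker manifests carry more fields — but it shows why signing only the head of the Merkle tree is enough:

```python
import hashlib, json

def digest(data: bytes) -> str:
    return "sha256:" + hashlib.sha256(data).hexdigest()

# Leaves of the tree: the image layers themselves.
layers = [b"layer one bytes", b"layer two bytes"]

# The manifest embeds each layer's digest, so the manifest's own digest
# pins every layer underneath it.
manifest = json.dumps({"layers": [digest(l) for l in layers]}).encode()

# Content trust only signs the name -> manifest-digest mapping; everything
# below the head is then verified by hash, Merkle-tree style.
signed_targets = {"alpine:latest": digest(manifest)}

# Tampering with any layer changes the manifest, and hence its digest,
# so the signed head no longer matches.
tampered = json.dumps(
    {"layers": [digest(b"evil layer"), digest(layers[1])]}).encode()
print(digest(tampered) == signed_targets["alpine:latest"])  # False
```

The same pattern is what the package manager mapping above relies on: the signed index is one name-to-hash resolution, and everything beneath it is verified content-addressably.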
We can have delegations for the package maintainer keys, similar to how we saw Alice and Bob for OpenSSL and Java. And we can have a threshold of those keys, mapping a name — OpenSSL, a package name — to a hash. So it maps over quite nicely. And again, there are PEPs open for both this design and a design with a slightly different configuration, with the package maintainers at the root. So I think we're pretty close. And for future work: at the beginning of the talk, I mentioned this idea of hermetic builds. If you aren't familiar with the word hermetic, it's often used in the context of a hermetic seal, meaning airtight. And if you think of TUF and all the guarantees you get about authenticity and integrity, those are really amazing guarantees for software installation and updates. And if you think of a Dockerfile, you have a FROM some image, then a bunch of RUNs or ADDs. If you're using official images with Docker Content Trust, the FROM is already secured — you have TUF over the FROM statement. But what if we could have TUF over the subsequent RUNs, the subsequent lines? So imagine that every line in the Dockerfile is secured by TUF or a similar framework. This is something that we on the team at Docker have been thinking about and are really excited about, and I think with TUF we're really close to getting this shipped and making it a reality. So I encourage you to take a look at TUF, Notary, and content trust. As I mentioned, the living spec for TUF is online in its GitHub repository. It's very easy to digest — it's like 10 to 15 pages, so a little long, but very digestible. And with that, thank you very much.