Cool, okay, so thanks for listening. All right, so this is our brief agenda for today. We're going to do a really quick overview of the critical keys in a Sigstore deployment, and then we're going to talk about how we selected an open-source solution to build a transparent, community-rooted security and update mechanism for those keys.

So we've heard a bit already that Sigstore is GA now, and some observers might worry that it's a high-value target for the increasing number of supply chain security attacks. Sigstore is underpinned by the magic of cryptography, which you can see on the diagram. Most of the first-party components have various cryptographic uses, and in any given Sigstore deployment several of those components rely heavily on public key cryptography. For example, in the Fulcio certificate authority, every certificate generated is derived from a root certificate. Once those certificates are generated, they're posted to a transparency log, and the transparency log issues a signed promise that the certificate really is going to be stored there, because this is an eventually consistent system and you can't immediately query it for the event you've just recorded. Similarly, Rekor issues signed promises that some metadata has been stored in its log. And then we have the artifact signing keys, which give an indication of the authenticity of a Sigstore project release.

So these are all super important keys that we want to make sure aren't put to malicious use. Anyone following tech headlines over recent years has probably seen that when crypto systems get broken, it's usually not the cryptography that breaks; it's the human processes, and the humans involved, that ultimately result in problems. I've included a few demonstrative headlines on the slides. These were just the ones that were easy to find thanks to CNCF TAG Security, who maintain a neat supply chain compromise catalog. I'm really not meaning to pick on anyone specifically; this genuinely happens to pretty much everyone who tries to manage cryptographic keys over a long period of time.

In addition to the human processes that often cause problems like this, Sigstore is fairly operationally unique, I think. Not only is Sigstore a project you can install on your own platform and run your own instance of, there's also the public good instance that we've heard a bit about: a free-to-use service hosted by a not-for-profit. There are multiple organizations, multiple stakeholders involved in running this infrastructure, and many of them might be considered competitors. And as alluded to, this is a really high-value service. So when we think about how we manage the keys for this service, we really need a solution that mitigates all of the implicit risks. No single operator should be privileged over the others, and none of them should be able to make changes without some kind of approval process. The service also needs to be resilient to compromise: the effect of any compromise should be minimized, and it must be possible to recover the service.
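To make that Fulcio chain-of-trust idea concrete, here's a minimal Go sketch that checks a leaf certificate chains back to a trusted root, using only the standard library. The file names are hypothetical, and real Sigstore verification does quite a bit more (checking validity at signing time, identities, and the transparency log); this only illustrates the root-to-leaf derivation.

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	// Load a trusted root certificate (hypothetical file name).
	rootPEM, err := os.ReadFile("fulcio_root.pem")
	if err != nil {
		log.Fatal(err)
	}
	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(rootPEM) {
		log.Fatal("failed to parse root certificate")
	}

	// Load a leaf certificate issued by the CA (hypothetical file name).
	leafPEM, err := os.ReadFile("leaf.pem")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(leafPEM)
	if block == nil {
		log.Fatal("failed to decode leaf PEM")
	}
	leaf, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// Check that the leaf chains back to the trusted root. ExtKeyUsageAny
	// is used because code-signing certificates would fail the default
	// (server-auth) key usage check.
	opts := x509.VerifyOptions{
		Roots:     roots,
		KeyUsages: []x509.ExtKeyUsage{x509.ExtKeyUsageAny},
	}
	if _, err := leaf.Verify(opts); err != nil {
		log.Fatalf("certificate does not chain to the trusted root: %v", err)
	}
	fmt.Println("certificate chains to the trusted root")
}
```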
So given this really high-level background, I'm going to hand over to Fredrik to make some of the requirements more concrete.

Yeah, thank you, Joshua. Just going to wait for some slide adjustments here. So as we've seen, there are some services we need to secure in this system, and there are a lot of keys going around. To be able to manage this in a secure way, we have identified a few requirements. As an example, we need to be able to rotate or revoke a single key at a given time. A key can be compromised and have to be revoked, or it could be a scheduled rotation, because nothing is forever, and especially not in trust. All the keys have to be single-purpose only, so that if a specific key is compromised, the effect of it is minimized. Over time, the number of keys will grow, so we need a solution capable of managing our growing set of keys, because keys that have been rotated may still be needed to verify a signature made in the past. Also, for a good developer experience, the solution needs a way for us to understand programmatically what the intended use case and status is for a specific key. All the keys have to be related to a trust root so the client can verify them, because the client doesn't want to just accept a key and trust it; it has to be verifiable in that sense, kind of similar to how a root certificate works for regular certificates. The trust root has to be very strong to protect against things such as a compromised key, so we need a quorum of trust root members approving all changes to it. All changes have to be verifiable and visible to the community, so they can see what's happening, and of course every change has to trace back to the initial trust root. And a client that's working with a specific trust root has to be able to bootstrap itself up to the latest version as well.

So, the solution for this is The Update Framework, or TUF. TUF, as we already heard from Ethan, is a framework for securing software update systems. It's an open-source project, it's been around for a long time, and it's widely tested and used in a lot of production deployments. Some of the important things you get from TUF: it describes, for instance, the steps to securely verify an update from a client's perspective, and the file formats used, the metadata files as they're called. Each metadata file corresponds to a specific role. We'll talk a little more about roles, but each metadata file is signed and has an expiration date. Two of the important roles are the root role, which defines what the keys and the roles in the system are, and the targets role, whose metadata file lists the targets that exist in the system. The targets are what the client is really interested in; for us, a target is usually a key. For public key infrastructure, TUF does not rely on anything you might already have on your host. Instead, when you ship a TUF client, you're expected to also ship a trust root, and TUF can start from an old trust root and update itself to the latest version. Some of the key features: as we mentioned, all metadata files can require a threshold on the number of signatures, which is a good way to protect against a compromised key or, let's say, an insider attack. And TUF doesn't care what a target is; targets are opaque to TUF, it just delivers them. But it does offer the possibility to attach custom metadata, so you can get some kind of semantics or understanding of the targets.
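As a rough illustration of those metadata files, here's a Go sketch of the common envelope: every role's file is a signed payload carrying its type, version, and expiration date. This is trimmed way down; a real root.json also lists the keys, the roles, and the per-role signature thresholds, and all values below are made up.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

// Signature is one signature over the "signed" payload.
type Signature struct {
	KeyID string `json:"keyid"`
	Sig   string `json:"sig"`
}

// Metadata is the outer envelope shared by all TUF metadata files:
// a payload plus one or more signatures over it.
type Metadata struct {
	Signatures []Signature     `json:"signatures"`
	Signed     json.RawMessage `json:"signed"`
}

// Common holds the fields every signed payload carries regardless of
// role: its type (root, targets, snapshot, timestamp), a version
// counter, and an expiration date.
type Common struct {
	Type        string    `json:"_type"`
	SpecVersion string    `json:"spec_version"`
	Version     int       `json:"version"`
	Expires     time.Time `json:"expires"`
}

func main() {
	raw := []byte(`{
	  "signatures": [{"keyid": "abc123", "sig": "deadbeef"}],
	  "signed": {"_type": "root", "spec_version": "1.0", "version": 5,
	             "expires": "2023-04-18T00:00:00Z"}
	}`)
	var md Metadata
	if err := json.Unmarshal(raw, &md); err != nil {
		panic(err)
	}
	var c Common
	if err := json.Unmarshal(md.Signed, &c); err != nil {
		panic(err)
	}
	fmt.Printf("role=%s version=%d expires=%s signatures=%d\n",
		c.Type, c.Version, c.Expires, len(md.Signatures))
}
```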
We've talked about two roles already; roles are how you distribute different responsibilities within TUF itself. Two other important roles are the snapshot role and the timestamp role. As Ethan said in the talk earlier, the snapshot role is used to gather the whole set of valid targets together, which acts as a good way to prevent mix-and-match attacks. The timestamp role is used to periodically re-sign the snapshot file, which is how we get protection against replay attacks. As an example of how this may look: you might have your different metadata files somewhere offline where you can sign them, and then they get published to a repository where the client can fetch them. The repository may be an HTTP server or cloud blob storage, and from there all the clients can pull down the latest version of all the targets and the metadata files describing them. To end this off, I'd just like to show that there are already some honorable deployments of TUF, and being in Detroit, I'd like to call out Uptane, a derived work from TUF used to secure updates for automobiles. And with that said, I'd like to hand it over to Asra.

All right, thanks, Fredrik. So I'm going to talk a little more about implementation details and about our Sigstore-specific instance of TUF: the red-highlighted left part, the repository setup and how we're creating this repository. The first thing is, given the requirements Fredrik mentioned and the ecosystem Joshua mentioned that we need to secure, we've used The Update Framework with the targets actually pointing to those ecosystem signing keys. All four targets Joshua mentioned are the targets we're signing off on, and TUF is the framework that provides us with the freshness aspect and the rest of the attack mitigations Fredrik spoke of. The second thing is that, to share community ownership of the public trust root and to allow the public to view the records, which is part of the transparency ideology of Sigstore, we host the trust root and all of the actions for signing and changes in a GitHub repository. That repository is sigstore/root-signing, and from there you can see all the records of history, new updates, and basically an audit trail of what's been going on. We encourage you to go check it out; if you're ever curious whether a root event is occurring, you'll find out there.

Diving a little deeper into the layout of the Sigstore community root, what we have here is a visualization of the four major roles, in red and green, and the four artifacts that are signed by the targets role. We have two different types of keys used in managing the Sigstore community root: offline keys and online keys. The offline keys are distributed amongst five community key holders. We have four of them in the room right now, Bob, Santiago, Joshua, and Dan; the fifth is Marina Moore from NYU, who's not here. Those five key holders each hold an offline HSM key that is used to sign the green metadata files, root and targets. The online keys are hosted on a cloud KMS and are configured with workload identity from the GitHub repository to actually do the signing.
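To sketch how that offline/online split shows up in metadata, here's roughly what the roles section of a root file expresses: the keyholders' offline keys back root and targets behind a threshold, while single online KMS keys back snapshot and timestamp. The key IDs and thresholds below are illustrative, not the production values.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// RoleInfo says which key IDs may sign a role's metadata and how many
// of them must sign. This mirrors the "roles" section of root.json,
// heavily trimmed; the key IDs are made up for illustration.
type RoleInfo struct {
	KeyIDs    []string `json:"keyids"`
	Threshold int      `json:"threshold"`
}

func main() {
	raw := []byte(`{
	  "root":      {"keyids": ["kh1","kh2","kh3","kh4","kh5"], "threshold": 3},
	  "targets":   {"keyids": ["kh1","kh2","kh3","kh4","kh5"], "threshold": 3},
	  "snapshot":  {"keyids": ["kms-snapshot"],  "threshold": 1},
	  "timestamp": {"keyids": ["kms-timestamp"], "threshold": 1}
	}`)
	var roles map[string]RoleInfo
	if err := json.Unmarshal(raw, &roles); err != nil {
		panic(err)
	}
	for name, r := range roles {
		fmt.Printf("%-9s signed by %d key(s), threshold %d\n",
			name, len(r.KeyIDs), r.Threshold)
	}
}
```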
So with that, I want to dive into how we got to this whole root system in the first place. I've described the stable picture of what the current root looks like, but to actually create that we needed an initial trust root, which I think Fredrik mentioned earlier. A root key ceremony, or a signing key party, is a ceremony that generates that initial bootstrappable trust, and one of the things it does is provide ownership proof for all those HSM keys. Bob's kind of shady, Joshua's kind of shady, Dan's kind of shady, all these people are kind of shady. So you need some way of actually trusting that they created those keys on the HSMs we delivered to them, and that the keys weren't created in some tamperable way where they could go and post them on the internet or something. That gives us key ownership proof and also key integrity proof. And likewise, because we're using TUF, we get this bootstrappable process where each update we make traces back to the previous one, which means we only needed one initial key signing ceremony. Key signing ceremonies aren't new, they've happened before, but a key difference is that, for example, for DNSSEC they hold a key signing ceremony every year because they don't use TUF. We don't need to do a key signing live stream ceremony every single year; we just needed that single one. Another shout-out is PyPI, who did a live stream for their TUF root under PEP 458. We were heavily inspired by their layout and runbook, which are all linked over there; without that, we honestly wouldn't have had a guide runbook for creating ours.

So going into this, this is the layout of how those operations are actually performed. There are roughly five rounds here, and each is done either in an offline manner, because we have to have an offline procedure for the keys managed on the YubiKeys, or through automated GitHub processes. A quick story: we were limited to one hour for the initial root live stream, and I was doing tons and tons of tests. Kria, Dan, Abu, and Jake were all involved in helping me run through the dry run for the live stream, and we realized we only got through the first five boxes, the add-root-key steps, during that entire hour, because we were doing everything as a serialized process. We realized there was no way that could work; we needed to be able to do all these offline operations in parallel, so the layout looks something like this.

So with that, let me quickly give an overview of how the actual root is managed in the whole process. We go from the key holders, who perform the ceremony operations, which get pushed up to that GitHub repository, where everything is reviewed, validated, and committed. That triggers GitHub Actions to do the deployment up to our remote. We currently host our remote on GCS, so all of you Sigstore clients are probably familiar with finding that endpoint. And from there, users can discover the next update.
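A hedged sketch of what that discovery looks like from the client side: TUF publishes each root version as N.root.json, and a client walks forward from the version it already trusts. The mirror URL below is a placeholder, and the signature verification at each step is elided.

```go
package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Hypothetical mirror; substitute your repository's real endpoint.
	base := "https://tuf.example.com"

	// TUF names each root version N.root.json. Starting from a version
	// you already trust, fetch N+1.root.json, check it is signed by a
	// threshold of keys from version N (and from N+1 itself), and
	// repeat until no newer version exists. This is why one initial
	// ceremony is enough: every later root chains back to it.
	for version := 1; ; version++ {
		url := fmt.Sprintf("%s/%d.root.json", base, version)
		resp, err := http.Get(url)
		if err != nil {
			fmt.Println("fetch failed:", err)
			return
		}
		resp.Body.Close()
		if resp.StatusCode == http.StatusNotFound {
			// No newer root: version-1 is the latest.
			fmt.Printf("latest root is version %d\n", version-1)
			return
		}
		// A real client verifies the threshold signatures here
		// before trusting this version and moving on.
		fmt.Println("found", url)
	}
}
```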
And then finally, one quick last thing I want to mention, a little bit more about the usability and customization we provide for clients. In the Sigstore TUF target layout, we include some pieces of custom metadata, because otherwise the targets you're shipping to clients are totally black boxes. They're just byte payloads delivered in a secure manner, and clients would have no idea whether a target was meant for Fulcio or for Rekor, or whether it was intended for signing as the current active shard, or is an old shard we're keeping around for verifying old targets signed a year ago. And then likewise, we can provide some other hints about what a target was actually used for. So the metadata looks something like this, under the custom Sigstore piece: right now we have a status of active or inactive, the URI that the target is associated with, and then a usage piece that determines which component it's actually attached to.
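As a sketch of how a client might consume that custom section, here's a small Go program parsing it. The field names mirror the description above, but treat them as illustrative and check the sigstore/root-signing repository for the authoritative format.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// SigstoreCustom mirrors the shape of the custom metadata described
// above; the exact field names and values here are illustrative.
type SigstoreCustom struct {
	Status string `json:"status"` // e.g. "Active" or "Expired"
	URI    string `json:"uri"`    // service the key is associated with
	Usage  string `json:"usage"`  // component: Fulcio, Rekor, ...
}

func main() {
	raw := []byte(`{
	  "custom": {
	    "sigstore": {
	      "status": "Active",
	      "uri": "https://fulcio.sigstore.dev",
	      "usage": "Fulcio"
	    }
	  }
	}`)
	var target struct {
		Custom struct {
			Sigstore SigstoreCustom `json:"sigstore"`
		} `json:"custom"`
	}
	if err := json.Unmarshal(raw, &target); err != nil {
		panic(err)
	}
	fmt.Printf("%+v\n", target.Custom.Sigstore)
	// A client can now decide: only use this key if the status is
	// Active and the usage matches the component it is verifying.
}
```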
So I'm going to hand it off to Joshua now to talk about the client aspect, the ingestion, and the pieces that are more relevant to Sigstore clients.

Okay, so now that we've talked about how we operate the repository, we want clients to be able to interact with the repository and retrieve the keys, and I'm just going to spend a few minutes talking about the high-level process for that. When we're implementing clients, the first thing we want is an implementation of the TUF specification's client workflow. This gives us all of the standard TUF promises: a client knows when keys have changed, a client knows that it has the whole set of keys, a consistent set of keys, and a client knows whether the keys are correct, whether there's been any data corruption or tampering with the data in transit or on disk. And we really want the client to be able to do this in a way that's transparent to users; we don't want users to have to worry about any kind of key management or triggering updates of the keys. We think this is reasonably simple for client implementations too, especially when there's an existing TUF implementation for the ecosystem you're using, and the specification lays out the workflow we expect clients to follow in quite a detailed fashion.

As well as the standard TUF client features, Sigstore has some additional requirements; Asra has detailed how we describe those on the repository side. A client really wants to know what a target is used for. Is the key still active? Is it valid for recently signed data, or for historical signed data only? And some notion of policy: whether this is the public good Sigstore instance, the staging instance, or a proprietary instance on private infrastructure. And in addition to the features of a TUF client, we really want a Sigstore client to be able to do some additional things, because the nature of where Sigstore clients are deployed is varied, and the frequency of use of a Sigstore client is different from some more traditional signing solutions. Once you've got clients at the edge, running in ephemeral environments, we want some notion of configurability: I don't necessarily want to update my key material every time I ingest a new container image, so I might want a certain notion of cacheability. And we really do need clients to ship with that initial trust root, so that they have a verified linkage from a trusted state. We really want to avoid trust on first use for Sigstore clients.

So concretely, this comes down to three things clients need to implement. They need to bundle a copy of that trusted root metadata. They need to perform the steps of the TUF metadata update workflow. And then finally, they need to be able to search for matching keys. Right now, that means looking at the JSON metadata you've retrieved and doing basically a wildcard match, a pattern match, on the names of the target files, the key files. But we're currently working to define a more robust target discovery mechanism, so that each client doesn't have to manually inspect the metadata; we want to define more of an API contract for this kind of work.
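For illustration, the wildcard-match approach amounts to something like the following sketch. The target names are made up, and the planned discovery API would replace this kind of manual inspection.

```go
package main

import (
	"fmt"
	"path"
)

func main() {
	// Target file names as they might appear in targets metadata;
	// illustrative names, not the actual repository contents.
	targets := []string{
		"fulcio.crt.pem",
		"rekor.pub",
		"ctfe.pub",
		"artifact-signing-v1.pub",
	}

	// Today, clients locate keys with a simple pattern match on the
	// target names, roughly like this.
	pattern := "*.pub"
	for _, name := range targets {
		if ok, _ := path.Match(pattern, name); ok {
			fmt.Println("candidate key:", name)
		}
	}
}
```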
And really, if you're interested in implementing a Sigstore client, there are a bunch of libraries that exist today that can make it much easier for you; you don't need to start from scratch in most instances. If you're using Go, obviously there's the cosign implementation, available in the Sigstore organization on GitHub, and that implements everything we've described here today. There's also a Rust client available on crates.io, sigstore-rs. The Python client is in progress; we're going to hear a little more about that later today. It doesn't currently implement the root update mechanism, but there's discussion about implementing that, and you can get the Python client today on PyPI and sign and verify Sigstore signatures. There's also a Java client available on Maven, and a JavaScript client available on npm, which doesn't yet do the root update either, but there's ongoing work for that. I should note that the Go client is the only one that doesn't include some kind of disclaimer about being experimental, work-in-progress code; those are the little warning symbols on the diagram on the slide here. But given the GA today, I think we can expect to see these rapidly approaching stabilization. And I'd strongly encourage you: this is a great place to bring your software engineering ecosystem knowledge and start contributing to the Sigstore space. The more of these clients we can get implementing these features and stabilized, the easier Sigstore adoption will be moving forward.

And then, as promised, this was a very fast tour through the root signing mechanism and how to interact with that repository, so we've collected a list of resources here that you can use to dig deeper into the things we've discussed today. I really want to highlight some talks happening today and during the rest of the conference. Jed and Zach are going to be giving a talk on the life of a Sigstore signature; I think that's going to be super useful context on top of this. At the main KubeCon event, there's going to be a maintainer track talk about TUF, so if you really want to hear some of the deeper details on TUF, you can see Justin and Marina talk about that. And Santiago and Marina are going to be hosting a ContribFest on Sigstore, TUF, and in-toto during KubeCon, and probably most of us are going to be there, so if you want to come and hack on some of this stuff, come along and we can get more technical and dig into the details. With that, thank you for your attention. We have a few minutes, so if you've got questions, we'd be more than happy to answer them. Mic for the first question. I can stage a question; I don't have one, because I'd probably eat it, but... No, we didn't want to make this the cheapest wrench attack against the Sigstore infrastructure.

Okay, I'm going to posit a question, because I think it's important. Probably some of you are asking: well, this is an online mechanism. What do I do for offline verification or air-gapped environments? So I'm going to quickly touch on that: a lot of us are thinking about it already. While TUF is an online protocol, and you're required to do online updates in order to achieve the full set of security properties, it's possible to configure your timestamp and other update intervals to be a little longer to get cacheability, or to run workloads that mirror that TUF root internally. So there are a lot of solutions we're thinking about here, but I want to emphasize that you do lose some security properties when you go offline. That's expected; it's just something to note.

Yeah, I was getting to that. So the question was effectively: when you're deploying your own Sigstore instance, when do you need to go through all of this process to generate your own TUF root? One of the things we've been talking about this week, in fact, is abstracting the TUF root into a generic kind of root interface with defined contract points, so that you don't always need to use TUF. If you're a relatively small company with, like, tens of engineers, having this quorum mechanism maybe doesn't make sense, and since the sense of ownership is much clearer in a hierarchical organization, the transparent trust is probably not a relevant notion. So we want to make it easier to bring your own root key mechanism and bootstrap your Sigstore instance from roots controlled by your existing mechanisms, or a simpler mechanism. So when does it make sense? I really think the transparency and the quorum are two of the key things in the root signing system that we've designed, and being able to chain the updates is a really powerful mechanism. So I don't think I can give a concrete answer on when it makes sense, just that we're thinking about this and we'd like to support different scenarios. I think there's also some work starting to make it easier to generate your own TUF root without necessarily having to follow the same process that we use on the public instance. Do you want to add anything?

Yeah, on that note, we are trying to make it easier for you to create a lightweight TUF setup that may only be used by a single maintainer. There's some work coming out of VMware on repository workflows, and the TUF maintainers are looking at that as well. There's also some work Ville had done in Sigstore scaffolding to create TUF roots from secrets that you can load in. So we're working on that, but again, it might not always make sense for you to have a public trust root that's securely updated like this.

Anything else? All right, well, feel free to approach any of us about any of this, and attend the ContribFest as well. Honestly, maybe someone can pick up the Sigstore Python TUF integration and complete it there; it should be pretty easy. Okay, awesome. Thank you.