Good day, folks. I still have the background to hold on — I figured I'd do a little winter thing. So everybody's here. Marina, do you know if Trishank will be able to make it? No, but I can ask him. Okay. Let's get our Slack notes up for people to sign in. So we're here, and we'll just give folks a couple of minutes to join. Before I share my screen, let me change the screen resolution to be a little more friendly. Hey, and thanks for coming. Sounds like a Monday morning — a hangover of Halloween candy. Oh, it definitely is. Free coffee. All right, let's get screen share going — the right one. Can folks see my screen? Yep. Okay, great. There's the chat; let me put the chat over here and get the terminal window. Where did everybody go? Oh, there it is. Any word, Marina, or should I just start? He said he'd be a half hour late, so I think we can go ahead. Okay. In that case — what was the topic you wanted to talk about? I wanted to go over some of the core requirements we'd agreed on before for a signing key, and I think we'd want Trishank for that one. Yeah, let's go. All right — well, I guess he'll watch the recording. He did give us a heads up that he might not make it, so we said the recording was the backup. All right, so let's kick off. This is our latest update on where we are with the NV2 prototype. Remember, the model we're following is building a sample — a prototype, kind of like the Sagrada Família model that Gaudí did, where it's very hard to write blueprints for something when you're trying to get all the trades engaged. So we've got some ideas, and we're putting a three-dimensional shape around them, in the shape of code. And we wanted to base it on a scenario — in our case, signing content, as opposed to... I don't know what Gaudí was trying to do with the church. Anyway, never mind.
I'm just going to dig myself a hole there. Anyway, what we have is a scenario where the signature can move between locations — that's been our main scenario. So we have the Wabbit Networks company that's building this net-monitor software. They put their signature on their content — in this case an image, an SBOM, and source; we're going to focus on the image in this one. They push it to Docker Hub and say, okay, I'm going to put a signature on it. And then Docker Hub will say, hey, this piece of software is actually part of their certified content, so they'll differentiate it from all the other public content and put a Docker Hub signature on it that says this is Docker-certified — for whatever that means; that's a trust factor people put into it, because they don't know who Wabbit Networks is. The Acme Rockets company will then say, hey, this is the key for our network-monitoring software scenario. They trust content from Docker Hub; they don't know who Wabbit Networks is. So they'll pull the content in, and because it has a Docker Hub key on it, it will come in. But following the gated workflows we've been recommending — we just put up a blog post trying to give some context to this — they will not put anything in production unless it's signed by their own key. So as part of their ingestion workflow, they take that software, run some tests on it, and then put the third key on it — in this case the Acme Rockets key. And then as it moves into production in steps four and five, the policy management says: I don't care about anything else but the Acme Rockets key, and that's what I'll put in production. That's the scenario we've been working on.
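The gated promotion just described — keys accumulate as the image moves through the pipeline, and production trusts only the Acme Rockets key — can be sketched as a toy policy check. Everything here is an illustrative label, not the prototype's actual code or real key material:

```python
# Toy sketch of the gated promotion workflow described in the scenario.
# Key names are illustrative labels, not real keys.

PRODUCTION_POLICY = {"acme-rockets"}  # production trusts only the Acme Rockets key


def can_deploy(signatures):
    """An artifact may ship to production only if it carries at least
    one signature from the production policy set."""
    return bool(set(signatures) & PRODUCTION_POLICY)


# Keys accumulate as the net-monitor image moves through the pipeline:
stages = [
    {"wabbit-networks"},                                # published by Wabbit Networks
    {"wabbit-networks", "docker-hub"},                  # certified by Docker Hub
    {"wabbit-networks", "docker-hub", "acme-rockets"},  # re-signed after internal tests
]

assert [can_deploy(s) for s in stages] == [False, False, True]
```

Only the third stage — after ingestion tests pass and the Acme Rockets key is applied — clears the production gate.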
So the idea is that the Wabbit Networks key can move between those registries, because, following best practices, we don't want to pull from public content or depend on it for critical workloads. We want to import it, validate it, and use it from a registry within our control. So to enable this workflow, we have an instance of distribution that can maintain signatures. The details of this are not super important to this working group — just looking, I'm not seeing any registry-focused people here. But this is definitely some scaffolding — scaffolding is a weird word — definitely some fake bits, to some extent, in the implementation we have here. We know it will churn, but it enables what we want for now, for the experience we're shooting for. And the actual PR is here on this prototype, if you wanted to look. It's actually the first pull of distribution into the Notary Project, but it's also the first pull of the fork that we maintain. So the experience is here — we didn't move this over yet; we're refactoring a bunch of the repos. But basically what we want to do is go through this experience. The way we've got it mocked together: we wanted to show the Docker client, but we're obviously not going to build the Docker client ourselves. So we created a Docker plugin. If we looked at the source, you'd see that we basically copy it to the Docker directory, and then we can alias it: docker nv2. That's the way plugins work — there's a sub-command group, and we just lifted this command group up to the root. So what you're seeing is a totally mocked-up experience, but it does function. We were going to do it in bash, and the guys actually put it into some Go libraries. You'll see this experience — but rather than walk through the slides, let's actually take a look at it.
So I have the client here — let me just make sure I've got my notes handy for what I'm trying to do. Okay. So, nothing up our sleeves, right? docker images — there are no images locally. Now, I'm going to shorthand some of this and set up nv2. We've taken that instance of distribution and hosted it on Azure websites; I called it the nv2 registry. I actually have the acmerockets and wabbitnetworks domains, I just didn't get a chance to get those configured — so just work with me here on the pieces that are complete. I'm going to use the net-monitor v2 image for now, and — yeah, I've got it here. So now if I do a docker build -t of that image using this local directory, we're going to take this Dockerfile — it's not really important; it's hello-world with an environment variable. So the docker build does nothing special at this point. Now, if I want to sign that content, the way I would do this is: first, I would enable Docker notary. Let me see — I think I've got the setup here. Hold on a second... here it is. Sorry, too many screens. So here's a config file. It would be empty by default, or not exist, but you're seeing the scaffolding. So now when I say docker notary enabled and set that flag, you'll see that we set this config file — that's kind of the persistent state of what I want to do here. Let me just go back over here and clear this. So I have an image now, and I want to sign it. So: docker notary sign, and I want to pass it the key, nv2.azurewebsites.net.key, and the cert, nv2.azurewebsites.net.crt.
And I want to sign that image. Right. So we've used our Docker extension and lifted it, so the docker command is really pointing to the docker nv2 alias, and I'm going to say: sign it with this key. Whoops — that's not going to work. Let's do that again. Or actually, let's not try to show off and just copy-paste. So, with that key and cert, sign that image. And now we've taken it and created a JWT that actually represents that signature. It's sitting locally. One of the mainline scenarios we wanted to support was offline signatures, so right now I've got the signature local — it's not in the registry. If I log into my registry... Sorry — what is the cert in that command? Is that where the key is stored, or — right now, the public cert, right? Well — this is the private cert that I'm... sorry, what are we referring to here? This is the private key that I'm signing it with. Okay. So why is it both? I'm just curious why it's both the key and the certificate — are they doing something different? A certificate doesn't always have the private key included in it; you can also have a certificate that just has the public key, so you can distribute that cert. I'm guessing that's what's happening here: the key is the private key, and the cert is the public key plus the X.509 information that needs to be shared. That was my assumption too. I'm glad the security experts have an answer, because I did this because that's what Shiwei told me to do. We can validate that — I only had to pass in one, and I don't know the intricacies. So whatever those key-management folks agree is the right way to do it, I'm sure it will work. Can I punt that football and move on to the next part?
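Since the demo says signing produces a JWT, here is a minimal sketch of the three-part header.payload.signature shape. One big hedge: the prototype signs with an X.509 private key; this sketch substitutes stdlib HMAC (HS256) purely to show the structure, and the payload fields are made up:

```python
# Minimal sketch of a JWT's shape: base64url(header).base64url(payload).base64url(sig).
# The real nv2 prototype signs with an X.509 private key; HMAC here is only a
# stdlib stand-in to keep the example self-contained. Payload fields are invented.
import base64
import hashlib
import hmac
import json


def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_jwt(payload: dict, secret: bytes) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(payload).encode())
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return signing_input + "." + b64url(sig)


token = make_jwt({"digest": "sha256:abc", "repo": "wabbitnetworks/net-monitor"}, b"demo-secret")
header_b64, payload_b64, sig_b64 = token.split(".")
```

Verification is the reverse walk: recompute the MAC (or, with X.509, verify the signature against the public key in the cert) over the first two segments and compare it to the third.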
This is really the key-management part of the stuff, so I'm just taking it as magic that happened over here. I'm using that Wabbit Networks key for now. All right. So now that I've logged into the registry and I have a local key, I'm going to push — and if you remember, it just pushes to that registry. Now we're pushing that image to the registry, and because we've got notary enabled, it's taking any signatures associated with it and sending them to the registry. The registry links those together using the distribution changes that we put up. All right. Now, because I still have this locally, I want to represent an ephemeral client. So I'll do docker images, docker rmi -f, and clear out the store. Now what I want to do is pull. So: docker pull. In this case, while I have the certs on my machine, technically I don't actually have them configured. This is part of it — we don't have magic just working because files happen to exist in some directory; it has to be configured. And if we come back over here, we'll see that we have no certificates in our path. So if I come over here and just grab the cert and put it in... So where does that come from? Magic — it comes from wherever Ian tells me it comes from. Okay. That is the subject of key management; that's how we're slicing this thing and saying they're going to figure that out. Somehow it appeared on the machine, and now I have a way to use it. So this is like the initial trusted certificate that is loaded with the client? Yeah. And this is also coming across very much as a pinning of trust to the leaf — the leaf cert, in this case. So there is no concept of hierarchy, walking a chain, yet? Correct. I think we've talked about that in the key-management discussions.
So on the signature-validation side, that's an extension we'll need to put in. The other change I can think of, looking at the JSON, is we'll also want to say which registry, repository, or target the certificate is being used to validate for. So there are some changes I think we'll need to do, but at least the validation workflow seems to be working in a path where we could extend to those use cases. Yeah, super cool. Okay, so now if I say docker pull for that image — because I've configured the cert, at least one of them. And again, apologies: I really wanted to get the Acme Rockets and Wabbit Networks and hub registries set up, but we didn't get to it, so I have to hand-wave a little. But you'll notice — imagine this is the Acme Rockets key. I'm going to say, okay, I now have an Acme Rockets key. It says: downloaded a newer image, and so on. So it's elaborating some of the information it found about where the image originated from. And because I do have the key configured appropriately, I was able to pull it, and I now have the nv2 net-monitor v2 tag locally. So the idea is the validation passed the second time because I have the key configured and it matches. The more elaborate part of the demo — which I will get to at some point — is that you would actually have three keys, because, if you remember, going back to our scenario, we have the Wabbit Networks key, the hub key, and eventually the Acme Rockets key. The way the current NV2 prototype works is: if at least one of the keys that I've got configured here is on that artifact, it'll pass through. All of the "I want at least two, or three, or N, and they have to be from a certain location" — all that stuff will be layering in. But the very simplistic policy we've got set up right now is: at least one of the keys I have locally has to be on that artifact, and if so, away it goes.
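The policy just described — validation passes if at least one locally configured key appears on the artifact, with "at least N" rules to layer in later — could be sketched like this (a hypothetical helper, not the prototype's actual code):

```python
# Sketch of the prototype's current validation policy, plus the future
# N-of-M generalization mentioned in the demo. Hypothetical helper only.

def signatures_valid(configured_keys, artifact_keys, required=1):
    """Return True if at least `required` locally configured keys
    appear among the keys that signed the artifact."""
    matches = set(configured_keys) & set(artifact_keys)
    return len(matches) >= required


artifact = {"wabbit-networks", "docker-hub", "acme-rockets"}

assert signatures_valid({"acme-rockets"}, artifact)                  # today: any one match
assert not signatures_valid({"unknown-key"}, artifact)               # no configured key matches
assert signatures_valid({"acme-rockets", "docker-hub"}, artifact, required=2)  # future N-of-M
```

The default `required=1` is the simplistic rule in the prototype today; raising it, or constraining where keys come from, is the layering the demo defers.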
And, of course, they have to be valid. That's the extent of the demo. Questions? Yeah, that's cool — good progress. Sorry, what was that? I think it's cool. Good progress. There's promise there. Well, there are visual references of a promise, I guess. And clearly that wasn't a video, because I don't think I could have orchestrated it that well. Hey, Steve — yeah, really cool demo. I had a question: when you say you're pushing it, does the signature become like an OCI image by itself? Yeah, it becomes an OCI artifact. So if we go here — great question — this is committed in the prototype-1 repo. Remember, just from a process standpoint, we've got the various repos under the Notary Project, and we'll leave main to be the things that we're planning to ship eventually. The iterative prototypes, instead of being lost in a bunch of PRs, we're committing to prototype-1. So this is the current working implementation. And what we do is push a manifest, and we push a signature as a CNCF Notary v2 signature — which we should probably put a 2.0.1 or something on, I don't know — and that's the media type. We can push a second one. So, notice — what do I have here? I've got Acme Rockets... and why do I have Acme Rockets here twice? Is that a mistake? Oh no — yeah, there is a mistake here: that signature says Acme Rockets where it's actually supposed to be Wabbit Networks. I don't remember what I did there, but it looks like I have a bit of a mistake. But here's the thing that's important: the Wabbit Networks and the Acme Rockets — those are the two keys. They're pushed as OCI artifacts and they're stored. That in itself isn't really anything new; of course, we can do that today without any changes to distribution. The thing that we're doing is this linkage.
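To make the "signature as an OCI artifact" idea concrete, what gets pushed looks conceptually something like the manifest fragment below. The exact media-type string, field names, and digest here are illustrative placeholders reconstructed from the discussion, not an authoritative schema or an actual dump from the prototype:

```json
{
  "schemaVersion": 2,
  "config": {
    "mediaType": "application/vnd.cncf.notary.v2.signature",
    "digest": "sha256:…",
    "size": 1906
  },
  "layers": []
}
```

The signature payload itself (the JWT) is stored as a blob and referenced from a manifest like this, which is what allows the registry to treat signatures as ordinary addressable content.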
We're linking the two together so that when I do the query — and this just goes into multi-arch and so forth. Where is this linking API? Linking... so what we're doing here is linking them in distribution. Where is the reference to that? Okay, somewhere in here there's a reference to it. But basically, there is a separate linkage that it stores, because registries know how to do a one-way link — they know the things that they reference — but registries don't have the concept of the things that reference an artifact. Remember, we can't change the artifact we're pushing to add a reference, because that would change its digest. So that's the main change that's here. Okay — and when you say they're linked, are they wrapped in the OCI index, or how is that established? No. In this particular prototype... for some reason this looks like the options branch, which is why I'm a little confused — give me one second. Docs, distribution, persistence — that's the spec in nv2. I'm a little confused because I thought I pushed to this branch not the options, but what we were actually doing. So there's basically another storage option in the persistence that keeps the digests linked together. And that's the part that we need to iterate on — I wasn't really focused too much on it; we basically duct-taped this together to make it work, and we know we want to make some additional changes to how distribution persists things. So I think this solution would basically cover pretty much all OCI artifacts, right? Not just... Correct. And sorry — it's the second good question where I'm being a little vague. On the first one, I really don't know the answer, because I defer to the key-magic folks. On this one, I'm being a little vague because we're in the middle of something. What we want to do is get to something where we have a third media type.
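The reverse-linkage change can be sketched as a small index alongside the content store: registries already know forward links (a manifest lists what it references), and what the prototype adds is an answer to "what references this digest?" The structure below is a hypothetical model of that idea, not the actual distribution persistence code:

```python
# Sketch of the reverse-link index the prototype adds to distribution.
# Forward links (manifest -> referenced blobs) already exist; the new part
# is the referrers map: subject digest -> digests of artifacts pointing at it.
# Hypothetical model only, not the real persistence layer.
import hashlib
import json
from collections import defaultdict

referrers = defaultdict(set)  # subject digest -> set of referring digests


def digest_of(manifest: dict) -> str:
    raw = json.dumps(manifest, sort_keys=True).encode()
    return "sha256:" + hashlib.sha256(raw).hexdigest()


def push(manifest: dict) -> str:
    d = digest_of(manifest)
    # A signature can't be embedded in the image manifest itself -- that
    # would change the image's digest -- so the link is stored separately.
    subject = manifest.get("subject")
    if subject:
        referrers[subject].add(d)
    return d


image_digest = push({"mediaType": "image", "layers": ["sha256:aaa"]})
sig_digest = push({"mediaType": "signature", "subject": image_digest})

assert referrers[image_digest] == {sig_digest}
```

This is why the linkage lives in registry persistence rather than in the artifact: the signed image stays byte-for-byte (and digest-for-digest) identical.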
Today we have image and index. We've been using image with a different media type, and there's a little bit of confusion around that. And index is a collection of things, but it doesn't really have a traversal model. So there's another deck — we only have five minutes left, and I do want to leave time for Niaz. There's a great conversation on this; I can't remember where I last presented it, but I'll put the deck in the notes here. Basically, what we want to be able to do is have reference objects, where you can put things in a registry that might be physical artifacts persisted as blobs — like a signature, or an SBOM, or a Helm chart, or a CNAB, or other things that have to reference each other. An SBOM is a good example because it's a fairly large document. There are other things, like metadata, that I also want to be able to link but probably don't persist as a blob — probably as some name-value pairs. So we're working on a model where we'll have a way to persist that. Let me take a quick look if I have it here to show as a picture, because I do have a PowerPoint of this. Yeah, let me just show this. Whoops, I've got to show it on that screen — I want to stay in my 30 minutes, so give me a second here. So basically, we want to be able to have a third type. The individual type is image; the collection type is index; and we're thinking we might wind up with another type of manifest that registries would support. There'll be reference types, and then you can store all kinds of types in those. Because if you think about it: if we push everything as an OCI artifact, then when I look at a tag listing, or a listing of the content in a registry, I would see the artifact and some signatures as individual artifacts — and that's not really all that interesting. What we really want to see is an artifact that has signatures, might have an SBOM, and might have metadata on it.
So we want — and this isn't just visual candy in a portal — we want the data behind it to represent that. That's why we're in this model where we're actually thinking there's metadata and artifacts. And I now have three different types: image manifests, index manifests, and — I don't know why I'm calling it an artifact manifest, but basically all the things that are collection types would actually wind up using this other manifest. Because now I can represent reference collections, as opposed to just arbitrary collections. Cool, thank you. Yep — I'm not giving that the full treatment; I'm just trying to give a quick overview, and at the next OCI meeting maybe we'll talk about it more. We're not going to have it this week, but that's a great conversation, and I'm hoping we'll have more done by then. So that's that. Any other questions before I hand off? Great questions. Okay, I did put the links to all of that here in the markdown, so all that stuff is here. We'll get the one that Sajay's got for the prototype moved over as well. This is just a placeholder — we really want to get a collection of Go libraries and then an actual implementation that shows it. So this is the implementation that shows the experience; we just didn't finish getting it cleaned up and factored out. So with that, I will hand over to part two. Let me take over — thank you, Steve. Let me share my screen; give me one second. So I went back and added a section to the key-management scenarios doc to capture the requirements that we had discussed earlier. I wanted to spend some time going through the requirements one by one and make sure they're all documented. And if I've missed any from earlier discussions, feel free to chime in — we can go ahead and add those.
I think this should help guide conversations around whether any key-management decisions that we're making are the right decisions, as well as, for any optional features, whether they still meet the core requirements we'd identified. The link is shared in the agenda, and there's a limited number of them, so I think we can go through them one by one. So the first requirement we've captured: signing an artifact should not require the publisher to perform additional actions with a registry, repository, or registry operator beyond those required to push an unsigned artifact. This essentially says that any signing that you do, you're able to do locally — there's really no additional interaction needed to generate a signature. Are there any questions or comments on this requirement? I love it — for me, this is a big one that I wanted. For sure. One question I have, though: I feel like you at least need the public key of the person signing it to be available somewhere. So some minimal interaction initially, right? Just for setup. So the key should be available — key distribution happens through a separate mechanism, right? And what we wanted to call out is that as you're doing the signing, there isn't any additional process as part of that. If you're doing your own key management, then you obviously don't need to interact with the registry operator. If there is an optional mechanism for registry operators to distribute keys, really the only interaction is to get the key. But to generate the signature itself, and for that signature to be valid, you shouldn't have to interact. Yeah, I think that makes sense. I just want to make sure it's clear that you also have to do some kind of key management somewhere — it's not just magically signed. Correct. Yep.
So this is really looking at the signature-generation process. Okay. The second one is a similar kind of requirement, on the deployment side: validating a signature should not require the deployer to perform additional actions beyond those required to pull an unsigned artifact. Really, the pull should provide the signature information that enables the validation of the artifact itself. Any questions or comments on this one? Is this talking about the user-facing interface, or saying nothing else can be pulled? So, the signature-validation process itself should be contained within the same mechanism as the pull, and you wouldn't need to go back to the registry or the repository to get any additional information beyond the signature itself. This sounds kind of like an experience thing. Like — if you noticed in the prototype we did, there wasn't a pull of the artifact and a pull of the signature and then a validate command; that was all built into one. Is that what this is encompassing? That's part of it. The other part is that you're not going back to the registry or repository to get additional information in terms of where did this come from, or which registry or repository — you shouldn't need any additional metadata for the signature validation. If you're doing key management, you have to do additional steps to figure out which keys are trusted, right? So that goes into validating the validity of the keys and the signatures. And this is saying that that validity should come from your key management, and not necessarily from the registry or repository itself. So this calls out that a registry operator can be someone who provides key-management capabilities to you, but those aren't baked into how the registry itself operates.
So we don't necessarily need to go in and change the implementation of a registry or repository to make this work. Okay — but the registry would be allowed to provide it if they wanted to? Not as part of — I'm sitting on two different angles of it; I'm curious what we're both protecting from and poking at. So a registry will obviously have to make some changes to support this. We're hoping it'll be more of this elevated artifact experience that signatures just happen to be part of. The client is doing some back-and-forth interactions with the registry to put these together, because they're stored separately. From a customer experience, there shouldn't be a "hey, I've got to stitch these four commands together" — it should be super simple. I think the customer, client side of that makes a lot of sense to me: they shouldn't have to do a lot of extra work or even really know what's going on. But saying that nothing else happens beyond pulling an unsigned artifact isn't quite true — because you are pulling at least a signature, right? Which is an additional action you weren't doing before. That's what I'm trying to figure out: are you outlining the expectations for the experience, or for the technology? Because clearly the client — I say "clearly", but the client is going back and forth and negotiating a bunch. The other thing I'm thinking about is the requests the client makes of the server: give me the list of signatures that you have; okay, let me figure out if I can filter that list to the ones I care about; then I'll pull the signature; does that signature match something that I have? Yes — okay, now I'll pull the actual artifact. So all of that's happening behind the scenes, but the user doesn't see any of it. So I think what this is getting more at is:
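The client-server handshake just walked through — list signatures, filter to trusted keys, fetch and check a matching signature, only then pull the artifact — can be sketched as one function, which is the single `pull` the user sees. All class and method names here are hypothetical, purely to illustrate the flow:

```python
# Sketch of the client-side negotiation hidden behind one `pull`.
# Registry interface and method names are hypothetical illustrations.

def pull_with_validation(registry, ref, trusted_keys):
    sig_refs = registry.list_signatures(ref)              # 1. what signatures exist?
    candidates = [s for s in sig_refs
                  if s["key"] in trusted_keys]            # 2. filter to keys we trust
    for sig in candidates:
        if registry.verify(ref, sig):                     # 3. fetch and check a signature
            return registry.fetch(ref)                    # 4. only then pull the artifact
    raise PermissionError(f"no trusted signature for {ref}")


class FakeRegistry:
    """In-memory stand-in so the sketch is runnable without a real registry."""
    def __init__(self, sigs):
        self.sigs = sigs

    def list_signatures(self, ref):
        return self.sigs.get(ref, [])

    def verify(self, ref, sig):
        return True  # stand-in for real JWT/X.509 verification

    def fetch(self, ref):
        return f"blob:{ref}"


reg = FakeRegistry({"net-monitor:v2": [{"key": "acme-rockets"}]})
assert pull_with_validation(reg, "net-monitor:v2", {"acme-rockets"}) == "blob:net-monitor:v2"
```

From the user's side this is one command; the four numbered steps are the back-and-forth the client negotiates behind the scenes.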
What is the registry itself supposed to do in the scenario? If you're doing a pull for an artifact and there are signatures present for it, you can go ahead and send the signatures first and see, are these signatures that I trust — but that all happens within that one pull command. We're not necessarily going out and saying, do these additional steps. And that's what I'm getting at: however the signature-validation process happens, if there is a signature for an artifact it gets presented, and if there isn't, it doesn't. It really comes down to: the same way you would pull an unsigned artifact is the same way you're pulling a signed artifact from a registry. The signature validation is something that, based on how your trust configuration is set up, lets you decide either to trust the artifact or reject it. But from a registry's perspective, or from a repository's perspective, you're really just saying: here's the artifact, and here are the signatures, if there are signatures present. That's the bare minimum I think we're saying registries should be doing. If we go more into whether registries should provide additional data or not, then we get back into registries having different implementations for optional features. Should we try to have a consistent experience? Because otherwise we start introducing friction for the next requirement, which is: moving an artifact from one repository to another should not invalidate the signature on the artifact. So to make sure that you can move artifacts from one repository to another, you do need that process to be scoped into exactly what a registry or repository is providing, so the next repository you move it to can give you that exact same information.
I think I actually agree with most of what you're saying; it's just that the second requirement is worded in such a way that it sounds like the registry can't do anything in addition to what it's already doing — which, obviously, we're adding features, so that's not quite accurate. So maybe it's just a wording question, not so much a technical question. Yeah, there's probably some tweaking. The requirement probably needs a little bit of clarity, because I think there are three perspectives going on. There's the key-management perspective and the details there, so I can make sure it works across all registries. I've got this perspective where I actually know a little bit more about the registry implementation: I know there are a bunch of handshakes back and forth, but that's not really what Niaz is trying to get at. The other problem you're getting at, Marina, is: how do we clarify this in the actual spec? There will be much more clarity once we figure out how to meet this requirement in an experience that seems easy for the customer, for the user. Then I'll obviously have to provide a lot more clarity on this. But I think the requirement is that there aren't eighteen ways of doing something, such that no two registries could figure out how to negotiate. Yeah — I think that makes sense. And I think adding "beyond retrieving the signature" helps clarify that, so I'm going to tweak that second one to add that; thanks for the feedback. Were there any comments on moving an artifact from one repository to another? I think that one was fairly straightforward as well. That's the third requirement. Forever we have this terminology thing: a registry has lots of repositories, and then there are lots of registries. So today — in Notary v1, Docker Content Trust — you can't move signatures within a registry
Between two repositories, or across two registries — it's very, very strict. We want to be able to support movement within a registry, between repositories, and across registries. So there's, again, just some minor tweaking there. I went down to the most granular level there with repository — if you're able to move between repositories, you should also be able to move between registries — but I can add that clarification as well. Actually, it may be the opposite: you can move it between registries, and you can move it within the same registry between two repos. Within one registry you can argue the keys don't have to move — they're still stored in the same place, just accessible from two places on the same endpoint. If it's physically two different registries — one's in AWS, one's in Azure — there's obviously a big difference there. So there's some wording somewhere, because I always trip over how to say this cleanly, and I always wind up with "within a registry or across registries". Yep. But I think we're agreed that you should be able to move containers between repositories or between registries, and the signature should still be valid. Okay. The next one we have is: rotation of the root key should not require the use of the existing root key. We looked at this from an attack vector: if your current root key is compromised, and that root key is used to authenticate a new root key, then essentially an attacker has the ability to rotate your keys underneath you. So this one we called out as needing a separate mechanism, besides using the root key, for rotating the key itself. But a root key can — go ahead. Oh, sorry — I was just going to say that I think we maybe want to separate the rotation and revocation use cases.
Because in order to not use the previous root key, you basically have to establish your initial trust all over again, which is totally valid if you need to revoke the root key. But if you just want to update it, I don't see why you can't use the existing key to sign the new one. Because the deployer doesn't really know whether it's an update or a revocation. If my key is compromised, and the attacker then goes ahead and creates a new key, from the deployer's perspective they might look at it and say, oh, this is a key update, let's just switch to the new key, right? So any root key rotation should really happen through the same channel by which you verified the original validity of the root key. Yeah, but this goes back to Steve's original question: if it's hard enough to establish the initial trust, which he keeps bringing up, how would this make usability any easier if you have to do it, say, once a year? I think that's where we want to get into it: root keys really should not be rotated once a year, right? Typically when we see root keys being used, we're talking about keys with 20 or 30 years of validity, and you're doing a rotation in half that time, usually 10 to 15 years. But I feel like general security practice is to not let any key last longer than, I think, five years at the maximum. That's not necessarily true. If we look at how public CAs operate, or how roots are defined, you're typically not using your roots in daily use; you're using your roots to sign intermediates, which you rotate on a more regular basis. But yeah, I think the mechanism that you have for establishing the validity of the root should be exercised every time that root gets rotated. Yeah, I'm with you on this too. Our roots are long-lived, really long-lived keys.
The intermediates are typically the only things that root is signing. So the frequency at which you're actually using that root key is such that it doesn't compromise its strength. But what about improvements in cryptography, or in computers and so on? Do those affect the long-term effectiveness? Crypto agility, crypto resilience, is part of this. Something I know is secure today, I can't guarantee is still secure 10 or 20 years from now. Oh, absolutely. That's a point to reconsider. It is. And I think that's where recommendations around what type of keys you should use for your root key, and how to sign and generate your root certificates, are all important considerations. Those are cases where, whenever some sort of migration notice like that comes in, we're forced to go out and say, hey, we need to rotate not just our intermediates but our roots and everything. So there will potentially be events like that which require us to go through the rotation. But the general understanding is that outside of those events, you're really not forced to change your root keys. And so we want to have some good hygiene around how the root keys are managed and what the recommendations are. But allowing rotation of a root key through an automated mechanism, such that an attacker could rotate it, takes away the protections you essentially have; it removes what amounts to a second factor of authentication on the root, if you will. Well, except if there is a compromise, which is, arguably, a rare event too; although I don't think it's on the frequency of every 10 or 20 years, it will happen.
And so you have to think about the convenience of moving users who haven't yet been switched by the attacker to the malicious new root key. Because if you think about it, what can the attacker do with the new root key? It's only useful as long as they keep controlling your server. Once they don't, they really can't do anything with it. So there's no reason why you can't use the old, yes, technically compromised key to move users to a new good one. I think part of it is there's this tree we're talking about: there's the root, and then there are delegations from it, if I'm using the right terminology. I don't know how many levels of delegation you do, but the idea is there are very few places where the root key is actually used to sign, so that there's some point of stability. That's my interpretation of this conversation. And the conversation we're really having is further down: the delegations that are granted can be rotated more frequently. Am I understanding it right? Well, from my understanding, the farther down the delegation chain you go, those keys are actually pretty easy to replace, because whoever delegated to them in the first place can redelegate to a new key. The root key is the tricky one here, for sure, because nothing delegates to root except your initial point of trust. So if you need to revoke or change that key, you either have to go from the existing root key, which is the only thing trusted there, or from some third-party trusted mechanism, which you have to make sure is more secure than the root key, because if that's compromised, you basically compromise all your users for that entire chain. I agree with that. And I think that's really the friction we're introducing here: at the origin, you have a mechanism that says I'm trusting this, right?
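The asymmetry just described, where any delegated key can be replaced by having its parent redelegate, while the root has no parent and so needs an out-of-band channel, can be sketched in a few lines. This is a hypothetical model, not TUF or Notary code; the role names and key IDs are invented.

```python
# Hypothetical trust tree: each role except root has a parent that
# signed its delegation. Rotating a delegated key just means the
# parent signs a new delegation; root has no parent to do that.
trust = {
    "root":    {"parent": None,      "keyid": "root-v1"},
    "targets": {"parent": "root",    "keyid": "targets-v1"},
    "repo-a":  {"parent": "targets", "keyid": "repo-a-v1"},
}

def rotate(trust: dict, role: str, new_keyid: str) -> None:
    if trust[role]["parent"] is None:
        # Nothing delegates to root: clients must re-fetch the new root
        # key over the original out-of-band trust channel instead.
        raise ValueError("root rotation requires out-of-band re-establishment")
    # The parent re-signs the delegation with the new key (modeled here
    # as simply recording the new keyid).
    trust[role]["keyid"] = new_keyid

rotate(trust, "repo-a", "repo-a-v2")   # easy: parent redelegates
assert trust["repo-a"]["keyid"] == "repo-a-v2"

try:
    rotate(trust, "root", "root-v2")   # not allowed in-band in this model
except ValueError:
    pass
```

The sketch encodes the requirement under discussion: in-band rotation works for everything below root, and a root change is forced through whatever channel established trust in the first place.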
You don't necessarily trust a key coming from somewhere with a certificate saying it's from Wabbit Networks just because it says it is; you have some mechanism telling you that Wabbit Networks actually uses this key, right? And that friction is something you need to introduce at every point where you rotate the root, because otherwise how do you know the new root actually comes from them? If we have a sufficiently easy-to-use and very secure mechanism for distributing the key the first time, it's not unreasonable to reuse that to revoke the root key. I just think that, going back to it, making sure it's secure the first time is really the hardest part there, but yeah. Yeah, particularly because in TUF, so I don't understand the proposal here yet, so forgive me if I'm misunderstanding, but it looks like there's only one root key here, right? Probably because of the limitations of TLS, since we seem to be reusing the basics of TLS. With TUF, you can use ten root keys, for example, and say you need at least six of them to come together. So the odds of someone breaking into Wabbit Networks and breaking six different root keys to distribute a new malicious one are so low that it becomes theoretical. So we don't need to worry about... sorry, go ahead. Yeah, I think that argument came up before. And I think what we get into is that if we look at practice, right, would someone actually have HSMs in twenty different locations to secure those twenty keys? It's likely that those twenty keys are kept in a similar location, right? So the number n of keys doesn't really increase the security as much as the number of separate ways to get at each key; separating the channels through which those keys can be reached is what we've seen provide much better security in practice, right?
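The k-of-n root-key quorum just raised (for example, six of ten keys must co-sign a new root) reduces to counting distinct trusted signers against a threshold. A minimal sketch, with invented key IDs, assuming each listed signer has already been cryptographically verified:

```python
def new_root_accepted(verified_signer_ids, trusted_root_keyids, threshold):
    # Accept a candidate root only if at least `threshold` distinct,
    # currently trusted root keys signed it (TUF-style k-of-n quorum).
    valid = set(verified_signer_ids) & set(trusted_root_keyids)
    return len(valid) >= threshold

trusted = {f"root-key-{i}" for i in range(10)}  # hypothetical 10-key root

# An attacker who compromised only two of the ten keys cannot push a
# malicious root when the threshold is six.
assert not new_root_accepted(["root-key-0", "root-key-1"], trusted, 6)

# Six genuine co-signers meet the quorum.
assert new_root_accepted([f"root-key-{i}" for i in range(6)], trusted, 6)

# Duplicate or unknown signers do not inflate the count.
assert not new_root_accepted(["root-key-0"] * 6 + ["evil-key"], trusted, 6)
```

The counterargument in the discussion also shows up in this model: if all ten keys live on one compromised machine, the attacker trivially produces six valid signers, so the quorum only helps when the keys sit behind genuinely separate channels.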
So if I have twenty keys set up on my developer desktop, and my developer desktop is compromised, I lose all twenty keys. That, to us, hasn't really translated into being more secure. So I have another question on this. You're saying root keys usually last 20 or 30 years. From a TLS perspective, that's indeed mostly the case. But when you look at things like Let's Encrypt, where they prefer shorter lifetimes for keys, or at the principles HashiCorp Vault uses with short-lived secrets, you can also reduce the attack surface, because the shorter the lifetime of the thing, the shorter the period in which it can be abused. So why are we choosing root keys, which by definition have a very long lifetime? Well, the Let's Encrypt root is actually pretty long-lived as well, right? I'd need to look it up, but the Let's Encrypt root key is very long-lived. The reason roots are long-lived is that it's very hard to distribute root keys. You need a very secure communication channel by which you can say, hey, here's something that's going out to browsers; this is baked into your trust stores. It is the thing that establishes the root from which you're delegating all these levels of trust, so you do need a highly secure channel, which I'm not disagreeing with. The thing I'm pointing to is more that the root key should never be used on a daily basis, right? The keys you're using on a daily basis are the short-lived ones that get rotated. There's nothing in this architecture that prevents us from rotating the delegate keys or the signing keys on a one-month or one-year basis, depending on what we see as the right validity and what the best practices are. We're really talking about just the root key here. Yeah, so the reason I'm bringing this up is because we currently have this POC to manage some keys.
And in that POC we have that root key, and we need the root key to create target keys. So if you have some kind of web portal where an IT support person can manage target key creation, then that root key has to be there. In some way, it always needs to be available somewhere to be able to create new keys for new repositories. Not necessarily. You can have an intermediate off of your root key, which your IT staff uses, rotate that intermediate on a yearly basis, and then have your target and delegate keys come off that intermediate. So you can chain this and add additional rotation in between. And that's really the recommendation we want to make here: your root key should not be coming out and being used that frequently. So you're basically saying we're introducing another layer. From a TUF perspective, where you have root, targets, and delegations, you're introducing another layer to work around this kind of issue. Is that correct? Yeah. And we're saying you can have as many layers as you need from a security perspective, right? It doesn't necessarily have to be just one more layer; depending on your practices, you can introduce as many as you need. One of the scenarios we talked about was a large company with multiple divisions: you could decide to have a single root, or one root per division, and then different intermediates for each one. So it really depends on how you want to architect it. Yeah, that actually makes sense. Now I'm just wondering whether we need to describe this kind of thing as a feature somewhere, because in TUF that's currently not in there, if I'm correct, and I also don't think it's described anywhere else. This is what I wanted to get into. I feel like we're getting into too much... well, okay, let me put it this way. In TUF we do have the option to use the root key to rotate itself.
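The layered chain just proposed, an offline root signing a yearly intermediate that in turn signs per-repository target keys, can be sketched as a small validity check. This is an illustrative model only, not TUF metadata; the role names and dates are invented.

```python
from datetime import date

# Hypothetical three-level chain: the root stays offline and long-lived,
# the intermediate rotates yearly, and per-repo target keys hang off the
# intermediate, so day-to-day key creation never touches the root.
chain = [
    {"role": "root",           "signed_by": None,           "expires": date(2045, 1, 1)},
    {"role": "intermediate",   "signed_by": "root",         "expires": date(2022, 1, 1)},
    {"role": "targets/repo-a", "signed_by": "intermediate", "expires": date(2021, 6, 1)},
]

def chain_valid(chain: list, today: date) -> bool:
    roles = {link["role"] for link in chain}
    for link in chain:
        if link["expires"] <= today:          # an expired link breaks the chain
            return False
        parent = link["signed_by"]
        if parent is not None and parent not in roles:
            return False                      # dangling delegation
    return True

assert chain_valid(chain, date(2021, 1, 1))
# When the intermediate expires, you rotate *it* (root re-signs a new
# intermediate); the root key itself never needs to change.
assert not chain_valid(chain, date(2023, 1, 1))
```

The design choice being argued for is visible in the expiry dates: short lifetimes live in the middle and bottom of the chain, so routine rotation work happens there, and the root only comes out for exceptional events.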
So my question is, are we forbidding the use of technologies like TUF to do this if we wanted to? Is this a must-not or a should-not? I think this is a must-not, because the feedback we've gotten on Notary's original implementation goes back to this: the root key being able to rotate itself gives an attacker a mechanism to essentially take your signing ability away from you, right? And that's one of the gaps we're trying to address here. So I don't necessarily think this negates the use of TUF. I think there are modifications that can be made in TUF to address it, and that's where I wanted to go next with the conversation. I think one of the big differences between Notary v1 and TUF is the use of delegations. In Notary v1 the delegations are very limited, whereas in TUF, at least for targets, you can have pretty much unlimited delegations to allow that distribution of trust, which I think is really important. In TUF as it's written, I think there are no layers between the root key and targets, but I think that's a valid addition we could definitely look into and talk about in that model. Yeah. And also, right now it's not clear, but we do have mechanisms to pin the keys you expect to see in a repository, so that attackers can't come in and pull the switch in the attack you're talking about. So there are ways to fix the problem without... I don't know, this seems too restrictive, because technically it would prevent the use of TUF as it is right now, and it forces TUF to change for this. Well, I think if you have this root key with which you create an intermediate, and that intermediate is treated as the root key, then that intermediate is the root key you're using from a TUF perspective. So it's more or less recursive, optionally recursive, I would say.
So you either have one root key which is used directly to create targets, or you create an intermediate which is treated the same way as the root key, and that one is used to create the targets. So I don't think it rules out one approach or the other; it just adds another layer, so that, in the example of IT support desks managing a bunch of target keys, they don't need the root key to create them. And Trishank, you could think of it as: the initial root key is the mechanism for distributing the root keys that are then used in TUF, or whatever. I see. So, okay, you turn off the root key rotation feature; you basically never use it; you just use a long-lived root key. In TUF you could technically do this right now: nothing prevents you, you just never use root to rotate itself. And you're saying we're forbidden from using it, and if we want to rotate, we use the intermediate root key, which we could. I find it unnecessarily restrictive, to be honest, but you could do it. I do want to remind you, we did say that Notary v2 is not restricted to being backwards compatible with v1. We know we don't have meaningful adoption of v1 in registries; we have people looking for checkbox compliance, but it doesn't actually work for the requirements people have. So we have to meet these requirements to be successful, and that's the bar we're using.
Oh yeah, I totally understand, and I think that's great. I'm not saying that this, from a usability point of view, is a bad idea; I totally understand where you're coming from. What I'm worried about is that in the beginning of the project, or somewhere along the way, there was an understanding that we could plug in TUF if we wanted, right? Me, Marina, and whoever else is interested. And right now it's not clear that that's still the case; that's what I'm worried about. So what we said was: we haven't found a way to make a dependency on TUF work to meet these requirements yet. So we split the prototype into phase one and phase two. In phase one we're focusing on the signature model we've been discussing and the key management requirements that Niaz and others are trying to categorize and capture. We're not trying to exclude TUF, but we're also not trying to make it a bar to include it; without changes it's not possible. It's likely, I would say likely, that TUF will have to make some changes to support cross-registry, multi-registry artifact movement in a secure way, and ephemeral clients. The way TUF works today is great for public registries with clients that keep some kind of state; what we're trying to support here is serverless clients in this serverless world we're all moving toward, and the idea that there's no one master registry where content lives. We've discussed some of these issues before. With ephemeral clients, how are you going to establish the root of trust anyway? Even if they're long-lived, you still need to get it out of band somehow. It's the same problem here; it's the same problem with TUF, the same problem with any solution, to be honest. I agree, but if you look at TUF today, last time I read it, it basically punts on that scenario; it says it's out of scope. We have to make that in scope. Everybody punts it, to be honest. We can't, we won't say you
have to use Azure Key Vault or AWS KMS, but there will be a pattern we have to specify; that is the requirement for how this will work. I totally get that. I feel like it's definitely not "oh, you just throw in TUF and it solves all of the problems." You use it, you add on a solution for initial trust, you add on solutions for these other things, and then it starts to fit together. Yeah, let's go through the remaining requirements and see if there are other things we agree or disagree on. Based on what we've captured so far, these seem like things we'd want to move forward with. Since we're out of time, should we continue? Sorry, go ahead. I have to drop, because I just realized we did run out of time. So why don't we make next Monday's meeting 100% this topic and continue going through it; I think we made good progress. I'll see if I can find the wording for you so we can get past the first two quickly and just pick up from there. In fact, we might want to change this from bullets to numbers, so we can say, okay, we're starting at number three today, whatever the number was, and pick up there. Do we want to have a one-off meeting to go through this as well? I don't have an agenda for next Monday, so I thought we could wait till then rather than move the Friday meetings to this. And of course in the US we have the daylight saving change, and I think Europe has a different changeover date, so I think keeping standard time, whatever the hell that is, rather than a moving time, will make sense. Let's do next Monday; that's close. Alright, thanks guys. Thank you. Thank you very much.