Good morning. Hey folks, I haven't seen this many people on so quickly. I feel like I'm late — is my clock off by like 10 minutes? Somebody has a case of the Mondays. I don't know, we're just getting desperate for any kind of human contact. Give it a minute here for folks from the rest of the world to actually join, although this is the largest we've had in a while. Let me paste the link here for the HackMD doc for notes. I will ask again, just as a reminder, because nobody's told me they've figured out the magic key to capture the notes from Zoom, and we still suffer from people not wanting to take notes — I'll keep mentioning it until we get better. So instead of just typing notes in the chat, please type them in the HackMD doc; there is a section for notes. Maybe, just for the sake of using time while we're waiting for everybody else, a less-than-important topic: our HackMD doc has gotten pretty long, and HackMD is not really optimized for so many headings and everything that it needs to index. So, something Phil did on the OCI calls: he basically archived it at a certain arbitrary point, and we save those someplace, and then this thing can get smaller again — nice fast typing and feedback. I was going to just put the archives in a repo under the Notary project, called notes or something. If anybody's got an opinion on what that repo should be called, let me know. I'll just put them there, and it'll probably just be one file so that it's easier to search. I have to actually look at what Phil did on the OCI call, but if anybody's got any opinions or alternatives, I'm open. I definitely need to get this content trimmed down to the active content, or, you know, something that doesn't slow things down as you're trying to type. Something else we discussed on the OCI call was putting more actionable conversations up front.
Some calls are a little more focused on discussion and design, so they're more open-ended, but we do have some actionable items here. So we have a lot of different options that we want to try to get closure on up front, and then discussions on the latter half of the call. And just as a quick one — I know we have things scattered in a bunch of different places because of the various dependencies — I'm going to get an update out this week that will be, I wouldn't call it a blog post, but some kind of summary article: here's the latest status and where we're at. I continue to see more and more people who want to know what's going on, and it's kind of hard to tell if you're not in the active conversation. I'll make it something that we can provide feedback on so everybody can put their input in. It'll obviously cover the things where we are that we feel comfortable with, and then here are the places where we're having active conversations on things that we don't have closure on. I'll try to get it out mid-week so there's time for feedback, and by the end of the week we should have it in a publishable form. So that's the status update. Okay, so the two main items are in the backlog of PRs. I'm having trouble reading these notes. All right, let me share my screen and let's see if I can get through a couple of PRs. Are there any other topics people want to talk about? I see the agenda — Mark was here, great. So this one's probably the easiest fix; it's Hank's. Let me just make a real note of it. It kind of feels silly for us to force an update when DCO fails when we're in the signing group. So I'll ping Hank again, or I'll just do another PR that gets his name in there, so we can close that one out.
And get his name updated. This is where we can take notes during the call — the speaker doesn't take notes while trying to speak and type at the same time. Okay, so the next one. There's some discussion going on here. Brandon, do you want to cover this — do you want to talk us through it for a moment? Sure. It's been a little bit since I last looked at this one, so let's see what the discussion was. We're talking about 41. Yeah, if I remember, this is the one related to tags. Sorry, this is where I was — I was pushing this one over to 43; I think that's where we really want to talk about tags. Okay. Did this supersede the previous one? Is that what you're saying? So the previous one, 41, is just saying there are different ways that we can call an image, but it doesn't say that you actually have to sign the tag itself. Yeah. Did I comment on this one? Pretty sure you did, on 41. Yeah, I was reading this one and I never finished hitting enter. So the NV2 prototype 1 covers the tag, the digest, and the originating domain with the X.509 cert. We're going to talk about that in some thread I was replying to this morning; I was covering this as well. Basically what we've tried to do is cover the origin. First of all, just start with the digest, because the digest is unique — it doesn't matter what repo or what registry it's in. So that's like priority zero, right: we must be able to sign a digest. We then talk about the tag, and what exactly the tag is — is it just the tag? Is it the path? Is it the registry, and so forth? I put an item here about naming, and we started incorporating some amount of this into the various documents.
So I tried to put together a set of definitions that accounts for registries that use uniqueness by domain or uniqueness by namespace, just to have a common vocabulary. But regardless, what we wanted to capture is this: we know that artifacts move within and across registries, which means the domain changes and the namespace might change. One of the things that we hadn't finalized is, if we know we need to decouple the namespace and the registry, then is there something interesting around the last part of the repo plus a tag — is that named element something that somebody might want to sign? And the mysql:version case was the canonical example — every word there is loaded with something; not "canonical". But that would be a thing that you would name. The way the NV2 prototype 1 currently works, it makes it optional how strictly you want to validate things. What I mean by that is you obviously always have the digest, and that's what the signature is always associated with — that's the priority zero requirement. Then you have this extra piece of information in there that you can choose to use: the signature does have the registry's fully qualified name in it. So you can actually go to the example in our requirements — if you're in the Acme Rockets environment, and Acme Rockets put an additional signature on it, now you could say "I will not even pull this image if it's not coming from the fully qualified name that's in the signature." That's a really strict requirement that somebody could enforce. Now, if somebody isn't additionally signing content, they might just use the Wabbit Networks or even the Docker signature. And this is an interesting conversation — we should look at adding this to the NV2 client as a policy: is the last name something that can be verified, you know, as a standard? So we would put it in the NV2 prototype.
It would be part of the NV2 spec. So you could say the repo:tag is an additional named element that must match the currently named reference to what it was originally signed as, because that information is in the signature as we've defined it. And I'll pull that out — I think I've seen it in the signature, but I don't know if we've got it in the requirements doc itself, saying that's what we need. And that's where some of this issue is coming from. Oh, I see — the requirement doesn't call that out as a requirement. That's fair. Okay, I see what you're saying. Yeah. Go ahead, Maria. I think another aspect of this that I find really interesting is the key management side, as far as delegations. Because it's kind of straightforward to delegate within a namespace — just specific tags or whatever — to say, okay, this subset of the namespace should be signed by this key. But if everything is referenced by the digest, I feel like we could have some discussion about how you know what keys are trusted to sign that digest, because it's just kind of a random string. So how do you know which delegation path to follow when you just pull from that digest string? How do you figure out who is trusted to sign it if you don't have any of that other information? I think this should be totally solvable, but it's an interesting thing that we should discuss and make sure we have a common understanding of how that would happen. And in addition to that, I think a lot of the comments on issue 43 were talking about the question you're raising there, Steve: do we want to sign just the tag? Do we want to sign just the last part of the repo plus the tag?
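As a rough illustration of the "digest first, name match as optional policy" idea discussed above — none of these names come from the NV2 spec; `Reference`, `parse`, and `verify` are hypothetical stand-ins — a client check might look like:

```python
# Hedged sketch: the digest must always match (priority zero); matching the
# fully qualified name is an optional, stricter policy the client opts into.
from dataclasses import dataclass


@dataclass(frozen=True)
class Reference:
    domain: str  # registry domain, e.g. "registry.acme-rockets.io"
    path: str    # namespace + repo, e.g. "products/net-monitor"
    tag: str     # friendly name, e.g. "v1"


def parse(ref: str) -> Reference:
    """Split 'domain/path:tag' into parts (simplified; no digest references)."""
    domain, _, rest = ref.partition("/")
    path, _, tag = rest.rpartition(":")
    return Reference(domain, path, tag)


def verify(signed_digest: str, signed_ref: str,
           pulled_digest: str, pulled_ref: str,
           strict_name: bool = False) -> bool:
    # Priority zero: the digest inside the signature must match what we pulled.
    if signed_digest != pulled_digest:
        return False
    if not strict_name:
        return True
    # Optional strict policy: the fully qualified name must also match.
    return parse(signed_ref) == parse(pulled_ref)
```

With `strict_name=False`, a re-tagged copy in another registry still verifies by digest; with `strict_name=True`, the Acme Rockets "never pull unless the signed name matches" policy kicks in.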
And it felt like — and I kind of threw this in there in my last comment on the thread, from our conversation last week — that it probably makes sense to sign the entire thing, the repository and the whole path of it, and then allow the clients to say: yes, it's signed with some upstream repo and path and everything else, but I'm pulling from this other place; trust the different name for this thing. And to be able to say that this thing I'm pulling from is a mirror for this other upstream thing, or that you have some aliases going on, and make that a client-side thing where they could say, trust this other path for the same object. Yeah, I totally agree — and I was struggling to type our notes while listening. I think that's something we should add to the policy as an option, because I can think of three different cases. One, you're pulling from the registry that signed it. For the Acme Rockets scenario — and I'm actually going to focus more on that, because people shouldn't really be pulling from public registries in their production deployment. They should pull from those public locations, validate the content really came from them, but then promote it into their secured supply chain; there are all these different conversations on mirrors and gated mirrors that implement the same secure pattern. Then, if it's in their registry and they choose not to sign the additional thing, that's fine, but they obviously retagged it and pushed it to another registry, and they could have retagged it with a different repo and tag. What I think we're saying here is there should be a policy that says, "I don't allow that." The data will be in the signature, but the client can say: for the last repo and tag, I enforce that the name must be the same. And then of course you can shut that behavior off and just verify the signature. And I thought I heard somebody talking.
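The mirror/alias idea above could be sketched as a small client-side remap table — an assumption for illustration, not anything the spec defines; `MIRRORS`, `expected_signed_name`, and the example registry names are all hypothetical:

```python
# Hedged sketch of client-side remapping: the client maps the location it
# actually pulls from back to the upstream name it expects inside the
# signature, so it can keep matching strictly while using a mirror.
MIRRORS = {
    # pulled repo                              -> upstream repo in the signature
    "registry.acme-rockets.io/mirror/mysql": "docker.io/library/mysql",
}


def expected_signed_name(pulled_name: str) -> str:
    """Return the name we expect inside the signature for this pull."""
    repo, _, tag = pulled_name.rpartition(":")
    upstream = MIRRORS.get(repo, repo)  # no remap entry: expect the same name
    return f"{upstream}:{tag}"


def name_matches(signed_name: str, pulled_name: str) -> bool:
    return signed_name == expected_signed_name(pulled_name)
```

This keeps matching strict (no blanket relaxation) while still letting a gated mirror serve the upstream-signed artifact under its own path.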
Does that make sense? Basically, the data is there for the whole thing, right? We've said the fully qualified name must be signed. In fact, we have it set up so that — and I'll use MCR as an example of this, but I know NVIDIA and others do this as well — the teams that sign their content don't sign it and push it directly to MCR. They don't sign things there yet, but they push them to private ACRs for that team, and then there's an onboarding process to get to MCR. So a team couldn't actually sign something against their private registry name, because that's meaningless — it's internal details. But the signature for the Microsoft registry can be associated with MCR, and somebody could validate that whole thing. In their environment, though, they should just pull the .NET image and validate it within their registry. If somebody really wants to rename it, they can; the signature would still be valid. They're just choosing that this Microsoft image, signed by Microsoft, can be named something else. If it's named MySQL and somebody shut that option off, hopefully they're checking that — MySQL shouldn't be signed by Microsoft. There are those kinds of configurations. You lose some traceability, I think, if people are deleting other tags as they're moving things around. Because if something were to go wrong, you wouldn't know: did it go wrong when we were copying it? Did someone malicious manage to copy it? Or did it go wrong before that? Where did the malicious image get inserted? You lose track of "okay, this is a valid image; it's still a valid image." Whereas if you're keeping the same signatures around the whole time, you can keep track of that. Yeah — that's the whole multi-signature movement.
So when an artifact moves — when you copy it from Wabbit Networks to Docker Hub, and when the consumer, the Acme Rockets consumer, copies it from Docker Hub to Acme Rockets — the two signatures at that point would have gone with it. So you have the full traceability, and that's why we're spending time on this. I think I misunderstood what you meant by re-sign — you meant add another one. Yeah, it's not "re-sign" exactly; it's an additional signature, right? Let me pull up the requirements document. Yeah, while you're pulling that up, the other thing I was going to get at goes right into that: we've been talking about maybe the client could trust it with a different name, but there's also the possibility that the person doing the signing could push multiple signatures with different names out there, saying, hey, this is an image that's known by this private repo but also by this other public repo, and put all those signatures out there for other people to consume. That works in some scenarios and doesn't really work in others — like when the supplier is not supposed to know the host names of the private deployment. So there still needs to be some client-side policy for either relaxing the matching, or even better, for remapping while still matching strictly. But yeah, as long as the signature contains a full name, we can build clients that do anything. Yeah. In addition, you could also have a different party own the signing. It could be a public image that a private company is pulling in. So, to use the Acme Rockets / Wabbit Networks example, Acme Rockets could pull in something from Wabbit Networks and then re-sign it with their own internal signature, saying, okay, we now sign this with our internal key and trust it with our internal name. Right — and the whole idea is that we want to make sure that as content moves, all of this information can move with it.
This is actually why I've been spending so much time on the distribution spec side of it: unless the content can move with the artifact, to Marina's point, we're losing all of that traceability that we really need. And it's not just the signatures — it'll be other metadata, it'll be SBOMs; this whole graph of information needs to be able to easily move with the artifact. And it shouldn't take 15 commands that some person in the pipeline didn't bother to run because they didn't see the need. It should just be inherently the default, and almost opt-out for when they don't need the information. So, in the spirit of not taking one topic for the whole discussion — I see your point, Brandon. Let's take not just what we have in the prototype, but move it up into the requirements section: we want the option, as a policy is the way I would say it, that a client can validate based on the repo:tag element. All the information should be there, and then a client can decide what they want to do, because for some images somebody is going to want to rename the repo and/or tag, or use a separate namespace, so they should have that option. And the best-case scenario is that when they rename it, they should also just re-sign it. So in the Acme Rockets case they would re-sign the MySQL image as, say, prod-mysql — I don't know, whatever the name is; don't get loaded on the word "prod". So having that flexibility certainly seems key. And Brandon, that was a PR, right? This was an issue I was just about to ask about. What I've got in 41 is the PR; it doesn't really include that definition, it includes a couple of other things. Do you want me to add a second PR there to add a scenario 12? Yeah, I think if you can encapsulate what we just discussed and make that a discrete thing, then we can just merge it in. The more discrete the changes here, the easier they are to approve, and then we'll continue on the other pieces.
It's been a while since I looked at the requirements. Do we have a section for policies or something? We want to separate what gets persisted versus what the client has options to do. Yeah, I don't know how to represent that without looking back at the document. The only challenge with multiple PRs is the numbering gets funny if I don't know which one's going to get approved first. The numbering is hard — don't worry about it. Just take the least likely to be contentious and make that the next number. We can always resolve it. Sounds good. Hello. Hi, I'm Joel. I'm new here, and I do have a question about signing a tag. How can you ensure, if you sign a tag, that none of the things that you signed prior have changed? Because you can move the tag from one SHA to another SHA, or add another blob in the middle, and that doesn't change the tag. How can you ensure that when you sign a tag, what you're consuming is really what was signed by someone? Well, first of all, welcome — great point. So the answer is we're signing the tag in addition to the digest. It would never be just the tag — to your point, you don't want the evil version of MySQL that just happened to be named the MySQL image to pass. Obviously, to Marina's earlier point, there needs to be some correlation that says the MySQL image should be signed by MySQL — a certification process, if you will. And then when Acme Rockets brings it into their environment, they would check whatever they think the right rules are and give it an Acme Rockets signature. But in all those cases, in the tag-signing scenario, the signature will always have a digest associated with it. The digest is the only thing that actually makes it unique.
So I think that's a good point: when you add this tag-verification piece, you might want to qualify that it's in addition to the digest. Yep. Yeah, I just wanted to circle back to the other point, though, about finding the valid signatures when pulling by digest. I feel like that's a key piece of the puzzle that we haven't really addressed. Yeah. I think you must start with the user intent — either an extra parameter, "I intend to pull this cool image, and here's the public key", or, I think, it's almost always implied by the repository the user is pulling from. So the client-side policy would probably be keyed by the namespace, so that you can have one policy for your private registry, one policy for quay.io/library, one policy for quay.io/mysql, and so on. That would define the keys, naming requirements, and so on. So the client would still know that whole namespace path, but they would just also say the digest that they actually want — because that way they could still find the signature by following the namespace or whatever other delegation method. You start with the image name; you find the matching namespace in the configuration, and that gives you the public key, or whatever is the root of trust. You resolve the image name to the actual image, get the manifest and the digest, and get all the signatures, and those must match the root of trust in the configuration. And just real quick — when you resolve an image name, you get a digest out of that, so when you pull an image, you've always got the digest. Okay, got it — I think I just misunderstood which parts were implied. So in addition to pulling just the digest, you have configuration set up for where to get all that information. So even if you always pull by digest, you may reference that digest by an image tag, which is the thing that gets you the digest.
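The namespace-keyed policy lookup described above might be sketched like this — `TRUST_POLICY`, `key_for`, and the registry paths are illustrative assumptions, not anything from the NV2 spec:

```python
# Hedged sketch: the client picks the root of trust by the longest matching
# namespace prefix of the image name, as suggested in the discussion.
TRUST_POLICY = {
    "registry.acme-rockets.io/": "acme-internal-key",
    "quay.io/library/": "library-key",
    "quay.io/mysql/": "mysql-key",
}


def key_for(image_name: str):
    """Longest-prefix match, so a more specific namespace policy wins."""
    matches = [prefix for prefix in TRUST_POLICY
               if image_name.startswith(prefix)]
    if not matches:
        return None  # no configured root of trust for this namespace
    return TRUST_POLICY[max(matches, key=len)]
```

The client would then resolve the name to a digest, fetch the signatures for that digest, and require at least one to verify against the key (or other root of trust) returned here.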
And then we want the option to be able to sign that tag, which says: this thing that's pointing to that digest is really the thing we thought it is. Okay, yeah, I follow the signatures part; I'm just trying to wrap my head around what delegation would make sense for pulling by digest, and it sounds like it's the same as pulling by tag. I think of one as the unique identifier and the other as the friendly name. And the unique identifier is globally unique — you know, until the world fills up with that many unique images and we go to a SHA-512 digest, but for now SHA-256 makes them globally unique. So we have the uniqueness, and then where the artifact is being pulled from — what registry and what path — is determined at pull time. Today it's embedded in the fully qualified string. There are a bunch of healthy conversations about decoupling that, making it more of a configuration. But whether it's specified by a string directly in the deployment document — the Helm chart or the Kubernetes deployment or, you know, a cloud-specific service YAML — or it's an additional piece of information that's mapped in with configuration, at the end of the day the client must know the fully qualified path of where to pull from. And at that point, as long as we keep going with the artifacts approach, where things can move, the signatures will always be alongside the artifact as well. Great, thanks for clarifying. I think one of the things you're touching on, though, that we haven't fully addressed — and it's part of the key management — is this: if I have an image, and let's use the Wabbit Networks one as a perfect example. If I find the net-monitor image on Docker Hub, and let's say Docker did not yet certify that image — because who ever heard of Wabbit Networks unless you're on these calls, right?
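To make the "digest is the globally unique identifier" point concrete: an OCI-style digest is just the hash of the raw manifest bytes with an algorithm prefix, so it's the same no matter which registry or path holds the content. A minimal sketch:

```python
# Sketch of content-addressable identity: the digest depends only on the
# bytes, not on the registry, namespace, or tag the content is stored under.
import hashlib


def manifest_digest(manifest_bytes: bytes) -> str:
    """OCI-style digest string: '<algorithm>:<hex of hash of raw bytes>'."""
    return "sha256:" + hashlib.sha256(manifest_bytes).hexdigest()
```

Two copies of the same manifest in different registries produce the same digest; changing a single byte produces a completely different one, which is why the signature is always anchored to the digest first.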
It's just some arbitrary vendor that's publishing some content, and the world shouldn't have to know every vendor, right? That's an unscalable solution. So vendors need to provide their content, and clients need to know: who is the key for this content? I don't have a great answer for that, but at this point, until we have a better definition, it shouldn't just be arbitrary. The client should be able to specify: hey, this software must have this public key associated with it. That might be some manual research. (I'm trying to guess who's got the music playing — I can see you looking. Anyway, sorry, it's just fun to hear the music.) So anyway, there's a public-key part that I think we have to figure out: how does a client know that the net-monitor software should be signed by Wabbit Networks? That's part of what we have to sort out. Yeah, I feel like this goes back to the thing I posted on the agenda about where the root should be. One of the ideas I've been playing with is maybe having options for this. In some cases, you do want a registry root that can be used to say, yeah, discover this new Wabbit Networks thing that we've never heard of before. But in other cases — if there are a lot of private images on a registry or something like that — you might want something more like an organization root, where you say, okay, this is an organization that I already know about, and these are all the images from there. I think maybe just a little bit of flexibility to deal with all these different types of registries that are out there, all these different public, private, known, and unknown image situations. Does that sound reasonable? Or what are the issues here? Yeah — I'll let others chime in, but there will be aggregation scenarios. Docker Hub is a great aggregation scenario, right?
They take content from lots of different vendors. Docker Hub shouldn't have to re-sign all the content that it's hosting. They will re-sign a subset that they say is certified content by Docker, but I don't think Docker wants to certify the content from the evil twins, to play on that one. I'd push back against that and say: if there isn't some kind of curation process, why would you trust it? Just because someone pushed content to Docker Hub without any kind of validation? That's not an area where you would say, "I trust everything that's been pushed to Docker Hub." I think that separation is something you'd want to address. Yeah, I think it addresses a slightly different problem. You wouldn't necessarily trust the image because it was signed by this process, but maybe Docker Hub could know that this user did upload this image, and they could have a separate chain just to say: we don't verify these images, we don't know anything about them, but this user did upload this image, and this is the key that they used to do so. Use it at your own discretion, or whatever. What's the value you get out of that separate chain? So Docker can use their own back-end authentication system to know who uploaded this image to their server, but you as a user don't necessarily know that whole chain. You don't know that this was a valid upload to Docker. If someone is man-in-the-middling or otherwise interfering between Docker Hub and you, it's nice to know that this is at least the image that's on Docker Hub, even if you don't know whether you should trust the image that's on Docker Hub. So what you're saying is it would be a verification that they had the Docker Hub login.
Yeah, that's basically all it is. It wouldn't necessarily say this is good software, but at least this is the software that's on Docker Hub — that other people see on Docker Hub — or whatever. So in the model we've been proposing, here it's showing the second signature from Docker Hub, but maybe I should show two different examples. If Wabbit Networks is not known by Docker Hub — it's just a vendor in an aggregation scenario — I may decide that I want to use it, and that's okay, but I might want to do some research and find out who this company is. And then I have a way — and I'm totally punting to Nios here — that if I get the public key for Wabbit Networks, then over here in Acme Rockets I can validate that this software continues to come from them, even if it's coming through Docker Hub. So that's the signature transferring with the copy. And — I think it was Nios's point — Docker Hub doesn't have to additionally sign it, because I don't know what it means for Docker Hub to sign something if they haven't done anything to it. I'm not saying that they have to sign the image, but they can delegate to that public key for Wabbit Networks. They could just say: this is the key that people are using to sign this thing called Wabbit Networks; this is what the public key is; use it or don't, but this is the one that we have. And I think a lot of people will do that kind of validation and make sure their images are all correct, but there are definitely some users who won't take the time to verify every image, and it's nice if they could still be protected by signatures, even if they don't do all the research, by having an automated system to get the key. I would say, as a user —
It's like the certificate authority model, except you don't have 600 certificate authorities — you have only Docker Hub or the Azure registry; you have maybe five, and you know which ones you rely on. I think quite a few users are going to use it, because how do you bootstrap the trust otherwise? When you visit Wabbit Networks and get their key on a USB disk? Yeah — just real quick, back to Maria's point: I would say I probably wouldn't ever want, myself, to trust a signature that says Docker Hub publicly verified that this came through a user login, just because those user logins are free and anybody can create one. That doesn't give value to me, but maybe others see it differently. Well, I think part of the value comes from having it as one of a set of signatures that you want. So you could theoretically use it as a gating mechanism, so you don't end up with images coming from registries you've never heard of out in production. Although it's not just that it was signed by Docker Hub — it's that this was signed by Docker Hub as the Wabbit Networks user login, and you are pulling a Wabbit Networks image, and the signature contains the Wabbit Networks identity. If all three things match, you have a fairly reasonable argument that the image actually is from Wabbit Networks. It's not just somebody anonymously logged in; it is the user login you are looking for. So I think part of the goal we're trying to facilitate here is that it doesn't really matter where I got it from at any one point in time, because we know these things are going to move so much — what matters is whether the signature that's on the artifact is verifiable. So let's say what I'm hearing is there are two scenarios. One: Docker is certifying a set of content, which I expect they will continue to do, but not all of it. We're purposely picking Wabbit Networks as one that at some point wasn't known by Docker Hub and maybe eventually gets certified.
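The "all three things match" argument above can be sketched as a small check. This is purely illustrative — `plausibly_from` and its parameters are hypothetical names, and it assumes a `registry/namespace/repo` naming layout:

```python
# Hedged sketch of the three-way match from the discussion: the identity in
# the signature, the registry's record of which login uploaded the image, and
# the namespace the client is pulling from must all agree.
def plausibly_from(publisher: str, signature_identity: str,
                   uploader_login: str, pulled_repo: str) -> bool:
    parts = pulled_repo.split("/")
    # Assumes "registry/namespace/repo"; the namespace segment names the publisher.
    pulled_namespace = parts[-2] if len(parts) >= 2 else ""
    return (signature_identity == publisher
            and uploader_login == publisher
            and pulled_namespace == publisher)
```

Any single factor alone (a free user login, a familiar-looking name) is weak; it's the agreement of all three that gives the "fairly reasonable argument" in the discussion.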
So I'll do one of two things. I'll either trust Docker Hub — because anything they certify, I just trust that they've done some certification, and that's good enough for some companies. Or, for the things Docker didn't certify — because they're not going to certify everything; then what would be the point of certifying? — I just do some research and decide that Wabbit Networks is valid. So I acquire Wabbit Networks' public key, and regardless of where I get it from, whether it be Docker Hub or one of the promoted registries inside Acme Rockets, I should be able to verify that that digest is signed by the key that I've got for Wabbit Networks. There's the key revocation issue, right — we still have to account for that. But doesn't that give me the flexibility that now Docker can opt in with some additional certification, but I'm not dependent on it? Because — I can't remember who said it — I don't know what it means that somebody merely had an ID to push to Docker Hub; I don't know what help that is if it wasn't signed by some entity. Yeah, I feel like this is mostly just a way to get public keys for images when you aren't doing the additional verification. Again, a lot of people who are more security-conscious will do the additional verification — they'll make sure projects are validated by Docker Hub, whatever. But I think there are also a number of use cases where people either don't care, or think they don't care, about their own security, and just want to download an image that will run, based on this ID. And the idea of having Docker also list public keys for those images is just that at least they're signed. It's better than pulling without any kind of key at all.
And it's a little bit better than trust-on-first-use for every single repository you're pulling from, because at least it's centralized. I haven't looked at these in a while, so I have to re-sync myself. But like the reference you made to some scenarios — let's make sure we're capturing those, so we can say "this is a scenario we are trying to support" versus a non-scenario. In fact, I just wrote down one we're not trying to support: a trust-on-first-use scenario. There is an explicit configuration that says: here are the keys I'm opting into. Maybe we should document some scenarios on synchronizing images between registries, and also between namespaces within the same registry. That would clarify exactly what would be required for synchronizing. It probably means doing a pull, verifying the signature, pushing it to the other registry or namespace, and whatever else is required to synchronize the image. I don't think we have any scenarios describing that part yet, and that might help this discussion — figuring out why we are actually doing what we are doing. Yeah, because I guess it is possible that if something is not verified by Docker Hub, we just don't want to support that use case. But I would argue it might be better to have some support for it, so that people get better security with it now. Yeah, for example, thinking about the scenario: let's say I have my own private registry that I want everyone to pull from, but I still want to allow people to use images available on Docker Hub. I have a synchronization process that runs on a daily basis and does this automated thing of pulling, verifying against a few policies, and pushing to my own private registry. That could perfectly well be a backend task, and it means everyone in the company can depend on that private registry.
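The daily pull/verify/push job described above could be sketched like this. The `pull`, `verify_policy`, and `push` callables are stand-ins invented for the sketch, not a real registry client API.

```python
# Hypothetical gated-mirror loop: pull from the public registry, verify
# signatures against policy, and promote only passing images into the
# private registry — with their signatures travelling along.

def sync(images, pull, verify_policy, push):
    """Promote images whose signatures pass policy; report the rest."""
    promoted, rejected = [], []
    for ref in images:
        artifact, signatures = pull(ref)
        if verify_policy(artifact, signatures):
            push(ref, artifact, signatures)  # signatures copied with the artifact
            promoted.append(ref)
        else:
            rejected.append(ref)
    return promoted, rejected

# Toy backends: one image signed by a publisher we trust, one that isn't.
upstream = {
    "wabbit-networks/net-monitor:v1": (b"manifest-a", {"wabbit-networks"}),
    "randomuser/unknown:latest": (b"manifest-b", {"randomuser"}),
}
mirror = {}
promoted, rejected = sync(
    upstream,
    pull=lambda ref: upstream[ref],
    verify_policy=lambda art, sigs: "wabbit-networks" in sigs,
    push=lambda ref, art, sigs: mirror.__setitem__(ref, (art, sigs)),
)
assert promoted == ["wabbit-networks/net-monitor:v1"]
assert "randomuser/unknown:latest" not in mirror
```

Running this as a backend task is what lets everyone in the company depend on the private registry alone, as described above.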
And all this verification and policy work can be handled by the synchronization process. So I think for those kinds of use cases it makes sense to document the scenarios, to figure out why we are actually syncing. Because, let's say I trust Docker Hub — I could also trust Docker Hub based on the URL. Why do I need the signature to trust it? I can just whitelist hub.docker.com. So what is the value of adding the signature if I can just depend on the URL? Well, obviously if you can just depend on the URL, that's fine, and you can do the SSL validation and so forth. The problem is that that's really all Notary v1 does. And we're trying to make sure customers can move artifacts into their own registry and still get that full attestation, including the gated-mirror scenario we've all been discussing. For the full attestation, that's the reason we need to trust the signature from Docker Hub rather than the URL. I would also throw out there that with TUF metadata you get more security properties than just TLS: you don't just get "this came from the server", you get verification beyond that of where the key came from, which traces back to a real developer instead of a server, which could be compromised. And concepts like timeliness — this is the current image, and a stale signature can't be replayed as the current signature for this image. Makes sense to me. Yeah, Mark. I mean, for your imported-mirror scenario — first of all, number six does cover copying the signature along with the artifact, whether within the same registry or across different registries. That's one of the key scenarios; the fact that it's number six — you might as well make them all one through six, or all number ones, or whatever. It's a priority, and it should just be the default behavior.
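The "timeliness" property mentioned above can be sketched as an expiry check: TUF metadata carries an expiration, so stale metadata (a replayed old signature) is rejected even if its signature still verifies. The field name below is illustrative, not the actual TUF wire format.

```python
import time

# Sketch of TUF-style replay protection: metadata must not be past its
# expiry, independent of whether its signature checks out.

def metadata_fresh(metadata: dict, now: float = None) -> bool:
    """Reject metadata whose expiry has passed."""
    now = time.time() if now is None else now
    return metadata["expires"] > now

assert metadata_fresh({"expires": 2000}, now=1000)      # current metadata
assert not metadata_fresh({"expires": 1000}, now=2000)  # stale: replay rejected
```

In real TUF the expiry lives inside the signed portion of the metadata, so an attacker cannot simply bump the timestamp without re-signing.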
And if we're successful with NV2, then hopefully all the containerd and Docker clients will just build it in by default. That's the right experience. The beauty of that is, if you're not able to pull directly from Docker Hub because your production environment doesn't allow external sources for reliability or security reasons, you can move that content internally and still get all the same trust and comfort you would from pulling it from Docker Hub. In fact, you get even more, because you're no longer dependent on all the internet connections in between. As that content comes in, everything that was on Docker Hub is copied with it, including all the signatures. And now you can reference it from your private registry inside whatever network restrictions you want. The additional beauty is that as part of that import process, the Acme Rockets team can run some additional certification on it — run some security scans — and then add the Acme Rockets key in its base-artifacts registry and say: hey, not only is this from Docker Hub, but we have scanned it inside Acme Rockets and it abides by our company policy, so teams can depend on it. The idea is you get the best of both. Yep. So does that also mean we need to synchronize public keys or things like that? We were briefly discussing this as well. I feel bad because he isn't here right now — it's fun to dump things on him when he's here, but not fair when he's not — but that is part of the key management solution. I would argue that regardless of whether you're pulling directly from Docker or from a private registry, you still need the public key to decide whether you validate against it. So it's somewhat orthogonal. Yeah.
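The import-with-additional-attestation flow above might be sketched as follows: the artifact keeps its upstream signatures, and the importing org appends its own signature only after its scan passes. All names here are invented for the sketch.

```python
# Hypothetical import step: keep the upstream signatures and append the
# importing org's attestation after a successful policy scan.

def import_with_attestation(artifact, upstream_sigs, scan, org_sign):
    """Return the signature set for the imported artifact."""
    if not scan(artifact):
        raise ValueError("artifact failed the company policy scan")
    # Upstream signatures travel with the artifact; the org's is added on top.
    return upstream_sigs + [org_sign(artifact)]

sigs = import_with_attestation(
    b"net-monitor-manifest",
    ["sig:docker-hub", "sig:wabbit-networks"],
    scan=lambda artifact: True,                 # stand-in security scan
    org_sign=lambda artifact: "sig:acme-rockets",
)
assert sigs == ["sig:docker-hub", "sig:wabbit-networks", "sig:acme-rockets"]
```

A consumer inside Acme Rockets can then require both the publisher's signature and the org's own, which is the "best of both" outcome described above.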
For the TUF design, one of our ideas is that when you move an image to another registry, you can do a multi-role delegation, where you delegate both to the public key on the other registry and to a public key on your registry. Then you can verify both: the previous party signed it, and your registry signed it. Or you can do just one or the other. Either way, you can carry the public keys across the different registries using those delegations. I was trying to get back to our PRs and other things; we quickly got into great, healthy conversations. All right, so basically, Brandon, you're going to follow up on this one and form it into a PR that we can review and merge. I think we've got the core things there. Hank, thank you for fixing your key — you're in. Niaz isn't here, so we'll come back to this one. This one was just an update — very basic, nothing controversial; it's basically just updating the images to reflect the documents. And then these are the ones Niaz is actively working on. This one I'll archive, because it has evolved into the current NV2 distribution spec work. These two were initially for design conversations. I see — Sam, let's chat; I don't want to lose these, and I don't know if closing is the right thing. Let's figure out how we want to maintain these, because we didn't have discussions at the time. I think that basically covers that. And this is Brandon's conversation again. Brandon, is this the same? These are part of the same thing — I have lots of conversations. So I'm just asking whether the client should be able to limit their trust. You might pull in a key that says "I want to trust MySQL" or something like that, but you only want to trust a certain MySQL image.
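The per-image trust scoping question at the end there — trusting a key, but only for certain images — could be expressed as a policy table like the one below. The structure, key IDs, and wildcard convention are all assumptions made for this sketch, not anything the group has agreed on.

```python
from fnmatch import fnmatch

# Hypothetical client-side trust scoping: each trusted key is limited to a
# set of repository patterns, so possessing a key does not grant it
# authority over every image.
trust_policy = {
    "mysql-key": {"library/mysql"},        # trust this key only for mysql
    "wabbit-key": {"wabbit-networks/*"},   # or for a whole namespace
}

def key_trusted_for(key_id: str, repo: str) -> bool:
    """True if the policy scopes this key to cover this repository."""
    return any(fnmatch(repo, pattern) for pattern in trust_policy.get(key_id, ()))

assert key_trusted_for("mysql-key", "library/mysql")
assert not key_trusted_for("mysql-key", "library/postgres")   # out of scope
assert key_trusted_for("wabbit-key", "wabbit-networks/net-monitor")
assert not key_trusted_for("unknown-key", "library/mysql")    # no policy entry
```

Whether this scoping lives in client configuration or in delegation metadata is exactly the open question in the discussion above.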
So I'm hearing — and I've seen this in a couple of places — that we have a requirements section that talks about the broad things, and some of those relate to the data that must be stored in the document. They might enable lots of scenarios, but then we've been getting into a lot of policy configuration questions: can I do this, can I do that? And I'm wondering if we need some named element in our document that says: here are the things I want to enable through a policy configuration. I don't know if, Brandon, you want to help take some of that on — separating the policy requirements from the persistence requirements. Yeah. I'm not sure I necessarily want to keep them separate, just because I'm thinking there might be issues where, depending on how you implement the server, you're limited in what kind of policy you could implement. But that's fair. That's why I'm trying to write the requirements in such a way that the data we think is needed is always there. Like we've talked about: the digest must always be there. Having a tag signed by itself is not all that useful. If the registry includes that information, you don't have to use it from a policy perspective. For a bunch of these, if it's possible, I'll just note: that's just a check, we could do it. A lot of these were about thinking through the TUF design — could we implement TUF on top of what we're doing right now — and I just want to make sure we capture the potential roadblocks we might run into. Yeah. Part of this is that we've got the core requirements we need regardless of which implementation it is.
So I want to be careful not to pull in a requirement because it supports TUF, as opposed to asking: does TUF support the requirements? Right. A bunch of these were saying: what you've got done on your side looks good, Steve, and if we were to try to do TUF, TUF may have issues implementing things you can already do in your prototype. So I just want to make sure we document what we are expecting out of the solution — which is what we've already got in your current prototype — so we don't lose it when we try to implement TUF on top of it. Right. And that's the challenge: we're trying to capture requirements independent of any opinionated implementation, and then find the implementations that can meet those requirements. If there's additional stuff, great — but the additional stuff can't invalidate the requirements we have. One of the things we've been talking about is the cross-org intermixing of content: Coke and Pepsi can't have any access to each other's information in any form whatsoever, including anonymized data. And that's, I think, Marina, the thing you're getting at — maybe an org can have a root certificate that handles that. The other thing we're trying to do is make sure an org might be a single repo, or it might be a hierarchy or a collection of repos, such as what Docker Hub has. Yeah, exactly. And I think by giving that flexibility, when you do have those private registries that really can't talk to each other, you can deal with that situation, but you can also broaden it for things that are more open — either open source or just public projects. Yeah, the way I think about it, as you're saying: Coke and Pepsi both have repos on Docker Hub.
Not great examples — just bear with me. They're signing their content there by themselves, and they have no intermixing with each other, but the Docker Hub certification sits at the root and has the ability to certify things across Coke, Pepsi, Wabbit Networks, and others. That gives you some broadening of scope. Yeah, exactly. And it also makes key management easier: if there are more projects all under that Docker Hub umbrella, you can do the key management in one place. When you need more privacy, you just have to do a bit more work on the key management side to get the Coke key and the Pepsi key and whatever else. So yeah. Okay, I'm trying to balance facilitating the conversation with not doing all the talking. Where do we want to go next? We're at eight minutes left. I'm going to work through some of these in the Slack session, and I'll see if I can get some time this week to trim these down, leaving time for conversation on anything anybody wants to cover. Marina, did we already cover this, or is that something you want to take the last eight minutes for? Yeah, I think we basically covered it earlier. My current idea is just to have multiple options for what the root key covers. That means, based on the registry, they can decide what makes sense for that registry. Because it sounds like that's really the concern: different things make sense for different registries, and they'll know that, so they can just set it up accordingly. Yeah, in addition to letting the registry decide how granular to be on its side. Also keep in mind there might be cases where you want the same root certificate on multiple registries.
To take Steve's example, you might have a key that signs in the development registry, gets promoted to production or off to a DR registry somewhere external, and you want to trust that same key no matter where you see it. Yeah, that makes sense if they're in different places. My question would be whether you'd want that to be a root key, or just a targets key that is delegated to by different roots. It would depend a little, but it should be perfectly fine to use the same root key in different places, as long as you maintain those protections. Or maybe the same key but separate certificates for each registry — anyway, many ways to do this. How do you do the same key but different certificates? That's assuming the root of trust actually takes the form of a certificate, which might not be the case. I think these are big questions, and maybe I'll ask others to help, because it seems like all our key management experts keep getting pulled into other things. If folks can help Niaz with some of those — I know they've been trying really hard, but we all get sucked into other stuff as well. So if we can help with some of the key management — because that makes me think more about how the keys get moved around, copied, made available, discovered, and so on. Interestingly, we said we don't think we're blocked by it, because people require keys today, even for ephemeral clients. But I think the experience would suffer if we don't have a good solution for that, so I'd love to see more progress there. That's captured in some of these issues and pull requests on the key management scenarios, attack goals, and so forth. Anything else? I have a couple of housekeeping items; I just want to close out anything else first. A quick shout-out: I've been adding a bunch of documentation about this stuff.
This TUF and Notary documentation work is in the repo, so if anyone wants to check that out, give feedback, or share ideas — anything like that. Yeah, just feel free to add links in our notes, since that at least gives people a central point; link it in the notes so people can find it. Thanks. Okay, so just wrapping up a couple of housekeeping notes. Justin Cormack hasn't been able to make the call. He was recently promoted — or anointed, appointed — CTO at Docker. Unfortunately he has some other commitments at this time, so he won't be able to make this time slot. I know others have asked about moving the time slot as well. I've asked for volunteers to help facilitate a discussion on where to move it, so I'll ask again: if somebody can help facilitate that, it would be great. We have multiple time zones and continents to account for, and there are lots of voting tools out there. If somebody wants to take that on — put up a couple of votes, see who agrees to what — we'll do whatever we need to figure out the new time slot that the most people can attend. I believe it's obviously key that Justin can make the call. That's point one. The other one: per Amy, apparently Zoom bombing is still a thing, along with some other challenges with Zoom. There is something else that CNCF has been setting up, and honestly I haven't had a chance to look at what it is, so I'll go find out. Just keep an eye out — the link to the call might change. I don't want to completely confuse everybody with a different time slot and a different link, so the HackMD doc will stay the same; just make sure you're going back and checking it. If you're watching the CNCF calendar, I'll make sure Amy gets that updated as well.
But if you've made copies of this on your private calendar with this Zoom link and this time, just be aware that it might change; I will post any changes to our HackMD doc, so people know exactly where we moved the cheese. That was the other item. The last item: Docker Distribution did get officially donated to CNCF. Yay, that's great — I know it's been a long time coming. We obviously want that to be part of a reference implementation for Notary v2. Which repo it's in isn't really all that important; the fact is that it's now donated, and folks from GitLab, GitHub, DigitalOcean, Harbor, and a couple of others I forget are maintainers on it. They're actively looking to make additions, so that's exciting — we'll have more people helping out and an opportunity to commit to that code. There are a couple of intermixed conversations in all of this. There's a listing API that we need for Notary, and we need to make sure it actually works. I'll post as many links as I can for all these different pieces; I'm trying to keep everybody in sync so we know where all the moving parts are and find some stability. There's also the previous conversation on whether signatures reference a digest or a tag. So there are a number of these things that are intermixed and correlated, and we need to pull them together and get some clarity so it all actually works. And lastly, Josh and Peter are trying to close down the distribution spec updates, which won't incorporate some of these changes, because we're trying to get the 1.0 out that's been pending for a couple of years.
But one of the things we'll work on with the distribution spec, v1-plus, is hopefully some of the stuff Vincent has been doing — the extension model, so you can tell what's in a distribution implementation or not. Those are all the multiple moving parts; I'll try to capture as many of them as I can, without writing a thesis, in the status update I'll try to get out by the end of the week. So with that, without further ado, I'll wrap up. Thanks again for another week. Thank you.