Hey folks, give me a quick audio check from somebody. Hi, Steve, it's Jesse Butler. Hey, Jesse, thank you. Thanks, other folks. Niaz has the agenda today, so just wait for him to get started here. For those of you wanting to look, here's what he's been working on. Let me just ping him to make sure something didn't come up. So many Slacks. Evan won't be able to make it. Hey, there he is. Great. We can take another minute or two for the proverbial Monday morning five-minute interval. I'm still faulting my cache back in; I took a couple of days off. In fact, I think Niaz also had notes from the meeting too. Yeah, we did. Hi Steve, sorry I joined a little bit late. No worries, you're under the proverbial five minutes, so you're good. I just did a quick run-through. Overall it looks good. I had some minor nits on artifacts versus images, and a couple of things on the admin (the title "admin" is sometimes overstated in the scope), but otherwise it looked good. I need to read it from top to bottom with actual focus, and I'll give it some additional time over the next few days to get more feedback on the details. At a very high level, there are some questions that came out of the discussions that are worth addressing with the broader audience. I'll go through the list. One is: what are we looking at for the Notary service to evolve into? In the current implementation, I believe Notary stores all the JSON documents for the signatures. Here, we've been looking at having the signatures travel with the artifacts themselves. So Notary could potentially transform into being more of a key-sharing service, which developers could use to say "here are the keys that I'm using," so that anyone looking to deploy their artifacts knows which public keys to trust.
I think that's worth a broader debate in terms of whether it's the right approach, and we can address it in a conversation in the key management group. The second component that comes up is how people manage their keys. We've talked about how the basic setup could work, but in terms of having mechanisms to create your delegates safely, and potentially being able to do that for an enterprise: I think the base case is probably something we address in the client, and then we provide extensibility for others to build on their own. That seems like a stretch goal we could potentially add in, but I don't think it's needed for an MVP; I'd be curious to get other opinions as well. And then the third part that came up: we've looked at container signing mostly as private PKI signing, where every developer is responsible for having their own roots and delegates and managing that key infrastructure. Do we want to consider an extension into a public PKI, where you could get certificates from public CAs like DigiCert or Symantec and potentially sign with those as well? So those are the three things I think are worth debating with the larger group. For the first one, you mentioned something about the signatures being on the artifacts themselves. Is that what I heard? The signature is moving with the artifact; it doesn't necessarily need to be embedded, but the idea would be that the repositories or the registries would be responsible for providing the signature information for the artifacts they're sharing. Gotcha, okay. Yeah, we've been thinking about this a little, because in the context of pull cases, if you will, just pulling content, regardless of whether it's an image or anything, we've been generically referring to it as push and pull to keep the same semantics. The key portion... well, I guess even in a pull, you may still want to validate the key.
And I kind of leave it up to each artifact type. Something we've been playing with is the idea that we want to be able to sign the digest of the artifact, which means you can't do it all at the same time: you have the artifact, you can then sign it, and you can submit the signature content, if you will (an artifact that is a signature validation object), to the registry, referring to the other thing. I'm getting caught in a little implementation detail, but the point is that you can keep those things separate so you can have multiple signatures. Because that's the other thing: you want to be able to take something that's already in the registry and add a signature to it without changing the digest, because we have this requirement that the digest and tags must not be changed just to add a signature. That allows the workflow where I've got something from wherever, an external entity or even an internal entity, and I can add another signature to it. For the case where people are moving content between registries, I guess it's more the key, not the signature, that's really the problem. So maybe, just thinking through it, it's not as big a deal as I thought. So the question becomes, whether it's the Docker CLI or the Helm CLI or Singularity or others that are coming up: if we can get a good standard on this and they adopt it, then when they're pulling those artifacts, they know how to pull the signatures that are supported for those as well. I think that's the question. Right, I think the call-out here is that essentially the larger group would address how the signature is shared, so that's not something that would be addressed in the key management group. The key management group would essentially just provide how we trust the keys that were used to generate the signature.
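The detached-signature idea being described here (a signature object that refers to the artifact by digest, so the artifact's digest and tags never change, and multiple parties can sign later) can be sketched roughly as follows. This is a minimal illustration, not anything from a spec; the field names (`targetDigest`, `keyId`) and the key identifiers are invented for the example.

```python
import hashlib

def digest_of(blob: bytes) -> str:
    # Content-addressable digest, registry-style ("sha256:<hex>").
    return "sha256:" + hashlib.sha256(blob).hexdigest()

def make_signature_object(artifact_digest: str, key_id: str, sig: str) -> dict:
    # A detached signature object that refers to the artifact by digest.
    # The artifact bytes are untouched, so its digest (and tags) stay stable.
    return {"targetDigest": artifact_digest, "keyId": key_id, "signature": sig}

manifest = b'{"schemaVersion": 2}'
d = digest_of(manifest)

# Two parties can each attach a signature later, without touching the artifact.
sig_a = make_signature_object(d, "vendor-key", "<sig-bytes-a>")
sig_b = make_signature_object(d, "internal-key", "<sig-bytes-b>")

assert digest_of(manifest) == d  # digest unchanged after signatures are added
assert sig_a["targetDigest"] == sig_b["targetDigest"] == d
```

The signature bytes here are placeholders; the point is only the shape of the workflow, where signatures accumulate alongside an immutable, digest-addressed artifact.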
Right, so if I'm understanding this correctly, then if you go with a proposal like the one I understand Steve has been talking about, you would pull the signature information quite easily with the image, but you wouldn't know what keys to trust, or how, or why. And that becomes, I think, a problem that's left as an exercise to the reader at this point. But there are ways to solve that problem. Analogously, looking at this in the TUF case, the metadata that you're downloading from the repository already has all that information, because it has the trust information: it has the delegations and what keys are trusted and why, and it also contains the signatures for everything. Obviously artifacts don't change in TUF when you sign them, because all you're updating is your targets metadata. I think the question is, and this is part of what I've been calling a two-phase thing, though that's not really the right term: the idea of how you trust something with Notary, because we need some other external entity of some sort, whether it's the client, whether it's stateful or not. What I'm trying to understand here with the key management stuff Niaz is talking about: my understanding is that we purposely don't want to keep the keys in the registry per se. So you're right, there is an acquisition model for how you get the keys and say which ones you trust. I don't think it's left to the reader per se. Whatever we come up with for the Notary v2 spec will say that the client implementing Notary v2 gets keys from some pluggable model (I'll over-generalize for now, because we want to be able to get keys from multiple locations). And when the client is pulling things, if the thing it's retrieving isn't signed by one of the keys it trusts, it's rejected. And there's got to be key revocation in there as well.
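The client-side behavior described here (keys come from some pluggable source, and a pull is rejected unless the artifact was signed by a trusted key) might look like the sketch below. The structure and names are purely illustrative, assuming detached-signature objects that carry a key identifier.

```python
def verify_against_trust_store(signatures: list, trusted_key_ids: set) -> bool:
    # Accept an artifact only if at least one of its signatures was made
    # with a key the client trusts; otherwise the pull is rejected.
    # (A real client would also verify the signature bytes cryptographically.)
    return any(s["keyId"] in trusted_key_ids for s in signatures)

# Keys could come from any pluggable source: a local file, a key-sharing
# service, an enterprise directory, and so on. Hard-coded here for the sketch.
trusted = {"vendor-key"}

sigs = [{"keyId": "vendor-key", "signature": "<sig-a>"},
        {"keyId": "unknown-key", "signature": "<sig-b>"}]

assert verify_against_trust_store(sigs, trusted)        # trusted key found
assert not verify_against_trust_store(sigs, {"other"})  # no trusted key: reject
```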
Actually, Niaz, did you cover key revocation? I did. Key revocation does have some components we need to discuss further in the subgroup. The process is something I outlined, but how the revocation lists and things like that are shared, and whether revocation lists are even the right approach, I think we need some more conversations around. Revocation, we know, has typically been very tricky to implement properly, and there are a lot of sharp edges. So I think Justin pointed out accurately that this is one where we'll want to look at a few different approaches and make sure we settle on the right one. I agree. Yeah, I won't claim any expertise in revocation. I kind of defer on the revocation scenario: whatever the experts say about how we deal with it, that's something we should review. The doc does describe what we expect for revocation in terms of what the scenarios would look like, and it does call out some of the things we saw as issues with the Notary v1 implementation, so we should be looking to address those. Sweet. So that was just some discussion on the first one. We can have more; we should have more. What were the other thoughts people had? The second one was around the key management for signing itself, right? How do you manage the PKI? I think here we could have an implementation that uses some local key store and shows how you generate delegate keys, and that provides an MVP. But the question is whether we extend that so you could do this for larger teams or sets of developers; that might be a stretch goal. I don't necessarily think it's needed for an MVP, but it should at least be documented in terms of, you know, here's how we would have extensibility to make this work.
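As noted, the revocation mechanism is still open and revocation lists are only one candidate approach. As a strawman, a shared revocation list could simply subtract keys from the client's trust set before any verification happens; the names here are hypothetical.

```python
def effective_trust(trusted_key_ids: set, revoked_key_ids: set) -> set:
    # Strawman model only: a shared revocation list removes keys from the
    # client's trust set before any signature is checked against it.
    return set(trusted_key_ids) - set(revoked_key_ids)

trusted = {"vendor-key", "team-key"}
revoked = {"team-key"}  # e.g. a compromised delegate key

assert effective_trust(trusted, revoked) == {"vendor-key"}
assert effective_trust(trusted, set()) == trusted
```

How such a list is distributed, kept fresh, and protected from rollback is exactly the part the subgroup still needs to work through.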
And then the last one, and this is a question for the larger group: do we envision a scenario where we want publicly trusted certificates, publicly trusted keys? What are the pros and cons of that? What would you say the pros and cons of a scenario like that are, including some of the air-gap requirement issues? So one is it takes away your responsibility of having to manage a root, which is slightly risky, and you could potentially rely on the public CAs to do revocation for you. There are costs for getting public certs, so that might be something to factor in. Also, now you're reliant on getting revocation information from the CAs themselves. So there are some trade-offs there. Would that be a challenge? Big companies already have that investment and they're doing it, and it's an opt-in thing; smaller companies that don't want to buy into that bigger problem can use an alternative solution. Are they mutually exclusive, I guess, is my question. Yeah, I think that's... Go ahead. I would say that any interface with that ecosystem, if possible, must not be mandatory, so that a consumer can pin a specific root of trust for a specific vendor, and so that a company can internally deploy with their own keys and all of this without trusting the 600 or however many public CAs. If someone wants to opt in to that model, I can't see why not, but I wouldn't want to start with that as the default. I think also there's... I think it's a given that we wouldn't start with that as assumed. So one way to look at this is from the standpoint of: is this something customers are going to want and ask for? And I think at least at this point there hasn't been a deafening outcry of "yes, we get asked for this, we have to have it."
The other way to look at it is, in the absence of customers asking for this, the most important question is: is this a good idea from a security standpoint? And then there's the question of, well, is it a good idea from a security standpoint compared to what, in what scenario, and what might it replace? So I think asking whether we need the feature is a little hard without us figuring out what we would have it in lieu of, and what the likely security ramifications of those scenarios would be. The way I think about it is, containers are not phone apps where you have one click to install and then just interactively use them. There is always some kind of deployment process, configuration, something. So asking the users to set up roots of trust from wherever they learned about the application is not much of a stretch. The one exception to that, I guess, would be the docker.io/library hierarchy or something like that, which provides a fairly large set of images for public use, where the documentation is assumed or the knowledge is widespread. But for any specialized applications and so on, I don't think the CAs are really a necessary part, and for the library it might be reasonable to even make the registry that maintains that library the root of trust, rather than involving the wide CA ecosystem. Yeah, I don't think this is... I was just asking, what was the context of how it came up? So when we were looking at how we manage roots, we moved into how this is analogous to how TLS certs are done today. With TLS certs, you have a set of trusted roots, and essentially you are validating domains against those, right? So there is a potential of extending that model here. I do agree with what's being suggested, in terms of this not being the default option and not being something in lieu of setting up your own roots.
I think this is potentially an extensibility option. And I agree with Justin's next step there: determining whether this is something we need and something that would be useful. I don't really have the information to make that call. I think that's the question we want to follow up on and spend time looking at: one, is this something that would potentially benefit customers? And two, does it introduce any additional security risks? We can look into the security risks as part of the key management group, but whether there would be a need for this is something where I think we may want another round of feedback. I mean, if you guys aren't finding glaring benefits and nobody's asking for it, I don't know that we need it. I'm only probing at questions because you brought it up; I don't have enough context to say one way or the other. So you guys make... We can, for example, just document it as a potential extensibility point. So as a next step there, I'll look for some additional comments. I plan on doing another iteration on this on Wednesday and then reviewing it in detail in the key management meeting on Friday this week. Are you guys doing it regularly now? Yeah, I have a recurring invite set up. Sweet. And is everybody liking the time? Because we've been discussing various times in various meetings. Is that working for people globally? It's worked for everyone that's shown up; I don't know of anyone that hasn't shown up because of the different times. And the West Coasters are not complaining they have to get up too early, including yourself? I don't see any West Coasters other than yourself that have been attending. I'm not sure where Marco is; in Europe, right? Yeah, I don't know if anyone else from the West Coast was planning to attend.
If I am missing anyone, let me know and I'll explore what we can do from a timing perspective, but right now I'm the only person on it. I usually get up early, so it's not a big deal for me. Cool. Wasn't there a third topic? There was a third section, or was that the big jump to the third one? Those were all three; I think we jumped through the second one. That one wasn't too contentious. Anything else? As I've mentioned a couple of times, our team that did the Docker Content Trust implementation has been working on some ideas. We've been iterating on getting them written down in a way that's digestible, rather than just dumping something out. So the plan is to have something, a public PR or something public, to review in the next day or two. We have our meetings Tuesday nights, so probably Wednesday morning at the earliest, and give folks a chance to look at it and review it. Then I'll put it on the agenda for next week to drill in. It's basically a prototype and a discussion, more detailed than what I provided in what was titled the "reverse lookup" thing, where basically, for a given artifact, by its digest or its tag... the way they've done it is by the digest, so "you can get a digest from a tag" is the argument. Based on a digest, you can get a set of verification objects, which could be signatures; it could eventually be some TUF metadata. The idea is just getting the conversation primed on how we can keep digests and tags unchanged, add multiple signatures and verification objects, and what API would be needed on distribution to retrieve that collection of things, and to start having that sketch. If you remember, the other thing I put out was kind of the Gaudí model of: hey, there's a lot of complexity here for various teams.
Let's put out a model for people to see the different pieces of it so they can figure out where in this they can engage: the plumbers, the electricians, the HVAC folks, and so forth. So I've really been wanting to get it written down in a way that's consumable, and they've been iterating on that. Now that I'm back (I took Thursday and Friday off last week), I'll get some review in today and tonight, and then we'll get something public out tomorrow for everybody to take a look at. I'll post that in the Notary Slack conversation for others to get a chance to read, and then I'll put something on the agenda for next Monday. We did have this interesting conversation... what was the thing we were discussing the other day in Slack? Oh, there was this interesting conversation we had around the digest and persisting the manifest. There's this conversation we were having on canonical JSON versus formatted JSON, and I think I saw another conversation on this on the OCI thread as well, where the spec doesn't call out one way or the other, but it seems at least the ones we've looked at are all formatted. And if you take an unformatted one and format it, you obviously get a different digest. So one of the things we've been discussing is maybe we should call out that the manifest should be persisted. The argument there is that whatever the manifest is initially created as, it should be maintained so the digest doesn't change, which means you can't generate a digest on the fly if you don't have an agreement on whether it's formatted or non-formatted. So we're probably going to split it into two things. There's probably a separate conversation to have as part of the OCI spec, which I guess is the image spec in this case: should there be a clarification on formatting or non-formatting?
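The digest mismatch being discussed is easy to demonstrate: the same logical manifest serialized compactly versus pretty-printed produces different bytes, and therefore different digests, which is why persisting the original manifest bytes matters. A quick illustration (the manifest content here is a made-up minimal example):

```python
import hashlib
import json

manifest = {"schemaVersion": 2,
            "mediaType": "application/vnd.oci.image.manifest.v1+json"}

# Same logical content, two different serializations.
compact = json.dumps(manifest, separators=(",", ":"), sort_keys=True).encode()
pretty  = json.dumps(manifest, indent=3, sort_keys=True).encode()

digest_compact = "sha256:" + hashlib.sha256(compact).hexdigest()
digest_pretty  = "sha256:" + hashlib.sha256(pretty).hexdigest()

# Different bytes mean different digests, so a manifest regenerated on the
# fly will not, in general, match the digest it was originally pushed under.
assert digest_compact != digest_pretty
```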
But regardless, when we talk about Notary and other artifact types: on creation, if you want to sign it, then you need a manifest, because part of the signature will be related to the digest of that manifest, and you should keep the manifest around. So if you're pushing it to another registry, you have it; you don't have to generate it on the fly and hope two things generate the same way. That was one of the interesting conversations we had. I see Derek, and I thought I saw Cormac here; both were on that conversation we had, I think in the Notary channel. Derek, now that we actually have a chance to talk in person, do you want to add to that, or do others want to add to that conversation? The one we had in Slack? Yeah. I don't think I had anything else to add; is there anything you want clarification on? Yeah, the question is just a matter of... I think we all agree that we should keep the manifest around. So if I'm creating something and I'm signing it before I even push it to the registry, that implies I have a manifest, and this is part of the stuff the CNAB folks were talking about as well, because they wanted to support USB-stick scenarios. That part makes perfect sense. I think the question is just a matter of conversation around whether the spec should specify that a manifest should be one way or the other, and I'm not necessarily arguing which way it should be. I mean, yeah, I have a few thoughts there in terms of what we should do to clarify the manifest, like we talked about moving it out of the image spec into the artifact spec. Because really, I want the manifest to be seen as the artifact for distribution. If you want to distribute something, you need a manifest, and distribution could mean going to a registry, could mean putting it on a USB stick; anything around moving something should use a manifest.
And that's what should be signed; if you get rid of it and recreate it, then it's new content. I got you. If you take the approach that the image spec is really about the runtime image, it just so happens that the manifest that defines an image lives in the image spec. The fact that you're distributing the thing, having that manifest be the definition, and its formatting, isn't really specific to the runtime. I get that. Yeah, that's kind of been an ongoing confusion for the last year, I feel like; we do need to address it at some point. I was looking at the long list of issues we have in both the distribution and image specs. Yeah, there are a lot of things we need to get to and clean up, but cool. All right, anybody else have any opinions or thoughts on that? For those that prefer reading, it was in the Notary Slack, wasn't it, Derek, that we were discussing this? Yeah. So that was on July 14th, for those that are interested; that's the context of what we're talking about. Anyway, we're at the half hour. Rather than taking the whole hour, does anybody have anything else they want to discuss? I'll give everybody a half hour of their life back. Thanks for driving the discussions on key signing and for setting up the recurring meeting. For those interested, I think you work with Amy to get it on the calendar, so it's on the CNCF calendar; by all means join in there, and we'll catch up next week. Thank you. Thanks, folks.