Get all the gadgets in place from the weekend. Good day, folks. Okay. There we go. So, always interesting, as everything is on a Monday morning. So thank you, Justin — I just saw your PR feedback. That was awesome, thank you. I'm hesitant to paste the link into Zoom until we get a couple more people, because I think we've said that when you join, you don't see the previous posts in chat. Is that true? Yeah, I think it is. All right, well, I'll paste it twice then so people can jump in. While we're waiting, if folks can sign yourselves in, that'd be great. And we'll give folks another minute or two. We'll see how time goes, but the PR with feedback on the JWT format just went up, which I've been digesting myself this morning. One of the interesting things that'll come up in the distribution API proposal, because it actually has an influence on it, especially after reading the Quay folks' post-mortem. It was an interesting conversation — I think on every request they were doing an untar, a decompression; you could call it a de-tar. Let me actually just ping him on Zoom to see if he's going to be able to make it. Okay. So why don't we start with the first one — let me manage my windows here. Okay, he says he's running late, so we'll get started.

All right, so the first one is really just some cleanup. Well, it's two things, obviously. It's cleanup, moving things to the root README, because as more people try to get up to speed on what we're doing with Notary, or track what we're doing with Notary V2, I realized we'd started slipping into the tribal-knowledge thing — a bunch of stuff that's assumed but not actually clearly documented outside of our notes. So what you'll see is there are actually two changes here.
Let me see — the first one was: we had this format that was kind of nice from the PowerPoint that Justin and I used for the KubeCon thing, which is, I think, next week or two weeks, not sure. Is it next week or two weeks? I think it is next week, yes. Okay, I was confused. I thought it was two weeks and then someone said next week — oh, here it is, actually, here it is. We do have to be online. So it'd be good to get feedback on it. Anyway, there was this one, which was an interesting hierarchical format that kind of grouped things, but I struggled a little bit with it, so I tried a more bulleted approach. And the reason I kind of pushed on this one is, you'll see in the other PR, when I was trying to talk about the various options for distribution APIs, I was trying to refer to requirements as to why we took a certain approach. And I realized we just didn't have these written down in a way that really made sense. And Ralph also committed — before he left on vacation for most of the month, which is great, he should do that, don't get me wrong — that we were gonna talk about the phases in which we're doing these. So I left these here as a starting point. So by all means, give feedback as wanted and so forth. Any comments here before I go to the next one? So this is two things: one is just a move of what was in the scenarios document up to the root README, and then it's a specific call-out, without going into deep, deep details, for each one of these. (There's a phone clicking somewhere.) So with that, the next item — let's see, how do we do this — the next item on the list was the requirements. We separated out scenarios from requirements, and honestly, I'm struggling with this one a little bit. I like that the high-level requirements call out specifics, but again, we wanted to defer details to things like the key management scenarios doc, which actually covers that part as well.
The two things — well, I kind of thought two, but there's three, because I realized it when we started talking about ephemeral clients. So one, we wanted to make sure that we didn't embed key management itself, the underlying keys, into the registry. There's a little bit of a conversation that's been starting to tee up on: can you get the key through the registry APIs, even if that isn't necessarily the place they're stored? Because companies are going to be comfortable with their key management solution, whether it's Key Vault in Azure, or AWS's KMS, or HashiCorp Vault if they're on-prem, or something like that. So all of those are... (Someone's phone again — mute, please. Sorry, thank you.)

So I don't know what to make of the way this is written, because when I think about something like a requirement, a requirement is more actionable. And here, I feel like this is more saying we're going to punt this part of the problem space — well, punt but interface with it. And from a building-a-system standpoint, I think that totally makes sense, but from a security standpoint, you all of a sudden run into all the corner cases and limitations of whatever different key management system you have, along with the difficulties of making it integrate and play with what you're designing. And I think the fact you didn't use the words "has to securely do any of this" is somewhat telling, because doing this securely isn't... this isn't just a box that you plug in and everything sort of works. So I would argue that talking about this with respect to the security outcomes you want is really key, because people aren't using these boxes just to use these boxes; they're using them because they want a security outcome from them.

I'm trying to... because there's two parts. One, remember, we're not writing the spec yet.
This is the iterative phase, to make sure we're pulling out the right pieces. What we're saying here is: the clouds and companies have key management solutions — pure key management solutions — they're already using. And those I would posit are secure, because being a key manager is their purpose. In the end-to-end workflow, all we're saying is we want to integrate with whatever the actual key management solution is. The discovery and how those keys are fetched and retrieved is something that I'm, again, deferring to the key management group. All I'm saying here is — and maybe Cormac, you could help me here, because I'm trying to relate it to the way Docker distribution, or Docker Notary V1, was done — we want to make sure it's not embedded into the registry, or even the Notary service, because there was no good integration with the key vault that a company is trying to manage its other keys with.

I think there's two requirements being joined into one here. One is the support for the vendor cloud key management solutions, which is the second sentence here, which says Notary V2 must provide extensibility to enable key management by any one of many solutions. And the second requirement is that registries should not be required to play a role in signature verification, or any part of that process, which is what allows us to move containers across multiple registries, right? I think those are two separate requirements, and if we separate those two, then at least we have those goals to shoot for.

These things just seem to me a bit weird, because — why are these not in goals or scenarios? Why are these called out separately? Yeah, it's fair. I was struggling with a couple of things that I felt were core requirements that the scenarios support. And again, this is why it's a PR, for discussion. But really, until I did this, we didn't actually have these things specifically called out anywhere.
Mutable tags are reflected in the scenarios, but I was trying to call them out as a more deliberate thing, including the key management solution. So the scenarios support the requirements and the requirements support the scenarios. I'm not suggesting this is the final way to represent them; it was just getting something written down to have exactly this conversation, and I'm up for any proposal on alternatives.

I mean, I think it's a kind of weirdly ad hoc list of things that I don't think fit in one document. I think maybe you should open them individually as PRs against the places where they make most sense, if you think they're missing. — Like in the scenarios, you feel, or in the key management scenarios? — Well, I think they might go in different places for different things. I just found it an odd set of things to have a separate doc for, especially all added at once as a list, because they're a set of things that are not particularly related to each other, and they're not complete. So it just makes a really weird doc on its own. — Yeah, so as it says, it's a set of requirements. — It's neither a complete set nor a cohesive one, so it's just weird to me as a separate document. — So when we write other design approaches — like, for instance, when we get to the distribution APIs — I was trying to refer to something that was not necessarily just a scenario, but kind of a core requirement that we've said was a difference between V1 and V2. So I can certainly merge this into the scenarios doc and just say, you know, "in support of scenario 1.2" or whatever we call it. I guess I thought some of these were just more core one-liners. — And I guess that's the feedback: it really isn't that simple. Because I think you want a security outcome for these, and you want actions that people can take, and so on.
I think some of this is covered and some of this isn't, and some of it isn't really clear. Some of it you're describing like a feature, rather than a way in which somebody uses something — a workflow scenario or specification. And I think that's also part of what I'm trying to push back on here. — I'm not sure I follow the workflow part. — So in general, the way I think the scenarios and things are set up is that you have somebody whose goal it is to transmit a container securely, despite the fact that the registry might have been compromised, and have another party be able to verify it. And that other party may have no state initially. So you have a party that has no state, or very old stale state, and has to be able to pull images through a repository that may have been compromised. And so that's a scenario — basically describing an outcome you want to have. But if you look at number three, for instance, the second sentence: "on the initial pull, when the client is discovered to need initialization, a secured process is initiated." I don't really know what that means; I think it's very vague. And I'm not trying to say that it's wrong, but if someone came along with a way of doing this that didn't have a secure process on the initial pull, it isn't necessarily the case that that solution is wrong, or doesn't work, or doesn't meet the actual need of the system. The goal is to have that transmission happen securely despite a potentially hacked registry — and whether that happened on the initial pull, or some other way, is, I think, irrelevant to the way in which you would judge it as a security solution.
I mean, part of it is they are intentionally vague in some ways, because I wanted to stir the conversation — get the experts in each area to come up with, you know, what the spec should say, so to speak. Like, for instance, the ephemeral clients came out of this conversation around the verification concepts of the update semantics we've been talking about: we're saying that these clients are ephemeral, so how do we support that? It's not implying... — That is a security requirement: this must be secure for ephemeral clients as well as non-ephemeral clients. — Right. So would you say we should put it into the scenarios then? — Yeah. One of the scenarios we talked about: hey, a client is spun up, it's a brand-new client... — Yeah, exactly. — Okay. — And it can have state that's old, because you're likely to have to tell it something for initialization, but it's not expected to be up to date. It's not expected to have information that's, let's say, less than something like a month or a year old — somewhere in that range. — That totally makes sense. But what I'm pushing back on is that you want to be careful not to add design into requirements or scenarios. — So I think the question would be which ones did you want to cover — and maybe that's what you're saying; I just saw you light up. Which ones do you want to cover as part of the key management scenarios, as opposed to the other scenarios, however we name them? — I think there are some of these, like revocation, where key management comes into it. But I would actually also say that rather than calling these requirements, we should call them goals, because we did agree on some changes that we wanted to make and address as part of Notary V2, right? Like enabling key management across different solutions, like HSMs and cloud providers.
We took that as a goal when we started this project. So there were a set of goals that we agreed Notary V2 would address, and I think documenting them makes sense. I'd agree that by saying "must provide extensibility" we may be going a little too strong on the language here, but at least for the goals and objectives we agreed to address, I think documenting them is worthwhile. Whenever I've been working through some of the scenarios and use cases for key management, we have had to come back and say, these are the things that we want to address, right? So I don't think there's harm in covering them as a goal, as long as we understand what the goal is and why we're headed for it.

So what I'm hearing is: goals. So we should iterate on these, and you'll just provide feedback on whether we've captured them in a distilled format. Then we have the scenarios that we're going to have to look at. Somehow I — all right, I'll have to check this; this is supposed to have the scenarios listed here. Which are basically here. But the question that I have is — let's take these as teasers for making sure they're in the goals or scenarios. What we're trying to figure out is: we said we wanted to have a separate set of things called out for key management. Should those fold into a combination of the goals and the scenarios, or should there be a separate key management scenarios doc? Or are we just doing a separate key management scenario doc for the sake of PR feedback, and we'll merge them in here? — I think that's a question of organization. If we want to eventually merge them in, I think it makes sense. Right now, I think keeping the doc separate makes it easier to iterate on those, and makes for cleaner pull requests. And I can't see why we can't link one to the other as well. — Right.
If we just call out, in the overall scenarios doc, where the key management scenario documents are, at least we're not losing track, and a new reader at least knows to go read that additional scenario doc. — Right. Okay. So what I'm hearing is — well, sorry, I should let others add feedback. — Yeah. I mean, I think conceptually they're part of the same doc, but it's fine if we want to iterate on them separately as well. — So, effectively: we have goals, spelled properly, in the scenarios, and temporarily key management as a separate doc, to be merged with the scenarios as they complete their PR. No separate requirements doc. — I think that works. — All right. Good. Okay. So I didn't really hear that we don't want these at all — and it was intentional to ask "should we put this in the requirements?" — but let's make sure we get these put in the right place, because some of them weren't captured in detail, and I wanted to make sure we brought those in.

The next — what was this? Oh, this is the distribution thing. Was that the next one? Actually, let me do the JWT one — no, I'm going to do it in the order I put it in the list. Okay. So the next one on the list, which we talked about a couple of weeks ago: now that we're getting some structure around what a signature object could look like — and we're iterating, including the JWT format as opposed to a JSON format, which we'll come to in a moment — we've been wanting to write down how we're thinking about persisting it into a registry. And then once it's persisted in a registry, how does it get in and how does it get out — the push and the pull? And what does the discovery API for a collection of signatures look like? It's teased out a couple of interesting conversation points that I wanted to surface.
So as you see, what I was trying to do here was highlight some of the requirements. I pulled them out, and, you know, we'll fix this to link them back to a specific scenario and so forth. But basically: the original digest and the collection of associated tags shouldn't change. This way, a customer or user that has deployed something, or is referencing something as part of their deployment scripts, doesn't have to change anything just to be able to get signature verification. We want the signature verification to verify the thing that I'm referencing.

Multiple signatures per artifact — we did push a better picture of this. Where did I push that? Distribution... I'm not even in the right one. Requirements. So in the scenarios, this now has the updated picture that walks through it; I updated the text to refer to this as well. So: Wabbit Networks builds this net-monitor image in their registry — it doesn't matter where they have it; they've got a registry, and whether it's public or not is irrelevant. They push it to something like Docker Hub, where the Wabbit Networks signature is available. Somebody may or may not trust it yet, but Docker can say this is certified content, and they trust Docker — so now there's a second key on it. This bullet right here says the tag and digest can't change, because we're just adding another signature. Acme Rockets can put their own signature on it as they verify this content works for their environment. And then their policy management says: I only trust things that the Acme Rockets team has certified. And it's off and deployed, and life is good. And this has been updated with the text to refer to that.
We've been talking about this for a while; I just hadn't gone back and updated the picture and the text to reflect the feedback that we had previously. So where was I? So that's the multiple signatures that we've been talking about.

Native persistence — this is the... I don't know what the star is doing there; we'll fix that. We don't want to have a sidecar, another service sitting alongside the registry, because things get out of sync very quickly depending on where you pull the tag from. We want to make sure that's included. (And we just popped this out — oh, okay. Yeah, these are recorded, so that helps people in different time zones and on different schedules.) So that's the native distribution, native persistence. We want to support moving, right? We talked about multi-tenant registries — making sure that sites like Docker Hub, which has multiple organizations, or the AWS and Azure versions, where there's a different domain for each registry but different groups within it, are supported. There are lots of ways people hash out how they do multi-tenancy; we want to make sure that we can support the aggregation of that content. Private registries — content moves from public to private, as the picture showed. And then of course within those private registries new content is created. And of course the air-gapped environments.

So to support this, we've been discussing — and we haven't really had a chance to review it in detail — that we want to use the OCI Artifacts approach, because it allows us to leverage all of the infrastructure that a registry already supports: everything from garbage collection to object listing, tag and repo management, delete management, and so forth. So we're trying not to have yet another end-to-end model for dealing with something. It is a special type of artifact, but we're hoping we can make it just one of those.
Today in OCI Artifacts, we deal with manifests. We say that based on the media type, we can determine that it is a particular artifact type — so in this case it's CNCF Notary V2. Let's see if I have an example of that in here. I think I do, down below. All right, not in this one. But basically, use the artifact — oh, here we go: the Notary V2 config. And we'll see how this will switch over to a JWT in the other PR. And then inside of it, it references the digest — but this is in the config object. The question is, should we be parsing — well, sorry. This is what would work today, because we can stick the signature in the config object, and the config, and the artifact itself, can be defined as a signature.

The pros and cons I've tried to call out on this. The pro is that it works with artifacts today, and we can use the ORAS client, whether as a library or not. The problem — sorry, I'm trying to read my own notes; I guess I finished this Friday night, and I was debating presenting this, and I realized I don't have it all done here — the problem with this approach is you actually have to parse the config to figure out the linkage. Oh, I know — there's a separate conversation on how we do linkage; that's why I left that out. So at this point, there's a reference.

The other approach is an index. An index already has the ability to reference other manifests, including manifest lists, and registries are the ones actively adding this. In fact, we'll have a conversation on Wednesday or so around the use of index.
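As a rough sketch of this "works today" approach — the media type names and sizes here are hypothetical placeholders, not finalized Notary V2 values — the signature would be pushed as an ordinary OCI manifest whose config descriptor declares the artifact type:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.cncf.notary.config.v2+json",
    "digest": "sha256:<config-blob-digest>",
    "size": 1024
  },
  "layers": []
}
```

The config blob referenced there would carry the signature itself plus a full descriptor (media type, digest, size) of the artifact being signed — which is exactly why, with this approach, the registry has to download and parse the config blob just to discover the linkage.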
But the design is there: an index already knows how to reference other manifests. And that was the original proposal that I had. The problem is that it also requires the index to support a config object — which we said we wanted to do as part of the artifacts conversation, but we haven't actually gotten to it yet. So this, and CNAB, and I think there's one other, might be the impetus to say maybe it's time to go do that — but that would be yet another change we'd have to make. And then there's just how to link them; this is where I separated it out.

So in the current — whatever "current" means — JSON version of the signature object, there's a digest that's already in here. In fact, we've made this a full descriptor by putting the media type in here as well, describing the thing whose digest you're signing. This is separate information for verification, so that we can link it. But again, you'd have to parse the config object to pull this out. Is that the right approach?

So the idea is — where am I with this? Oh, so then what we're trying to figure out is how we should do this. Should we be parsing the JSON object — sorry, the config object — to do that? Or is there a separate API that we use for linking? We can still upload the signature using the standard APIs for uploading to a registry, and then there's a separate API for signatures, where you basically add a signature object: all it says is, take this digest and link it to this one. And if you don't call this API, then the artifact is up there, but it's not linked. One of the pros with this is we can actually put RBAC on a specific API. If, instead, we do it as part of the upload and parse the config object, then you're saying: hey, somebody might have push rights to the registry, but because they're pushing a signature, they also need signer rights.
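A minimal sketch of that separate linking call — the endpoint name and payload shape are invented here for illustration, not proposal text — might be a single request after the standard manifest push, say PUT /v2/&lt;name&gt;/_notary/links (hypothetical), with a body that just pairs the two digests:

```json
{
  "artifact": "sha256:<signed-manifest-digest>",
  "signature": "sha256:<signature-manifest-digest>"
}
```

Until this call succeeds, the signature manifest sits unlinked in the repository; and because linking is its own endpoint, a registry can require a "signer" role on it without changing the authorization of the generic push path.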
And that felt a little weird — sticking it into that API, and in the middle of that, all of a sudden, the push is successful but then fails because of the type of object you're pushing. With the linkage approach, you can actually put the rights check, the role check, on the linking API. The downside, of course, is there's a separate upload and then there's the linkage API. The way distribution works is a bunch of these separate calls anyway, but it's worth calling out.

The other option is a unique signature upload API. Since the signatures have no layers at this point — they're literally just a zero-layer manifest with a config object — maybe this is another approach, because here we can do the role check on the API, and it's a little cleaner. The downside is we are treating it on the front end a little differently than other objects that get uploaded. You could argue it's a special artifact. It's still better than having a separate Notary server — that's part of the conversation. Then we get to signature discovery, and we go into another set of conversations. So I'm going to pause there and ask for a few minutes. And again, I apologize — I meant to get this done last Wednesday, so people could read it first. I don't expect all the feedback here; we'll just start the conversation.

I don't see any benefits to using OCI artifacts here. All we really want is a mime type and a blob. Either way, the registry must have special support for creating links and listing links. Either way, every client must have special support for downloading or uploading signatures together with the image.
So all we end up with, if we are using the OCI artifact approach, is forcing the protocol to make two HTTP requests per signature. If we introduce a specialized API, this can be more efficient, and it doesn't limit either the clients or the servers from using any storage format they want. If some registry wanted to internally represent signatures as specially linked OCI artifacts, it could.

Yeah, it's interesting. The distribution spec doesn't completely drill into the details of how we all store these things as blobs and so forth. There is some discovery — and not just discovery, there are pull APIs that we're also trying to be consistent with. So once you have a manifest of a signature, a standard artifact pull would just work; but we have been debating the upload one. And we're also trying to make sure that, while each registry has slight differences in how it persists objects, there is a mechanism there, and we don't make major changes to how an operator would have to store them. But to your point, if somebody wants to go off and store them in a different way, because they can retrieve them differently, there's nothing that stops that. So I think what I'm hearing from you is that a separate API is your preference. — Yeah, an API that ideally allows getting all of the signatures in a single HTTP request, or something like that. — We're deferring the retrieval part; I wanted to get the push part first. I'll get into signature discovery in a bit. — Yeah, and for push, a single request to add a single signature, or something like that, probably. Either way, this is going to run into some difficulties against the original design of the Docker distribution server, I think, because it was designed to be content-addressable and fairly stateless.
So the idea of the registry maintaining and synchronizing a list of signatures that are relevant to a single image is not something that comes very easily. — It's what the index API does, though. An index does keep track, by design, of the manifests it references. — Yeah, but the thing is that you upload the index as a single unit, and it either is there or isn't there. And if you change the index, you just upload a new one. Here, there would be a stateful object that contains the list of all signatures that are relevant for an image. — Yeah, I think we're saying something different, and maybe I wasn't clear about it. — Maybe I just shouldn't speak about the implementation design of the registry, because I don't really know what I'm talking about. — No, no, it's the index part you're referring to, and I think that's maybe where I wasn't clear. When you push a signature, the idea is that instead of using a manifest — which would need this new thing in the config object to know the linkage, or the separate API to do the linkage — we use an index, which already has a means for tracking linkage between a list and other manifests; and a list can technically point to another list, which is totally valid. So registries — let me rephrase that — registries that support index, which is a growing set (not all registries did; I was surprised, actually)... we've been starting to see more and more registries support index now. And the idea is that we are trying not to introduce yet another way to do tracking. So if you support index, you already have to support the idea that an index references other manifests. The manifests have to get uploaded first, and then an index gets uploaded that says: here are the things that I point to. And on delete — you should not be able to delete a manifest that is referenced by an index.
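For reference, this is roughly what an OCI image index looks like today — a flat list of descriptors pointing at other manifests (digests and sizes are placeholders here):

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.index.v1+json",
  "manifests": [
    {
      "mediaType": "application/vnd.oci.image.manifest.v1+json",
      "digest": "sha256:<net-monitor-manifest-digest>",
      "size": 1201
    }
  ]
}
```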
If you delete an index, that doesn't necessarily have to delete the manifests, right? There's a ref-checking, ref-counting system — trying not to say the word "game," because it's kind of like this whack-a-mole game — but anyway, there's a ref-checking, ref-counting implementation that should already be in place. I'm trying to leverage that, not create another one. But there is no single index that has all of the signatures, because that would make it updatable, and we're trying, to your original point, to stay content-addressable and immutable. Every signature is a new object that's uploaded. We don't expect to have hundreds of signatures per specific digest — even the one or two that we talked about. But the idea is that I keep adding more signatures and nothing else has to change, so I don't have to worry about any update semantics. Did that answer your question? — Not directly, but I just realized that I really don't understand the storage model of the registry, and I'm just going to shut up. — Please don't. No, you asked some great questions. The underpinnings are the sausage factory we're trying to avoid, to your point; but the interaction is the part I'm trying to make sure gets handled. Cormac, you were starting to say something? — Um, give me a minute.

Okay, let me paste this into our chat session. So this picture shows two separate signatures that got uploaded. One was signed by Wabbit Networks and one was signed by Acme Rockets. The original signature for the net-monitor image came from Wabbit Networks, and then when it got moved — I didn't show the third one, for the Docker registry, just because it was getting to be too much; I was trying to show the beginning and end — here, Acme Rockets said: hey, I also attest to this content for my environment. But they're completely separate objects, because this one got copied from the original registry, through Docker Hub, to the Wabbit Networks registry — sorry, the Acme Rockets registry.
And then here, a new signature got added to support it for that specific environment. So we want them to be separate objects altogether, whether it's a manifest or an index. Cormac, should I go on to the discovery APIs and just give people a chance to read and give feedback throughout the week? Why wouldn't you use a manifest? Sorry, that's my question. Why wouldn't the signature itself be a manifest? So, usually manifests reference layers. I don't know of a place where a manifest ever references another manifest. Sorry, yeah. Okay, why wouldn't it be a manifest list, then? Agreed — so my original proposal was to use a manifest list, and I realize we call it index now; we could talk about that. Sorry, the terminology is really confusing. I know, and it's also called an image index, which adds to the confusion because it's not just images anymore. So I'm taking liberties in how I refer to it, to try to keep as neutral as possible. Anyway, the reason it can't be an index today — and to be fair, none of these things can be done today — is that there's no config object on an index yet. So we don't have a place to stick the signature. In a manifest, we can stick the signature in the config object, but a manifest has no way to reference another manifest. An index has a way to reference another manifest, but it doesn't have a config object yet. And that's a problem for two reasons. It's a problem for Notary because we want to be able to store the signature in a config object. And it's also a problem because an index has no way to say it is something other than a collection of arbitrary manifests. There's no way to say that this index is a CNAB, or that this index is a signature object.
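The "separate objects" point can be shown in a few lines: two signatures over the same subject digest are independent, content-addressable blobs, so adding one never mutates the other. The field names and the placeholder subject digest below are illustrative only:

```python
import hashlib
import json

def oci_digest(blob: bytes) -> str:
    """Content-addressable identity: the digest of the bytes themselves."""
    return "sha256:" + hashlib.sha256(blob).hexdigest()

# Placeholder digest standing in for the net-monitor image manifest.
subject = "sha256:0000000000000000000000000000000000000000000000000000000000000000"

# Two independent signature objects over the same subject (illustrative shape).
wabbit_sig = json.dumps({"signed": subject, "signer": "wabbit-networks.io"}).encode()
acme_sig = json.dumps({"signed": subject, "signer": "acme-rockets.io"}).encode()

# Each signature gets its own digest. Copying the image to another registry
# can carry along any subset of these objects, and adding Acme's signature
# changes nothing about Wabbit's.
assert oci_digest(wabbit_sig) != oci_digest(acme_sig)
```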
So if we can add config to the index format, which we've said we wanted to do for a while, then we unlock the ability to get both persistence and linkage through standard mechanisms that registries should be supporting today. Well, to the extent that they support index, they would support it today. Yeah, I think that would make more sense to me, because there are times when you want to refer to the signed object by its hash, as well as cases where you don't, and that's much easier if it's a normal object. You agree? What do you mean by a normal object? Just so I... I mean just a kind of normal registry object that you can, that you potentially... An index is already a normal object that has the concept of linking other things. Yeah. Yeah. So it's just a matter of us... Look, we're going to have to add something. We're not going to get Notary V2 for free. We know we don't want it to be as big as a Notary server, but we recognize there'll be some APIs that have to change. Being a registry operator, we're trying to make sure that we don't have to redo our whole storage and all the infrastructure there. And we feel the same pain for other registry operators, especially because we've all done optimizations around our own clouds. So we want to minimize that pain. If you remember, originally we said: look, adding another API isn't the end of the world, because we're coders, we write APIs all the time — but changing persistence is a major change. So the thought process is to rev the index, because we have multiple reasons to do it, and then all the rest of the infrastructure falls into place. So I'll leave that for some digestion. Obviously reading, staring, giving feedback is the conversation we want to have, but there's a lot to digest here. Let me go on to the discovery APIs, because this one also had some interesting conversations come out. I don't think I've done a really good job writing them up here yet.
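A rough sketch of what "rev the index" could look like. Today's OCI image index has no `config` field; the hypothetical revved version below adds one, so a client can classify an index without parsing any payloads. Every mediaType string and field here is an assumption for illustration, not a proposed spec value:

```python
# Hypothetical revved index: same shape as today's image index, plus a
# config descriptor (all type strings below are made up for illustration).
revved_index = {
    "schemaVersion": 2,
    "mediaType": "application/vnd.oci.image.index.v2+json",  # hypothetical rev
    "config": {
        # The config's mediaType declares what kind of thing this index is:
        # a signature collection, a CNAB, etc.
        "mediaType": "application/vnd.cncf.notary.config.v2+json",  # hypothetical
        "digest": "sha256:" + "0" * 64,  # placeholder; would point at the payload
        "size": 0,
    },
    "manifests": [],  # descriptors for the manifests this index references
}

def index_kind(index: dict) -> str:
    """With a config, clients and registries can classify an index cheaply."""
    return index.get("config", {}).get("mediaType", "generic-index")
```

A plain index with no config would classify as `"generic-index"` here, which mirrors the problem in the text: without config, an index is just an anonymous collection of manifests.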
So I'm going to discuss it and make sure the conversations come out, and then I need some feedback on how I write it up, honestly. For the discovery API — notice I'm skipping the pull API for now. The pull API is assumed to just be another artifact pull, but we'll come back to that. The discovery one is the interesting one, because we want to be able to say: I want a list of signatures, of a specific artifact type, for a given digest. It doesn't matter whether it's an OCI image, an OCI image index, an SBoM, a Helm chart, a Marky Mark document — it doesn't matter, right? Anything in a registry should be able to be signed. Artifacts that don't have a signature model can adopt this and it will just work for them. And for ones that have special models, that's great, they can persist those as well. The first question originally was: a registry is this thing where you can use a tag to get a digest, but once you have a digest, it's content-addressable storage. Using that digest, I want a list of signatures. That list is a collection. We don't expect it to be thousands, but of course we don't know, and I don't want to say the list should never be more than five, or 124, or 128, or some arbitrary number. So what we're saying is we want a paged API, as all REST list APIs should be. The interesting question we have is this. In the distribution spec, there's a way of getting the next page given the last item you got — there's a paging API in the distribution spec. But it is not consistent with what somebody would call the standard paging APIs that people put on a REST API.
So my question is, I don't want to reference just Google's API design docs, because Google didn't invent the way to do REST APIs; they just have a good doc describing it. There are other references that also speak to it. I don't know how to refer to the non-distribution-spec way of getting standard paging through a REST API, so I need some help there. But more importantly, we need to make a decision: do we want to be consistent with the rest of the distribution spec, meaning tag listing? Because we deprecated catalog as a spec-driven API, is tag listing the only thing that's paged, or is the list of layers? No, the list of layers is in a manifest. So actually, if I think about it, the tag listing API is the only one that actually has paging in it. So do we stay consistent with it, or do we stay consistent with what people consider to be the standard way of doing REST API paging? Sorry, what do you think the standard way is? I don't know how to describe it. In fact, this is where it got to be dinner time on Friday, and apparently this is where I left off. Look, I haven't written these in a while — it's been a couple of years since I've written production REST APIs — so there's apparently a different way of doing it, and I need to find the doc we referred to. It was actually in the PR feedback. I think it was Sam who provided the PR feedback on what was originally the signature PR. Let me see if I can find it, because it's the right conversation to have. It was in this one, because this original prototype had the distribution docs included as well, and that was somewhat of a mistake. Well, we evolved and decided we wanted to make it later. Let me just find it here. Here it is. Excuse me. All right, I was keeping my notes here. So again, maybe we don't have to have the full conversation here, but I definitely wanted to tease this conversation out, because I haven't finished writing it up very clearly.
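The two paging styles under discussion can be contrasted in a few lines. The first mirrors the OCI distribution spec's tag-listing parameters (`n` and `last`); the second is the common token-based style many REST APIs use, where the server hands back an opaque next-page token. The parameter names in the second function are illustrative of that style, not any particular spec:

```python
from typing import Optional

def distribution_style(base: str, n: int, last: Optional[str] = None) -> str:
    """Distribution-spec style: ?n=<max results>&last=<last item seen>."""
    url = f"{base}?n={n}"
    if last:
        url += f"&last={last}"
    return url

def token_style(base: str, page_size: int, page_token: Optional[str] = None) -> str:
    """Token-based style: the server returns an opaque next-page token,
    and the client echoes it back to continue the listing."""
    url = f"{base}?page_size={page_size}"
    if page_token:
        url += f"&page_token={page_token}"
    return url

# First page, then continue from where the previous page ended.
first = distribution_style("/v2/net-monitor/tags/list", 50)
second = distribution_style("/v2/net-monitor/tags/list", 50, last="v1.0")
```

The trade-off in the text is exactly this: `last` leaks the client's position as a plain item name, while an opaque token lets the server change its paging implementation without breaking clients.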
But this was: what should the paging API be? I think we agree that there should be a paging API. We agree that the signature collection API also needs to be able to say: I only want the signatures from Wabbit Networks, I only want the signatures from Docker Hub, I only want the Acme Rockets signatures — and I want to filter out the rest. So there's that part of it. But regardless of what parameters we put on it, what should the standard paging API be? That's the question. Yeah, let's not discuss that here; the PR is probably the place to get an answer to that question. So that's the place to look for some feedback. And then the last one — and again, this is where I was editing some stuff. Overall there were some great details here, which I'm still adjusting, because this is where it goes into the sausage factory. But the signature pull should basically be like any other artifact pull. Once you know the digest, through the signature discovery API, you pull it just like you do for any other artifact, right? You get a manifest and you say: hey, I know what I want now, now I'm going to go request it. The idea here is: I figure out what the digest of the signature object is, and then just do a standard pull. Because we don't think that pulling a signature, a verification object of an artifact, should be any different. I can't imagine registries would want to put any stricter access rights on a pull of a verification object than they do on the artifact itself. In fact, I could see lots of holes where, hey, I can pull the artifact but I can't pull the signature to verify that it's the right artifact — that feels wrong. So we're trying to say, and obviously I have a little bit of this written here, that this should align with any other artifact pull. So that tees up the content that I had for the conversation today.
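The discover-then-pull flow can be sketched with a toy in-memory registry. A real registry would serve this over the HTTP API; the storage layout, the `signer` filter parameter, and all digests here are assumptions for illustration only:

```python
from typing import Optional

# Toy in-memory "registry": a map of subject digest -> signature descriptors,
# plus a content-addressable blob store (all values are placeholders).
registry = {
    "signatures": {
        "sha256:img": [
            {"digest": "sha256:sig1", "signer": "wabbit-networks.io"},
            {"digest": "sha256:sig2", "signer": "acme-rockets.io"},
        ],
    },
    "blobs": {
        "sha256:sig1": b"<wabbit signature payload>",
        "sha256:sig2": b"<acme signature payload>",
    },
}

def discover(subject: str, signer: Optional[str] = None):
    """Discovery API: list signature descriptors for a subject digest,
    optionally filtered to a single signer (a paged list in practice)."""
    sigs = registry["signatures"].get(subject, [])
    return [s for s in sigs if signer is None or s["signer"] == signer]

def pull(digest: str) -> bytes:
    """Pull is a standard content-addressable fetch, same as any artifact."""
    return registry["blobs"][digest]

# Filter to just Acme's signature, then pull it like any other artifact.
acme = discover("sha256:img", signer="acme-rockets.io")
payload = pull(acme[0]["digest"])
```

The key property from the text is that `pull` has no signature-specific logic at all: once discovery hands back a digest, the fetch is indistinguishable from pulling the artifact itself.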
The only other one, which I just got this morning and am going to queue up as well when I put it in the Slack conversation, is that we took the feedback on doing a JWT format for the signature. Shiwei actually hasn't pushed the PR all the way up yet; it's on his Git repo, and we'll push this up to the Notary V2 project as well. The idea here — and I was just digesting it myself this morning — is that the signature is no longer a plain JSON object, which we have had all these conversations around: how do you encode it, whitespace or not, and so forth. The interesting challenge, though, once it becomes this encoded string — per the conversation Justin and I were just having — is that we don't want to have to parse the config object to figure out the linkage. And then, to Mila's question, we don't want to have two separate APIs. So to summarize what I'm hearing: we like the index object, even with the additions we have to make to it, because it gives us standard tracking for linkage; the upload API should be a separate API, not the normal push API, so that we can put role management on it; but what gets parsed would be a standard index object. And then we don't have to read into the signature itself to understand which digest it covers, because you don't want to have to parse this encoded object — that's a performance hit we don't want to take on every upload. The role management piece potentially needs to be brought into the scenarios, because having a separate API because you want separate role management is not, in the end, a design that scales for any kind of object you want in OCI. So I think it's a more general question worth asking of OCI: how would you do role management for different types of things? I mean, you could imagine a registry in which you want to give people different roles to upload Helm charts versus container images, but doing that by having a separate API doesn't scale.
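The parsing cost being discussed is easy to see. A JWT-style signature is a three-part base64url string, so finding which digest it covers means decoding and JSON-parsing the middle segment — exactly the work the upload path would rather skip by keeping the digest in the plain-JSON index. The claim name `subject` below is illustrative, not the actual format:

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64url-encode with padding stripped, JWS compact-serialization style."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# Build a toy JWT-shaped token: header.payload.signature (signature is fake).
header = b64url(json.dumps({"alg": "RS256", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"subject": "sha256:img"}).encode())
token = f"{header}.{payload}.fakesignature"

def subject_from_jwt(token: str) -> str:
    """Recover the signed digest by decoding and parsing the payload segment."""
    seg = token.split(".")[1]
    seg += "=" * (-len(seg) % 4)  # restore the stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(seg))["subject"]
```

If the linkage instead lives in a standard index object, the registry reads the digest directly from JSON it already parses, and this per-upload decode step disappears.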
Agreed, agreed. And the conversation I was thinking of — and that's why I wrote this — is: are signatures special enough? If we think of the Notary server we have today, that is a completely separate server, of course. Yeah, but that's kind of incidental. My view would be that designing for a world where signatures are totally separate makes them less likely to be implemented. So what I think I'm hearing you say, building on the conversation we just had, is: index is good, but do we need a separate upload for signature indexes? As opposed to: nope, just use the normal index upload — it happens to be an index, and there's a role check implied based on the artifact type. And that's how customers get scalable choices. Yeah, I mean, the registry always has the ability to have role checks based on artifact types and things like that. The fact that no registry does that right now suggests to me there isn't much demand for it. Well, there's only been one artifact type until recently. Well, yeah, but given that signatures are independent objects that just have linkage, you could perfectly well push them into a totally different repo with different permissions. Then you get into the question that indexes don't know how to reference things outside of their repo. Which, you could say — not really, they do. They do know how to reference things outside their repo. Sorry, I have to get to another call. So let's save that conversation. Oh yeah, okay. I have to jump out of the call. So I think we got the right conversations teased up; by all means, Slack and PR feedback is our friend. It gives everybody a chance. Wednesday, we have a conversation on index specifically. Darren — I've got "Schaefer" stuck in my head, that's not his last name — Darren's going to talk about his experience with index, and we'll do that in the OCI weekly call.
So with that, I'll see folks on Slack and thank you very much.