Hello there. Good day. I can see you. I think I can hear you. Yeah, I hear cooking. That's not my cooking. No, that was mine. I had to swap the headphones back and forth. I try to tune out over the weekend as much as I can, but I did see you posted something over the weekend that I'll go back and look at. I lose track of all the stuff I hear sometimes. Good morning, but how do I pronounce your name, actually? João. Okay. Good day, João. Because I don't know you, I won't make any assumptions about the pronunciation of your name or where you live. So, a minute or two. Yes, it's first up, but we'll wait a minute, if folks could sign in in the chat here. Yes, you'll be up to go through your PR, so whenever you're ready, we'll just start there. Just setting up my windows. All right, can everyone see my screen? Yes. Okay. So this one is revisiting some of the key management scenarios that we had discussed earlier. I know that a lot of new folks have joined the conversation since we last had this chat, so I think it's worth reviewing some of the overview, definitions, and personas that we had here. The link is in the meeting notes, and I think we can take about 10 minutes to read through. We should stop reading right after "requirements for discussion". Most of the requirements listed here we were all aligned on in previous discussions. The one place we didn't have alignment was on having a mechanism for rotating a root key. There are both pros and cons, and I've captured some of them in the document here. In the 20-minute conversation I'd like to at least figure out: are there any additional considerations we need to take into account, and if so, what other data do we need to make a decision here and move on? So I'll give folks 10 minutes to read through the link, and then we can start discussions. I wanted to do a quick time check. Do we need a few more minutes, or is everyone done reading?
Maybe a thumbs up from people when they're done. I'm done anyway. Okay. Does anyone need any more time? Okay, I think we can go ahead and get started. Let me share my screen again for the comments. All right, I see. Let me refresh. Okay, I see a few comments from Steve. "Should not" to become "must not"; I see the same comment throughout. Thank you. What's the difference there, Steve? There were two of them, I think. This one was, let's see, "signing an artifact must not require the client to perform additional actions". Yeah, must not. So basically, we want to be able to support offline signing, and if we say "should", I'm worried that somebody would build a CLI that somehow enforces communication with the registry and breaks that flow. So to support offline signing, we need to be able to do it all locally. There was an interesting conversation in the distribution working group related to that. And then the same thing for multiple signatures; I think that was the other one. Okay. So this raises the question: if you do it offline, of course, the keys could be revoked in the meantime. Is there some timeframe, like you need to be online every so often? Or is that just ignored? Oh, sorry, the offline part—you're bringing up the air gap environment. That's a good conversation. This is more about the ephemeral build client. We've been talking about the consumption side with ephemeral clients, and recently Brandon's brought up some good conversations around the build environment: we need to be able to build in an ephemeral, isolated environment. That's kind of one of the twists on protecting from the SolarWinds kind of scenario. So it's built and signed, but it's not sitting there for weeks or months. It's built and signed and then pushed to a registry over a secure channel, so there's no real window of time, to your point. Sure, I could be signing.
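The offline-signing constraint being discussed here can be illustrated with a small sketch. This is not the actual Notary design; it is a toy in which an HMAC stands in for a real asymmetric signature (in practice it would be, say, an Ed25519 or X.509-backed signature), just to show that computing the digest and producing the signature are purely local operations with no registry round-trip:

```python
import hashlib
import hmac


def sign_artifact_offline(artifact_bytes: bytes, signing_key: bytes) -> dict:
    """Sign an artifact entirely locally -- no registry interaction."""
    # The digest is computed from the local bytes, not fetched from a registry.
    digest = "sha256:" + hashlib.sha256(artifact_bytes).hexdigest()
    # Toy stand-in for a real signature over the digest.
    signature = hmac.new(signing_key, digest.encode(), hashlib.sha256).hexdigest()
    return {"digest": digest, "signature": signature}


def verify_offline(artifact_bytes: bytes, signing_key: bytes, envelope: dict) -> bool:
    """Verify using only the artifact bytes, the key, and the envelope."""
    digest = "sha256:" + hashlib.sha256(artifact_bytes).hexdigest()
    expected = hmac.new(signing_key, digest.encode(), hashlib.sha256).hexdigest()
    return digest == envelope["digest"] and hmac.compare_digest(
        expected, envelope["signature"]
    )
```

An isolated build box can run `sign_artifact_offline` and push the envelope alongside the artifact later, which is the build-and-sign-in-isolation flow described above; a CLI that required a registry call inside either function would break exactly the constraint the "must not" wording is protecting.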
That makes sense. And this one—does that need to be spelled out here, or would it be clear to everyone in that case? We can probably reread it and reword it in such a way that it's basically saying offline—I'll have to go back and reread it. And it's a great point; I think we should track that as a scenario. We do have a list of detailed scenarios that covered some of those use cases. I wanted to go back and visit requirements first before we do that. But the use cases so far have really looked more at deploying in an air gap environment, not necessarily building in an air gap environment. So those are some additional use cases; I think they can be added in. One other concern I have, which I think I've brought up before, is the phrasing that validation shouldn't perform any additional actions with the registry. Because at the very least, you're going to have to pull the signature, or the file that has that information, as well as any information about how you get the key and other things. I just think it's a little unclear exactly what that means. Yeah. It sounds like maybe the term "validating" should be clarified by breaking it into more than one part. Because I think you're talking about validating as in: we've got the keys, we're comfortable that they're up to date, and now we want to validate—versus: I've got this new image and I want to validate it, so of course I have to start by getting the keys. Yeah, exactly. I think I understand what you mean, but I feel like it could be very easily misunderstood there. Right. Yeah, I think that's a fair point. Do you mind putting in a comment so we can track that? Yeah, sure. I think to capture it: we want to be able to say, I'm trying to deploy this thing.
Can I get the signature before even getting the image? In the pull-the-container-image scenario, you don't want to pull the image and bring the Trojan horse into the environment just to find out whether it should be there. Right. The idea is: let me find the signature, get the keys—maybe discover from the registry where to get the keys from; that's something we haven't actually resolved yet. I think the pull—well, sorry, let me finish the thought. So you'd be able to get the signature, discover the keys somehow, get the keys from somewhere, and then be able to validate locally. There's probably something in there about not having to pull the artifact itself in order to validate it before moving forward. The concern I was trying to raise is probably more on the push side, because there was this conversation that today, the way the container clients work, the digest isn't actually computed in typical workflows until it's pushed to the registry, and that kind of breaks the workflow we're trying to articulate. So I think the build, the pull, and the air gap got commingled in here in a way that we should try to untangle cleanly. So Steve, for your next comment, the additional signatures: I hadn't tracked this as a key management scenario requirement. I think this fits more into the signature format itself. Do we have a separate doc that's tracking that? Can you help me with what the difference is? Because I don't know how many signatures a publisher would put on the same artifact, versus multiple entities putting additional signatures on a similar artifact. Being able to add additional signatures is really dependent more on how you are formatting and storing the signature, right? And I think that's something we want to capture in a requirement that says: what are the requirements for the signature format itself?
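The "check before you pull the Trojan horse" ordering above can be sketched with a hypothetical in-memory registry stub (all names here are made up for illustration). The point is purely the sequence: fetch the small signature object first, verify it, and only fetch the expensive, potentially malicious image blob if verification succeeds:

```python
import hashlib


class StubRegistry:
    """Hypothetical in-memory registry: stores blobs and detached signatures."""

    def __init__(self):
        self.blobs, self.signatures = {}, {}
        self.blob_pulls = 0  # counts how often image content actually entered the client

    def push(self, data: bytes, signature: str) -> str:
        digest = "sha256:" + hashlib.sha256(data).hexdigest()
        self.blobs[digest] = data
        self.signatures[digest] = signature
        return digest

    def get_signature(self, digest: str) -> str:
        # Cheap metadata fetch -- no artifact content is transferred.
        return self.signatures.get(digest, "")

    def pull_blob(self, digest: str) -> bytes:
        # Expensive content fetch -- this is the step we want to gate.
        self.blob_pulls += 1
        return self.blobs[digest]


def pull_if_trusted(reg: StubRegistry, digest: str, trusted_sigs: set):
    # Check the signature BEFORE pulling the image, so untrusted content
    # never enters the environment just to be rejected afterwards.
    if reg.get_signature(digest) not in trusted_sigs:
        return None
    return reg.pull_blob(digest)
```

With this ordering, a rejected artifact leaves `blob_pulls` at zero: the Trojan horse never crosses the boundary.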
I think we have some requirements there in terms of where a signature is stored and how a signature is retrieved that don't necessarily tie in with the key management scenario. So we can create a separate signature format doc, and I think that addresses some questions around what exactly we are signing, in terms of which bits, and what that means for how containers get packaged. So that is an area we need to dive into, but I think it's a better place to capture this requirement than the key management scenario. Fair enough, that's fair. Which raises the question: do we have a signature requirements doc? We don't have a signature requirements doc; we have the requirements for Notary and then the workflows, which that is captured in. The signature is there to support the requirements, so we didn't really make a separate signature requirements doc. Can we track this in the overall doc for now, then? Yeah, I'll pull this out. I'll reject this one. I am curious, though: why do we need multiple signatures for a publisher? Is the requirement that in the same signature file I'd have multiple signatures, or can an author—what I'm a little... I can throw out an example. Yeah, I can throw out an example. I can think of some client environments that might use signing as kind of an auditing method, to say this artifact that they've signed has gone through the CI build. Okay, now it went to the next step: it went through a security team, it went through a security scanner and passed, and the scanner might add its own signature. So I could see something like that within a single entity, where they use signing for that. Okay, so that's more of just multiple signatures, as opposed to specifically multiple signatures from the publisher. It'd be a single publisher, but multiple signatures from different roles within the publisher.
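The single-publisher, multiple-roles example (CI build, security scan) can be sketched as a toy signature envelope where independent roles each append a signature over the same digest. The envelope shape and role names are hypothetical, not a proposed format; HMAC again stands in for real signatures:

```python
import hashlib
import hmac


def make_envelope(digest: str) -> dict:
    """One artifact digest, room for many signatures."""
    return {"digest": digest, "signatures": []}


def add_signature(envelope: dict, role: str, key: bytes) -> None:
    """Each role (CI build, security scan, ...) signs independently."""
    sig = hmac.new(key, envelope["digest"].encode(), hashlib.sha256).hexdigest()
    envelope["signatures"].append({"role": role, "sig": sig})


def roles_attested(envelope: dict) -> set:
    """Which stages of the pipeline have signed off on this artifact?"""
    return {s["role"] for s in envelope["signatures"]}
```

A policy engine could then require, for example, that both `"ci-build"` and `"security-scan"` appear in `roles_attested` before deployment, which is the audit-trail use of multiple signatures described above.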
What I'm trying to figure out is whether there's something unique about it being the publisher that's signing multiples. Can we also be looking at signing multiple tags? I'm sorry, Brandon, go ahead. Can we also be looking at one publisher signing different tag names for the same artifact? What I think I'm hearing, just so we can move forward, is things more around multiple-signature scenarios, as opposed to something to do with the key. So I guess we can move on. I'm just a little confused by this one: is there something specific around the publisher? Everything Brandon's talked about is completely valid, but I don't get that it's tied to the publisher. So I don't think there's a difference. I'm also curious, who's CFC of distribution? Because I thought I was. I'm curious who else is. Can you elaborate a little bit more on what this comment is about, the ephemeral environments line? Want me to chime in on that one, Steve? Yeah, please. So in another one of the discussions—I forget where—we were talking about supporting ephemeral environments and being able to inject the keys into that ephemeral environment. And this is kind of hitting on the point that I put back to Steve over the weekend, which was that you not only have ephemeral environments verifying signatures; you may also have ephemeral build environments that are creating signatures. And so we probably want to support both of those scenarios. Is there anything that needs to get added to this requirement to make that clear? Because the way I was reading it, I thought it would cover that. I think it's covered, Steve. Yeah, I think we might just want to tie it back with those words. Okay, to close the loop: I think maybe this is something we need to add to scenarios, to go into more detail.
And then Marina, on your comments: I think the clarification here is that once you pull a signature and once you pull the artifact, you shouldn't need to pull anything else from the registry. That's what this was getting at, beyond just being able to pull artifacts from the registry itself. Okay, I mean, I don't understand this requirement. Is there any technical reason why it would be a problem to pull additional things from the registry? So we don't want registries to be a dependency for validating a signature. Registries can serve the signature, but they should not be in the validation path. Okay, but then you'd have to have an additional dependency outside of the registry to pull that information? Potentially, yes. Okay, but is there any reason there couldn't be the option to have that also on the registry, just to centralize things and make it easier? So this is one where we've had previous discussions: registry operators can act in multiple roles, in the sense that, depending on how we align on the key management designs, there may be roles they take on as part of it, but these should not be part of what a core registry needs to provide. The goal here really is to make sure that you can move artifacts from one registry to another and have the signature still be intact; having a registry-dependent workflow there means you'd also need to replicate additional things beyond just the signature and the artifact. So this is really stating a requirement to make sure that step is not needed. Yeah, but I think the requirement there is that images are able to move and that registries don't have to do any additional work. I don't know if the requirement is that you can't use a registry to do anything else. If you're using the registry to do something else, wouldn't that imply there are additional steps the registry needs to perform to validate a signature?
Well, the registry could be one option for the additional steps. I could see some situations in which you wouldn't want to interact with the registry and would want a third-party service, but I could also see situations in which you wouldn't want to set up all these dependencies on third-party services, and it'd be easier to just have it all in one place. I just feel like forbidding it doesn't make as much sense to me, even if it's not the go-to solution. Yes—what was it we were trying to protect with this sentence? So if you have, let's say, an additional API or additional information that you need to go to a registry for, then when you move an artifact from one registry to another, you need the second registry to support similar functionality. And so this is essentially where we're building in additional workflows for signature validation. I think one way to address this is calling it out as optional, in which case we can say that some registry operators may decide to provide this additional functionality, but that would sit outside of what a registry operator is required to provide. So there's nothing preventing a registry operator from potentially operating as a CA, if that's the right design going forward, but that doesn't mean every registry would need to be a CA at that point, and we need to make sure you're going back to the original CA that issued the certs to validate them. Yeah, exactly. I think that makes sense. Basically, it's not that they can't do it; it's that they shouldn't have to do it, and I feel like that's the distinction. Okay, so I'll clarify the language to make sure this is optional. Isn't the "should not" already a sign that this is optional? If we convert this to a "must not", then that is a problem. But if we keep the "should not", it's up to the implementation, right? But if we're going to be more specific, we can add an "optional" in front of it.
So basically, we'll just try not to have that very coupled thing we did with Notary v1; is that the goal here? Yeah, essentially. You wouldn't want every registry to have to set up a Notary implementation and then plug signature data from an artifact on one registry into another for that. I'll explore making this language a little more clearly optional. I think we're aligned on what this requirement needs to say, so I'll track this in a comment, and I think we can move on. Okay, were there any other comments on the requirements? I don't see any others posted in the pull request. Mine are in there, but they might have shown up in a different place, because I think I probably clicked review instead of comment. See, I don't see any more. Could you maybe go through them? Yeah, they're over in the requirements file itself, but they're just not showing up in the comments, so I don't know if I clicked the wrong button. Yeah, I see a bunch of comments from Marco. Trishon, can you send me the link to your comments? I see comments from the previous iteration. Let me pull up what Brandon put in. Yeah, I was putting the comments directly in the scenarios file; I'm sure I clicked the wrong button. GitHub does that to me now and then. So on line 28, the only comment I had there is that "push" is going to limit the number of APIs we can call, and I know we have APIs for things like querying and other things that we might want to be able to run for some of these. So I just want to catch that little piece in there: instead of saying just "push a signed artifact", there might be other things—query, pull, or fetch a signed artifact—to elaborate beyond just pushing. We definitely want to solve the problem we had with Notary v1, where, whether you turned signing on or not, you shouldn't get different content; the digest should always be the same content regardless.
Yeah, the actual manifest you pull back should be the same, but the scenario I'm thinking of here is that signing an artifact may require that we query the registry to say: show me all of the TUF targets metadata for this thing. And so you have to run a query against it to pull that back, so you can send an update for the newest TUF targets metadata. I would worry about that when we do the implementation, if we did it that way—but that's what I'm trying to figure out. Like, if we were to do anything with timestamping and other things, I'd assume it's the registry that would have to maintain it. Yep. Yeah, I think maybe that's also something to cover in the signature specification, in terms of what we actually end up signing. The ideal situation is one where you're able to sign the bits before they've even gone to the registry, right? So having to query the registry for data before you sign might reduce some of the things we want to cover there. So I think that's definitely something we want to push up into the signature format discussion. And that's a good point: we might be separating the pushing of the signature from the signing itself. So you can sign it locally and all that, and when you push the signature, that might involve a couple of back-and-forths, saying, okay, not only do I want to push the signature, I also want to push an update to the targets metadata for all the signatures up there, right? I would say that's an implementation detail. I still think there shouldn't be any extra client interaction to do that. What do we want to capture in requirements? Again, we're not trying to spec an implementation; we're trying to make sure the requirements capture the constraints we want to work within.
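The separation being proposed above—signing locally, then pushing the signature as a distinct, possibly multi-round-trip step—can be sketched as two functions with a hard boundary between them. This is a conceptual sketch, not the actual protocol; the "registry index" here is just a dictionary standing in for whatever signature index (e.g. TUF targets metadata) the registry would maintain:

```python
import hashlib
import hmac


def sign_local(artifact: bytes, key: bytes) -> dict:
    """Step 1: purely local -- usable in an isolated or air-gapped build.

    No network access happens here; the digest and signature are
    computed entirely from local bytes.
    """
    digest = "sha256:" + hashlib.sha256(artifact).hexdigest()
    sig = hmac.new(key, digest.encode(), hashlib.sha256).hexdigest()
    return {"digest": digest, "sig": sig}


def publish(registry_index: dict, envelope: dict) -> None:
    """Step 2: the only step that touches the registry.

    This is where any back-and-forth would live -- storing the
    signature and updating the index of all signatures for the digest.
    """
    registry_index.setdefault(envelope["digest"], []).append(envelope["sig"])
```

Keeping every registry query inside `publish` is what preserves the requirement under discussion: signing never depends on the registry, even if publishing does.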
But I think part of what Nias had written around the additional integrations with the registry was trying to protect us from building this—I don't know how to describe it—sidecar dual-implementation thing. But if we just stick to the requirements, I think we'll be okay. Yeah. And I think I found the button that I missed, so my changes might show up now. All my comments—oh, yes, they show up. Nope. When you click "start a review", suddenly everything goes into a holding state. Another one of my comments, line 37: "signature validation must be enforceable in air gap environments". This kind of goes back to another comment I had earlier: does this handle checking for key revocation? Yeah, it needs to. What we had done when we jumped into the scenarios in the previous iteration was call out what air gap environments would need to do additionally to enable signature revocation—whether that's copying revocation data over, and a push versus pull model; I think there are design considerations there. But the overall design needs to make sure that it can be enforceable. I think there's a trade-off decision here in terms of how quickly we can refresh that data, but a mechanism to refresh that data needs to exist. Okay. So it's not immediate—as soon as you revoke a key, it's not immediately discovered within the air gap environment; you're saying there's going to be an extra process that can be run in that scenario to make it possible. Right. Okay. And then, as we're moving between different repositories—making sure I'm on the same page as you. As we're moving artifacts around, the concern I have there is more when you're signing tags. I think it's possible it's just something on the client side. So I just want to make sure we have the same thought process: that there will probably be a client-side process to handle verifying tag signing.
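The air-gap revocation trade-off above—revocation isn't instant, but a refresh mechanism must exist—can be sketched as a freshness policy on a revocation bundle that is carried into the air-gapped environment out of band (push or pull). The bundle shape and the policy constant are illustrative assumptions, not part of any spec:

```python
import time

# Policy knob (illustrative): how stale revocation data may be before
# the verifier refuses to trust it and demands a refreshed bundle.
MAX_REVOCATION_AGE = 7 * 24 * 3600  # one week, in seconds


def check_key(key_id: str, revocation_bundle: dict, now: float = None) -> bool:
    """Return True if key_id is not revoked, per a bundle copied into
    the air gap. 'produced_at' records when the bundle was generated
    on the connected side."""
    now = time.time() if now is None else now
    if now - revocation_bundle["produced_at"] > MAX_REVOCATION_AGE:
        # Fail closed: stale revocation data is not silently trusted.
        raise ValueError("revocation data too stale -- refresh the bundle")
    return key_id not in revocation_bundle["revoked"]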
Just trying to do that on the server, or in the API itself, feels like it's almost impossible. Yeah. I think we had questions around tag verification before—around what that actually offers; do we want to enforce digest verification only? And that might be more a conversation to have in the overall requirements, because I think it leads to some challenges. Yeah. And we've got an issue for that one opened up already, so that's probably best handled in that issue. I just want to make sure we solve it when you're looking at this one, because it could have an impact here. Okay. And then I'm getting into the root key rotation stuff. I'll save this comment, because I think that's going to be the larger conversation we want to have. Yep. And then back to the push comment—we already discussed that one. We've already discussed it. Okay. Did anyone have any comments on the requirements up to here that we haven't addressed—before the "requirements for discussion"? Okay, let's move to this requirement then. I think there are both pros and cons here. So Brandon, since you had a comment here, do you want to elaborate on what your thought process was? Yeah. The thought process is that once you lose control of that root key, you're pretty much hosed in terms of security. Someone can make any number of delegated keys they want to do signing with; they can sign any artifacts they want. And so your repo is pretty much done at that point. You're going to be forced into some kind of out-of-band update of the root key, getting that pushed out to all the clients, and doing the revocation and everything else. So you've got that headache no matter what. And so I'm trying to understand the value-add of not allowing an in-band rotation of a root key, if the concern is that someone could potentially use that hijacked root key to make new root key signatures.
I think you're already in trouble at the point when your root key gets compromised. Yeah, I think that's a fair callout. The way we were thinking about this earlier is that your root also comes with certain information associated with it. So if you think about standard X.509 roots, there's potentially a CRL there that you can go to, and that CRL can give you an invalid response, where you now know that the root has been compromised. So a mechanism exists tied to a specific root—as long as that root also has some mechanism for sharing revocation data—where you can notify everyone using that root that something's broken and they need to go update. The challenge with having an automated update mechanism is that the attacker could now have a new root with a new CRL, which they're vending, which means you lose the capability of failing closed for your end users when you have that compromise. So I think the trade-off to consider is this: would you prefer to have that automated mechanism of letting your users know something is wrong and then have them decide what next steps to take, versus taking that step on their behalf, and then if something does go wrong, you have to do a mass communication to get the message out. That's really where the trade-off boils down. So would there be a way, when you're thinking of doing these in-band rotations, to say: hey, if we revoke one root key, that also revokes all the other ones that got pulled in through the in-band rotation process? I think we need to think through that, because what you're essentially doing in that rotation is technically not a rotation so much as issuing an intermediate that's also acting as a root.
And so then, yes, you could revoke the original root, and that could potentially trickle down, but that rotation doesn't really do much, in the sense that you still have to go back and check what the original root is signing off on. So what does that rotation really give you? Well, I think it gives you the implicit revocation of keys: you can have your root key time out eventually, and after that timeout you can create a new root key seamlessly for users, because you don't want keys that last forever—anything can be broken given forever. Right. And the way we've typically seen root rotations handled is you start using a new root before the current root expires, and you give enough time for the two roots to overlap, rather than signing the new root with the existing root. Authenticating the new root with the existing root means you're saying: trust this root because it was signed with the original root. But then, to know whether there's been any revocation or anything's happened, you still need to go back and validate whether the original root was valid or not. So you're creating a chain there, but deciding to stop partway up the chain in terms of validation, which I don't think quite addresses the threat vectors there. Well, the scenario I'm thinking of is where you have a user that went through the process to trust your original root key, and then you go through the process internally saying we need to update it, because the old one's timing out, or we want a larger key length on our root key—there's a long list of reasons you might want to rotate a root key. Right. And so for those clients that currently trust the old root key, do you want to provide an automatic method for them to be updated?
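The overlap model described above—start using a new root while the old one is still valid, rather than cross-signing the new root with the old—can be sketched as independent validity windows. The key ids and timestamps below are made-up illustration; the point is that verification never chains back through a predecessor root:

```python
# Illustrative trust store: each root stands on its own validity window.
# root-b is published before root-a expires, giving an overlap period.
ROOTS = [
    {"key_id": "root-a", "not_before": 0, "not_after": 200},
    {"key_id": "root-b", "not_before": 150, "not_after": 400},
]


def trusted_root_ids(now: int, roots=ROOTS) -> set:
    """Overlap model: a root is trusted iff 'now' falls inside its own
    validity window. There is no chain from root-b back to root-a, so
    revoking root-a never requires re-examining what root-b attests."""
    return {r["key_id"] for r in roots if r["not_before"] <= now < r["not_after"]}
```

During the overlap (times 150 to 200 here) both roots verify signatures, which is what lets clients migrate without an out-of-band flag day; cross-signing instead would reintroduce the chain-back-to-the-old-root problem raised in the discussion.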
And so I'm thinking of all those ephemeral builders that might have a key injected into them: should that automatically say, okay, I trust the old root key, so therefore I should go ahead and trust the new root key, as long as I verify that nothing has been revoked in the process? Yeah, and I think this doesn't exclude the possibility of an out-of-band rotation if and when that's needed. But in most cases this question can be solved with an automatic rotation, and only when things go truly, truly wrong do you have to bother with all this out-of-band communication. But having support for this mechanism allows attackers to take advantage and set new root information—that's the trade-off here, right? If you have things like revocation information built into whatever the root key is signing off on, that is no longer verifiable and can no longer be used as a stopping function. I think that's the trade-off we need to address. Yeah, but if the only reason the client trusts the new root key is that they're trusting the old root key, and you revoke the old one, I feel like that could potentially propagate through, as long as we design this right. And so the question then becomes: what are you solving? When the old root key gets compromised, that's potentially exposing everything—until you send that revocation out and it gets picked up, everything's vulnerable at that point. It doesn't matter whether or not you signed a new root key with it; you're already vulnerable. Yeah, I agree, and I would ask: what does issuing the new root achieve there? Because if you're still going to have to chain back to the original root key, what has the technique gained you? Yeah. And in that scenario, once you're compromised, you're going to have to go out of band. I don't think there is an in-band way to resolve that situation.
You've been compromised at that point; you have to send a new key out through an out-of-band process. But I'm looking at all the cases where you haven't been compromised, and trying to make the scenarios more useful for users, so they don't have to go through all this out-of-band process just to do regular maintenance. There could also be a question here that we may want to look at, in terms of who owns root keys and what root keys signify, right? If you think about the traditional model of CAs issuing certificates, a CA essentially ends up holding a root, in which case individual developer cert rotations can happen more frequently, right? I think that's probably a question we haven't addressed here, and there may be more of a threat model we need to go into with those roles in place, to determine what is actually the right mechanism. Is it a more secure update if, for example, there's a set of trusted roots within a Docker client that you could push out, with certain trusted entities—maybe that's a model that makes roots easier to update. So I think there is something more to be had here. Do we want to take a pause on this and come back with a more detailed doc? I think there are more things to consider than what we've discussed. Let me throw one quick comment out, and then yes, if we want to go into more detail, that makes sense. I'm just thinking of the scenario of updating a Debian laptop. When Debian says their old root key is coming close to its expiration time and they send a new one out, I as a user don't have to do anything. I just do my regular package updates, and that pulls down the package that has the current root keys. And because I trusted the old key that signed the update, the new key got pulled in, trusted, and approved.
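The Debian example above—new root keys arriving in a package that is itself signed by a currently trusted key—can be sketched as a keyring update rule. This is a toy model of that pattern, not Debian's actual mechanism: keyrings here map key ids to secrets, and HMAC stands in for a real package signature:

```python
import hashlib
import hmac


def sign_update(new_keys: dict, current_key_id: str, current_key_secret: bytes) -> dict:
    """Publisher side: sign the set of new key ids with a current key."""
    payload = ",".join(sorted(new_keys)).encode()
    return {
        "new_keys": new_keys,
        "signed_by": current_key_id,
        "sig": hmac.new(current_key_secret, payload, hashlib.sha256).hexdigest(),
    }


def apply_keyring_update(keyring: dict, update: dict) -> bool:
    """Client side: accept new keys only if the update is signed by a
    key the client already trusts -- the Debian-style rollover model."""
    secret = keyring.get(update["signed_by"])
    if secret is None:
        return False  # signed by an unknown key: reject
    payload = ",".join(sorted(update["new_keys"])).encode()
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, update["sig"]):
        return False  # bad signature: reject
    keyring.update(update["new_keys"])
    return True
```

This is exactly the trade-off debated earlier in the meeting: the user experience is seamless because trust in the old key transfers to the new one automatically, at the cost that a compromised old key can also introduce attacker-chosen keys.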
And so it makes a much nicer user experience when I don't have to go through a whole lot of external out-of-band processes to get my clients working with a new environment when they already worked with the old one. That's a great example of the key discovery and acquisition workflows. We've kind of never really answered that yet, but if we say we're not going to worry about it for now, that's fine. I don't have to worry about how I get the Docker signature, or the Wabbit Networks signature for that matter. But what you're talking about is that problem as well. We don't want to do trust-on-first-use, but I think once you have a key, being able to say "I can get updated versions of the key" seems pretty important. Yeah. Is it fair to say we need a mechanism for key distribution, where we can get these updated keys—that that's really the requirement to work backwards from? I think so. I think once you have it, being able to maintain it is important. I don't know—how do we do the update scenario without getting into the TOFU scenario? I think that's kind of it. Yeah. We can look into that as we start going into scenarios, but I'll update this to capture what the overarching requirement is without making any specific calls here. So, just to keep to our timeframe: this is great. We've been trying to get to a good key management conversation for a while, so this is really, really helpful—we're making great progress, so let's keep it going. Let's see what we can do to get it merged, so the actual document reflects our current state rather than having to read through all the notes. I'll do my part to make sure my comments are in a place where it's obvious they're mergeable, and if we can do that, you know, that'll be great as well. On the point of getting the status update out: was there any other feedback? I incorporated as much of the feedback as I could.
On the status report: it's in the HackMD doc, let me pull that up real quick. Everything from the white background to the dark background effect was addressed. There was some good conversation around whether the example commands provide too much detail and should be pulled out, but I thought a really good conversation came out of them: the ordering of pushing digests before their tags, so that we can get two independent entities pushed to the registry and later do the tag update, without trying to create some massive transactional boundary. To me that felt like a good balance, but if there's more we need to do there, that'd be great.

There were two other pieces of feedback. One, it looked complex. Two, are we really making progress? Just putting out a status report doesn't really show progress. The complexity one is interesting, and I could use more feedback there. On making progress, that's another part I wanted to cover with some timelines. But before I talk about the timelines going forward, beyond what I put into the status doc, was there any feedback we wanted to discuss before we merge and publish?

Okay, so on the timelines, let's transition to that, because that's the piece where I think a lot of us are in a difficult spot. For those of us working on it, we know we're making progress, and we always want it to be faster. From the outside, there's a lack of confidence, because the progress isn't obvious, and people are starting to spin up other efforts. It'd be great if those other efforts were actually as holistic, but they're much more sandboxed and don't really promise the cross-registry solutions we've been working on. After this doc, I have one thing related to Teleport that I have to go finish up.
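The digest-before-tag ordering discussed above can be modeled without a real registry. The sketch below is a minimal in-memory stand-in, assuming the OCI-style split between content-addressed manifests and mutable tags; the class and method names are illustrative, not a registry client API.

```python
import hashlib


class FakeRegistry:
    """In-memory stand-in for an OCI registry: manifests are content-addressed
    by digest, and tags are mutable pointers onto already-pushed digests."""

    def __init__(self):
        self.manifests = {}  # digest -> manifest bytes
        self.tags = {}       # tag name -> digest

    def push_by_digest(self, manifest: bytes) -> str:
        digest = "sha256:" + hashlib.sha256(manifest).hexdigest()
        self.manifests[digest] = manifest
        return digest

    def tag(self, name: str, digest: str) -> None:
        if digest not in self.manifests:
            raise ValueError("tag must point at content already pushed by digest")
        self.tags[name] = digest


# The ordering from the discussion: two independent pushes by digest first,
# then the tag update last -- no transactional boundary needed.
reg = FakeRegistry()
image_digest = reg.push_by_digest(b'{"layers": []}')                    # 1. image
sig_digest = reg.push_by_digest(                                        # 2. signature
    b'{"subject": "' + image_digest.encode() + b'"}')
reg.tag("v1", image_digest)                                             # 3. tag last
```

Because the image and its signature are both addressed by digest, they can be pushed independently; the tag update at the end is the only mutable step, which is why no larger transaction is required.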
But once that's done, my next thing is to outline the three places where we need some investment and could use some help. One of them is the validation with OPA Gatekeeper. We talked about doing that, and the teams in Azure that happen to be part of the Gatekeeper community would like to help, but they're not going to get to it right away because of other critical work. So if there are other people who are at least close to, if not in, the Gatekeeper community, we'd love to prototype that.

The reason I really want people close to the Gatekeeper community, and this goes for all of these efforts, is that I want us to avoid building something that works around what exists today, as opposed to saying "this is what we would like to see changed", whether in Gatekeeper or in Notary. We really want people close enough to say, "I know enough about Gatekeeper to know that this would work if we made this one change to it", as opposed to "let me manipulate this thing to make it work, only because I don't think I can change it." These are all projects that are just code; we'd love to make changes to them.

And yes, I am referring to the Open Policy Agent (OPA) Gatekeeper implementation. The idea is to validate this with something very generic, like Kubernetes, because it's a very open framework, so that we can actually validate the end-to-end flow. If we can validate end-to-end with a true policy manager like that, then we assume we can do it with other projects, whether cloud-specific or otherwise. That's the premise. So that's effort one: prototype and validate the end-to-end scenarios. The other one is the CNCF Distribution changes.
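The admission decision that a Gatekeeper policy would enforce in that end-to-end flow can be stated very compactly. This sketch models only the decision logic, not Gatekeeper's actual ConstraintTemplate/Rego machinery; all names here are hypothetical.

```python
def admit(image_digest: str, signatures: dict, trusted_keys: set) -> bool:
    """Model of the admission rule a Gatekeeper/OPA policy would enforce:
    allow a workload only if its image digest carries at least one signature
    from a key the cluster operator trusts."""
    return any(key in trusted_keys
               for key in signatures.get(image_digest, []))
```

In the real prototype this check would live in a Rego policy evaluated by the Gatekeeper admission webhook, with the signature lookup backed by the registry; the point here is just that the policy reduces to "signed by a trusted key, or denied."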
We'd like to see some changes there related to the artifact manifest spec we've been working on, and a list API so that we can get the linked list of signatures out of the registry. I've been assuming we were going to do that from our side, because we were already prototyping it and would make the next round of iterations, but it doesn't have to be us. Once the artifact manifest is more detailed, it should be easy for others to build on. And then, of course, the updates to the nv2 client. So those are the three things.

There's a comment here from a person who won't identify themselves, so it's challenging to even acknowledge them. Anyway, we're at time, so we can talk more about it later. I did get at least one plus-one for "just publish it, let's move on," because a lot of people keep asking and I haven't been able to say, "here's the written status of where we're at." I'm hoping this captures it. With that, we'll pause the recording, keep going, and thanks for everybody's progress. Next up, folks.
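The list API asked for above, returning the signatures linked to a given image digest, can be sketched as a simple referrer query. This assumes a manifest field (here called `subject`) that points at the signed digest, which reflects the artifact-manifest discussions rather than a finalized Distribution spec.

```python
def list_referrers(manifests: dict, subject_digest: str) -> list:
    """Sketch of the proposed list API: return the digests of artifacts
    (e.g. signatures) whose manifest references subject_digest.
    The 'subject' field name is an assumption, not a finalized spec."""
    return sorted(digest for digest, manifest in manifests.items()
                  if manifest.get("subject") == subject_digest)
```

A registry-side implementation would index this relationship at push time so the query is cheap; the linear scan here is only for illustration.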