Hey folks. Okay, we've got the recording going, so we're good. We'll do the proverbial two minutes of quick attendance this morning; if folks want to, pop in the chat that you're here. Here's our hack page. Actually, Niaz, did you want to cover the work from last week yet? No, I don't have the changes in yet; I'll close that out next week. Okay, all right, why don't we kick things off here. Thanks, folks, for the feedback on the status report we got out last week. It's been helpful to be able to just point at a document instead of explaining all the various moving parts, and I think we'll get a little more refined for the next one, whether we do one monthly or not. Some of the feedback I was getting was: hey, is there any real progress happening here? That was good feedback. The second piece was: what can people do to contribute, so we can make further progress? Those are great data points. And the third one is: what is the timeline? As with any open source project, the timeline is always challenging, since it depends on people's personal time and so forth. That said, this one is interesting in the sense that it has real business drivers behind it, which is certainly helping keep it going. I know at least the AWS folks, and us in Azure, have to deliver this for commitments to our customers. The nice part for this group is that Microsoft, being a software company in addition to a cloud, needs to sign software that will be used on other clouds and on-prem. That's one of the things really helping keep this driving as a true open source project that, by definition, should be implementable on any OCI-conformant registry. We'll also explain a little of how we'll go down the artifacts approach; I'd like to get this stuff promoted. So that's kind of
a little bit of background on timing. As for the details of the timing: I'm hoping to get something functional through by spring. There are three rounds I'd like to try to do. The first is to get something working with the CNCF Distribution project, where somebody can just stand it up, run it locally, and do the testing back and forth. That's to some extent the easiest, because we can do quick iterations; it's certainly easier to make a change to a fork and docker run it locally than it is to deploy to clouds and other on-prem environments that customers expect to be stable. The next round after that, as we have more confidence, is to actually start rolling out to our stamp deployments in Azure for customers to use. We'll do it in a subset so we can validate that feedback, and we hope others will be able to do the same. And then of course we'll move to some kind of production thing, or public preview is probably the better way to say it. The thing that's come up, though, is that there's obviously more to this end-to-end workflow that we've defined. In fact, let me pull up our requirements doc and share it. If we go to the scenarios we've got here: we've been talking about how we want to be able to deploy this out to a container host. I kept it fairly generic here; the most obvious generic container host would be Kubernetes, not cloud-specific, just a Kubernetes instance. And the thought process is that for policy management, OPA and Gatekeeper are the ones that come up the most. So this is a place where I'd love to get some more folks to help who actually have OPA and Gatekeeper experience, so we can say: hey, OPA and Gatekeeper work this way, and here's what we're trying to do with notary signatures.
If we just made this one change to OPA and Gatekeeper, it would make things fundamentally better. It's also possible OPA and Gatekeeper work just fine the way they are; I won't by any means claim any kind of expertise on that. So that's the most obvious place where we've done zero work officially as part of the larger group. I see Dan said he was going to join the second half, so I'll lay out a couple of areas and then we can talk about how we want to go about it. That, to me, is the biggest void: we're making lots of assumptions about what we think could work in that environment, so we'd definitely love to get that part figured out. We've also got rough prototypes on CNCF Distribution. There was prototype one, which really just hacked things in with a third API to link two artifacts together: you push the image, you push the signature as an artifact, and then there's this third, totally hacky API that links the two together. Now, with the new OCI artifact manifest, we can submit the signature with the link to the image in it. That's the work you're seeing in a couple of PRs that came in fresh this morning, which was Monday in Asia time, so I haven't had a chance to look them over yet; but that's the discussion we had last week, where we're trying to make more progress. We'll actually add OCI artifact manifest support to distribution — sorry, to the notary fork of distribution — and then we'll add artifact support as well. And then there's the manifest links API that lets you query the signatures for any artifact, an image in this case. So that's the progress we're making there. I don't know how far we'll get with the distribution one.
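To make the linking idea concrete, here's a minimal sketch of the "signature references image" model described above, contrasted with the "third hacky API" of prototype one. All field and media-type names here are illustrative placeholders, not the actual OCI artifact manifest schema, which was still in flux at the time:

```python
# Hypothetical sketch: a signature stored as its own artifact whose manifest
# carries a reference back to the image it signs, by digest. Field names and
# media types are illustrative only.
import json

image_manifest_digest = "sha256:aaaa"  # digest of the image being signed

signature_artifact = {
    "mediaType": "application/vnd.example.artifact.manifest.v1+json",  # placeholder
    "artifactType": "application/vnd.example.notary.signature.v2",     # placeholder
    # Blob holding the actual signature bytes.
    "blobs": [
        {"mediaType": "application/jose+json", "digest": "sha256:bbbb", "size": 1024}
    ],
    # The link back to the subject image. Because this reference lives inside
    # the signature's own manifest, no separate linking API call is needed
    # at push time (that was the "third hacky API" in prototype one).
    "subject": {
        "mediaType": "application/vnd.oci.image.manifest.v1+json",
        "digest": image_manifest_digest,
    },
}

# A registry-side "links" index can then be derived by inverting the subject
# references, so clients can ask: which signatures reference this image?
links = {}
links.setdefault(signature_artifact["subject"]["digest"], []).append(signature_artifact)

print(json.dumps(sorted(links.keys())))
```

A query API built on this index is what lets a client list all signatures (or SBOMs) attached to a given image digest.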
In fact, now that we have the distribution working group actually holding weekly meetings with people on them, I'm hoping some folks there will want to jump in, because they run distribution for their registries. They might be interested in taking this deeper than what we're going to implement, or what we'd prioritize from an Azure perspective. So that's part two. The last part is that we're iterating a little on ORAS to support the new OCI artifact manifest proposal as well, so that the ORAS libraries will be there for you to push that signature up, or use it for SBOMs or whatever. So those are the three areas: there's OPA validation, there's taking CNCF Distribution to a more complete level, and there's the ORAS work — well, I guess it's the nv2 and ORAS work; I kind of think of those together. Thoughts? Volunteers? Just a thought, and I don't know how to scope this, but support for, or continued maintenance of, Notary v1 — is that a consideration? Not from this working group. We set out early on, when we made commitments to the scenarios and requirements for Notary v2, that we were explicitly making compatibility with Notary v1 a non-goal, and this working group isn't doing anything with Notary v1. Basically, we found the limitations of Notary v1 too extensive, and it doesn't have enough deployment at a scale that justifies trying to bring something forward there. So I guess my other question would be — and this is probably way further down the line — is there a transition point, within the CNCF or within the project, to recommend v2? I mean, the goal is that Notary v2 should — well, to be fair, with Notary v2 we've also very explicitly focused on a registry approach, where you can store and persist these signatures in a registry. Now, the signature itself is a separate spec; it's a fairly finite spec.
It basically says you can sign a digest, so it's not exactly rocket science. It's the exact format, and the end-to-end experience, that make it complete. Our goal has always been that Notary v2 — covering content from container images to anything else that can be put in a registry — will completely supersede Notary v1. Not necessarily in a compatible way, but in a superset way; what I mean by that is that anything signed with Notary v1 would be re-signed with Notary v2. That's the bar we felt was right at the time. Cool. And if there's something we're missing that we should include, that's obviously good to know. The fact that Notary v1 is limited — content has to stay in the same repo and can't move across to another repo in the same registry — made it fundamentally not very broad. I mean, we support content trust in Azure, and it's been great to get feedback from customers who say: hey, this is great, but it's not what I want. That helped us facilitate the conversations about what customers want and need, and we believe those are all covered in the Notary v2 goals. So, I have a question about the key management process, because I feel like it would be really hard to use the prototype until that piece is solved. And actually, I have a similar concern with some of the other security goals that we've been pushing off, and with publishing a complete prototype before those things are finished. That would just be my concern. No, that's fair. It'd be good to identify exactly which ones. To me, the first is being able to persist keys, certainly the public keys and the private keys. We've said that we want to make sure we leverage existing key management systems, whether cloud-provider ones or open source ones; we don't want to reinvent that storage and persistence model.
The private key doesn't seem like as much of a concern, because it's part of whatever people or companies already do: they have a standard model for how they acquire those keys and sign with them. The public key is an interesting one: how do we discover and find those public keys? At this point it's not considered blocking. The one that I do consider blocking is that we need to define a key revocation solution, especially since, as we say, air-gapped environments are part of our core goals. That's what Niaz has been making good progress on, and we need to get further. But those are very fair points. I think for an end-to-end prototype, yes, we definitely need everything in place, but we should be able to scope prototypes down to individual components with very well-defined boundaries. Right. So if you're doing a prototype that assumes the keys are on the device, or accessible from it, and we generate a signature and can validate that the integrity of the artifact hasn't been compromised — those are things we should be able to do without key distribution or other mechanisms in place. I think that would be a good scope for how we create signatures, manage signatures, and get them to the end devices. But I agree that for the overall prototype we need all the components in place; I just think we can move ahead with the different components separately. Anybody else? No, I think that makes sense to me as well. It's definitely valuable to get key management solved, but I don't think it's blocking for us; there's a lot of work we can do without it. I mean, I think we're making assumptions — and I'll use the word assumptions — that we can layer that on. Certainly on discoverability: we've said we do not want to support TOFU, trust on first use. That means there has to be some kind of discoverability model for keys. But it's not blocking, right?
It could be as simple as: here's the doc that says, hey, this is where you go get the Wabbit Networks signature, or the Docker signature. The idea that we can incorporate that into some kind of discovery model is thought to be additive. Like I said, key revocation is the one that gives me pause. We can do a lot of things, but I'd like the standard flows to be automated, including OPA validation, right? Whatever notary libraries we have should hopefully — without the user having to do anything special, beyond maybe some configuration for air-gapped environments — automatically know where to go and look to see if a key was revoked. And I'm obviously doing some big hand-waving over that. I think a lot of systems, and especially security systems, have really suffered from not designing in key revocation from the beginning. If you look at TLS/SSL, they hacked on a solution that I think works, but if revocation had been one of their original requirements, I think it would have been designed differently. That's definitely something we want to avoid here. That's totally fair. So, are you up for that challenge? Yeah, I don't disagree that we need to address it, and I think we are looking at it from the beginning. I just think the prototypes can have more clearly defined boundaries, right? Signature validation happens in two stages. The first stage of signature validation is essentially validating that the bits you've got are the bits you were supposed to get. The second stage of validation is: is the signature expired? Is the signature from a trusted source? Has the signature been revoked?
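The two stages just described can be sketched roughly as follows. This is purely illustrative — the helper names and signature fields are hypothetical, not a real notary API — but it shows why stage one needs only the artifact bytes and the signed digest, while stage two needs key distribution and revocation data:

```python
# Sketch of two-stage signature validation (illustrative names only).
# Stage 1: integrity -- the bits you got are the bits you were supposed to get.
# Stage 2: trust -- expiry, trusted source, revocation (needs key distribution).
import hashlib
import time

def stage1_integrity(artifact_bytes: bytes, signed_digest: str) -> bool:
    """Validate the artifact hashes to the digest covered by the signature."""
    actual = "sha256:" + hashlib.sha256(artifact_bytes).hexdigest()
    return actual == signed_digest

def stage2_trust(signature: dict, trusted_keys: set,
                 revoked_keys: set, now: float) -> bool:
    """Expiry, trusted-source, and revocation checks."""
    if signature["expires"] <= now:
        return False                      # expired
    if signature["key_id"] not in trusted_keys:
        return False                      # not from a trusted source
    if signature["key_id"] in revoked_keys:
        return False                      # key revoked
    return True

artifact = b"example image bytes"
sig = {
    "digest": "sha256:" + hashlib.sha256(artifact).hexdigest(),
    "key_id": "acme-rockets-key",         # illustrative key name
    "expires": time.time() + 3600,
}

ok_stage1 = stage1_integrity(artifact, sig["digest"])
ok_stage2 = stage2_trust(sig, trusted_keys={"acme-rockets-key"},
                         revoked_keys=set(), now=time.time())
print(ok_stage1, ok_stage2)
```

Note that stage one can be prototyped entirely offline, which is why it can proceed before the key distribution and revocation design lands.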
I think that second stage of validation is something that's blocked by key distribution, but the first stage should be doable, and that's what I would focus on for now in the other prototypes: making sure we can do the validation where at least the signature tells us whether this artifact has or has not been tampered with. That's fair enough; those are definitely the two pieces. So Marina, my question back on that is: as you're thinking through TLS revocation and the other scenarios that made it more difficult, what are you thinking of in terms of the notary implementation that might need to be factored in for handling key revocation and other key management? Basically, you need a way to say that this file is no longer valid, and you need that built into the process of finding the file, such that it's trusted through another third-party source. The problem with things like revocation lists — which are the most common add-on solution when revocation isn't built in from the beginning — is that someone can hand you an old version of the list. There's a whole set of other problems you then have to solve with a revocation list, which creates its own host of issues. Whereas if you make key revocation part of distribution, you can eliminate that set of issues. I think we'd want to clarify that in a doc. Part of the way key revocation works today in TLS/SSL is that the certificate tells you what the CRL or OCSP URL is, and that's meant to be the distribution list, right? And I think there are two ways of handling that distribution list. One is saying: here's the list of things we don't trust, which is what TLS/SSL CRLs and OCSP do. The other is what TUF has in place, which is saying: here's the list of things that we do trust.
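The contrast between the two distribution-list models just mentioned can be sketched in a few lines. This is illustrative only, and deliberately ignores how each list is fetched, signed, and kept fresh, which is where most of the real trade-offs live:

```python
# Sketch contrasting the two revocation-list models described above:
# a CRL/OCSP-style denylist ("things we don't trust") versus a
# TUF-style allowlist ("things we do trust"). Illustrative only.

def trusted_by_denylist(key_id: str, revoked: set) -> bool:
    # CRL/OCSP model: everything is trusted unless it appears on the list.
    # Failure mode: serving a client a stale list hides a revocation.
    return key_id not in revoked

def trusted_by_allowlist(key_id: str, trusted: set) -> bool:
    # TUF-style model: nothing is trusted unless it appears on the
    # (signed, expiring) list; dropping a key from the next list
    # revokes it implicitly.
    return key_id in trusted

revoked = {"old-key"}
trusted = {"current-key"}

print(trusted_by_denylist("current-key", revoked))  # trusted by omission
print(trusted_by_allowlist("old-key", trusted))     # untrusted by omission
```

The asymmetry in the default (trust by omission versus distrust by omission) is exactly what makes stale-list attacks matter so much more in the denylist model.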
So it's a question of which is the more efficient model. I think we want to do a comparison there, of either vending the list up front or having the list be queried; there are trade-offs both ways. That's one of the mechanisms I think we'll want to close on: what are the trade-offs, and what do we want to share? So, do you want to maybe start drafting a document that compares the two models? I think we can use that to drive the key revocation conversation. Yeah, sure, I'd be happy to put something together. And, as the semi-non-security person thinking through some of this stuff: are we looking at basically the difference between a service you have to go out and poll for revocation — say, is this thing revoked? — versus a certificate that is short-lived, so you have to constantly renew it to keep it valid? Or is there a third option? I'd be interested. Go ahead. The third option, I'd say, is the snapshot in TUF. TUF actually does two of those: it does the snapshot and the expirations. So yeah, it's kind of a combination. Yeah. I think, much like we wrote down the requirements of what we want to do for Notary v2, we should also scope out the problems we're concerned about with the various solutions. And those are: content is on multiple registries; content is from multiple owners within the same registry, so we have all of those permission boundaries; content goes into air-gapped environments; registries take constant updates; and, other than tags (and eventually metadata), everything that goes into a registry is immutable. That makes registries highly scalable and highly efficient, and we take advantage of it in things like geo-replication, availability zone support, reliability, and all the ideas that go along with that. We want to make sure the solutions we're proposing preserve those abilities.
And I think that's been one of the biggest sources of angst in some of the various discussions on how we do this. I'm not saying I know what the answer is; I'm just trying to say: here are the rocks we have to work around to make sure this thing can fit in that environment. And it's not just performance — we've had the performance conversation — because you still have contention, and you still have security boundaries. I don't know how best to capture that, but we have to consider those. Again, multiple designs are great independently; the question is whether the designs meet the requirements, and that's what will differentiate one design from another that we can actually implement. I think having a phased prototype approach makes sense, where there are multiple prototypes we can build in parallel. I think key management and creation can be one prototype on its own. But some of the ideas you had about landing a design and then deploying it accordingly — and distributing signatures — can be another prototype. So as we're thinking about these prototypes, I think laying out the different tracks and owners would help. So, Marina and Niaz, are you tag-teaming on this one? Yeah, we can come back with a proposal on what we anticipate for key creation, storage, and distribution. I think that's a separate prototype on its own. I agree. Yeah, and I love this iterative model. I mean, everybody has their own way of running ideas through their head, and I've been thinking about what we need to do next. You get to a point where you feel like you're playing back the same thing in your head, because you need more data to come in to help take it to the next level.
And that's why you hear me talking about getting something working where people can stand it up on their own: docker run, then you have a registry running, then you have an nv2 client, and hopefully an OPA client or an OPA plugin, or whatever it's called. Then you can actually start testing this yourselves, and we can start having — whatever you want to call them: users, customers, other people outside this group — start saying: well, it's not what I thought it would be; I thought it would be this, and here's what I was trying to do. Notice you don't hear me saying when we're going to declare a 1.0 yet, and you don't hear me saying when we're going to GA a feature in one of our clouds yet. The really important thing is that we have so many people desperate to sign content that can move between source registries that we've got to get something in their hands. I feel pretty good about what we have so far; we basically have to deliver on the thoughts we have and then get the next round of feedback on the overall experience. And of course, revocation, like I said, is the gap, so I'd love to see more on the revocation stuff. I'm hopeful that at tomorrow's meeting with the distribution group I'll get somebody to help with taking distribution to the next level with the stuff we've been doing. And I'd love it if somebody is interested in doing more of the OPA validations. Dan's here now, and Dan had pointed me to some stuff he'd done with OPA. Dan? Hello. Hi. To catch you up — since you said on the Slack channel that you weren't going to be able to make the first half — we're basically breaking up the different places where we think we want to make the next level of investments, and one of those areas is OPA validation. Nice. Yeah, so I've been trying to figure out how to plug in with OPA.
One of the challenges, and one of the reasons I originally gave up on this approach early on, is that you can extend OPA super easily, but what you get is your own custom-compiled OPA interpreter that you can use wherever you want. If you want to use any of the actual things people use OPA through, though — like Gatekeeper, or any of the other things that have OPA built in — they have their own compiled custom OPA interpreters, and there's no real recursive extension story. So there's no great way for me to get my OPA plugin into OPA Gatekeeper, which is where you really want it. You can write cool OPA stuff and then figure out how to build and distribute it to your own custom systems, but I couldn't come up with a way to get any of it integrated into the places where people would actually want it. And the OPA community has acknowledged that it's kind of an issue; I just don't know what a good next step would be. So, just to backtrack a bit, this is one of the reasons why, early on, I really wanted somebody who's part of the core OPA and Gatekeeper community. We have some folks in Azure, but unfortunately they just don't have the bandwidth to work on this right now. My concern with a lot of these things is: hey, if it's closed and I can't get what I need, then I give up, or I try to hack on something from the outside. But if we just make this one little change — whatever the quantity "one" is, and whatever the definition of "little" is — does the scenario just open up and it just works? The experts on OPA, the maintainers of OPA for lack of a better word, of course know all the things that are buried, and they know what to open up.
Whether it's somebody who takes the lead and works with that community, or somebody who's already in that community, I don't really care which; we just want to figure out how we make somebody like you more comfortable implementing it, because something was fixed or added or whatever. Yeah, and I guess it comes down to the exact scenario. The scenario I had was trying to get this metadata into a place where you could use it to enforce decisions at deploy time, which is what Gatekeeper is for. If I could summarize it in one sentence, it would be: a way to extend the version of OPA that is in Gatekeeper without making me build all of Gatekeeper from source myself. Right. And then the notary scenario is simply: somehow there's a config-provided policy that says the signing key must be one of these keys, one or more of these keys — and in our example, it must be the ACME Rockets key. There's my scenario. But there's also a second piece we've been looking at: not only must it be a signature using the ACME Rockets key, it also has to come from the registry specified in the CNAME. So there are basically two checks, because I might want to consume Wabbit Networks software directly without signing it myself; that's a perfectly valid scenario. I might want to consume software from Docker Hub and not do any additional signatures. And obviously — well, not obviously — we hope customers are not trying to deploy production assets from public registries, and that they're bringing content into their own registry; there's a big assumption there that sometimes gets overlooked. So their deployment would be deploying from registry.acme-rockets.io, but the key would come from Docker Hub or Wabbit Networks. So, first: is the key valid? And then the secondary check is: does the CNAME match the registry it's being deployed from? Those are the two we've come up with so far. And of course the keys can't be revoked, right?
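The two deploy-time checks just described can be sketched as follows. A real implementation would presumably be a Rego policy evaluated by OPA/Gatekeeper at admission time; this Python version is only to pin down the logic, and every name in it is illustrative:

```python
# Sketch of the two deploy-time policy checks described above
# (a real version would likely be a Rego policy in OPA/Gatekeeper).
# Key names and registry hostnames are illustrative only.

def check_signing_key(signature_key_id: str, allowed_keys: set) -> bool:
    # Check 1: the signature must use one of the keys configured in policy.
    return signature_key_id in allowed_keys

def check_registry_match(image_ref: str, expected_registry: str) -> bool:
    # Check 2 (optional): the image must come from the expected registry,
    # even though the key may originate elsewhere (e.g. a vendor's key).
    return image_ref.startswith(expected_registry + "/")

allowed = {"acme-rockets-key"}
image = "registry.acme-rockets.io/net-monitor:v1"

admit = (check_signing_key("acme-rockets-key", allowed)
         and check_registry_match(image, "registry.acme-rockets.io"))
print(admit)
```

The point of splitting the checks is exactly the scenario in the transcript: the content may be signed with a vendor's key obtained from Docker Hub or Wabbit Networks, while the deployment itself is required to pull from the customer's own registry.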
That's the first part we just discussed. So I think we can get those pieces into OPA; that seems like the baseline, and then — I don't know what I don't know — if there's something better, awesome. That's the kind of thing we want to start thinking about. Yeah, so that's the easy part. The hard part is then getting that OPA into a stock Gatekeeper that your customers, or anybody, would actually have and be running. The analogy is as if the Python interpreter had no way to install third-party libraries, so the only way to add new functions were to build your own Python interpreter — and then you go to your customer's data center and they're running the normal Python interpreter instead of your custom one. There's no real way to distribute those extensions. I mean, writing that policy is super easy for a prototype and a proof of concept, though. Maybe the thing to iterate on is: what would the ask be of the OPA community? Because if you think about what we've done with artifacts, the approach is that you can store anything in a registry, right? Nobody ever tells us what types of files can be saved on our local computer; there's a file system that says: I can store stuff, and I don't really care what you call it. We're trying to do the same thing with the registry. Signatures are actually a little more specialized: we hope the Docker client, containerd, maybe OPA, will special-case them; it's a valid enough scenario that they would add specific support for it. Like we've talked about before, Brandon pointed out a really good ordering issue as we push stuff to a registry — remind me here, Brandon. It was: push the signature, which doesn't ever have a tag — push — no, you push the image — I know what it was: it's updating a tag. So if I have framework v1, the tag being v1, and I want to ship an updated version of it for a security update or whatever —
— and I'm using the same tag, because it's a stable tag for a base image: then I would push the update as a digest first, so it's just sitting there; I'd push the signature, which references the digest; and then I would push the tag update. Did I get that right? Yep. That way, when the tag is out there, you've already got the signed content behind it. Right. So that's a perfect example of something we would hope the Docker client, or whatever the latest build scenarios are, would incorporate as a special scenario for Notary v2; I'm just using it as an example. I don't know if OPA is going to say, hey, Notary v2 is important enough for us to special-case, or whether, as we've done with artifacts, we store signatures in a registry and the registry doesn't know it's a signature and doesn't care. If OPA can validate signatures without knowing they're signatures, even better. So that's a set of conversations that I'd love for somebody with bandwidth to pursue with the OPA and Gatekeeper community. Does that make the volunteer effort too big? Dan, did I lose you? No, sorry, I'm here. Yeah, and I definitely don't have the cycles to actually try to get something upstream and embedded into the OPA community. All right, I'll continue the search; we're trying to get some things juggled even in-house in Azure. But to be fair, as much as we're hoping to get this out as fast as possible, I don't want it to be an Azure-driven thing. That's why having Niaz doing a lot of the key stuff is awesome, and I'm open to anybody and everybody helping here — not just for the sake of having multiple clouds, but because I believe it brings a wider diversity of perspectives. The broader the input we can bring into this, the better the solution will be. That's mostly what I had for today. I put some links in there just for people to poke at if they want; I have not had a chance to look at them yet.
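The push ordering described above can be shown with a toy in-memory "registry" (purely illustrative; real clients would use the registry API): push the new content by digest, push its signature referencing that digest, and only then move the stable tag, so a consumer resolving the tag never sees unsigned content.

```python
# Toy simulation of the tag-update ordering described above.
# Not a real registry client; structures and names are illustrative.

registry = {"manifests": {}, "signatures": {}, "tags": {}}

def push_manifest(digest: str, manifest: dict):
    # Step 1: push the updated content by digest only; nothing points at it yet.
    registry["manifests"][digest] = manifest

def push_signature(subject_digest: str, signature: dict):
    # Step 2: push the signature, which references the new digest.
    registry["signatures"].setdefault(subject_digest, []).append(signature)

def update_tag(tag: str, digest: str):
    # Step 3: only now flip the stable tag to the new digest.
    registry["tags"][tag] = digest

# Ship a security update behind the stable tag "v1".
push_manifest("sha256:new", {"layers": ["patched"]})
push_signature("sha256:new", {"key_id": "acme-rockets-key"})  # illustrative key
update_tag("v1", "sha256:new")

# Anyone resolving the tag now finds content that already has a signature.
resolved = registry["tags"]["v1"]
print(len(registry["signatures"].get(resolved, [])) > 0)
```

Reversing steps 2 and 3 would open a window where the tag resolves to a digest with no signature yet, which is exactly the race the ordering avoids.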
We've already been discussing among ourselves that there are some gaps in those, but I wanted to get them out there so people can see them; they're public, and you can see the iterations we're making. We do have some opportunities coming up this week. Dan invited me to speak in an open slot that opened up at OpenSSF on Wednesday morning, so I'll present an iteration of what we have here. And then at the Red Hat container plumbing day next Monday or Tuesday, there's the artifact model, where of course I'll use notary as the example of the reference artifact type that we'll be storing. So we're starting to get more visibility, more publicity, and hopefully more eyes and more feedback on what we need to do to continue shipping this thing. So, I am curious about the artifact changes. I started looking at that after seeing the links here in the meeting, and I'm curious about the change — what was it, PR 31 — where they've got another specs-go directory: they called it v2 and started putting some stuff in there. I'm curious, because the descriptor came in from the image spec and that got pulled over. Yeah, there are some details there that make it hard to figure out. This is the problem of trying to get it narrow enough that we can get iterative approvals going through: the problem is that this and the distribution spec and the image spec are fairly tightly coupled, yet separable. When we bring up artifacts and say, well, an image is a type of artifact, that kind of throws a wrench in the works. We created a PR — created an issue, I think it was — on distribution a while ago to refactor some of these things, where basically there would be a set of descriptors, and the core pieces that live across all artifact types would be in one repo. I don't remember if we even said whether it would be distribution or artifacts or whatever; we hadn't really gotten that far. I don't remember if we did.
And then what happens is that the image spec becomes "here's how I do container images," the Helm spec becomes "here's how I do Helm objects," and they could all point to a central thing. That's part of what you're seeing there. Specifically, the question was asked: where should these Go libraries live? We're trying to get some of that figured out. Like I said, I haven't looked at it yet; we talked about it on Friday before the weekend, didn't we? The overall thought process is that there's going to be a specs-go directory, or whatever, in each one of these projects — Helm, artifact, image — so all those different things have their own specs. I think where we're going — and I'm going to have to go back and look at it, and I might not be naming the right file — my understanding is that there's a Go file specific to each manifest schema. So there's image index, there's image manifest, and now we're saying there's a third for artifact manifest, the third being the one that supports reference types. I saw Derek here earlier, but I see he had to drop. The idea is that there's a Go file matching each manifest type, so somebody can interact with that manifest type. That was my understanding of the conversation. Okay, it confused me a little when I saw the descriptor, but I think you explained it pretty well. Yeah, I think what happened with the descriptor is that, instead of having the reference over in the image spec, we're trying to say: look, if you're not using the image spec, you shouldn't have to reference the image libraries. So the idea was to generalize some of the things, like the descriptor, to be available there — whether they're duplicated in multiple places, or we start shifting to storing them in a central place. Because OCI Artifacts is not supposed to be a specific artifact; it's supposed to be "here's how we generalize artifacts to go into distribution," and image is one of those.
It just so happens that some of those original things were defined in the image spec. So whether we keep them duplicated or whether there's a shift, I don't know what the latest thoughts in the OCI image v2 conversations are, to know how they want to factor it. I don't remember if they've thought about that level of detail, but that's why you're starting to see a shift there.

What we've also said is, for the links API, so that I can ask a specific artifact what things reference it (an SBOM or a Notary v2 signature, as an example), we weren't getting a lot of confidence that that would go into distribution directly. So we're proposing it as a distribution extension that we would make as part of the OCI Artifacts project. The links extension for now is being proposed through artifacts, and my hope would be that it gets lifted up into distribution, because otherwise it makes things more difficult. As it stands, I have to say: if your registry is OCI conformant and artifacts conformant, then you can store signatures in it. It's just one more step, but it makes it more complicated. I'd love to just be able to say: if your registry is OCI conformant, you can store Notary v2 signatures in it, and everything else. But I'm not sure everybody's bought into that, so I'm taking an incremental approach for now.

Sounds good. Questions, thoughts?

Okay. I do know that people watch this who aren't able to attend because of time zones. I think I did not post last week's recording, and I don't know how long Zoom takes to process the videos, but I will at least take the time between now and 10 to post last week's. By the time that's done, if this week's is done processing, I'll get them both posted. The point is, for those who are watching once I get it posted: if you'd like to volunteer and help with some of these things, like the OPA work, I'm sure Dan and others would love to collaborate and share their ideas.
We just need more people to spend more time on it and take it to the next level. And Niaz, do you think you'll be able to queue up the conversation for next week? Is that a reasonable expectation?

Yeah, I'm also blocking some time at 10 a.m. Eastern on Wednesday. I'm going to work with Marina, and anyone else who wants to join in can join in. I've sent a Slack message to Amy to get something on the calendar. What we did last time is we just used this same Zoom; it pops up and says it's recording. But I think when someone sees that show up, they don't automatically post it. I don't know if that's you, Steve, or someone else.

Yeah, I own the Notary ones, but yeah, just tell me. By all means, don't worry. Having Amy put it on the calendar is awesome, but just use this one Zoom link, that's fine. I don't know if I'll be able to make that call, but just tell me that it was done, or it could be a poke and a reminder, and I'll process it and get it uploaded as well.

Yeah, we've used the same Zoom link in the past, so I'll let you know what we do in the meeting and put the notes in the meeting agenda. Awesome.

If I'm not being too demanding, could you post announcements of those times in the Open Containers IRC channel, which I think is bridged to the Slack?

Oh, because you're not a Slack person. Right, and the other one's not bridged to IRC. Yeah, to be fair, they're separate projects, so there's just a lot of overlap of people. Basically, Niaz, you just send to the dev alias; it's dev at opencontainers.org.

Sarge, is that what you're asking? There's the Open Containers IRC channel. I was just saying, if I see an announcement there of a meeting, then... Yeah, I'm not sure where that's linked. Is that just in the general channel? I don't know how it all works.
And when I send emails to dev at Open Containers, does that get there as well? There are a bunch of wormholes that are connected, and I don't know where they all are. Well, I will see those emails, so that's fine too, yeah. Okay. Thanks, and sorry to be a bother. No, no, we're trying to cater to everybody's style without being every style. All right, if there's nothing else, I'm going to go process some videos, and we'll see you folks on Slack. Thanks. Thank you.