Good day folks. CNCF distribution returns. Yeah. I was about to wonder if we had the same person back from last week. Yeah, well, that's why everybody's got to sign in now, unfortunately. Masked marauders. So from an agenda point of view, Niaz has a hard stop at 9:30. So, Niaz, did you actually get that on the agenda? Yeah, I put in an item and I can go over a quick recap. Marina and I had a call around key management last week, and as a result of that, I've updated the pull request. The pull request now has all the comments that we had discussed when we looked at the previous requirements as well. We changed this into a key management requirements doc, and then we'll have an additional doc for scenarios; that way we can do iterative pull requests. There are three areas that the requirements doc calls out that will need three separate docs to dive further into. One of them is around key or signature expiry. This is one where a timestamp authority also comes into account; we'll have a separate doc to look at the pros and cons of different approaches. We'll have a doc around root key rotation; this is one that I'm working on. And then we'll have another doc around signature allow lists and deny lists. That's essentially looking at whether we go with a trusted update model, which TUF provides, or a revocation list. Both have pros and cons, so we'll dig into each one of those. I also spelled out what the prototype sequences should be. The first prototype is something that we can already start working on while we work out some of the key rotation and revocation details. Awesome. That's awesome. Just to maybe incorporate some of the prototypes: I did that OpenSSF talk this last week, and I realized that the steps I had documented to demo the high-level goal weren't actually repeatable if you tried to do them yourself.
For the demos that we were doing, we hosted it on an app service, which allowed us to have a public endpoint with an SSL cert on it. And it turns out there were some gaps in the nv2 CLI that couldn't actually work with an insecure registry. So we did two things, and I'll explain why I'm going into this detail in a second. One, we fixed it so you can actually have support for untrusted registries and actually use a local instance of Docker, or CNCF distribution to be fair, now with the PR that was written to support the linked items. So we made the changes to nv2, and I don't remember if it was ORAS or just nv2, or maybe it was docker-generate, the thing that Shiwei did. And I also wrote out the full steps it takes to actually do this from beginning to end: basically I wrote down my own demo steps and published them. In fact, I'll put that in the notes here. So now that we actually have something provable that works, it'd be great if we can incorporate the key management prototype that you're doing into that. I usually do a test where I try to push, and then I try to pull without the keys configured, and the pull fails. Then we put the key link in there, and the pull succeeds. It'd be great if you could do a similar thing: here I could pull, and now I'm going to revoke the key, or invalidate the key, whatever you want to do, and now the pull would fail. Whether that's the exact demo you want to do or not, I'd love to see that key revocation in that same end-to-end prototype, so we could all demo it and flesh out the gaps. Yeah, we can quickly go through the prototype stages, if that makes sense. Great.
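To make that revocation test concrete, here's a minimal sketch of the flow being described. This is a toy model, not nv2 code; the class and method names are all hypothetical: a pull fails with no key trusted, succeeds once the key is configured, and fails again after the key is revoked.

```python
class TrustStore:
    """Toy model of the demo flow: a pull verifies the image against the
    currently trusted keys; revoking a key makes subsequent pulls fail."""

    def __init__(self):
        self.trusted = set()

    def trust(self, key_id):
        self.trusted.add(key_id)

    def revoke(self, key_id):
        self.trusted.discard(key_id)

    def pull(self, image, signing_key_id):
        # A real client would verify a signature here; we only check trust.
        if signing_key_id not in self.trusted:
            raise PermissionError(f"pull of {image} failed: key not trusted")
        return f"pulled {image}"
```

Usage mirrors the demo script: pull (fails), trust the key, pull (succeeds), revoke, pull (fails).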
I'm just getting the link to that. Was that the extent of your update? Did you want to go through any of it? Yeah, that was the extent. Okay. And let me just put this demo script in; I'll paste it in our chat as well. Anything else, or do we have Marina take the floor? Oh, that's all I had. Oh, just one last thing on prototyping. I started hearing from some people reaching out about doing demos and prototyping with OPA and Gatekeeper. So for anybody else that's interested, I'll see if I can get something basically written up, just to be a stake in the ground, so there's at least a reference point for what we were thinking. And then hopefully whoever works on that will take it to the next level, but at least they know what we've been thinking so far, so they have something to work with and there's some continuity. So with that, Marina, the floor is yours. Yeah, I was just hoping to give an update on the TUF prototype effort. Mostly I've been working on a specification describing how TUF could work in Notary v2. There's a pull request open; I linked it in the notes. I guess I can just go through the document, unless anyone has questions already. I can try and share that. Let me see. All right, can everyone see that? All right, cool. This is basically an overview of which pieces of TUF metadata would need to be put into the registry. I figured this could work alongside the other prototype efforts and fit into the OCI artifact format to then go onto the registry itself. I focused on describing the processes we'll need to develop to make this happen, and then the formats of those pieces of metadata. So I'll just go through each of these sections.
The root metadata is pretty straightforward from a design perspective. It just needs to be signed offline to keep the process as secure as possible, and then uploaded. These commands are all examples for now; we can update them as the design evolves. They're just to give a sense of what it could look like. The idea being, you create the root metadata and then upload it, in a format that can be pushed to the registry, which I describe later. Then this is the workflow for a developer. They'd sign the image with an image name and a location of a key, which doesn't have to be a file; it could also be a Yubikey or some other kind of key store, plus the name of the role that is signing it, to make sure they have permission. They'd sign it, they'd upload it, and as well as uploading the actual targets metadata, they would notify a snapshot process, which I'll describe in a minute. The snapshot process is responsible for the snapshot and timestamp TUF metadata. It's a separate process from the developer that runs automatically to update those pieces of metadata and upload them to the registry. This is a way to do snapshot that I think addresses a lot of the concerns that people in this group have had. The idea is that you would have a service, run either by the registry or by basically any external server that has a verifiable current time, so pretty much any server, which can then provide timely metadata and keep track of all the current versions that should be on the registry.
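The snapshot process described above can be sketched roughly like this. This is an illustration of the shape of the data, not the spec's actual format; field names here are assumptions for the sketch:

```python
import hashlib
import json


def make_snapshot(version, metadata_versions):
    """Record the current version of every targets/delegation metadata
    file that should be on the registry right now."""
    return {
        "_type": "snapshot",
        "version": version,
        "meta": {name: {"version": v} for name, v in metadata_versions.items()},
    }


def make_timestamp(snapshot):
    """Attest that `snapshot` is the most recent one; clients check this
    hash before trusting the snapshot they downloaded."""
    canonical = json.dumps(snapshot, sort_keys=True).encode()
    return {
        "_type": "timestamp",
        "version": snapshot["version"],
        "snapshot_hash": hashlib.sha256(canonical).hexdigest(),
    }
```

The point of the split is that the developer only signs targets; the snapshot/timestamp pair is produced automatically by a separate service with a verifiable clock.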
So you would have that snapshot that knows what all the current metadata is. Then there's this delegations piece, which I think is one of the big pieces to figure out, and it figures out how to delegate keys. Each developer has a key, and one of the cool things TUF does is that if you just have the root key as a user, you can use that root to find the key that's trusted to sign a particular image or thing you want to download. So this is just some information about how to configure that in the first place, so that the user can automatically download this and get to the image they'd like to install. They can see how it was delegated, and they could also remove it by revoking that delegation. After adding any number of delegations, they can upload the role that's doing the delegating, and it'll be available to users, who can then use those delegations to find the actual images they want to download. The download is pretty straightforward: this just describes the things the user would need to query in order to download an image, and this section describes what the registry queries would have to look like. It would be the root metadata, timestamp, snapshot, and then the chain of targets roles down to the metadata that signs the particular image being downloaded. A quick note about deleting metadata: when it's old, it can be deleted, but it doesn't have to be deleted right away. This is just to allow registries, over time, to not accumulate excessive amounts of TUF metadata. Then this is, I think, the key part: what the metadata would look like.
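The client's query order described above can be written down as a tiny helper, assuming a hypothetical file-per-role naming scheme (the actual registry paths were still being specified):

```python
def metadata_fetch_order(delegation_chain):
    """Queries a client would make, in order: root of trust first, then
    timestamp (freshness), snapshot (current versions), and finally the
    chain of targets roles down to the one signing the requested image."""
    return ["root.json", "timestamp.json", "snapshot.json"] + [
        f"{role}.json" for role in delegation_chain
    ]
```

For example, an image delegated from the top-level targets role to a vendor role would be fetched via root, timestamp, snapshot, then the two targets files in chain order.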
And I added this media type to try to make it fit in with the current version of the artifact spec, and hopefully this will all work as is. All of the files have this general type: a TUF media type with the role info for the particular role, and then any attached signatures indexed by key ID (KID), and we can discuss that more. Basically, they all need to be included in the registry. And then we have the different types of metadata. Root is pretty straightforward: it just lists the keys for the other roles. It basically says, this is the root of trust, and these are all the other top-level roles. Snapshot, similarly, is pretty straightforward: it has a version number, and most importantly, it lists the version number and optionally the length of every other metadata file that's going to be in the registry associated with this root. Targets is where it gets a little more interesting; this is where you actually sign the images. The targets here include hashes of all the images that this targets file is attesting to, and then the delegations section adds any delegations to other targets roles that could sign images on behalf of this role. There are a lot of fields here to deal with specific edge cases, but in the basic case it just lists another role that can sign. And then the timestamp role is the simplest of them all: it just makes sure that this is all timely, and that the snapshot is the most recent snapshot within a certain timeframe. And then there's example metadata with some real examples in here, using various key types.
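The general shape being described, a role payload wrapped with a media type and signatures keyed by KID, might look like this. The media type string here is a placeholder; the real value was still under discussion in the spec PR:

```python
# Placeholder media type -- NOT the value from the spec, just illustrative.
TUF_MEDIA_TYPE = "application/vnd.cncf.notary.v2.tuf.{role}+json"


def wrap_role(role, role_info, signatures):
    """Package one TUF role for the registry: a media type naming the
    role, the signed payload, and signatures indexed by key ID (KID)."""
    return {
        "mediaType": TUF_MEDIA_TYPE.format(role=role),
        "signed": {"_type": role, **role_info},
        "signatures": [
            {"keyid": kid, "sig": sig} for kid, sig in signatures.items()
        ],
    }
```

The same wrapper would apply to all four top-level roles, with only the `signed` payload differing: keys for root, version lists for snapshot, hashes and delegations for targets, and a freshness attestation for timestamp.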
Any questions, or any areas I should go into more depth on? That was a very brief overview, so I'm not sure where people have questions or would like to get involved. Yeah, so this looks like a great amount of detail trying to align this with how you could push stuff into the registry with the artifact approach, so we don't have to add a whole bunch of extra APIs. I'm still not sure how we deal with the immutability aspect of it, but I think the thing I'm struggling with the most is that we're going into detail about how we can use TUF for this, with the assumption we would use TUF, but we haven't really explained what requirements are driving the need to use these things. I have other documents that describe more of that piece: what TUF does and which problems it solves. The idea here is just to show that it's possible with the existing registry APIs; like you said, this is what the metadata would actually look like. As far as problems it's solving, it really is solving some of the key management. The delegation and key discovery piece of TUF is something that hasn't been addressed in any of the other parts of the Notary effort, as well as a lot of the compromise-resilience pieces and not relying on any single key for any single purpose. I think something, Marina, that would be helpful here: for the key management requirements that we have agreed upon, calling out where TUF is addressing them and how it's addressing them would help. For example, around key expiry, key rotation, and the signature allow list and deny list, we still have additional requirements to define. But we have some requirements around where keys need to be stored versus how the trust configuration needs to work. Right.
So if you can call out how this prototype addresses those, that would be helpful. Yeah, for sure. Maybe address some of those key management things, like how to manage these root keys. I think the biggest place this deals with that is through the use of delegations, which does a lot of that key management piece for you and includes revocation, as well as the timestamping that's required for either revocation lists or lists of valid keys. Okay, thanks. The thing I keep weighing is the balance between so secure that it's too difficult to use, and so easy to use that anybody can get around it. We have to find that balance. And I think that's why I'm trying to prototype this out and make a working demo of these features: to show the workflow, and make sure that it's a workflow that's easy enough to use in practice. I get it. I guess I would start with, because any amount of complexity has to be justified: here's the problem we're trying to solve, and make sure we actually have buy-in that that is a problem we want to solve. Yeah, but I feel like we have a lot of documents that describe the problems we're trying to solve, and I do think this addresses a lot of those problems. I think we need to put them in our Notary v2 requirements. Maybe they're queued up and haven't gotten merged yet. That's the part I would really focus on. I feel like if we spend too long only ever talking about requirements, we'll never get anything actually written, because we'll be stuck at that stage no matter what the design is. We need demos, and we need actual solutions to the problems as well. I don't know.
I feel like there's definitely a balance there, to do both of those things, and maybe I need to do more of the one. But I think there's also value in having things we can show, and showing how it's usable, with the complexity hidden behind a simple workflow. As long as it's solving a problem that we said we want to solve; that's the piece I'm trying to get at. Let's make sure that we've identified the problems we want to solve, and then we can figure out how to solve them. Okay. So I'm going to put a bit of that on the requirements; it's still on me. I know I owe at least one or two PRs there for the requirements doc. Yeah. I'll find out what the video is from Wednesday and get that posted, since I guess you guys had a meeting and it didn't get posted yet. I think there were a couple; mine was earlier, and then there was probably one from a week or two ago that I wasn't in, on the requirements, that Steve was talking about from the Wednesday meetings. Yeah, just ping me. Sorry, just a quick detail so we can catch up with the conversations: ping me any dates that things got recorded, or that you had meetings that should have been recorded, and I'll ping Amy and get them posted. So those are the two Wednesday ones, Marina. Yeah. Okay. A question, a little bit roundabout, but getting back to you, Steve, on the Notary side: what do you think about the implementation of the registry? Does that include a lot of the artifact stuff that we've been discussing, and where does that stand? Well, I don't want to circumvent Marina. Are you good with what you wanted to cover? We can pick this back up from right here. Okay. All right. Fair enough. Okay. So what we've been trying to do is,
in the stuff we did with Helm and other specific artifact types, rather than try to make a specific technology work in registries, we keep shooting for generic storage of artifacts. What we found is we were able to make individual, standalone artifacts pushable to a registry pretty straightforwardly by just using that config media type. As we got into the Notary and SBOM ones, we realized we needed to maintain the immutability of the digest and the tag of the manifest they get pushed to in the first place, plus the ability to add another signature. And this was driven from requirements, right? We said that was a requirement: we can't change the tag, we can't change the digest or the content of it. So that forced us to deal with some different design patterns. The design patterns say I can add something to a registry and link it to existing content, and I can add lots of additional things and link them to the same content without ever having to update an existing manifest. That's why we're not using index, for instance. That generic approach has turned out to have a lot of universal support, not just for Notary and SBOMs; there's a bunch of other confidential-computing work that teams are doing that want to leverage it as well. So I would say what we're trying to do is continue down this approach of coming up with a generalization in registries that will support these scenarios. If there's something new that we don't yet support, you know, if there's a requirement here where TUF has an answer and requires another attribute or some change to the basics of registry storage, then we can incorporate that. Does that answer your question, or is that too much detail? Not enough detail, actually. So specifically, in the Notary v2 registry repo we've got an image out there for Notary, the nv2 prototype-1 tag.
That's part of the prototype that you're working on? Oh yeah. If you go through the demo steps that I put there, basically it's a built version of distribution. I forget exactly how I put a couple of them out there: I put out a wabbit-networks image, I put out a distribution build under notary, I put out a couple of things to make it easy so people don't have to build them themselves. But the idea is that it's a prototype of what would be in distribution, or in a registry. So does that registry include the artifact query APIs, to be able to query and ask which artifacts are there that point to this one? If you look at the label, it actually says prototype-1. It was a really rough hack, an under-the-covers thing, just to prove we can do the experience. And there'll be a prototype-2 that actually has that manifest links API on it. Okay, so not there yet, but you're not faking it by just hacking tags or something? No, there's actually a separate API, a third "links" API that the nv2 client currently calls, so that we don't pollute tags. Again, we were shooting for the experience; we didn't want tags to get polluted. There's a prototype of it, but we realized there's a piece missing in the payload, so in the next week or two we'll get that posted. Yeah, because circling back to Marina's proposal, I was trying to think of how we can take a lot of the stuff she's working on on her side, pair it with a registry that has some of the artifact stuff, the query APIs and things like that, and actually throw some of this together to implement some of these API calls and make it look a little more like something you can understand and play with. We should be able to prioritize that this week. We do our internal meeting on Tuesday evenings, so let me check in tomorrow night and see if we can turn that around and hopefully demo it by next Monday.
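As a rough sketch of what that links query could look like from a client's point of view; the URL shape and field names here are guesses, since the actual API hangs off the distribution spec's extension model and hadn't been posted yet:

```python
def links_url(registry, repo, digest):
    """Hypothetical shape of the prototype's 'links' query: ask which
    artifacts reference a given manifest digest. The real path may differ."""
    return f"https://{registry}/v2/{repo}/_links/{digest}"


def find_linked(link_index, digest, artifact_type=None):
    """Server-side stand-in: return artifacts linked to `digest`,
    optionally filtered by type (e.g. only signatures), without ever
    touching or polluting any tags."""
    links = link_index.get(digest, [])
    if artifact_type:
        links = [l for l in links if l["artifactType"] == artifact_type]
    return links
```

The key property being demonstrated is that new signatures or SBOMs attach to an existing digest via this index, so the original manifest and tag never change.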
That'd be awesome. Yeah, that'd be really cool. So back to what you were trying to do. Yep. Yeah, so that is really the next piece: making this work with the registry API calls to show the actual workflows. There are a couple of documents, I think they're in PRs right now, that walk through the workflow, but I think it's better for everyone to see it in code, in demo format. So that would be a great next step here. As far as the requirements, I'd happily write up another thing describing how TUF addresses some of them, but I feel like we've gone over that a number of times. I think we've gone over it; I don't think we have good consensus that the answer solves a problem that we're prioritizing. And that's the point. Well, it's a problem that's listed in our requirements docs. But I can call that out again. Okay. Yeah, because I don't think the signature-only design addresses revocation, key distribution, timestamping, and some of the other things that you really would need for any solution that uses keys. Totally agree on revocation. Distribution, I'm not sure. The timestamping, as long as a key expires, you know, I think that's the question. Well, for revocation to work, you need timestamping, because unless you're only using key expiration times, then for either a revocation list or the TUF-style listing of trusted keys, you need to know you have the most up-to-date version of it. So in either case, you need some kind of timestamp that verifies that. Yeah. Part of what I'm trying to keep in scope as we work on a lot of this is that I feel like the TUF work might lag some of the other work we've been doing on Notary v2.
And so mentally I'm trying to keep track of which pieces we need to make the nv2 prototype work at all, which things can be added on later, and where the value-adds are in those add-on pieces. That's part of what I'm trying to keep in mind on my side: how we construct this as a multi-stage process. And maybe not every user wants the full timestamping stuff going on. They don't want to run a separate server; they just want some basic signing, and they'll deal with the issues they know they're going to have by doing it that way. Yeah, I think that's totally valid. I just really think that if people need the full security, there needs to be an option that's built into this system. And I do think creating options is really important, and I'm trying to build those in here. Agreed. Yeah. I'll hesitate on the options one. There should be a standard that we're trying to ship, and if it's optional, why is it optional? We should be able to find the right balance. But I think the right balance for an organization with two people is very different from the right balance for an organization with 500 people and millions of people using their software. I just don't think those could ever be exactly the same. It could be within the same format, within the same spec, but they might do slightly different things with their key management, because they can't host the other servers and other things you'd need for the full effect. Well, that's kind of what we're seeing with cloud providers and vendors, right? And it's not just cloud providers: they could provide those services, including as software you could run yourself.
Because what we're trying to do is: a company of thousands of people, the Microsofts, the NVIDIAs, just picking some software companies as an example, needs to be able to make their software available in a secure way that a two-person company running on AWS or Google or Azure, or even on-prem, can easily consume. That's the real balance we're trying to strike. But what about the smaller companies that are producing the software? Because, like, service... Well, sorry, if I may jump in: I think one of the things that's important to keep in mind here is defining where the interfaces are. Technically, you should be able to use any sort of key management, as long as the signatures and the revocation data or allow-list data flow through, right? So it's really about defining the interfaces: if someone wants to do key management their own way, what information would they need to plug into the system? And I think that's how we would go about it, which is why some of the requirements docs we still have left to flesh out should capture what those interfaces look like. I don't think we want to be prescriptive about how keys get managed at that level of detail. A lot of companies are going to have single sign-on; they're going to have identity management pieces in place. Some are going to want to use things like TUF. So it's going to be mix-and-match, but we just want to make sure there is a standard for how that information gets propagated to customers, so you really don't care how companies manage their keys; you're still getting the information you need. And to add on to that, this will probably come as a little bit of blasphemy, Steve, but not everybody I work with is in the cloud. So I do have some of these cases: there'll be an IoT device out in a field somewhere that doesn't have a consistent connection to the network.
And so they're going to have a different requirement for how much they want to allow these keys to drift, or potentially expire, or something like that. They're going to have different requirements for whether they even care about having that snapshot, versus somebody working directly in the cloud who just wants to know, hey, these snapshots need to be continuously and regularly updated, either by some server doing that for us or by one we want to run ourselves. I think there are different use cases out there. I don't think we're saying something so different. How the keys are managed, we've always said that should be pluggable, however they want to use it. We've said we want to support offline, air-gapped environments. Including IoT is extremely important, because if anything, the IoT devices matter even more: they're literally roaming around in the field on the public internet in many cases. So we absolutely have to have a solution for that. Where I'm pushing harder is: if I acquire some software from a large company and I want to consume it as a small company, the small company shouldn't have to implement something monstrous just to be able to consume that software securely, including on an IoT device, right? It could be a small company of two people, or a device with a very minimal footprint of memory and CPU running on some farm tractor. So we want to make sure the solutions scale, and that there isn't the typical Windows "options of death," where there are so many choices that I can't figure out how to use the thing. But how somebody manages their private keys is completely up to them. Yeah, I think that's kind of what I meant by options: more options for where you host things and how it's centralized, versus options in the APIs. Because the APIs need to be somewhat consistent for things to work together; if you have a hundred APIs, a user can never download two pieces of software.
And if we need to add, look, I've never been against adding new APIs or new functionality, right? It's just code; we can make it do whatever we want. But there are humans involved: you have to figure out how they can use this thing, including how they can write code against it. So if we need another timestamp server, for instance, I'm just hearing words put out there, and there's a really justified requirement for it, then absolutely, we'll figure out how to make that work. I'm not trying to push back on that. That's why I keep pushing on: what is the requirement driving the pain, and how do we make it minimally painful while supporting the usability and security confidence that somebody needs? Yeah, my fear is honestly less about the difficulty for a small organization of running some of these servers, like a timestamp server that's constantly updating. Honestly, it's the reverse case. It's thinking of the Docker Hubs out there that have millions and millions of images. If they need to update a timestamp on every one of those images every hour, that's a lot of overhead on them that doesn't scale nicely with the CDN and some of the other challenges they might have. So my fear honestly comes from the flip side of this. Yeah, on the TUF team we've actually discussed that particular case a lot, because Docker is just, there are a lot of images. Something like a snapshot starts to become a little less scalable once you get into millions of images. That's where a couple of ideas come up: either having the TUF root at the organization level, or some kind of snapshot Merkle tree or other space-saving timestamping method that could be used to sign one central thing that is then cryptographically linked to all of the other pieces.
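The Merkle-tree snapshot idea can be sketched in a few lines. This is a toy illustration of the general technique, not the TUF team's actual proposal: instead of timestamping millions of per-image entries, the timestamp process signs one small root hash that commits to all of them.

```python
import hashlib


def _h(data):
    return hashlib.sha256(data).digest()


def merkle_root(leaves):
    """Fold per-image snapshot hashes into a single root, so the
    timestamp process signs one value instead of millions of entries."""
    level = [_h(leaf.encode()) for leaf in leaves]
    if not level:
        return _h(b"").hex()
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # odd count: duplicate the last node
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0].hex()
```

An individual user then only needs the signed root plus a logarithmic-size inclusion proof for their one image, which is what keeps per-client metadata small even at Docker Hub scale.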
Which I think sounds more complicated than it is, but basically it's trying to lower the amount of metadata that individual users have to deal with for something as big as Docker. Yeah, and not to be a broken record, but let's just figure out what the requirement is. Because in one of those prototypes, for scale, we were assuming we could bring aggregated data, it's not quite anonymized, but aggregated, across multiple customers; we were supposed to create a timestamp that spanned what I call the Coke-and-Pepsi scenario. From a trust level, we can't do that. We can't get access to two customers' data and do any kind of aggregation, because under a lot of our contracts we can't even acknowledge that a customer exists. So we need to make sure that secure boundary stays within a customer's scope. But again, it goes back to: what particular problem are we solving? Then I'm sure there are a dozen different designs, but we'll know the designs meet the specific requirements that we have. So maybe that's the place to focus: there are some PRs out there that we, including me, haven't had the chance to finish reviewing and get merged. Let's focus on getting that stuff down, because that's really helped a lot. Early on, for those of you who've been with us for over a year now, we went around a lot on things related to using index and a bunch of different designs, but we came back to: we cannot change the manifest, we cannot change the digest, people need to be able to do deployments on that. And that helped guide designs that evolved and created new APIs and new implementations, because we had to meet that requirement. So once we know what problem we're solving, I think the designs will fall out. And if we have to build monstrous things that are easy to run, whatever that means, we'll design it, write the code, and people can just run it.
That's the beauty of being able to download software and run it completely on-prem. There should be no dependency on a cloud to run this stuff. If somebody wants to run it completely on-prem with their own IoT hardware, there should be no problem doing that. Okay, there's a question about IoT. What is internet of things? I'm not sure what the question is there, but internet of things, right? Small devices, am I missing something? He just said "with their own IoT"; I might have drifted off and missed a conversation about internet of things. Yeah, in other words, Brandon was making a point that I just wanted to balance: if somebody wants to run some IoT hardware, they should be able to do that without depending on a cloud, right? There are certainly customers doing that, and we want to make sure that this environment works for them as well. The solution we're providing should not depend on a big cloud being able to run monstrous infrastructure that's too big for any two-person company to run themselves. It's not just that they could use a cloud, but that they could run it themselves. That's the scope of what I'm trying to say. This is not a big-company solution. This is a solution that should scale from big companies down to small companies, so small companies can consume it and run it all on their own. Okay, what else do we have? I did hear that more people want to be able to party on the prototyping we've been doing. That's why I did complete the demo steps, so they are runnable by somebody else, and I'd love for others to go through them. Please don't pick apart the code that implements it. It is totally, what is it, cable ties and baling wire and gum and so forth. What we were shooting for was: is the experience possible, and can we completely hack it and make it work?
Now that we know the experience works and we like it, we're going back and iterating on the actual code implementation. There were some comments that we're just using docker save, for instance. Yeah, we did, because it worked. This is why, to be quite honest, some of the devs were not willing to make their PRs public: they didn't want to be criticized for the code. And that's okay. Let's shoot for the experiences; if we like the experiences, we can go back and change the code to the most properly factored, clean design. So prototype one was definitely a hack to prove the links work. Prototype two will have the artifact links API that we've been talking about. Basically, as with a manifest, you give it the digest, and then there's a links API, if I remember right, and it hangs off the extension model we've been doing in the distribution spec. We'll have that within the next week, and that should enable people to start pushing other things. I've been trying to support Nisha in her SBOM work as well, so they can start doing that. And we have some changes coming to ORAS too: not just pushing individual artifacts like we do with ORAS today, but you'll be able to push something with ORAS that says, by the way, this links to something that's already in the registry. We want to get those out so people can start doing more experiments. Any other questions or thoughts? Okay, so on the videos, I'll follow up with Amy. If there are other videos you record, just ping me or ping Amy; you don't have to block on me. She posts them to the YouTube channel, because somebody has to grab them and post them. And then anybody can do the next part: once they're on the YouTube channel, you can go back to the hack doc. I just put "recorded video" under the date so people can find it.
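To make the links-API idea above concrete, here is a minimal client-side sketch. The endpoint path (`/_ext/links`), query parameter, and response shape are all assumptions for illustration; the real extension API is still being worked out in the distribution-spec work described in the meeting.

```python
# Hedged sketch of querying a hypothetical "links" extension endpoint
# for artifacts (signatures, SBOMs, ...) that reference a manifest.
# Endpoint path and JSON shape are made up for illustration.
import json
import urllib.request

REGISTRY = "http://localhost:5000"  # assumed local CNCF distribution instance

def parse_links(body: dict):
    """Extract (artifactType, digest) pairs from an assumed response shape:
    {"links": [{"artifactType": "...", "digest": "sha256:..."}, ...]}"""
    return [(link["artifactType"], link["digest"])
            for link in body.get("links", [])]

def list_linked_artifacts(repo: str, digest: str):
    """Ask the hypothetical links endpoint which artifacts reference the
    manifest with the given digest (the 'hangs off the manifest' model)."""
    url = f"{REGISTRY}/v2/{repo}/_ext/links?digest={digest}"
    with urllib.request.urlopen(url) as resp:
        return parse_links(json.load(resp))

# Offline example of the parsing step with a made-up response body:
sample = {"links": [
    {"artifactType": "application/vnd.example.sbom",
     "digest": "sha256:abc123"},
    {"artifactType": "application/vnd.example.signature",
     "digest": "sha256:def456"},
]}
print(parse_links(sample))
```

In the pull-verification demo described earlier, a client would use a call like this to find the signature linked to an image, verify it against the configured key, and fail the pull if the key has been revoked or no valid link exists.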
People shouldn't have to go to YouTube to find it; I want our hack doc to be the index of information. So if you're hosting a meeting, don't block on me: ping her directly and say, hey, we just finished this meeting, can you post it? Let me know when it's posted, and then you can go back to the hack doc and add it to your notes as well. So with that, if nobody's got anything else, we'll pick up next week. Thanks, folks. Thanks. Bye.