Hey folks, how are you doing? I'll just give folks a couple of minutes here; we have the recording for others as well. Hey Jesse, welcome. We'll let folks sign in while I try to catch up with the comments on the PRs to date. Marina, did you have a link you wanted us to look at, or did you just want to present? How did you want to do your portion? I think I'll probably just present; it's combining a couple of different branches of TUF right now. All right, that looks like five minutes, so Marina, you have the floor. Okay. It's not letting me share multiple terminal windows, which is interesting. You might have to share your desktop. Oh, yeah, there it is: desktop. Okay, cool. Can you see that? All right. So what I have here is a basic registry setup and a basic client setup, just to demo a couple of TUF features. First of all, the registry has some basic TUF metadata set up, with a lot of different sample repositories within the registry for demo purposes. Most of them are pretty empty; they're just there so we have some stuff to work with. Then there's the top-level metadata: root, snapshot, timestamp, and targets, which we'll get into more in a minute. And there are a few example artifact files within all those example repositories, so we can show how those move around. It's pretty bare for the demo. Then over here on the client side, in order to demonstrate how this would look for a client that starts out ephemeral, the only thing currently on this client is the root metadata.
This can be simplified even further by having just the root key or keys available on the client; that's the only thing this client needs to start out with for the system to work. With all that set up, I'll host the registry here so we can pretend it's on a different machine hosting all this metadata. Then over here on the client, we'll download our first file. The client makes some requests over there, and as you can see, file1 got moved over into the targets directory on the client. One thing I want to show really fast: it obviously downloaded all the metadata needed to fetch this file, and one of the new features I want to show is this snapshot file and what it looks like. The difference between this snapshot and the classic TUF snapshot you may be familiar with is that, classically, the TUF snapshot lists every single targets metadata file on the registry or repo along with its version number, so that you can't roll back to earlier versions; it keeps track of all those version numbers. But there were concerns about things like private repositories on a registry, whose information would also be visible in the snapshot file, as well as scalability for really large registries with a lot of repositories and a lot of entries, which makes for a really big file that would have to be downloaded every time.
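To make the classic snapshot's rollback guarantee concrete, here is a minimal Python sketch. The dict shape and function name are illustrative, not the real python-tuf API: the snapshot pins a version number per targets metadata file, and the client refuses anything older than what its trusted snapshot records.

```python
def check_rollback(trusted_snapshot: dict, name: str, offered_version: int) -> bool:
    """Return True only if the offered targets metadata is not a rollback.

    `trusted_snapshot` is a simplified stand-in for TUF snapshot metadata:
    a map from targets metadata filename to its last-seen version.
    """
    recorded = trusted_snapshot["meta"].get(name)
    if recorded is None:
        return False  # unknown metadata file: refuse it
    return offered_version >= recorded["version"]

# A toy snapshot covering two repositories on the registry.
snapshot = {
    "meta": {
        "wabbit-networks.json": {"version": 3},
        "repo0.json": {"version": 7},
    }
}

assert check_rollback(snapshot, "wabbit-networks.json", 3)      # current version: accepted
assert not check_rollback(snapshot, "wabbit-networks.json", 2)  # replayed old version: rejected
```

This is exactly why the classic snapshot grows with the registry: it must carry one entry per targets metadata file, which is the scalability and privacy concern raised above.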
To solve that, this design instead uses a Merkle tree to provide the same consistency guarantees across the registry, so that you can't replay old files or mix and match different files on the registry, but the snapshot only lists a fixed amount of information. The length of the tree path is proportional to the log of the number of repositories. The only information that's leaked is the number of targets metadata files on the registry; nothing about what's in those files is leaked. These are all secure hashes, so you can't learn anything about how they were made or what other artifacts are available in a repository. Hopefully that all makes sense as far as how it works. As I'll demonstrate, this still provides the same protection against rollback attacks. So if we come over here and copy this wabbit-networks JSON metadata, which is the metadata that describes this actual file1, which I can show you here. It's currently the same in both places; it has the actual hash of the file and all the information about it. Say someone were to watch this on the network and save this old wabbit-networks metadata file; I'm just copying it into this other place. Then say we find a security patch for this wabbit-networks image, and we want to make sure it gets pushed to all our users. So over here we'll update the registry. In real life this would be an actual image file with the actual artifact information; just for the demo, it's a bunch of text. We update that file so that it changes, and the hash will no longer match, because it's a new file.
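The tree construction described above can be sketched as follows. This is a generic SHA-256 Merkle tree, not necessarily the exact layout the demo uses: a client verifies one repository's snapshot entry with a proof whose length grows with log2 of the number of repositories, and the proof reveals only sibling hashes.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold leaf hashes up to a single root, duplicating the last node
    on odd-sized levels (one common convention)."""
    level = [_h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, index):
    """Sibling hashes from leaf `index` up to the root: the 'tree path'
    a client downloads instead of the whole snapshot."""
    level = [_h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sibling = index + 1 if index % 2 == 0 else index - 1
        proof.append((level[sibling], index % 2 == 0))
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify_leaf(root, leaf, proof):
    """Recompute the root from one leaf plus its sibling path."""
    node = _h(leaf)
    for sibling, leaf_was_left in proof:
        node = _h(node + sibling) if leaf_was_left else _h(sibling + node)
    return node == root

# Eight toy repositories: the proof is only 3 hashes (log2(8)).
leaves = [f"repo{i}.json:v{i}".encode() for i in range(8)]
root = merkle_root(leaves)
proof = merkle_proof(leaves, 5)
assert len(proof) == 3
assert verify_leaf(root, leaves[5], proof)
assert not verify_leaf(root, b"tampered-entry", proof)
```

Nothing about the other seven repositories is disclosed beyond opaque hashes, which is the privacy property claimed above.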
Then over here we'll sign the wabbit-networks metadata. This is actually doing two steps in one script, just for simplicity: it's having the wabbit-networks repository sign this, verifying that they tested this image and it's what they want to be sharing, and it's also having the targets metadata on the registry attest that this is correct. I'll show you in a minute how the repository can be independent of the registry as well, but for now they're the same. So that's signed, and now we have a local copy of the signed metadata file that includes the new version of the image. The next step is to publish it. You upload it to the registry, and as it's uploaded, it's signed by the snapshot and timestamp keys on the registry. That's an automated process on a push; it's just separated out here for the demo. So we have this new metadata file published, and we could download the new metadata, but let's say there's an attacker who wants to replay the old version. We copy that old wabbit-networks JSON file back into the registry, pretending it's the new one. Then over here, if the client tries to download that wabbit-networks file like before, it gets a bad-version error, because the version number doesn't match the version number listed in the snapshot metadata. Then I'll just go over here, re-sign to make sure the registry is back in a good state, and publish that as well, assuming I type it correctly. Okay. So that's the demo of the replay attack: if you replay old targets metadata, the image can't be downloaded by the client.
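The hash-mismatch behavior from the patched-image step can be sketched in a few lines. The field names (`length`, `hashes.sha256`) loosely mirror TUF targets metadata but are simplified: once the artifact changes, the stale metadata entry no longer matches it.

```python
import hashlib

def target_matches(entry: dict, artifact: bytes) -> bool:
    """True only if the artifact still has the exact length and SHA-256
    digest that were recorded when the targets metadata was signed."""
    return (entry["length"] == len(artifact)
            and entry["hashes"]["sha256"] == hashlib.sha256(artifact).hexdigest())

# Metadata signed over the original (toy) artifact.
old_artifact = b"wabbit-networks image v1"
entry = {
    "length": len(old_artifact),
    "hashes": {"sha256": hashlib.sha256(old_artifact).hexdigest()},
}

assert target_matches(entry, old_artifact)                         # original file verifies
assert not target_matches(entry, b"wabbit-networks image v1.1")    # patched file fails the old entry
```

This is the low-level check behind the demo: the saved old metadata cannot vouch for the patched image, and the version check in the snapshot stops it from vouching for the old one.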
Next I'll show the other new feature we looked at, which I think would be especially useful if you have a private registry, or private repositories on the registry; sorry, I always get those confused. The idea is that if, say, you're the operator of the wabbit-networks repository, you have your clients and you want them to trust anything signed by you, wabbit-networks, but you don't really know anyone else on this registry or what they're signing. You don't necessarily want everything on the registry to be trusted by your clients; you just want the stuff that you attest is good and valid or secure, whatever your parameters are. The way we support that is with this idea of a targets map file. It lives on the client side, and what it does is say: I trust anything signed by wabbit-networks with this key. To a certain extent it overrides the top-level metadata on the registry itself, because in some cases you want to get all of your key information from the registry; up here, what we were doing was using this key, but we were getting it from the registry, and the registry's root metadata was telling us this is the key to use to verify anything from wabbit-networks. Instead, we can give this key directly to the client, so the client itself knows this is the key I trust for anything from wabbit-networks, and it can use that to verify. So let's say the client has a map file and tries to download something that's not signed by wabbit-networks.
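A minimal sketch of the targets map lookup just described, assuming a hypothetical map-file shape: glob patterns map image paths to the keys the client itself trusts, and anything without a mapping is simply not trusted. The `keyids`/`threshold` fields and the pattern syntax are illustrative, not a real map-file format.

```python
import fnmatch

# Hypothetical client-side targets map: trust wabbit-networks' key only.
targets_map = {
    "repositories": {
        "wabbit-networks/*": {"keyids": ["abc123"], "threshold": 1},
    }
}

def lookup_trust(targets_map: dict, image_path: str):
    """Return the key policy for an image path, or None if the map file
    does not cover it (meaning the client refuses the download)."""
    for pattern, policy in targets_map["repositories"].items():
        if fnmatch.fnmatch(image_path, pattern):
            return policy
    return None

assert lookup_trust(targets_map, "wabbit-networks/file1") is not None  # covered: trusted
assert lookup_trust(targets_map, "repo0/file0") is None                # not covered: blocked
```

This matches the demo's behavior: the download from repo0 fails not because the registry rejects it, but because the client's own map file never grants it a trusted key.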
Say it's something from my repo, repo0, which is available over here. They try to download it, and it won't be found, because the map file is blocking it. And without the map file, just to prove my demo is accurate... let's see. Hmm, that's weird, but without the map file it should be able to download that image; that's probably a problem with my demo setup script, not with anything else. But they are able to download something from wabbit-networks: wabbit-networks/file1 downloads just fine, because it's covered by the targets map file. I'll look into the other issue later. So that's how the map file works, and I think it addresses some of the issues with the private repository situation. Another thing I want to demonstrate really fast is key rotation and how it works within the system. Any key in the system can be rotated transparently to the client, so the client doesn't have to worry too much about managing keys or what's trusted and what's not. As long as a threshold of root keys is not compromised, any key rotation can be done transparently to the client. For example, let's say we want to rotate who's trusted to sign... sorry, I was going to rotate a top-level metadata key just to demonstrate how this works. So we run rotate timestamp. What I'm doing here is switching out the key that signs the timestamp metadata. In case there are multiple keys, you have to specify which key is no longer trusted and which key will be trusted in its place. I'm just going to reuse the snapshot key, to keep the demo simple.
So you can see that they match and that it's being used. Rotate... oh, that's weird... there we go. What I'm doing here is re-signing the root metadata using this new key, and the root metadata now has two different signatures. Both of those signatures are used to say: instead of just trusting the previous timestamp key, we'll trust this new key to sign the timestamp. Then, really fast, I update the script to use the new key from the rotation, because timestamp signing is an automated process, so you have to tell the script which key to use for it. Then we publish this new metadata. That's all we've done here. Now over here, let's say we try to download a file; I'm just downloading the same file, and it works just fine; the client doesn't know any different. But if you look at the metadata, you'll see it's actually using the new key: the timestamp key now matches the snapshot key, because we're using the same key for both, and this was done transparently to the client. In real life you probably wouldn't switch to the same key; you'd switch to something new that's more secure, or fresher, or whatever, and this process can be used to do that. Finally, the last thing I want to show is moving an image from one repository on the registry to another repository on the registry. This would be very similar to moving to a new registry altogether; I just didn't set up a second demo registry, but I can do that in the future if there's interest. What I'll do here is move a target from repo0, my very creatively named example repository, over into this wabbit-networks one.
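The rotation step, re-pointing a role at a new key inside root metadata and bumping the root version, can be sketched like this. The metadata shape loosely follows TUF root metadata but is simplified, and in a real rotation the resulting root must then be signed by a threshold of root keys (the two signatures mentioned above); signing is omitted here.

```python
import json

def rotate_role_key(root: dict, role: str, old_keyid: str,
                    new_keyid: str, new_pubkey: dict) -> dict:
    """Return new root metadata with `old_keyid` replaced by `new_keyid`
    for `role`, the new public key registered, and the version bumped.
    The caller must still sign the result with a threshold of root keys."""
    new_root = json.loads(json.dumps(root))  # cheap deep copy
    keyids = new_root["roles"][role]["keyids"]
    keyids[keyids.index(old_keyid)] = new_keyid
    new_root["keys"][new_keyid] = new_pubkey
    new_root["version"] += 1
    return new_root

# Toy root metadata trusting one timestamp key.
root = {
    "version": 1,
    "keys": {"ts-old": {"scheme": "ed25519", "public": "..."}},
    "roles": {"timestamp": {"keyids": ["ts-old"], "threshold": 1}},
}

new_root = rotate_role_key(root, "timestamp", "ts-old", "ts-new",
                           {"scheme": "ed25519", "public": "..."})
assert new_root["roles"]["timestamp"]["keyids"] == ["ts-new"]
assert new_root["version"] == 2
assert root["roles"]["timestamp"]["keyids"] == ["ts-old"]  # original untouched
```

Because the client already trusts the root keys, it accepts the new root version and starts verifying timestamp metadata with the new key without any manual intervention, which is the transparency property demonstrated above.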
Currently there's file1.txt in wabbit-networks, and now we're also moving this file0 into there. All we have to do next is have wabbit-networks attest that they actually trust this file0 and want to sign it. To do that, they just add the target and sign it with their key. That error is what happens when I mistype the password; it should be a prettier error message, but yes, in that case it doesn't use the key. So what I did here is add the file to the wabbit-networks metadata, and then I published this new metadata, so it should be available over here. Why does it say it can't find file0? Oh, that's because the map file doesn't include it... but it should actually work. That's weird. But anyway, it is able to find it in wabbit-networks. So yeah, those are the big features I wanted to demo. I have a couple of other things if anyone has more questions or wants to see more, or we can take a break now for questions; if there's anything else you want to look at, just let me know. So Milo has a question that he posted in the chat. Sorry, I don't have it up; sometimes it's hard to find the chat session when you're presenting. I'll just read it off: do I understand correctly that the rollback protection is achieved by splitting the snapshot data into a binary tree? If so, doesn't that still require serializing all repo updates over the full scope of the registry to update the root of the tree? Yeah, so it's a Merkle tree, which is basically a specialized binary tree, so yes, it does require some serialization.
I think the idea is that it doesn't have to be updated every single time any image is updated; it can be updated once a day or so and still provide a day's worth of rollback protection, for example. That way you can do quick updates to things without having to regenerate the tree every single time; it would just batch updates and refresh the snapshot and timestamp information on that cycle, whatever time period makes the most sense. You all would probably know the right time period better than I would; something on the order of a day seems reasonable, and it shouldn't be too computationally expensive; I think even every hour would work. In between regenerations, clients could use some other sort of protection in the meantime, or skip that check, or use a smaller snapshot file that just covers that piece. There's a little room for growth there, but yes, the tree does have to be regenerated. Yeah, there's always a threshold, always a window of vulnerability; we can never eliminate it, only minimize it. Right, so the question is how small that window can be while still making the system work. I think the question we keep circling around is not the what or how; I know the Merkle tree is one way to do some optimization around lots of repos in the same registry, and different registry operators organize things somewhat differently.
So there are lots of questions around how we do that in a performant, reliable, and scalable way. Yeah. I think the question we keep struggling with is: why do I need to do snapshots across multiple repos in the same registry when they're not owned by the same entity? So I think the big thing is that it improves your security, because if you download something from any of these repositories on the registry and then download something from any of the others, you get the same kind of rollback protection. And actually, one of the benefits of this Merkle tree solution is that even an ephemeral client gets rollback protection, because they can check the hash of the snapshot even if they don't have an existing one on disk, which is pretty powerful. So I think the big thing is to provide that protection. Sorry, I lost track of anything else I was going to say. Anyway, the thing is, it's not just one really big registry, right? It's not just Docker Hub; there's Quay, there's GCR, there's everybody's private registry. There are multitudes of registries that any one team works with, and by the same token, they're not working with all of them, they're working with subsets of them. And it's not clear what benefit I get, especially if I have ephemeral clients where I have to fetch this stuff anyway; there's no benefit to having stuff pre-cached. If I have a long-standing client, to put the right context to it, over time I might read a dozen different artifacts from one registry. Sure, I'd rather fetch one snapshot covering all twelve than fetch twelve individually.
But even twelve is still not 200,000, or whatever the number of repos is in something like Docker Hub. So it's not clear there's a benefit even for the long-standing client, and then there are the ephemeral clients, which are pulling one, two, maybe three artifacts. We're struggling with why we should build a perf-and-scale security solution for something that may not be the real problem. Yeah, and I think that's definitely something we should continue to talk about. The issue that we see and worry about is replaying metadata. Even with an ephemeral client, if someone is able to capture a file, a plain targets file in TUF terminology, save it, and then wait six months or a year until a vulnerability comes up and replay it, then you have an image that's six months behind, which from a security standpoint can be really bad, because there can be any number of security patches over that period of time. So I think you need something to ensure the timeliness of the images. And I think you need to do it across the registry instead of repository by repository because it gives you a bigger scope, and so it gives you more protection; I think that's the point. And you only have to do it once: you have this one metadata file at the top level of the registry and it takes care of it. Yeah, I mean, we get the rollback protection argument, and if we want to do that, we just need to keep it in mind; that would be something we want to be able to add.
The question is the scope, and whether it's one month or two months or three; the longer the window, the more risk, because there are vulnerabilities every day. So we totally get that. It's the concept that one snapshot across everything is easier that isn't necessarily true, because you're trying to maintain it at the scale of the number of updates that get pushed. And while we technically have access to everything in the registry, we have of course attested that we're not accessing things we shouldn't be. So while I appreciate that you're making sure everything's anonymized and so forth, we promise our customers that we won't do anything with data across customers. In fact, we even have new rules forming where customers don't want their data used, even in anonymized form, in some higher-level security structure up at the registry. So again, it's code, right? We can write anything we want. The question is whether we should, and I struggle with why I need something across all repos, as opposed to: here are the repos I care about, and each of them does its own snapshot daily, where each one of course updates individually much less frequently than the entire registry. Because at any one time the entire registry is getting lots of updates; if we can break them down into smaller units, it's a much more scalable problem, and it doesn't impose any of the security, data-access, or performance concerns. Yeah, I think the downside there would be for those ephemeral clients, because if they only download once from a repository, they basically don't get that rollback protection, unless they do something to check against an external source. And what this Merkle tree provides is basically that kind of external check.
So if the Merkle hashes match up, then you know that's a valid version of a metadata file. But the whole concept of an ephemeral client is that it has nothing to start with. And it knows what it's being asked to pull. So as long as we have a way to say, of the two things I'm going to pull, can I go get the timestamp information for those two things, then, if I understand it right, I should still be able to get that same threshold of time across those two things. Yes, I've got to fetch two, but okay, I know I'm getting two; it's not that I'm getting 2,000; those are different problems we're trying to solve. Okay, yeah, I think that's definitely something we should talk more about; I'm trying to figure out how to frame it. I think you're trying to secure that, at any one time, this ephemeral client could ask for anything from the registry, so let's give them protection such that it doesn't matter which repo they pull from: as long as these timestamps match, they're good. But that assumes a registry can have access to all that information, which we're saying we can't; we literally can't do that. It's not a technical limitation; it's a security liability commitment to our customers. We are saying we cannot do that. So that's problem one, and the performance and reliability issues are almost beside the point. There must be no information about any other repository on the registry within the file. Storing any kind of information that spans multiple repos from multiple customers is the barrier we can't cross, even if it's just a secure hash. On some of these larger calls, when actual customers get on, I've been known to say Coke and Pepsi, because those are obviously competitors, so I might as well be public when I say it; for those actually working at Coke and Pepsi, I'm just trying to make sure you're staying secure.
So please don't get upset that I'm using you as examples. We can never do anything where one customer's data has any knowledge of, or intermixing with, another's, even anonymized. So even if we wanted to, we have a commitment that says we shouldn't. But the problem I keep coming back to is that I'm still struggling with why we need to do this across multiple repos and clients. It's not even that there's a single registry; customers work with multiple registries over time. It's not like, hey, if we can just cover this one massive Docker Hub, and it's the only one really out there, great. There's Docker Hub, there's PyPI, there's npm, all of those. And we're seeing more and more that for the public registries, people want private copies of them, and there are multiple public ones somebody has to deal with. So in addition to everything else, at best what you're offering is: within a single registry, you can cover all repos. And that's not really the bigger problem. I guess another thing you could do, and I'd have to think more about this, is basically have the root, and maybe top-level, metadata for the registry; that just simplifies the key management a lot, to have one top-level thing that's downloaded and used, except obviously in the case of the map file, or anywhere the client already knows the keys to trust. But then you could also do just the snapshot per repository. You're suggesting, and again here are details I don't completely understand, are you saying there's one snapshot key for the entire registry? Yeah. I mean, that's another thing: customers want their own keys and their own management of the lifecycle of their information.
Yeah, the snapshot key is more like a server-management key, whereas all of the targets keys, which are used to sign the actual information about the files, are independent; they would be very specific to that one repository. If I were to take something like Docker Hub: for the official content, content Docker has certified regardless of where they got it from, Docker wants to put its certified key on it. Okay, that's for the Docker-certified content. But for all the other content that's not certified by Docker, for instance this wabbit-networks one we keep referring to, it's built externally and sent to Docker Hub; in fact at first it's not actually certified by Docker Hub, because we're trying to show examples of things. So I wouldn't expect... I guess you could say there's time data on that, but as a customer, even in a public registry, I'm not sure that makes a ton of sense. No, exactly; it makes sense in a private registry. But the timestamp actually doesn't have much to do with wabbit-networks themselves: they just say, we signed this one thing and we put it on this repository, or this registry. Then the registry says, okay, we trust you to attest to this thing, so we'll sign it and record when we received it. Then anyone downloading it can check that the time they're downloading it is within a range of the time it was trusted and uploaded, and if there's a newer one, they won't trust the older one; they trust the currently uploaded thing from wabbit-networks. Yeah, I get it; I'm just not seeing that as the priority scenario, with all the complications we keep discussing.
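The freshness check just described, that a client accepts a signed statement only inside its validity window, is essentially an interval test. A minimal sketch, with illustrative field names (real TUF timestamp metadata carries an `expires` field and a version, not these exact keys):

```python
def is_fresh(timestamp_md: dict, now: int) -> bool:
    """Accept a signed timestamp statement only if the client's current
    time falls inside the window the registry signed over."""
    return timestamp_md["signed_at"] <= now <= timestamp_md["expires_at"]

DAY = 86_400
ts = {"signed_at": 1_000, "expires_at": 1_000 + DAY}  # one-day validity window

assert is_fresh(ts, 1_000 + 3_600)          # an hour later: still fresh
assert not is_fresh(ts, 1_000 + 2 * DAY)    # two days later: stale, client must refetch
assert not is_fresh(ts, 500)                # before signing: clock skew or forgery, reject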
I don't know if it's as much of a problem for the one registry where I get something, as opposed to: I'm actually taking it from Docker Hub and putting it in the ACME Rockets registry, so it's more important that the ACME Rockets registry is kept in sync with what's happening in, sorry, Docker Hub, actually. Yeah, exactly, and I think that's something ACME Rockets just needs to be in charge of doing; they can automate it, they can do whatever, but if they're trusting it, they need to continually attest that they trust it, and that's something they sign and want their users to use. Milo, did I see you pop in there? Yeah, I think the long-term theme is that there is a non-zero security improvement in going full-registry, but it's very hard to articulate it in a way you could rely on. Another point: I must admit I've forgotten the details of the snapshot scheme, but if the overall idea is that the snapshot is signed and updated about once a day, and we get protection at the granularity of days, with no protection against rollbacks within that day, well, we don't really need a cryptographic structure for that. We just need to sign a timestamp every day with the current state, and that scales trivially, because we can just do it for every single repo. We could arguably even rely on TLS instead of a timestamp key, as long as it's not fronted by some CDN, which I'm afraid it probably is. That's it. Yeah, if we're going with daily granularity, I don't think we need the Merkle tree at all; we'd get the same registry-wide property anyway. Yeah, I agree. Actually, the traditional TUF method is just to list all of the current-state information, which is basically version numbers, in the snapshot.
The reason for this structure was to keep the separation between the repositories on the registry while still providing coverage of the entire registry. So yeah, if it's split up per repository, that's no longer needed; I think it's just extra work at that point. Now, in the very traditional system, the snapshots were managed by the individual authors, and they were sequential, and the snapshot key was managed by the authors, so of course they had to be consistent. But if the registry is signing the snapshot versions every day, we can just sign each snapshot version along with the timestamp, and that gives us the same rollback protection, doesn't it? Assuming all the clients have synchronized clocks, that is. I think it provides rollback protection in much the same way. The main reason for this additional structure, again, was two big things. One is scalability: if you have a lot of different repositories on the registry with a lot of different images, even just listing the version information can make a really long file, and this shortens it. The other is that it allows a separation between the repositories, because you only have to download the path for your particular repository; you don't have to look at every single one. I'll go back to the Mercury paper, but right now I'm thinking I could just split the timestamps per repository and not have a registry-wide structure at all; wouldn't that work just as well? That would be going back to the Notary v1 model, and I think the main concern with that model was the trust-on-first-use issue, where every single repository has its own trust structure, so you have to establish trust for every single repository individually. No, I mean, there could still be a registry-wide root key and a registry-wide timestamp key.
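Milo's suggestion, one registry-wide key that signs each repository's (version, time) statement independently, can be sketched as follows. This is a hypothetical design sketch: HMAC stands in for a real asymmetric signature, and all names and shapes are illustrative. Each signed statement mentions exactly one repository, so nothing crosses the customer-isolation boundary, while the client still gets per-repo rollback protection.

```python
import hashlib
import hmac
import json

# Stand-in for the registry-wide timestamp signing key; a real registry
# would use an asymmetric key so clients hold only the public half.
REGISTRY_KEY = b"registry-wide-timestamp-key"

def sign_repo_state(repo: str, snapshot_version: int, signed_at: int) -> dict:
    """Sign one repository's (version, time) statement on its own, with
    no reference to any other repository on the registry."""
    payload = json.dumps(
        {"repo": repo, "version": snapshot_version, "signed_at": signed_at},
        sort_keys=True,
    ).encode()
    sig = hmac.new(REGISTRY_KEY, payload, hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_repo_state(stmt: dict, min_version: int) -> bool:
    """Check the signature and reject any statement older than the
    newest version the client expects (the rollback check)."""
    expected = hmac.new(REGISTRY_KEY, stmt["payload"], hashlib.sha256).hexdigest()
    if not hmac.compare_digest(stmt["sig"], expected):
        return False
    return json.loads(stmt["payload"])["version"] >= min_version

stmt = sign_repo_state("wabbit-networks", 4, 1_000)
assert verify_repo_state(stmt, 4)        # current statement verifies
assert not verify_repo_state(stmt, 5)    # replayed older statement rejected
```

The design trade-off discussed above shows up directly: each repo's statement can be generated and rotated on its own schedule, but the single signing key is still shared infrastructure, like the HTTP router in front of the hostname.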
But the timestamp key would be used to sign every repository separately. Oh, that's an interesting idea. Yeah, I'd have to look into that and think about it. But I think that might work. That's interesting. So you're basically given the isolation, so no one repo needs to know about the others. Well, the timestamp private key would somehow still have to be used for all the repositories, which would not allow you to serve parts of the registry absolutely separately. But then you already have an HTTP router at the front of the hostname, so there is something that is shared anyway, I guess. Yeah, because if they're hosted in the same place, they do share some information anyway, so they could share just the keys, but then have individual metadata. Yeah, but more importantly, it would be one direction: you would get the private key into the customer's separate container, but nothing from the customer's separate container would go outside. Nothing about Coke's data has anything to do with what Pepsi's data is. There's some other data that happens to be used across both of them, but they individually have their own security content. And that also keeps the isolation: in addition to the security boundary we keep talking about, from a performance perspective I don't have to worry about any intermixing of different customers updating on different intervals that I'd have to figure out how to correlate across all registries, across all regions, across all air-gapped clouds. Literally each repo has its own isolation boundary that I can, as the service side, use to manage those updates individually. And that's the only way we can get healthy scaling across multiple regions, multiple availability zones, multiple clouds, and so forth. OK. OK, yeah. Yeah, I'll definitely think about that more and see what I come up with. But at first glance, it seems to make sense as a way to break that up. Cool. Well, I appreciate you guys keeping on iterating on different parts of it. So that's definitely helpful.
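The proposal on the table, one registry-wide timestamp key, but used to sign each repository's statement separately so clients only fetch their own repo's metadata, could look roughly like this. Again, HMAC stands in for a real asymmetric key, and all names are illustrative.

```python
import hashlib
import hmac
import json

# One shared, registry-wide timestamp key (hypothetical stand-in).
REGISTRY_TIMESTAMP_KEY = b"registry-timestamp-key"

def sign_repo_timestamp(repo: str, snapshot_version: int, date: str) -> dict:
    """Sign a single repository's daily statement with the shared key.
    Each repo gets its own statement, so there is no registry-wide file."""
    signed = {"repo": repo, "snapshot_version": snapshot_version, "date": date}
    payload = json.dumps(signed, sort_keys=True).encode()
    sig = hmac.new(REGISTRY_TIMESTAMP_KEY, payload, hashlib.sha256).hexdigest()
    return {"signed": signed, "sig": sig}

def verify_repo_timestamp(stmt: dict) -> bool:
    """Client-side verification against the shared registry key."""
    payload = json.dumps(stmt["signed"], sort_keys=True).encode()
    expected = hmac.new(REGISTRY_TIMESTAMP_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, stmt["sig"])
```

The per-repo statements are what give the isolation boundary discussed above: each repo's metadata can be updated, served, and garbage-collected on its own schedule, while trust still chains to one registry-wide key.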
Do you have anything else? So just for the sake of time, we've got approximately 15 minutes left. I just wanted to leave time. I didn't have a new updated agenda; we have a couple of things that are in flight, but nothing that's specifically ready to iterate through. I figured I'd just give a chance for folks to catch up with what we've gotten done already. And if there are any questions: the recording is good for people listening, but it doesn't work for asking new questions. So I don't know if anybody else has been tracking what's going on and has questions that we should address. That's cool, too. One of the things we always try to make sure of is that we're addressing everybody's style of communication as well. So we have the Slack channel that we've been tracking for the Notary v2 work, of course questions here, and any comments on the PRs. So whichever you use, we'll address it in every channel that we can. On the TUF front, back to some of the stuff that Marina is presenting: we have been trying to make progress on some of the TUF work as well. Several weeks ago, we realized that there are a lot of complications here we're trying to work through, such as these conversations here today, and we wanted to split this out into a phase one and phase two, because there are larger end-to-end workflow pieces, which the TUF metadata will be a part of, but we wanted to make sure we covered other unknowns that we don't necessarily know about yet. And if you remember, we were doing the Sagrada Família kind of model, where we don't really know the whole piece, so until we start sketching it, we won't really be able to communicate it, and then everybody can look at it and go, oh, I hadn't thought about that. For anybody that watched the KubeCon talk, we talked about a sketch of a bathroom. I had done the sketch of a bathroom, and before we went and built it, I showed the sketch, the model, to Justin. And Justin's like, where's the bidet? And I'm like, I didn't think about that.
We don't think about those in the US, or we didn't until COVID, when we were running out of toilet paper. So we wanted to be able to get that whole model end to end, because we know there's some stuff around the TUF metadata we have to think about, and we don't know about the other pieces. So the latest is we have the signature object that we've been working with; we have updated it to the JWT token format for serialization. The next piece we're working on is how we get that information in and out of a registry. And there have been a bunch of conversations around what the persistence format is, what we leverage that's already there, and how we make additions. And there were some questions specifically today; Ram was asking some questions in the PR around the impact we'd have to make to a registry. This one is actually coupled between the Notary working group, which is focused on a signing solution that can work within and across registries, and the OCI working group, because we also need to make changes to a registry. So we've been kind of having one foot in each sandbox there to make sure that the scenarios we're trying to cover for Notary are covered by the distribution APIs, because unless this works across all registries, we didn't really meet the goal. So we feel pretty good. We had a good call last week where we got support to go down the index path, which means that for registries that already implement index and have to do that tracking between two things, in this case the signature and the artifact that it's signing, the hard part of doing reference counting and garbage collection will tie into the infrastructure they're already building. And then the idea is that an index will be able to declare that it's not just a multi-arch container image: it is a signature object, it is a CNAB, it is another thing that we don't know of yet. That's something we've been wanting to do for OCI artifacts for a while.
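The "index declares what it is" idea can be sketched as building an index document with an explicit artifact type and a back-reference to the artifact being signed. To be clear, the exact field names (`artifactType`, `subject`) and media types here are illustrative of the proposal being discussed, not a settled schema; at the time of this call the shape was still being worked out in the OCI artifacts discussions.

```python
import json

# Illustrative media type for a Notary v2 signature object (hypothetical).
SIGNATURE_ARTIFACT_TYPE = "application/vnd.cncf.notary.v2+json"

def signature_index(artifact_digest: str,
                    signature_digest: str,
                    signature_size: int) -> str:
    """Build an OCI-index-shaped document that declares itself a signature
    object and references both the signature blob and the signed artifact."""
    index = {
        "schemaVersion": 2,
        "mediaType": "application/vnd.oci.image.index.v1+json",
        # Declares what this index *is*: not a multi-arch image, a signature.
        "artifactType": SIGNATURE_ARTIFACT_TYPE,
        "manifests": [
            {
                "mediaType": SIGNATURE_ARTIFACT_TYPE,
                "digest": signature_digest,
                "size": signature_size,
            },
        ],
        # Back-reference to the signed artifact; this is the link that lets
        # the registry do reference counting and garbage collection.
        "subject": {"digest": artifact_digest},
    }
    return json.dumps(index, indent=2)
```

The point of the design is the last field: because the registry already tracks relationships between an index and its manifests, tying the signature-to-artifact link into that same machinery gets garbage collection "for free" rather than as a signature-specific hack.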
So the fact that we can leverage that for signatures means that it's not just a hack for one thing; it's actually a core platform enabler. So that's the path we're currently working down. And for those that might be watching, if you look at the Notary Project under GitHub, you're starting to see some other repos get added there, because we need to make changes to the NV2 client that we've been discussing, which is just our prototype. But then we need to make the end-to-end work: we need the distribution spec; we thought we needed the image spec to update the manifest and index schemas, but we're going to move that into the artifacts repo, so you see the artifacts repo being added there; and then ORAS also, oh, and Docker distribution, which we already had. So the idea is we want to prototype the end-to-end experience out under the Notary Project, and make sure we're comfortable with all the moving parts, so we don't get all these different groups randomized. And once we're comfortable with the end-to-end changes that we would need, then we can make the appropriate, well, first we'll make the spec, but then we can also make the appropriate PRs back as a group, because we'll know what the end-to-end is and we'll know what the final PRs would look like back to those upstream repos. With the stuff we're thinking about for this phase two of the TUF prototype, we believe that, at least from a registry perspective, we've captured, I don't know, 80 or 90 percent. So there'll be some additional work we have to think about, but we believe, from what we know today, it will hold up. So we're feeling pretty good about that so far. Anyway, that's the idea: we want to continue to iterate, do that model, see if we like it. Everybody can look at it and understand from their perspective what this thing looks like. And if we're comfortable, and as we find new things, we'll keep on iterating.
And when we get to a stable state across all of those affected repos, we'll get the spec put together and do the upstream changes. So that's the progress we're at today. The prototype that you mentioned in the KubeCon talk, is that the one under the repository named prototype-1 or something like that? Yeah, that's a great question. So last week, I think it was last week, we were trying to figure out where to commit this, because we were doing PRs on top of PRs on top of PRs in different people's repos, and it was getting crazy to figure out what exactly we should look at. We were going to merge it into the master of NV2, or main, to be fair, since we've also changed it to main. But then we were debating, well, isn't the root of NV2 going to be a reference implementation? We don't want to stick the prototype there, because we know the prototype will change, or we don't necessarily want to just evolve it; we'll probably toss it and bring the relevant things over. So rather than create yet another repo for different prototypes, what we're going to do is maintain branches in the NV2 repo and keep the root available for the eventual reference implementation. So right now you see prototype-1 because we were being very creative with the naming, and we just decided to get something done. And if we need to go down a different prototype, we can do that. And then wherever we wind up landing, you'll see it in the root of NV2. I did push an update to the NV2 README, the main README, just to explain what we're doing, so those outside of our group that just come in and look should be able to find a path. And that's the progress we've got so far. The stuff you've been seeing is in the prototype-1 branch. Sound good? And just for those tracking, we also saw AWS added artifact support in ECR, which is awesome, because that is the base of a lot of this, and they've been working on it for a while. This isn't like a magic thing.
It's something I know they've been working on for a while; Omar Paul and Jesse and Sam and others have been working on it for a while. So that's a huge kudos to them. You're really starting to see these things roll out to support these end-to-end scenarios. Without boiling the ocean, we've also been trying to generalize: is there some general metadata API that we either need for this or that would make this a little easier? Or, if we had a metadata API on all registries, how would that impact a signing solution? Because, at least by the definition of metadata in my head, it's not something you would necessarily sign. If you want something like that, you can put it on the manifest itself; but just like you can add signatures, we would add additional metadata. And the only reason I'm even bringing it up here is that it's playing into some of the distribution API conversations we're trying to have, figuring out how we don't keep throwing stuff on here until, by the time we're done, we have this Mad Max vehicle of things bolted on that doesn't rationalize or make sense. Anything else? Well, we'll watch on Slack for the Notary v2 conversations and PRs, and I'm hopeful we'll have some more progress on the distribution stuff by next week. There's progress being made, but we've got some churn happening that we have to work through so we don't randomize too many people in the conversations we're having. So that's about where we're at, and thank you all for joining.