Good morning. How's it going, folks? Let's see, what's the background for today? Who is that? It sounds like somebody was biking really fast outside. So I've pasted a link to our rolling doc. We don't really have an agenda today. I sent out on the Slack channel if anybody wanted to chat; I'm happy to talk about whatever anybody wants, or some updates that people are asking about. We've been mostly doing work in distribution to make sure we can push, pull, and discover signatures in a registry so that we can sign things. So there has been progress, but the progress hasn't really been specific to Notary, other than a little enablement for Notary. That work is for publishing artifacts and discovering artifacts, which is how Notary signatures will be discovered. To be fair, and I keep trying to reinforce this, we're still in the prototyping phase. So that's the current prototype. The answer is yes, but I want to make sure that everybody remembers that we're still not building the house yet. We're still building models to make sure that we like what we're doing before we go at scale, is the best way to put it. And if you look at the NV2 repo, actually all the repos, we put a prototype-1 branch in all of the forks that we've created, forks of other projects, as opposed to the current NV2 project, which is rooted in the Notary Project org. It's all to reinforce that we want to keep the roots of the projects to be what will eventually be the reference implementation and the spec, and the forks that are in the Notary Project org are staging grounds for what we want to figure out end to end. In fact, I was talking with Chris about this today, because we want to make sure everybody understood. I'm probably going to push some updates to the READMEs at the root of those forks to say: look, these are not forks that will ever be shipped by themselves.
The idea is that these are forks that we're going to prototype experiences in. And if we like them, we being this community, both here and the OCI group, then as we get all the pieces coordinated, and we like that this works well with this other one, then we'll make PRs upstream to those projects, and those would come back down. I did skip a bit of that: part of this is to figure out the prototypes, and then we'll actually write the spec. These will be reference implementations of that spec. But I know everybody's anxious to have something to work with. Sergio, that kind of frames the question, which is: I just want to use this thing, right?

Yeah, well, I mean, yeah, I'm right now trying to find the repo. It must be linked from...

Yeah, probably, but I don't know if we have them. So here's the root, the org, and within that org you'll see a bunch of projects. There's the requirements project, which is the... in fact, I'll just share. The requirements, of course, is where we've started, and that's capturing what we're actually trying to do: goals, non-goals, scenarios, and so forth. And then you'll see there's a bunch of other projects. Some are forks, some aren't. The other one that's natively here is NV2, and you'll notice that the default branch is prototype-1, and this talks about the end-to-end experience that we're trying to build around those requirements. The other projects that you see here, everything from distribution, which is the reference implementation of the registry, to the distribution spec, to artifacts and ORAS, are forks of other projects that are the staging content I was talking about.

Does the distribution one have any, I don't know, specific changes in it yet?

We're working on some, but we actually do have some. Overall, we've been working on this one. We meet Tuesday night, our time, sorry, Pacific time, because Aviral is currently in India, and Shiwei is in China.
So we've been kind of getting conversations there, staged for this. And I actually have to look at this. All right, so this is five days old; we'll have to go back and look at this again. But that's the stuff that's happening there, and we have been making sure that all the PRs happen here. So there is the PR on the actual distribution spec. But if you look at NV2, we've been trying to lead with docs. So here are the various distribution proposals. And then this is the actual API; this was the conversation about the experiences we're shooting for. We'll just click in here. You'll see that, basically, there's this conversation here, and there were a couple of options that we were talking about. Let me just see if I'm looking at the right ones. Actually, this doesn't look like the one I was referring to. Hold on a second. I got myself confused. Proposal for generic lookup, and distribution proposal. I think this is the one that actually has the... yeah, this is the one that has the conversation. And it's got some pictures. So here: persistence, push, discovery, pull, and then some examples. In fact, Sam provided some feedback last week, last Friday or whatever, and I have to go look through that today. But basically, this is the one where we're talking about a couple of different options for how we would look things up. So here is option one: basically, there's a manifest that points back to the thing that you would actually sign. We use the net-monitor software as the thing that we're signing. Then there is another option here where we're actually using an index, which is more native to how a registry tracks things. There is an extreme to this where we use an index to sign another index, where each thing is signed. And I'm purposely not going into detail, though I'm happy to go back and put in some context.
And then here is an even more true, accurate representation: instead of the index storing the signature itself, it would store another manifest. So it's kind of an extreme clarity of implementation, which is a good conversation; the question is whether we want to go that far. Okay, so that's the persistence model, and then we talk about how we would link those. Most of this conversation has been distribution-spec focused; the Notary folks who are purely focused on their signatures say, I just want to get these signatures in and out. So this has been the majority of the conversations we've been having in the OCI distribution spec working group, with some overlap of people here. And then there's a discovery API, where we've been saying: look, there's this reverse-lookup model, because what you're basically saying is, I want the image, any artifact, but I want to know what signatures are associated with it. Because remember, we have loosely coupled artifacts: the signature is not put directly on the thing that you're signing, because that would change its digest. They're loosely coupled, and you have a collection of signatures that could actually happen later on. So to make that happen, the registry would have to actually look into the contents we push to it as signatures and index the results. I'm trying to see, there's a picture here; I like pictures and others like text, but it helps to have both. Let me use the non-exploded version here. So, if I push an image, I'll just use images as an example, this net-monitor software, I later am going to push a signature. And there are two signatures in this case: one for Wabbit Networks, which is the company that distributes net-monitor, and one for Acme Rockets, which is consuming it but wants to have a signature that says this is good in my environment.
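The loose-coupling model described above can be sketched roughly as follows. This is an illustrative assumption, not the prototype's actual schema: the mediaType values, field names, and the digesting scheme are all made up for the sketch. The point it demonstrates is that signatures live in separate artifacts that reference the signed thing by digest, so adding signatures never changes the digest of what was signed:

```python
import hashlib
import json

def digest(manifest: dict) -> str:
    """Illustrative digest of a manifest (not OCI-canonical JSON)."""
    return "sha256:" + hashlib.sha256(
        json.dumps(manifest, sort_keys=True).encode()
    ).hexdigest()

# The artifact being signed, e.g. the net-monitor image manifest.
image_manifest = {
    "schemaVersion": 2,
    "config": {"mediaType": "application/vnd.oci.image.config.v1+json"},
    "layers": [{"mediaType": "application/vnd.oci.image.layer.v1.tar+gzip"}],
}
image_digest = digest(image_manifest)

def signature_artifact(signer: str, target_digest: str) -> dict:
    """A separate artifact that points back at the signed target by digest."""
    return {
        "schemaVersion": 2,
        "mediaType": "application/vnd.example.signature.v0+json",  # hypothetical
        "signer": signer,
        "references": target_digest,
    }

# Two independent signatures over the same image: publisher and consumer.
wabbit_sig = signature_artifact("wabbit-networks.io", image_digest)
acme_sig = signature_artifact("acme-rockets.io", image_digest)

# Key property: adding signatures never changes the signed artifact's digest.
assert digest(image_manifest) == image_digest
assert wabbit_sig["references"] == acme_sig["references"] == image_digest
```

The same property is what forces the discovery problem discussed next: since the image cannot name its signatures, something has to track the links in the other direction.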
The way distribution works today is you push a thing, and it knows the things that it references, which of course it needs to know at the time it's pushed. Normally that will be layers. The fact that this signature is pushed later and references the image is not something registries know today. There's no API that provides that reference capability. So we're trying to find a reasonable API that would be used for signatures but could be used for other things as well. We're not trying to special-case signatures in a registry.

Do you have a specific other use case already in mind, or are you just saying it generally should be useful?

The interesting one is metadata, which I've got another proposal I'm working on, which probably wouldn't be stored as blobs. So not directly, although I'm thinking about it, because the fact is that indexes are not used heavily in registries. In fact, a number of registries didn't even support indexes until recently. What we're seeing is multi-arch manifests becoming more used. We're seeing that, and I'll come back to how this ties back. Basically, when you start seeing more uses of index, then you actually want to be careful about deleting any manifests that are consumed by an index. So we think the broader use of index, even for multi-arch, will surface this need to have APIs that help you figure out what is referenced by something. So just the generic use of index for multi-arch. And the use of index, whether it be for signatures or a CNAB, which is a collection of things, or other artifact types, I also think will create more need for this reverse-lookup model. It's a great question: are we over-designing? That's part of the conversation; we're trying to make sure that we're not.

Okay, well, thanks for that. I feel good. I have more questions. Does anyone else want to talk about it?
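The reverse-lookup gap just described can be illustrated with a toy in-memory model of the index a registry would need to maintain. This is not the proposal's actual API; the class, method names, and digests are assumptions made up for the sketch:

```python
from collections import defaultdict
from typing import Optional

class ReferrerIndex:
    """Toy model of the reverse lookup a registry would need: on push, record
    which digest an artifact references, so a later query can ask
    'what refers to this digest?'."""

    def __init__(self):
        self._referrers = defaultdict(list)

    def push(self, artifact_digest: str, references: Optional[str] = None):
        # On push, the registry inspects the content and records the link.
        if references is not None:
            self._referrers[references].append(artifact_digest)

    def referrers(self, target_digest: str) -> list:
        # The discovery API: given an image digest, find artifacts
        # (e.g. signatures) that were pushed later and point back at it.
        return list(self._referrers[target_digest])

registry = ReferrerIndex()
registry.push("sha256:netmonitor")  # the image itself references nothing here
registry.push("sha256:wabbit-sig", references="sha256:netmonitor")
registry.push("sha256:acme-sig", references="sha256:netmonitor")

assert registry.referrers("sha256:netmonitor") == [
    "sha256:wabbit-sig",
    "sha256:acme-sig",
]
```

Note the asymmetry: forward references (manifest to layers) are declared at push time, while these reverse links only exist because the registry indexes incoming content after the fact, which is exactly the new capability being discussed.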
So are you having a meeting tomorrow night about distribution as part of this?

Yeah, sorry, let me continue. We have been struggling with global time zones. Some of my own team is scattered around: Shiwei works and lives out of China, out of Shanghai; Aviral is temporarily in India and is going to be coming back to Canada. So within our own team, we've been struggling with that. When we had a couple of meetings, there were a couple of topics where I really wanted to ensure Aviral and Shiwei were there, so we did propose a slightly different time, which I know others have asked for as well. At this point, these are just our internal team meetings. The OCI meeting is on Wednesday at 2 p.m. Pacific time. So the conversations I was referencing were just more about the timeline of what I'm going to get a chance to talk about.

Okay, I just thought if you were actually talking about changes or updates to the distribution implementation, Ram might want to be there to see how it affects us.

Yeah, no, absolutely. Those are the Tuesday, sorry, the Wednesday at 2 p.m. meetings. Yeah, yeah. And what we've been trying to do is make sure that anything we are proposing is public. At some points we were just having things amongst our own repos, and we said, look, let's just be much more transparent about the work. There wasn't any secrecy or anything; it's just a matter of where the changes live. So we've been trying to get all the proposals pushed to these projects. In fact, that was one of the reasons why we went and added, like here, if I just look at distribution, you're going to see everything's got a prototype-1 branch that's the default in the Notary Project, specifically so that we can make sure that everything we're doing is staged in a way that people can see in common time. I was actually looking at the chat, because usually I get some great questions and feedback, but sometimes you type it.
So do you have any thoughts that you'd like to bring up?

Not really, I'm afraid, I was only halfway in on that issue.

That's fair enough. This is mostly just a catch-up week. I didn't want to skip it; we have been making progress, it's just not as obvious here. So I figured I'd hold the meeting. Sam, I haven't had a chance to look at your stuff today. And I think you were just about to speak.

Steve, I saw on one of the pull requests there was a prototype for TUF integration. What I couldn't find was a description of how that was supposed to work, without me trying to read through the code base.

Yeah, great question. It's the balance of engineering actually making stuff possible and PMs trying to describe it and understand it ourselves. So what happened was, if you remember, a couple of weeks ago we'd been trying to make progress specifically using a TUF implementation, the update framework implementation, and we were struggling on a number of different dimensions: which repos get used, the timestamp and rollback, and some of the others. So Shiwei, who knows the space really well and did our Docker Content Trust implementation for ACR, had some ideas. But what happened was we couldn't figure out how to make them all work toward the goal of what we're trying to do now, with the ephemeral clients and some of the gaps. Rather than just shelve it without visibility, we wanted to take what he had thought through and capture it. So all that is is a shelf of some thought process on how we could persist TUF metadata into the signature work. But there's a whole bunch of loose ends, like how do we have a second place of verification to make sure that the timestamp data is valid, and others. So it's not intentional that it lacks a description; we didn't have the time to capture all the information about it, but we didn't want to lose the work in progress.
So that's why it's just parked there for now.

Are there any rough notes on it, or is my best bet just to try to read through the code?

There are no rough notes. I'm trying to think what context I could even provide beyond what I just said. That's basically it. Basically: if we were to serialize the metadata for the update framework into an artifact that we could push into a registry, that's what Shiwei was thinking about. All of the things that go around it, how do you maintain the timestamps, how do you make sure that an ephemeral client can get information from two sources to know that this wasn't just hacked, all of those pieces have not been thought through. And there are no notes on it, sorry. It's literally a matter of how we continue to make some progress, because what we said is we wanted to make that a phase two; we just felt like we were not making any progress by having all of that conversation, but we wanted to capture at least where we're at. That's the best answer I have. I'm hoping that we can get through this iteration of prototype-1, if you will, relatively quickly, so we can say: yes, we like this end-to-end experience; yes, we like the APIs we're talking about in distribution, which would in theory apply to the TUF metadata as well. And then we could come back and figure out: how do we assure rollback protection for a hacked registry when the client isn't trackable? Well, it's not that clients aren't trustable; the update framework is kind of based on the fact that the client knows what it had at some point. And if it talks to the registry and says, hey, what do you have now, it can figure out: wait, you've gone back in time, I don't trust you now. When the client has no information, no reference data, because it's a newly instanced object every time, we need some other place to get that information from.
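The idea Shiwei was exploring, serializing update-framework metadata into an artifact a registry can store, can be sketched like this. Everything here is an assumption for illustration: the mediaType strings are invented, and the metadata body is a trimmed stand-in for real TUF timestamp metadata (which carries signatures, a version, an expiry, and snapshot hashes):

```python
import hashlib
import json

def blob_descriptor(content: bytes, media_type: str) -> dict:
    """Descriptor for a blob as a manifest would reference it."""
    return {
        "mediaType": media_type,
        "digest": "sha256:" + hashlib.sha256(content).hexdigest(),
        "size": len(content),
    }

# A trimmed stand-in for TUF timestamp metadata.
timestamp_json = json.dumps({
    "signed": {"_type": "timestamp", "version": 4,
               "expires": "2020-08-14T00:00:00Z"},
    "signatures": [{"keyid": "example-keyid", "sig": "example-sig"}],
}).encode()

# Wrap it as a blob referenced from a manifest, so a registry can store and
# serve it like any other artifact. The mediaTypes are hypothetical.
tuf_artifact = {
    "schemaVersion": 2,
    "config": blob_descriptor(b"{}", "application/vnd.example.tuf.config+json"),
    "layers": [
        blob_descriptor(timestamp_json,
                        "application/vnd.example.tuf.timestamp+json"),
    ],
}

assert tuf_artifact["layers"][0]["size"] == len(timestamp_json)
```

The open questions named above are deliberately not answered by this sketch: where an ephemeral client gets a second, independent view of the timestamp to detect rollback is exactly the unsolved part.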
So that's the larger thing we need to figure out: how do we get that information. If you want to jump into it, we're happy to hear that.

Well, the update framework is the one that handles rollbacks just by having very short expiration spans on signatures, right? Or am I mixing things up? Okay, so I guess one question there would be: is that going to be a problem in terms of storage on our servers for storing the artifacts and the signatures?

Yeah, those are great questions. There are a couple of questions that come out of that, and it's been viewed as scalability. The update framework folks went off and did a whole bunch of scalability work and came up with a pattern where they can distill this down pretty small, so it's not a lot of content. And there's some great work there, but we really haven't been able to get a good articulation of the security problems of two customers, or even two teams in the same customer, sharing a registry. So you have things like Docker Hub and now GitHub, where you have multiple, completely independent entities sharing the same registry. For others, I refer to this as the Coke and Pepsi scenario, or maybe the Hatfields and McCoys, whatever. The idea is that there should be absolutely no knowledge of, or sharing of, any sort of data whatsoever between the two, even of digests. We need to be able to solve that. So you can't really even use metadata that goes across those two customers, even within the same company. Take Wabbit Networks here, or Acme Rockets as the consuming company: there are multiple teams inside that company, and they also may not want their data shared across them. But we also have just a general explosion of registries. I think part of the update framework work was based around a world where there was npm or PyPI as a single registry.
And what we're seeing is that that really isn't the model customers are scaling out to; those become interesting sources. And this is a much larger conversation. But the reality is customers need to bring the content into their own registry so they can secure it and approve it. And we also see lots of public places to get information from. So there is no one registry to secure. All of that is wrapped into this problem of what is the right scope of timestamp and rollback protection that we need to balance. It's also coupled with the fact that all the clouds are moving to this model of: you want to run a container or run an artifact, here is a completely blank compute instance that you can then decide how to provision. There's no history to it. So now if I want history that says, hey, last time I talked to Docker Hub/debian, which was a week ago or yesterday, here's the timestamp data that I had: that's not on that compute node, because it wasn't yours two seconds ago. Where do you get that information from? Where do you persist that state? So that whole balance, coupled with lots of registries, is the tangled mess we have to figure out, and how we really handle the update framework semantics.

And is this here the right place to discuss that, or is there some other place where you discuss it with the TUF people? Sorry, let me turn a bit to block the wind. Is this any better?

Yeah, that is actually okay.

I'm just wondering whether this call, on other weeks, is the place to discuss that. Or is there some other venue where people from the TUF community show up?

So they were actively involved. In fact, I'm doing a quick look here; I don't see Marina or Justin here today. Or Trisha, I don't want to leave anybody out. We have been discussing with them for a while.
That's how these conversations started, and we got caught up in all this. So I can't say there's any one good meeting where you can listen for half an hour and hear this discussion, where everybody comes together and goes: yes, this is a problem, we should figure out how to solve it. It's more that the first several meetings we had of Notary v2 got wrapped up in that. And that's when, several weeks ago, we decided to split this out into two phases. So unfortunately, the summary I just gave you is probably the best summary of it. We need to reconvene and have that conversation. But like I said, we felt we needed to separate it out into phase one and phase two, where we can get the end to end kind of figured out, and then we can figure out how to roll these update-rollback semantics into the design. So yes, we will have that again; I don't have an exact timeframe yet. We have been talking behind the scenes, not behind the scenes, just one-on-ones and several discussions of, hey, we still need to address this. I think the question for you, Sergio, is: what are the pieces that you really feel are important that we need to capture? Those are really helpful.

Right. So the situation I've got in mind right now: I'm not with them anymore, but let's say I'm with Ubuntu and I've got a container store, an image store, that I'm publishing for people to use. And you've got another company that wants to have their lab be mainly off the net, but they want to use our images. So they're going to download them, and somehow they're going to need to delegate trust for the signatures. So if we're using the update framework and we're using very short signature timeframes, we're now basically saying that this proxied environment is going to have to fetch new signatures from the public service every hour. And that seemed like a potential problem. So I'm wondering if that's been considered.
Yeah, actually, that's a great summary of it, because you're starting to get into: there's no single registry that everybody authoritatively goes to. There are not only private registries that people pull content into, but also air-gapped registries that people pull content into, which is much like what you were saying. There's a combination there; it's a known problem. How do you propagate that out? So the thought has been: key revocation versus a particular registry got hacked. Those are two different things that I don't think we've fully wrestled with, other than we...

Oh, I thought that key revocation had been explicitly made not a part of this.

Key rotation? Key revocation, is that what you meant?

Yeah, yeah, key revocation. Sorry, I don't know if I heard it wrong.

Key revocation is part of it. Niaz has been trying to drive that. He has meetings on Friday mornings, but has been tied up in other things, so he hasn't been able to. I think he says he's going to start again this week. So it's definitely part of what we want to do with Notary; we've just split it out. The key management and the risks associated with it are part of an active working group; it's part of what I would call phase one. Some of the stuff is just so detailed that there are some key experts in the industry who really need to be thinking about this, and Niaz has been driving that working group. So I think he said he was going to start back this Friday. He had a couple of weeks where he had to be off, between vacation and some other things.

As for the TUF timestamps in disconnected environments: in principle, every client can just add some toleration. So if the client knows that the data is updated every Monday, then everything that is at least from the last Monday would be accepted. Client policy can always make TUF pickier.
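The toleration idea just described, accept timestamp metadata as long as it is no older than the environment's known update cadence, can be sketched as a client-side policy check. The function and field names are assumptions for illustration, not part of any TUF client API:

```python
from datetime import datetime, timedelta, timezone

def is_fresh(signed_at: datetime, now: datetime, max_age: timedelta) -> bool:
    """Client policy: trust timestamp metadata only if it is newer than the
    environment's update cadence (e.g. one week for a weekly air-gap sync)."""
    return now - signed_at <= max_age

now = datetime(2020, 8, 10, tzinfo=timezone.utc)
signed = now - timedelta(days=3)  # metadata last refreshed three days ago

# A weekly air-gapped environment tolerates week-old timestamps...
assert is_fresh(signed, now, max_age=timedelta(weeks=1))

# ...while a policy for an environment scared of being more than four hours
# out of date rejects the very same metadata.
assert not is_fresh(signed, now, max_age=timedelta(hours=4))
```

This is the "client policy can make it pickier" point in miniature: the metadata is the same in both cases; only the consumer's tolerance differs.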
So you're kind of saying, for the Ubuntu kind of case, that if somebody is at least doing a once-a-week kind of thing in a separate environment, whether it's air-gapped or a copy or not: if the data is coming across the air gap only once a week, then it's fine for the internal environments to only require timestamps to be about a week old. Presumably whoever controls the environment is at least somewhat aware of the air gap and what it is allowed or isn't allowed to do.

Yeah, kind of like a policy. Individual customers can make a policy choice in how they do that.

Yeah. If the air-gap administrator decides to hide from you that something has been revoked, then it just doesn't exist, for all intents and purposes.

Yeah. This is where the submarine scenario comes in, where it goes under; but we have other seagoing vessels too, I think is one way of categorizing it, whether it be container ships or cruise lines (eventually there will be cruise lines again), where it's too expensive to take updates while they're at sea, so they do it while they're in port, and that might be a week. So yeah, those are definitely policies, whereas some other team might be in an IoT environment where it scares the hell out of them to be more than four hours out of date. Sorry, Sergio, you were saying something.

Well, I need to think about how to express this better, but I could imagine an air-gap scenario where the regular schedule is that every week there are updates, but then on Tuesday, oh my god, there's this fix we absolutely have to have. So we bring the fix in. Since our regular cadence is weekly, we've now put a week-long valid signature on these things, but we in fact have to invalidate them now, because on Tuesday we got this other thing in.
I'm not quite sure how that fits in, unless we have a separate service running inside the air-gapped environment that every day just says: well, these things should still be valid, so I'll sign them for another day. And on Tuesday this thing came in, so we tell that service not to sign those, so that on Wednesday the Monday images are invalidated. Is that the answer here?

I think it's an input into an answer. Oh, I didn't realize I was sharing; sorry, I was multitasking on something I wanted to queue up. So one, I would absolutely encourage you to please meet with Niaz on Friday mornings, because it sounds like you're bringing up all of the right conversations, and we want to focus the key management scenarios there. We've been trying to focus on Slack as the place where we're having the fluid conversation, and then he has the Friday mornings, I think it's at 7:30 a.m. or something, where he drives that discussion. What I was just pulling up, which I'm sharing already: I think there's a larger workflow that I've been trying to get more focused conversation on. The latest Docker terms of service has kind of been a way to drive some of this conversation forward. I actually think it's being perceived a little differently than it should be. The reality is upstream content, even well-intended security fixes, can break downstream apps. My environment may be dependent on some behavior that was just deemed to be insecure, and there's the question of: can you just blindly bring upstream content in, or do you need to verify it? Whether the upstream content is even still available is another problem, which I talk about in this article. But I think what I'm trying to get more thought around is this promotion workflow: you bring stuff into your environment, whatever that environment is, and you test it and validate that it works for you.
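The in-gap re-signing service sketched in the question above (extend validity daily for everything except artifacts that have since been invalidated) could look roughly like this. This is entirely hypothetical, not an existing tool or an agreed design; the function and digest names are made up:

```python
from datetime import datetime, timedelta

def daily_resign(known_digests, revoked_digests, today: datetime) -> dict:
    """Each day, issue fresh one-day attestations for everything still
    considered valid. Anything revoked mid-cycle simply stops being
    re-signed, so its last attestation expires within a day."""
    expires = today + timedelta(days=1)
    return {
        d: {"expires": expires}
        for d in known_digests
        if d not in revoked_digests
    }

known = {"sha256:monday-image", "sha256:tuesday-hotfix"}

# Tuesday: the hotfix arrives and the Monday image is found to be bad.
attestations = daily_resign(
    known,
    revoked_digests={"sha256:monday-image"},
    today=datetime(2020, 8, 11),
)

assert "sha256:tuesday-hotfix" in attestations
assert "sha256:monday-image" not in attestations
```

The design choice here is the one Sergio identifies: revocation becomes "stop renewing" rather than "push a revocation message", which fits an environment whose inbound sync is slower than its required reaction time.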
What works for one company is going to be different from what works for another. I think there is a piece of this workflow in that. That's kind of the larger thinking. The real problem is we just don't have good tooling around this. Even in ACR, where we've got this ACR Tasks infrastructure and we do base-image update monitoring, all the raw components are there, but it's like going to Home Depot, finding an aisle of wood and nails, and being told: go build a million-dollar house, which isn't that big a house anymore.

Sorry, what's ACR?

Oh, Azure Container Registry, sorry.

Thanks.

The service that I run in Azure, and this is where we think about this stuff. We're trying to have these conversations in a broader place, because ACR is one consumption of it for our customers. We make our Microsoft software available on MCR, and just like I would say nobody should ever be dependent on production content directly pulled from Docker Hub or MCR, you should always have it in your own registries. We need better tooling, better workflow, and of course the signatures have to be able to go into those environments, which is how this is very grounded in the conversation here. I think the key management conversations we're having need to play into this, because if I bring content across, I want to test it, make sure it's good content that works in my environment; and to the point here, if that got hacked, I need to know that that content is no longer valid. Anyway, that's just the conversation I was trying to have with this post, and the link is in the chat session. It sounds like you got some great input. I would again encourage you on Friday mornings at 7:30. Let me put the CNCF calendar up for that.

Well, would you mind adding that to the HackMD page? At the very top?

Oh, yeah. Yeah, so I'm realizing we don't have... so here, let's see: Friday, CNCF Notary v2. I don't know what time zone that is; it says 2:30 p.m.
That sounds very nice, but that is not Seattle time. Oops. It might be a good time for you guys. But at the top of our HackMD is the calendar, and actually the Notary v2 conversations on Slack. The only thing missing here is the Notary project itself. So, any other thoughts from folks? If there's nothing else, we can give time back to folks. I just wanted to make sure... and let me see what we've got stirring this week, to make sure we have something. I'll try to make sure we have a good update next week, even if it's just the distribution stuff we've been working on, to give a good summary of it. I think we canceled last week because of the holidays, and I want to make sure we keep good momentum, with people feeling that we're making progress here. So with that, thank you. Thanks, folks. Thank you. Thanks. Thanks.