Good day, folks. Hello, Hank. I think that's what I just saw, the square to associate with the voice. While we're waiting for folks, as I'm filling out this HackMD doc: one of the things we learned from the OCI call is that our doc has gotten way too big because we keep all of our notes in one place. The HackMD doc is starting to get extremely sluggish as it tries to provide intelligence and other things over all the content. If we could archive the content to a repo location, that would be awesome. Any volunteers would also be awesome. I'll paste this here. Please do the proverbial sign-in on the agenda. I just added the conflict process as the freshest thing, pulled from our Slack channel conversation. Any other agenda items? I know some others are planning to join, so we'll fill in for a moment. Marina, do you know if Shrishank is going to join, or is he just jumping on Slack? I'm not sure, I can ask. Okay. Niaz and Ian, there's no suggestion of another big security thing; just coincidentally the two of them had other things they had to deal with today, so they weren't able to attend. It was my hope, as I put it in Slack, that we were going to talk more about the key management workflows for the ephemeral clients and so forth, but it's kind of pointless without the two of them here. Why don't we just kick off? This is what the recording is for, also.

So we're back for the new year, and that's great. Everybody's been digging themselves out from the backlog of stuff from the new year and planning for the new year; at least that's what I've seen from a couple of people, and it certainly affected me this week. I'm hoping, not just hoping, planning, to get back going on the registry prototypes that we need to be able to store and retrieve signatures with the updated workflows that we've been discussing.
And we want to get to a point where we're confident enough about the prototypes that we can actually roll them out in something more than a Web App instance of a hack of Docker Distribution that wouldn't actually scale. We want to get it to a point where customers can use it for Kubernetes deployments, whether that's a rollout in ACR, ECR, GCR, in Docker Hub as well, or something else that we can actually put production-ish workloads on. We certainly need to get further in the Distribution process to have that confidence. So that's part one. Josh mentioned to me that he was hoping to maybe help with some of the prototypes. The Harbor folks have also been very insistent, insistent in a good way, persistent and consistent in asking: can we help? Can we help? Can we help? We just want to get more of the conversations concrete across Notary and Distribution, to feel that we're making progress in a direction the relevant parties are comfortable with, because this is cross-cutting various open source efforts, and we need to get people on the same page. So I'm hoping we'll get the next round of that this week, and that we'll be able to review next Monday and next Wednesday in the OCI calls to see if everybody's comfortable with the direction, so we can make more progress. So that's the update on the ability to push, discover, and pull signatures out of a registry.

The other half of this that we've split off, and I'm sorry, my microphone was way over there, the other part we've split off is how we get key management handled: where is everything stored? We said we want to be able to leverage cloud-specific key providers, key management systems like Azure Key Vault, AWS KMS, and so forth, or HashiCorp Vault. But also, what is the secure process for getting keys to ephemeral clients?
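To make the "push, discover, pull signatures" workflow concrete, here is a toy in-memory sketch of the model under discussion: signatures are stored as separate artifacts that reference the digest of the image they sign, so they can be managed independently of the image. All names here are illustrative stand-ins, not the actual Notary v2 or registry API.

```python
import hashlib

class ToyRegistry:
    """Illustrative model only: a real registry speaks the OCI distribution API."""

    def __init__(self):
        self.blobs = {}        # digest -> content
        self.signatures = {}   # subject digest -> list of signature artifacts

    def push(self, content: bytes) -> str:
        # Content is addressed by its digest, which never changes.
        digest = "sha256:" + hashlib.sha256(content).hexdigest()
        self.blobs[digest] = content
        return digest

    def push_signature(self, subject_digest: str, signature: dict):
        # Signatures accumulate against the immutable subject digest.
        self.signatures.setdefault(subject_digest, []).append(signature)

    def discover_signatures(self, subject_digest: str) -> list:
        return self.signatures.get(subject_digest, [])

registry = ToyRegistry()
digest = registry.push(b"example image layer")
registry.push_signature(digest, {"signer": "wabbit-networks", "sig": "..."})
registry.push_signature(digest, {"signer": "acme-rockets", "sig": "..."})

# Both signatures are discoverable from the one immutable digest.
signers = [s["signer"] for s in registry.discover_signatures(digest)]
```

The key design point is that adding a signature never mutates the artifact itself, so its digest, and any references to it, stay stable.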
We haven't been punting on that; we've just been splitting it into two separate conversations, and Niaz has a document that he's been working on that will hopefully make more progress, in a good way. We want to get that into the prototype and then work with the OPA Gatekeeper folks to see if we can implement a solution. So that's kind of the update there. Anything in that room? Because I do want to talk about the tough conversations as well, what Shrishank categorized as conflict resolution, because I definitely want to get that addressed. We know we're not going to agree on everything, but we want people to understand where we agree and disagree and what the next steps would be. So I think that's a totally fair conversation. Anybody?

Hey Steve? Yeah. So I'm looking at the prototype for Notary, and I'm noticing the use of X.509 certs. Is there any reason why GPG couldn't be plugged in for that?

Yeah, that's another great part of the conversation. Originally, the prototype that Shiwei had done had both GPG keys and X.509, and honestly I have to go back and look at the notes. There was a concern that, the way we had factored it, GPG almost couldn't really be secured; it was almost too free-for-all. At the same token, there was a concern that X.509 is too high-end corporate, meaning expensive, to get implemented. So that was one of the things we were struggling with. I think, and again this is me faulting my memory back in over the holidays, I believe Shiwei had a GPG key solution in the prototype again that we brought back in, but I'd have to double-check and figure out where that is.

Okay. Is the Git history on that repo a good place to dig?

It should be, I believe. The way some of these prototypes were done is they were done as more personal ones and then pushed up. I'm pretty sure that history was preserved as the PR was sent in, but this is a great conversation where we have to deal with the time zones.
Just on the Notary channel, just pinging Shiwei; he can answer directly rather than me trying to represent him. Okay. I think it's totally fair to say that we want to make sure that, as part of the prototype, we address much more crisply where we stand on GPG keys versus X.509 or something else. There was another conversation about using Let's Encrypt or something like that; there was a recognition that there's a general problem around keys, and it started the conversation of: how many problems do we want to take on? Because this is certainly the type of group that could say, look, maybe we do need to invest in something like Let's Encrypt or something similar. I won't try to weigh in on one particular option; that's just the one that comes to mind. At the same token, we need to figure out how to scope this so we can get something out, so that we have something to build on.

That's a great transition to the next conversation: SolarWinds is a good example of an exploit that could happen. It wasn't containers, but it could have been. It's also an example of something where the current scope of what we're doing with NV2 and signing would have had the same mitigation, in a good way, as the SolarWinds exploit. They quickly discovered that the distribution was not hacked; the exploit was made in the official build system. Being able to identify that is critical, and we can't do that with containers today. The Notary v1 thing only works in a single repo of a single registry; as soon as content moves between registries, that signature does not carry with it. We continue to feel pressed to get the signing solution we've been identifying out, so we have something, because right now we really have nothing to speak of. If there are opportunities to improve it, we certainly will. Security is never finite; it's always an incremental improvement.
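The "signature does not carry with it" problem comes down to what the signature is bound to. A sketch of the alternative being proposed: if the signature is computed over the content digest rather than over a registry- or repo-specific name, then it stays valid wherever the bytes are copied. HMAC stands in for a real asymmetric signature purely to keep this self-contained; it is not how Notary v2 actually signs.

```python
import hashlib
import hmac

SIGNING_KEY = b"wabbit-networks-demo-key"  # illustrative only, not a real key

def digest_of(content: bytes) -> str:
    # The digest depends only on the bytes, not on which registry holds them.
    return "sha256:" + hashlib.sha256(content).hexdigest()

def sign(content: bytes) -> str:
    # Sign the digest, not a repo path or registry URL.
    return hmac.new(SIGNING_KEY, digest_of(content).encode(), hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(content), signature)

image = b"the same image bytes"
sig = sign(image)  # signed once, at build time

# Copy the image from a public registry into a private one: the bytes,
# the digest, and therefore the signature all stay valid.
assert verify(image, sig)
assert not verify(b"tampered bytes", sig)
```

A repo-scoped trust model like Notary v1's, by contrast, binds trust to the name the content was published under, which is exactly what changes on import.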
Not going off and trying to figure out a Let's Encrypt solution, or even a GPG key; if we could literally only support X.509, that would probably not be a blocker for now. I think that's part of the scoping we need to figure out.

I would say that X.509 is a blocker for most users. It's not a blocker for Microsoft, but it's a blocker for almost all of the organizations who don't have X.509 signing set up at all.

If we're trying to target registries that exist in the major clouds and the projects, this is not a hobbyist-site kind of thing. Not that you shouldn't, but that's not the blocker concern. Is it that the registry operators couldn't provide those kinds of services to companies? I really don't know; I'm asking the question.

I don't know. Probably not, I'd say. You're on mute. I would say that, in general, registries would probably not want to provide signing services.

Okay. Honestly, I can't remember where we left it, because I remember the GPG conversation came back in, and I just have to go back and look at the notes. I'm not trying to portray it as a done deal; I'm just trying to fault it back into my memory. We should definitely rediscuss that.

I think I remember that the GPG part was having some issues with revocations or things like that. Revoking a certificate or something would be more complicated with GPG. I recall something like that was a reason to go with X.509 for now.

Yeah. I mean, I think using GPG is definitely complicated, because it's not clear what you mean by "using". Which GPG infrastructure do you actually mean? Do you mean GPG signatures, or the GPG Web of Trust, or what? Or GPG signatures authenticated by other sources?

I think it was related to the keys. So if GPG keys, let's say a GPG key, is compromised, how to get revocations in place. I think there was some discussion we had a couple of months ago.

Yeah. I think that it should be possible to separate the signing infrastructure from the key management infrastructure.
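That separation can be sketched as two independent checks: first, is the signature cryptographically valid for this key (the signing infrastructure), and second, is that key currently authorized and not revoked (the key management infrastructure). The cryptographic step is faked with a lookup here, and all names are hypothetical, but the split is the point.

```python
# Keys the policy once authorized, and keys since revoked
# (e.g. fed from a CRL or a revocation feed -- both illustrative).
TRUSTED_KEYS = {"key-1", "key-2"}
REVOKED_KEYS = {"key-2"}

def signature_valid(artifact: str, key_id: str, signatures: dict) -> bool:
    """Stand-in for real cryptographic verification of a signature."""
    return signatures.get((artifact, key_id)) == "valid"

def accept(artifact: str, key_id: str, signatures: dict) -> bool:
    # Check 1: the math -- does the signature verify under this key?
    if not signature_valid(artifact, key_id, signatures):
        return False
    # Check 2: the policy -- is the key still trusted right now?
    return key_id in TRUSTED_KEYS and key_id not in REVOKED_KEYS

sigs = {("app:v1", "key-1"): "valid", ("app:v1", "key-2"): "valid"}
# key-1 passes both checks; key-2 has a valid signature but is revoked.
```

Under this framing, whether the key material is GPG or X.509 only changes check 1; revocation lives entirely in check 2 and could be backed by either ecosystem's mechanism.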
So you could use GPG if you had a way to revoke and assign those GPG keys in a way that can be changed. Whereas X.509, I think, has a built-in system which, I mean, has its own set of problems; there's a lot written about certificate revocation lists and how those can be compromised.

Yeah. So with Notary v1, indeed, at Philips we have been researching this a bit, and we also struggled with managing those keys, at least for the end user, for the developers or people needing to use it. There were some difficulties. So we built a kind of web interface where you can do all these TUF operations, to basically create and authorize keys on repositories, to mitigate that somewhat. And I think that's also been taken as input for the key management topics Niaz has been working on.

So thank you, Josh, for opening the issue. I tagged Shiwei to fault in some of the conversations. It might even be that the last key wrapper he put in supports GPG; I just don't remember. We should get the answer on that. I think part of this is a good conversation in the sense of capturing some of the conversations and decisions. I know that some of these conversations have been captured in our HackMD notes; this is one that should be more obvious. So I think getting to the point of capturing issues with more detail will help us, and we should follow that process.

The larger conversation is the back and forth we've had about the TUF implementation on registries. I'd like to reserve some time to talk about that conversation and how we capture it, because I do see us going quite a bit in circles on that one. We clearly want the folks from NYU, from the TUF and in-toto groups, to participate, because there's clearly a lot of deep research that was done there that we want to be able to capitalize on, incorporate, and so forth.
The challenge we've been facing is that we keep going in circles on: what is the problem we're trying to solve? What is a generic problem in package management as a whole, and how does that get applied to the scenarios we're trying to support? I think we get caught up in that. There are definitely some problems; it's just not clear that they apply. I think the SolarWinds conversation we had in December was a good example of it. There was a good capturing of that at the start of a conversation, but we never actually identified how it would have changed any of the work that we've been doing here. In fact, I think we came out of last week's meeting with a comfort that, yes, there is an application of TUF in the build systems, up to the point that the build systems are done. But we did not come out with any action items requiring us to change anything we've been doing from the point that an image is built, or an artifact is put into a registry, and distributed. So the challenge, which I tried to capture succinctly in the Slack notes, is: what are those scenarios? What is the problem we're trying to solve? Go ahead, Justin.

What are you saying? Are you trying to say that you don't think there's any difference in the security model between using TUF and not using TUF? Are you really trying to say that?

I'm not trying to say that per se, because there's certainly a lot more to the way TUF generates signatures and does validations. The problem is that we have not been able to figure out how to take the pattern, which seems to have been designed around a single public registry, and apply it when a public registry hosts content owned by multiple orgs, and content moves within a registry and across different registries.
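For readers following the TUF side of this debate, here is a highly simplified sketch of the metadata chain being referenced: a short-lived timestamp role vouches for the snapshot, the snapshot vouches for the targets metadata, and targets pins the artifact hashes. Real TUF adds signatures, role thresholds, and key rotation at every layer; this only shows the hash-chaining and freshness idea, with all field names invented for the example.

```python
import hashlib
import json
import time

def h(obj) -> str:
    # Canonical hash of a metadata document (illustrative canonicalization).
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

# Build the chain bottom-up: targets -> snapshot -> timestamp.
targets = {"app.tar": hashlib.sha256(b"app bytes").hexdigest()}
snapshot = {"targets_hash": h(targets)}
timestamp = {"snapshot_hash": h(snapshot), "expires": time.time() + 3600}

def verify_chain(ts, snap, tgts, artifact_name, artifact_bytes) -> bool:
    if ts["expires"] < time.time():
        return False  # stale metadata: guards against freeze attacks
    if ts["snapshot_hash"] != h(snap):
        return False  # snapshot was swapped out
    if snap["targets_hash"] != h(tgts):
        return False  # targets metadata was swapped out
    # Finally, the artifact itself must match the pinned hash.
    return tgts.get(artifact_name) == hashlib.sha256(artifact_bytes).hexdigest()
```

The open question in the discussion is what scope these metadata documents should cover (a repo, an org, a registry) once content is copied between independently operated registries, since each copy would sit under a different chain.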
So we got focused on answering the perf and scale aspect, which I don't know if we really solved, but we certainly focused on it; there's a spreadsheet, there was a big debate and conversation on it, but it never addressed the "just because you can doesn't mean you should." There was an assumption that you can take the data from multiple customers and hash it in an anonymized way to generate this metadata, but we never addressed whether that's appropriate, and I would argue it's not appropriate to take any kind of anonymized data; there's literally no access across customer data that we, as registry operators, should be doing. So that's the piece I struggle with: it's not clear where this can be used in the experiences we're targeting.

I think that we're fundamentally solving different problems. TUF isn't just about signing individual files that you're downloading; it's about making sure that the signature you verify is the correct signature, that it comes from the right party, that it's up to date, that the key you're using hasn't been revoked. It's a slightly different model, where you're making sure that the system as a whole is providing secure files and secure signatures, versus "I'm going to trust that this registry is going to show me both a file and a signature, and I'll verify that file from that registry," which is putting a lot of trust on that one individual system, and a lot of problems if one individual key is somehow compromised. There's nothing inherently wrong with that model; it's just that if registries are going to be the future, used by lots of different organizations and individuals, then you need the stronger security guarantees you can get from a more cohesive metadata strategy, Marina.

I think there's a question about what size of system we're trying to protect, or how many systems, and of what size. Is the registry the system? Is the customer's Kubernetes cluster the size of the system? Or is the organization, the total customer, the system? It's more a question of making sure we get that size right, so that it makes sense to the users, and not make it too big, which has scaling problems. I think some people are definitely arguing that a whole registry is too big a unit, versus it being too small, where there are too many things to manage, too many keys to manage. I'm concerned about, for example, having snapshot files for the entirety of Docker Hub, because I think that's too big in terms of the system. The whole of Docker Hub isn't a meaningful concept as far as anyone working with containers is concerned, because no one cares about everything on Docker Hub. We had problems with Notary v1 where an individual repo was, for a lot of people, too small a unit. We talked about where we could do things that were in between, and it probably needs to be flexible, potentially, as well.

Yeah, that makes a lot of sense, and I feel like there's definitely room for allowing both to a certain extent. If you have a small registry that's mostly a couple of different organizations, there's no reason that shouldn't be the unit.

Or indeed one organization, if it's your organization's private registry or your team's registry.

Yeah, absolutely. But then if, say, there are millions of organizations on Docker Hub, then maybe there's something in between those organizations and all of Docker Hub. I don't know what kind of delineation would make sense, but maybe there's some in-between where you have fewer keys to manage, so you can get something across some number of those projects, but maybe not all. And there's Steve's point that there are things that span registries, so you might have some things that are on both Docker Hub and Azure and another, and those things as a unit make sense, but the registry is not relevant for them. So maybe you'd have the metadata mirrored across the registries, pointing to all of them, or something. I have to think more about that.

Yeah, I think that's one of the problems: the size, as you say, is definitely variable. We've maybe talked too much about tying it to registries, and maybe it should be more registry-independent than what we've been discussing. Those issues are not irreconcilable, but I think things got a bit confused by assuming they're similar to some of the other use cases with TUF, where package distribution has very pronounced boundaries of trust.

Something to recognize, and we're seeing this a lot in the last year and this year even more so, is that it's not just Docker Hub plus your private registry. What we've seen expand over the last couple of years is that there are multiple public locations to get content: there's Docker Hub, there's Quay, there's GCR, there's Azure for some customers who actually make their registries public and anonymous, GitHub just added theirs, ECR just added theirs, NVIDIA has a registry. Those are all public registries that people want and need to get content from. A solution we look at has to not assume there is some metadata that spans all of those.

And after they've got all of that, we're saying that in production they should not be pulling from those public registries for their production workloads. They should take a copy, move it into their private registry, this gated-import workflow we've been talking about, and consume the content for the critical workload in their environment, where they own their supply chain. There are good and bad things that happen on upstream sources; that's part of what we captured in the OCI blog, where we covered everything from technical oopses to humans who were upset and emotional and made a decision that wound up having a much larger impact than they had planned, and there's a third one I forget. Basically, none of those were even malicious scenarios, and they broke the plans. So we need to recognize that there are multiple public registries, then private registries, and then air-gapped private registries that we need to be able to support, and that is the P0. If we can't support that core validation of content, it's do not pass go.

I think the copying into a private registry is actually where the registry model makes more sense for TUF, because if you control your whole registry, you can even manage keys all within your same private registry. What's a little more problematic is that huge space of: what source of trust actually makes sense, and what groups of projects actually make sense together, when it's not your own private system. That definitely warrants more discussion: how do we set those parameters? The metadata shared between registries was something I said as a thought, but you're right, it would probably require more coordination than is wanted.

For a note on sizing: Quay has its own fork of Notary v1 that integrates with Quay a little tightly, but also provides a whole-registry trust point that CoreOS's rkt runtime knew how to talk to. rkt went away, and we haven't had anyone ask about it, so it seems like no one practically used an entire registry as a point of trust, although that's perhaps just because Docker never did anything with it.

Well, yeah, I think Docker went too far the other way, into Notary per repo, which was definitely also problematic, and people definitely want to make larger trust judgments than a single repo in a lot of cases. An org trusting an org is, for a lot of people, a good compromise, or more than one org, potentially a number of orgs. But no one wants to trust the whole of Docker Hub; that's definitely true.

So I'm looking at it from a more overall perspective, the whole software supply chain. From a key management perspective, I think a lot is captured in the notes Steve was sharing in the chat. But from a signature perspective: let's say I start my process from development. I'm pulling images; preferably I would like to pull signed images which I trust, from whatever location, whether that's Docker Hub or Quay.io or whatever registry. Then it moves into a CI/CD place, where you probably also have your internal registries, which are internal caches and things like that, and there those signatures need to be accessible too. Then I'm releasing the same image to one or many of my clients, and those clients might have air-gapped or private registries as well. So I'm basically pushing the same artifact to various locations. The thing I still don't really get is how we are able to scale this, because that means all this metadata, for all the dependencies I pulled in but also the things I'm releasing myself, has to be captured somewhere where everyone can actually access the metadata and verify the signatures. I think that's the most difficult part of the whole story: capturing this whole end-to-end flow. If we just focus on a single registry or a single solution, then we also focus on a single step in the process, and securing just that single step is not going to resolve any of the problems we are trying to resolve. That's a bit where my worry is at this point in time: how are we going to be able to scale this whole solution?

Mark, I think it's a great point. This is where we've been trying to use the multiple-signature model that doesn't change the addressability of the artifact; the digest doesn't change. That allows those trust boundaries. Let's use the ACME Rockets example: Wabbit Networks writes the software, they sign it, put it on Docker Hub; Docker Hub signs it; ACME Rockets brings it in and signs it as they consume it. Any one of those, Wabbit Networks, Docker Hub, ACME Rockets, could decide that they found a vulnerability, and they can, I don't know if "revoke" is the right word, but if they found it was hacked for some reason, they could revoke or identify that artifact as being a problem. Even if Wabbit Networks, being the source of the software, says "you guys don't know anything," ultimately ACME Rockets can decide that they don't trust that artifact, and that's fine. So I guess what I'm asking is: this is one of those problems where we know the complexity of movement, so we're not trying to make all registries have one massive metadata thing that they can invalidate, because we just know it's not practical, especially for the air gap. Does the approach we've taken, with multiple signatures across different environments, including the original signature moving with the content, so that by the time it gets into ACME Rockets, even the private environment, all three keys would be there, does that solve the problem that you're looking at?

Well, that's something we are actually already doing in our PoC setup. What we did is we basically built this user interface that encapsulates the Notary CLI, and there we are simply able to manage certificates for a bunch of registries. Then in CI/CD we built some automation where you basically say: okay, I switch now to the other Notary server, I pull the thing, I can verify the signature, now I'm going to release to the other registry. So I just toggle the switch with the endpoint; I have the certificates for that other Notary server as well in my CI/CD, and push it there. But that means there is still this potential place where you can put in malicious software, because I pull from one registry, verify the signature, put in my malicious content, and release it with my new signature to the other registry. So what I'm saying is, Notary v1 sort of supports that scenario already, but you still have a way to bypass the security checks.

I think there should be a way to maintain those original signatures, but then delegate to them on the new system, with the idea being that you never remove those original signatures, so that you can't change anything in between, but then you say: okay, I also certify that I think this is correct, and you kind of combine those things. I think that's one of the key challenges of this movement: it's not so much how to move the signatures on the packages, that's just another file to copy, but how to make sure that the new system knows to trust those signatures, and how it learns that it can trust them. That's one of the key issues: where do you decide who can trust it, and for the person who's moving it, which thing do they have control over on each side, so that they can say: okay, yeah, I know that this is trusted on this new registry, and this is where I'm going to put it. If that makes sense.

Yeah, so what I'm trying to say is that that's already possible today with Notary v1; I did a small PoC to basically prove that. The thing is just that you need a few components to make it easier for the end users, for developers, to consume this kind of logic: building blocks for your CI/CD, some interface to manage the certificates and basically link them to the TUF roles and structures to manage those keys. But in theory, you can already do that with Notary v1.

So I'm not completely following, because with Notary v1, unless you've done something different from the Notary v1 that was implemented as Docker Content Trust, the signatures don't move between registries, or even between different repos within a registry. So I'm trying to understand that one a little bit more.

Yeah, we are re-signing the release, basically. I pull from one registry, verify the signature, and then just re-sign the same thing and push it to the other registry. On both those registries we have the same keys authorized, but that means we just create, at that point in time, a new signature: same keys, but a new signature on the same content.

Okay, so you're having to re-sign each time, and the original signature doesn't move. You have a trusting system where, every time it moves, there's a new signing that has to be trusted.

Yes, and the security issue in our solution today is basically the place where you pull and re-sign; that's the place where a hacker could pitch in. I think that's the thing you would like to resolve, but I don't see a way to move the existing signatures in a secure way to the other registry, or to the other place where you are releasing the thing. I think that's the main point to be resolved.

So the proposal, and this is why I'm asking people to take a look at the prototype proposal, and why we also want to get the proposal into a functional state where people can not just stare at it but actually play with it and touch it, is that the signatures are themselves another artifact; there's basically this cross-validation between the signature and the artifact as it's pulled, and the key still has to be validated, don't get me wrong, and that's part of why I'm looking for Niaz to come back with more of the details, because we know that's a gap, especially for ephemeral clients. The content, when moved, the signature would move with it. We would do it in an OCI-generic way, which means we need to add more functionality to the registries, but it would be done in a generic way so that they can be trusted as a cohort of each other, assuming that the keys are there to validate that.

So, just so you know, we're doing this "discuss, prototype, model, build" loop, and once we've validated it, we'll go back and say: all right, we've got the Gaudí model we've been talking about. We've discussed enough to build a model, everybody has stared at the model and played with it and put their things in it, to know that the model will hopefully work at scale, to then validate actually building the building. I don't want to claim that we're building a church, but it's the next level of investment. So if you can help us by validating what you read in the two docs a little more, to see what about it doesn't smell right, what the gaps are, or whether we're just completely not going in the right direction, that would be really helpful.

Yeah, happy to help there. I'll just have a look at the NV2 repo; I think that's the one, right?

There are a couple of them, but the two links, right now, are the best anybody can do in practicality without spending a day or two really full-time on it: read through those and see if they make sense. That's why we tried to be so crisp with the silly Wabbit Networks and ACME Rockets thing; it really starts to articulate the scenario.

Yeah.

We've done some demos, if you watch the previous videos we've recorded of the NV2 prototypes. My commitment for the next week or two is to get that to a point where anybody can easily run it; that's what we're trying to get to.

Yeah, I was in that meeting with the demo for NV2, but didn't dive in in more detail yet.

And we recognize that people have time constraints, so we're trying to make it easier to digest. Yep.

I do want to touch on something that I think we're, I won't say glossing over, but it's the undertone: I think we're all frustrated that we're not running fast enough. At the same token, we have run fast in some places, and probably could have done a better job documenting some of the decisions and what needs to be done. So I'll commit that, now that we've got a base in the Git repos, as we take new PRs and new discussions, there will be a better record in the Git history, as opposed to having to read through notes or, in some cases where we didn't document the notes, having to watch the video; that's not a scalable solution. So I will commit to making sure that as we capture these questions and conversations, we do those in Git repos and have history. Were there any other conversations in that vein?

I would just say that I'll make sure to put our TUF stuff on GitHub as well, so that we can keep track of changes to it, because I think our design has actually changed over time, which I think is part of a lot of this confusion. We should make that all more transparent and clear to everybody, so I'll commit to that as well: just more stuff on GitHub.

To Justin's earlier point: our goal is not to split off and not use any of the TUF work. It's: how do we take that knowledge and make it applicable to multiple registries, multiple public registries, the private registries, the air-gapped registries, and the content moving within those?

If there are no other topics, nothing says we have to use the last 18 minutes, although we could. The other conversation that keeps coming up is time zones. I know that's been a challenge for folks, and for the people who actually made it to this call it's probably not as much of a challenge, but I've been asking if somebody wants to volunteer and help a little bit with that, just with the splitting out of the work we're trying to get done. Okay. I'm not sure what the right amount of dead-silence pause is until we decide to wrap up for the week. We'll call it there, and, to be continued. Thanks, folks, have a good week. Thanks, Steve. Thanks, everyone. Thank you very much. Bye-bye.