Hey folks. Whoops. Hey folks. Hey Steve. Hey Omar. Thanks for joining. Where did you get a cool background like that? And why do I not have a cool background? Well, that's actually exactly why, because I'm tired of looking at the same pictures all the time. So I get a bunch of them from Discovery Park over here. I think, how come everybody's in the same room, but I don't see them? So, if we're good. I didn't turn my video on yet because I'm, you know, not feeling the white-wall love. So just a reminder, add your name to the attendance. I'm averse to installing the Zoom client locally due to security reasons. So whenever I see the fancy backgrounds, I always think, okay, well, there's somebody who's likely, if all of the reporting is true, to have a lot of security holes just waiting to be poked. Oh, wow. Trishank, you really filled out the agenda. Yeah, sorry. You took me up on the whole offer to take the hour, didn't you? Take the hour just to read the agenda. Well, I mean, be careful what you wish for, right? You ended up finding something. No, no, it's fine. It's good. This is exactly what we wanted to cover. Is Radu going to join you, or did he leave this to you to cover? I don't know. I guess, let me try pinging him. In the meantime, we'll give it two more minutes and we'll just start. So we do the two more minutes of the proverbial five. Yeah, I think we might as well get started, to be honest. Okay. Cool. So should I just go ahead and start? Yeah, the floor is yours. Go for it. Thanks. Great. Thanks, Steve. Hi, everyone. So as some of you may know, I'm Trishank. I used to work on TUF and in-toto at NYU as a grad student. And now I'm a security engineer at Datadog, where I also put some of these technologies to use, among other things. And today I want to talk to you about our experience with key management with this tool we call Signy, which I'll explain in a bit: what it is, what it does, and what it's for. And we do this with Notary v1.
So what is CNAB? First of all, let me share my screen. I think it'll be useful. Okay, great. Presumably everyone can see my screen. Please yell at me if you don't. Great. So CNAB, I should pull up the actual repo here. So CNAB stands for Cloud Native Application Bundles. And it's a way to basically specify the entirety of your app, which can be made of many Docker images, for example. And it's a way to tell the CNAB runtime, such as Porter, for example, how to install your application in different settings, but in the same way. So for example, Google Cloud or Amazon and IBM and so on and so forth. Microsoft, let's not forget. So that's the high-level idea. And the key thing you need to know for this talk is that there is basically a bundle.json file that specifies the application. So for example, where do you get credentials from, and TLS certificates, and port numbers. The invocation images will tell the runtime how to install your application. And then the rest of the images tell the runtime what your application is made of. So for example, you could have a DB, or you could have a Kafka. Yeah, you get the idea. So then for the purposes of our talk, what we're going to talk about is how we sign these bundles. So you can think of images as sitting on a lower layer than the bundle, which refers to these images, right? And you may already use Docker Content Trust to sign and verify these images. Now the key idea is that you can reuse the same technology to sign and verify your bundles, right? And potentially you may not even need to sign your images anymore. Now you have one bundle to refer to all of them. And so if you do it properly, you can get all of the signing for different images for free in one place. And so the good news is that I'm very pleased to say, after months of work, the CNAB security spec, which basically talks about using TUF and in-toto, was finalized last week. It's 1.0.0 GA.
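To make the bundle.json idea concrete, here is a rough Python sketch; the field names loosely follow the CNAB 1.0 core spec but are trimmed and illustrative, and the digest values are placeholders:

```python
import hashlib
import json

# A minimal CNAB-style bundle descriptor (illustrative; field names loosely
# follow the CNAB 1.0 core spec, trimmed down for this sketch).
bundle = {
    "schemaVersion": "v1.0.0",
    "name": "example-app",
    "version": "0.1.0",
    "invocationImages": [
        {
            "imageType": "docker",
            "image": "example/installer:0.1.0",
            # the digest pins the installer image, so signing the bundle
            # transitively covers it
            "contentDigest": "sha256:" + "a" * 64,
        }
    ],
    "images": {
        "db": {
            "imageType": "docker",
            "image": "example/postgres:12",
            "contentDigest": "sha256:" + "b" * 64,
        }
    },
}

# Signing the bundle means signing the hash of this one file; the image
# digests ride along inside it, which is why per-image signatures can
# become optional.
bundle_bytes = json.dumps(bundle, sort_keys=True, separators=(",", ":")).encode()
bundle_digest = hashlib.sha256(bundle_bytes).hexdigest()
print(bundle_digest)
```

The point of the sketch is just that one signed hash covers the whole document, image digests included.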
So I'm very, very pleased about that. Hard work from Radu and Raul and others. So that's CNAB security. That's CNAB in a nutshell, which tells you how to install your application in a cloud-native world. And CNAB security is a way to sign and verify your bundles. And Signy is the tool that is the reference implementation for CNAB security. Does that make sense to everyone? Please let me know if you have questions before we go further. You guys have had some fun with your names. I saw you got crabs and Signy and fun things. Exactly. Our mascot, which Carolyn came up with, is a crab, exactly. And Signy is a great name for a signing tool. That, Radu came up with. I can't take credit for that. So let me go back to the agenda to remind myself. Right. Okay. So we've talked about what CNAB is. We've talked about what CNAB security is. And now we know what Signy is. Great. So, what we call our minimum viable product, and if you care, you can read the entire spec. But the basic idea is, you can see, very similar to what Docker Content Trust already does. What we call the minimum viable product basically uses Notary to sign for a bundle. It's the same old security model you've seen before. You have these four roles. We don't have to worry about all of them. The thing that matters the most is the so-called targets role, which distributes the hashes for your bundle, in this case, which in turn contains the hashes for the images. Ah, I should mention the reason you don't necessarily need to sign your images. We'll talk about good reasons for signing them separately. But the bundle.json would contain the hashes for the images anyway. Great. And some of you may be wondering, hold on, if it's a bundle.json, it's not a Docker image. I know how to stick Docker images in OCI registries, but how do you upload a bundle.json to a registry? Excellent question. People like Radu and Silvin and others at Docker came up with this nice thing, it's called the CNAB registry spec.
They came up with a nice way of reusing OCI artifacts. It's a little bit of a hack at the moment, but I believe it does use a standard way to push an arbitrary file, and you just have to specify a media type. So that's what's happening right now. We use a tool called cnab-to-oci to push the bundle.json to a registry. So you can get bundles and images in the same place, which is nice. Very nice for your runtime, because you can reuse the same idea, basically, the same machinery. Great. And so the next thing we should talk about is, okay, so that's the MVP. Nothing we haven't seen before in the world of Docker. Let's talk about an extension to the MVP. It looks slightly more complicated, but hopefully it's not too frightening. It's just complicated. It's all of this. So what are we trying to do here exactly? Why is this so complicated? What we're trying to do is stick in so-called SBOM metadata. And it could be anything. It could be any tool you want to use, Grafeas metadata, let's say, or in-toto in this case. The key idea is that you want to provide additional signed metadata that tells you how the bundle was produced. You can think of it as a software supply chain, right? Now, in this case, what the MVP gives you is that, okay, somebody, maybe a human or a machine, signed the hash for a bundle. Great. What it doesn't tell you is who produced the bundle. The bundle itself is a very complicated file. It's got a lot of fields in it, a lot of information in it. And you can't get fine-grained information, such as, maybe you want to know things like, well, who produced the so-called invocation or installation image? Who produced this other image? Was it a machine? Did the machine produce the entire file? Or did the developers produce some parts of it? You don't get any information with the MVP at all. So that's what the extension of the MVP does. And the key idea here is that we use something called delegations.
Before I throw this word around without defining it, let me quickly show what I mean, because I've noticed that sometimes we use this term without defining it. Delegation is a very simple idea. So, okay. Without delegations, what would happen? Normally you would use the targets role, and this role has the power to, well, a role is a human or a machine, it doesn't matter. The idea is, what is the responsibility of a role, right? That's what we mean by a role. Normally this role would have the power to sign everything: any bundle, any image, whatever you need. And that may not be desirable, particularly if a machine is signing a bundle, for example, right? Or everything about a bundle. So what you want to do is split the responsibility even further, and that's what delegations let you do. So imagine that this is the targets role. We call it projects here, but let's call it the targets role. And the targets role says, hey look, I'm not going to sign anything. But if you're looking for Django packages, let's say, go to Alice, and here's her key, right? That's all I know about it. I don't sign anything. A delegation is simply a mapping of public keys to what sort of images or bundles, or arbitrary artifacts actually, you just specify the namespace that they're allowed to sign. So great. So now the runtime knows, oh, I'm looking for Django, I'd better go talk to Alice. Great. And I've got her keys too, which is nice. That is part of the power of delegations, actually: the decentralization of key management. Okay, great. And this role can also change the keys anytime, right? And Alice can further delegate the power and say, hey, look, yeah, I am responsible for Django packages, but if you're looking for Django packages that end with .tar.gz, let's say for Linux, go ask Bob about it, and here's his key, right? This roughly makes sense so far?
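The delegation walk just described (targets → Alice → Bob) can be sketched as a simple lookup. This is a simplification: real TUF metadata carries signatures, thresholds, and terminating flags that are omitted here, and the key names are made up:

```python
import fnmatch

# Sketch of TUF-style delegation resolution. Each role maps glob patterns
# to a key and a delegated role; unmatched artifacts fall back to the
# current role's own key.
delegations = {
    "targets": [
        {"paths": ["django/*"], "keyid": "alice-key", "role": "alice"},
    ],
    "alice": [
        {"paths": ["django/*.tar.gz"], "keyid": "bob-key", "role": "bob"},
    ],
}

def resolve(role, artifact):
    """Walk delegations depth-first and return the keyid trusted to sign
    `artifact`, or the current role's own key if nothing matches."""
    for d in delegations.get(role, []):
        if any(fnmatch.fnmatch(artifact, p) for p in d["paths"]):
            return resolve(d["role"], artifact)
    return role + "-key"

print(resolve("targets", "django/django-3.0.tar.gz"))  # bob-key
print(resolve("targets", "django/README"))             # alice-key
print(resolve("targets", "flask/flask-1.1.tar.gz"))    # targets-key
```

The verifier only ever trusts a signature if the key that made it sits at the end of such a chain for that namespace.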
It's basically a splitting of trust, a fine-graining of trust management in a system. Okay, so notice that we've delegated things to a role called targets/releases. And I'm just going to call it releases from now on, because the whole name doesn't roll off the tongue, as you can imagine. But releases is the traditional role that Docker itself lets you use today, actually. What you can do with Docker right now is say, okay, yeah, I've got a targets role, but I'm going to let a machine sign everything. So I'm going to delegate everything to a machine. And if the machine key ever gets compromised, I can use the targets role key instead of the root role key, which is presumably more difficult to get to. It has more power than the targets key. So you want to use a more friendly key, shall we say, to revoke the machine key. So that's one thing you can do with Docker Content Trust already today. But here we're doing something more complicated. You don't need to know all the details of the whole thing, but what we're basically trying to do is say, there are immutable rules about the software supply chain that should be signed by offline keys. And that's why it's green here, okay? And that guy directly signs some immutable rules of the pipeline, shall we say, and those cannot be changed. And so what the machine can do is sign all the other metadata that is safe for a machine to sign, because basically, if the machine misbehaves, we can check using these immutable rules that we have fixed here. And I don't want to go into the details, but that's the high-level idea. The key takeaway here is that we can use delegations to safely split what a machine can and cannot sign. Does this make sense? So human developers always have the power to specify the most important things in the system. And machines can only sign the least important bits of the system.
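A rough sketch of that "humans pin the rules, machines sign within them" idea: a developer-signed policy fixes which fields a machine may touch, and a verifier can then catch a compromised machine that edits anything else. The structure here is illustrative, not the actual in-toto layout format:

```python
# Developer-signed bundle skeleton and policy (signed with offline keys
# in the real design; signatures are elided in this sketch).
developer_bundle = {"name": "example-app", "images": {}}
machine_policy = {"may_modify": {"images"}}

def machine_step(bundle):
    """The CI machine's only permitted job: add image hashes to the bundle."""
    out = dict(bundle)
    out["images"] = {"db": "sha256:" + "b" * 64}
    return out

def verify(before, after, policy):
    """Flag any field the machine changed outside its allowed namespace."""
    changed = {k for k in set(before) | set(after) if before.get(k) != after.get(k)}
    return changed <= policy["may_modify"]

honest = machine_step(developer_bundle)
assert verify(developer_bundle, honest, machine_policy)        # allowed change

tampered = dict(honest, name="evil-app")                       # machine misbehaves
assert not verify(developer_bundle, tampered, machine_policy)  # caught
```

The real check happens through signed in-toto layouts and link metadata rather than a field diff, but the trust split is the same.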
So even if someone breaks into the machine, you're not going to lose sleep, because you can eventually catch the machine misbehaving. Okay, and so that's the usefulness of delegations here. Great. Ah, and I should talk about an example, a minimal SBOM. Again, I don't want to get into the details too much, but basically you can think of the pipeline as having two steps. We should really draw a nice picture for this. But basically there is a developer step, where the products are basically the input, and developers can sign everything in the bundle.json. It's got all this metadata in it. So developers always have the ultimate power. And what we're saying a machine can do is add hashes about images to the bundle, but that is all it can do. That is the only SBOM metadata it is allowed to produce. And if it does something else funny, if it tries to change what developers produced, you can catch the machine misbehaving, which is a very powerful property of the system, if you think about it. Okay, so great. And Signy is the reference implementation. So, I promised some lessons we learned when developing this reference implementation for CNAB security, which we call Signy. The first lesson is about reusing keys. Docker already by default, Notary, I should say, by default, already pushes the responsibility of the so-called timestamp key to the server, which is great. And we simply added a very simple thing. We said, yep, you know what? The fewer keys we have to manage, the better. So we're also pushing the snapshot key to the server in Signy. That's a very simple extension that we made. In fact, I ran through this problem with Philips today, I should mention. Philips Research is writing their own tool called dct-notary-admin, and they're basically trying to do a lot of the same things, but for Docker Content Trust; they're trying to make it simpler to use for developers. So they were stuck at signing the snapshot.
They were unaware that they had to provide the snapshot key password. And so that's one thing I helped them with. So as of this morning, their tool is also doing the same, not just Signy: we basically push the timestamp and snapshot. And you don't actually lose a lot of security this way. These roles don't have the power to change, by themselves, the hashes of the images or bundles. So it's not unsafe to do. And it increases usability a lot. Another thing that we did for Signy is that, so Docker already reuses the root key, right? Across images, for a repository. So something you may not be aware of is that the model we have today with Notary v1, with Docker Content Trust v1, is that you basically have a separate TUF repository. And we don't need to call it TUF; we can call it a metadata repository. Basically a separate metadata repository per image, or per bundle, or per whatever artifact you care about uploading to the registry. So this means that you potentially have a lot of keys to manage. For example, the timestamp and snapshot. And we removed thinking about this by pushing both of these keys always to the registry. And another thing we did is that we reuse the root and targets keys across repositories. Because if you're the same organization uploading all these different images to a registry, you really don't lose a lot of security by reusing the keys. In fact, you increase usability. You don't have to manage so many keys anymore. And, if you're really concerned about security, you can have a different machine signing key for every image. That's one thing you can do. And we should give you that option; right now we don't have it, but you could potentially do it. Okay, great. And so, yeah, one thing that I think we need to do is revisit whether Notary really needs this. Right now it requires it, right? And I don't think it's necessary.
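To put rough numbers on the key-reuse lesson, here is a back-of-the-envelope sketch. The four-roles-per-repository model is from Notary v1; the specific counts below are illustrative arithmetic, not figures from the spec:

```python
ROLES = ("root", "targets", "snapshot", "timestamp")

def keys_to_manage(n_repos, server_roles=(), shared_roles=()):
    """Count the signing keys a developer must hold for n_repos artifacts.
    Roles in server_roles are held by the Notary server; roles in
    shared_roles use one key reused across all repositories."""
    total = 0
    for role in ROLES:
        if role in server_roles:
            continue  # the server manages this key
        total += 1 if role in shared_roles else n_repos
    return total

# Plain Notary v1 defaults: timestamp on the server, everything else per repo.
print(keys_to_manage(50, server_roles=("timestamp",)))  # 150 keys for 50 repos

# Signy's choices: snapshot also server-held, root and targets reused.
print(keys_to_manage(50, server_roles=("timestamp", "snapshot"),
                     shared_roles=("root", "targets")))  # 2 keys total
```

The tradeoff being argued here is that snapshot and timestamp can't change artifact hashes on their own, so moving them server-side costs little security while collapsing the key count.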
We don't need a separate metadata repository per OCI artifact. We should think about trying to remove the burden of managing keys from end users, basically developers, as much as possible, so they don't have to manage a separate root and timestamp and snapshot per artifact. We really should do that. That's something we should think about. And lesson number two is using the power of delegations. I've discussed briefly what they are. Let me talk about how we plan to use them, and are actually using them, in Signy right now. I should mention that Philips basically also wants to use delegations. Notary v1 actually supports the full power of delegations behind the scenes. It's just that Docker, and Justin Cormack, feel free to correct me here, but as far as I've seen, it doesn't expose the full power of these delegations to the end user. It basically gives you the targets and the releases roles, but that's an either-or model. What people like Philips and Signy want is a way to let developers go in and manage their own delegations from there. So that's another lesson. Let me show you a quick demo of what we've done so far. So I've got, let's hope things work. Things never work in live demos for some reason. It's a cardinal rule. So I'm starting up the Notary server and Signy on my machine. And I'm going to show you what the typical signing and verification workflow looks like. Okay, great. And I know, I've wrapped it in a script. So basically I've got a shortcut command here called signy-sign, I know, a very imaginative name. But basically I'm signing a test bundle and I'm pushing it to my localhost registry here. And note that I'm also adding in-toto metadata at the same time, actually sticking it inside the TUF targets metadata, which is a nice property of TUF, by the way. We're basically using TUF as a trust bootstrapping layer.
You can think of it that way. We're transparently adding, and this is something TUF, in fact Notary v1, already lets you do, additional metadata. And this could be your SBOM metadata. We're saying, here are the rules of the pipeline, and this is the way you catch it if the machine, if your pipeline, misbehaves. And here are all the SBOM materials. And here's a public key you can use to verify the rules of the pipeline in the first place. That's what we're doing here. You can see we don't ask you to worry about keys at all. In fact, we're getting some key passwords from environment variables behind the scenes. But it's basically three keys that you have to worry about: the root, the targets, and the releases roles. And as I mentioned before, we reuse these keys across repositories, across artifacts, for usability. And so verification is basically the same, with much less information, of course. We say what bundle we're interested in. You can see it's tagged just like a Docker image. The tool doesn't know any difference. And we say, also verify the in-toto metadata. Now, it actually bombs out here, which is not great, but this is actually a bug on our part. We're using a bad pipeline that's not actually made for Signy. It's a test pipeline that we're using, and we really shouldn't be doing this. But basically, if things just worked, it would have been successful here. We'd basically be verifying that everything, that the bundle, was produced correctly. We're actually working on a PR to fix this right now. But you can see, we designed the signing and verification processes to be as easy to use and as transparent as possible, and for users not to think about all these low-level decisions, such as, how do I manage my keys? We basically pick a sensible default. We're kind of opinionated, I guess that's the word for it. And if you want to do things differently, you can. But the default is, don't worry about it.
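Going back to the in-toto-inside-TUF idea from the demo, here is a rough sketch of what it means for the signed targets metadata to bootstrap trust in the supply-chain layout. The field names under `custom` are illustrative, not the actual Signy or TUF wire format:

```python
import hashlib

# Stand-ins for the bundle and the in-toto layout (the pipeline rules).
bundle_bytes = b'{"name": "example-app"}'
layout_bytes = b'{"steps": ["build", "package"]}'

# Signed TUF targets metadata pins the bundle hash, and a custom field
# carries the in-toto layout hash plus the public key used to verify
# that layout. One TUF signature chain then anchors both.
targets_metadata = {
    "targets": {
        "example-app": {
            "hashes": {"sha256": hashlib.sha256(bundle_bytes).hexdigest()},
            "custom": {
                "in-toto": {
                    "layout-sha256": hashlib.sha256(layout_bytes).hexdigest(),
                    "layout-pubkey": "<root-of-trust key for the layout>",
                },
            },
        }
    }
}

# Verifier side: once the TUF signatures on this metadata check out,
# both the bundle and the pipeline rules are pinned by it.
entry = targets_metadata["targets"]["example-app"]
assert entry["hashes"]["sha256"] == hashlib.sha256(bundle_bytes).hexdigest()
assert entry["custom"]["in-toto"]["layout-sha256"] == hashlib.sha256(layout_bytes).hexdigest()
print("bundle and in-toto layout both pinned by signed TUF metadata")
```

This is why TUF works as a "trust bootstrapping layer": you only need TUF's key distribution to know which key may verify the pipeline rules.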
We think this will work for 80% of users. Yeah, and that's basically what I have to share in terms of the demo and the key ideas today, the key lessons. What I also want to say is, the things that Philips wanted to talk about, Marco Franssen, actually, I think, from Philips Research, I hope I got his name right. They're very interested in Notary, just like us at Signy with CNAB security. Both of us are working on extending Notary v1 in the meantime. One thing that both of us would like to see, and in fact the Python TUF implementation suffered from the same problem, and this is something we should fix for Notary also, is that there really should be abstract interfaces or APIs for dealing with keys, and for where you read and write metadata from. Like, we cannot assume a file system. We cannot necessarily always assume a key-value store. And keys may be on disk, or keys may be in Vault, or they could be in an HSM. We don't know. And we really should provide abstract interfaces to override all this and choose a particular backend. Another thing that both of us would like to see is a very simple one-click user interface for things like key revocation. Right now, it's not clear how to do this. You might have to muck around with a command line, for example. But what we would like to do is provide a very convenient GUI, for example, to be able to do this. Something I should mention really quickly is that Philips is looking for funding. So they were wondering whether CNCF can help here, but maybe this is not the right forum. I just wanted to throw it out there. I just wanted to say that, basically, there's interest from different communities, such as CNAB security and Philips, to see some of these lessons that we have learned be incorporated into Notary v2. Another thing I wanted to mention before I stop talking is, I think it would be very useful to have an EU- and Asia-friendly meeting time also. I know it's difficult to find such a meeting time for everybody.
But I think we should try, because I know that people like Harbor and Philips and others are interested, but they're unable to join the meetings right now. I think it'd be great if we could include them. That's it. I'm sorry if I took more time than was planned for, but I hope that made some sort of sense. Yeah, let me know if you have questions.

That was great, Trishank. And on the meeting time, just because that's a quick one to knock off: we're experimenting with the OCI call on Wednesdays, this week actually, with a different time, which I'm forgetting when it is. But it was specifically to enable Europe and Asia.

Hey, Trishank, thanks for that. This is Omar from AWS. I've got a couple of questions. The first one is about these call-outs you just made on Notary v2, like, we should ensure, for instance, that there's an abstract API for key management, and we should make it easy from a GUI perspective. Have you had a look at the scenarios doc on GitHub and checked whether that's there or not?

That's a good question. I remember we opened an issue about key management a long time ago. Let me pull it up, but that's a good question, because I don't think I've updated the scenarios document, at least. In the meantime, let me try to pull up the issue that we filed.

The key management one is more of a requirement, and it's an area where, I'm not sure if we've written it down well, but it's certainly a requirement we're tracking. There we go. Here's something that Radu and I had filed a long time ago, and I believe Justin Cormack commented on it. We did start a separate doc for the key management requirements. That one just has a bunch of questions at this point. We're going to set up a key management meeting this week to go over some of the additional use cases. I think these are all good questions for that doc. Cool.
Yeah, the basic call-out is that while we're putting things online, and we have discussions, and there are all these learnings coming from all of us, right, different communities and different things that we've seen, that would be the place to dump our feedback, for want of a better word. Be that a PR, or an amendment, whatever, so that when we go review it, these are not lost. I was just wondering if we haven't done that. We could, or?

Yeah, I mean, I think we've got this one captured here. This one's a little intertwined, because it's key management and TUF. There's a separate one where we just want to make sure that every cloud provider and private instance of these registries in the scenarios can actually use their specific key management solution.

Right, and that necessitates having an API in order to achieve that, from a specification perspective.

Fair enough. And that's where I'm hoping Diaz and others will work out the details. There's another layer on top of that, of how TUF would fit into that. So I think those are layers. That's just a good point. Yeah, sorry. No, no, finish your thought.

So, someone should write this down, I agree. Do you need me to volunteer, and where should we put this?

There's probably more. I'm not so worried about the key management one per se, because Diaz is going to, hopefully, drive that to conclusion. There was just a separation between scenarios and requirements that we were trying to capture. Like, for instance, there was the scenario that talks about not changing the tag and digest. And then, in theory, you'd say that's a requirement; there's a requirement for key management abstraction. So that one I'm not as worried about, because it's hot on the topic. It's mostly just getting, you know, getting the doc in there. And like I said, I think Diaz's doc will really culminate in what we need there.
So if anything, I would say, if you can work with Diaz on what you guys have been doing, it would be good to have your thoughts in that working group.

Sure, would love to.

So I was curious, because, knowing a little bit about what you guys were doing with CNAB and Signy and so forth, I guess I just wanted to put the question out there: were you focused on trying to enable CNAB in the existing infrastructure? Because CNAB is obviously a larger superset of things, as opposed to, or not opposed to, but as a beginning step towards, the balance of public registries and private registries and air-gapped environments. I was trying to get a sense of how you guys were thinking about that.

That's a good question. I mean, yes, for us, it is basically non-negotiable. It is a constraint, a requirement, that it has to work with existing infrastructure, which is why we went ahead and got started. That's why, for example, it's important to push bundles as OCI artifacts. You want the same place to keep bundles and images. You don't want two different places for it. And ideally, it would have been nice to have the metadata in the same place as the registry, which is something I believe is a requirement for Notary v2, but it's not there in Notary v1 yet. So, that's an excellent question about air gap. We thought about this, and we basically prescribed two solutions. When you're pulling a bundle and you're verifying the signature on it, and you push a copy of the bundle to your own private registry, you basically have two options. What are your options at this point? You can't blindly copy the signatures, because you basically don't own some of the keys. So there are at least two options for you. One is, when you pull from a public registry, you verify the signatures, but then you push it without signing it to your own private registry, and you consider it a trusted zone, basically. That's one option.
The second option is, we say, re-sign it using your own keys. It's just slightly more complicated, but it can be done. And the idea is that we don't want to force people into any particular, like, thou shalt do this, or thou shalt be considered insecure. We basically give you options and tell you what the pros and cons are.

So to be fair, and that's why I was asking. It sounds like you were really trying to make sure you can get your new content types, CNABs, into existing infrastructure. And we're trying to get to those areas, but unfortunately, just because of what's existing there, you kind of punted it.

Which issue? The media type issue?

No, no. The media type, it sounds like you've got. And I'm curious where you guys wound up. But the ability to move content between registries is just not something you were able... It's just based on the current infrastructure that exists today. You have to kind of re-sign or trust. There really is no way to... You didn't magically find a solution to move the signatures and everything with it. Okay, and that's totally fair. I just wanted to see what you guys had done, to see what progress you guys had made. And I'm actually curious about the shell scripts that you guys were outlining. But you guys have made great progress on how to do this at a large scale. And there are a bunch of details on the canonical JSON I think you guys had worked through. But it's not like you've come up with a magic bullet and we can just build everything that we... Like, all the Notary v2 stuff that we're targeting is still very valid for what you're trying to get unblocked on.

Yes, yeah, exactly. Especially delegations, which we found in Notary v1 are actually powerful enough to do what we wanted to do, right? But I don't think we can fundamentally solve copying the metadata between registries, I mean, on behalf of Notary v1. Because it basically...
You have to copy metadata between SQL databases or whatnot. And that's just something that we... And it's a private API, basically. So, Radu is here, by the way. He might want to add something to this.

Hey, everyone. I just got off a conflict and couldn't get here sooner. I'm not exactly sure what you've discussed so far, so if there's anything specific, I can jump into it. I'd be happy to talk about it.

Yeah, sorry, I should provide context. Steve was just asking us how we solved the issue of copying signatures between registries. And I said we basically didn't, and couldn't, if you like.

Yeah, and he was right about that. We just didn't. If you recall, we had the conversation, probably last winter, about what we, as the CNAB community, are looking for in Notary v2. And that was probably the biggest item we had in there. We do have some guidance for people who are really, really interested in doing that. And it probably could be achieved with Notary v1. But as a spec and as a reference implementation, we really wanted to stay away from that, because I don't think it's a large enough use case for CNAB security v1 that it was worth us doing it. So the biggest thing that we were interested in, and we are looking for in Notary v2 as well, is signing a representation of the object rather than of a manifest. That is the thing that allows us to move an object from one registry to another. And even to distribute objects without using registries and still be able to verify a signature. Because we are verifying the signature of the object itself, rather than of a manifest generated by a registry that describes an object. So we hope that we'll still be able to do that with Notary v2 as well.

I'm trying to digest exactly what you said. So, was it your concern that you couldn't do this with Notary v2, or do you just want to make sure that we capture it? Specifically offline, or export, or something?
No, specifically what the signature, when you sign an object, what that signature represents. For CNAB, we made the explicit choice that we sign the content digest of the actual object, of the actual artifact, rather than the content digest of the manifest. Right, the manifest is a representation of how an OCI registry stores an object. Whereas for CNAB, we have a requirement that specifies that you can distribute the objects without a registry. So you don't actually have a manifest there. And you still want to validate a signature, or even generate one. So we want to make sure that, hopefully, Notary v2 will make the choice of signing the actual object rather than the manifest, which is the default in Docker Content Trust. The default is, you're signing the manifest. Right.

Yeah, no, it's a good point. So why is it that you don't want to sign the manifest? Why do you want to sign the content?

Because we have instances where you distribute without the registry.

Sorry, you broke up. You have scenarios where you're trying to distribute artifacts without registries, is that what you said?

Yeah, yeah. Or you can have a mixed way of distributing: half of your customers with registries, half of them without. And you want to have a way of making sure that you can check the signature in the same way, in a consistent way. And you wouldn't necessarily have the manifest available offline so that it still represents the collection. Given that the manifest is generated only when you push to a registry, or for simplicity we can say that, right? The manifest is how the registry represents your object. So we thought it was a clear choice to take the content digest of the actual artifact.

Okay, I'm missing a little bit of a gap there, but it's okay, we can follow up later.
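The content-digest versus manifest-digest distinction being debated above can be made concrete with a small sketch. The manifest shape below is illustrative (a loose stand-in for an OCI manifest), but the point holds: manifests are registry-generated wrappers, while the artifact's own digest is computable anywhere, with no registry at all:

```python
import hashlib
import json

# The artifact bytes themselves: the same everywhere, registry or not.
artifact = b"...the actual bundle.json / artifact bytes..."
content_digest = "sha256:" + hashlib.sha256(artifact).hexdigest()

def manifest_for(registry_media_type):
    """Each registry wraps the same blob in its own manifest document
    (shape is illustrative, not the exact OCI schema)."""
    m = {"mediaType": registry_media_type,
         "layers": [{"digest": content_digest, "size": len(artifact)}]}
    return json.dumps(m, sort_keys=True).encode()

digest_a = hashlib.sha256(manifest_for("application/vnd.a+json")).hexdigest()
digest_b = hashlib.sha256(manifest_for("application/vnd.b+json")).hexdigest()

assert digest_a != digest_b  # manifest digests vary with the wrapper...
print(content_digest)        # ...the content digest does not, and needs no registry
```

Signing `content_digest` is what lets a signature survive moving the object between registries, or skipping registries entirely.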
But I think we definitely want to be able to have, if you look at the scenarios we talk about, being able to sign an artifact before it's pushed to a registry, and then you subsequently push it to a registry. So from a scenario perspective, I'm hoping we've captured that for you. To be fair, the actual change required to do this is minor, right? So it's a conceptual change rather than a technical change. And on the public and private registries, have you guys been more focused on, like, how do I make a bundle, not a bundle, a CNAB collection of bundles? Or have you spent much time thinking about the individual customer that's building lots of bundles themselves for their own deployment? And they have automation that's constantly building. It's only in their environment. They're not trying to share outside of their company, necessarily. Have you thought about those complexities? So far, I think the biggest use case has been the individual organization trying to build bundles for their own use case, not necessarily trying to distribute them publicly. But that being said, there isn't necessarily anything missing that would prevent organizations from publicly distributing bundles. Because we're using distribution and Notary, they can use it exactly the same way you would use Docker content trust right now, with the changes that Trishank mentioned about delegations. No, that's good. I was actually afraid you were focused more on the public side and weren't dealing with the intricacies of private registries, and the way companies work in private registries differently than in public registries. And when I say public registries, there's a different surface area and challenge when I'm trying to publish things for the world to consume versus publishing things within my company, where everything does need auth to connect. There is nothing that's anonymous.
So I don't want to say it's the more complex area, but it's definitely closer to some of the things we've been more tightly focused on in Notary V2. Most of the requirements have been from organizations trying to build in private registries. And to be fair, there are still a few things to be added to the actual implementation of Signy. So we keep adding to that. Whereas for the spec, I think Trishank mentioned that we finally managed to have a vote on the actual spec text last week. So I had one more question that I want to make sure we have time for. One of the things that we've been discussing, and we need to put more focused time on it, is the UX, the experience of what the client should look like. And when you guys had Signy, I was really excited to see what that CLI was. But I noticed, Trishank, you were actually showing Signy being called by a shell script. So I'm curious why, and you still had to pass a bunch of parameters to it. I'm curious what challenges you found, and why you couldn't call Signy directly. What would you do? What would you like to see? What were the good and bad of how you wound up in that spot? Oh, we could. We really should call it directly. That is my bad. I know Radu's not too pleased about it. But that's really for local development purposes. It's really just a convenience wrapper for passing in a lot of default boring parameters, like the root certificate for the Notary server, or port numbers, and things like that. These are things that you can easily write your own wrappers for. Basically, what the shell script does is just fold a lot of boring parameters into Signy. That's it. And is that because of the limitations of the Notary V1 stuff that we've got in registries?
Or is it, do you think, that those are parameters every user would need, and we need to spend a lot more time thinking? I'm hopeful that there's a Signy or something, docker sign, container sign, whatever the clients are, because we want to be able to enable multiple clients. That it's literally like something-space-sign, and you pass it some kind of reference to a key. And I want to be careful to say the key isn't necessarily local, because it's part of our key management stuff. But it's very minimalistic on what the user has to provide in the parameters. So is part of that because you had a local instance of distribution running, or... Yeah, sorry, go ahead, Radu. Yes, because we wanted to make sure that you can configure pretty much everything you could configure when pushing to a Docker distribution as well as to a Notary instance. And if you add all those parameters, there are quite a few. So first of all, we wanted to make sure we can satisfy every combination of parameters, and the shell script is just our way of not dealing with UX currently, and just making sure everything works from the UX onward. But I totally agree that for the user experience, we definitely are interested in that discussion. I'm unclear as to how much a user should want or need to configure before just signing something. I totally agree, it should be as... So for example, here's a simple thing that we ran into, right? Radu is testing that our tool actually works across many, many registries with Docker content trust support now, such as Harbor. But there are things we ran into; for example, we should improve default out-of-the-box discovery. Simple things, like it's not clear what the port number of the signing server is, or the root certificate for the... These things should be automatically discovered, right? Things like this. Those are good things that...
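The wrapper pattern being discussed could be sketched like this (the flag names, defaults, and paths here are hypothetical stand-ins, not Signy's actual CLI):

```python
# Hypothetical defaults that a convenience wrapper might fold in; a real
# setup would use whatever flags the actual signing CLI defines.
DEFAULTS = {
    "--registry": "localhost:5000",
    "--trust-server": "https://localhost:4443",
    "--tlscacert": "/etc/signy/root-ca.crt",
}

def build_sign_command(bundle_ref, overrides=None):
    """Assemble a signing invocation with the boring defaults filled in,
    so the user only supplies what actually varies."""
    args = dict(DEFAULTS, **(overrides or {}))
    cmd = ["signy"]
    for flag, value in sorted(args.items()):
        cmd += [flag, value]
    cmd += ["sign", bundle_ref]
    return cmd

cmd = build_sign_command("example.com/my-bundle:0.1.0")
# subprocess.run(cmd, check=True)  # would invoke the real binary
```

The UX question raised in the discussion is essentially whether those defaults should live in the tool itself (discovered automatically) instead of in every user's wrapper.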
I don't know if you want to write a couple of those notes down, maybe an issue or something, like "learnings from CNAB" or Signy or some cute name, just because it's fun. Some kind of reminder in the issues for the Notary Project, so that we capture those as we start doing the UX for Notary V2. Most of them should serve as a reminder for consistency in choosing defaults. I'll make a note to myself that we'll document this somewhere. I just want to leverage your pain. Anybody else? Nope, not for me. I think we covered most of it, right? Making sure that whatever the pain was is documented or at least tracked. And then if there are good things in Notary, we make sure we don't mess them up, and if things are wrong, we fix them. Cool, yeah. That's it from us, really. Anyone have anything else to discuss? Yeah, please jump in. I'm trying to leave white space for people to jump in with whatever thoughts they have. So, for example, I will say that privately someone DM'd me just now and asked this question, which I wish they had asked publicly, because it's a great question. It's totally not obvious at all. They were like, why couldn't you just copy signatures from one registry to another? Great question. The thing is that those signatures... Yes, there is a public API to copy the signatures, right? And you can get it over HTTP. That's not a problem, even though internally, behind the scenes, it could be in RAM, it could be in MariaDB, it could be in Redis, whatever. You don't care about that. The point is this is the public endpoint to consume the metadata. Great. And let's say you blindly copy the signatures into your private air-gapped registry, for example. Why is this not a great idea? Well, it will work temporarily. The problem is that you don't own all of the signing keys needed to refresh signatures. The thing is, this metadata expires for good reasons. You don't want replay or freeze attacks, for example.
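The staleness problem just described can be illustrated with a minimal, TUF-style expiry check (a simplified sketch, not Notary's actual metadata format):

```python
from datetime import datetime, timezone

def is_expired(metadata, now=None):
    """A copied signature set goes stale once its 'expires' field passes;
    without the original timestamp/snapshot keys you cannot re-sign it."""
    now = now or datetime.now(timezone.utc)
    expires = datetime.fromisoformat(metadata["expires"])
    return now >= expires

# Timestamp metadata is kept short-lived precisely to block replay and
# freeze attacks, so a one-time copy into an air-gapped registry stops
# verifying soon after the copy.
copied = {"expires": "2020-06-01T00:00:00+00:00"}
```

This is why a blind copy "works temporarily": verification succeeds until the first embedded expiry passes, and only the original key holders can push that date forward.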
So that's really the biggest blocker right now. So basically it's the timestamp, or the metadata info, that you don't necessarily own. And you can't replicate that even after it's moved, if it hasn't changed. Oh, you can. You can copy it. But you don't have the keys to refresh it, exactly. And so you need to periodically pull it from the public side. And if you forget to do this, there goes your private deployment, right? Which is not great. No, that's fair. We've talked about this, and we've written a bit of this up as the kind of movement scenario. An air-gapped environment is an isolated environment that runs on its own. Either it's offline, or it's got a controlled network egress into it. The assumption is that, whether there are remote keys or not, there is an update interval on which things can go into that environment. It's just that you've got to make sure that everything in that environment can operate without external access. So the idea that you can move key revocations into it, that you can move new versions of things and new versions of the metadata, those are all valid things. The interesting one that I can think of is the person that produces the original content: how often do they refresh? Does that meet the requirements of the entity that's consuming that content? Like if I know I'm going offline for three months, or let's just say a month, as I'm going across the ocean with the notion that I'm not going to spend expensive connectivity pulling from a satellite, then I need to make sure that the content I'm consuming wasn't refreshing once a day. And that's the assumption. Exactly. Ideally, there shouldn't be this implicit requirement that you depend on refreshing at all. I agree. You could if you wanted to, but it shouldn't be a requirement. It's the question of how you provide some disconnect between the producer of content and the consumer of content.
Because the consumer of content, we assume many of these companies, especially ones that are really concerned, will have an additional signature they'll put on it, an additional verification that says, hey, I've tested this for my environment. So not only does it have Trishank's key for what he produced that I consumed, but it's also got mine, whatever it was, the rocket network or ACME or whatever it is we were using there. But the thing is, we want to know that it's still valid in my environment, in our company, and we also still need to know that the information that you've produced is still valid. Because we kind of take the security model that we assume, at the time we've gotten the content, that it's in a known state. A week later, the same content can turn out to be vulnerable, because the vulnerability wasn't discovered until after the thing was released. So in that case, I still want to be able to go back and get something from Trishank, who produced the original content, to know that, hey, it was good last Monday, but now it turns out we've discovered it's not good. I don't think you want a key revocation scenario in that one, because it's really, anyway, we have to think about that. Yeah, exactly. And this can be done, right? But really it's a question of how often can, say, a nuclear submarine surface and refresh its metadata, right? So there will be some fundamental trade-offs, I think, but nothing impossible. And that's the thing I want to remind everyone of, again, I wrote this somewhere: we have been talking about the air-gap scenarios like these really esoteric submarines and oil platforms, things that are really important but to a very small group of people, such that we could all punt on the scenario. And at least I can speak from an Azure perspective, and I know the AWS folks are doing the same: we see our customers wanting air-gapped environments for their standard workloads.
Like they're comfortable on-prem, and one of the ways they're comfortable moving to the cloud is if they can move into the cloud with an equivalent air-gapped environment. So it's really the average customer that wants this. That's why we're all creating these private VNet capabilities, so customers can really lock down all external access, even in our public clouds. So we should not treat it as an esoteric environment. I think it's a really good point. And I know that TUF makes it easy to sort of replicate data into disconnected environments, but I want to say that it's not too crazy in Notary V1 either. Not that I've personally done this, but I think the only information you need in an air-gapped environment is the targets metadata and below, plus the delegations. And then you can either choose to sign a really long-lived root, snapshot, and timestamp, or spin up your own timestamp and snapshot server inside your air gap with your own root, snapshot, and timestamp keys. And that gives you a pretty wide array of choices for how you want to keep that up to date. All right. Well, thanks. Anybody else have any questions? We have a couple of minutes left, and people are already dropping, but all right. For those that didn't make the whole call, we'll obviously have the recording. And this is great, to see what you guys were able to get with what exists. To some extent, you kind of validated some of the things that we're still working on; we still need to keep working on them because you still need them. So that's really good feedback. So, Radu, Trishank, thank you, to you and your whole team. Thank you. Thank you. Thank you for letting us present. I think you've already put them in the notes, but make sure there are links so people can peruse what you've got there. Oh yeah, there are more gory details in there, if you guys are interested. And remember, you had quite the, well, the agenda was big.
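The trade-off in those two options can be made concrete with a rough sketch (the per-role lifetimes below are purely illustrative; a real TUF or Notary deployment chooses its own): given a planned offline window, check which roles' metadata would stay valid without local re-signing.

```python
from datetime import timedelta

# Illustrative per-role expiry windows; real deployments set their own.
ROLE_LIFETIMES = {
    "root": timedelta(days=365),
    "targets": timedelta(days=90),
    "snapshot": timedelta(days=7),
    "timestamp": timedelta(days=1),
}

def roles_surviving_offline(offline):
    """Roles whose metadata stays valid for the whole offline window.
    The short-lived roles are the ones an air-gapped site must either
    get long-lived versions of, or re-sign locally with its own keys."""
    return [role for role, life in ROLE_LIFETIMES.items() if life >= offline]

# A month at sea: only long-lived roles survive without local re-signing.
ok = roles_surviving_offline(timedelta(days=30))
```

With these example lifetimes, a 30-day gap leaves snapshot and timestamp stale, which is exactly why the options named above are "sign them long-lived" or "run your own snapshot/timestamp service inside the gap."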
I'm assuming there are some links in there as well. So thank you for the detailed info. And with that, I don't want to ignore the request for a different time slot. I want to see how it goes for the OCI one we're doing this week. We debated doing it in the evening or in the morning. So this one is more of an evening call, for the OCI Wednesday meeting. And I want to see how that goes. And then based on that, we can talk about what we want to do here. Because with Europe, China, India, and the Americas, it's really hard to get three, four, I've already lost count, time zones. So let's figure out what works, and maybe we just need to do some kind of inventory, like, all right, what continents do we really need to support for this call? So with that, we'll wrap up for next week. Thank you, everybody. Sounds good. Thank you. Thank you, everyone.