Good morning, folks. Good afternoon. Good evening. And the HackMD link is in the chat if folks want to scribble your name. And Niaz, you have the floor today. So I'll probably just give a couple of minutes for folks to join. Sounds good. Folks may want to take a look at the PR that Niaz has got up; that's the preparatory reading for the call. Marina, do you know if Trishank is joining today? I think he had a conflicting meeting today. Yeah, and I didn't get the link up to the videos, but I saw somebody else posted them, so who was that, so I can thank them specifically. I'm going to go ahead and get started now. Okay. So we're continuing looking at the requirements from last meeting. We were close to signing off on this fourth requirement, which was that rotation of the root key should not require the use of the existing root key. I didn't see the notes from last meeting, so were there any other concerns, Marina, that we needed to address? I thought we'd kind of closed on this one. I think there's still the concern about how to actually communicate the new root key in a secure manner, but I think that using an out-of-band process should be all right for this. Okay. What I'm going to add is a section to this doc that calls out certain assumptions that we're making in terms of what is happening out of band. So, for example, the sharing of the root key: we assume here there is a secure mechanism that happens out of band to make that work for the initial root sharing. So I'll add in a section for assumptions that we're making about things happening outside of the scope of this document.
Yeah, I think that's a good idea, because it's important people understand what happens out of band; I think sometimes it's not explicit enough. Yeah, I'll add in a section for assumptions and also guidance. There are certain things that we've called out as best practices for setting up things like roots and signing keys. I think we can start tracking those, so we have a list of practices available as well once we're done with the implementation. Okay. Moving on to the next requirement: publishers should be able to sign with keys stored on their local machines, including hardware tokens, HSMs, or cloud-based key management services. Coming out of Notary v1, we realized that we would need to support developers having keys in a variety of different places. Is there any objection to this requirement? Okay. Next one. It does have some implications about the kinds of keys we support, just because those mechanisms often have restricted sets of key types they support. It does. In terms of best practice or usage, it means we have to be fairly lenient in the types of keys we allow. Are you basically saying that a registry would require a specific type of key for doing offline validation? Well, I mean, we can't just say you have to use, like, Ed25519 keys, because to be honest, no cloud provider has a service that can sign with Ed25519 keys. So in general, most clients are going to have to be able to understand a variety of key types, which is fine, I think, generally, but it's kind of frustrating how unstandardized and different the cloud provider APIs, for example, happen to be for signing. Is the focus of this meeting that we should come up with a standard? Well, it's not so much that we should come up with one as that providers should use what's already out there. Now, I think that's part of one of the things that we want to hash out as we get into implementation, right?
Like, what are the crypto API standards that we want to support? That should come out, and then we'd have requirements for each one of the cloud providers to have an interface that works with the v2 client. The interface, I think, needs a little bit more research, but once we agree on the approach, we have a common set of approaches we can look at and have a more detailed discussion around key types and API support as we get into more of the detail. The next one that we had — go ahead. No, I think it should be fine, but at one point there was one cloud provider who didn't support asymmetric encryption, but I think that may have been fixed. The next requirement we had was: publishers should be able to generate multiple signatures for a single artifact. This one has come up several times, where we need prior signatures to be preserved when an artifact is being re-signed. So I think this brings up a requirement for having multiple signatures: the ability to sign off on a single artifact, or essentially append a signature of their own. And I think that's mostly a technicality. I think that pairs with clients being able to specify who should be signing different artifacts. Right. This does introduce a trade-off in that with multiple signatures the total signature size can essentially be unbounded and keeps growing. Are there any concerns around that? So the way it's worded, I would do it a little differently, because it sounds like a publisher can provide multiple keys, which they could, but it almost makes it too narrow. What we're really trying to say is that an artifact can have multiple signatures. So the publisher might provide one or more; I'm not sure why they would provide more than one as a publisher. But the idea is that a publisher can provide one, Docker Hub as, you know, a curator can add one to it without changing anything.
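To make the "keys in a variety of places" point concrete, here is a minimal sketch of the kind of pluggable signer interface a client might expose. This is not Notary's actual API; the class and method names are hypothetical, and an HMAC stand-in is used in place of a real asymmetric key, hardware token, or cloud KMS call so the example stays self-contained.

```python
import hashlib
import hmac
from abc import ABC, abstractmethod

class Signer(ABC):
    """Abstract signing interface; concrete signers could wrap a local
    key file, a hardware token (e.g. via PKCS#11), or a cloud KMS."""
    key_type: str

    @abstractmethod
    def sign(self, payload: bytes) -> bytes: ...

class LocalHmacSigner(Signer):
    """Stand-in 'local key' signer. A real client would use an
    asymmetric key (ECDSA P-256, RSA, or Ed25519 where supported)."""
    key_type = "hmac-sha256"

    def __init__(self, secret: bytes):
        self._secret = secret

    def sign(self, payload: bytes) -> bytes:
        return hmac.new(self._secret, payload, hashlib.sha256).digest()

def sign_artifact(signer: Signer, manifest_digest: str) -> dict:
    # The envelope records the key type so verifiers can dispatch
    # to the right verification algorithm.
    sig = signer.sign(manifest_digest.encode())
    return {"keyType": signer.key_type, "signature": sig.hex()}

envelope = sign_artifact(LocalHmacSigner(b"demo-secret"), "sha256:abc123")
```

The point of the abstraction is the one made in the discussion: because cloud providers support different key types and APIs, the client has to be lenient about key types and dispatch on what the envelope declares.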
And then in our example, Acme Rockets can add one for their private environment, so that only things signed by their key can be deployed. So I would tweak the wording a little, so the end result is broader; as written, it's a little narrower. And the second point, the size of the keys, is an interesting one, because it's not that the key itself gets bigger because there are multiple signatures. What we're saying is you can have multiple signatures attached, but each one is independent. So you can get multiples; there's nothing that says I couldn't get a hundred signatures for a particular artifact, but it's not one document. So that kind of raises the question: great, if there's a bunch of signatures, how do we get the one that I care about? So one of the things we're saying is we'd like to have a filter mechanism: give me back the signatures that I care about. Was it signed by Docker? I don't care if it was signed by Wabbit Networks; was it signed by Docker, and was it signed by Acme Rockets? So my point is signature size doesn't matter, because it's a collection. And tweak the publisher wording. I mean, generally, I'm not bothered about signature size, because it's very small compared to the object. So — Justin, it's hard to hear you. Sorry. Signature size is very small compared to anything else in a registry. Yeah. The manifest is bigger than the signature, I think. Yeah. So I'm really not worried about that. But I think the point being made is that if you support multiples, it could get bigger depending on the implementation, even if each signature is unique. Yeah, but that's still not really that big. Oh, even if you've got the entire collection, you're saying it wouldn't be a big deal. That's fair. You do have to go get them, but size isn't the issue here. Yeah. I agree on the wording change.
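The "unbounded collection plus a filter" idea above can be sketched in a few lines. This is an illustration only; the record shape and signer names (Wabbit Networks, Docker Hub, Acme Rockets are the example parties from the discussion) are hypothetical, and the actual signature payloads are elided.

```python
# Each artifact digest maps to an unbounded, independent list of
# signature records; verification filters for the signers you require.
signatures = {
    "sha256:abc123": [
        {"signer": "wabbit-networks.io", "sig": "..."},
        {"signer": "docker.io", "sig": "..."},
        {"signer": "acme-rockets.io", "sig": "..."},
    ]
}

def filter_signatures(digest, trusted_signers):
    """Return only the signatures from signers this deployer cares about."""
    return [s for s in signatures.get(digest, [])
            if s["signer"] in trusted_signers]

def is_deployable(digest, required_signers):
    """Policy: every required signer must have signed the artifact."""
    present = {s["signer"] for s in signatures.get(digest, [])}
    return set(required_signers) <= present

mine = filter_signatures("sha256:abc123", {"acme-rockets.io"})
```

Because each signature is an independent record rather than one growing document, a hundred signatures on an artifact cost nothing to a client that only fetches the one or two it filters for.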
The size one is just something — yeah, I agree on that. And just to call out, make sure that we don't have any bounds on it. If it's a collection, I think that's easily avoidable. Okay. And I'll reword it. Collections will have paging APIs too, so I don't know if we want to put an arbitrary limit on the collection as long as we provide filtering. And from a spec perspective, which is what you're driving at, I don't know if we need to clarify it any more. Okay. The next requirement was: there should be a mechanism to revoke signatures to indicate they're no longer trusted. This is one we've heard about from a lot of different customers, and I think it's almost essential for a signing feature. 100% agree. Is your screen frozen? I don't know which requirement you're on; the top of your screen isn't visible to us. Oh, sorry. I can highlight it. Got you. I mean, this could be implemented in different ways, but — so, do you want to revoke signatures separately from the keys that signed them? Is that the implication of this? I think the implication here is that the implementation could be that you're revoking keys to say: here's a set of signatures that I no longer trust. Being able to revoke individual signatures does get pretty challenging unless you have pretty robust key management in place. So it's more about the ability to revoke a set of signatures, I think, is what we're trying to get at here. I think you doubled back: it's revoke keys, which by implication revokes signatures. Right. Okay. And I would also word it as not just the publisher; basically, any key that was used to sign can be revoked. Yeah. So I think the key owner, essentially, should have a mechanism to revoke their keys, and I can change the wording on this to clarify that a little bit. Well, we want all keys to be rotatable.
Should we write this as rotatable and revocable? Or is it just assumed in the industry that when a key is rotated, the old one is revoked? Is that how it works? Yeah. But I mean, the user should be able to rotate their key — the user of the key. If I think my key has been compromised, I want to rotate it. If we have a key hierarchy, the person who is above me in the hierarchy should be able to remove my key, like if I'm no longer an employee. So I think there's a difference between key rotation and revocation, right? Key rotation is something we expect to happen as a best practice, to make sure that you have a limited blast radius for things you're signing with your key. Revocation is saying that I no longer trust this key. So from our perspective, just because a key has been rotated doesn't necessarily make the old key untrusted. You wouldn't start untrusting the old key unless there's a very specific compromise that you're calling out, or the key, for example, has expired. Can we maybe break this down a bit more? Because it's just — Ian had a note because his mic isn't working, so I'll voice his comment. Yeah, I think we're missing a bunch of requirements here, because we haven't got requirements about rotation as well — that's what I was trying to say. I think that's a good call-out. I can add in a mechanism for rotating keys; I think that's a requirement that should come in as well. And also I would say that it's probably the keys that are revoked, not the signatures — that's what I was saying originally. Yeah. So I think it's just a bit unclear. I think that's fair. So I'm going to change this wording to revoke keys and then add in another requirement for rotating keys. The next requirement that we had was: the trust store should be configurable by the deployer.
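The rotation-versus-revocation distinction drawn above can be modeled as two independent key states. This is a sketch of the semantics as described in the discussion, not any real implementation; the class and method names are made up for illustration.

```python
from datetime import datetime, timezone

class KeyRecord:
    """Tracks one signing key. Rotation retires a key from future
    signing but leaves existing signatures trusted; revocation marks
    the key, and everything it signed, as untrusted."""
    def __init__(self, key_id):
        self.key_id = key_id
        self.rotated_at = None
        self.revoked_at = None

    def rotate(self):
        # Best-practice rotation: limits blast radius, does not
        # imply the old key was compromised.
        self.rotated_at = datetime.now(timezone.utc)

    def revoke(self):
        # Revocation: a specific statement of lost trust.
        self.revoked_at = datetime.now(timezone.utc)

    def trusted_for_existing_signatures(self):
        # A rotated-but-not-revoked key still validates old artifacts.
        return self.revoked_at is None

    def usable_for_new_signatures(self):
        return self.rotated_at is None and self.revoked_at is None

k = KeyRecord("key-2021")
k.rotate()
```

Keeping the two states separate is the whole point: rotating a key must not silently invalidate every artifact it ever signed.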
This is essentially calling out that you can configure the roots you want to trust in your deployment environment. I would say it's definitely true that you want it to be configurable by the deployer, but I feel like the trust store itself is a bit of an implementation detail in the requirements — exactly how that works, if that makes sense. I don't know what you mean. The registry, the client? Oh, that section right there, that you just scrolled past. Okay. Yeah. So a trust store, essentially, we're calling out as a relationship between the signing keys and the validation being done. So the trust store essentially relates a source — whether it's a specific registry or repository or specific target, or any source — with a certificate that has, like, a root key, either an intermediate or a signing key. And so that's really the relationship we're establishing there. What we're calling out here is that there should be some control for deployers to essentially go in and say: here are the roots that I want to trust. That's a control that should be exposed. Okay, I think I understand now. So I'm a little confused as to whether this is in the registry, on the client, or both — maybe an example call-out? It should be in the client. I can add some more context and narrow this requirement down. So essentially, this requirement can be reworded as: deployers should have the ability to configure the roots that they trust in their deployment environments. That makes it super clear. I mean, that's enough for me; that's why I'm asking others. I'm wondering if the definition needs to be tweaked a little bit, because what you're teasing out is an interesting one: certainly on a client,
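The source-to-roots relationship described above might look something like the following. The layout, scope strings, and root names are all hypothetical, a sketch of the concept rather than any specified trust-store format.

```python
# Hypothetical trust-store layout: each entry scopes one or more
# trusted roots to a source (a registry, a repository, or any source).
trust_store = [
    {"scope": "registry.wabbit-networks.io", "roots": ["wabbit-root-ca"]},
    {"scope": "docker.io/library",           "roots": ["docker-official-root"]},
    {"scope": "*",                           "roots": []},  # default: trust nothing
]

def trusted_roots_for(source):
    """Return the roots the deployer configured for this source,
    falling back to the wildcard entry."""
    for entry in trust_store:
        if entry["scope"] != "*" and source.startswith(entry["scope"]):
            return entry["roots"]
    return next(e["roots"] for e in trust_store if e["scope"] == "*")
```

Note the deliberate default-deny wildcard: a deployer has to opt in to a root before anything from that source validates, which is the "configurable by the deployer" requirement in miniature.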
I want to make a determination that, regardless of what keys may or may not be on an artifact, it must have this one, two, or three. And if it doesn't have one, two, or all three, I won't deploy it. Another question maybe I'm asking is: in a registry, should there be some enforcement of what keys and roots are allowed to be pushed to the registry? Do I get to push the bad-guy key to a registry, or, if the registry knows it's the bad-guy key, does it refuse to accept it? Is that a real scenario? I think that should be an optional requirement, because I want to leave that choice up to the registry operator to say: here's how we're going to curate signatures, and here's what validation we're going to do. But there's nothing that prevents them from using whatever we're using in the deployment environments to do a similar validation themselves. The next one that I had was: deployers must be able to configure trusted entities for individual repositories and targets. I think this one gets reworded to say for individual registries, repositories, and targets, just to make sure we cover all three granularities. But this is essentially saying that rather than saying here's the key that I trust for everything, we're allowing the capability of saying which specific registry, repository, or target I trust this key for. What's the use case for configuration per target? That seems a really weird thing to do. I think it kind of boils down to how artifacts get managed. Do we always expect uniform management, where, let's say, large enterprises are going to have individual repositories per application that they publish, or do they end up with different targets in the same single repository? And then you're saying I trust this key for this team versus this entire organization.
Normally the permission boundary in a registry is the repository. So normally the same people have write access for everything in the repository, and we don't have separate permissions per target. So, effectively, if you have set it for one target, then it's simply one item, and you might as well just use the hash of it and not check the signatures. It just seems very strange to me. The hash would essentially tell you that whatever you're getting from the registry has not been tampered with, but doesn't necessarily tell you whether what's coming from the publisher has been tampered with or not, right? And I think this went more into: do we expect a scenario where multiple teams would end up with different targets in the same repository, and would you want to scope that trust down to a single team or a single group within an organization, right? And so for this one, if we don't have a use case for that, it's pretty simple to just say registries or repositories, but I don't think it changes — go ahead. But if it were per target, you'd have to know which items were going to be pushed up front to know which team they belong to, which — I just can't see any circumstances in which that would really work. You'd have to have a list of items and know in advance what all the items were going to be. I think you're touching a little bit on the permission boundaries. Individual registries — you know, Coke and Pepsi in the same cloud are going to have different registries, or different storage buckets, whatever we want to call them — so they have complete autonomy amongst themselves. Within a single registry, a number of us support multiple teams in the same registry with some permission boundaries: team A should not have any access to team B's content, even in the same registry. But in the same repository — like, in the same namespace? In the same namespace.
I don't think many of us are doing unique permissions there. That's why the target thing doesn't make sense to me. Yeah, and that's why I'm curious if we're just getting tied up in some of The Update Framework's terminology of targets. In that case, what do you mean by targets? Yeah, that's what I'm asking. I kind of went down to: what is the most granular level that you can specify trust for? And that would be down at the individual artifact level. From a signature validation perspective, it doesn't really matter much, because you're essentially just looking at different chunks of, you know, where this artifact came from, right? So I don't think this changes the trust store, except that if you have people scoping down individual targets, that's a lot of work they're putting in, and the trust store can essentially just grow in size. But it doesn't, I think, break down the implementation that we would have for how we go validate these signatures. So if this is a control that we feel is going too far, I think it's fairly easy to just say we're going to allow you to scope it down to repositories. You put MUST, and I wouldn't implement it, so I would not be in spec — I guess that's what I'm saying. I think taking out the MUST for targets would be fine. So when you're saying targets, Justin, just so I understand the term: a target is a specific artifact in the same repo, so artifact one and artifact two are separate targets? So it's Ubuntu 16.10.1, and you have a special rule for Ubuntu 16.10.1 that is different from your rule for Ubuntu 16.10.2. Okay. So we think different tags in the same repo should not — we don't believe in that today. So maybe it's at repository instead of targets. And this might be just terminology around registries compared to other package managers.
As of today, I don't know of any registry that does unique permissions within a repository. So, to Justin's point, whether it be a version tag of the same Ubuntu image or even an architecture tag — basically, because the tag is just a string — we're not saying we want to support permissions at that granular level, because I don't know any of us that are doing it, and I don't know any of us that would want to take on that extra burden. But at a repo level it makes total sense. And this would give me some amount of rollback protection, I assume, because of the 16.04 versus the 16.01. Having some kind of rollback protection around that makes sense within the permission boundaries that we support today: Coke and Pepsi can't play in the same repo, and neither can team A and team B. So I think part of this is also making sure that you have roll-forward protections, right? So let's say you have Ubuntu 16.1 and 16.2 comes out. Your administrator may not want you to go to 16.2 until they have been able to validate it, right? And they may still want you to verify with the original Ubuntu signature. So in that scenario, there may be a case where they actually want to go in and update the target to say: okay, now we trust 16.2 with this signature, and have that level of control, so you're not able to pull in just any version of Ubuntu that you want. So I think there are different mechanisms to do that. The reason I have it here right now is that whether you're validating a registry, repository, or target, in terms of the implementation, it doesn't really change much, because you're really looking at: what is the artifact I'm getting, and what is specified in my trust store for this artifact? And you're matching to see which part — the registry, repository, or target — you're matching from the entire string, right?
So to me, putting in this functionality doesn't really change the implementation. But I think it is a good call-out to say that most of it should be registry and repository, because we don't have a strong case for target, but we can get target out of the same implementation. So targets are optional, but you'll get them as a result of the implementation. I mean, I like your scenario that I have to validate an old version to go to the new one. I have to think about it more, but conceptually it makes perfect sense. The beauty is, because we allow signatures to move within a registry, across repos, and across registries, there's a promotion process that can be done out of band. So I would have my staging registry, or even my staging repo, that could have multiples, but my prod — or whatever I consider the critical one where I don't want to do this roll-forward thing until I'm comfortable — is the way that I would probably suggest people do it, because now I don't have to worry about permission boundaries or even configuration boundaries inside of a repo. So I think that threads the needle on what Justin and I are covering about the boundaries of what we want to do in a repo versus a registry. So again, I think this requirement gets reworded to say for individual registries and repositories, because that's where the need is. If we get targets out of that same requirement, then that's a bonus. I think that's optional, right? I agree that there's probably not a strong requirement to say, hey, if we can't support targets, this is a no-go, but if we get it, then that's great to have. I would, unless somebody else says differently — I'm looking at who else is here to represent the registries — I would actually not even include targets in that, and just say registries and repos. Justin, what do you think?
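The claim that target-level scoping "falls out" of the same implementation can be illustrated with a most-specific-prefix match over scope strings. This is a toy sketch under assumed naming conventions, not a specified matching algorithm; the policy entries are invented examples.

```python
def match_scope(artifact_ref, policies):
    """Pick the most specific policy whose scope is a prefix of the
    artifact reference. Registry- and repository-level scopes cover
    the MUST; a target-level scope falls out of the same prefix
    match for free."""
    best = None
    for scope, roots in policies.items():
        if artifact_ref.startswith(scope):
            if best is None or len(scope) > len(best[0]):
                best = (scope, roots)
    return best

policies = {
    "registry.example.com":                     ["org-root"],
    "registry.example.com/team-a":              ["team-a-root"],
    "registry.example.com/team-a/app@sha256:1": ["release-root"],  # optional target scope
}
```

A repo-scoped deployment simply never writes the third kind of entry; nothing in the matcher has to change either way, which is why making targets optional costs the implementation nothing.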
If we omit it from the MUST, then I'm happy with that. Because if someone makes an implementation that supports it, I guess I don't care; I'm just not going to make one. Just because it's not in the spec doesn't mean somebody couldn't do something. I think Codefresh does something at the tag level, but I wouldn't want to put it in the spec as even a should or could or would. There's nothing stopping people from going further, but I really want to be careful about putting in any suggestion that you can do permissions or anything unique inside of a repo. I could have Ubuntu one signed with one key and Ubuntu two signed with a different key, because there's a really critical difference, and then I could have a configuration that says I don't trust key one, I only trust the one that goes to key two. That's one way, if you really, really needed to do something within a repo, you could. I think we've beaten this one to death. The point is we're trying not to suggest any permissions inside of a repo. Yeah, let's move on. I'm okay switching the language there and calling that out. The next one was: developers must be able to validate signatures on any version of an artifact, including whether they've been revoked by the publisher. This one, I think, goes into being able to validate older targets and check whether they've been revoked or not. I'm a little confused on the signatures for any version — the revocation is at the key level, so what is the version aspect? Basically, you have to be able to go back in time and see: would you have validated this last Thursday, for audit purposes? Or: I just want to validate what's on my machine. That kind of question that you have to be able to answer — that's my understanding currently.
Yeah, and part of it also is that, going back to that earlier example of Ubuntu 10.1.2 versus 10.1.3: even when 10.1.3 is the latest, if my security team hasn't validated it, I should still be able to use 10.1.2, or even 10.1.1 if it's two versions behind. And I should be able to validate whether those signatures have been revoked or not, independent of the fact that 10.1.3 is the latest version out there. But you're using the term "the signature got revoked" as opposed to "the key got revoked", and I don't know how you revoke a signature. Yeah. I think I need to reword this to clarify that it's whether the keys that were used have been revoked or not. And then this goes back to the best practice where we would say that for each new major version, you're rotating your keys before you sign it, so you have a smaller blast radius in the scenario of a key compromise. Is that a best practice? Sorry — for each major version, are you rotating, or rotating and revoking? You are just rotating; you're not rotating and revoking. Yeah. In that case, the wording is very confusing. I thought you actually wanted to test things that had been revoked; that was why I came up with the example of testing what you would have seen last Thursday, which is a different use case. So I'll clarify this one to say that you should be able to validate signatures on any version, including whether the keys have been revoked by the publisher. Okay, let's take another iteration on that one, and bring clarity when you give it another scrub. Sure. And the last one I had was: signature validation must be enforceable in an air-gapped environment with minimal updates. This goes to making sure that we don't need air-gapped operators to make constant changes in their environment; they're able to pull in things like CRLs, update their trust stores, and manage them within the environment.
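The "validate any version, checking key revocation" requirement plus the per-major-version rotation practice can be sketched together. The version strings, key IDs, and record shapes here are invented for illustration; this is the semantics being discussed, not any real validation code.

```python
# Hypothetical records: each version of an artifact carries the key ID
# that signed it. Rotating keys per major version, as suggested above,
# limits the blast radius of any single key compromise.
signed_versions = {
    "ubuntu:10.1.1": "key-v10",
    "ubuntu:10.1.2": "key-v10",
    "ubuntu:10.1.3": "key-v10",
    "ubuntu:11.0.0": "key-v11",   # key rotated for the new major version
}
revoked_keys = {"key-v9"}  # publisher-revoked keys

def validate_version(ref):
    """An older version stays valid as long as its signing key has
    not been revoked, regardless of what the latest version is."""
    key_id = signed_versions.get(ref)
    if key_id is None:
        return False, "unsigned"
    if key_id in revoked_keys:
        return False, f"{key_id} revoked"
    return True, "ok"
```

The security team's pin on 10.1.2 keeps working while 10.1.3 exists, and if key-v10 were ever revoked, every 10.x version would fail together while 11.0.0, signed under the rotated key, would be unaffected.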
We can come back and address what "minimal" means in the more detailed implementation. A lot of it really depends on how quickly we expect trust configurations to change, which we want to minimize to security events, essentially: whenever something bad happens is when you'd want to go in and update that trust store. I think this is one we're going to have to elaborate a bit more, because — just to repeat the same thing — air gap is not just nuclear submarines and oil platforms. It is also in clouds: customers want all public access disabled, so it's literally a private network, and there are millions of them inside each cloud. And if there are millions of them, we can't have millions of additional, complex configurations to set up. It should be super simple to set this up, I think, is your main point here. In those cases, you would presumably say that they're not totally offline — they're not in a submarine — so they can actually update when there's new key configuration, when there are new keys and things like that. Well, the biggest secret is that air gaps aren't completely offline. There's just a gap, whether it be a diode or something else; there's always some way to get content in, it's just extremely controlled. I think what this is trying to call out is that if you have a controlled update where, let's say, I know that there has been a root revocation, or I know that one of the keys has been revoked, that's really the only time I need to go in and take an action; besides that, I'm not necessarily having to go in and touch my trust stores, right? And so that's really what it boils down to: yes, there will be some work for air-gapped environments to update their configurations, but that work should be minimized to when they need to do it. And I think this is where "minimal" is a goal here rather than a requirement.
Once we get into the implementation itself, if we have trade-offs where we need to discuss what this would mean for an air-gapped environment, I think this would help clarify some of the decision-making there. I think maybe, if it's not quite 100% clear, rather than making it a requirement, some kind of suggestion or something, just to make sure that's clear, would be helpful. Yes — everything else is really good as requirements; this one's a little nebulous on the how, so it's almost like a goal. Because one more specific implementation could be: I don't have to go back to the root authority for the key to know it was revoked. There's a delegation model that says, in my air-gapped environment, this URL is a proxy for everything else, and I trust that proxy, so that's okay. So I get what you're trying to say: air gap is a requirement, that is a requirement for our success; this is the one that has a little more nebulousness to it, and I don't know if we can get more clear. And again, I turn to the key experts: what is the standard practice for how to put that in a requirement? I can change this to an optional one, just to make sure that we track it as a goal. I don't want to lose sight of this one as we go through some of the implementation designs, right? I think minimizing the work for air-gapped environments is something that we should strive to do. So I can change this to make it a little bit more of an optional goal, but — it's not an optional goal; I think it is a must goal. The problem is the "must... minimal updates": I think people are struggling with the words "minimal updates" as opposed to "we must support air-gapped environments". Maybe you can just say we must support air-gapped environments, drop the "minimal updates" part, and then we get more clear later on. I think the "minimal updates" is the kicker.
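The "trusted proxy for revocation data" idea floated above might look like this in practice: the air-gapped network periodically imports one snapshot through its controlled channel and checks against it locally, rather than calling out to each root authority. The snapshot format, field names, and freshness window are all assumptions for the sake of the sketch.

```python
import json
import time

class RevocationMirror:
    """Sketch of an in-network proxy for revocation data: the
    air-gapped environment trusts one internal snapshot, imported
    through its controlled channel, instead of reaching out to
    each root authority directly."""
    def __init__(self, snapshot_json, max_age_seconds):
        snap = json.loads(snapshot_json)
        self.revoked = set(snap["revoked_keys"])
        self.fetched_at = snap["fetched_at"]
        self.max_age = max_age_seconds

    def is_fresh(self, now=None):
        # A stale snapshot is the operator's cue that an update is
        # actually needed; otherwise no action is required.
        now = time.time() if now is None else now
        return now - self.fetched_at <= self.max_age

    def is_revoked(self, key_id):
        return key_id in self.revoked

snapshot = json.dumps({"revoked_keys": ["key-old"], "fetched_at": 1_700_000_000})
mirror = RevocationMirror(snapshot, max_age_seconds=7 * 24 * 3600)
```

This captures the "minimal updates" goal: the only recurring work is refreshing one snapshot on a schedule, plus an out-of-cycle import when a real revocation event happens.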
I'll look into this one a little bit more and see if there's clearer language. I think there is a different requirement, though, that we want to talk about. I just saw Trishank's note come in, which was about the fourth requirement: rotation of the root key should not require the use of the existing root key. Yeah, hi. So, as we discussed last week, my objection here is that — well, maybe it's meant to be optional, but it seems to strongly discourage users who want to do this anyway from being able to support this use case. That's my concern. So I think the reason we take a strong stance on that is that, from the perspective of the deployer, there's no way to know whether a rotation is a transparent rotation or a malicious rotation, right? And this is where we want to push people back towards the out-of-band mechanism that you have to use to establish the root. Those roots should be long-lived, should be used infrequently, and we should really push more towards using intermediates that you rotate; that, I think, gives us the best security posture. But if you're able to rotate your roots transparently, it means a malicious attacker could also potentially rotate your roots transparently. I think this is where the difference between rotation and revocation comes in. Because if you're revoking the root, you definitely need to do that not using the existing key that is compromised, or at least have a mechanism to overwrite an attacker-rotated key. But in the usual case, when there's not an attack, I think this is actually more secure, because it uses something that's already trusted instead of an out-of-band mechanism, which could also be compromised, adding to the attack surface. But the deployer can't differentiate between the two is what I'm getting at, right?
Like, as a deployer, I don't know if you have updated your root because it was a normal operation or because a malicious attacker has updated it, right? I can't distinguish that. Yeah, you also can't distinguish if an attacker is just uploading metadata. Yeah, I mean, if a malicious attacker owns the root key, you can't trust anything at that point. So doing an out-of-band replacement is the only thing you can do at that point. Well, not for uncompromised users, right? So I think it's a double-edged sword. I think I understand where Niaz is coming from here, and I think it's a valid concern. I think you end up with damned if you do, damned if you don't. And here's what I mean. Let's say the root key is compromised, and Justin is absolutely right here: you could do a lot more damage than just updating it to malicious root keys you control next, right? You could do a lot of damage. You could install malware on users' machines. So being able to prevent that and say, hey, look, you're not going to be able to use the old root keys to sign for new ones: great. Now here's the problem. Let's say that scenario happens, and now you have all these other users who have not fallen for this attack. How do they update? So the root keys are broken until everyone updates, which is also potentially destructive. Does that make sense? I'm not quite sure I followed, in the sense that if the root is compromised, you essentially need to configure a new root, right? And that's a painful process; I totally agree with that. But how do you know which one is the new root to trust, right? You would need some kind of mechanism to say, here's what we do trust, right? And I think this boils down to: when we think about configuring trust, I'm not necessarily saying I trust this key because I got it from a hub.
I'm saying I trust this key because I can verify the publisher, and I know that this is the key that they're providing because of X, Y, and Z reasons, right? Either that's because, for large enterprises, you may be able to go to their website, say Microsoft.com or Amazon.com or Docker.io; I know that this website is validated, so I trust the keys that are hosted there. It could also be something along the lines of: I trust CAs because they're publishing audit reports, and through their audits I know these are the roots that I can trust. But you have some mechanism, not necessarily automated, that tells you this is the source of this key and this is why I trust it, right? And chaining up to a root, yes, it does allow you to expedite getting a key in place. But you're only really trusting the previous key to say that, and that key cannot necessarily always be trusted to the same degree that you would trust an out-of-band mechanism, is what I'm getting at. Well, I mean, I think that you're putting a lot of faith in this out-of-band mechanism in this case. Because there's nothing to say that the out-of-band mechanism can't also be compromised. I think that's part of the challenge here: how much you rely on the third-party, not-really-specified mechanism, and how much you try to rely on the internal mechanisms, which we know how much or how little we can trust. Well, this is kind of, I think, pushing towards two-factor authentication, right? In the sense that if a root were compromised, you're expecting that their communication method hasn't been compromised. And if their communication method has been compromised but their root isn't compromised, there's really not an issue there. You need both to be compromised.
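The out-of-band trust check described here, accepting a key only after verifying it against a fingerprint obtained through a validated channel such as a publisher's website or an audit report, can be sketched roughly as follows. This is an illustrative assumption, not Notary's actual mechanism; the `fingerprint` and `accept_new_root` names are hypothetical.

```python
import hashlib

def fingerprint(pub_key_bytes: bytes) -> str:
    """SHA-256 fingerprint of an encoded public key."""
    return hashlib.sha256(pub_key_bytes).hexdigest()

# Fingerprint published out of band (e.g., on the publisher's website).
published = fingerprint(b"-----BEGIN PUBLIC KEY-----...example...")

def accept_new_root(candidate_key: bytes, published_fingerprint: str) -> bool:
    # A rotated root is accepted only if its fingerprint matches the
    # value obtained through the out-of-band channel -- never merely
    # because the previous root key signed it.
    return fingerprint(candidate_key) == published_fingerprint
```

The point of the sketch is that the decision rests on the out-of-band value, so an attacker who controls only the old root key cannot get a new root accepted.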
Sorry, but if you've managed to compromise the out-of-band mechanism and you replace the existing root key claiming that it's been compromised, you now have control of the registry. You don't, because it's not an automated mechanism, is what I was getting at, right? Like, if I see that someone has rotated the key, I still have a way of validating it; it's not an automated process pushing a new key into my deployment environment. There is still a manual action that I need to take, and when I see that key changing, I have an opportunity to go ask, hey, why was this key changed, right? Yeah, what you're saying is a valid concern. So rather than trying to prevent this outright, what I would rather see, because I still think there's a valid use case where you do want to use the previous root key to rotate transparently, in case things haven't gone wrong, for example, like Marina pointed out, there's no reason why you shouldn't be able to do this. So I think what might be a better solution is to say something like: by default, your tooling shall not trust the previous root key to hand you a new root key. That's a good safe default. That's fine. You should be able to override this default if you wanted to. Sorry, go ahead. It's also actually confusing what the wording is trying to rule out. So, are you trying to say it should be possible for there to be a rotation without the use of the existing root key, or are you trying to say you must not trust a rotation that does use the existing root key? Because those are kind of different things. I think it's the first: the rotation should not require the current root key. Okay, so you're not ruling out in-band key rotation. I actually think I should. That's not clear from, yeah. Say I want to rotate my key from RSA 2048 bits to RSA 4096 bits because it's more secure now, kind of thing, or something like that.
Or some other reason, you know, I want to rotate from one cipher to another because it's got better support in my HSM or something like that. Why should I not be able to do that in-band? That seems a perfectly reasonable thing to want to do. The in-band part comes in because you can't differentiate whether the in-band rotation is trusted or not. We have to think of the deployers and the signers as different entities. If you don't have communication or something else that says, hey, go trust this, then you don't really know; that's what I'm getting at. Any mechanism where you have in-band rotation is vulnerable to an attacker compromising the key and then rotating it for you. Sure, a compromise can have other impacts too, but in-band rotation, I think, adds to the pain of getting back to where you need to be, because now you have to go back and manually redo your cluster anyway. So I'm not sure what the in-band rotation offers. The two things I would throw out there are that we also don't want the root keys to be rotated just because security requirements or something else come up, which there are, but you shouldn't be having to do this on a yearly basis. Yeah, that's fine. But you shouldn't prevent people who want to do this. So that's why I recommend it be off by default. You can say, you know, we strongly recommend you don't do this, you don't trust in-band root rotations, it is turned off by default, but you can just flip this Boolean, for example. I think that's much more reasonable. But what about the security scenario there, right? I think that is introducing a security flaw, in our minds. It doesn't protect you from an attacker rotating your key. But this is true for, I mean, presumably you would need a way to rotate other keys, not just the root key. Then you have the same flaw. Well, so you have your chain of trust coming in, right?
Like, if an intermediate is compromised, or if a signing key is compromised because, say, a developer had access to it, then because you have your root protected and your intermediate protected, you can actually go out and say, I'm going to revoke this key, and I'm going to rotate and issue a new signing key and sign with those, right? So from that perspective, the keys have different exposure. The keys that have the most exposure have rotation built in. It's really just the root that we're saying should not have automatic rotation built in; everything else, go ahead. But if the attacker has compromised the root key, they can rotate all the other intermediates to their keys, which has basically the same effect as rotating the root. That's true. Yeah. Can the key management experts weigh in on what's being suggested or added here? I think that's a really good idea, because it does mitigate the impact of any of these compromises we discussed if you have two different roots, as long as they're stored in very, very different locations, with different methods or whatever, so that they're not both compromised at the same time. I think that's part of the idea of having multiple signatures on root, in fact: you can have different keys stored in different ways that theoretically couldn't be compromised at the same time. In practice, though, I think we're going to see very different things. That's what concerns me. Yes, we can expect publishers or developers that have a lot of resources to potentially set something up, but are we really going to see multiple roots across multiple regions or distinct HSMs for individual developers? I think that's where my concern is.
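The multiple-signatures-on-root idea discussed here resembles TUF-style threshold signing, where new root metadata is accepted only if enough of the currently trusted root keys have signed it. A minimal sketch of that acceptance rule, with hypothetical names and keys represented as plain identifiers:

```python
def root_accepted(valid_signatures: set, trusted_root_keys: set,
                  threshold: int) -> bool:
    """Accept new root metadata only if at least `threshold` of the
    currently trusted root keys produced a valid signature over it."""
    return len(valid_signatures & trusted_root_keys) >= threshold
```

With a threshold of 2 out of 3 root keys held in different locations, compromising a single key is not enough to push a malicious root, which is the mitigation described above.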
The way I look at it: let's say, for example, Docker had a root, and you have intermediates going out to the different developers that are using it. Then potentially those intermediates can be rotated, but Docker saying, hey, I need to rotate my root, is a very different thing from a publisher having to rotate their root. It really boils down to how much trust we're placing in this root, and that's where I'm very uncomfortable saying that an automated mechanism to rotate it would be something we'd be okay with. I think this goes back to how much is covered by the root, because if every developer has their own root, then yes, I agree that maybe it would be hard to store, but if the root is higher up in the chain, at the registry level, I think we would expect a little bit better key management from that group. And I think that's what we were going towards, right? For individual developers, we don't necessarily expect them to set up the PKI so much as they need a mechanism for key distribution, which I think could have a curated root, where you're signing up for some sort of an identity, you're getting an intermediate, but the root is not something that you would be touching or have exposure to. Yeah, exactly. I think we all have the same ideas in mind. I'm just concerned about ruling out use cases like Justin mentioned, for example, wanting to update from smaller keys to bigger keys because the security requirements have suddenly changed. And I'm not sure that's a great idea. I'm also kind of looking at how TLS certs have been managed. And like I said, a root update is a painful process regardless of how you do it. And we've seen that work. That model is something that we know: root rotations will take years to plan for, and you need to make sure that the update gets out.
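The curated-root model described here, where developers hold intermediates and signing keys but never touch the root, implies that verification walks the chain of issuers from a leaf key up to a trusted root, checking revocation along the way. A simplified sketch (hypothetical names; real systems verify cryptographic signatures, not just issuer links):

```python
def chain_to_root(key: str, issued_by: dict,
                  trusted_roots: set, revoked: set) -> bool:
    """Walk issuer links from a leaf signing key up to a trusted root.
    Any revoked or unknown key along the way breaks the chain."""
    seen = set()
    while key not in trusted_roots:
        if key in revoked or key in seen or key not in issued_by:
            return False
        seen.add(key)            # guard against issuer cycles
        key = issued_by[key]     # step up to the issuing key
    return key not in revoked    # the root itself must not be revoked
```

This illustrates the rotation story from earlier in the discussion: revoking a compromised signing key and issuing a replacement under the same intermediate breaks only that one leaf's chain, with no change to the root.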
It's not a streamlined process where you can just flip a switch and the root gets updated. I totally agree with that. But I think the trade-off there, from what we've seen, has worked, right? And this automated root rotation is something that we have gotten strong pushback on, in terms of it going counter to the security models that we have in place today. Well, in practice, Karan, if you look at TLS, the way root updates are done is through your operating system or sometimes the application itself. So there is work done somewhere; you don't get it for free. You still have to update the application or your operating system. I think this is a larger issue, and I'm afraid we're running out of time and I don't... Yeah. But we should discuss this offline. So I'm not sure that we should make this a requirement. What do people want to do? Shall we revisit this? Do people want to spend more time thinking about it and come back with some further discussion, or what do we want to do here? I mean, we did want to get something committed. So maybe, Niaz, if you want to tweak the ones where I think it's just a tweak, if we agree in concept, and propose that, and then do a separate issue or separate PR on this one to iterate? Does that sound like a way to divide and conquer on this one? Yeah, I can mark this one with an asterisk and say, you know, we're still working on this one, and have the rest revised. I think that works. I'll set some time up with you to chat more in depth and see if we can find the middle ground here. Sounds good. Thanks. Anyone else interested, please join us too. Just a point of order for scheduling: we have next week, and then the following week is Thanksgiving week, and a lot of people will be tucking into their turkeys, so that week will be kind of shot. So my proposal: we won't have any meeting the week of the 23rd. So anything we want to cover, cover next week, and then we'll resume on the 30th.
So just a point there. So what date, when's KubeCon? Is that next week? I don't know. Let me check. Yeah, I think it is. It's the whole week. Okay. So you're saying basically let's take a quick look and then, I think, yeah. I'm at Security Day on Monday, I think. November 17th to 20th. So technically, oh, because you're saying the pre-con is the 16th. Yeah. Okay. So you're proposing we cancel next week as well. Yeah. But if we can, try and follow up on Slack. It's a bit of a gap. Yeah. Obviously everybody keep up on the Slack conversations. So, I've got a couple of things that I've been focusing on this week in planning, so I probably won't have many updates from the distribution side and the Notary client in the next two weeks. So why don't we just do that? Let's put a focus on KubeCon for next week. Unless somebody has something specific to talk about, which we could talk about in Slack. So let's cancel next Monday's meeting and the 23rd, and we'll resume on the 30th. So with that, we'll let people get on with their day or evening. Thanks, folks. And I'll get the HackMD doc updated with the recordings. See you later. Thanks.