Hey, folks. Hello, Steve. I guess it's a new Friday for us — this is a new meeting time. All right, let's see. Two minutes. Let's get the ritual sign-in for everybody, and off we go. Let's see — Marina, the floor is yours for our first one. Yeah, so I guess what I wanted to talk about here was just finalizing some of these requirements and scenarios before we go too much further with any of the key management discussions, to make sure we're all on the same page about what the goals are. What I linked here is two PRs from a while ago looking at these scenarios. We can look at the threat model to start — just what the goal of the attacker is, what we're trying to protect against — and then I guess we can talk about it, see if we can come to some kind of consensus, and maybe get these merged. So, this first one — let me share the screen. Can everyone see that? Yes. Okay, cool. So this just adds to the threat model some goals of the attackers. The first one — and you can interrupt me if people have questions, concerns, whatever; it's pretty short — is trying to have a party install a malicious image under the attacker's control. That's pretty straightforward: install malware or whatever. The second is having a party install an outdated image, for example one with known security vulnerabilities. Sorry — for this one I had some questions around how we define an outdated image, because just because something is old doesn't necessarily mean it has security vulnerabilities. If it's an attacker goal, I think we should be crisper about what threat we're trying to protect against here. That's a really good point. Yes, I guess what we're really trying to protect against is an image that the signing party no longer wants to have signed, right — not just any old image.
Yeah, I would definitely defer to the party doing the signing to make that decision; it's not for us to come up with our own definition for this. Yeah, so that definitely needs to be rephrased — I'll make a note of that. Yeah, I mean, you could just phrase it as: you don't want to let them install a revoked image, or an image with a revoked signature. Yeah, that might be a good way to say it — revoked, or rescinded: it was once signed but now it's not. Yeah. And we haven't actually said that a registry could be set to not accept unsigned images, or for that matter revoked images. It's not to say a product couldn't do it. This is the line we've been struggling with a little bit: what is the spec and standard for Notary v2 experiences versus what is the product line. We struggled with this a bit even on saying tags can be locked, because that's not something that's conceptually in the spec. So what does it mean to stop somebody from pushing an image that's not signed? Yeah, it's more like pulling an image, and I think the client can be configured to ignore Notary v2 and just ignore the warning that says, oh, you're pulling something that's not signed. We shouldn't encourage that, but it's definitely something that may happen in certain cases — testing environments, or whatever the case may be. But I think that within the bounds of Notary v2 we're saying we're trying to verify the ones that are signed. That's how I differentiate signed versus unsigned. That makes sense. And once you cross the line of signed versus unsigned — there's also signatures being revoked — I think the question is whether this happens on the push or on the pull. Yeah.
And maybe we should go a little bit into defining that, but basically, maybe Notary v2 is for the images that are signed, and that's the scope of work. There's not much we can do about developers who don't want to sign their images, right. Well, it's just up to the registry until there's something in the distribution spec that says a registry may have the ability to block images from being pushed if they're not signed — which I think is a great idea that I imagine many people will implement, but is that part of the spec, or is that a product feature? That feels more like a registry feature, and I'm wondering what workflow risk it would encounter, because you're always going to push an unsigned image to start with and then push the signature after that's out there. So I'm just working through mentally what the registry would have to do with that, but I feel like that's getting out of the scope of what we're trying to do here. Yeah — so when I was playing with the references API a couple weeks ago, I was trying to simulate the scenario you were asking about with the whole tag update thing, and there were some interesting pieces in there. Okay, let's move on for the sake of time. I think we got it — yeah, I think we know what needs to be done there; we're just going to rewrite that line. Before you go forward: I think when you said you're going to rewrite that number two scenario, that line, you were talking about rewriting it and adding information about signing to it? Oh no, it's kind of two different things. I think we'd rewrite this one to say: trying to have a party install an image without a currently valid signature, or something to that effect.
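The signed/unsigned/revoked distinction the group keeps circling — verification applies to signed artifacts, and a client may be configured to be permissive about unsigned ones — could be modeled as a small client-side policy. This is purely illustrative: the enum names and the `enforce` flag are hypothetical, not actual Notary v2 behavior.

```python
from enum import Enum

class SignatureStatus(Enum):
    VALID = "valid"
    UNSIGNED = "unsigned"
    REVOKED = "revoked"

class Decision(Enum):
    ALLOW = "allow"
    WARN = "warn"
    DENY = "deny"

def pull_decision(status: SignatureStatus, enforce: bool) -> Decision:
    """Client-side policy sketch: a revoked (rescinded) signature is always
    denied; an unsigned artifact is denied in enforcing mode but only warned
    about in permissive mode (e.g. test environments); a valid signature is
    allowed."""
    if status is SignatureStatus.VALID:
        return Decision.ALLOW
    if status is SignatureStatus.REVOKED:
        return Decision.DENY
    return Decision.DENY if enforce else Decision.WARN
```

The point of separating DENY from WARN is the transcript's observation that ignoring the unsigned warning may be legitimate in testing environments, while a once-signed-then-rescinded image should never be treated like a merely unsigned one.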
There's also what came up through discussion about what's in and out of scope of Notary v2, which I think is maybe a separate sentence that needs to be written somewhere here — like, this is what specifically is in scope. Okay, so I think there are two things, right: there's the vulnerability part of it, the security vulnerability — I believe that's one of the objectives of the attacker — and a different one is the signatures, whether it is signed or not. But does it make sense to talk about the signatures in the attacker goals? Maybe that's a means to the goal, to a certain extent. But it definitely needs to be written down somewhere, so we can figure that out. Okay, cool. Just one clarification, because I'm reading it and it says install — like I'm installing an image in a cluster? Did I just hear you wrong? From what we were just discussing, I had the impression this was putting it into the registry, as opposed to trying to actually pull it from the registry and do something. These make sense in the sense of: I'm trying to run it on a node. Yeah, and maybe a better word than install is verify — like, do the Notary v2 verification, and that's the point where we ensure these things don't happen. Verification may then lead to an installation, but verification is the step they're really attacking. And saying it happens upon verification addresses Brandon's point: if this is a registry feature, a registry might verify things on push. By saying it that way, you're not making a statement about whether it's going to a node or going into a registry — you're saying the Notary verify operation should enforce these capabilities.
Yeah, because if the registry can prevent malicious images — for example, this first case — from being on the registry in the first place, and they choose to do so, I think that would actually be a win, right? Then there's less chance of people pulling those. Yeah, but at the same time, a lot of what we're doing here is assuming the registry has been compromised, so I don't want to depend on that. It's a nice added feature, but I don't want to depend on it. Yes — so maybe: when the downloader verifies the image, these things happen — is that the moment we want this to be? Well, I think we're calling this out as mandatory whenever the deployer is pulling the artifact, and optional when the registry does it. It would be good practice for a registry to do it, but that's a separate conversation, right. Yeah, I think that's what I was referring to with verification — not that the registry does it in lieu of the user, but they could maybe also add some extra checks. All right, so number three: making images unavailable for installation — kind of a denial-of-service-type attack, or hiding images, something like that. I'll give it a minute. Then number four is preventing a party from learning about updates to currently installed images. This is mostly for that tag update scenario specifically — a pinned digest obviously wouldn't learn about an update. Then number five: convincing a party to download large amounts of data that interfere with the party's system. This would basically be another kind of malicious upload that would cause this kind of thing. And number six: enabling future attacks of the above types to be carried out more easily, for example by causing a party to trust an attacker's key. I had a quick question about point four, preventing the party from learning about updates.
Why should new updates just be about tags? If I push something with a new digest and don't update the tag, shouldn't clients also be able to see that as an update? Is it required to always update tags? I think, from what we've talked about in the past — and people can go ahead and correct me — this mostly applies if you update the tag to point to a new image. I guess we also wouldn't want them to be unable to learn about a new digest that's pushed to the registry, but I think specifically the concern is that if a tag was once signed in association with a digest, an attacker could just continue replaying that association forever and never show new ones, even as the tag gets updated. Really, hiding anything on the registry would probably be a good thing to avoid, so we could even expand that. It's a good goal; I just wonder if it makes sense for us to put full denial-of-service prevention in here, because I don't think we're going to be able to do that with Notary — that's getting a little bit bigger of a scope. Yeah, I guess if the attacker can control the network, that'd probably be very difficult, and maybe this third one should be a little more fine-grained for that — as I was reading it just now I realized that if they control the whole network, that might be a little bit tricky. Were you touching on five, or was that jumping ahead? I was going into a bit of three and four, mostly — not so much five. I might specifically call that out as out of scope. That might be a good thing to add — a little out-of-scope section that includes that and the unsigned images. It's definitely something somebody's going to want to handle on their side, but denial-of-service attacks are a little bit beyond, I think, some of the stuff we're getting into. And I know it's jumping ahead to five, but registries by definition store large amounts of data.
And we've all seen these images with multiple gigabytes of data, so I don't know how Notary would get involved with that on number five. Yes, we should figure out how to solve that — that's why we have throttling and other things we do — but I'm not sure how five is a Notary goal. It might not be. There are slow-retrieval attacks, which have been used as attacks on update systems, but they're kind of less common, and they're really on the border of denial of service, which again is probably out of scope — so I think that's a good argument for leaving it to the registry. Yeah, if we're going to have five, it might make sense to scope it down to checking the manifest and the individual signature, and not all the other blobs that might come along. Yeah, maybe it can just be scoped to making sure you're downloading the correct amount of data, even if it is large — like, this is the size of the thing you should be downloading, that kind of thing. I'm interested in that, because we're talking specifically about the Notary client, and the Notary client will know the approximate size range of a signature — I don't know whether it's 1K or 2K or whatever, but it's not going to be multi-megabyte. So that is an interesting check you can do. And also, if you're signing a hash, you can sign the size of the unexpanded content alongside it, and then you'll know when you're pulling too much. All right. It's still not clear to me why that is a Notary requirement, though — that should just be your standard getting-an-artifact requirement, right? Why is this something we would need to address from a signing perspective?
This is just — I mean, I could go either way on this one, but my thought here was that this is kind of an opportunity to include those size requirements, because we have this signed and verifiable information. If we assume an untrusted registry, and the registry is the one giving us those sizes, then maybe it can't be trusted as much. I'm just thinking of the OCI discussions that have been happening lately on adding the data field and figuring out how big a manifest can get — it's been a very contentious discussion over there. So it does make sense to a certain level; we just need to be careful to scope this so it doesn't include all the blobs and other things, and make sure we're not stepping on the OCI questions going on on the other side. I'm thinking about this one, though, and I'm not so sure we need to put any denial-of-service stuff in here. If you get past step one, do we even want to cover any of the size-DoS kind of challenges? All we're going to cover is that in a Notary scenario, the content of the blob should never be beyond a certain size. The problem we're having on the OCI call is that those manifests are used for everything, so there's no way to make a determination specifically. So your point is right that that size constraint is a problem, but that's because they're generic. I'm trying to make a statement that's still not about the manifest, right — we're still talking about the blob itself. Well, I actually was thinking about both, because once you get the manifest itself downloaded, if that includes a size, you can use that to make sure that if you're downloading a two-megabyte image versus a two-gigabyte image, you know the difference and you know how much you should be downloading.
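The check being described — use the size and digest from a signed manifest descriptor to bound the blob download, rather than trusting the registry — can be sketched roughly like this. This is a minimal illustration, not Notary's actual implementation; the chunked `stream` and the helper name are hypothetical.

```python
import hashlib

def verify_blob(stream, expected_digest: str, expected_size: int) -> bytes:
    """Pull a blob in chunks, enforcing the size and digest recorded in a
    (signed) OCI-style descriptor. Aborts as soon as more bytes arrive than
    the descriptor declares, so a malicious registry cannot force an
    oversized download before the digest check ever runs."""
    hasher = hashlib.sha256()
    received = bytearray()
    for chunk in stream:
        received.extend(chunk)
        if len(received) > expected_size:
            raise ValueError("blob exceeds size declared in signed descriptor")
        hasher.update(chunk)
    if len(received) != expected_size:
        raise ValueError("blob shorter than declared size")
    digest = "sha256:" + hasher.hexdigest()
    if digest != expected_digest:
        raise ValueError("content digest does not match signed descriptor")
    return bytes(received)
```

Note that a plain digest check alone would only fail *after* downloading everything; the early size cut-off is what addresses the two-megabyte-versus-two-gigabyte point above.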
So I'm thinking that's going to be outside of our scope, just because that should be part of the image pull process, which knows: okay, I've got a manifest that tells me I'm pulling these things, and it should know at that point. We're just saying this manifest is good or bad, and once the pulling client knows that, it's up to it to verify the layers and everything it's pulling down. Yeah, I guess if the — what's it called — the digest is signed, then the size is already included in that. But the question is, if you're getting the size information from an untrusted source, does that actually provide you this protection? So the size is in the manifest as part of the descriptor. Okay — we are signing it, so therefore we are providing this protection. Okay, so that would be my argument. Yeah, the point I want to be careful of is that we're not saying Notary is going to protect the client from a malicious registry that starts shoving down two gigs of data in a blob that's only supposed to be three megs. It should stop downloading after the size it expects — but it's not Notary doing the download, and that's where I'm trying to draw the line. Notary will tell you the manifest is good or bad, but it's up to containerd, or whatever else is doing the actual pull of the blob, to know that it should stop pulling at that point. So maybe Notary gives you the information you would need to not download excessive data, in addition to telling you the manifest is good. Yeah — think of Notary as a gatekeeper. And there already are validation checks on the digest. I was actually with you for a second there, because basically we're signing the digest, which has the size in it, so yes, you could actually look at the signature and know that about the content. Actually, no, I take that back.
We sign the manifest, and the manifest has the size of the manifest in it — you have to look into the layers, or blobs, to decide each size, so yeah. I think that's a design question: we could choose to include the size in our Notary signature requirements. The size of all layers combined? Maybe — because of the layer situation it's trickier, but yeah, I guess I would say all layers combined. We can. And again, this isn't my number one here; I just think we should at least look at it and decide. It's an area to think about, is what we're getting at. And to be fair, the general descriptor already has this in it, so in theory these checks are already being done. And because it happens after Notary has handed off to whoever — whatever is running the thing, containerd or whatever — then you're kind of out of the Notary scope, and the containerd client by definition should be validating descriptors before pulling. I'm still struggling to see why that should be a code-signing requirement, though. This sounds like a manifest and container requirement, where we can say: whatever is in the manifest — whenever you're pulling, you validate size, and if there are other fields that come up, those also get validated. And I think it's a question for the manifest in terms of what is required versus what is optional, right? Signing the manifest is just saying that the manifest hasn't been altered, and I think that's the only part that should be in scope for Notary. Okay, so maybe we should push this to an OCI discussion. Okay, well, I'll create an out-of-scope section, and everyone here can feel free to comment on it, and we can make sure we're on the same page there. Okay, so a question on number six, specifically the "for example, causing a party to trust the attacker's key" part.
Just clarifying what is in scope of the system versus not: would the whole key management piece, and any key update or trust store update, be part of the solution — in scope of the solution? I think that, however this is implemented, the set of keys or roots I trust is something that's set up by a policy external to the system. That might be a better question for when we jump to part two of the meeting, when we have Nia talking about key management. Yeah — my assumption here is that key management is part of the threat model, so it's part of these attacker goals. Yeah. All right, everyone happy? I'll make some updates to this, and maybe folks can comment, and we can try to get this put together in the next week or so. I'll add the same comment I had before: say artifact instead of image, because we still want to secure Helm charts. Yeah, I think I first wrote this before some of the other documentation, so yeah. That's it, then. Did we also want to talk about the attack scenarios? Yes — sorry, Marina, to jump in on your item. Yeah, that's right, let me get that one shared. So that PR adds two quick scenarios to the list — depending on which PR merges first, I think we're going to have to update the numbers. The first one is: a mirror is compromised. Just the idea that it's not only the initial registry itself that might be an issue — if there's any mirror that's copying all the data directly, that can be the source of an attack. And the next one is — I guess we'll call it a machine-in-the-middle attack. This is actually not related to the denial of service; it's just that someone is watching the network traffic, and maybe saving stuff from the network traffic, and we want to make sure they can't do a replay attack or any of those other ones we've talked about.
Mirrors should not have permission to edit packages and signatures, it seems — I mean, they should not be able to do it undetectably. Maybe. Oh yeah, that's the point — they can, but we should be able to detect it. What I'd want to look at is that we might need to define the difference between a mirror and a copy of an image. People make copies of images — say, copying from ACME to Wabbit or vice versa — and they add their own local information on top, saying okay, we've ingested this, now it's trusted by us; they add their own signatures or details onto what came from upstream. So that's a copy, not really a mirror, in the definition I think we're looking at here. I think the idea here is more like mirroring the whole registry, and that maybe should be defined somewhere — agreed. There are also a couple of references to keys being stored in the mirror or the registry, and I thought we were saying a registry would not store keys. So maybe having pointers or something that says where you get the keys, but we specifically wanted to say a registry was not going to get into ownership of the key itself — even the public key. Yeah, that might just be a bit of inexact writing; it's more like: access any of those pointers to keys, or alter those pointers to keys, to then be able to perform an attack. So it depends on the definitions in the key management scenarios too. Yeah, I think maybe as we clarify that, we can clarify this and figure out exactly what the scope is. All right, no other comments? If you could also comment in the PR, I'll get those things updated. I think that's what I had; we have some key management discussion next. Let me get my screen share going. So I expanded the pull request that is currently open, and expanded the section covering requirements that needed further discussion.
The remaining areas of the doc had some prior comments that haven't been addressed yet, so I'm going to skip over those sections for now. Am I sharing? Yeah, I'm working on that. Go. Okay — is my screen sharing up? Okay. So I'm going to go through five different areas, and I'll pause for each one. Right now we've focused on adding in the scope of each of these areas and what's being detailed out. We want to capture any areas that need to get fleshed out, and questions here, and try to come back and answer them next week so we can have a more in-depth discussion on what the Notary guidance is going to be. So the first area is signing key expiry. Right now, whenever the signing key is generated, you have a higher key in the hierarchy certifying it, and that certification has a validity period to designate how long the signing key can be used for. In this section we'll discuss the trade-offs between setting expiry times and enforcing key rotation behavior, and what those need to be — the trade-offs between having short-lived keys versus longer-lived keys, and whether we need to have any restrictions here. I'll pause here for a sec. Any questions on signing key expiry? Hello — I would like to ask something we discussed here at Ericsson earlier: if we have our own signing service — kind of an internal one within the company — do you think we might be able to offload the signing operation to our own external signing server or service, and then use Notary for pushing and key distribution? Do you think that would be possible in the future, or do we have any use cases for that anywhere?
I think the scope for Notary is different, in the sense that the scope for Notary is defining how artifacts are signed and distributed, not necessarily how the keys themselves are managed. The specifications here for the keys will describe how the keys need to be configured externally and how they pertain to validation, but you could use any key management service with those APIs to make that work. Right — I think that's the design goal we have. Okay, that's what I like to hear. Yeah, we've had a lot of feedback along the lines of: I've got a key management system, I want to work with it, how do I get Notary v2 to support that — and that is a key goal of ours. That's part of why I was also poking a little bit at the point that we don't want to store any keys in the registry either: it's about how we store the references and signatures and distribute them, and then we'll have a solution out of the box. So if a customer, a vendor, or a cloud has a key management system they're using, the customer shouldn't be locked out of using it. Okay, yeah, that's really good. I think in the signing key section there, how about a first sentence that defines the hierarchy? Because I think I know how to interpret that, but there's more — what do you mean by the hierarchy, specifically? So first define the hierarchy; I think that might help. That's a good call-out; we'll expand the section. I don't think we've defined the key hierarchy in the rest of the doc. Let's see — we have a brief definition earlier above; I'll point to that in the key expiry section to make it clear what the hierarchy looks like. Okay. So the next point we had was having support for an external timestamp server. For time stamping authorities, there's a standard RFC 3161 specification, and there are public timestamp authorities running right now.
This scenario is going to discuss adding support for that and the benefits it brings. There are some pros and cons in making this a requirement: it is a free service, but it is also a service managed by an external entity, so it adds a dependency. In this section we're going to carve out whether we support timestamp servers and external time authorities, and, as part of that, whether it's a mandatory requirement or an optional requirement, and what the behavior would be in either of those scenarios. But this is definitely something to talk about, because I think that regardless of what we decide here, we're going to need some notion of timeliness for things like revocation and the rescinding of signatures. So we can talk about this, and maybe it makes more sense for there to be something run by the registries, or maybe it makes sense for there to be some kind of service that people can run specifically for Notary — but I think this is definitely something we should discuss. My own impression is that we want this as something the client — the verifier, and also the signer — has the option to configure, but I wouldn't want to say it's a requirement, just thinking of all the behind-the-firewall scenarios and other disconnected environments. And I would also add that in some ways it's easier if you can provide the time information not for every single signature itself but for some kind of aggregate of them — within all the things we've talked about there are aggregates — because that way you have to access the time server less often and sign fewer of those things. The purpose of having a timestamp server is that a lot of signature-based assertions are relative in time.
And so the requirement to have that is fairly clear. Having an external one essentially separates out the roots — between having your own root compromised versus having a public timestamp root compromised — so it adds a second layer of protection, but at the same time it adds implications from an availability perspective. So we're going to detail the trade-offs here, and I think that should help ascertain whether this is an optional requirement and, if it's optional, how it pans out. What we also need to discuss is that with an external timestamp server you're also using another external root — whether it's managed by the registry or whether you use the RFC 3161 timestamp services that are out there — and we need to discuss how those roots get managed and distributed. That's what we'd look to cover in this section, and air-gapped regions will also be covered here. As far as the roots go, I feel like one root could delegate to the other, right, so you have fewer roots to manage — does that make sense? Like, your repository or registry root could say: this is the time server you should trust for this. You can also use a public time stamping authority — there are public TSAs out there — in which case you can rely on a shared trusted service. That way, if your root is compromised, for example, someone will still have a time stamping service in that scenario. So having a public timestamp service here I think has some additional benefits, at a certain cost. Okay, I think we can definitely list out those trade-offs and figure that out. Yep. Any other questions on timestamp servers before I move on?
So the interesting thing for me is the whole verify side — because we talked a little about some air-gapped environment scenarios where you don't actually use the TSA for verification. You need the TSA's public key, but you don't need the TSA itself to verify. When we think of air-gapped environments, they come in different sizes, so to speak. There's the air-gapped cloud, which is big and largely funded and so forth, and they may actually even have their own TSA — but they may not. The point is that where you sign the content you need a TSA; where you verify the content you don't need the TSA. So you can sign on the public side, move the content into the air-gapped environment, and still validate it in the air-gapped environment. If the air-gapped environment is big enough and complex enough that they need their own TSA to sign, they can set one up. The smaller ones are not just air-gapped, they're literally air-blocked — they're walking around on battery packs with no internet — so in those cases they're not building and signing, they're just verifying. This was the line that helped me understand it a little better: recognizing where the TSA does and doesn't get used relative to the verification process. I'm not sure if we got that in here or not, but I think that's part of what we would discuss in the trade-off section we're going to put in next, describing what the implications for air-gapped regions are.
So if you're creating an artifact within an air-gapped region, for example, you can also potentially create your own timestamp service and use it for validation. This is really just calling out: if you have a different root for a timestamp, what benefits do you get from it and what is the cost of doing that — so we can say, yes, this is a good requirement, it should be mandatory; or, this is good in certain cases, it should be optional; or, the benefits here just make it too complex, let's not use this at all. That's what we would come up with a recommendation on. Part of what we also talked about last week is that some of these things just need summary definitions that don't require going to RFCs, so that average mortals can understand how these things are actually used. The summary of the TSA was the thing I didn't realize, and I'm not sure if others are in that same boat — it would help to have a summary of these kinds of things. Yeah, that's a good call-out; I'll look into linking a summary — I know we did that for, like, a freeze attack. So I think we can call out what RFC 3161 means and what it's useful for; yeah, that's a good point. Can I just ask: is it an explicit goal that if my key is revoked at time X, anything I signed before time X remains valid? I think that becomes a transitive goal when we look at signature rescinding and what that means. And yeah, I think that's something we would want to support — if, for example, your signature has a timestamp in it and you have that functionality. Yes.
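The property just described — a trusted timestamp in the signature lets anything signed before a revocation cut-off stay valid, and verification needs only a locally pinned TSA public key, not the TSA itself — can be sketched as follows. This is a simplified illustration under assumed field names, not an RFC 3161 implementation; the `tsa_token_ok` flag stands in for the real cryptographic check of the token against the pinned TSA key.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class TimestampedSignature:
    signed_at: datetime       # signing time attested by the TSA token
    key_not_before: datetime  # validity window certified for the signing key
    key_not_after: datetime

def verify_offline(sig: TimestampedSignature, tsa_token_ok: bool,
                   compromised_after: Optional[datetime] = None) -> bool:
    """Offline check: no call to the TSA is needed at verification time.
    Trust the signature if (1) the embedded TSA token verifies against a
    locally pinned TSA public key (modeled here by tsa_token_ok), (2) the
    attested signing time falls inside the key's certified validity window,
    and (3) it predates any known compromise cut-off."""
    if not tsa_token_ok:
        return False
    if not (sig.key_not_before <= sig.signed_at <= sig.key_not_after):
        return False
    if compromised_after is not None and sig.signed_at >= compromised_after:
        return False
    return True
```

The `compromised_after` parameter captures the partial-revocation idea: rather than invalidating the key's whole lifetime, only signatures timestamped at or after the suspected compromise date are rejected.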
I was trying to bring up that specific question: instead of having to revoke the key for all 10 years or whatever the validity period is, can it be revoked only from a certain point in time? You may not know exactly when the compromise happened, but as with all hazardous things, like when doctors treat cancer they always talk about a margin of error, they take a little extra just to make sure. So if you think the compromise was in March, maybe you include February, whatever date range, but anything before that should still be considered valid. That's where the TSA comes in, and that was helpful for me in understanding the important role the TSA plays here. Thinking through the scenario, I'm wondering if it just makes sense to revoke the 10-year certificate and then create a replacement certificate, one and a half years long, that has already expired. I think that becomes more of an implementation question as to how we get there, because signature rescinding and key rescinding have different models, and you would end up with different implementations. So this is more covering that if you have an external timestamp server, you add an additional layer of protection on top of all those time-based decisions. Moving on, the next one we're going to discuss is signature expiry. As of right now, signatures can expire in one of two ways: if you're using a certificate-based model, the certificate expires; or if you're going with something like a signature allow list or some kind of certification, it's based on the expiry time indicated within that certification. So we want to look here and determine what the signature expiry time recommendations need to be, and whether this is something that should be configurable regardless of the key validity or the certificate validity.
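The "revoke since a date, with a safety margin" idea above can be sketched as a single date comparison. This is only the time arithmetic; the function name and the 30-day default margin are made up for illustration, and a real system would get the signing time from a trusted RFC 3161 timestamp, not from the signer.

```python
from datetime import datetime, timedelta

def is_signature_trusted(timestamped_at: datetime,
                         suspected_compromise: datetime,
                         margin: timedelta = timedelta(days=30)) -> bool:
    """A signature with a trusted timestamp earlier than
    (suspected_compromise - margin) stays valid; anything inside the
    widened window is rejected, mirroring the 'if you think it was
    March, include February' margin discussed above."""
    revoked_since = suspected_compromise - margin
    return timestamped_at < revoked_since
```

This is what the timestamp buys you: without it, the verifier cannot distinguish "signed before the compromise" from "signed after, with a backdated claim," and the whole key history has to be revoked.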
This kind of goes into the point that if you're going with longer-lived signatures, you may want a specification of how long that signature should be valid for. So this session will talk about the trade-offs there and what recommendations we want. The benefit is that it helps protect against freeze attacks. But the main scenario we're trying to cover is that this is an assertion saying: this is how long after you've generated a signature you can trust it, before you start saying this isn't valid and I should treat this artifact the same as an unsigned artifact. Looking at this, I wonder: do we really need to separate the certificate of the signer from the signed content, or do we want to merge them into one and say the signed content can never last longer than the certificate of the signer, so we've always got a timestamp on the signed content at that point? The reason we tease it out is that if you have, say, short-lived keys, you may not necessarily also want short-lived signatures. So this is calling out the distinction between key validity and signature validity. That's something maybe I need to think about some more, because going through the scenarios: do you really want a signer whose key is only going to last for a month signing something for three years? Yeah, and I think this goes into different aspects of it, and that's why it's important to call this out. For example, you may want an artifact signature to be valid for, say, a year; you may want a signature validity manifest, if you will, which is like the update list or a CRL, to be signed for a shorter period, like a week; and the key itself might only be valid for a day. Right.
So that's the scenario you set up: you have short-lived keys because you want blast-radius controls, limiting how many other artifacts were signed with a key if it was compromised. So there are different reasons for having different expiry times, and this section is explicitly going to drive into whether an optional signature expiry time makes sense, or whether it should be mandatory. I think this is more something we want to call out as optional, and we'd call out what the behavior of this expiry time would be if that optional field was present. And I'm just thinking through the attacker scenarios: if an attacker gets a short-lived key, I also want to limit their blast radius. So I'm mentally going through what the risk is if you allow them to sign something for a longer period of time. That can mean you lose the scenario of being able to give a CI pipeline a very short-lived key so it can do the signing, without having to worry about the CI pipeline doing something else later on; but we add the risk that they can now potentially sign stuff that you don't even realize is still out there. To add to what was said, the thought process here was that with just short-lived keys and the timestamp signature, those two variables, in the absence of a timestamp you just have the key expiry to depend on; and in the presence of timestamp signatures, you can use short-lived keys and the signature stays valid until the timestamp signature expires, which may be something like 10 years. So the signature expiry gives you a lot of control, where customers can explicitly define a period less than the timestamp signature expiry, but it isn't tied to the short-lived key expiry. So let me ask this: is there a concept of, when I give someone a short-lived key, also controlling the max lifetime of the content they're signing in some way, like a separate expiration?
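The interplay described above can be sketched as one rule, with hypothetical names: given a trusted timestamp, the effective validity of a signature is bounded by the earlier of its own optional expiry and the timestamp signature's expiry, and is independent of the short-lived signing key's expiry.

```python
from datetime import datetime
from typing import Optional

def effective_expiry(timestamp_sig_expiry: datetime,
                     signature_expiry: Optional[datetime]) -> datetime:
    """Without a configured signature expiry, trust runs until the
    timestamp signature expires (possibly ~10 years out); a configured
    signature expiry can only shorten that window, never extend it."""
    if signature_expiry is None:
        return timestamp_sig_expiry
    return min(signature_expiry, timestamp_sig_expiry)
```

Note what is deliberately absent: the signing key's expiry is not a parameter, which is the whole argument for teasing key validity and signature validity apart.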
I think that's a good question, in terms of where that expiry time comes from. You could make it configurable at the time of signing, or it could be something the root has to sign off on, so we know it's a policy the administrator has globally configured. We'll need to think about what that implies. I think the trade-off here is, for a single root, is there a scenario where you would expect multiple expiry times, or is it a scenario where an organization would establish one universal policy for all their artifacts? That's a good suggestion; I think we'd want to clarify where that sits, and then there's a different trade-off to consider. Any other comments or questions on signature expiry? Okay. The next section we're going to cover is having a mechanism for transparent root key auto-rotation. Roots form the basis of trust here. When you're configuring a root into your trust store, you're saying: I've done some out-of-band validation to establish that I trust this party. Having that root rotated has some implications, and there are both pros and cons, so we've listed out a couple of scenarios. One is where a public root is compromised, meaning access to the root key has been compromised, and what does that pan out to? This would enable an attacker to create new entities within the hierarchy, for example a timestamp root or a delegate, so what are the implications of that?
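The two candidate sources of the expiry mentioned above, signer-chosen at signing time versus an administrator policy the root signs off on, could compose as a simple cap. This is only an illustration of one possible resolution rule; the names and the semantics are assumptions, not anything the spec defines.

```python
from datetime import datetime, timedelta

def resolve_signature_expiry(signed_at: datetime,
                             signer_requested: timedelta,
                             policy_max: timedelta) -> datetime:
    """The signer may pick any lifetime for the signature, but the
    globally configured policy acts as an upper bound, so handing a CI
    pipeline a short-lived key can't yield an arbitrarily long-lived
    signature."""
    return signed_at + min(signer_requested, policy_max)
```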
Other scenarios are: the root is about to expire and a new root needs to be created and distributed; or, given the way current algorithms are used, there's a stronger key available, so you move to a new key, not because there has been a loss of trust in the original root, but because a newer, more secure key needs to be created. So there are different scenarios to consider here, and we'll spell out the pros and cons of having this mechanism so we can have a more detailed discussion on whether it should be supported, not supported, or optional. It doesn't necessarily go into key distribution per se, like a root key distribution mechanism; this is really looking more at whether this automatically updates the trust store on the deployer's side. So my question, before I even got a chance to ask it: the biggest challenge with Notary v1 was the TOFU problem; we didn't want that trust-on-first-use. Do we have anything in this document, or in our overall design, for how we're going to do that initial distribution? So that is a requirement here. I think as we start looking into distribution, we'll want to consider whether there needs to be a mechanism or not; there's a question of who the root owners are and how you establish trust. It works if I have an out-of-band mechanism where I can go get the root from them. So if it's, say, I trust Acme: I know what Acme's site is, they're sharing the root in some public domain, or through some domain that I can validate. So a distribution model would rely on saying that I trust the source; if I'm getting roots from registries, then sure, I trust the registry, give me the root.
But otherwise I think you'd want a mechanism outside the scope of Notary v2 to handle that root distribution. Thinking of equivalents out there, this might be closest to something like the Debian repositories, where you might get a couple of roots preloaded in your distribution, say for Docker Hub when they give you the Docker Desktop install; but otherwise, if you pull from someone else, you're going to have to know ahead of time to go run an external curl command, pull down the key, and inject it into Notary in your local environment. Correct. I think you're making a good point: we said we don't want TOFU, we called out in our requirements and goals that it's not supported, but we haven't defined the usability story to actually solve that. That's a good point. And I do think Docker Desktop is a good example: there is a product that you're trusting, and it's associated with a trusted registry. You might do that in the az CLI; we might help with that. I'm actually not sure what those sources would be, now that I think about it more, because it's the customer's registry; maybe there's a way to integrate with Azure Key Vault, since that fits the Azure CLI, and the same thing you'd do in AWS. Then where you configure where your keys come from for your registry would be a place to do that. The question is what you do for the public registries; we do have to figure out the usability story there. It just can't be "first one in gets it and you own it." Any other questions before I move on to the last section we're adding? So, to go back to what you put in there for the transparent rotation: does this then start to imply that we're going to start putting keys and stuff like that up in the registry? And I'm trying not to break Steve's neck with, you know, the head-shaking.
Should we be storing keys in the registry? Mostly we keep talking about this as one of the risks, and we're trying to say we don't want it. For me, from running a registry, and from all the conversation here from everybody: I don't want to take on the responsibility of holding keys and having things hacked, and there are products specifically designed to secure against that kind of thing. Yeah, I think there are different kinds of keys, too. The developer signing keys used to actually sign images should definitely never be on the registry, because otherwise you're not getting another factor of authentication, you're just getting the registry again. But maybe there are other, less security-sensitive ones. As for the root keys themselves, I feel the keys shouldn't be stored on the registry for sure, but metadata signed by those keys could be, and it could even be managed by the same people in some other way. Especially for things like the root, where you need a way to distribute it, maybe the registry is the way to distribute that, but the key itself shouldn't actually be stored there; it should be stored offline or in a more secure place. I think what I'm looking at is: when you have the root key that you already trust locally, how do you discover all the other keys that have been signed and approved by that root, the intermediates and so on? Is that something that comes from the registry, where the registry says here are all the public keys we trust in this repo? Do we have to have another out-of-band process? Or is it part of the actual signature on the artifact itself, where all that stuff just gets layered in? I think if those are signed by the root key, then you're not actually trusting the registry as any more than a distribution mechanism. Yeah.
Yeah, sorry, I was going to say no, you answered the question. I think we want to tease out what's actually being distributed. The root key should always stay in the possession of the root owner; that should never go anywhere else. I think that's pretty clear cut. The next one, though, is how the public portion of the root key is distributed, and that needs to come through a distribution model that you trust. We're trying to protect against registries tampering, so why would you trust the registry to give you the information to validate that entity? That sort of defeats the purpose. So you need a mechanism that's out of band of the registry. And the mechanism here has to be something that doesn't necessarily rely on the specification itself or a centralized service; it's whatever communication mechanism you trust to go talk to that root owner, whether it's a website they own, or a contact you have within that company. Those mechanisms are out of band, but it has to be a mechanism you trust. Yeah, but if I have gone through and said, okay, I now trust Acme Rockets, and I've done the out-of-band process there, what I'm looking for is a way of saying I trust developer X because Acme Rockets trusted that developer and signed their individual certificate. So whether that goes into something that is part of the signature itself, including the full chain, or whether there's some other way I know to trust this key, yeah, how does that work? That should be part of the signature mechanism. You don't necessarily need to know every key hierarchy or everything that's been signed; when you're looking at the signature, you want to understand how this signature chains up to that root.
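A minimal sketch of that "chain up to the root" check, with subject/issuer strings standing in for real certificates and names like "acme-root" purely illustrative: the verifier holds only the roots it obtained out of band, and everything else travels with the signature.

```python
from dataclasses import dataclass

@dataclass
class Cert:
    subject: str   # who this certificate identifies
    issuer: str    # who signed this certificate

def chains_to_trusted_root(chain: list[Cert], trusted_roots: set[str]) -> bool:
    """Walk the chain leaf-first: each certificate must be issued by the
    next one up, and the topmost certificate must be issued by a root in
    the local trust store. No registry trust is required beyond delivery."""
    for cert, parent in zip(chain, chain[1:]):
        if cert.issuer != parent.subject:
            return False
    return bool(chain) and chain[-1].issuer in trusted_roots
```

So the only out-of-band artifact is the root itself; "I trust developer X because Acme Rockets signed their certificate" falls out of the chain walk, not from any central key listing.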
And is that hierarchy still valid? Those are the questions you want to answer, and I think that's part of the signature spec; we'll call that out. And I think there are some thoughts there, for example with public roots and consumption of third-party packages or base layers. If you have multiple environments, you have repositories with different images for different components, deployed across those environments. There seems to be a single set of publisher signing keys, public keys associated with the publisher, not necessarily a root. You've said these are the roots that I trust, these are the publishers that I trust; but in different environments you might want to set up specific rules. So it seems some parts of these are policies that can be applied to specific services or environments, versus the centralized thing that lives in the repository. Hey folks, I just noticed the time; I have a hard stop. I don't want to squash the meeting, this is good, I just want to acknowledge I have to drop. Yeah, I think we got a lot done; we'll continue on this next time. I'll call out the last scenario: this is really just a look at rescinding signature validity. We are going to split out the allow-list and deny-list models we've looked at and call out the pros and cons, and we'll schedule more time as we flesh out these docs. Those are the five areas we're looking to cover. So if you have any comments on the section we didn't discuss, or some of the other sections, feel free to put comments in; we'll look at them when we come back with updates next week, or in more detailed discussions in these areas. All right. Thanks. Thank you.