Good morning, Milo. We'll see who shows up today, as I likely added a lot of confusion about what time the meeting actually starts. Just to make it more confusing, Milo, you're going to be on twice. Good morning. Twice is good. Good morning, Trent. Good morning. Good morning, Marco, or good day, sorry; I've been trying to get better about being time-sensitive to people around the world. Good morning, and if folks can sign in, we'll kick this off, because I'm not sure how many others are going to join with the confusion on the start time today, which was totally my fault. Amy did make the changes, and we're just going to blame COVID. I think folks don't see the chat messages if they were posted before they joined, so I put the link there again. With Niaz here and the recording going, Niaz, I guess we'll just hand off to you and let you kick it off.
Yeah, give me one second. I'm just updating the pull request in real time.
Just in time. Speaking of Justin, he won't be here; he's off this week.
Yes, you have that PR that we could put in the notes? It's in the requirements. Let me see, it's in the notaryproject, the main one. Under notaryproject, which repo? I see an updated key management scenarios PR, but is that the one that was open? I put it in the chat.
Okay, great. Yeah, requirements, okay, great. So the changes in here are fairly minor, but they clarify the distribution workflow that we talked about last time. I added in some definitions around what a signing key and a root key are, and also clarified the trust store definition,
which I think was causing a lot of confusion last time. Then I used that to add in a section on what the distribution workflow looks like. For some of the automated distribution, we'll want to investigate whether TUF can be used for that and whether there are blockers there, but let's look at what the manual workflow looks like and see if that's enough for our v1 launch.
Do you want to share your screen and do a walkthrough? Sure, give me one sec, I need to change some system preferences. If everyone can start reading it, I'll go ahead and set that up so I can walk through comments, as opposed to linking directly to the markdown.
Yes, I'm looking at this. Did you cover how keys are acquired? That's what I'm trying to skip through here. It's covered in the signing use cases, I believe. Let's see. I'll let you finish up. It's in the larger doc, but might not show up in the diff; it talks about the steps needed to do it. In terms of using one key to sign another, there are implementation details that we can walk through as we start implementing, in terms of the PKCS-specific interfaces to use, but I didn't call out the step-by-step process at this point.
Yeah, what I'm trying to poke on is: how do we establish a new... I'm trying to find a word for a client that is clean and pure, like in a hospital or something. The ephemeral client scenario: what is the way in which we can put content on that client in a safe and secure way?
So an ephemeral client will still need some configuration to know where it's getting artifacts from, right?
As part of that, it can have the trust store configuration. So the trust store is a deployment variable; consider it that way. At the same time as you're telling the client, "here's the registry, repository, or target that I want you to get," you also give it the trust store file that it uses to validate those signatures. So that would tell it what it needs from a validation perspective.
Okay, I guess my question would be: what is considered a secure way to put that there?
And I'm trying not to ramble. You're trying to change something so you can share your screen, so I'll let you. I think I actually need to quit this call to do that. Want me to call right back? Yeah. Okay, I'll give everybody a chance to read.
Hey, let me know when you're back and can hear. All right, I'm back. Let me try sharing my screen again. Is everyone done reading, or should we take a few more minutes? Just give a thumbs up when you're ready.
I think the key revocations are interesting too. Just to time-box it, we might as well start, because we want to get to some of the other content too. Let's see, I don't see any new comments. I guess we can go through some of the definitions. Does anyone have any questions on the definitions themselves? Were these just pulled from the existing Notary material? They look familiar, so I'm not sure if they're just industry terms, which is good, meaning we're not inventing something new.
I went with industry terms. There are some clarifications for the trust store. For container signing itself, we are defining what the relationship of the signing material to the source of the material is. That's something that comes in specifically for this, but otherwise it's general terminology that we use for the trust store and certificates.
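The trust store configuration just described, handed to an ephemeral client as deployment configuration alongside the registry location, can be sketched as a simple lookup. This is a conceptual sketch only: the locations, field names, and URLs are illustrative assumptions, not the actual Notary v2 schema.

```python
# Conceptual sketch of a trust store delivered as deployment configuration.
# All names, locations, and URLs here are illustrative, not the NV2 format.

TRUST_STORE = {
    "registry.acme-rockets.io/base-images": {
        "public_key": "-----BEGIN PUBLIC KEY-----...",    # placeholder PEM
        "crl_url": "https://pki.acme-rockets.io/crl.pem",
        "identity": "ACME Rockets release signing",        # for administrators, not validation
    },
}

def validation_inputs(location: str) -> tuple:
    """Return the two things validation actually needs for a location:
    the trusted public key and the CRL URL. The identity field is for
    humans and is deliberately not part of validation."""
    entry = TRUST_STORE.get(location)
    if entry is None:
        raise KeyError(f"no trust configured for {location}")
    return entry["public_key"], entry["crl_url"]
```

The point of the sketch is that an ephemeral client needs nothing beyond this file (plus the registry location) to know which signatures to accept.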
I haven't moved into using x509 certificate definitions. That's one where I think we can look at it further down the implementation: whether the x509 format is something we want to go by, or whether something along the lines of what TUF has previously done, using multiple JSON files for key certificates, or chained certificates, is the route we want to go down. So that one I think we can sort out more during the implementation phase. I don't think it disrupts the workflow too much at this point.
The only thing that becomes interesting with that, and I won't pretend to understand the differences between the different key solutions, is that the prototype we've got us using at this point with x509 uses the CN to align with the registry, and that gives us some interesting benefits. But I understand there are at least some complications with x509 certs, so it would be interesting to elaborate more on that. The other reason I'm making it more of a priority conversation is because the current prototype does use it.
Yeah, I think that's a modification we will want to make, where using the CN kind of ties you down into getting a certificate from the registry itself. I think the route there would be that this is just another configuration. Think of this as a key pair you're getting from the trust store that says, for this location, here is the certificate that I'm going to be trusting. From the certificate, really the only thing that the validation mechanism needs to pull out is the public key, and the URL. The identity is more for the administrator, to understand where the key originally came from. So really, from a validation perspective, the two things that are needed are the public key and the CRL URL. The CRL URL is essentially there to go check what the latest certificate revocation update looks like, and the public key is what you're matching with the signature to make sure that the
artifact was signed either with that key or with something chained from that key.
So the CRL is how you get the revocation list, is that how I understand it? Yes, that's correct. Okay, and that's embedded in the key? That's embedded in the certificate. Sorry, yeah.
Okay, that's interesting. So how does that work when we want to be able to support keys that move between registries? It originally started at Wabbit Networks, went to Docker Hub, went into ACME Rockets, and then in the ACME Rockets scenario it goes into an air-gapped environment, which always adds interesting challenges. Is the idea... maybe I'm jumping ahead.
No, that's actually a pretty good question. I think that's half a sentence that kind of didn't get finished; I missed that. I do talk about the air-gapped environments. They can specify the trust store, and then you can put in a proxy for the CRL. So you can either download the CRL and make it available locally, at which point it's up to the air-gapped environment administrator to figure out how frequently they're going to refresh that CRL. That is something we need to address: you aren't going to get up-to-date information in an air-gapped environment, and I think that's expected. Okay. At the end of the day, you can use CRL proxies within your network, because the CRL is always going to be signed, so I don't think there's a risk in using a proxy within your air-gapped environment. So that's kind of a standard model in key management? CRLs can be proxied for air-gapped environments, yes.
Okay. We tend to think of air-gapped as being very isolated, which is kind of a goofy way to say it: the not-common, submarine kind of scenario. When we start thinking about environments where the average large or critical security workload will be air-gapped, even in a public cloud, does that still scale to that kind of functionality? I guess I'm asking: is there something around that
that makes it very unique, not common, and more complex, or does it scale? Probably over a period of time, I don't know, 60% of people that are dependent on keys will probably be in an air-gapped environment, air gap meaning a private vnet that doesn't allow anything outside of it to get in. Does it scale to that?
It should. I think it goes back to how air-gapped environments get managed. Something that might come out is that people operating in air-gapped environments with those kinds of security controls may already be re-signing all their artifacts, right? They may not trust signatures that are coming from outside either. So I think there is a mechanism here for air-gapped environment operators to decide how they see fit to manage those signatures. And it doesn't prevent alarming and tooling: air-gapped environments could, for example, monitor the CRL externally, alarm and take actions whenever that CRL is updated or changed, and then go take a manual action to update their air-gapped environment. There are a lot of different ways they could build that business logic, but I don't see anything here that would prevent them from coming up with a mechanism to address that.
No, that's actually a great point. I actually like that a lot. It scales, and it's the consistent model we've been thinking about. In fact, without even thinking about it, it's what we already have in our workflow.
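The external-monitoring idea just mentioned can be sketched very simply: a process outside the air gap fingerprints the published CRL, and any change triggers an alarm and a manual import through the data diode. This is purely illustrative; the mechanism and names are assumptions, not anything specified by the project.

```python
import hashlib

def crl_digest(crl_bytes: bytes) -> str:
    """Fingerprint a CRL so an external monitor can detect changes."""
    return hashlib.sha256(crl_bytes).hexdigest()

def crl_changed(last_mirrored_digest: str, fetched_crl: bytes) -> bool:
    """True when the externally published CRL no longer matches the copy
    mirrored inside the air-gapped environment; that is the alarm trigger."""
    return crl_digest(fetched_crl) != last_mirrored_digest

mirrored = crl_digest(b"crl-v1")
assert not crl_changed(mirrored, b"crl-v1")   # nothing new, no action
assert crl_changed(mirrored, b"crl-v2")       # revocation published: alarm, manual import
```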
It's: Wabbit Networks publishes something, Docker, not arbitrarily, decides that hey, this is certified so people can trust it, and then when they bring it into the ACME Rockets environment, ACME Rockets, on the outside of the air-gapped environment, is evaluating that content again for them. As part of that evaluation, it has to have a Wabbit Networks or Docker certificate on it. If it does, then it's ingested, it's security-scanned, it's tested for their environment, and then it's given an ACME Rockets signature. When it's in production, they don't care about anything else; the only thing that makes it allowed to go into production is the ACME Rockets signature, not the other ones. So if you took that into the air-gapped environment, basically what we'd say is: outside the environment, yes, they go check on those other keys as part of the ingress, but to get into that environment there is a secondary stamp put on it that says "yes, I trust this," and that update model is maintained. So inside the air-gapped environment there's no reach outside; it just means that content is constantly flowing into the air-gapped environment, which is the diode model that people tend to use.
So this is what I'm a little bit confused about, because it seems to me that we're using x509 TLS here, basically, right? Unless I misunderstood something, please correct me. But if we do that, then I'm not sure that it can support second-stamp signatures like that, at least not out of the box.
Elaborate a little bit, because I'm confused, or does it make sense to you, Niaz? So if I go to hub.docker.com right now, for example, I don't get two sets of signatures. I only get one. Right, that's part of the problem. That's part of what would change. What are you trying to validate from hub.docker.com? Sorry, I didn't quite catch that.
Oh, I see. Yeah, let me clarify. Steve was talking about possibly wanting a second set of signatures, right? It may perfectly well be that 90% of use cases, or 80%, is covered by one set of signatures, which TLS lets you do today. So if I go to hub.docker.com, I know Docker signed for something, and then that's it. I don't get to have Docker and, let's say, Microsoft both signing for the same TLS connection.
When you say TLS, you're talking about the connection to Hub itself? Yeah, we should separate the two. I'm not talking about TLS in this case, which I understand, but if we're using the TLS infrastructure, then basically you get one set of signatures at any given time.
Yeah, it's interesting that you mention Microsoft in that case, because there are a bunch of interesting things that get pulled into that. If you go to Docker Hub, you'll find a bunch of content; there is a catalog that lists the content, and the URLs actually come from Docker Hub. They actually don't say it; there's a default registry name, and that's a whole other conversation. Let's just assume docker.io gets injected in there. What's interesting, to your point about this connection, is that when you look for Microsoft content in Docker Hub, it all actually comes from mcr.microsoft.com. Now, obviously the Docker pull doesn't navigate Docker Hub, read text, and decide where it's going to go, but the details of what you're getting at are exactly what we're trying to make sure are disconnected, if you will. So: Wabbit Networks isn't big enough to worry about hosting their own registry. They like the idea of putting it on Docker Hub, so they do put it up there, but they're also small enough that most people don't recognize them. In that case, having two signatures is really important, because to your point, I don't know who Wabbit Networks are; I'm not going to trust that cert.
It might as well say Bad Rabbit. But people do trust Docker. So now Docker has a way to differentiate arbitrary public content that Bad Rabbit puts up there from what Wabbit Networks does, and Docker says, you know, they're a partner, we've certified their content, and Docker puts a stamp of success on it. By the same token, for the Microsoft content, we actually don't even submit the actual images to Docker Hub; we syndicate a catalog to them. All of that is somewhat irrelevant, because the best practice is, regardless of where the public content is hosted, customers should bring that content into their environment where they have control over it. So now there's a copy, and they want to be able to apply their own key, because it's nice that Microsoft certified it, but we're going to publish stuff that accidentally breaks somebody; a well-intended security change on an update will break somebody inevitably. So they should adjust it to their environment, test it, validate it, and, whether it's an air-gapped environment or not, put their own key on it. So now that's the third key scenario.
If you're talking about signatures, you can have multiple signatures, so you're really just looking at what key is configured. In the scenario of something being moved into an air-gapped environment and being re-signed, it could still have the original signature, but that's never included in your trust store, so you're just never looking at it in the first place.
Trishank, I think, for the purpose of TLS validation, were you talking about validating the URLs for the CRL, or for something else? For something else. No, no. Yeah, I should clarify: I'm not talking about a TLS connection, I'm talking about the signatures of the metadata.
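The three-signature scenario above (Wabbit Networks, Docker Hub, ACME Rockets) can be sketched as follows. HMAC stands in for real asymmetric signatures purely to keep this runnable with the standard library, and the signer names and envelope shape are illustrative assumptions; the point being demonstrated is that only keys configured in the local trust store are ever consulted, so extra signatures on an artifact are simply ignored rather than causing failures.

```python
import hashlib
import hmac

def sign(artifact: bytes, key: bytes) -> str:
    # HMAC over the artifact digest stands in for an asymmetric signature.
    return hmac.new(key, hashlib.sha256(artifact).digest(), hashlib.sha256).hexdigest()

def validate(artifact: bytes, signatures: dict, trust_store: dict) -> list:
    """Only signers present in the local trust store are ever consulted;
    additional signatures attached to the artifact are ignored."""
    return [
        signer for signer, key in trust_store.items()
        if signer in signatures
        and hmac.compare_digest(signatures[signer], sign(artifact, key))
    ]

artifact = b"example image manifest"
keys = {"wabbit-networks": b"k1", "docker-hub": b"k2", "acme-rockets": b"k3"}
# The registry stores all three signatures side by side:
signatures = {name: sign(artifact, key) for name, key in keys.items()}
# Inside the ACME Rockets air gap, only the ACME Rockets key is configured:
assert validate(artifact, signatures, {"acme-rockets": b"k3"}) == ["acme-rockets"]
```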
It looks like we're using x509, right, and borrowing ideas like using CRLs and everything. My question is: is it technically possible to attach multiple sets of signatures to the same content using x509? I'm not sure about this. I'm sure you can delegate...
We're already doing that in the NV2 prototype. The way you're asking makes me wonder what would prohibit that; it's just a one-to-many collection. And I know you know the details of the security a lot more than I do, so there's a reason why you're asking the question, and I don't understand the question.
Oh, it wasn't clear to me that the current prototype supported multiple sets of signatures. I'm very curious about how this works. I'll take a look.
I've got to check if we've been posting the recordings of the previous sessions, because we're not good about taking notes; we've kind of given up on taking notes and are just going to point to the videos. Let me make sure the videos are getting posted so we can refer to these more easily. We did do a demo of it last week.
So, x509 is the certificate format. It really just defines what the certificate needs to have to be valid. It doesn't define the signature format, so you are free to construct your signature in any shape or form you want from an x509 cert.
Okay, so basically there's a layer of metadata that NV2 is adding, where you're saying, for example, I can map two different x509 keys to the same metadata. Is that correct? Right, that's correct. Got it. Okay, thanks for the explanation.
Can I ask a question in addition? So with PKI keys, x509 in general, you have the CRLs, which basically define which keys are still valid. That's the TLS infrastructure as we know it today. So the CRL URLs we are talking about are basically an extension on top of that, which provides the same mechanism at the signature level.
Is that correct? I didn't quite understand that. So let's say you just have your PKI keys, where we already know the CRL concept. You create some keys, you define a CRL that's baked into the key, so the key defines where the CRL is located, and if that CRL says "this key is revoked," then the key doesn't work anymore. But the CRL we are talking about now is at the signature level, so from my understanding it's just an extension of the technology as we know it today from TLS.
No, your first part is actually more accurate. This is kind of at the certificate level. So when you're defining a certificate in the trust store, and also when you're introducing the certificate in the signature, both of the certificates, if you use the x509 format, would have the CRL. This is one where I do want to dive in and compare how the TUF mechanism compares, which is why we haven't made a decision on whether this should be x509 or not. But the CRL should at least be included in the trust store, if not in the signature itself as well.
So, regarding the whole proxy part, how do we prevent a compromised CRL? Let's say I get access to this CRL, and I know some of these signatures or certificates are revoked, and I manage to modify the CRL and put that in place.
So that depends on your TLS controls. In a public setting, in a non-air-gapped environment, you're relying on CAs to do that validation for you, right? Similarly, when you go to an HTTPS site, how do you recognize that the HTTPS site is actually the URL you're looking for? So that's the mechanism in the public environment. In the air-gapped environment, to create your proxy,
you're going to have to set up your own certificate infrastructure and a root of trust, to say "this proxy URL is something that I trust instead."
Yeah, so is the security of these CRLs dependent on the security of TLS on that connection? Yes, here it is. Okay. Yeah, because my concern would be, if any server hosting these URLs was compromised, that would mean the entire signature system would be compromised, right?
So there are two factors to it. I think we need to go into a little bit more of the threat model itself. One, the CRL itself needs to be signed, and so, unless you are revoking a root certificate, in which case you just don't have a new CRL available, you need to compromise both the signing key and the URL to generate a valid CRL. So a compromise of just one wouldn't let you generate a valid-looking CRL.
So what signs the CRL? To generate a CRL, I didn't quite go into that, but you need your root certificate, right? Your CRL needs to be signed with your root key as well.
Okay, but then after it's generated, it could later be compromised, right? The server hosting the... If the server is compromised, as long as the root key also has not been compromised,
it's fine, because you can't really generate a new CRL.
Okay, I think I understand. I just don't understand why it needs to go to a third-party URL to download it, instead of just including the content.
Part of it is to make sure you are getting the most up-to-date CRL that's available. If the signature itself contained the CRL, then you could potentially have a signature being revoked, but if the artifact was generated prior to that, it would still not show itself as revoked, right? So you do need to go to a place where you can retrieve the latest information.
So, a point of clarification, because part of this is new to me; the stupid questions might help provide clarity. Today, the way the prototype works is: I have a key, I create a signature, and the signature goes in the registry. I could create multiple keys; that was the second link I put in there, or either of those two links. Now, when I take an ephemeral client, a pristine environment, whatever, and I say, "hey, I'm trying to pull this image or artifact; can I get the collection of keys, please?", it brings back a collection of three, in our example. In my local store,
I might have no keys that match, or I might have one or more keys that match, to validate against the CRL. The question I'm trying to figure out is: I'm going to get back a bunch of signatures from the registry. If I don't have any of those keys locally, do I need to know what the CRL is, to go figure out whether they're revoked, or does it not really matter, because I don't have the public keys for those anyway? So my question is kind of baked in: does the CRL need to be in the signature we persist in the registry, or can the CRL just be assumed to be on the key? Because until I have the key, I don't really need to know whether it's revoked or not. And to Marina's point, if I get a compromised key... I don't know if I can get a compromised public key, but if I could, then of course it could point to a different CRL. So I'm kind of confused on that.
I'm having the same confusion there, because if my CRL is signed by a key, and specifically that key is leaked somewhere, I can modify the CRL.
So there are two factors here, right? You are getting the CRL from a specific URL. You need a compromise of both the URL as well as the signing key for there to be an issue here, and we're talking about a pretty rare event there, right? Well, usually they're in the same place, so I'm not sure that's usually a two-separation thing. But sorry, I didn't mean to interrupt.
I think we're getting a bit digressed by the CRL, because that's just revocation. Let's just talk about key distribution in the first place, and then we can talk about revocation. Here's what I don't understand: are we using TLS or are we not? Because if we're using TLS, I don't understand how Steve's three keys get distributed with TLS. It just doesn't work that way; you get one set of keys at any one time.
We're not using TLS.
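Before moving on from revocation, the two CRL points made above can be sketched together: a fetched CRL is only honored if its signature verifies against the root key already held in the trust store (so compromising the hosting URL alone is not enough), and the CRL is fetched rather than embedded so that revocations published after signing are still seen, with a stale CRL forcing a refresh. HMAC stands in for the real root-key signature; the CRL body format and field names are illustrative assumptions.

```python
import hashlib
import hmac
from datetime import datetime, timezone

def verify_crl(body: bytes, sig: str, root_key: bytes) -> set:
    """Only a CRL whose signature verifies against the trusted root key
    is honored; a tampered CRL from a compromised host is rejected."""
    expected = hmac.new(root_key, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("CRL signature invalid; ignoring fetched CRL")
    return set(body.decode().split(","))   # revoked serial numbers

def is_revoked(serial: str, revoked: set, next_update: datetime) -> bool:
    """A CRL past its validity window forces a refresh instead of
    silently answering from stale data."""
    if datetime.now(timezone.utc) > next_update:
        raise ValueError("CRL is stale; refresh required")
    return serial in revoked

root_key = b"root-signing-key"
body = b"serial-123,serial-456"
sig = hmac.new(root_key, body, hashlib.sha256).hexdigest()
revoked = verify_crl(body, sig, root_key)
assert "serial-123" in revoked

# An attacker controlling only the hosting URL cannot forge a valid CRL:
try:
    verify_crl(b"serial-999", sig, root_key)
except ValueError:
    pass
```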
So where these are coming from, and how they're signed... I think, just as schedules have happened, you've missed when we've done the demos, Trishank. I'm happy to do them again, because we'll continue to get new questions every time we look at them, and I would do it now, but with 15 minutes left, Marina wanted to cover something as well, and it would take me a minute to pull up the demo anyway. So why don't we schedule that for next week, next Monday at this time? Totally my fault; Amy did all the calendar changes, but it was my fault. At the same time next Monday, I will walk through the latest prototype and we can talk exactly about this. But Trishank, the idea is that the keys, the signatures, that get posted to an artifact originated from different URLs, to your point, if we're using x509, which we're currently using. But I can still get that collection from a third, a fourth registry. I might have a Wabbit Networks signature assigned to it, a Docker Hub signature, and an ACME Rockets signature. And honestly, in the production environment, ACME Rockets is going to have multiple registries, and the final production registry will possibly be a different one, not one of the TLS keys from the previous one. So, yes, I can go validate the URL that's on it, but that's a secondary verification. And as I try to explain it, I don't think I can do it justice without doing a demo. So maybe... are you available at the same time next week?
It's hard to say right now, because I'm going to be traveling, but I will do my best. And yeah, sorry I haven't caught up with the previous work; that is my bad. Everybody's got busy schedules; there's no weight put on that whatsoever, it's a total acknowledgement. Thanks.
Yeah, I will do my best to attend next week. Okay, and I'm happy to set one up with just you and me, but I think others have the same question; that's the only reason I'm trying to do it at next week's meeting.
Okay, so I think we did get a little off track. One of the things we keep coming back to is the CRL, or the key revocation scenario, which I guess CRLs are part of. I'm hearing that it is a really important scenario, but it's hard to talk about revocation unless we know we're all on solid ground about how we're getting the keys in the first place. So maybe that's the place, Niaz, if you want to, because you've obviously put a bunch of thought into it, and there's a bunch of great content in here. Do you want to focus on that for now?
Well, what I'm missing here is what the concern with it right now is. This is something that we see as a standard in a lot of the different code signing formats that are out there today, and we know it's proven to work. And so you have two forms. One, your URL is something that is signed by a public CA that you're vending a certificate from, from a TLS authentication perspective. This is usually significantly different from the signing infrastructure that you're using to manage and sign your artifacts. So it generally means you need to take down someone's hosting as well as take down their signing infrastructure to be able to result in a compromise here. This is different from the TUF approach, or the Notary v2, Notary v1 approach, where you are managing all your keys as part of Notary, right? Having two different key mechanisms here means you do need to compromise both. So what I'm missing here is: what is the threat model, or what is a specific threat,
that we feel is an issue here that needs to get addressed?
I think I might have misunderstood earlier, but the concern that I have, just one to think about more than anything, is to make sure that the CRL can't be used to overwrite the existing trusted keys. As long as it's just used to revoke them and to manage that situation, I think it should be totally fine.
That's good; it's an interesting point. Maybe we put too much weight on what the CRL means. But the other thing that I hear constant tension on is this challenge of x509, and certs, and certificate authorities being only available to the people who want to spend the money for it, which is obviously some, but we don't want to exclude the rest of the community that also wants and needs to sign something, or even the big companies that want to have lots of environments.
That wouldn't be an issue for x509 certificates. An x509 certificate is a certificate format, so you can generate your own x509 certificates from your root keys if you want. It's only if you're getting an x509 certificate from a publicly trusted CA that you're having to pay for it. So whether we use the format or not, I don't think it necessarily imposes that restriction.
It's just the concept of self-signed certs? Yes, so you can get a self-signed certificate as your root. That's essentially what the public CAs are doing for you: they have their own self-signed roots that they're giving you intermediates or subordinate keys from. So you can go ahead and create your own self-signed root, share the x509 certificate for it, and say: this is the trust.
This is the root that I'm going to be using, and then generate your keys off of that. When we talked about the community use case, we said we could have registry operators do the same, where they have their own root that they're allowing developers to sign off of. So I really wanted to enable this to be more about the tooling, and really support any of the different business use cases for it, rather than forcing one through the implementation.
Yes, and just trying to carry the questions that I keep hearing and don't completely understand: I hear the tension between x509 being only for the big players, which you've just explained we can still handle with self-signed certs, and GPG being the more developer-friendly thing that anybody could use, which I assume is why The Update Framework and Notary v1 adopted it, but I may not be doing justice to the conversation.
So I think this is a middle ground, allowing enterprises as well as community developers to have a path forward. With GPG you still have a similar kind of issue, where you do need to share your key information publicly as well, saying "here are the keys that you should trust for my entity." So I don't think GPG necessarily gets rid of that mechanism. But that's something we talked about: whether we want to use x509 or GPG, or even some of the formats that were there in Notary v1. I'm still not convinced that they impact the workflow, which is what I'd like to get consensus on before we start doing more research and figuring out what the right formats are.
I think, to a certain extent, the format shouldn't matter as long as it includes the fields that we need, right? And if x509 includes the fields that we need, I think that should work. So I guess, at least for me, the question would be: what are the fields that we need? How do we get those in a format?
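The self-signed-root model described above can be sketched as a chain walk: anyone can mint their own root and issue subordinate certificates from it, and clients simply decide which roots go in their trust store. Actual signature verification is elided here, and all the certificate names (including the registry-operated root for community developers) are illustrative assumptions; the sketch only shows how trust is decided by following issuer links to a self-signed root.

```python
# Illustrative certificate records: "issuer" pointing at itself marks a
# self-signed root. Signature checks are deliberately omitted.
CERTS = {
    "wabbit-root":   {"issuer": "wabbit-root"},    # self-signed enterprise root
    "wabbit-build":  {"issuer": "wabbit-root"},    # subordinate signing cert
    "registry-root": {"issuer": "registry-root"},  # registry-operated community root
    "dev-alice":     {"issuer": "registry-root"},  # community developer cert
}

def chain_to_root(cert_name: str, certs=CERTS) -> list:
    """Follow issuer links until a self-signed certificate is reached."""
    chain = [cert_name]
    while certs[cert_name]["issuer"] != cert_name:
        cert_name = certs[cert_name]["issuer"]
        chain.append(cert_name)
    return chain

def is_trusted(cert_name: str, trusted_roots: set) -> bool:
    """A certificate is trusted only if its chain ends at a configured root."""
    return chain_to_root(cert_name)[-1] in trusted_roots

assert is_trusted("wabbit-build", {"wabbit-root"})
assert not is_trusted("dev-alice", {"wabbit-root"})
assert is_trusted("dev-alice", {"registry-root"})
```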
Right, and I think I called those out in terms of what we need from the trust store perspective. From the signature, really all we need is the public key; that's the only part that's used. We had an open question around the timestamp that we haven't closed out yet, but besides that I haven't seen any other necessary fields that need to show up in the signature itself. If everyone can share comments on this pull request, I can take a crack at closing out any remaining questions, and if there's confusion and certain steps need to get addressed, I'll take a crack at closing those out too. I do want to spend some time going through Marina's doc today.

Yeah, why don't we read it? Is nine minutes enough for you to go through what you had?

We can give it a try, and whatever we don't get to we can do next week.

Okay. So for this, let's definitely get to a point where you're comfortable, because I know we wind up with PRs that stand as PRs forever, and we want to get something confirmed that we can then iterate over. Maybe let's make that a goal: whatever we feel comfortable with, let's commit this week, and then we can open issues and continue conversations on changes to it. That way we have something for reference.

Yeah, if we can get some comments on that PR, and I noticed some have come in already, I'll go ahead and provide some clarifications and then we can look into getting the pull request approved.

Cool. All right, over to Marina. All right, can you see this? Okay, cool.
So the main idea of this document was just to look through some potential ways to use the TUF workflow with some of the discussion we're having about Notary. One of the big ideas in this document is how to separate the default public registry use case from the private registry use case, where we need to be a little more client-defined, a little more specific. In this document I also focus on first-time clients, because I know that's a big concern, especially with ephemeral clients, to make sure that whatever model we come up with still makes sense for those clients. So it doesn't cover everything; it just focuses on those use cases for simplicity.

This is that diagram from last week, just for context. I think one of the big open questions, and the reason for collaboration between these different groups, is step number one in these different workflows: the things that you need to have ahead of time, the things that you need to obtain through some sort of external mechanism. That's where this whole key management process really comes in and can coordinate with the TUF work. For example, in the public example here using the default registry targets metadata, the client needs to start with a copy of the trusted root metadata. This is where, in Notary v1, there was this trust-on-first-use issue, which I think is a big issue, especially for these ephemeral clients. So my tentative idea here is to include this in an out-of-band process, either by shipping it with the client somehow securely, or by having a key management solution ship with the client that can then access this trusted metadata. This is one big area that's not solved, but definitely one that we should keep talking about and figure out. And this just goes through
one of the key things, and there are two parts. How does the metadata get created, and does it go across repos that shouldn't have to share content? I don't want to get into that right now; that's part one. But part two is where we keep waving our hands: somehow the client should be able to get this information, and I think that's the part we really need to figure out. One of the things I keep hearing is that if our registry gets compromised, then I can't trust anything from it.

That makes sense.

And I've heard some conversations that talk about, hey, there are two things a client would get, because the idea is that it's less likely, certainly still possible, but less likely, that two things get compromised. Should Notary v2 have a design where there are two endpoints that are somehow hard to compromise both at the same time, and should the pattern be built around that?

Yeah, I think that's a really good idea. For example, for getting this trusted metadata, you could have one copy that's shipped with the client, however that client is shipped, and then you could have another one that's downloaded from an external trusted location. You can compare those on first use and say, okay, do these match? If they don't match, you know something went wrong and maybe we should reconsider trusting this chain.

There's a lot of really good... when you get the client, in other words, I'm going to get a binary that I have to put on the machine, which is fixed, but then there's data that I can acquire at the same time.

Yeah, this is just off the top of my head; it's just one way you could get it from two locations. But I think getting it from two locations in general is a good idea.

Is a two-location approach really needed? I think this goes back to the point that even when you're deploying an ephemeral client, you're telling it a location to go get things from, right?
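The two-source cross-check described above can be sketched very simply, assuming the client just compares digests of the two independently obtained root copies. Real clients would more likely compare parsed metadata or key fingerprints; this is only a sketch of the idea.

```python
import hashlib

def fingerprint(root_metadata: bytes) -> str:
    """Digest of a serialized root copy; stands in for a key fingerprint."""
    return hashlib.sha256(root_metadata).hexdigest()

def first_use_ok(shipped_copy: bytes, fetched_copy: bytes) -> bool:
    """Trust on first use only if the copy baked into the client binary
    matches the copy fetched from a second, independent location. An
    attacker would have to compromise both channels at once."""
    return fingerprint(shipped_copy) == fingerprint(fetched_copy)

print(first_use_ok(b"root-metadata-v1", b"root-metadata-v1"))  # True
print(first_use_ok(b"root-metadata-v1", b"tampered-root"))     # False
```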
So you do control what's being deployed to the client in terms of configuration, right? And in that scenario, as part of that push of configuration, if you're saying "here's the trust store that we want to trust," what is the concern there? That's something that came up in the previous conversation as well: I think we'll want to understand what the threat or the risk is that we're trying to address.

Yeah, I think as long as it comes from a trusted configuration location, that's a reasonable place to start the chain of trust. It depends again on the use cases and the threat model.

Yeah, that is the gnarly ball we're trying to deal with here. When I think of an ephemeral client getting initialized, it's basically provisioned with something that says, hey, you have permission to go to this key store and get information. The thing you're getting from the key store is a piece of data: it's a key that has some time period associated with it that is not very short-lived, right? There's a certain cadence of key rotation, whether it be a year, a month, or whatever. But what I hear is this other part: I want to get this timestamp metadata, and I don't know if I got rolled back. That piece of data is assumed, and I'm using the word "assume" here intentionally, to be rotated or updated much more frequently than a key store probably would be. Now, maybe it's a configuration store; there's a key store, there's configuration, and then there are the actual artifacts. The timeliness of that update, and how it can get updated, is the interesting problem.

Yeah, so I think that goes into the next few steps here.
Number two is just verifying root, but then the proposal I have is that you then download this timestamp and snapshot from the registry. These can be updated on the registry on some kind of cycle, and this can be an online, automatic process, so nobody has to go and do this every hour or day or whatever that time frame is. Then, at the time the client is ready to download something, they download that from the registry. In the case of air-gapped environments, you could do something a little bit tricky where you mess with the expiration times a bit and say, okay, instead of "did it expire before now?", ask "did it expire before, say, a week from now, when I last updated from the internet?" or whatever the case is. But the basic idea is that this timestamp and snapshot would come from the registry and be verified using the root that you got, and then the top-level targets, which tells you the actual...

I'm trying not to interrupt, but it's one of those cases where we go past Go and we're already operating in a place where we're not comfortable yet. My understanding is that what makes the timestamp valuable is that I'm comparing it with something. So if I'm just getting the current timestamp from the registry, what am I comparing it to? Because the assumption is the registry got compromised. So if the registry got compromised, what am I comparing it to to know that you were compromised?

Yeah. Sorry, could I try explaining here a little bit?

Go ahead.
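The air-gap expiration tweak mentioned above could look like this. A sketch only: `grace` is a hypothetical "time since I last synced" window standing in for the speaker's "a week from now" idea, not a field in any real TUF implementation.

```python
from datetime import datetime, timedelta, timezone

def metadata_is_fresh(expires_at, now=None, grace=timedelta(0)):
    """Connected clients use grace=0: expired timestamp/snapshot metadata
    is rejected. An air-gapped client widens the window by roughly the
    time since its last sync with the outside world."""
    now = now or datetime.now(timezone.utc)
    return now < expires_at + grace

expires = datetime(2021, 1, 1, tzinfo=timezone.utc)
check_time = datetime(2021, 1, 3, tzinfo=timezone.utc)  # two days past expiry

print(metadata_is_fresh(expires, now=check_time))                           # False
print(metadata_is_fresh(expires, now=check_time, grace=timedelta(days=7)))  # True
```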
Yeah, so Steve asks a good question. There are several questions in there; let me try to answer all of them. The first thing is the root metadata, and you're absolutely right: you need a trusted good copy somewhere. The important thing to note about root is that it's not changing all the time, at most maybe once a year. We typically solve this problem, even with TLS, by just baking the root certificates into the operating system or your client. That can be done today; it's not the hardest thing to solve. The important point is that you need a secure way to update that root metadata once you've baked it into your client, because even ephemeral clients are going to need their binaries packaged somehow, so you can ship a good copy of the old root that way. The point is to securely rotate the root from there, because the server is going to try to tell you, "hey, let me convince you that this is the new root," and you should be very, very suspicious and say, "whoa, whoa, hold on, let me check it against the old root." Is there a line of trust from A to B, basically? So hopefully that answers that question.

Sorry, before I move on, does that answer your question?

There are a couple of things. One, I would expect roots to last longer than a year, so as not to necessitate updating them that frequently. The other is that I'd like to see an option where you don't necessarily need that A-to-B path, because in the event that your current root is compromised, you want an out-of-band mechanism to say "trust this new root," rather than an automated mechanism.

Sure. Yeah, that needs to work basically both ways.
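The "check it against the old root" step can be sketched structurally. Real TUF verifies actual cryptographic signatures and also requires the new root to satisfy its own threshold; this toy version, with made-up key IDs, models only the A-to-B rule that enough of the old root's keys must sign the new root.

```python
def accept_new_root(old_root, new_root):
    """Accept a rotated root only if at least `threshold` of the OLD
    root's keys signed the new one: the A-to-B line of trust. A server
    that merely claims 'here is the new root' fails this check."""
    signing_keys = {sig["keyid"] for sig in new_root["signatures"]}
    return len(signing_keys & set(old_root["keyids"])) >= old_root["threshold"]

old = {"keyids": {"k1", "k2", "k3"}, "threshold": 2}
rotated = {"signatures": [{"keyid": "k1"}, {"keyid": "k2"}]}  # 2 old keys signed
forged = {"signatures": [{"keyid": "attacker"}]}              # no old key signed

print(accept_new_root(old, rotated))  # True
print(accept_new_root(old, forged))   # False
```

The out-of-band path discussed next bypasses this check by shipping a fresh root with the client binary instead.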
The out-of-band mechanism has typically been to renew the old environment: if it's a container, just install the new binary. The other way is to say, hey, we control the server now; we know we have compromised root keys, but as long as attackers don't control our URL, for example example.com like you said, they can't publish bogus metadata at it. They can put it on another server, but you're not going to use the other server, because you're still using the trusted one. You can still use the compromised root keys temporarily to switch over people who don't want to use the out-of-band mechanism. As long as there are options, we should be okay.

Okay, so we're at time. Here's my thought process. These are complicated things; we've always known this. We know we have the experts in the industry attending these meetings, so it's not that we don't have the right people. It's that we're not getting the experts from one piece connected to the other and getting all the pieces connected. And then we don't want to ask what appears to be the stupid question, which I don't usually have a problem asking, but we keep building and getting lost. So here's what I'll suggest. Let's take next week, our regular weekly meeting. I'll do a demo at the beginning, because we said we wanted to do that, but I think we need to keep chipping away and building the base that we all agree on and understand. Then we can build on top of that, because I feel like that's the part where we get disconnected. Does that make sense?

I mean, kind of. I had a quick question. Does everyone mind sticking around till 10:15? For another 11 minutes?
Yeah, I'm free. I had a comment that I'd want to discuss here to figure out what the next iteration of this doc looks like. The big thing I took away from reading it is that it goes back to tying the signing information to the information that's in the registry and the repository. My assumption from last time was that we would look at how TUF could be reworked to provide key information, and not necessarily artifact information, right? I think that allows us to move away from a registry-dependent model. You could potentially sign something uploaded to Docker's Notary, but regardless of whether you're getting it from Docker's repository or registry or not, you can get it from any other place and still reach out to the Docker-hosted version of TUF, if you will, to validate your keys. So that was something that created some confusion for me, from steps four to seven.

Yeah, okay. So I guess two answers to the question.
First of all, when I was looking at it, I realized there are certain properties where you can just chain: access the keys to then access the artifacts. The problem is that there are also a few security properties that you can't achieve just through the signing keys on the individual artifacts. For those properties, things like rollback protection, timeliness, consistency, all those kinds of things, you need something on the registry as a whole. So my idea was to split the two: the registry as a whole has a root, which has the timestamp and the snapshot, which ensures those properties for the whole registry, and then the client has the option of a different chain for getting their trusted top-level targets, which then chains down to the signatures for the individual metadata files. In this second workflow, where they're using a client-defined top-level targets... where is it... starting at around step five through seven, this is a whole chain of keys that comes just from what the client decided they wanted to trust.

On the rollback protection: we discussed this earlier in the key management group, right? We didn't want to use signatures as a mechanism for preventing rollbacks, because there are a lot of valid scenarios where you want to roll back to an earlier trusted artifact. We wanted signature validation to be more a mechanism of saying "can you trust this artifact or not," and to rely more on revocation as the mechanism for saying that trust has been revoked and you shouldn't roll back to this specific version, right?
And so what I was expecting from the TUF framework was: here's your latest set of revoked keys that you should not be trusting, or here's your latest set of signatures that you shouldn't be trusting, rather than: here's the latest artifact, and don't trust anything before it.

Yeah. I just want to jump in, because that touches on something really interesting I hadn't thought of before. If an artifact ships something on Monday and it's signed with key one, and on Wednesday they ship a new thing that's still signed with key one, and then on Thursday they realize Wednesday's release was broken, they want to roll back to Monday's release. As long as Monday's release, the thing they're rolling back to, is signed with the same key and the key wasn't revoked, that should be a valid scenario. That somewhat feels in conflict with what I've heard from TUF in the past: we don't actually want to enable rollback because, yes, it's valid, but it has some known vulnerabilities, and we don't want the current version to be rolled back. So the two feel in conflict with each other.

Yeah, so the snapshot metadata in TUF traditionally does include all of the different artifacts on the registry, repo, whatever you want to call it. But another thing it includes is the current version of all of the signatures and all the metadata, all the targets metadata, that's on the registry. That means if you want to say that some metadata should no longer be trusted, because we realized this release was bad, you can remove it from the registry and then make sure it also gets removed from the snapshot metadata, so that anyone who downloads that snapshot knows that file is no longer valid.
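The Monday/Wednesday scenario under that model can be sketched as follows. Field names like `metadata` and `keyid` are illustrative: a release stays installable, including by rollback, as long as its key isn't revoked and its metadata is still listed in the snapshot, while a deliberately delisted release is rejected.

```python
def can_install(release, snapshot_listing, revoked_keys):
    """Rollback to an older release is allowed unless its signing key was
    revoked or its metadata was pulled from the snapshot (the 'bad
    release' case), rather than being blocked just for being older."""
    return (release["metadata"] in snapshot_listing
            and release["keyid"] not in revoked_keys)

snapshot = {"monday.json", "wednesday.json"}
monday = {"metadata": "monday.json", "keyid": "key1"}
wednesday = {"metadata": "wednesday.json", "keyid": "key1"}

# Thursday: Wednesday's release is found broken and removed from the snapshot.
snapshot.discard("wednesday.json")

print(can_install(monday, snapshot, revoked_keys=set()))     # True: rollback OK
print(can_install(wednesday, snapshot, revoked_keys=set()))  # False: delisted
```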
So if they get that file, they know something went wrong and they shouldn't necessarily trust it. And I think we can work out what exactly should be included in that snapshot. Maybe it shouldn't be artifacts; maybe it should just be the metadata about artifacts, to keep track of what should be trusted.

Right now, doesn't it still keep the dependency on the registry in terms of where you're getting your artifacts from, though?

A little bit. The snapshot and timestamp would be dependent on the registry and would have to be generated by each registry, but the whole targets chain could be copied between registries.

But that still requires you to copy things between registries, right? One of the key requirements we had was to make sure that you can move artifacts from one registry to another and still be able to validate signatures, and you shouldn't have to go back to the original registry to validate.

Oh, yeah, definitely. As soon as it gets moved into the new registry, that registry can automatically generate those files, and the rest would already be trusted by the client. It's just the timestamp and snapshot that the registry has to generate; the other stuff is transferable.

I think that goes back to: at this point, are you trusting the registry or are you trusting the original signer? Because now the registry is generating all this information for you. The part that caught me here is that it seems like some of the things we discussed in the signing architecture we're revisiting in this doc. My question is: do we not have consensus on the signing workflow and what we're validating there?

I don't think we have consensus on the workflows.
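The split being discussed, where the publisher-signed targets entry travels with the artifact while the timestamp/snapshot are regenerated by each registry, could be sketched like this. All names are hypothetical; this only models the claim that no call back to the source registry is needed after a copy.

```python
import hashlib

def verify_in_new_registry(artifact: bytes, portable_targets, local_snapshot):
    """The copied targets entry binds the publisher to the artifact digest
    (registry-independent); the destination registry's own snapshot
    supplies freshness for that entry."""
    digest = hashlib.sha256(artifact).hexdigest()
    content_ok = portable_targets["artifact_sha256"] == digest
    freshness_ok = portable_targets["metadata_file"] in local_snapshot["meta"]
    return content_ok and freshness_ok

artifact = b"image-layer-bytes"
targets = {  # copied from the source registry along with the artifact
    "artifact_sha256": hashlib.sha256(artifact).hexdigest(),
    "metadata_file": "alice-targets.json",
}
# The destination registry generated its own snapshot after the copy.
snapshot = {"meta": {"alice-targets.json"}}

print(verify_in_new_registry(artifact, targets, snapshot))  # True
```

Note how this also surfaces the objection raised above: the freshness check trusts whichever registry currently hosts the artifact, not the original signer.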
That's part of why we're having the conversation; that's part of what we're trying to get agreement on across the varied parties.

These are all very good questions, and I think here's a way forward. Here's what I propose, and feel free to push back and tell me what you think. We don't necessarily need to agree on everything, as long as we can agree on the basics. For example, everyone basically wants artifacts to be signed by developers, right? That's a basic minimum requirement. Fantastic. As long as we can provide options, say a TUF version of Notary v2 that optionally validates timestamp and snapshot metadata, as long as there's a way for it to do that, I don't think it should matter to any other implementation. What do you all think?

I think there's an assumption that the TUF workflow works for registries, or for content moving between registries, that I don't think we've come to a conclusion on yet. It's not that we want to preclude something, but we do want to say that the signing solution we're putting into registries is one we'll stand behind, and that it supports the scenarios we've outlined in the requirements.

Well, to bring up another point there: we're trying to agree on a basic signing workflow that says here's how developers sign their artifacts and here's how the signature gets validated, right? On top of that, if you're looking at TUF as a mechanism for key distribution that developers can use, then for that to be successful, one of the things we'd want to consider is how we make the signature scale across repositories. If we have a limitation that says a registry must do X, Y, and Z, or that this is part of the default implementation, then
I think it creates friction for that adoption. On the flip side, if those are optional, I think there is a path forward. But then I think we'd want to question how those optional fields are maintained, who the operators are, and what value that provides. The issue I have is that we want to cover the base case first and have agreement there before we move to the optional requirements. So the first question is: do we have consensus on how we want the basic signing workflow to go? I don't think we have that yet. And then there are the optional use cases that we want to address through TUF or SBOM or some other mechanism, right? That's the part where I think we may be going a little in circles. So I would say let's close the base use case first before we start going into the optional use cases.

Yeah, my concern would be that if we design the base use case in such a way that it makes the optional use cases impossible, we'll have to revisit it anyway.

Well, we certainly recognize that TUF has a lot of great work in it, and that's why we continue to have these conversations. I don't think we're at the point of connecting all the pieces yet, and we just don't want to be in the spot where the pieces don't connect and customers are confused. This is kind of like how all browsers share a common security model, and maybe I'm not using the right analogy, but we need to make sure that as content moves between registries, which it has to for this to be successful, we all share a common model. You can't constantly be doing adaptations between registry one and registry two. So that's why we're pushing on this core standard. Not that it couldn't evolve, but there has to be a standard way so that when I pull an image from Docker Hub and move it to, say, Acme Rockets, it will just work.
It doesn't matter where Acme Rockets is hosted, whether in AWS, Google, Azure, on-prem, or with somebody else like JFrog or Harbor; they should all have this core piece in them.

Yeah, and I just don't understand where this wouldn't be able to do that, and that's a great question to carry to our next meeting, because we're now 15 minutes past and I moved my other meeting so I could stay. So let's let the conversation continue.

Okay, we'll continue next week then. Thanks, everyone. Thanks, folks.