Good morning everyone, how is everyone doing today? Good. I wanted to primarily go through feedback on the pull request. I don't know if everyone on the call has had a chance to go through it; we can take a few minutes to read through it. I don't see anything else on the agenda — did you want to add anything today besides this pull request? Reading through the PR shouldn't take more than 10-15 minutes, so we'll just wait for everyone to catch up.

Hi folks, can you hear me okay? Yep. I just wanted to introduce myself: I'm Ian McMillan from Microsoft. I was speaking with Steve Lasker about this topic, and I wanted to come join in. I come from the team that manages and maintains all of the PKI and code signing services internally for Microsoft, so these key management scenarios are near and dear to my heart, if you will. I was just reading through this PR, and it seems to hit a lot of the main topics and scenarios we're concerned with as we look at the Docker Content Trust capability and signing for all of our product teams. The biggest one for me is the way keys are generated and managed. Being able to cover all of those scenarios is super important: everything from a local key store all the way to someone who has network HSMs, or even offline HSMs, generating their keys — and especially the root, which needs to be generated offline. One of our biggest challenges as we looked at adoption was the ability to generate a root key pair offline, so that a public cert comes out and any delegate keys are signed by that private root only in the offline, air-gapped world. That's something we needed to have.

Yeah — I similarly manage key management and code signing for Amazon.
And so we had some very similar concerns, and that's where we fleshed out a lot of the potential use cases for generating these keys. We haven't quite gone into implementation details, but I'm thinking something like a PKCS#11-style API would make it possible to use both a local key store and any kind of network HSM or cloud HSM. Having that flexibility means individual developers who want to use their own laptops, or just want to get started with signing, have a mechanism to do so, while at the same time enterprises get the opportunity to secure their keys — especially since there's a lot of value in signing as an enterprise.

Yeah, totally agree. For me, coming from the Authenticode signing world and publicly trusted signing certs, we have key management standards for those private keys, and they're evolving to require a higher security bar, where local software key generation is something we're starting to say is not acceptable. So with this, are we going to run into any similar policy, where certain standards bodies say: don't allow software-generated keys, or only allow key generation on, say, a FIPS 140-2 compliant hardware crypto module?

We haven't really thought it through to that extent. The way I was approaching it was that, at least from a crypto algorithm perspective, we can say which algorithms for key generation meet the bar versus which are simply insecure and should be rejected — we wouldn't want to launch with support for SHA-1, for example. Then there's the question of the key storage requirements.
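The PKCS#11-style flexibility described above — one signing interface backed by anything from a local key file to a network or cloud HSM — might be sketched like this. This is an illustrative abstraction, not the actual Notary design; the class and function names are invented, and HMAC stands in for a real signature algorithm:

```python
from abc import ABC, abstractmethod
import hashlib
import hmac
import os

class Signer(ABC):
    """One interface, regardless of where the key material lives."""
    @abstractmethod
    def sign(self, payload: bytes) -> bytes: ...

class LocalKeySigner(Signer):
    """Developer-laptop case: key material held locally."""
    def __init__(self, key: bytes):
        self._key = key

    def sign(self, payload: bytes) -> bytes:
        # HMAC-SHA256 as a stand-in for a real signature algorithm.
        return hmac.new(self._key, payload, hashlib.sha256).digest()

class HSMSigner(Signer):
    """Enterprise case: the key never leaves the token. A real
    implementation would call into PKCS#11 (C_SignInit / C_Sign)
    or a cloud KMS here instead of raising."""
    def __init__(self, key_handle: str):
        self._handle = key_handle

    def sign(self, payload: bytes) -> bytes:
        raise NotImplementedError("delegated to the hardware token")

def sign_artifact(signer: Signer, artifact: bytes) -> bytes:
    # Callers do not know or care which backend produced the signature.
    return signer.sign(artifact)

sig = sign_artifact(LocalKeySigner(os.urandom(32)), b"registry.example/app:v1")
print(len(sig))  # 32
```

The point of the abstraction is that tooling written against `Signer` works unchanged whether the key sits on a laptop or in an air-gapped HSM.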
Justin, you might be able to weigh in a little more on this: are there requirements we can make in terms of how secure key storage has to be, or would that be too onerous for developers?

That's a good question. I don't know. I think I'd be more inclined to go along a route where we have mechanisms for attestation for people who want to use them, rather than making it required. You can only do some kind of attestation to prove that someone generated a key in hardware, for example — that's information you might want, but I don't think there's any way you could ever require it, unless you require a very specific set of methods, because things have to be relatively generic.

I'm following you there, Justin. The way I think about it is: say we had a root key, and somebody was asking, "Hey Microsoft, can you put this in your content registry and bring the trust from my root into your own registry, so anything I sign over here you can use there?" That's similar to how we run our trusted root program. In those cases, if you want us to add that root, then we have to see an audit that says you are meeting certain standards, such as key generation in a hardware-backed solution. Whereas if I'm just doing this in my home, or within my own enterprise, I don't necessarily have to have those restrictions.
Yeah — because there's a wide variety of use cases in terms of how the verification process works, I think here we allow all use cases. But this doesn't necessarily preclude trust stores being managed and set up with their own specifics. Right, and that could be a process that happens outside of the specification itself: when roots, or public keys for roots, are being shared for trust, then depending on which trust store you're getting into, there may be additional requirements saying your keys need to be auditable and meet certain standards.

Yeah, that's exactly it — sounds good to me. We can even provide guidance like: if you want these properties, this is something you should consider doing. I don't think we could really mandate that for everyone, given the range of use cases. Yeah, and Notary v1 came with a set of recommendations on how we recommended you manage keys, which I think is a good thing to have, just so people understand what good practice looks like, and to talk about the things you might want to be strict about.

Gotcha. Okay, looking through the comments. Justin, did you have any comments you wanted to add? I hadn't quite finished reading the verification piece, but I had a comment about something before that — it can wait a few more minutes. There was actually one comment, just on the wording, right at the beginning: line 33, "Publisher uploads root public key to registry; registry can verify user for container uploads." What did you mean by "registry can verify user for container uploads," exactly? Let me pull this up. Line 35? 33. So this is a potential case where publishers can register with the registry and specify what their upload keys are.
So this is an additional verification that the registry or repository can do. This isn't something that's necessarily essential for the deployer to verify; it's more an additional check that registry operators could potentially perform.

I'm just very confused by the wording. Can you explain the scenario that's happening? Who's the "user" in this second sentence? Can you hear me now? Yeah. Okay — sorry, I got disconnected for a second. The user here is the person that's building the artifact: the person building and releasing artifacts. When they generate a new set of keys, they can register those keys with the repository, and this essentially enables repositories to verify that artifacts came from the publishers. This isn't strictly essential, because we have another round of verification happening at deployment time. It's more an additional safety feature that lets repository owners verify that the artifacts are coming from the registered publisher.

Okay. So this is a sort of pre-verification check — you run the same check when things are uploaded. Yeah. Okay.

One thing that was a little confusing for me, here and in a couple of other places, is the different roots of trust. I know we've talked about this before, but there's a root of trust where you're trusting things provided by the registry as a whole, and then each repository also has its own trust chain, if you want to call it that.
And I think "root" is used interchangeably to talk about the root of the registry and the roots of the individual repositories on the registry — or maybe I misunderstood, but I found that a little confusing.

Yeah, the only sense in which I'm trying to use "root" here is root from the view of the entity that's publishing — building — the containers. So this wouldn't necessarily require registries or repositories to maintain their own roots. The only place we're generating keys would be the individuals or enterprises that are publishing containers under an identity. Each root would belong to that entity, and that's what the signing keys chain off of. So distribution then becomes the publisher's job: whichever entity is the publisher has to distribute their keys to both the repository owners and the deployers.

Okay, that makes sense. So the developer, or group of developers, has a root, and that is given to clients so they can verify files, and also given to the repository so it can verify them as well. Yes. Okay, cool, I think I understand.

Oh — sorry, another side question. How does the repository verify that this root key came from the correct person? Is there some out-of-band verification process the repository performs, or does it just come with the account? I think it has to be an out-of-band process. This goes more into how people register for different repositories. If you're registering an account with Docker Hub, then whatever mechanism you go through to validate that you are who you say you are to Docker Hub — as part of that process, you would be registering your keys. And different deployment scenarios may have different verification requirements.
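The chain just described — a publisher root, kept offline, endorsing delegate signing keys, with the root distributed out of band to repositories and deployers — can be modeled in miniature. This toy sketch only shows the trust-chain logic; a real system would use asymmetric signatures (e.g. Ed25519) rather than named records, and every identifier here is invented:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Sig:
    key: str      # identifier of the key that produced this signature
    payload: str  # what was signed

def verify_artifact(trust_store, artifact_sig, delegation_sig):
    """Trusted iff the artifact's signing key is endorsed (delegated)
    by a root that the verifier obtained out of band."""
    if delegation_sig.payload != artifact_sig.key:
        return False  # delegation does not cover this signing key
    return delegation_sig.key in trust_store

# Publisher side: the offline root endorses a delegate signing key,
# and the delegate key signs the artifact.
delegation = Sig(key="acme-root", payload="acme-signing-key")
artifact = Sig(key="acme-signing-key", payload="registry.example/app:v1")

# Deployer (or repository) side: roots configured out of band.
print(verify_artifact({"acme-root"}, artifact, delegation))   # True
print(verify_artifact({"other-root"}, artifact, delegation))  # False
```

The same check works for the repository's pre-verification and the deployer's pull-time verification; only the trust store contents differ.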
I didn't necessarily want to specify those in the requirements here. Yeah, that makes sense — it depends a bit on the use case.

The one other question I saw was about preventing a replay attack. I think that's a really good question, and something we can address in the design of the revocation list. There are a few options there: timestamping CRLs, or potentially some other mechanisms. But the next step I'd want to take, once we have agreement on the overall spec, is to go through a detailed threat model and figure out all the different places where we'd need similar mechanisms to prevent different kinds of attacks. That sounds great, because that way we can look at each attack across all the different pieces and see where it fits in. Yeah.

Ian, Justin, do either of you have any additional comments? Not from me on this initial quick read-through. Justin? So, for the deployment workflow, can you just run through it in terms of what the user is validating, exactly? Yeah, sure. I've split out two deployment workflows. One is provisioning, which is the initial configuration of the trust store — essentially saying, here are the roots I trust, based on these developers that I know. Once you have that trust store configured, whenever you pull down an artifact, you check the signature on that artifact and verify that it chains up to the roots you trust. We talk about only one key here, which is the signing key; the signing key needs to chain up to the roots you have configured. The one thing we don't talk about here is timestamping.
That's something I think we need a separate conversation on: whether the timestamping key should chain up to the same root, or whether it should be a public root. There are some pros and cons we need to address there.

I think you're making some assumptions about what you're actually validating here. How do I know which root key I expect this image to validate against? The signature would tell you which root it chains from — that would be expected in the signature itself, and it could match any one of the roots in your list. We'd need to work out what the root lookup from your trust store looks like, but essentially you're matching against all the keys in your local trust store to see if any match the key presented in the signature.

So the trust model you're supporting is that I trust, effectively, a list of people. Yes — a list of publishers. And I started working a little into how you would get that list. At the most basic level it's out of band: developers publish their roots. We can also look into aggregating these roots from some repository of keys, but we'd need to walk through what the business use case or justification for that would look like. In the basic publisher-to-deployer model, the assumption we have to make is that the publisher is able to get their keys distributed to the people that need to verify artifacts coming from them.

So this isn't going to roll out supporting TUF? The way I'd look at it is comparing this against TUF and seeing what the pros and cons are.
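The trust-store lookup described above — a signature names the root it chains from, and the verifier matches that against the publisher roots configured during provisioning — might reduce to something like this. The envelope format and field names are invented for illustration:

```python
# Trust store: which publisher roots this deployer accepts,
# configured out of band during provisioning.
trust_store = [
    {"publisher": "example-corp", "root_key_id": "root-key-1"},
    {"publisher": "oss-project",  "root_key_id": "root-key-2"},
]

def find_trusted_root(claimed_root_id, trust_store):
    """Match the root named in a signature envelope against the local
    trust store; None means the signature chains to an unknown root."""
    for entry in trust_store:
        if entry["root_key_id"] == claimed_root_id:
            return entry
    return None

# A signature envelope states which root its signing key chains from.
signature = {"signing_key_id": "sign-key-9", "chains_to_root": "root-key-2"}

entry = find_trusted_root(signature["chains_to_root"], trust_store)
print(entry["publisher"] if entry else "untrusted")  # oss-project
print(find_trusted_root("root-key-7", trust_store))  # None
```

After the lookup succeeds, the verifier would still have to check that the signing key actually chains to that root and that the artifact signature verifies; the lookup only decides which root to check against.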
I think one of the issues we had with TUF was the automatic configuration, which we wanted to get away from. One of the key aspects of signing should be being able to specify the parties I trust, rather than trusting parties because they come from a repository — and in that scenario, the TUF model starts breaking down a little.

Yeah — go ahead, Marina. Something I've been thinking a lot about is splitting it in half, especially for open source projects and those kinds of entities. You can build trust from the repository, but then you have a second option of building trust using a third-party system with the same repository. You establish trust in these two different ways, so that if you don't want to trust everything on a registry, or if you only want to trust developers you specifically know, you can use the separate process — but you can also get things from the registry if that's easier and the security concerns aren't as major.

Yeah, and what I'd want to look into more there is whether we can turn that into a registry, if you will, of roots that you want to aggregate together. That's one way of looking at it, but it's something we have flexibility on in terms of how we address it. I'd agree there needs to be a mechanism for narrowing down to exactly which publishers you want to trust, but there's also value in getting an automated list of developers or participants — say, in an open source community — that you want to trust by default. So in that model, would we necessarily need TUF as it sits, or would repositories also having a repository of keys be sufficient?
I think it would be a bit of both, because there are other protections that TUF also provides, but you could build this on top of those. That's where I was going with this: you build this second mechanism as a different way of establishing trust in specific keys, and then you use those keys in TUF to verify timeliness, consistency, and all those other properties.

In concept that makes sense. We'll want to flesh it out a little to make sure all the workflows make sense and we're thinking about them accurately. But the way I envision it, at its core we're saying here's how the keys get distributed — and how the key gets distributed is something we aren't necessarily speccing here. TUF could just as easily be that mechanism if you want to build that collective trust; I think that could totally work. But yeah, I'll keep thinking about it, maybe try to flesh something out and share it with the group.

Yeah, I think it would be helpful to write this down, because the use-case document is effectively implying how I want to validate things, and I don't think that's the right place for it. We need to write down, separately, what security guarantees users are actually trying to enforce — or what kinds of things we can support there. We need to be very specific about this, because one of the problems is that people need to understand what guarantees they're getting and see if those are the ones they actually wanted. Yeah, I can flesh out a little more about what's actually getting verified here.
I think root distribution addresses a slightly different problem: how do I know that I trust this key to relate to this person? TUF is one way of solving that, and there are potentially other ways we can look at. I don't think those necessarily need to be specced out in the key management doc, but you're right that key management does need to call out: given that I've identified that I trust this public key, how does that chain down to the artifact I'm getting? So we can separate out those two, have a TUF implementation that looks at how to configure the trust stores, and take several more passes at that.

Because there are definitely potential issues with this. I might only trust certain publishers for certain things — certain use cases. I might only trust Microsoft for Excel and not trust Microsoft for Visual Studio, say. So just chaining to a Microsoft key is actually quite a weak statement.

That's a good call-out. We'll probably need to expand the key configuration to also include targets, or some other field that says: here's the key I would trust for this specific application. That's something we can definitely add. Whether it comes down to blindly trusting different publishers, or trusting publishers only for particular applications — we've seen both of those use cases, so that's something we can add in. I haven't quite followed where we are with the SBOM integration: is this better addressed through the SBOM integrations, or should it really be handled here at the key management layer? Well, yeah, I mean, that's a—
Well, there's a slightly recursive problem: if I don't know whether I trust the signature, I have to trust someone to sign the SBOM in order to eventually validate it. But yes — I would probably trust Microsoft to sign an SBOM that says the thing comes from Microsoft, at which point I know what it is, and then I can decide whether I trust it for the application I'm running. Something along those lines, potentially.

Yeah, because the other issue is that if you're not trusting Microsoft to tell you what it is in the first place, it doesn't matter if you scope it down — they could still give you Visual Studio instead of Excel and just sign it as Excel. Yeah, there's something more we'll have to think about there. And there's also the case where, say, I trust them for both Visual Studio and Excel but not for Docker, and I don't want them to go around that — maybe that's not a good example. But if you just have this flat set of keys, that's not enough; you definitely might want some sort of policy on top of them.

Yeah, the policy makes sense. One could argue that signatures — or key configurations — might not be the right place to enforce that, because when you say you're trusting the key, you are to a certain extent trusting the publisher to take certain actions. And if the publisher themselves were the method of attack, then trusting the key — or trusting them to tell you which application it is — doesn't necessarily protect you. Yeah. Yes, that's right.
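One way to picture the per-target scoping and policy layer discussed above: a trust entry that optionally restricts a publisher's key to named applications, so trusting Microsoft for Excel does not imply trusting it for Visual Studio. As noted in the conversation, this only constrains an honest publisher's labels — a compromised publisher could still mislabel an artifact — so it is a policy aid, not a complete defense. All field names here are hypothetical:

```python
# Each entry pins a publisher's root key; "targets" optionally narrows
# which artifacts that key is trusted for (None = trusted for anything).
trust_policy = [
    {"publisher": "microsoft",    "root_key_id": "ms-root", "targets": ["excel"]},
    {"publisher": "example-corp", "root_key_id": "ec-root", "targets": None},
]

def is_trusted(root_key_id, artifact_name, policy):
    for entry in policy:
        if entry["root_key_id"] != root_key_id:
            continue
        # Note: this only filters artifacts the publisher labels honestly;
        # a malicious signer could still sign one product as another.
        if entry["targets"] is None or artifact_name in entry["targets"]:
            return True
    return False

print(is_trusted("ms-root", "excel", trust_policy))          # True
print(is_trusted("ms-root", "visual-studio", trust_policy))  # False
print(is_trusted("ec-root", "anything", trust_policy))       # True
```

Whether the artifact name should come from the signature, an SBOM, or something pinned in the artifact itself is exactly the open question in the discussion.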
I do like the general model, and the idea of some level of stratification across a large organization. In the Microsoft example, I'd love the ability to issue keys to, say, Office, and then even subordinate keys under that, specific to each application. For folks to pin trust to the Microsoft root is fine, but think about blast radius: in case of key compromise, if the Office key needs to be revoked, it doesn't revoke the generic Microsoft key — it just revokes Office, and I can issue a new one off that Microsoft root without harming my other business groups, say the developer division and anything that comes out of Visual Studio.

Yeah, that's where our best-practice guidance comes in: ideally you'd have a root and separate out all your different products into different intermediates or subordinates, and people then have the option of trusting either the intermediate or the root itself. Right — and that's really more best-practice guidance. I can tell Justin has a concern here, and I'm trying to draw it out a bit.

No, I know — that's a concern we've talked about before. To me, an SBOM integration feels like the right way to limit packages, because then you've said, to a certain extent, that I trust whoever sent this to me to tell me what it is, and then I have this next layer of filtering to say I'm getting the right sort of packages that I want to install and use. Yeah. But what about the use case where every vendor in the world publishes Kubernetes, and when I'm validating my Kubernetes installation I actually only want to trust one of them? I guess I just have a set of three keys I use for validating Kubernetes, and that's what I use in that circumstance.
So I know I'm trying to run Kubernetes. I think we're going to need some kind of client-defined map — the client, or the user, or whoever, can define: I want Kubernetes from this publisher with this key, I want Microsoft Office from this publisher with this key, and so on. That way they have the ability — not the requirement, but the ability — to define those things more fine-grainedly, depending on how fine-grainedly they decide whom they trust for these various products.

100% agree. The ability to support curation — these are the only things I want, I don't want this version or that version from this publisher — is super powerful. And frankly, as you do your deployments, at least for our services, I only run things that I've signed, even if they've already been signed by the producer. I'm a consumer of that content at that point, and I'm just saying: yes, this is allowed to run in my environment.

Yeah, we have quite a lot of people trying to use Notary v1 effectively as a signed workflow management tool, and that's how they thought about it: if I sign it, that means it's okay for this to happen to it. Yeah, which is a different kind of model, though, if you're actually doing it as workflows — it's really what in-toto's design is about, where you have a set of steps that have to happen and you want signatures attesting that they did.

I think that's the layer I saw this validation happening in. And so the question also comes in about what else is being presented in the base signature itself, or in accompanying information, that lets us know this application is Microsoft Office. We want to potentially tie it down to something in the artifact itself.
Are there things we can limit it to that aren't necessarily changing with versions or updates? I think that's one we can take as an action item; I'll come back next week with the different ways we have for addressing it. It is a use case we want to solve for — I think where our disagreement is right now is where in the workflow we solve for it — so I'll come up with some suggestions. Yeah, that would be helpful; we need to explain how people can address these use cases. Okay, I'll address that. Any other comments? Okay, I'm going to add those to the notes for the next revision.

The one other thing I wanted to spend about five minutes on: in other signing areas, whenever we rely on a timestamp we typically rely on a public timestamping authority, rather than using the same root to generate both the timestamp intermediate and the signing intermediate. I wanted to get some feedback on whether that was considered for Notary v1, or whether there are any objections to that approach. I mean, that came from TUF. Marina, have you got any thoughts on that?

The short answer is that it kind of depends. It is important to have a consistent view of time. I think the thought was that because machines on the internet generally have synced clocks anyway, you can utilize that source of time and it'll be all right. But in other deployments of TUF this has been a concern — making sure that the time source is secure and accurate. You can even accidentally denial-of-service yourself: if the computer that's signing something thinks the time is, say, tomorrow or yesterday, then no one can download the content because it doesn't appear valid or up to date.
So it is definitely something we should think about, but I don't know if we need a whole separate solution for it, or if we just need to make sure that the times are accurate. The question was: should timestamp keys be chained off the root key, or is it okay to chain them off some other public timestamp key that you trust for timing information? For me, it just adds extra complexity and an extra thing that could be compromised, but there's no other reason not to, as long as you trust that other thing.

Yeah, for timestamping we do rely on public timestamp services that are pretty resilient. For Authenticode signatures, for example, you can use DigiCert, Comodo, and a lot of other timestamping authorities out there, and the service is generally free. Because these free timestamping services exist — run by certificate authorities that are audited and provide all the relevant information — I don't necessarily see a higher risk there; I think it adds more mitigations. But before I wrote up a doc on it, I wanted to check whether this was considered and get feedback on prior considerations. I couldn't check with folks who were around earlier in the TUF process, but I think these services were newer when TUF was being written, so they didn't have the same trust. I can look into that more.

I would second what you're saying here. Every single CA who issues publicly trusted code signing certificates is required to offer a timestamping service for free and maintain it to a certain level that is also audited.
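One concrete reason a timestamp countersignature from a public TSA matters: if a key is later revoked effective at some time T, signatures carrying a trusted timestamp from before T can remain valid, instead of everything ever signed being revoked. The core check is tiny (illustrative Python; a real verifier would also validate the TSA's own signature and certificate chain, per RFC 3161):

```python
from datetime import datetime

def signature_still_valid(signed_at, revoked_at):
    """A signature survives key revocation if a trusted timestamp proves
    it was made strictly before the revocation time. revoked_at of None
    means the key was never revoked."""
    if revoked_at is None:
        return True
    return signed_at < revoked_at

revocation_time = datetime(2021, 3, 1)
print(signature_still_valid(datetime(2021, 1, 10), revocation_time))  # True
print(signature_still_valid(datetime(2021, 4, 1), revocation_time))   # False
print(signature_still_valid(datetime(2021, 4, 1), None))              # True
```

Keeping the timestamping root separate from the signing root is what makes this check survive a signing-root compromise: the timestamp evidence stays trustworthy even when the signing hierarchy has to be revoked.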
So I think they're totally reputable, and they add a good layer of countersignature. In that regard, it comes down to the question of whether whatever is doing the verifying has a reliable time sync it can source from when validating a timestamp. I wouldn't want to restrict it so that the timestamp has to come off the same root as the signature itself — the key that was used to sign.

Yeah, part of the concern we had there was that typically, if you have a public timestamp with a different root, then if you revoke your key because of a compromise, you can also set a time of revocation, so you don't necessarily need to revoke everything you've ever signed. But if your root is compromised and that root was also used for the timestamping, then you pretty much have no option but to revoke everything. So from our perspective, the public timestamping service gives you that second layer of protection. Yep, totally agree with you.

Okay, I think we have a list of next steps. Marina, I'm going to put your name on some of the action items for looking into TUF, and I'll take the other ones for the clarifications to the spec. Does that work for you? Yeah, that sounds good. All right, I think we're at time. Thank you, everyone. Thank you. Thanks, all.