Hi everyone. How's everyone doing? Pretty good. How about you? Yeah, I'm really glad it's cool today, kind of raining, because my AC is busted. Oh no, this is a bad time of year for that. Really bad time of year, but I was really overworking that thing, you know. I should have listened to God and Ed when they said don't run your AC all the time. That's not what you're supposed to project to the grad students here. You're supposed to have this opulent life of luxury, so that Marina can say: gosh, when I get my PhD, I'm going to be living like Trishank. Well, just make sure the life of luxury includes two-aircon redundancy, and it will be fine. Just going to give it a few more minutes and see if anyone else joins. I'm just setting up the notes today, folks. So I added two items to the agenda. I looked through a lot of the comments on the pull request. Unfortunately, I didn't have enough time this past week to add in the deployment admin and deployment user roles, or the diagrams we talked about last week; I'll take a crack at adding those this weekend. But I think there's enough conversation in the comments that there are two key items worth discussing. The first is having multiple root keys, and the second is looking at key revocation more generally. I think we can start with multiple root keys. Trishank, you and Marina both went back and forth on that, and Miloslav also had some comments, but he's not here today, so we can discuss some of those. From my perspective, I share some of Miloslav's concerns: even if we require multiple keys for the root, typically what we see is that those keys are all stored in a similar fashion, so a compromise of multiple keys is just about as likely as a compromise of a single key.
So what we were moving towards was something more like two-factor authentication, where there is a separate route for sharing what your trusted keys are, which is what we're envisioning Notary v2 would look a little more like. But for this one we can look at the pros and cons of each approach and decide pretty quickly which approach we want to move forward with. Oh yeah, I totally agree. So I think there are two separate issues here that we should take some pains to distinguish. One is that I think it was wrong to lead with multiple keys; not that multiple keys aren't a good idea, but I don't think that's what we're trying to emphasize here. What's more important is that even if you have one root key, that's fine, if that's what you want to do; you should have a way to rotate the keys. That's the really important point, the more important point I would say. Marina, thoughts? Yeah, before we go much further, I actually had a question about what we're talking about when we talk about root keys, because going through the document it seems like there are a lot of different root keys in the system. If we can clarify who needs to make these new keys and what they're doing with them, I think it might be easier to figure out what kind of management is reasonable. So if we could go through that really fast first, I think that would be helpful. I can clarify some of that. Essentially, the way I was envisioning the root key working is the same way we have the root key currently working in Notary.
In the first implementation, it would essentially be the root of what's used to establish someone's identity. So each developer that's trying to identify themselves as a developer, each entity, could set up their own root key. And then all the timestamp keys, and if we decide to use snapshot keys, the delegate keys, would all chain from that single root key. So the root key would now map to what we're currently using in Notary, but it would essentially be the root of trust that chains into a single identity: you're saying, I trust this identity. That makes sense, but I think that was part of my confusion, actually. I was envisioning a system where there's a single root key that then delegates to these developers and other users of the system. Because there would be fewer of them, the heavier management seems reasonable in that case; but if there's going to be one per developer, then yeah, I totally agree that multiple offline keys will probably be harder. So yeah, we want to make a distinction there. I misspoke, and I don't want to cause that kind of confusion. I use "developers" pretty loosely: I'm considering use cases where you have individual developers pushing containers who want to be identified, versus large enterprises that have many developers. I would envision an enterprise having either a single root key per organization, or potentially multiple root keys for smaller teams that they want individual keys for. So taking a large company, they may have one root for their Windows business and one root for their Linux containers.
So I think what we want to tie the root to is an identity that someone else wants to verify, whether that's an individual developer or a legal entity. I think we'd let the signing admins decide how many roots they want to use and what those roots tie into. Okay, my concern there would be how you figure out which of these root keys to trust, because if there's no centralized group that clients start with, from which they can find these other keys, I just don't know how they would figure out which of these developer groups to use. There's a lot of background noise. Right. And I think this is where the key sharing service, if you will, which is what I think Notary can morph into, comes in: a service that tells you the root keys. It could be stood up by individual organizations, or it could be a shared service, if people have trusted providers, where you can go in and check what the root key for an identity is. This is similar to companies putting their public keys on their websites. We have a setup where we look for an out-of-band key sharing service that isn't necessarily tied into how containers are signed and propagated; it would be a second service managing that key sharing mechanism. Yeah, I'm kind of concerned with... sorry, go ahead. Thanks. I was just going to say I'm curious about the design of this external key service, because we know from experience that, while you don't want to tightly couple everything, when it comes to designing secure systems, if you don't carefully integrate them you might run into unexpected security issues. But we can discuss that separately.
I think that's one of the things we definitely want to dive into once we get into the detailed design. At a high level, the deployment administrator, if you will, has two options right now. They can manually write in the public keys that they trust, whether that's "hey, I have an internal private repo set up, I know what keys I'm using there," or "I trust these large companies, here are the public keys they published on their websites, these are the keys I want to add into my repository." That's one route. The other is an automated service that can get that information for you, and that's where the key sharing service comes in: you're able to say, these are the key sharing services I trust for these entities, automatically get the keys from there. I think we'd want to dive into what the auth mechanism there looks like and what potential threat models we need to address. There are always concerns around what happens if a key sharing service gets compromised and what the remediation steps are; those are things we need to address in the design and threat model. But at a high level, what this addresses is that in the event of a key compromise, you're not necessarily at risk: you'd also need the key sharing service to be compromised, and you'd also need your upload credentials for the registry to be compromised. So there are three different things you need to compromise at that point, in addition to just the key compromise. So I'm wondering, do we need a centrally managed key sharing service?
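The two options just described, manually pinned keys plus key-sharing-service URLs, might look roughly like this in a deployment admin's trust config. This is only an illustrative Python sketch; all names, the config shape, and `resolve_trusted_roots` are invented here, not an actual Notary v2 schema.

```python
# Illustrative only: a deployment admin's trust config can mix
# statically pinned root keys with key-sharing-service URLs.
# All identifiers and shapes here are hypothetical.

TRUST_CONFIG = {
    "wabbit-networks": {
        # Option 1: manually pinned public key (e.g. copied from a website)
        "pinned_keys": ["a1b2c3d4"],
        "key_service_urls": [],
    },
    "example-corp": {
        # Option 2: fetch current roots from a trusted key-sharing service
        "pinned_keys": [],
        "key_service_urls": ["https://keys.example-corp.test/roots"],
    },
}

def resolve_trusted_roots(identity, fetch_url):
    """Return the set of trusted root keys for an identity.

    `fetch_url` is injected so air-gapped deployments can supply a
    local mirror instead of a live HTTP fetch.
    """
    entry = TRUST_CONFIG.get(identity)
    if entry is None:
        raise KeyError(f"no trust entry for {identity}")
    roots = set(entry["pinned_keys"])
    for url in entry["key_service_urls"]:
        roots.update(fetch_url(url))
    return roots
```

The point of the injected fetcher is that the two routes are not mutually exclusive: the same resolution step serves pinned keys, remote services, or both.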
And what I mean by that is: what if, let's say you are hosting your own Notary, you just have an interface which you need to implement, which you can use to expose your root keys? That means you only need to put on your webpage where others can retrieve your root keys, meaning you can automate it a bit further. It's still distributed, which to some extent maybe adds a bit more security. Yeah, I think this would be a very lightweight service; it definitely wouldn't be as heavyweight as the current Notary implementation. I think here the goal would be: can we focus on how we establish the auth to connect to that service, and what the remediation steps are if that service were compromised? Those are the two things that go into consideration for it. Yeah, so what I was thinking is, let's say you have your own Notary server and you want to configure which roots are trusted. You can just configure one or many URLs where roots can be retrieved from, with some appropriate authentication mechanism in between, so that at least people can check whether roots have been replaced or changed, things like that. So you basically just register what the URLs are to fetch the keys from. Yeah, that's in line with what I was thinking. Steve, did you have a comment there? Yeah, so I'm coming at it from a usability perspective and asking the security experts here what the implications are, because we certainly want to make sure that people can find keys and acquire them in a reasonable fashion.
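The lightweight "interface to expose your root keys" mentioned above might be as small as a single well-known endpoint. A hypothetical sketch; the path, document shape, and handler are all invented for illustration.

```python
# Sketch of the lightweight interface discussed above: a self-hosted
# endpoint that does nothing but publish your current root keys.
# The path and document shape are invented, not a real convention.
import json
from http.server import BaseHTTPRequestHandler

WELL_KNOWN_PATH = "/.well-known/root-keys"  # hypothetical path

def roots_document(identity, roots):
    """Build the JSON body a client would fetch from the configured URL."""
    return json.dumps({"identity": identity, "roots": sorted(roots)})

class RootKeyHandler(BaseHTTPRequestHandler):
    identity = "example-corp"       # configured per deployment (fake data)
    roots = ["fingerprint-1"]       # the currently published root keys

    def do_GET(self):
        if self.path != WELL_KNOWN_PATH:
            self.send_error(404)
            return
        body = roots_document(self.identity, self.roots).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)
```

Because the payload is plain HTTP, it can sit behind a reverse proxy or be mirrored, which is the flexibility point raised later in the discussion.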
Since they're going to a particular registry to get the content, while we certainly want key management solutions that the user or the registry operator can choose as makes sense for that environment, is it considered unsafe to have a discovery, or even an acquisition, model that happens through the registry APIs, even though it's delegated to another underlying service? I guess I'm trying to figure out: where the keys are stored and secured, and of course uploaded, is a completely different thing than retrieving them. Can retrieval be integrated with the registry, so we don't have the discovery problem? Because there are registries all over the place, and they're just going to multiply even more. We can't have just as many confusing places to go find keys, and having a single key discovery service isn't realistic either. I think part of what I was striving for was decoupling the registry from that process, in the sense that registries can tell you where to go find keys, but they can also potentially redirect you to other locations, right? So what guarantee do you have that the URL you're going to is the URL you wanted to go to? In that regard, if you're going to say "I trust this publisher," you should know who that publisher is and why you're trusting them. That was one of the assumptions I'm starting from. But if I can get redirected, why couldn't I just be served from the registry? And again, this is for retrieval of keys; uploading keys is a different thing, I'm not even beginning to touch that one. Just retrieval of keys. Even for retrieval of keys, let's take this scenario: you have company A that has their keys set up, and company B that's taking in containers from company A and running them. Now, they're going through a registry, right?
And if the registry has a mechanism to point to a URL that has a key that's not used by company A but claims to be used by company A, then company B could end up using a key that doesn't belong to company A to verify container images, right? The way company B defines "we trust company A," they need something from company A that says: this is what we're going to use to trust your images. Unless it comes from company A, if you're getting it from any other source, there's always the question of how you know it comes from them. One route to do this is using public CAs, which I think is something this model can extend into: if you have trusted entities, like a public CA that you trust, and they're telling you this belongs to company A, that's a mechanism we can explore. But the bare functionality we need to support here is that I can go to company A and figure out what I need from them in order to trust their containers. So this is where having those requirements listed helps scope some of this, or helps figure out how we answer the question, I guess is a better way to say it. So what we continually see is customers wanting to lock down their registries to certain data endpoints, so they can't be sent to other ones. We see this with the Windows foreign layers; we saw this when registries had two different data URLs. But in air-gapped environments, where I can't go to that vendor, the acquisition of the key could be done out of band, because the assumption is the environment is locked down. I guess I'm trying to figure out: can the key from vendor A be put into the private registry, so that the company can distribute those keys to ephemeral clients?
So when they get booted up, is there something about that flow that violates the security model? Because is it really that I can create another key that looks the same, or that I'm creating new content that looks like it's signed by Microsoft but is actually not signed by Microsoft, or something like that? Can you really create a fake key, or are you creating content that looks like the same content but points to a different key? Well, keys don't have any identity information associated with them, right? It's because I've gotten a key from, let's say, microsoft.com that I know this key belongs to Microsoft. But by looking at a key directly, a root key specifically, there's really nothing there that tells me this key belongs to Microsoft. And I think that's where using a public CA becomes slightly different. In the public CA scenario, the root key is essentially coming from a public CA, so you trust, let's say, DigiCert or Let's Encrypt; that's the public key you're trusting, and then you're relying on Let's Encrypt or DigiCert to verify that any intermediate, or any other cert they're issuing, is actually being requested by Microsoft. But when we get down to establishing which roots we trust, the only mechanism you have to validate that a root belongs to someone is trusting the source you're getting it from. So I think, from a usability perspective, if you look at the registry, it would be very convenient if you could just enter a list of URLs, say aws.com or microsoft.com, where you are pointing to a bunch of discovery services. It might even be that you have those discovery services, where those keys are available, for given departments within a company. So when I look at the way Philips is currently organized...
There is a chance that certain departments or certain businesses want to manage their own. So from a usability perspective, it would be convenient if I could register, in the registry itself, what my trusted certificate discovery URLs are. Keeping this separate from the registry also adds the advantage that people can deploy it in different networks. Let's say I'm publishing my keys in my own secure network environment, within the company or across companies; I can keep the service that retrieves them and provides them to the registry close to that environment. And then it's more or less like Nias was mentioning: you just redirect the traffic. It's basically HTTP traffic, so you can just reverse proxy those kinds of things. So from a flexibility point of view in a deployment, I think that also makes sense. I think, rather than having the registry expose that, this could potentially also be part of the signature, where it tells you what the discovery URL is. That way we're not relying on the registry maintaining a list, and trusting the registry to maintain a proper list. One of the things I'd be concerned about is that if a registry was maintaining the list, it could potentially redirect from Microsoft to something that merely looks like a Microsoft URL, and that's something that could easily get missed. But if it's in the signature, then you're actually proving that you have control of the domain that you're pointing to and that your key information is going to be there. And if this isn't a discovery URL you already trust, you can actually validate who the URL belongs to and decide if you want to trust it or not.
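Underlying all of this is the point made a little earlier: a bare root key carries no identity information, so the binding between a key and "Microsoft" lives entirely in your trust store, and is only as good as the source you fetched it from. A toy Python sketch of that idea, with all data fabricated:

```python
# Illustrative: a bare root key has no identity inside it. The binding
# "this fingerprint belongs to Microsoft" exists only in the local
# trust store, built from sources the admin chose to believe
# (vendor website, key-sharing service, public CA, etc.).
import hashlib

def fingerprint(public_key_bytes):
    return hashlib.sha256(public_key_bytes).hexdigest()

# Fake data standing in for keys obtained out of band.
TRUSTED = {"microsoft": fingerprint(b"ms-root-key-bytes")}

def key_matches_identity(identity, public_key_bytes):
    """All we can check locally: does this key match the recorded binding?"""
    return TRUSTED.get(identity) == fingerprint(public_key_bytes)
```

Nothing in `fingerprint` can tell a genuine key from an attacker's key; only the provenance of the `TRUSTED` entry does, which is exactly why the source of the binding matters so much.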
I think maybe this is something we add, from a discoverability perspective, into signatures: rather than trusting the registry, you trust the image to tell you where to go look. I would actually be really concerned about having the discovery mechanism in the signature itself, because at that point it's validating itself; it's telling you where to go to verify it. So I feel like we want some sort of cryptographic mechanism around discovery, so that you have to verify a separate signature before you can go to the discovery mechanism. If there's some kind of root on the registry that points to these discovery mechanisms, then at least you do one step of verification to make sure those discovery URLs are correct, as far as the registry knows. Yeah, I think the discovery URL, if it's present in signatures, shouldn't be used to automatically trust the image; you're absolutely right that that leads to automatically trusting anything. We would want some manual intervention there, where the client says: this root isn't trusted, here's the URL, go validate it before you put it into your trust config. That's the manual step we expect; this could potentially go into something like an error message, rather than having the client automatically validate based on whatever URL the signature is sharing with you. So I have to go soon, unfortunately, but I want to raise a meta question and encourage us to think about it. It's not obvious to me, since I'm not fully aware of the context: is there one registry, or are there multiple registries, partly because of private images, right?
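The "don't auto-trust, surface the URL in an error" behavior just described could be sketched like this. Field names and the error style are invented for illustration only:

```python
# Sketch of the agreed behavior: if a signature carries a discovery
# URL for an unknown root, the client must NOT fetch and trust it
# automatically; it surfaces the URL in an error for a human to vet.
# All field names here are hypothetical.

def verify(signature, trusted_roots):
    root = signature["root_fingerprint"]
    if root in trusted_roots:
        return "trusted"
    url = signature.get("discovery_url")
    if url:
        # Manual step: an admin validates this URL out of band before
        # adding anything to the trust config.
        raise PermissionError(
            f"root {root} is not trusted; publisher advertises keys at "
            f"{url}; verify that URL out of band before trusting it")
    raise PermissionError(f"root {root} is not trusted")
```

The signature's URL is treated purely as a hint to a human, never as an input to automatic trust decisions, which avoids the self-validation problem raised above.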
So is that the reason for decentralizing key discovery like this? Because my question is: if we trust the registry for artifacts, why not also trust it for distributing keys? To me it's not obvious, so some clarification would help. Part of it is that today the model has been that you publish to a single registry, you sign for that single registry, and that registry maintains the signature information for you. One of the goals here was to make it easy to move containers from registry to registry, and potentially allow anyone to set up a registry hosting any images they want. That leads to a concern: if whoever sets up a registry is also responsible for maintaining some of the trust information, and anyone can set up a registry, can we at that point trust registry owners to maintain the trust mechanism as well? Relying on the registry in that scenario, to me, breaks the trust model. So what I've been looking at, at a very high level, is: can we decouple the sharing of containers from trusting who wrote the container? I see, that's actually very helpful context; I wasn't aware of it, so that's good to know, thanks. I'm also going to have to drop off in a moment, so I wanted to raise a related issue. If you have a system where you go to these URLs, and you rely on the CA system to give you trust that what you're getting from that URL is correct, then there's a natural question of why you didn't just put in the public keys you're trying to trust, instead of the URLs.
If you try to compare and contrast those designs: in one system you take on all the problems and overhead of the CA system in order to have this level of indirection, and it isn't necessarily clear what that buys you, given the difficulties with revocation not working the way you would like or expect, and all the trust placed in the CA system, versus just directly putting the key information in. So I don't really understand yet why this way of doing indirection is better. I understand that if you assume we don't want to look deep into those mechanisms and just use them as tools, I can see why it's attractive; but I don't know, if you peel back the mechanism and think about it a little, whether you're getting anything better by putting URLs there than by just putting in the keys. So what we're looking at is that you can put both: you can put keys and you can put URLs, or one or the other. Especially for air-gapped environments, that's where you'd want an administrator to put in keys rather than URLs. URLs are really more about saying: when keys are rotated, updated, or revoked, I want that information automatically; a URL gives you some degree of automation in pulling that information in. Without the URL, you're essentially relying on getting an update from company A out of band, learning that their keys have been compromised and revoked, and then doing some kind of manual intervention.
So the URL adds some degree of automation, where you can actually get notified that something has changed. But we definitely can't use the URL in all places, especially in air-gapped regions; that's where using the key itself comes in. I think you can still use this service, with the URL, in an air-gapped environment. Let's say I run the service myself inside my air gap: I can store the keys there manually, but the systems connecting to the service still get their updates automatically. So from a management perspective, I can manage those keys in one place instead of in different locations across the system. You can still use it as a proxy as well. Yeah, I think having both gives deployment administrators choices in how they want to implement this; I don't think they're mutually exclusive, right? Yeah, correct, correct. I was just trying to point out that it could still have value in an air-gapped environment. I think that makes sense. So I also have another question. In this multi-registry world, I think the root keys have to be known by the registries, because when I derive an image from registry A and push the new layer to my registry, I might want to check some validity on the other layers. On the other hand, the clients, the production systems, also need access to these root keys. Is that a correct understanding on my side? Because that would mean you need to manage those roots on the registries as well as on the clients consuming those images. Right, so I've added that as a requirement for registries: every time you generate a new key, one of the steps you take is uploading it to the registries where you're distributing containers, which ties it in with your identity, so that you can continue publishing images there.
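Circling back to the air-gap point above, the "run the key service yourself as a local mirror" pattern might look like this toy, in-memory sketch (the class and method names are invented):

```python
# Sketch of the air-gapped pattern just described: run the key-sharing
# service inside the air gap, load vendor roots into it manually, and
# let internal clients keep polling it as if it were the vendor's URL.
# Hypothetical, minimal in-memory stand-in for that service.

class MirroredKeyService:
    def __init__(self):
        self._roots = {}  # identity -> set of key fingerprints

    def load_manually(self, identity, roots):
        """Out-of-band import (e.g. keys carried in on approved media)."""
        self._roots[identity] = set(roots)

    def get_roots(self, identity):
        """What internal clients call instead of the external vendor URL."""
        return set(self._roots.get(identity, ()))
```

The admin updates one place by hand; every internal client automatically sees the change on its next fetch, which is the management win described above.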
For uploads you would need to do that, and that is something the registry would validate for subsequent uploads. So what about consuming images? Let's say I'm consuming a Docker image, and I've configured my client to trust given root certificates, which allows me to pull images from different registries. Would there be a difference in configuring such a client, as opposed to configuring a registry, when it comes to the root keys? It's decoupled a little bit: which registries you connect to, and what content you pull, shouldn't really be tied to the key or the certificate. In other words, I pull the Debian image that is originally built by, let's say, the Debian project; it gets pushed up to Docker Hub, then moves into my registry, then into another registry in an air-gapped environment, and it keeps moving. In that case, maybe Debian is not the best example; say RabbitMQ, where I consume it as-is, as opposed to Debian, where I build on top of it. So I want to be able to get that key from someplace, but it shouldn't be tied to the registry. I should be able to discover it through the registry, I guess, was the idea, but they shouldn't be coupled. The thing here is: are you also validating images as they're being uploaded to the registries, and what is the mechanism there? Let's say you're getting an image from Wabbit Networks; you've gotten the root key from Wabbit, so you know what the root key is and you can verify any of their images. I think the question here is more that if Wabbit uploads to a registry, say Docker Hub, then Docker Hub can also potentially validate that this came from Wabbit, and manage a set of root keys itself.
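A toy sketch of that registry-side check: the registry runs the same verification at push time that the client runs at pull time. The signing here is a symmetric HMAC stand-in purely for illustration; the real system would use asymmetric signatures.

```python
# Sketch of "defense in depth": run the same signature check at push
# time (registry side) and at pull time (client side), so a tampered
# artifact is rejected twice. Everything here is a toy; HMAC stands in
# for real asymmetric signing.
import hashlib
import hmac

SECRET = b"publisher-signing-key"  # stand-in, not a real key scheme

def sign(blob):
    return hmac.new(SECRET, blob, hashlib.sha256).hexdigest()

def is_valid(blob, signature):
    return hmac.compare_digest(sign(blob), signature)

def registry_accept_push(blob, signature):
    # Registry refuses to store content that fails verification.
    return is_valid(blob, signature)

def client_accept_pull(blob, signature):
    # Client never relies on the registry's check alone; it re-verifies.
    return is_valid(blob, signature)
```

Because the client re-verifies regardless, the registry-side check can remain optional, as noted next: it only adds the guarantee that the registry isn't storing compromised images.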
And having the same verification mechanism the client has just adds a second layer of validation; it's defense in depth. It ensures that registries themselves are validating that anything uploaded, or re-uploaded, came from a legitimate source, and that there wasn't any tampering or other issue with it. I have steps for that drafted in the requirements, but I think it's something we track as an optional requirement, because at the end of the day you still have validation on the client side, so even if something was compromised on the registry side, you're still protected. It just additionally ensures that registries themselves aren't storing compromised images. Yes, so the reason for my question is: let's say I have a whole bunch of developers who need to know what the trusted roots are. I want to make sure they don't have to configure a lot of URLs or trusted keys just to create a signature, because in general I only need my own key to create a signature, and when consuming or running the image I can rely on the signatures themselves. So let's say I have ten developers, and today we trust three root keys. Tomorrow one has to be added, and I have to tell all ten developers: you need to add this key to your trusted set as well. From a usability perspective, I'm wondering if there's a way we can keep that on the registry side, or whether it needs to be on the client side, where someone is pushing that new image. I think there are two ways to address this in the current proposal; one would be through that URL indirection.
If you have a URL that points to your set of trusted keys, you can always just add more keys to it, and your developers keep going to that single URL. The other mechanism would be that, if we have a key sharing service, you can configure what keys you put in that service and use it as a proxy as well. That's where I envision Notary v2, the Notary server, being revised to: something more like a repository of trusted root keys. That's something we can look into as an additional component, but at least for now the URL would give you that information. Yeah, I had a similar note. I think key revocation is also a time when you need to immediately update the keys that you trust, and I wonder how that could be handled; and similarly, a compromise of whatever URLs or servers are hosting these public keys. How are those protected? Yeah, this is one where we'll need to do a detailed threat model next, in terms of the potential ways these could be compromised, and we'll need to break down what we're addressing in the design. The thing here is that if the URL itself is compromised, and someone else puts their own keys at your URL, that is something you'd have to protect against. I think we want to think through, for each of these breaches, what actions people would be taking.
A URL breach is potentially something that would be recognized as a major hack, and there might be public information about it, and we need to work through that model.

Okay, that makes sense. Maybe the next step is to walk through the scenarios for key revocation and the various compromises and see how those work.

Yeah. So what I'll do is this: I don't think we've quite settled on a decision for how the key sharing service will work, but I think we've talked through a little more detail here. I'll take a stab at updating the doc over the weekend, and I'll try to put together a short doc highlighting some of the considerations here so we can review it in the Monday meeting.

Okay, sounds good. Thank you.

So if we have a short paragraph that summarizes the concerns different people have brought up in this group, I think that's something we can get to quickly and discuss in the Monday meeting, right? Yeah, absolutely, basically summarize what the thoughts are. For instance, we're actually recording the KubeCon talk on Notary v2 status today. It was supposed to be a working session; the joke is that it's now a recording. We're basically just going to say: here's where we're at, and here are the open questions. I think it helps people understand what people feel fairly confident moving forward on, where they disagree, and where they're confident they haven't figured it out yet. And honestly I'm glad some of it isn't figured out, because I don't like the answer I'm seeing, you know. So it helps categorize both, and people know where to engage. And I think the key discovery piece is part of that.
And maybe, within the whole key lifecycle management that you're driving as part of this, there's a subsection on what exactly key discovery and acquisition look like, not just key management, when you take into consideration multiple registries, as content moves between registries and into air-gapped environments. What is the right model?

And I don't know if you've been following this, but I think you've had that smaller-group conversation about what types of keys we should support. What I heard from that one, and I'm not an expert, I'm just echoing what I think I'm hearing, is that maybe GPG keys are easy and free, and the community can use them, but maybe they don't give you that high level of trust because there's no domain or CA associated with them. And then if larger software vendors want to use a CA-based model, they can, and that gives them more confidence, but the model is that they have to pay for it, because there's another service that can validate it.

Yeah, one issue with the GPG trust model is that you'd have to have keys go back and forth, and you do end up trusting the registry at that point, so that one has some other repercussions we'll want to think through. GPG is potentially a solution, but you do need to trust the registry itself to a certain extent. And the X.509, or CA, model is another one we can look at, where I think the concern comes in around having to go get publicly trusted certificates, since there is a cost to that. The ad hoc way we're thinking about it right now gives you an option to use X.509 with public CAs and also with private roots; you could do one or the other.
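The "public CAs or private roots" option mentioned above might be modeled as a trust policy along these lines. This is a policy sketch only, not real X.509 chain validation (which would also check signatures, validity periods, and revocation); the class and field names are hypothetical.

```python
class RootPolicy:
    """Accept chains anchored in either public CA roots, private roots, or both."""

    def __init__(self, public_ca_roots=(), private_roots=()):
        # A vendor might configure only public CA roots, a community
        # project only a private organizational root, or a mix of both.
        self.roots = set(public_ca_roots) | set(private_roots)

    def issuer_trusted(self, chain):
        """chain: list of (subject, issuer) pairs, leaf first.

        Trusted if the last link's issuer is one of the configured roots.
        """
        return bool(chain) and chain[-1][1] in self.roots
```

The design point this illustrates is that the verifier doesn't care whether the anchoring root was purchased from a public CA or generated privately; the same policy check supports both, which is the flexibility described above.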
So I like that, because you can support both formats and you don't necessarily need to trust the registry. But I'll elaborate a little more on the GPG side, and we can have another discussion on that as well. The GPG versus X.509 versus ad hoc comparison will probably take me about a week to write up, so maybe we can discuss it the Monday after the shorter discussion we can have next Monday about the decision process we're going through right now.

What I might suggest, because with each of these PRs the comments go on endlessly and it churns and churns and you can never get a merge, is to do a separate one on key options: a separate markdown file and a separate PR for the pros and cons of the keys we might support. Then we could have that discussion on that document, and maybe there's a separate document for the key acquisition model. Assuming at some point we can actually get agreement on these three things, the scenarios you have here, the types of keys we would use as a separate PR, and the third one being acquisition, then once all three are done and everybody's comfortable with them, we just merge them into the spec. Because remember, this whole thing is a sketch-and-iterate model; it's not intended to be the final spec, so it helps people get to a place of confidence.

Yeah, I agree. I can separate that out into a separate pull request, so I'll definitely keep that in mind.

Cool. All right, so if you want to put something on Monday's schedule, just note what you're going to take; we're back to 10am our time, Pacific time. Just put something on the agenda that says: I'll do a quick recap of these topics, or whatever you want to talk about. Sounds good, I'll go ahead and do that. All right.
I think that's all we had to discuss today. For the key revocation scenario, we'll probably want to wait until next week to dive in, until we've settled on the key sharing components, and potentially that gets addressed in the threat model as well. So I don't think we want to jump into that today, and I think having Trishank and Justin in that conversation would also make sense.

Okay. Marco, thank you for taking notes; it's definitely very helpful. I totally was going to spend another 15, 20 minutes writing them up, so thank you for doing that. Yeah, I'm not sure they're fully complete, but at least it's a start.

All right, sounds good. Thanks everyone. Have a good weekend, and hope we'll see you on Monday. Thank you. Thanks. Bye.