All right everybody, happy Monday. Welcome to another OpenShift Commons briefing. As we like to do on Mondays, we have upstream projects talk about where they're at right now and new initiatives. So if you have one, reach out, let me know and we'll give you the podium. And today we're giving the podium to a new CNCF Sandbox project called Keylime that I know this much about, so very little. So I'm really interested in hearing from Luke Hinds and Axel Simon, both Red Hatters who've been working on this project, about what it does and where it's going, and I'm going to let them introduce themselves and we'll have some Q&A at the end. So please, guys, take it away. Sure, I guess I'll start quickly. So my name's Axel Simon. I'm part of the Red Hat Office of the CTO. I work with Luke in the security team of the Emerging Technologies department. We're basically focusing on all the new technologies that are going to shape what's happening in the next couple of years, and on a horizon a bit beyond that. We extend our thinking to five and maybe even ten years as we try to take the long view, but most of the stuff we look at is on the horizon of a couple of years. I've been working on a few open source projects that are security focused, Keylime being one of them. Prior to that, I was doing quite a lot of work on blockchains, which is not entirely irrelevant here, because both have to do with distributed systems, where you have multiple machines and you try to maintain integrity across all of them. So as you can guess, I have an interest in distributed systems and how you keep all of those things working together nicely. Yeah, so that's about it for me. Luke, do you want to take the floor? Sure, yes, so I'm Luke Hinds from the UK. I also work in the CTO department alongside Axel. I've worked in many open source projects, typically focused on security, and I'm the current project team lead for Keylime. Right, so I'll introduce you all to Keylime, and you may be wondering, well, what is Keylime, beyond, you know, a cool logo and a nice name. It all comes from a research paper from the beginning of 2016 called Bootstrapping and Maintaining Trust in the Cloud. So that's an issue you might have run into: it's hard to know what state the machine you boot into the cloud is really in if you don't have anything to base it on. That is, you may be told that this machine is running, say, CentOS 7, but it's hard to know exactly what it's running. So you need a way to bootstrap confidence in the state of that machine, and that's what the research paper is fundamentally about. It was written by Nabil and Charles at MIT, and later that same year, in 2016, they came up with a prototype, which would become Keylime. Over time that kept moving forward, and eventually in 2018 it all moved to GitHub and a community started forming around it. I think Luke started participating around that time, maybe a bit earlier, I'm not sure. But anyway, the project really gets started and goes from a prototype to an open source community project. And very recently, about a month ago, thanks to Luke's efforts, Keylime was accepted as a CNCF Sandbox project. So we're now part of the Cloud Native Computing Foundation, which is fitting, because Keylime very much is dedicated to the idea of multiple nodes and how you build trust in them. So what exactly does Keylime do?
Well, Keylime tries to provide three main things. The first of them is remote attestation. That's the capacity to check, without being at the actual computer, that it is in a state you believe in, a state you can verify. So you want to attest from afar, remotely, obviously, that the machine is in the state you think it is. And to do that, we use two things: we can measure the boot to check what it boots into, and we can measure the runtime using a Linux subsystem called IMA, and we'll get back to that a bit later. But that's the first part: checking the state of a node remotely. The second one is encrypted payloads. Once you can check that the node is in a trustworthy state, you can send it payloads that are encrypted and that it can then decrypt. That can be used for several things, but basically you can bootstrap your node and give it extra information, including secrets, and that's very useful in this day and age. There are always secrets to manage, and this enables you to do that. And lastly, we have a revocation framework, which enables you to deal with the failure of a node. So if a node is no longer in a state you like, you can fail that node, and we've got a framework around that to take several actions. So those three work together, but they're all based on one fundamental root of trust, which is a TPM. The TPM, for those who might not know, is the Trusted Platform Module. It's a chip that's found on the vast majority of modern computers: essentially all servers have one, lots of laptops have them too, and you can even get one for your Raspberry Pi if you want to. Essentially it's a chip that is capable of doing some simple, fundamental cryptographic operations, and one of them is measuring different aspects of the system as it boots, and we use that extensively in Keylime to check the state of the system remotely. So let's look a bit more at what the Keylime architecture looks like. We've got two sides here. One of them is the node on the left, the machine you are actually trying to check, on which we run an agent, the Keylime agent. You can see that the Keylime agent connects to the TPM, or the virtual TPM, we'll get more into that later, but basically for now it's just TPMs, and it can run in a container, in a virtual machine, or directly on the machine. All of those use cases are possible. And it communicates over a network to the Keylime verifier. The Keylime verifier is the one that actually checks the integrity of the node on which the Keylime agent runs. The Keylime agent just sends quotes, and the Keylime verifier checks those quotes. And then we have a third component, a registrar, which stores the state of the remote machines, the state we expect them to be in, their cryptographic identity, that sort of stuff. Some of you might have picked up on the fact that, in the middle, our network doesn't have to be trusted. Often these days, every time we do something security related, we try to always use a TLS-encrypted connection. In this case, it's not strictly necessary. It may be desirable, but it's not necessary, because the Keylime agent doesn't do anything, really; well, it does a lot of things, obviously, but fundamentally what it does is make available a quote from the TPM.
And the TPM's quote is cryptographically signed, and nothing else on the system is able to forge that signature. So if the quote got modified along the way on the untrusted network, that would be immediately visible. So basically we have protection in the capacity of the TPM to sign cryptographically valid quotes, and so we don't necessarily need a trusted network. Having one can be desirable, again, maybe to protect against some other failures, but it's not necessary, which is an interesting little extra aspect of Keylime. So fundamentally, those are the three components: the agent on the node, the verifier on the machine from which you are trying to verify things, and the registrar to store all the information about your node, or your nodes, usually, because you'll have several. How does remote attestation work? Well, I started describing it previously, but it's really quite basic. You request attestation before you send your workload; you ask the verifier, can you please check this node? The verifier talks to the agent, which requests a quote from the TPM and then sends this quote back to the verifier. Now you have two possibilities. Either the quote is validated and everything's okay: your node has not been compromised, has not changed, it's in a state that you believe to be good and that you're okay with. Then, automatically, the verifier might send the agent an encrypted payload, and it can run automatically. Otherwise, if it fails its validation, you get a revocation event, and the node on which the agent is running is cordoned off and removed from the group. Let's go a bit more into the idea of running encrypted payloads. Once the machine passes its attestation with the verifier, we can send it the encrypted payload, which will give it access to some secrets. We have a little example on the right here where we have some secrets, like a password, and some local actions we want to take. That, for instance, will only be executed if the machine passes its attestation. In that case, it'll receive the payload, it'll have what it needs to decrypt it, and then it'll start running the actions inside the payload. The protocol for exchanging the secrets is a three-party key derivation protocol, I think, but don't quote me on that; Luke might be able to expand a bit, and don't push me on that one, I'm not quite clear on it exactly enough yet, but it's pretty cool. It basically means you can ship a node with a secret on it that it can't read, because it doesn't yet have the keys, and then later on reveal the keys to it so that it can read the secret. So you can embed secrets in, for instance, a master image that you push onto all your nodes, and yet be sure that, barring someone being able to break modern cryptography, the node won't have access to the secrets until you decide that it is okay for it to have access. We mentioned earlier that we're also able to do runtime monitoring, so not just checking that the system boots into a good state, but that the system remains in a good state over time. You can basically think of this as a tripwire: if anything changes on the system, it will trip the tripwire and we will have an event telling us about it. For that, we use the Integrity Measurement Architecture, which is a Linux security subsystem. Measurements are taken on syscalls and extended into the TPM, but this is done asynchronously, so it's not blocking and it doesn't slow down the system.
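To make the attestation loop just described a bit more concrete, here's a minimal conceptual sketch in Python. It's purely illustrative: the data shapes and the HMAC-based "signature" stand in for the real TPM quote and its asymmetric signature, and none of this is Keylime's actual API.

# Illustrative only: models the verifier-side check of a TPM quote.
# A real TPM signs quotes with an attestation key; here an HMAC with a
# pre-shared key stands in for that signature to keep the sketch runnable.
import hashlib
import hmac
import os

EXPECTED_PCR_DIGEST = hashlib.sha256(b"known-good boot measurements").hexdigest()
ATTESTATION_KEY = b"stand-in-for-the-TPM-attestation-key"

def make_quote(nonce: bytes, pcr_digest: str) -> dict:
    """What the agent would return: measurements bound to a fresh nonce."""
    payload = nonce + pcr_digest.encode()
    return {
        "nonce": nonce,
        "pcr_digest": pcr_digest,
        "signature": hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest(),
    }

def verify_quote(quote: dict, nonce: bytes) -> bool:
    """Verifier side: freshness, signature, then expected state."""
    if quote["nonce"] != nonce:                      # replayed quote?
        return False
    payload = quote["nonce"] + quote["pcr_digest"].encode()
    expected_sig = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, quote["signature"]):
        return False                                 # forged or tampered quote
    return quote["pcr_digest"] == EXPECTED_PCR_DIGEST  # machine in the expected state?

nonce = os.urandom(20)                               # fresh nonce per attestation
quote = make_quote(nonce, EXPECTED_PCR_DIGEST)
print("deliver payload" if verify_quote(quote, nonce) else "raise revocation event")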
And then the state is compared remotely with what is expected, and if there's a problem, we can fail the node. So for instance, if somebody executes a script that wasn't planned, wasn't supposed to be executed on the node, that will trip the IMA monitoring, Keylime will be able to raise an event, and you can make decisions based on that. Again, here we have this idea of using TPM quotes, which cannot be fabricated, as protection against, say, a system being taken over completely and starting to send fake quotes. In that case it shouldn't be able to do that, because it won't be able to fake the quotes from the TPM compared to what we're expecting remotely, since we have our own copy remotely. We also use a nonce here, for those who are interested, to make sure that quotes can't be replayed and that they're fresh. So what happens in the case of revocation? Well, let's say for instance that we see an event on node C, there's a problem, and we want to fail node C. What we might do, for instance, is revoke node C's certificate with our certificate authority and then send that revocation event to all the other nodes. And this is basically what we can do with Keylime: once node C is compromised, we cannot trust it anymore to take any action properly. We have to assume that it's dead and gone and that we're not going to be able to get anything out of it. So all our actions are basically about cordoning off node C and modifying how all the other nodes behave. You really have to think about it that way, and that's really the main idea. The revocation events can be what we just mentioned, for instance revoking node C's certificate, but you could also do things like removing it from SSH authorized keys, cordoning and draining the node using Kubernetes, shutting down VPN access by having the other nodes remove it from their VPN peers, or adding or removing iptables firewall rules. All those types of actions are possible, and we're working on creating a collection of those rules that will be easily usable by everybody. So let's move into current work on Keylime. The Keylime agent is currently in Python. It's being ported to Rust; work is underway on that and it's moving forward. For those who are interested in why we're using Rust: it's a low-level, performant systems language, and it has been designed with security in mind, which fits Keylime pretty well. We also have another issue, which is that Python in the current setup ends up pulling in quite a lot of dependencies using pip, and that's not always an option, especially for systems that are immutable like CoreOS, where that's not quite possible. So we're interested in moving to something else for that, and once it's done, our default agent will be the Rust agent. Other work we're engaging in is on IMA. The Integrity Measurement Architecture can also be extended to namespaces, which are used very much by containers. Once we have that in place, we'll be able to do measurement inside containers, which will also be an interesting, positive development. Lastly, work for the future: we have some on vTPMs, virtual TPMs. From a purely security standpoint, a virtual TPM currently is not very interesting, because it's not based on any hardware, so it can provide fake quotes, and security-wise that's pretty useless.
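As a rough sketch of the kinds of revocation actions just listed, here's what a small Python handler could look like. It's hypothetical and illustrative, not Keylime's actual revocation-action interface: the function name, the event shape, and the node naming are invented, and it shells out to kubectl and edits authorized_keys only as examples of the actions mentioned above.

# Hypothetical revocation handler: cordon/drain the failed node in Kubernetes
# and drop its key from this machine's SSH authorized_keys.
import subprocess
from pathlib import Path

AUTHORIZED_KEYS = Path.home() / ".ssh" / "authorized_keys"

def handle_revocation(event: dict) -> None:
    """React to a revocation event; 'event' is an illustrative dict,
    e.g. {"node": "node-c", "ssh_key_comment": "root@node-c"}."""
    node = event["node"]

    # 1. Cordon and drain the compromised node so no new pods land on it
    #    and existing pods are rescheduled elsewhere.
    subprocess.run(["kubectl", "cordon", node], check=True)
    subprocess.run(
        ["kubectl", "drain", node, "--ignore-daemonsets", "--delete-emptydir-data"],
        check=True,
    )

    # 2. Remove any authorized_keys entries that belong to the failed node.
    if AUTHORIZED_KEYS.exists():
        comment = event.get("ssh_key_comment", node)
        kept = [
            line for line in AUTHORIZED_KEYS.read_text().splitlines()
            if comment not in line
        ]
        AUTHORIZED_KEYS.write_text("\n".join(kept) + "\n")

if __name__ == "__main__":
    handle_revocation({"node": "node-c", "ssh_key_comment": "root@node-c"})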
However, for testing reasons it's already interesting to have it, but in the future what we'd really like is to have what are called nested quotes, where a virtual TPM that is inside a container is actually backed by a hardware TPM. So the quotes the virtual TPM gives are actually based on the quotes from the physical TPM, and using that chain we would be able to have virtual TPMs inside containers that are still useful security-wise. That's one of the next things we're working on. But beyond all this technical stuff, we also have quite a nice community working on the project. For a start, it's multi-vendor, which is always really nice. We have people from Red Hat, as you know, but also from MIT, some people at IBM, people at Netflix and ZTE, and some independent contributors who are also working on the project. And we don't just have developers; we also have other people working on UX, working on outreach and everything. So that's really quite nice. The community is friendly. We have a channel on the CNCF Slack; everybody's very welcome to join, ask questions, take Keylime for a test spin and see how it works. We also have a lot of automated testing, we do code quality assessments, and we try to be pretty supportive of new contributors. There's a guide and there's a lot of help available if you need it. So with that said, please don't hesitate to try Keylime, come and join us if you want to have a chat, and we are now open for any questions you might have. So thanks for that, and thank you, Luke, also for joining this. On the work that you guys are doing to port from Python to Rust: where are you testing currently? So if you're running with Kubernetes, are you not able to run tests now on RHEL CoreOS, or is it just a lot of dependencies and that's why you're moving off Python? Yeah, sure. So CoreOS has a read-only nature. Okay. That's not to say you can't use rpm-ostree and so forth, but they also have a stripped-down version of Python, I can't remember the actual name, I think it's system Python, and currently the Python agent has a big list of dependencies that are pulled in, okay? With Rust, it's statically linked, so when you compile it, all of your dependencies are in a single blob, okay? So that just means it's less disruptive to an OSTree-like operating system to ship a single binary. That's one of the reasons that makes a Rust implementation more conducive to a container operating system like Fedora CoreOS or Red Hat CoreOS. And it was actually the Fedora CoreOS community that was encouraging us to do this work as well. So one aspect is that we don't have a big pool of dependencies to pull in. Secondly, because Rust is a low-level language, the Rust client can be less resource hungry, and performance is arguably better, I would say. And then security: not to say that Python is not secure, but because of Rust's strict adherence to scope and ownership, a lot of our possible security debt is paid at compilation time rather than being discovered later. So those are the three motivators. But I guess, and I have a vested interest, my Twitter handle is Python DJ, so I'm just showing my bias here. However, I am one of the co-chairs for the OKD working group, which is running on Fedora CoreOS. So I've been balancing my Python loyalties for a long time. I understand. So with Keylime, you've got a trinity of systems.
You have the agent, which runs on the machine that you want to measure; that's remote to you, so you're performing a remote attestation. And we've got two services, the verifier and the registrar, which are the integral parts. Those tend to be a little bit more on-premise, and those are developed in Python and will remain in Python for the foreseeable future, with no plans to move away from Python there. We also plan to keep the Python agent going, because it allows us to prototype a lot quicker. So if you looked at our entire code base, I would say it's about 20% that's going to Rust, with the majority of it staying in Python. So I guess, and pardon my naivety sometimes in these things, you mentioned earlier that MIT and IBM and Netflix and all of these folks are participating in this. Where are you at in terms of being production ready? I know this is Sandbox, so I know that's a leading question, but what is the status of it? So as it relates to OpenShift, we're working on a developer preview, and that will be coming at the end of this quarter. This is deeper integration with Fedora CoreOS, and that will naturally percolate to OpenShift as well. What we're doing initially is looking at securing the infrastructure for when you deploy your workers and so forth, your OpenShift cluster. It will ensure that it is deployed to infrastructure that has the expected state and that nobody has tampered with that environment. So we're looking at a developer preview at the end of this quarter, then we'll hopefully move to a tech preview and GA, and a possible date, don't hold me to this, is sort of fall '21. So initially we're looking to establish trust attestation for the infrastructure, but then we will look at ways that we can bring that up into Kubernetes, where the scheduler can start to operate with Keylime, and other components as well, the cluster manager and so forth. So, and again I'm wearing my OKD working group hat here: when we get 4.6 out the door, 4.7 out the door, will it be testable with OKD, which is running on Fedora CoreOS, in the not too distant future? So where are your POCs going right now? Are they running on vanilla Kubernetes, and on what underlying immutable OS? So at the moment, it's just Fedora CoreOS. What's happening is some folks from Fedora CoreOS are working on a change to introduce this. So Keylime requires measurements of a file. A measurement is a SHA-256 digest of a file, okay? And then what happens is those digests are cryptographically signed and they're sent from the agent to the verifier, and the verifier will then make a comparison between what is the state on the machine and what is the expected state. If there's a change, you know somebody's tampered with it. So for example, say we're measuring /sbin/iptables, okay? That has a hash of XYZ. The verifier, which is not on the target machine, it's on-premise, expects the file state to be ABC. So obviously there's a discrepancy, suggesting that somebody's tampered with that binary; perhaps they've trojanized it, okay? There's a tiny illustrative sketch of that digest comparison just below. So for us to get these hashes, what we're looking at doing, and this is a proof of concept at the moment, is that OSTree and Brew, the build system, will automatically pass out these hashes from OSTree and construct a list for every OSTree release, okay? That list will then be signed. And then, when you run Keylime, you can tell it which version of OSTree you want to measure.
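Picking up the /sbin/iptables example: here's a minimal sketch of checking a file's SHA-256 digest against a signed allowlist. It's purely illustrative, not Keylime's code; the allowlist format is invented, and the GPG step simply shells out to the gpg CLI to check a detached signature, which is one way such a signed list could be verified.

# Illustrative sketch: verify a signed allowlist, then compare a file's
# SHA-256 digest against the expected value from that list.
import hashlib
import json
import subprocess
from pathlib import Path

def verify_allowlist_signature(allowlist: Path, signature: Path) -> None:
    """Check the detached GPG signature over the allowlist; raises if invalid."""
    subprocess.run(["gpg", "--verify", str(signature), str(allowlist)], check=True)

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_file(path: Path, allowlist: Path) -> bool:
    """True if the file's current digest matches the allowlist entry."""
    expected = json.loads(allowlist.read_text())   # e.g. {"/usr/sbin/iptables": "abc123..."}
    return expected.get(str(path)) == sha256_of(path)

if __name__ == "__main__":
    allowlist = Path("ostree-release-hashes.json")          # hypothetical file names
    verify_allowlist_signature(allowlist, Path("ostree-release-hashes.json.sig"))
    target = Path("/usr/sbin/iptables")
    print("matches expected state" if check_file(target, allowlist)
          else "discrepancy: possible tampering")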
Keylime will make a call in, retrieve the list for that particular release of OSTree, perform a GPG verification to make sure it's signed and so forth, and then send it to the verifier, which will then measure the target node where our workload is running to make sure that it has that exact version of OSTree running. Then, at that juncture, once we have that proof of concept in place, we'll look at how this can be leveraged by, for example, OpenShift or Kubernetes. For example, somebody might have an application which they're going to run in a container on somebody else's machine, effectively, so they can also query the trust state of that machine, because we measure the initial boot of the system, but we also measure it continuously at runtime as well. That way, if you are somebody who wants to deploy an application onto OpenShift, but you need a strong security context, a strong requirement for privacy and security resilience and so forth, then you'll be able to call in to make sure the trust state of that node is sound before you schedule and run your pod there, so to say. We've also done some demos. We had a demo recently where we had two worker nodes, a controller, and a pod running on one of the worker nodes. We hacked this worker node, that hack was instantly picked up by Keylime, which made a call into the controller to cordon and drain the pod from the compromised worker node onto a known-good worker node, and the good bit about the demo was that it was a seamless experience for the application owner: they see their pod migrate across from a compromised node to a known-good node. So that's the core thing you can do with Keylime: you can measure a machine, but then as soon as a machine fails, you can tell other machines, controllers and so forth, to effectively shut down and ring-fence the compromised machine and migrate your workloads to a machine which is still showing that it's tamper free. So, just so I'm getting my head around it, all right, it's a bit low level for what Diane normally works on, and I'm really happy you're working on it because it sounds like we needed it, especially if you can demo hacking a worker node, which is probably not the best thing for me to know about, but how do you see this surfacing in, say, the dashboard of OpenShift? Or is this something that we add into our notifications? How will the visualization of this integrate into our current experience of OpenShift? Sure, so you're very much seeing a work in progress here. Those are discussions that are happening at the moment. Keylime is in the CTO office; it's what we consider emerging tech at the moment. So right now we're talking to lots of folks about where Keylime will be situated within the different technologies. My guess is that Keylime will sit quite early in the process of the cluster being deployed, okay, because it needs to measure that the infrastructure is sound. And then when it comes to Keylime continuously monitoring, and how that's rendered onto a dashboard, that's something that we still need to work out. I can't see it being a challenge; it's just getting consensus around how we do that. Yeah, I'm working with the UX team and figuring out how to expose that.
And also to do it in a way that anyone who's running generic Kubernetes can also do it. Yeah, very much, yeah. And there are lots of considerations, because with Keylime, saying that you trust a system is based on what we call a hardware root of trust. So you have this Trusted Platform Module, the TPM, okay? And the TPM is almost like a very simple version of OpenSSL: it can create keys and it can sign things, and it signs these measurements within the TPM. And then when you get that list back, you can use that hardware root of trust to know that it's the actual machine that you've spoken to, not somebody pretending to be that machine and feeding you false information. So what you have to be a little bit careful of is, when you render that onto a dashboard, if you're putting that state into something like a MySQL database, which is then being pulled into a ReactJS JavaScript framework and put on a browser, then what you're seeing is something saying something's good, but you've got a lot of intermediate components between the hardware trust and the trust that's being rendered to the user, if you see what I mean. So when we do that, we have to make sure that somebody doesn't compromise the journey from that hardware-based trust to somebody seeing something in a browser, where you have CSS and JavaScript. So because of the level at which this verification is going on, it has a lot of implications for Edge and IoT, I would suspect, as well. Very much, yeah, very much. A big thing behind Keylime is a big push because of Edge and IoT. When we showed this solution at the Linux Security Summit and the Edge and IoT summit, there was a lot of interest around the project, because it's incredibly well suited for machines that are physically in locations that can easily be tampered with. So for example, if somebody's got an IoT device which is on the roof of a building somewhere, it's hard to protect that machine compared to when it's in a big data center with a security guard on the door checking badges and so forth. For example, there was somebody who used Keylime in the Raspberry Pi community because they had a camera on their garage door which read their number plate using machine learning, and if it picked up their number plate, it sent a logic control signal to the automated door mechanism to raise the garage door. And they used Keylime to protect that Raspberry Pi, so that if somebody messed with it to try and break into their garage, it would cut off the connection so that it couldn't tell the door to open. So it does really lend itself well to Edge and IoT. Cool. That's one of the... Yeah. So the other piece you also mentioned is that the Mass Open Cloud was participating in this and doing a POC. So are they using it? Not a POC, no, they're using Keylime. What they use Keylime for is: if somebody owns a machine and they give it back and it's going to be handed to another person, they had this use case, particular to them, where they didn't want to entirely reinstall the whole operating system and the hypervisor and everything, okay? So instead they use Keylime to make sure that the person has not compromised the machine with something nasty before they release it to someone else. So this has tons of applications outside of Kubernetes and cloud native as well.
So this is... Very much, yeah. It sits on the edge of, is this a cloud native project, or is this just a damn fine security thing that we should all use? Yeah, very much, yeah. So when we originally spoke to the Linux Foundation, that was the question: we were thinking we could put this in LF Edge, it could be in CNCF, or it could be its own project as such. We landed on CNCF just because we were doing a lot of our work around Kubernetes initially, but this really is conducive to Edge and IoT as well. So coincidentally, and I guess going off OpenShift a little bit as a topic, Fedora IoT are actively working on the project with us. I would suspect Peter Robinson would have picked up on this. Yeah, I was on a call with Peter earlier. Yeah, Peter's quite familiar with Keylime, yeah. So if you wanted to get, I'm going to make you share your screen one more time, Axel, and go to the Keylime landing page, because it has a different extension than a lot of the other ones, I think, because Keylime... Yes, .dev. .dev. And maybe go to, when you're... Yeah, I'll... Let me see if I can do that. Yeah, see if you can share that, because that would be good just for people to see where you're at, because that took me a minute or two. I think I got somebody else's key lime recipe page the first time I Googled you all. Not that I cook, but it looked good. This looks better. Yeah, so I have to be honest with you, I've never actually baked a key lime pie, but let me know if you can see this page. I can indeed. Okay, so we are at keylime.dev. Yeah, sorry. That's good to know. And if people want to, how do they find out when your community meetings are happening? Where's that schedule? So there's a lot of information on the GitHub. I'm not sure that's... That should be somewhere in the guide. Oh, if you go to the meetings repo, you'll see it, Axel. Yeah, absolutely. That's a good point, mate. Thank you. Yeah. Perfect. Perfect. All right, cool. And I would tell you also, when we get the updates to Fedora CoreOS for this to all work, I would love you guys to come to the OKD working group meeting, which is on Tuesdays, and come and talk about it, because there have been some conversations between Fedora IoT, Fedora CoreOS, and the OKD working group about using OKD on the edge. It's not there yet, and we don't really have the resources beyond getting our releases out right now, but there are actually quite a few people there who are interested in this space who probably could help test it for you, especially with OKD running on Fedora CoreOS. I think that might give you a first test bed for OpenShift that might help, and I'd be thrilled to see that collaboration happen between the two or three working groups, Keylime, Fedora CoreOS, and OKD. That might be a great breeding ground for some more contributors to this project. So hopefully that. So what else should I be asking that I'm not asking? You know, what's the thing? You've stumped me, because now I have to go out and play with this and watch you guys grow this community, but what is it that I should have asked that I haven't asked? One of the questions we often get is, so how many nodes can you have? Can you get 10? Can you get 100? I mean, that's usually one of the questions that comes up quite fast. So currently we know that it can scale up to thousands.
So with one verifier, you can check a thousand, several thousand machines, and I think, Luke, you think it can go quite a bit further, from the info we have. So that's one of the questions we often get. The other thing as well with Keylime: you might get the impression that it's all complex protocols and raw network connections. It's not; everything talks over a REST API. All of these services and the agent, it's all plain REST. The only slightly arcane part is where we talk to the hardware, but the rest of it is very much a modern approach to developing a web service. We have some stuff coming in where, we're currently using mutual TLS, but we're also introducing JSON web tokens, and we're looking at integrating with other projects as well, to make it easier to authenticate with Keylime, things like single sign-on and so on. We have time. Yeah, also, to mention, if you go to our homepage, there's a demo, and this is Keylime protecting a three-node etcd cluster. The first five minutes are me talking about the project, but in the second five minutes you'll see some terminals, and you'll see the actual solution working in there. What we do is compromise one of the etcd nodes, okay? And then it's removed from the cluster and we delete some SSH keys, among other actions. All right. So I mean, that's one of the good things with Keylime, this revocation framework. Anything that you can dream up and write in Python, Keylime will run for you. So for example, if a machine fails, you might want all of the other machines to update an iptables rule. You just write a simple Python script that updates iptables, and then Keylime will securely transfer that to the machines, and it'll be securely run on those machines only when a signed event comes. That way you can ring-fence the machine. So in a way it's got a very nice open framework for being quite creative about what to do when a machine fails. Yeah, there are so many different use cases for this. In my head, I'm thinking of anybody who's a cloud hosting provider, who's supplying servers and GPUs and HPC machines and needs secure, compliant systems; there are going to be a lot of interesting use cases that come up in the next little while. So it'll be interesting to see how this plays out, and I'm glad it's in the Cloud Native Computing Foundation, frankly, because I probably would not have heard about it until it surfaced somewhere in OpenShift in the upcoming releases. But that would be an interesting use case. I'm curious to see, when people ask for this kind of attestation from their cloud hosting providers, I could see them saying, yeah, this is really running whatever, Fedora CoreOS, blah blah, this version, or it's running, you know, RHEL CoreOS, or it's running whatever other immutable operating system. I think it's really an integral part of the puzzle for people to really trust Kubernetes at high scale and to get into those high security customer or end user scenarios as well. So it's always been an interesting aspect of Kubernetes.
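Since Luke mentioned that everything talks over a plain REST API behind mutual TLS, here's a small illustrative sketch of what querying such a service could look like from Python. The endpoint path, port, and response fields are invented for illustration and are not Keylime's actual API; only the general pattern, HTTPS with client certificates and JSON responses, reflects what was described.

# Illustrative only: querying a REST service over mutual TLS with client certs.
# The URL, port, path, and JSON fields below are made up; they are not Keylime's API.
import requests

VERIFIER_URL = "https://verifier.example.com:8881"   # hypothetical endpoint

def get_agent_state(agent_uuid: str) -> dict:
    """Ask the (hypothetical) verifier endpoint for an agent's attestation state."""
    response = requests.get(
        f"{VERIFIER_URL}/agents/{agent_uuid}",        # invented path, for illustration
        cert=("client-cert.pem", "client-key.pem"),   # our side of mutual TLS
        verify="ca-bundle.pem",                       # CA that signed the verifier's cert
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    state = get_agent_state("d432fbb3-d2f1-4a97-9ef7-75bd81c00000")
    # e.g. {"operational_state": "attested", ...} -- field names are illustrative
    print(state)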
There's a very interesting aspect of Keylime, which is that it moves the root of trust away from basically just the social trust you have in your cloud provider, and their promise that they're not going to mess things up in the background, to an actual hardware root of trust based in silicon. That's a different type of trust, but in some cases it's much more useful, or it feels stronger, and it is stronger in many cases for many actors, as you were pointing out. Yeah, it's kind of interesting. I should think that the hardware providers would be very interested in this as well. And, you know, we do a lot of work with NVIDIA and other folks, and people who make chips and things of that nature. I'm curious to see how they interact; hopefully they'll watch this, become aware of the project, and see if they can help move it forward as well. So kudos to you guys for getting it this far, going from a paper at MIT, which we'll put the link up and I'll take a look at myself and hopefully other people will, and collaborating with MIT, IBM, Netflix, the Mass Open Cloud, and everybody else to solve their use cases. I'm really going to be looking forward to seeing how this comes into an OpenShift release. So come back, please, when that hits, come to the OKD working group when you're ready, or even if you just want to expose this. I will share with them the video that we're making today and make sure that it's on their radar, and there are also a lot of security folks who are part of OpenShift Commons that I think will be very interested in this as well. So I'm really looking forward to seeing and helping you guys grow this community, and I'm totally thrilled that you've gotten to Sandbox. It'll be interesting to see how long it takes to incubate you guys and maybe get you to be an official one, whether you end up being officially in CNCF, or whether you find that you're playing more on the edge, in the IoT space, and need a more generic home. But I think the Kubernetes community is going to really appreciate this and embrace it. So I'm looking forward to that. The other question I have for you guys is, well, actually one came in: does Keylime act at the pod level also, to ensure pod security, and not only at the infra and VM level? Sorry, once again? It's in the chat there. Oh, okay, does Keylime act at the pod level... Yeah, that's a very good question. So as far as measuring trust within a container, this is something that we're looking at. As you would have seen mentioned earlier, we use something called IMA in the Linux kernel, which is used for measurement: what happens is, when a syscall is made, IMA will measure the object that's making that system call. IMA sits alongside SELinux in the Linux security subsystem. So what will happen is, if you run a script as root, that will be measured, it'll be put into the TPM, signed, and then sent to the remote verifier to verify. For us to get that to work in a container, we need an IMA namespace, and we're actively working on that with some people in the Linux kernel community. So we fully anticipate being able to do the same level of trust measurement within a pod, since a pod is essentially a container. So yeah, our plan is to provide the same level of measurement within a container as what we do for the infrastructure, but we have to wait for this IMA namespace to land in the Linux kernel first.
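For anyone curious what those IMA measurements look like on a host, here's a small sketch that reads the kernel's runtime measurement list. It assumes IMA is enabled and securityfs is mounted at the usual path, and that entries use the common ima-ng template; the parsing is simplified and illustrative rather than exhaustive, and it is not how Keylime itself consumes the list.

# Read the host's IMA runtime measurement list (requires root and IMA enabled).
# Typical ima-ng entry: "10 <template-hash> ima-ng sha256:<filehash> <path>"
from pathlib import Path

MEASUREMENT_LIST = Path("/sys/kernel/security/ima/ascii_runtime_measurements")

def read_measurements(limit: int = 10) -> list[dict]:
    entries = []
    for line in MEASUREMENT_LIST.read_text().splitlines()[:limit]:
        fields = line.split()
        if len(fields) < 5 or fields[2] != "ima-ng":   # skip other templates in this sketch
            continue
        entries.append({
            "pcr": fields[0],            # PCR the measurement was extended into
            "template_hash": fields[1],
            "file_hash": fields[3],      # e.g. "sha256:abc..."
            "path": fields[4],
        })
    return entries

if __name__ == "__main__":
    for entry in read_measurements():
        print(entry["pcr"], entry["file_hash"], entry["path"])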
But yeah, that for us is the crown jewels we want to get to: being able to measure inside a container. And the ETA for the IMA namespace landing in the Linux kernel? I wish I knew. These things get discussed a lot on the Linux kernel mailing list, so we're just trying to thrash out an agreement and alleviate any points of contention, which always come up on the Linux kernel for a change, or an inclusion, of this level. So I know KubeCon is coming up soon, November 17th. Do you have any birds of a feather sessions or meetings, did you get any space at KubeCon? We didn't, unfortunately, no. Too soon, I think; maybe you were just too recently added to the Sandbox. Yes, yeah, very much, yeah. Was it, well, Axel, was it six weeks ago or so? It was really recent. Yeah, something like that, it was very, very recent, yeah. So, well, then what we'll just have to do is make sure everybody shows up at your Wednesday meetings and continue to push people to come and find you. Yeah, there's lots to do. We're a pretty happy, friendly community. You know, we have a policy that there are no stupid questions when you're standing up Keylime, and so, yeah, you'll find us on the CNCF Slack; there are always people chatting on there. All right. Yeah, and I was going to add, Diane and everybody else, if you find new cool ways of using Keylime, don't hesitate to come and share them. It's a fun thing to think about how you can use this in ways that might not have been initially intended but can actually be useful, so don't hesitate. Well, I think the Raspberry Pi garage opener example is probably my favorite use case of the day. So I think that'll be interesting. We have a whole bunch of Raspberry Pis around my house, so hopefully we'll do that one, in our spare time, which we all have so much of. So, I was going to say one more time to the audience out there, if you have other questions, please speak up, throw them in the chat. We'll give you a couple more seconds and then we're going to let you guys go back to making key lime pies, or Keylime in Rust, or whatever it is. I have no puns left today; it's been a long weekend. And when you get the Fedora CoreOS things in, please ping me and we'll have you back, and we'll have you back with the OKD working group as well. And maybe, just maybe, we can stand up a couple of examples of this and have the demo run with OKD, which would be one of my happy days as well. So, I would really get behind that and help get that running with you, yeah. I think that would be a great, maybe a sub-spin-off group from the OKD working group, because I know there's a lot of interest, and we're really thrilled about the collaboration between the Fedora CoreOS and the OKD community. There's a lot of cross-pollination there, so hopefully we can make something happen for you, as well as other Kubernetes folks out there. This is something that I hope we can get into the slipstream, upstream of Kubernetes, sooner rather than later, though everything takes time, especially when it's this low level. So here's to hoping the Linux kernel folks listen to you, incorporate your requests, and get you moving down the path soon. Awesome. All right, well, thank you guys. Thanks a lot. Thanks for having us. Happy Monday, everyone, and take care. Bye-bye.