All right, welcome everyone to our talk today. This is on Keylime: bootstrapping and maintaining trust on the edge, cloud, and IoT. My name is Michael Peters. I'm a principal engineer at Red Hat in the emerging technologies group, specifically focusing on security, and I'm one of the maintainers of Keylime.

I'm Lily Sturmann, a senior software engineer at Red Hat in the Office of the CTO, also focusing on security on the same team as Michael, and I'm a maintainer specifically of the Rust Keylime repo. I want to point out that Michael and I are just two people who are part of a much broader effort and a larger community working on Keylime, so there's definitely a whole group of people involved; we're just two of them. But we are very excited to be here today to share some new developments in our project.

So what problems does Keylime solve? Trust is difficult on remote machines that are not under your physical control. This is true in cloud scenarios, it's true in edge and IoT scenarios, and these scenarios are becoming more and more ingrained in our everyday personal lives as well as our day-to-day business practices. Another member of our security team, Mark Bestavros, once called this problem "sensitive workloads running in an insensitive cloud." I love that description, so I keep using it. What it refers to is the inherent tension between all the opportunities that the cloud offers in terms of scalability, resources, and easy management, versus the inherent security challenges of running some of our most sensitive workloads in these types of environments. For example, when your workload is running on infrastructure that you never personally see, with many layers of the stack that you can't even access, how can you really make sure there isn't a malicious co-tenant executing arbitrary scripts
that are changing the results of your computation? Similarly, in edge and IoT environments it can be even more challenging. How can you know that a specific device you're in charge of hasn't been altered or tampered with, when you may never be able to see it yourself, and it may even sit in an area that gets a very high amount of foot traffic, making it a target for tampering?

As a more concrete example, think about a car. Our grandparents probably never envisioned that getting a car and going for a drive would involve any type of computation, but this is very common in our modern day. It's an example where both the driver of a car and the businesses and organizations responsible for a modern car and its software have a very high incentive to make sure the computation is working as expected and as intended, because otherwise we could get into scenarios with potential privacy breaches, theft, or, in this particular case, even safety issues. And this is just one example of the ways computation is becoming more and more ingrained in our everyday lives; there are definitely many more. So these scenarios are very prevalent, the stakes are really high, and there are many challenges.

So how do we know that we can trust these remote machines? Well, Keylime is one way to provide this remote trust through integrity verification, and integrity here just means a way to check that your machine or device is still running as intended. Keylime does this in a specialized way that is able to do integrity checks both on the boot state of the machine or device as well as runtime integrity checks.
So it actually covers both in one overarching system, and it does this by relying on some foundational underlying security technologies. One of these is the TPM, which serves as the hardware root of trust, and another is a Linux technology called the Integrity Measurement Architecture. We will cover both of these in a little more detail in just a second.

But first, a brief overview of the history of Keylime: it was born at MIT Lincoln Laboratory in 2016 as a white paper and a prototype. In the succeeding years it gathered a lot of community support, and this is where Red Hat and IBM got involved; we worked closely with upstream communities to build on this project. In 2020 Keylime was accepted as a CNCF Sandbox project (that's the Cloud Native Computing Foundation), which is a step toward becoming a CNCF Incubation project. If Keylime gets to that stage, it would join other well-known and respected projects like the TUF framework, containerd, and Kubernetes.

So Michael will tell us a little bit about the underlying technologies that Keylime is bridging and building upon.

First off, who here is familiar with the TPM? Oh, great. So just a brief overview then, since that's most of you. A TPM is a hardware device (hopefully) whose specification is created by the Trusted Computing Group. It has a lot of built-in cryptographic features, such as a hardware random number generator, hash functions, symmetric and asymmetric encryption, and secure storage and generation of keys. If you're familiar with the Linux LUKS system for full disk encryption, it can store and retrieve keys from the TPM. It also has a couple of features that we really like in Keylime, one of those being what's called the endorsement key, or EK, which is a secret key that's burned into the chip by the manufacturer.
So this is a private key that can't be read, and it is tied to that chip. If you're using that chip to sign something or to encrypt something, say a hard disk, and you take that hard disk away from that TPM, it's no longer usable; they have to be paired together.

The other part that we really like in TPMs is something called PCRs, which are platform configuration registers. Like a CPU register they hold data, but they're not written to directly. Instead, we do what we call extending: you extend a value into the TPM, which means it's hashed with the current value in there to create a new hash. So every new thing you extend into the TPM produces a hash of a hash of a hash, and this creates a nice little trail for us. If you're familiar with a Merkle tree, it's essentially a hardware-based version of that idea for the end value.

IMA is the Linux Integrity Measurement Architecture, which came in the 2.6.30 kernel and hooks into the kernel to measure files as they're accessed. There are different ways you can configure it, but the most prominent way is to configure it so that every file that becomes part of an executable, so at the binary or library level, is measured by IMA. That means it takes a hash of that file and then extends that hash into a PCR on the TPM, and it also creates an audit log, so you can go look via a kernel pseudo-file and get all the current measurements that have been taken by IMA.

One other piece we want to talk about is measured boot. Measured boot is not secure boot, which is the process that a lot of manufacturers use to limit the software that you can put on your device, right?
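The extend operation described a moment ago, hashing each new measurement together with the register's current value, can be sketched in a few lines of Python. This is a simplified model of a SHA-256 PCR, not the real TPM interface:

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: the new PCR value is H(old value || measurement)."""
    return hashlib.sha256(pcr + measurement).digest()

# PCRs start out as all zeroes; each boot event extends the register.
pcr = b"\x00" * 32
event_log = [b"firmware", b"bootloader", b"kernel"]
for event in event_log:
    pcr = extend(pcr, hashlib.sha256(event).digest())

# Anyone holding the same event log can replay it and reproduce the
# final value; this is the "trail" that gets verified later.
replayed = b"\x00" * 32
for event in event_log:
    replayed = extend(replayed, hashlib.sha256(event).digest())
assert replayed == pcr
```

Because each value depends on everything extended before it, changing, removing, or reordering any event in the log produces a completely different final value.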
Secure boot locks the device down to software that comes only from manufacturers that have been pre-approved, which rankles a lot of people because it limits the freedom of the device owner. That isn't the fault of the TPM, though; that's just secure boot. Measured boot is the opposite: measured boot takes all these measurements and then gives you, the user, the freedom to decide what the policy should be. Some people confuse them, but TPMs themselves are not inherently locking users out of their devices; that's secure boot or something similar. TPMs themselves can give you the freedom to say: only these things that I approve of can run on my device.

So what happens is that the BIOS or UEFI and the boot loader, as they're going through the different parts of the boot, send out these measurements, extend them into the TPM, and write them into a boot log. Then you can go back and compare, either by having a golden value, as in: I know this machine, and when it boots up, these are the PCRs and values that I care about and the state they need to be in, and every time it boots they should be in the same state. Or, if you have a more dynamic system, you can use the boot log to walk through and try to recreate the values that are in the PCRs to validate them.

Taking this all together, we have what we call remote attestation. For remote attestation, we have the boot log and we have these file measurements.
We have PCR values, and we have the public keys from the TPM manufacturer and from that particular TPM. With those we can get what's called a signed quote from the TPM. Basically, we have the TPM say: here's a bunch of stuff, signed with my private key. Then I can validate that this signed blob came from this particular TPM, which I know came from this particular manufacturer, and I have that trust. That's what we call a hardware root of trust: we're basing it in the TPM, and the PCR values can't be changed once they've been extended. So we ship all of this off to a remote system that can walk through all these measurements, these hashes of hashes of hashes, go through the chain, and if we end up with the same value that's in the signed PCR quote, we know nothing was compromised.

Another part we'll talk about is file integrity, and file integrity basically asks: is this file what we want it to be? There are various ways you could look at that: has the file been altered? Is it from an unapproved vendor? Is the file on the right path? Who created the file? Has it been rolled back to an unapproved version that might have a vulnerability in it? The way we do this with IMA, there are two parts: the file hash and the file signature. The file hash we can use to measure the contents of the file, and with IMA we can also include the file path as part of that hash. Then we can create a list of all these hashes that we know are valid.
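A minimal sketch of such a list of known-good hashes and the lookup a verifier would do against it. The file paths and contents here are made up for illustration; a real list is generated from a pristine machine or supplied by the OS vendor:

```python
import hashlib

# Hypothetical known-good file contents, stood in for real binaries.
golden_files = {
    "/usr/bin/bash": b"#!/bin/bash contents...",
    "/usr/lib/libc.so.6": b"libc contents...",
}
allowlist = {
    path: hashlib.sha256(data).hexdigest()
    for path, data in golden_files.items()
}

def check_measurement(path: str, digest: str) -> bool:
    """A measurement passes only if the path is known and the hash matches."""
    return allowlist.get(path) == digest

# A measurement of the untampered file passes...
good = hashlib.sha256(b"#!/bin/bash contents...").hexdigest()
assert check_measurement("/usr/bin/bash", good)
# ...while a modified file, or an unknown path, fails.
bad = hashlib.sha256(b"tampered contents").hexdigest()
assert not check_measurement("/usr/bin/bash", bad)
assert not check_measurement("/usr/bin/dropper", good)
```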
Maybe we have a pristine, segmented machine that we keep off the network; we boot it up, load it with all the correct files we're expecting, and create a database of all the valid hashes. Or we work with our OS vendor to get a list of known hashes that are going to exist anyway. Then we create this database, and in Keylime we call this the allowlist: the allowlist of all the files that can exist on this machine, with the hash and the file path.

The other way is through signatures. That involves either the person who created the file, say a CI/CD process, signing files that are going to end up on your system, or your OS vendor signing their files and shipping them that way. The signature is stored in an extended file attribute called security.ima. Red Hat ships signatures with its files in Fedora 36 and RHEL 9, so those are now available, and you can verify that a file came from Red Hat because it was signed by Red Hat's private key.

So there are different levels of protection. If I want to validate that a file came from Red Hat, I can just look at the IMA signature; the hash alone doesn't tell me that it came from Red Hat. But the problem is: say there's a vulnerability in /bin/bash, and Red Hat signed that vulnerable version at one point because the vulnerability was unknown at the time, and then we get a new version.
Well, if somebody has that old version, they could replace your /bin/bash with the old one, and it's still validly signed by Red Hat.

On setup: getting a list of hashes is pretty straightforward. You just look at each file, create a SHA-256 sum of it, and store it in some file or database somewhere. Signatures are harder to set up, because you have to do key management, which is not a trivial thing to do. As far as updating goes, if you have a dynamic system with a CI/CD process and constant updates going in, then you have to update that database of hashes somehow as well and let Keylime know: we have a new updated list of hashes, bad ones and good ones. With keys, for updates you just say: trust these keys. You handle a handful of keys, and then Keylime can trust those keys. So it's sort of a mixed bag which one you want to use, and you can mix and match in IMA and Keylime.

So now we're going to get into the Keylime system architecture a little. The Keylime architecture is basically divided into two sets of components. The agent that you see on the left is a node, machine, or device that is trying to prove its integrity to the rest of the Keylime system. This machine is going to need to have access to a TPM; hopefully a hardware TPM, but in environments where you may not have access to one, Keylime will still run if you have a vTPM or a software TPM. And even though you only see one agent here on the screen, a Keylime system can be set up to monitor dozens, hundreds, or even thousands of these agent nodes; we just didn't want to put a thousand pictures up here on the left, because we didn't think that would be helpful.

On the right you see the components that are responsible for checking the integrity state of the agent nodes. The first component, at the top, is called the registrar. This is basically a public key store.
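A public key store in this sense is conceptually just a small keyed database mapping each agent's identifier to the public keys its TPM presented at enrollment. The sketch below is a toy with made-up field names, not Keylime's actual registrar schema or API:

```python
# A toy registrar: maps an agent's UUID to its registration record.
# Field names (ek_pub, ak_pub) are illustrative only.
registrar = {}

def register_agent(uuid, ek_pub, ak_pub):
    """Enroll an agent by recording the public keys its TPM presented."""
    registrar[uuid] = {"ek_pub": ek_pub, "ak_pub": ak_pub}

def lookup(uuid):
    """The verifier asks the registrar which keys belong to an agent."""
    return registrar.get(uuid)

register_agent("agent-001", ek_pub="EK-PEM...", ak_pub="AK-PEM...")
assert lookup("agent-001")["ak_pub"] == "AK-PEM..."
assert lookup("agent-999") is None   # unknown agents have no trusted keys
```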
It keeps track of the public keys associated with specific agent nodes, their unique identifiers, and their enrollment into the system, basically. The component on the bottom, the verifier, is the component responsible for handling all of the checking of the attestations sent by the agents; so there is a very clear delineation of roles between these two components. The verifier will take an attestation sent over by an agent, and it will check things like: was this TPM quote signed with the correct key? Does the data inside the quote match the known good values provided according to the measured boot policy or according to the IMA allowlist? Then, whether these checks pass or fail, the Keylime system will take certain actions based on the outcome.

Now, you see in the middle there is an untrusted network. What we're describing here is that these components don't all have to run in the same place. They don't all have to be in the same cloud; some of them could be on-premises, some could be in a public cloud, and they could be spread across different public clouds. So this is really a hybrid cloud scenario that the Keylime architecture is designed for.

I'm not going to get into a lot of the key exchange that happens as part of the Keylime workflow, because that would be its own 30-minute talk. But at a very high level, just so you get an idea of the steps involved in this system and in the attestation checks: let's say there is a Keylime agent that wants to come online and be enrolled into the system.
It will send its TPM info for registration and activation to the Keylime registrar. After that point it needs to send an initial quote to the Keylime verifier, and if that quote, that remote attestation, passes the validation check, only then are certain secrets able to be released to that specific agent. That's the key step I want to highlight here. By secrets we mean, for example, any scripts that we don't want to run until we know the agent is in a trustworthy state, or any private keys that we don't want that machine or node to have until it has proved itself through this integrity check.

After that point the verifier will still regularly poll the agent node or nodes for these quotes, and these are the runtime integrity checks that Keylime performs. So even though the initial quote was marked as valid, if at any point afterwards the agent fails to produce a valid attestation, it will be marked as failed; it will not pass the check from the verifier, and at that point all of the agent nodes involved in the Keylime system will be sent a trigger to execute some revocation actions. One example of a revocation action would be that all of the other agents in the system erase the keys of the compromised machine and no longer communicate with it. That's just one example, though; you should note that all of this is very highly configurable. You can choose your own revocation actions, or choose to not run any revocation actions. You definitely get to choose your measured boot policies and your IMA allowlist. You can choose whether to use the measured boot check or only the runtime checks, or vice versa, or both. And you get to choose what types of secrets you want to release to the agent nodes in case the attestation is successful.
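The poll, verify, and revoke cycle just described can be sketched as a small state machine. Here `get_quote` and `quote_is_valid` are stand-ins for the real TPM interaction, and the revocation action (notifying the system about the failed agent so its keys can be dropped) mirrors the example above:

```python
TRUSTED, FAILED = "trusted", "failed"

def poll_once(agents, get_quote, quote_is_valid, revoke):
    """One verifier pass: check every trusted agent, revoke any that fail."""
    for name, state in agents.items():
        if state != TRUSTED:
            continue
        if not quote_is_valid(get_quote(name)):
            agents[name] = FAILED
            revoke(name)   # e.g. tell all other agents to drop its keys

agents = {"node-a": TRUSTED, "node-b": TRUSTED}
revoked = []
poll_once(
    agents,
    get_quote=lambda name: {"agent": name},
    quote_is_valid=lambda q: q["agent"] != "node-b",  # simulate one failure
    revoke=revoked.append,
)
assert agents == {"node-a": TRUSTED, "node-b": FAILED}
assert revoked == ["node-b"]
```

In the real system this loop runs continuously, and the revocation callback is whatever action the operator configured.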
So configurability is a very important part here.

We wanted to have a quick chart comparing Keylime to some other solutions that might be used for integrity checking of systems. These are just a couple that people might be familiar with: intrusion detection systems, or fapolicyd. You will see a lot of red X's on this screen. That doesn't mean these other solutions don't do their job, or that they aren't good, or that you shouldn't use them; you may notice that we came up with the criteria for this chart. What we're trying to highlight is that Keylime is uniquely adapted to one specific use case, which is multi-node remote monitoring in a hybrid cloud scenario, with a hardware root of trust and the ability to perform these built-in revocation mechanisms.

I do want to highlight the hardware root of trust here. We think that's a really important piece, because it provides tamper evidence and tamper resistance that are very difficult to achieve in software alone, so that's definitely a key piece. The other component, if you think back to that Keylime architecture, is that it's very modularized: there are different components that can be deployed in different clouds and interact with each other. That's why this is a solution uniquely suited to this type of remote multi-node monitoring, and having the built-in revocation mechanisms also helps in this scenario.

Now we'll begin to talk about some of the more recent adventures we've had in the Keylime project, and this is one that I was specifically involved with: the shift of part of the code base from Python to Rust. Specifically, the part of the code base that we ported over to Rust was the Keylime agent, because this is the Keylime component that is meant to prove its integrity to the rest of the system, so it will need to be deployed in all kinds of different environments. This could be running in edge or IoT scenarios.
We want it to be able to be packaged for systems like Fedora CoreOS, and for this purpose we wanted it to have a smaller footprint and fewer dependencies, so that it's easy to package. Rust is really the direction we went with this. In addition to the smaller footprint and fewer dependencies, we chose Rust because it is a well-respected systems programming language with a lot of built-in security features: it has very good performance, it has memory and thread safety, and it has type checking to avoid unexpected behavior at runtime. Overall it was a very good fit for what we wanted to do with the Keylime agent. The other components are still in Python, and since the components talk to each other over the network, it's not a problem that they're in different languages. The Rust agent at this point has pretty much reached feature parity with the Python agent. It's in an alpha release right now and undergoing testing at scale, and we're thinking that probably in Q4 of this year we can mark the Python agent as deprecated and make the Rust agent the official Keylime agent.

Now let's look at the scenario of using Keylime to monitor a Kubernetes cluster, a sort of real-world scenario. If we want to monitor our system, then the thing doing the monitoring needs to be in a trusted location. So we have Keylime separated from this Kubernetes cluster. Keylime itself could be running in a Kubernetes cluster, just a different one, or on bare metal, or on an on-prem device, whatever you want to use that we have already decided to trust. That's where the Keylime registrar and verifier that Lily was talking about run, and they monitor this other Kubernetes cluster. And so we have the Keylime agents running on each node.
That's the worker nodes and the control plane nodes, and this wouldn't be running as, say, a DaemonSet, because we don't have any trusted Kubernetes yet. The agent runs straight on the system, talking to the TPM, hopefully a hardware TPM, but again, if you're in the cloud, maybe a vTPM or some hybrid of the two. The nodes come up, they go to the Keylime registrar and register, and then the verifier starts its polling and does continuous verification. And if a node fails attestation, say one of our worker nodes over there turns red, then the verifier can send out a revocation action, and this is where we can do the customization. In this case, the verifier would send out commands to the control plane nodes to do a cordon and drain in Kubernetes. The node would be cordoned off so new workloads aren't scheduled onto it, and then the existing workloads, the containers, are drained off and run on the other nodes. In this scenario we're attesting the integrity of the nodes themselves, not the workloads on top of the nodes; we'll talk about that a little later. And then back to Lily to talk about some of the future work that's coming.

Sure. So now we're getting into some things that are still work in progress. How many people here are familiar with Sigstore? Yay, that's great. If you are not, you should definitely check it out at sigstore.dev; I think there might also be a panel tomorrow that you should attend. It's basically a new project in the open source supply chain security space. It offers a lot of different tools for signing and verification of container images, software bills of materials, software metadata, and things like that. And it offers automatic key management.
It offers a public transparency log called Rekor, and a whole lot of other things. We're looking at integrating Keylime with some of these Sigstore features, specifically Rekor right now. Rekor is basically a tamper-evident ledger: you can sign and store artifacts in Rekor, and they can be queried and verified later. What we're thinking of doing is basically having signed artifacts from Keylime that can be uploaded to Rekor for later public querying or auditing purposes.

The first target for this is the Keylime allowlist, so these are the IMA known good values, and to this end our team has done some work paving the way by improving the Keylime allowlist API. We now have a broader range of ways that we can sign the allowlist; we're working toward having Sigstore be one of the ways we can sign it, as opposed to the past, where it was just GPG. We've also made the allowlist more modular. In the past, each Keylime agent had its own allowlist, which was not very versatile. Now we can have an allowlist that is applicable to a whole number of agent nodes, and allowlists can be reused that way. Conversely, if we have some agent that we want to accept allowlists from multiple sources, so maybe two different parties have different known good states that they want to apply to the same agent, it will be able to take those two and use the union of them as the allowlist for the particular agent in question. Also as part of this work, the allowlist is now not only signed, but the Keylime verifier is going to be checking that signature each time the allowlist is used and applied to a specific agent. We're going to be doing more work in this space, but that's some of the work we have been doing so far.
So as Lily was saying we are we have the ability to sign The allow list or the policy that we're telling key lime We want to enforce and this is to guarantee that someone can't come later And we've done all this work setting everything up and someone comes in later and alters our policy And now things look like they're approved when really they could be altered with without us knowing So now we're going to move a step further and add something that we're Currently experimenting with that we call durable attestation or auditable attestation So key lime you can think of it as a giant state machine monitoring all these nodes and keeping track of the state of the various pieces Of the nodes Getting quotes from the TPMs and verifying those and walking back and recreating all these things But it really just maintains the current state of things if a node is trusted or untrusted And and goes through this process. But what if we want to say we talked about in a state a previous state? What if we say How was this node last week or maybe it's an ephemeral node that doesn't exist anymore? So if we're talking about Say a build system and if we if you're familiar with salsa build levels I think build level 3 says and you need an ephemeral environment. Well if we have key lime attesting the integrity of a node it comes up We have a key line do after the attestation keys are released and then it goes through its build process Signs an artifact publishes an artifact and then the node goes down And then we need to verify well did that really happen like did it really get a testing like what happened to that node? 
The node doesn't exist anymore. So durable attestation takes all of the artifacts that Keylime is dealing with, the current policy for that node, the artifacts sent by the agent (the TPM quotes, possibly the measured boot log, the IMA measurement list, all those things), plus all state changes across all these hosts, and each item is extended into an immutable store. In this case we're experimenting with Rekor, the transparency log from Sigstore. This will allow us to go back at any point in time and verify the attestation state of that node, whether or not it still exists. This could be a private Rekor log that you run, probably not the public one; that's probably a bit too much data to shove into the public Rekor. Most likely it's just a private way for you to attest, to be able to go through an audit and prove that things are exactly how we expected them to be, even in the past.

Earlier, when I talked about the Kubernetes instance where we're monitoring the integrity of the Kubernetes nodes, I said we're not doing the workloads; we're not monitoring the containers. This is because containers have a couple of problems. Containers are really just a bundle of processes and files running on the kernel in a namespaced way, but the TPM and IMA are host-global. When something writes to the TPM, it could be any process on the system, and there's no way to tell which process did it. It could have come from a normal process on the host or from a container, and there's no way to segregate them and say: I only want this container's measurements. It doesn't work that way, especially for a TPM.

One of the solutions coming forward is to have an IMA namespace in the kernel, really an IMA-aware namespace.
There are patches currently being worked on for the user namespace to be IMA-aware, such that when IMA measures a file it can also record which namespace the file was accessed from. Then we can segregate those out later and say: this is from this container, that is from that container. It doesn't solve the problem for the TPM measurements, because those are still host-global. Until there's something like kernel vTPM support, which might be several years away, maybe we could do a software TPM inside the container that gets extended, or find some other way to do it. But these are the problems with containers, and the reason why this sort of solution isn't geared toward the container contents themselves. There are other ways to validate containers, right? To validate that the operator is loading the container from a valid source, and other things like that. But if you're wondering, can this attest my container itself? It's tricky.

So this is getting into very exploratory, very future work, but we wanted to touch on it because we thought it could be interesting if it turns out to be feasible. Right now Keylime is very closely coupled with TPM hardware specifically as its hardware root of trust; the TPM is what provides the attestation, and without it there's basically not much value to Keylime. But what if we could move Keylime toward a more pluggable architecture, so that it could take different types of attestations, not just from TPMs but potentially from other sources? In those cases Keylime would still offer this multi-node attestation or authentication type of service; it would still do the remote monitoring, and it would still have the revocation framework, but it would be more broadly applicable to different scenarios. Some of the potential scenarios we are looking at include SPIFFE and SPIRE, which are
So some of the potential scenarios that we are looking at include spiffy inspire which are Two related the NCF projects That basically have a specialized x509 certificate to establish this notion of workload identity. So in this case The workload identity would be taken in place of the attestation and this would Basically be a way to authenticate before key lime might release some of those secrets for example The other scenario that is potentially interesting is trying to use the key lime framework for TEEs or trusted execution environments So if you're familiar with Intel SGX or TDX or AMD SNP For example What these offer is for use cases where you need to do not just integrity checks on the system But you also need to actually keep your workload confidential so that None of the rest of the system can actually even see what you're doing you might Want to use this tea technology and the teas would also offer a hardware root of trust they offer on their own mechanisms of attestation so potentially we could take those in into the key lime system and Use those interchangeably depending on the specific scenario that the The situation requires so Yeah, hopefully we'll be able to do this because that would be pretty cool So if you are intrigued by this project if you have thoughts comments Questions you just want to come talk to us. 
please do. You can look up more information at keylime.dev, and you can also go to the CNCF website and find Keylime under the Sandbox projects. Obviously we welcome any contributions, or if you just want to look at the code, we've got the Keylime main repo, which is for all the Python components, and then we've got a separate repo for the Rust agent. The best way to get in touch is to find us on Slack: on the CNCF Slack there is a channel called keylime, and that is a very good way to reach a very broad community of people who care about this project. So, thank you all for being here and for your attention. We will be taking some questions, and we will also be at the Red Hat booth after this talk.

Q: All right, so I'm thinking about this from the perspective Richard Stallman would look at it from. I can see someone using this so that, instead of not giving a Kubernetes workload a secret until the node attests, they're not giving a movie to your desktop unless you're running their approved media player. Do you know of any way you could mitigate the risk of that and still be useful for legitimate use cases like yours?

A: Well, I mean, it's all about what you're running on your machine. They can't force you to run a Keylime agent, I assume, if you own the hardware, right?

Q: I guess what I'm saying is, they can keep you from watching the movie at all if you don't. They can make something you want to do conditional on it, couldn't they?
I mean, right, so there are valid scenarios. Some of the Keylime users use Keylime for university testing scenarios, so they can validate that the hardware being used in the testing scenario has not been compromised in any way to allow untrusted software to run while the student is taking their test. So yeah, there are definitely valid cases where you want to lock down the hardware, but in that case the student isn't the owner of the hardware, and it needs to be validated against certain criteria for the testing. I mean, if it bothers you that someone could do that, you just don't have to buy that hardware, right?

Yeah, I don't know that we have a technical answer to that at this moment; it's a very valid question. I guess a philosophical answer would be that these hardware devices, like TPMs and TEEs, are tools, and so they can be used for a whole spectrum of purposes. We like to shepherd them toward purposes that we think are for the social good, but there's always an inherent possibility that people will use them for other purposes as well, and that's something we keep in mind. There's a fine line between saying "I want to protect my system" and someone else saying "I want to protect their part of the system", maybe their intellectual property. So if I give you the tools to help you protect your system, yeah, somebody could use them to also protect something you don't want protected. This is also true for Intel SGX, which has received some criticism in that direction, and that's something we talk about.

Oh yeah, I think that hand was first. Thank you. Yeah, so separately: this obviously requires some hardware.
How are you collaborating with the hardware OEMs? So, TPMs are everywhere. I can almost guarantee that most devices you own have a TPM in there: your phone, your laptop. I have a TPM for my Raspberry Pi. They're everywhere. So now that TPMs are ubiquitous, we're trying to take advantage of them and give people the tools, because they're not the easiest things to use. Keylime is trying to democratize them a little, make it easier for people to use TPMs and secure their systems. We do sometimes have to work with some vendors who are not behaving totally correctly to the spec, or other things, but at this point they're everywhere, so we don't really have to work with vendors to get them.

I ask because I work for Marvell Technology, working on data processing units, and we're looking at this for several of our customers, including automotive. Nice! Yeah, we're definitely interested in helping. I mean, automotive faces edge cases, I would say; that's definitely big. Yeah, please come talk to us more.

Okay, this is a leading question, because I think I know the answer, but: right now, where and when does the agent run, and how is that configured? By "where", sorry, I know it runs on the hardware; I meant at what point in the boot process, and in which piece of the boot process, right?
So it depends on whether it's running on bare metal. The TPM, if it's a hardware TPM, is going to use the boot process as part of what it records, and that value will be used as part of the measured boot state that Keylime can check. If it's a virtual environment, it doesn't have that component, for example. So it really just depends on where you're running this and what hardware is available to you, and it depends on the underlying technology. A vTPM would start up at pretty much the same time as a hardware TPM, but a software TPM might start up later. For Keylime, though, it doesn't matter so much when the agent comes online. Obviously you can't attest until after the agent comes online, but the agent pulls all those measurements from the TPM and from the measured boot, and it doesn't really matter at what point it starts, because it can attest all the things that happened before it. Yes?

No, that's the other thing: Keylime doesn't stop anything. It just alerts you about things, and then you take action based on that. So it's not like fapolicyd, which might prevent a file from being executed or read; it just tells you, "hey, an unauthorized file was executed". So it is more after-the-fact rather than prevention, although you can combine the two if you want. That's a good question; I had the same question when I started working on this.

I'm curious if Keylime works for edge devices that don't have consistent connectivity. So, in a case where you have a device, whether it be in a car or on a drone or something, where it's not always connected: does it fail because it can't be constantly polling, or when the device connects back in, does that work automatically?
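An editorial aside on why agent start time doesn't matter, as the measured-boot answer above explains: each boot-time measurement is irreversibly folded into a TPM PCR by the extend operation, so the final PCR value commits to the entire ordered history, and the agent can report it, along with the event log, after the fact. A simplified sketch of the extend rule (real TPMs keep multiple PCR banks and algorithms, and are handed event digests rather than raw events; this toy version assumes SHA-256 throughout):

```python
import hashlib

def pcr_extend(pcr: bytes, event: bytes) -> bytes:
    """Toy TPM extend: new_pcr = SHA256(old_pcr || SHA256(event)).

    One-way and order-sensitive: you can only fold new measurements in,
    never rewrite what was measured earlier.
    """
    return hashlib.sha256(pcr + hashlib.sha256(event).digest()).digest()

pcr = bytes(32)  # PCRs start zeroed at reset
for event in [b"firmware", b"bootloader", b"kernel"]:
    pcr = pcr_extend(pcr, event)

# A verifier replaying the same event log arrives at the same value,
# so any change (or reordering) of earlier boot stages is detectable.
print(pcr.hex())
```

Because extend is one-way, nothing that ran before the agent can erase its own measurement from the PCR; the agent only reads and reports, which also fits the point above that Keylime alerts rather than blocks.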
Yes, it would currently fail. But we do know about that scenario, and we have proposals out to fix it, to have a better way of dealing with it. It's basically going to be about the policy: when you set up the policy in Keylime for that agent, you could give it extra parameters, to be defined in the future, that would handle that scenario. Also a good question.

All right. I'm sorry if my question is silly for being the last one, but I think this might be a general question on IoT devices and things: how do you bootstrap the initial trust from the agent to the devices? And could you possibly use information from a transparency log to determine whether something is trustworthy, sort of in the way of Certificate Transparency: "oh, this is a compromised key, don't trust it"? I'm assuming there's probably a device standard out there. That would be a great direction to move in, in terms of known good values. Is that generally what you're asking about, how the known good values get there? Yeah. You're supposed to do it out of band. I don't know if you have a better specific answer.

Yeah, the most secure way is to do it out of band, where you have all the firmware that you know is going to be loaded, you have all the devices and everything, and you individually measure all those things and you create the PCR values, the expected state of the measured boot, yourself. Although very few people do that, because it's very difficult. In fact, with some manufacturers it's really hard, because they don't let you know which parts of their drivers are being signed, or whatever. But we would like to explore that. We have talked about it: what if we could have a way to publish known good values that could be consumed in some way to generate a policy that you then feed into Keylime? We would love to explore those ideas, though there is some question of where you put your trust. This is where Rekor could really be handy: if we could actually finish that integration and put those known good values in Rekor, that would be an excellent place to be.

Since we're all about trying to make this easy for people to use, sometimes the easiest way raises questions about where you get the trust, like I talked about earlier with hashes versus signatures. If you just use signatures and you say "I trust Red Hat, and I trust their public key", then you get a lot of protections for free, but you don't get every protection. There are corner cases, other things I would need to do on my own, like providing the known allow lists and things like that. But it gets you a long way. All right, thank you everyone!