Let's get the introductions out of the way. I'm Luke Hinds, I'm based in the UK, and I work in the Office of the CTO at Red Hat, where I work on the Keylime project.

So first, a little bit of background on TPMs. A TPM is a small cryptographic chip that ships on most modern machines, it's an open standard, and they're made by various different manufacturers. One of the things a TPM can do is create keys, and there's a private key that never leaves the chip: it's very much locked within that chip and it's created at manufacture time. And we'll go into that quite soon. It's called an endorsement key.

And the other thing that TPMs can do, alongside creating keys and so forth, is this signing of artifacts. So essentially, when I say signing, it's really a measuring of artifacts. These objects could be a firmware blob, a file, the bootloader, the kernel, kernel modules. Each object is measured with a cryptographic hash, typically SHA-256, and that measurement is extended into the TPM. Because the measurements are held in the TPM, nobody can quietly rewrite them afterwards: if somebody tampers with one of those objects, the cryptography will show it.

And on the verifier side, it's very easy to scale this. It's very performant: it's asynchronous, it's non-blocking network IO. The TPMs tend to be a little bit slow, so that way we don't get a bottleneck, and we can have a verifier scale to monitoring thousands and thousands of machines that have TPMs. We support the latest standard, TPM 2.0. We also support 1.2, but that's getting quite old now, so we'll be deprecating that.

A bit of history: Keylime was originally born out of MIT, the Lincoln Laboratory security research department. It's fully open source, and we're growing a community around the project, which we're really pleased about.

So, what does Keylime provide? First of all, measured boot.
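To make that extend operation concrete, here's a minimal Python sketch of how a PCR (platform configuration register) accumulates measurements. The TPM does this in hardware, so this is just the arithmetic, not a real TPM interface:

```python
import hashlib

def extend_pcr(pcr: bytes, measurement: bytes) -> bytes:
    # TPM extend semantics: new PCR value = SHA-256(old PCR value || measurement).
    return hashlib.sha256(pcr + measurement).digest()

# Simulate measuring boot artifacts into a PCR that starts at all zeroes.
pcr = bytes(32)
for artifact in (b"bootloader image", b"kernel image", b"initramfs"):
    pcr = extend_pcr(pcr, hashlib.sha256(artifact).digest())

print(pcr.hex())  # any change to any artifact yields a different final value
```

Because each value chains over the previous one, you can't remove or reorder a measurement without changing the final PCR value.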
On the measured boot side, we measure the different stages of the boot chain. So there's the shim, the Red Hat shim project, and then GRUB and its configuration, because somebody could toggle on and off SELinux or audit or those various powerful flags that can be changed on the GRUB command line kernel options. So we can actually measure those as well to make sure nobody's tampered with those. Then the initramfs and the modules, and then further into userland, which we will go to next.

Also Secure Boot. So there are various parts to Secure Boot, the signature databases, the MOK list, the vendor keys and so forth, and we can measure those components too. Then for runtime we use IMA, the integrity measurement architecture, which has been in the Linux kernel for a long time now. As things execute on the system, IMA measures them and those measurements are extended into the TPM.

So say somebody swaps out a binary. We could have something like the ip binary and they swap it out with a Trojanised binary. It's obviously going to have a different cryptographic checksum, so we'll be able to remotely see, within seconds of that executing, that somebody has actually changed that binary, and take actions from there. Also the same for kernel modules as they load, and SELinux labels and changes and so forth. Essentially we've got the initial boot, the system, secure boot and then run time, and we can measure all of these components remotely.

The other thing we have as well is an encrypted payload and execution framework. So what we do is, when a system proves its trust, so we know that nobody's tampered with that system, we can then release an encrypted payload onto that machine. That could be, for example, some private secrets that you have, or certain files that have passwords or sensitive materials in. So the machine will prove its trust state, that it's not been compromised, and then we will release an encrypted payload. If the machine fails, which suggests that somebody has somehow tampered with that machine, they cannot access the payload.

And finally we have something called a revocation framework. This essentially kicks in when a machine fails. So somebody tampers with a machine, then we interface with a certificate authority, typically we use Cloudflare's CFSSL, so we can revoke a certificate, and that certificate revocation in turn could invalidate IPsec tunnels or TLS connections and so forth. And then we also have these things called custom actions, and these are where you define a list of scripts that are written in Python. What happens is all of the nodes that are part of the Keylime cluster will execute these local actions when one node fails. And typically what you would do is get those nodes to STONITH the node that's failed. STONITH, if anybody's not familiar with that, is "shoot the offending node in the head", essentially: ring-fence it. It's very flexible; essentially anything that's programmatically possible on a Linux machine within Python, you can execute locally. So it could be calling specific APIs, it could be shutting down network connections, removing hosts from an SSH authorized_keys file, anything that you come up with really. You write a script, and Keylime will execute that script once a node fails.
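As a flavour of what one of those local actions might look like, here's a hypothetical sketch. The `execute(event)` hook name and the event fields are illustrative assumptions, not Keylime's exact interface:

```python
# local_action_fence.py -- hypothetical local action run on every healthy node
# when a peer fails attestation. Hook name and event fields are assumptions.
import subprocess

def execute(event: dict) -> None:
    """Ring-fence the compromised node named in a revocation event."""
    bad_ip = event.get("ip")
    if not bad_ip:
        return
    # Drop all traffic coming from the compromised host.
    subprocess.run(["iptables", "-A", "INPUT", "-s", bad_ip, "-j", "DROP"],
                   check=True)
    # Strip any of its entries out of root's authorized_keys.
    subprocess.run(["sed", "-i", f"/{bad_ip}/d", "/root/.ssh/authorized_keys"],
                   check=True)
```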
So, a bit of a high-level overview of the different components. First of all we have the agent. What we've got here, to our left, is what you can consider the remote data centre, or perhaps an outside location where you have an IoT device or an edge device. This is essentially an area that's outside of your control, outside of your network, and this is where we run the Keylime agent. The Keylime agent communicates with the TPM: it requests these quotes, and a quote is essentially a request to get a measurement list of the current cryptographic state of the system. That is then sent to the verifier, which is on premise, so this would be something that you have within your own trusted boundary. The verifier accepts these lists, does a check to make sure that it was a real TPM that sent them, using the private key hierarchy that I spoke of earlier with the TPM, and it then performs a comparison between the current state and what the expected state is. Now if that state changes, then the verifier will fail that node, and then we'll have our revocations and so forth.

But before we get on to that, there's also the registrar. The registrar is relatively simple: it's a database where we store the agent IDs, the unique IDs, their operational state, and we also keep the public keys of the TPM vendors. Normally they'll provide an intermediate certificate, which can then be used to vouch that a TPM is actually a real TPM, and not a spoofed actor pretending to be a TPM. We then have our revocation service, which is like I described earlier: when a machine fails, we can have a series of scripts which kick off on all of the other machines to ring-fence that particular machine. And likewise it will also connect to a certificate authority. At the moment we support OpenSSL and Cloudflare's CFSSL, but we're going to move this to being a plug-in framework so we can easily integrate with other CAs; all they have to do is write their own driver.

OK, so this is where it gets a bit hand-wavy, so do pull me up afterwards if something doesn't make sense. Initially we have to set up something called a hardware root of trust. We need to establish: is this actually a real TPM that I'm talking to? So what happens is the Keylime agent, and remember this is on the remote machine that we're monitoring, will send an ID. There are no cryptographic properties to the ID; it's just a unique identifier. And it will send two public keys, an EK pub and an AK pub. Now the EK is the endorsement key, OK? And this endorsement key is burnt in at manufacture time into the TPM, so nobody can get hold of the private part, theoretically not even the TPM manufacturer themselves. What happens, and I don't know it too fluently, is they inject a random seed and then it sort of self-bootstraps its own cryptography and creates this endorsement key, which is locked within the chip. And then the endorsement key is also used to create an attestation key.
So the attestation key is not fixed to the hardware: if you were to reset a TPM, the AK would be wiped and you would generate a new one, whereas the endorsement key is a permanent fixture of the TPM.

So we send the public counterparts and the ID to the registrar. And what the registrar does is, using the EK public key, it encrypts a challenge key, which we call the K_e, tied to the AK pub. So we're challenging the machine to say: prove to us that you have the private counterpart of the endorsement key. What will happen is the agent will make an HMAC of its ID using that challenge and send it back, to prove that it actually has the EK private key. And further to that, we'll also verify that the EK private, sorry, public key is signed by an actual TPM manufacturer as well. And that allows us to tie the attestation key to the endorsement key. So now, when we receive a cryptographic quote from a machine, we can be sure that it was an actual real TPM that signed it. So that is a bit hand-wavy, isn't it? Hopefully that went in; it took me a while to understand this stuff as well.

So now we've got our hardware root of trust set up; we can now trust the hardware. The second part is we do an initial attestation of the state of the machine, OK? So now you can see that we introduce, I don't know if there's a laser on this, now you can see at the top there's a verifier, so the verifier is coming into the picture. And, sorry, there's a fourth actor as well: the Keylime tenant. The Keylime tenant is essentially you; you as the user drive this. We provide a CLI, so you can type commands with arguments to kick off this process. But that CLI actually wraps around REST APIs, so if somebody had their own system, they could obviously develop their own system around our REST APIs to drive this.

And what happens is the tenant creates a key, which we call the bootstrap key, OK? And this key is cryptographically split into two. So it's not a file that's split in half; it's actually done in a cryptographic manner, and that makes the two counterparts, the U and the V. And we use this for unlocking the encrypted payload. And we'll see how we do that shortly.
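A common construction for that kind of split is an XOR split, where either half on its own is just random noise. Here's a minimal sketch; I'm assuming XOR here for illustration, the exact scheme isn't important for following the flow:

```python
import os

def split_key(k: bytes) -> tuple[bytes, bytes]:
    # U is uniformly random, V = K XOR U, so U XOR V recovers K.
    # Either half alone is statistically indistinguishable from noise.
    u = os.urandom(len(k))
    v = bytes(a ^ b for a, b in zip(k, u))
    return u, v

k = os.urandom(32)            # the bootstrap key
u, v = split_key(k)
assert bytes(a ^ b for a, b in zip(u, v)) == k
```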
So first of all, we send the V part to the verifier, this being one half of the bootstrap key, OK? We also send an agent ID, again just a unique identifier, an IP address, and a whitelist. The whitelist is a list of hashes and POSIX paths to files: a hash value and the file it belongs to, two columns, pretty simple. And that's your golden state of how you expect the system to be.

The verifier then sends a nonce to the Keylime agent, because we want to make sure it's fresh; we don't want somebody to try and send an old quote. So we send a nonce to make sure there are no replay attacks and so forth. And then the agent will communicate with the TPM and send back this TPM quote, which, using the nonce, is signed by the attestation key over values called PCRs, platform configuration registers. These are like letter boxes where hash measurements are stored in the TPM. This is sent back, and the agent also sends the public counterpart of a new key called the NK, which is something we use to protect transferring secrets from the agent, sorry, from the verifier and the tenant to the agent itself.

Now, when this comes back, we first of all check the validity of this attestation key, which goes back to the previous slide where we had the hardware root of trust. So again, we make sure that this is actually a real TPM that we've established a hardware root of trust with. If that trust check passes, then the verifier will hand over the V counterpart, and it will use the NK public key that we spoke about to do that. So it's done in a secure manner; this can all happen over HTTP and it's absolutely fine, there's no problem with that. This is going to happen across completely untrusted networks.

Now, the second part is we need to get the U part of the key over to the agent so that it can unlock the payload. So now, as the tenant, we request a quote, OK? With this quote we're not actually measuring the system; it's just a way of us showing intent for the machine to prove its identity, so we use a quote to do this. It comes back with a quote that's signed by the attestation key, so we can again check the hardware root of trust, and the agent also sends its NK pub, which we will use to safely transit the second part of the key. Again we check the trust state, the hardware root of trust, and then, if that is shown to be non-compromised, using the NK pub we will send the U counterpart, the second half of the key, over to the remote host, along with a payload, which is essentially just a tarball. We'll get to what a payload typically contains shortly.

So the agent then combines the two halves back into one key. Now it has the full key, the secrets are unlocked, and then we have a deploy hook. So we have a script that will run, which could do anything. You could call an Ansible playbook, you could have perhaps a bash script that deploys an application; it's essentially up to you.

So here's a very simple example of an encrypted payload. You can see we have our payload.tar, and in there we've got various secrets that we want to securely transmit to a machine: once we know that nobody's tampered with that machine, we can actually trust the environment. So you can see there are various secrets and binaries, and then at the bottom you'll see these scripts that are prefixed with local_action. These are the scripts that are executed when a machine fails its state. So for example, these two scripts are making some calls with kubectl around a cluster, and they're also making some changes to iptables. Essentially, when a machine fails, a signed event will be sent out to all the machines, they'll run these scripts locally, and that will ring-fence the compromised machine. And then you can see there's an autorun.sh, which we automatically execute: once the trust check passes and the keys are combined, the payload is made available and we execute the autorun.sh script. In here you can see a simple example that calls an Ansible playbook. So the idea is you deliver the secrets, Ansible runs, deploys the application, and your secrets are there, and you know that they've gone onto a machine that you can trust.
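Putting the two halves together on the agent side looks something like this. The XOR recombination mirrors the split sketched earlier; the AES-GCM decryption of payload.tar is an illustrative assumption rather than Keylime's exact wire format:

```python
# Agent-side sketch: recombine U and V, then decrypt the delivered payload.
# AES-GCM and the framing here are assumptions for illustration only.
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def combine(u: bytes, v: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(u, v))   # recovers the bootstrap key

def unlock_payload(u: bytes, v: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    key = combine(u, v)
    # Returns the plaintext payload.tar; raises if the key or data is wrong.
    return AESGCM(key).decrypt(nonce, ciphertext, None)
```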
Now, once this happens, we move into the third phase, which is continuous runtime attestation. So we've measured the boot and the secure boot components, we've cryptographically delivered the payload; now we want to continuously monitor that machine for compromise. Here we use the integrity measurement architecture, IMA.

What happens is we continuously poll for these TPM quotes, and this actually happens over a REST API which you don't even need to protect. It doesn't even need to be on HTTPS, it can be on HTTP, because these measurements, as I say, are cryptographically signed by a key which is locked within the TPM. So if anybody tried to tamper with those measurements, it's going to break the cryptography and it's going to fail.

So the TPM quote comes back and we then move into continuous polling; we do this about every two seconds. And what will happen is, every time somebody executes something or makes some sort of a change, IMA will write this to the securityfs, and the hash recorded there is extended into the TPM. Then we query the TPM for the measurement list. As I say, we typically do polls of around two seconds, though you could do more. So within a second or two of somebody running something on the system, if there's been a change to the object that's running, we will know about it and we can immediately shoot that machine off the network. Remember, earlier we sent our whitelist, our golden state of hashes of what we expect the file system to be. So IMA populates the securityfs and extends into the TPM, and then Keylime attests the runtime trust state using IMA.
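To ground that, here's a sketch of reading the measurement log that IMA exposes and checking it against a golden whitelist. The log path and the ima-ng field layout are standard Linux; the checking logic is simplified compared to what Keylime really does:

```python
# Sketch: scan the kernel's IMA log and flag files whose measured hash
# doesn't match the golden whitelist (path -> expected sha256 hex digest).
IMA_LOG = "/sys/kernel/security/ima/ascii_runtime_measurements"

def failed_entries(whitelist: dict[str, str]) -> list[tuple[str, str]]:
    failures = []
    with open(IMA_LOG) as log:
        for line in log:
            # ima-ng template: PCR  template-hash  name  filedata-hash  path
            fields = line.split()
            if len(fields) < 5 or fields[2] != "ima-ng":
                continue
            measured = fields[3].split(":")[-1]   # strip the "sha256:" prefix
            path = fields[4]
            if whitelist.get(path) != measured:
                failures.append((path, measured))
    return failures
```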
Then somebody compromises the machine and there's an integrity failure, which brings us to the next part: the revocation framework, the actions we take once a machine fails its trust state. For example, here you can see we've got the Keylime verifier, we've got our certificate authority, and then we've got four nodes that are all being monitored. These could be an OpenShift cluster or any sort of machine, essentially. One machine is compromised: somebody runs a script as root that's not whitelisted, or they somehow trojanise a file, or they do something that tampers with the state of the system. Immediately the verifier will send out a certificate revocation, using a certificate revocation list, to invalidate that certificate. And if that certificate is used for the TLS connections of a machine, obviously you're going to invalidate those connections, or it could be used for IPsec or any sort of solution which uses a certificate authority.

The other thing it does is send out these revocation events. These are essentially a list of metadata, and you can place whatever metadata you like into them. Typically we have things like IP address, hostname, the parts that have failed and so forth, but you can customise this list. And this is signed by the verifier, so you know that it's an actual real verifier that you trust that is sending this out, and not a hacker pretending to be a verifier and causing havoc by sending out these events. So these events are received by the machines, which then informs them to run the local actions, those Python scripts that we looked at earlier, which can then programmatically do whatever you like: make shell executions, call APIs, or take any sort of determined action that you'd like when a machine fails its state. For some examples: we could remove the failed node from SSH authorised keys, a very simple example, or we could shut down a VPN tunnel, or amend some iptables rules. Whatever you want to do; like I say, this is all outlined in scripts. And then we can call the certificate authority to make a revocation.

There is actually a demo that I put up recently on the Red Hat community YouTube channel, where we monitor three etcd machines that are part of a cluster. We then compromise one of the machines, in this instance etcd2, which causes the verifier to send out these revocation events, and these revocation events tell the leader to remove etcd2 from the cluster. It does some other things as well, like shredding the secrets and so forth that are associated with that particular node. So if anybody's interested, you can watch that video. We also have some examples using Libreswan and Racoon. But like I say, this is Python, so anything that you can do within Python you can do on your machine. It really is up to you; the world's your oyster, essentially. Anything that you want to achieve and automate programmatically, you can do so, and Keylime will trigger that for you. So we try to stay agnostic when it comes to use cases and have users really use their imagination around what they would like to do when a machine fails.

So, a little bit about the project and the community and where we are at present. As I say, we're relatively young, but we're continuously growing. Contributors are organically coming along and finding the project, which is really nice, the volume of commits is increasing, and all the metrics say that it's a nicely growing open source project. We meet weekly, and all of our meeting notes are GitHub issues: we've got a meetings repository, and this works really well because we can link to pull requests and comments, and then GitHub will automatically reference that we've pointed to them, and it really helps tie everything together. And we also have a Gitter. Gitter's like, I guess, a Slack sort of messaging platform, where anybody that's trying to use Keylime and wants to get it to do something, or doesn't understand how something works, or wants a bit of support, can come along and jump on. We're a pretty friendly bunch and we'll help you achieve or discuss a particular use case or something that you'd like to do.

So currently what we're working on: our agent, the agent being the component that runs on the remote machine, is currently in Python, and we're porting that to Rust. The reasons for this are that Rust is arguably a bit more performant, because there's no garbage collection and so forth, and it's also very good for security: the compiler's pretty strict around ownership and so forth, so you get good memory safety and thread safety.

We're also working on vTPM support. At the moment we work with a hardware TPM. There are virtual TPMs, but the problem with a virtual TPM is you're back to that issue of your secrets being on disk, because the virtual TPM is emulated, running within the virtual machine. So you don't get that true hardware root of trust with a vTPM. So the folks at MIT and some interns from Boston University came up with a proposal that allows us to extend the trust from the hardware TPM to many virtual TPMs, without absolutely bombarding the hardware TPM and causing a bottleneck. What we do is aggregate all of the quotes into a Merkle tree, and then we can have a single operation against the hardware TPM which allows us to attest many, many vTPMs without any sort of throughput issues. So this will allow us mass scale, and it extends that hardware root of trust.
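The aggregation idea in miniature, as a Python sketch: hash each vTPM quote as a leaf, fold the tree up to a single root, and only that root needs the one hardware-TPM operation. The real proposal obviously carries inclusion proofs and more structure than this:

```python
import hashlib

def merkle_root(quotes: list[bytes]) -> bytes:
    """Fold a batch of vTPM quotes down to one digest for the hardware TPM."""
    level = [hashlib.sha256(q).digest() for q in quotes]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd levels
            level.append(level[-1])
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

root = merkle_root([b"quote-vtpm-1", b"quote-vtpm-2", b"quote-vtpm-3"])
```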
So this is something that's being worked on at the moment. vTPM support is available in QEMU, it's pretty well established, and there's a patch into the container runtime as well for vTPM support; that's looking close to merging and is in a pretty good state. So we hope that within perhaps as short as a few months this might be there to try out and use. We've got a working prototype that people can look at.

Lots of other stuff as well. We're revamping the UI. We're working on a token system to build the initial stages of multi-tenancy: at the moment, for authentication, we're relying on mutual and server TLS, but we're going to use JSON Web Tokens so that we can scope tokens and have some form of multi-tenancy and access control. For the revocation events, we're going to set up different levels: at the moment what happens is a node will simply fail, whereas we want to have a less serious action level of notify, meaning not necessarily blow the machine off the network, but perhaps monitor it more closely. We're working on, well, we haven't started yet, but we hope to have some sort of repository, kind of like Ansible Galaxy, where people can share revocation scripts that they've developed. These are the scripts that, for example, shut down network connections, terminate VPN tunnels, update iptables rules, or call out to parts of Kubernetes or OpenStack and so forth.

We're packaged into Rawhide, so in Fedora 32 you'll be able to install Keylime. We're starting to look at how we can integrate with Fedora CoreOS and IoT. Keylime, I should mention here, is a very good fit for your sort of cloud computing type scenario of a cloud consumer and a cloud provider, but it's also very good for IoT as well: anything that's remote and outside of your trust boundary is where Keylime comes in. OpenStack and OpenShift integration, we're starting to look at that. And evangelism, which is what I'm currently doing at the moment, which is getting out and talking about Keylime to people.

So how are we doing for time? Okay, we're good. So this is the come-and-join-us type pitch. At the moment we're looking for anybody to help out. It doesn't even need to be an engineer: you might be somebody that's a user, an architect, or you might like writing documentation. Anybody that can contribute is welcome, and as I said previously, we try to be really welcoming and inclusive, and we support new people. We know what it's like to approach a new project and try to get a pull request merged. So I will personally go out of my way to help anybody that's going to contribute to the project. You don't need to be a security guru, and you don't need to understand crypto to a really deep level. We've got lots of stuff around standard logging frameworks, little bugs that need fixing, typos, spelling, all sorts of stuff. So even if you only know a basic level of Python, you can come in, get your feet wet and help contribute as well. And we have other stuff such as Ansible roles that are available, so there's lots of stuff to work on. Our documentation needs a lot of love. So you know there's lots to do, and you shouldn't expect to have to be some crypto math genius or anything like that; that's really only a small part of the system, and that part is pretty much complete now. It's mostly about making things more resilient, robust and mature.

So, we have a website, keylime.dev.
If you go on to there, you'll find out where our GitHub is, where we meet, where our white papers are and so forth; it points you towards everything. And as I said earlier, we've got a community chat channel.

If you've seen anything which you think is perhaps a little bit concerning, maybe don't share it in a public area: we've got a security disclosure list that you can report security vulnerabilities to. We're a young project and we're trying to do things the right way, but as I say, if you've spotted something and it's a little bit questionable, you can always pull me up afterwards, or we have our responsible disclosure system.

Last of all, you can test drive Keylime in a VM. We've got an Ansible role, and it comes with a Vagrantfile; most of you are familiar with Vagrant. This will allow you to vagrant up, and it will call an Ansible hook which installs Keylime along with a software TPM emulator. So you'll be able to log in, run Keylime, and then go through all of the sort of use cases that we have. So we've made it really simple for people to get involved. And we also use this as a development tool, where we mount a local code repository into the virtual machine and run tests and so forth. We've also got Keylime running in a container, so you can run it in a container. The only caveat with these two is that you don't have that hardware root of trust extended to the virtual TPM. But if you just want to play with things, use it as a sandbox, develop some stuff, test some stuff, it's perfectly adequate for that.

So I think at that juncture, which is quite good because my voice is going, I don't know if you can hear that, my voice is starting to give up, I can take some questions if anybody has any.

"I just have a question for you. Is that claim that the community is welcoming going to be put to the test? I just want some proof of that." Yeah, yeah, sure. Good to meet you.

"So, a few questions, I've got some notes. You mentioned we could have a whitelist of files we want to control, and I was wondering whether there's a system that provides the kind of safety checks on those files." Oh, could you say the question into the microphone as well? So the question was around the whitelist and the hashes that we have, and whether there's a system that provides these. Oh, I see, like an RPM header. Yeah, very good point. So that's something that we're actually exploring: there are some patches around taking the hash from an RPM header and then recording that into the integrity measurement architecture, which then extends into the TPM. That's something we're following, because it will make whitelist management a lot easier.

And the other one as well is the immutable operating systems. Obviously they can change with rpm-ostree, but they're pretty static to a degree. So with that structure, where you have a commit history and there's a checksum for a particular commit release of that operating system, we're going to start to look into ways that we can perhaps generate whitelists from those as well, to make them a lot easier to measure. So the sort of vision is that, and I might be using the wrong words here, there's somebody from CoreOS who can pull me up, but if somebody changes operating system version, they load a different commit hash in OSTree, then they would have a whitelist immediately available that they can use to attest that that has happened correctly.
So that way they can check the security integrity, but also the operational integrity: that they've actually got the particular image they wished to have deployed. Cool.

"There's similar ongoing work for an RPM tool to generate IMA signatures for the operating system packages as well; we'll see whether that lands and whether people will use it or not." Yeah, I'd love to explore this stuff, so we should connect and have a look at that. Definitely. Because this is one of the problems with whitelists: you can have a kind of golden set of hashes of your machine state, and what we typically do is take the initramfs and then iterate over all the files with a SHA-256 sum, so we build a hash list. But then what happens if somebody runs dnf update and all of those hashes change? That makes a lot of overhead for an admin. So this is where we want to find these better ways of sourcing whitelists, and it seems like there are a good few things coming along that are going to make that a lot easier. Cool. OK, this chap here.

"What about verification of the LUKS header?" Yeah, so the question was around verification of the LUKS header. OK, I'm not too much of an expert on LUKS, but we do measure Secure Boot, so I believe that has a level of interaction with LUKS. "The issue is, if you change your user passphrase, I can overwrite the header with the old one. At that point, I can use the old passphrase to unlock the hard disk." I see, yeah. Something like that we could look at extending into the TPM. So, you know, if we can hash an artifact, then we can cryptographically extend it into the TPM. So yeah, I'd have to chat with you about that. Like I say, anything like this, it would be great to talk about it, and we can prototype some stuff and see what we can do.

"If I can add to that: stick around afterwards, because the LUKS header can actually be tied into your TPM." Oh, really? "Yes, it's been shipping since RHEL 7.6. I worked on Red Hat disk encryption, and we've been able to tie the TPM and LUKS together since 7.6." Oh, there you go. Yes, we'll stick around.

This gentleman at the front? "Yeah. We've heard so much about containers and integrity, in terms of vulnerabilities that can be introduced by third parties, like a RHEL image in a container that is not the real one. And right now all the vulnerability tools are just looking at the RPMs or package manager versions to check if there is any kind of vulnerability. Do you think that kind of technology could be used for getting a hash and checking every single file in a container, to verify that, apart from having real, non-vulnerable RPMs, every file in the container is secured as well?" Yeah, we could do that using IMA. So we're waiting for IMA to be namespaced; that's something that's happening, so we're relying on a few things landing, a few patches. But yeah, you'll be able to build a golden state of hashes of how you expect that container image to be, and then that will be monitored.
And then if somebody changes those, for example a virus or malware or something overwrites certain files, you know, it could be your Python site-packages or your Go binaries or whatever, then you'll be able to remotely know that that container has been compromised. And then, using the local actions revocation system that we have, you can make calls into Kubernetes or a container runtime to fail a cluster, or a pod, or just a single container. So that will be possible, but we just need a few patches to land. It's looking promising; they're making good progress.

Okay, five minutes left. Any other questions? Cool. Okay. So as I say, I'll be here for the rest of the day, so do come and grab me; there's some interesting stuff to talk about. And thanks for your time. I think I've been straying outside the camera a lot; I tend to do that, I tend to wander around a lot, so apologies to anybody watching. And thank you.