Hello, my name is Deven Bowers. I'm a software engineer at Microsoft; I've worked on code integrity systems in Windows since I graduated in 2017, and I started Linux kernel development this past October. I'm here to present on Integrity Policy Enforcement, otherwise known as IPE, an upcoming LSM which seeks to solve the problem of code integrity. I'll also be explaining how IPE can be used to achieve full system verification for locked-down systems. Initially I'm going to talk about the motivation for IPE: why we thought a new LSM was necessary, and why existing implementations cannot be extended. This will be followed by a brief introduction to the design of IPE, as well as complications that came up during development and how some of these complications were addressed. I'll finish up with a 20 minute demo and a few comments on our plans for future work on IPE. Before we get started, I just wanted to give a shout-out to these people, who participated in the design of IPE from the beginning, or gave a large portion of their time to review the early drafts of IPE and teach me some of the nuances of Linux kernel development. Also a special shout-out to Jaskaran Khurana, otherwise known as JK, who pioneered the initial versions of IPE and laid much of the groundwork for what we have today. So, quickly going over the basics: code integrity is the concept that code hasn't been tampered with in transit between its source, be it the build machine or distribution by an application developer, and its execution. In other words, files are guaranteed to be identical to when those files were built. This is a wonderful property for defeating a lot of low-effort attacks like malware being directly executed, LD_PRELOAD, tampering with a binary, or even ptrace attacks.
Additionally, this concept can be generalized to more than just executable code; it can be applied to high-value files like configuration files that could be vulnerable to file system attacks. So how do CI and MAC interact with each other? Well, CI performs a complementary function to MAC. MAC systems require file metadata to make their decisions, which implies that they have a dependency on that file's metadata not being altered. CI, in theory, fulfills this dependency by enforcing file integrity, encompassing the entirety of the file, including metadata. In other words, CI provides the integrity that MAC assumes. So let's design a locked-down system. Starting at the bottom layer, we're going to protect the kernel and the boot loader through a verified boot solution, something like U-Boot verified boot or Secure Boot. We can also add in the kernel lockdown LSM as well as LKRG for some additional hardening. After that, we move a bit up the stack and select a way to measure and attest the state of the device. The natural choice here is IMA. Now, at our final layer, we need to select a MAC system to encompass the security policy for the whole device, and a CI system to ensure that MAC functions correctly. We have many options for MAC: AppArmor, SELinux, etc. For CI, all we have is IMA. Additionally, we could lock things down a bit more at the file system layer through fs-verity, dm-verity, or authenticated Btrfs, which provide a measure of integrity verification. So why not use IMA? IMA selects its files through metadata and verifies the content, not the metadata, of the file. If we're trying to get around the CI block, there's an easy solution: we offline-mount the file system, change the metadata to bypass the policy, and we've now pwned the CI system. IMA solved this problem through the Extended Verification Module, otherwise known as EVM. EVM introduces additional metadata to secure the metadata of that file.
The issue here is: what happens when we offline-mount the file system, change the EVM metadata, and then change the IMA metadata? The obvious solution is that we just appraise everything. Assuming we care about enforcing integrity checking on reads, that isn't realistic. There's going to be some subset of files that are not integrity checked, with the very nature of those files making it impossible to fix values to them. Logs, for example. The more glaring problem is that it introduces a circular dependency with the MAC system: CI is assuming metadata from MAC, and MAC is assuming file integrity from CI. This falls apart quickly. On top of direct metadata attacks, can we really predict the number of ways an attacker can manipulate the file system to change the metadata? Bind mounts over files with the required metadata to skip policy, hard links. In general, the file system is a very fragile thing controlled by user land. We really can't trust it to make any decisions. And what about memory attacks? An mmap read-write followed by an mprotect to executable would be completely uncaught by IMA, and you could do that fairly easily. What about anonymous memory? These kinds of attacks are completely unaddressed. So, can we do better? The file system is scary, so can we remove it as a factor in evaluating what's allowed to run? Can we remove user land's ability to manipulate the outcome of the policy? Finally, there's so much existing work with integrity systems in the kernel already; can we just use one or all of them as the way we check the integrity of the system? This leads us to the core design goals of IPE: enforcing user-defined system integrity requirements, separating integrity mechanisms from the policy mechanism, removing the dependency on file system metadata, and enforcing a hard security boundary between user space and kernel space. The goal of customizable integrity requirements leads us to a policy. So, what makes a good policy? Ideally, a policy is intuitive.
People should be able to understand it without looking up technical terms or being extremely versed in the nuances of a specific system. The second requirement is that a policy should be diagnosable. When a policy goes wrong, a layperson should be able to diagnose the issue without requiring a developer or an expert in the system itself. In simple terms, it shouldn't require a PhD or a specialization to understand and utilize a policy, unlike web development and assembly. To achieve an intuitive policy, there need to be a few obvious characteristics. The policy language needs to be simple and easy to read, so we decided pretty quickly that the policy should be interpreted from top to bottom, use the newline to delimit rules in the policy, and use key=value syntax for the individual items within our rules. The next choice was how to abstract the policy rules for the end user to establish control over the kernel in a simple way. Initially, we considered using the LSM hooks as the operations to be controlled. However, we discarded this for fear that it would become too complicated for an average admin to understand, with all the nuances within the kernel. It also had drawbacks for the future: if some new system call allowed execution through a different hook than we expect, then policies would need to be updated to handle the new kernel functionality. To this end, we decided to go with a logical mapping of what we intuitively expect to control; in this case execution, which could come through mmap with execute permissions, execve, mprotect with execute permissions, or whatever comes in the future. The next choice for the policy was what it should do if no rules match. It's nearly impossible to write a policy so exhaustive that it covers the whole system, so this is important. Our first draft was deny by default, which was great for execution rules, as this encompasses the entire system.
But the rules were confusing to author for other scenarios, like enforcing reads of verified files for specific cases. The next attempt was the inverse, allow by default. This was great for the verified-read scenario, but fell a bit short in the execution space. So from there, we decided to borrow a bit from PEP 20, "explicit is better than implicit", and have a user-specified default per logical mapping. This made intuitive sense based on what the policy was trying to achieve. So, on to our next requirement: the ability to diagnose policies. We have plenty of experience to draw on from the other members of my team; I myself work on the Windows side of CI, and we have had plenty of problems because Windows CI policies are not diagnosable. The reason they're not diagnosable tends to track back to the decision to use an intermediary binary format, which cut down on the parsing code in the kernel. This decision led to several things: it requires the maintenance of a serializer, and the binary format can change between versions. Additionally, there is no deserializer, because some of the information is stripped away; what is deserialized may not match the original policy. All of this removes the ability for people to perform self-service diagnosis of policies. So, learning from our past mistakes, we chose to pursue a plain-text policy, removing the intermediary format. This addresses all of the larger issues found in our study of Windows CI policy, but has a few drawbacks. Authoring has fewer guardrails, as a serializer can catch some mistakes and report them to the end user. And ultimately, because the policy is a giant string, it has a larger memory footprint. The other thing we considered from the diagnosis standpoint was the actual evaluation of a policy. One choice would be to read in the full policy, optimize it, and then evaluate against the optimized policy. This was not chosen because it results in difficulty debugging.
Rules may not be evaluated consistently, and evaluation could be a victim of the optimization logic, which is its own class of bugs in code today. Instead, we decided to evaluate the policy as-is. This prevents a certain class of optimizations, as we cannot reorder rules, but it's easy for a human to understand and debug the policies, and authoring policies tends to be the factor worth optimizing for. So when we put all of our design decisions together, we get a policy that looks a bit like this. This is a theoretical example policy that may be typical for a locked-down system that wants to control all of its execution and a subset of reads that must resolve to a verified file. Our second goal for IPE was to divorce mechanism from policy. There are several good systems within the kernel that can be extended with minimal effort to provide some assurance of integrity. So what makes a good integrity mechanism? Well, it's not controllable by user land; if it is, then what's the actual security value in the first place? Secondly, a mechanism should be deterministic: the value resolved by the integrity mechanism should not change between different evaluations. Finally, as a stretch goal, it ideally captures the lifetime of an application. While an application runs, it can do a lot of sketchy things. I've seen binaries download an arbitrary shared library off the internet and try to dlopen or execute it. I've seen a shared library being executed from a data section; in fact, this solution is the highest-upvoted Stack Overflow answer for calling a native binary from a cross-platform .NET assembly. I've also seen a library that sprays an executable code fragment and attempts to map it into an executable page when executed. The point is, if your integrity mechanism can capture the lifetime of normal usage of the application, then all of these should be caught.
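As a hedged sketch of the theoretical example policy just described (the property names boot_verified and dmverity_roothash are the ones used later in this talk; treat the exact syntax as illustrative rather than definitive):

```
policy_name=Lockdown policy_version=0.0.0

DEFAULT action=ALLOW
DEFAULT op=EXECUTE action=DENY

# Executables from the initial (verified-boot) superblock are trusted.
op=EXECUTE boot_verified=TRUE action=ALLOW
# Executables from one specific dm-verity volume are trusted.
op=EXECUTE dmverity_roothash=<hex root hash> action=ALLOW
```

Rules are newline-delimited key=value pairs, evaluated top to bottom, with user-specified defaults per logical operation, exactly as the design decisions above call for.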
Some examples of good integrity mechanisms are dm-verity, fs-verity, and authenticated Btrfs, as all of these fulfill at least the first two bullets. In IPE's first patch set, we support dm-verity as an integrity mechanism. The TL;DR of dm-verity is that files are verified by a Merkle tree; corruption or tampering will cause the block to fail to be read. For IPE's purposes, dm-verity creates a block device structure that's resident in the kernel. Integration with IPE is simple: we store some info in a security blob on the block device structure, and we check it later within the correct IPE hook. Now that we're done with all the supporting factors of IPE, I'm going to go into the two major aspects of IPE itself: the policy loading mechanism and the policy evaluation mechanism. Policy loading in IPE is straightforward. The policy must be signed, with the root of trust established by the system trusted keyring, or else it would be trivial for an attacker to deploy a policy to undermine the security of the device. This keyring is special: it is compiled into the kernel and cannot be modified without rebuilding the entire kernel. IPE uses the PKCS#7 signed-data format, which encompasses both the policy and the signature in one convenient blob. This blob is written to securityfs, which causes IPE to parse the policy. IPE parses the policy line by line, tokenizing it into key=value pairs. IPE looks up each key against a red-black tree of registered mechanisms, also known as properties. The entry for a key is a structure of function pointers, a version, and a name. One of those function pointers is a parse function, which accepts the value half of the key=value pair and stores its result in an opaque void * pointer. If the parse function returns success, IPE continues; otherwise it exits early and terminates the parsing.
On success, IPE stores the policy in the private data of the inode of a new securityfs entry for the policy, where it lies in kernel space, inactive, until it is activated as the active policy. Evaluation is very similar to the parsing function. All of the relevant calls funnel into an LSM hook, which calls IPE's process-event function. This iterates over the policy, where a reference to each mechanism's registration is stored. IPE calls each mechanism's evaluate function, passing in the file pointer. If all the mechanisms of a rule match, then it short-circuits and returns the rule's action. Otherwise, it continues on to the next rule until a match occurs, or until it runs out of rules, at which point it falls back to the default. We ran into a bunch of challenges with IPE across its development. The first issue we came across was the authorization of the initramfs. The initramfs is obviously not a dm-verity volume, but it is typically verified as part of a verified boot stack. To that end, we created our own mechanism, based on the tested implementation of the LoadPin LSM, which causes the first mounted superblock that executes something to be authorized. Once that superblock is unmounted, nothing else can be authorized by this logic. As the initramfs is the first mounted superblock, this all works out; the system just needs to remember to unmount the initramfs to prevent writing to that authorized location. The second problem that came up was MAP_ANONYMOUS. Anonymous memory is inherently incapable of being integrity verified, because it cannot be traced back to anything with a backing that could be used to establish the trust of its data. But there are valid uses of it. One such valid use is dynamic code generation, used primarily in two places that we were concerned about. The first is foreign function invocation, which is used in almost every language in today's systems when calling from one language to another.
libffi accomplishes this by creating a trampoline to the destination library, which it first attempts to do by mapping a region of anonymous memory with write permissions, filling in the trampoline, and then marking it executable. If we have a MAC system with no execmem allowance, this will fail, because writable memory can never be marked executable. Additionally, it fails under IPE, because there is no file backing. When that fails, libffi attempts to write an executable code fragment to the file system and then map that file as executable. This fails under IPE, as it doesn't trace back to anything verified. This cross-language calling is a pretty desirable feature for any developer; we're looking at addressing this problem through a separate series of patches to the kernel. Another pain point worth mentioning is GCC closures, which require the same form of trampoline to function properly. Script interpreters also represent another challenge, as these files are opened not with +x but in fact with read permission, so they are not subject to IPE's execution rule set. Fortunately, the community has already realized this issue and has started the process of creating a potential solution through O_MAYEXEC. Now, on to the demo. The first thing we're going to do is generate a self-signed certificate to establish IPE's root of trust. We're going to use OpenSSL to quickly create a self-signed certificate with a private key that we'll use to sign things later. After that's done, we're going to move on to kernel setup. We're going to open up our kernel repository, set the default configuration for x86_64, and edit it slightly. From here we're going to enable some kernel configs. The first kernel config we'll enable is dm-verity. The option is under Device Drivers, Multiple devices driver support (RAID and LVM): Device mapper support should be enabled, as well as Verity target support, and we're also going to enable Verity data device root hash signature verification support.
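The certificate step might look like this (file names, subject, and key size are illustrative, not taken from the talk):

```shell
# Create a self-signed certificate and private key for IPE's root of trust.
# The certificate is what later gets compiled into the system trusted
# keyring; the private key is used to sign root hashes and policies.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj "/CN=IPE Demo Signing Cert" \
    -keyout ipe_key.pem -out ipe_cert.pem
```

Keep the key file somewhere safe; everything signed later in the demo depends on it.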
After that, we're going to make a quick stop in File systems. We're going to enable FUSE as a module, which we'll use later in order to demonstrate how to block or allow kernel module loads. And we're also going to enable the SquashFS file system driver to allow us to mount our SquashFS. We can now enable IPE. We're going to enable securityfs as a dependency of IPE. We're also going to disable SELinux and IMA, just for the purposes of this demo. Then we navigate to the Integrity Policy Enforcement menu, enable it, go into the submenu, and enable all the properties. The policy to be applied at system startup is going to be left blank; we'll come back to that later. Our final option is in the Cryptography menu: we go all the way down to the bottom and compile our certificate into the system trusted keyring. This will establish IPE's root of trust for all of its cryptographic verifications. With our config complete, we now start our kernel compile. We'll wait a couple of seconds before we continue, just to ensure that everything is going to work out. Looks like it is. So we're going to split off a pane, and we're going to create a dm-verity volume that we'll use in the demo. The first thing to do is create a SquashFS of our demos folder. It's pretty easy: the package is squashfs-tools on most distros, and you just type mksquashfs, the folder you're trying to compress, and then the output file. We now need to set up our SquashFS volume with dm-verity. We're going to run veritysetup format on our SquashFS volume. This will generate the hash tree used by dm-verity, as well as output the root hash to standard out. We're going to save the root hash by copying it and echoing it with the -n parameter to a file. The -n parameter is very important, because it strips the newline from the output.
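A sketch of those image-preparation steps (requires squashfs-tools and veritysetup from cryptsetup; file names are illustrative and the root hash placeholder is not filled in):

```shell
# Build the SquashFS image and its dm-verity hash tree.
mksquashfs demos/ demo.squashfs
veritysetup format demo.squashfs demo.hashtree

# veritysetup prints the root hash; save it WITHOUT a trailing newline,
# since the signature later covers the exact bytes of this file.
echo -n "<root hash from veritysetup output>" > demo.roothash
```

A stray newline in the root hash file is the classic way this step goes wrong, hence the emphasis on echo -n.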
Then we're going to sign it using OpenSSL smime, pointing it at the key and the certificate that we created way back when. We do all of this twice, so that we have two separate hash trees and two separate root hashes, and can show authorization and denial by root hash. It's now time to create our IPE policies. We're first going to create a policy named boot. We're going to give it a policy version of zero. We're going to allow everything that we haven't considered. Our executes are all going to be forced to come from the initial superblock on the device; that's the property boot_verified. Our kernel modules will also be required to come from the first superblock on the device. Our next policy is a continuation of the first: we use it as a template and add another rule. This rule uses the dm-verity root hash property, which allows us to specify an individual dm-verity volume to either execute binaries from or load kernel modules from. The root hash we'll use is the root hash that we saved in the earlier step. Our third policy is a continuation of the second. We're going to allow any signed dm-verity volume, so anything that was mounted with the root hash signature argument provided by veritysetup and that passed verification. And we're also going to set up a revocation of the previous root hash. What that means is that anything being executed, or kernel modules being loaded, from the volume with the previous root hash will no longer be allowed, whether it's signed or not. We set up this revocation by moving our root hash rule up to the top, since IPE policies are evaluated from top to bottom, and then we switch that rule's action from allow to deny. This deny acts as a short circuit: it prevents evaluation of any other rules, and as a result we deny execution from that volume wholesale.
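A hedged sketch of what that revocation policy might look like (syntax illustrative; the revoked hash is elided):

```
policy_name=SignedAndRevoked policy_version=0.0.1

DEFAULT action=ALLOW
DEFAULT op=EXECUTE action=DENY
DEFAULT op=KMODULE action=DENY

# Revocation first: the deny on the compromised volume must precede
# the broad signature-based allow, since rules evaluate top to bottom.
op=EXECUTE dmverity_roothash=<revoked root hash> action=DENY
op=KMODULE dmverity_roothash=<revoked root hash> action=DENY

op=EXECUTE boot_verified=TRUE action=ALLOW
op=KMODULE boot_verified=TRUE action=ALLOW
op=EXECUTE dmverity_signature=TRUE action=ALLOW
op=KMODULE dmverity_signature=TRUE action=ALLOW
```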
The scenario driving this policy is that we signed something incorrectly and want to revoke trust for that volume. We're also going to bump the version number up to 0.0.1, so that none of the existing policies can be rolled back to. The fourth and final policy we're going to make is very simple. It's a policy comprised of two rules for execute and two rules for kernel modules: boot_verified=true, so anything from the initial superblock, and dmverity_signature=true, so any dm-verity volume that was mounted with a signed root hash will be allowed to execute binaries or load kernel modules. With all of our policies written, we now sign all the policies. We do this once again through OpenSSL smime. This is almost identical to the way we signed our root hashes; however, it's important to note the inclusion of the -nodetach flag, which specifies that it is not a detached signature but in fact an enveloping signature. What I'm going to do now is mount my root file system and copy all of our artifacts, our SquashFS images, hash trees, root hashes, signatures, and our policies, into my root file system so that we have access to them when we boot QEMU later. And it also looks like our kernel is done, so we can start getting ready for our demo. As QEMU starts up, the first thing we're going to do is load all the policies into the kernel. This is done simply by writing the content to a certain file in securityfs, then marking one of those policies as active. Then we're going to run through some binaries that show various ways that IPE catches potentially unverified binary loads. So, as I just said, we're going to load all of the policies into the kernel. We do this by writing the signed message into securityfs, at ipe/new_policy.
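A hedged sketch of the sign-and-load step (key, certificate, and policy names are illustrative, and the securityfs path is my reading of the talk; the placeholder policy here exists only to have something to sign):

```shell
# Generate a throwaway signing key and cert (created earlier in the talk).
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=IPE Demo" \
    -keyout ipe_key.pem -out ipe_cert.pem 2>/dev/null

# A minimal placeholder policy.
printf 'policy_name=boot policy_version=0.0.0\nDEFAULT action=ALLOW\n' > boot.pol

# Sign it as an enveloping (attached) PKCS#7 blob: note -nodetach.
openssl smime -sign -in boot.pol -signer ipe_cert.pem -inkey ipe_key.pem \
    -binary -noattr -nodetach -outform der -out boot.pol.p7b

# On an IPE-enabled kernel, the blob is then loaded via securityfs:
#   cat boot.pol.p7b > /sys/kernel/security/ipe/new_policy
```

Without -nodetach, openssl produces a detached signature that does not carry the policy text, which is not what IPE's one-blob loading interface expects.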
We do this four times, to load all the individual policies. This does not mean any of the policies are enforced; in fact, they are just waiting to be activated before they are enforced. In addition, when we load a policy into the kernel, we get an audit record that shows the policy name, the policy version, and the flat SHA-1 hash of the blob that was loaded, which includes the signature and so on. Once IPE policies are loaded into the kernel, they get their own node under securityfs, at ipe/policies/, under the policy name. There are two files inside that directory: raw, which contains the original signed blob that was uploaded to new_policy (or whatever is stored in the kernel at the current time, in the case of updates), as well as content, which is the plain-text content of the policy inside the signed blob. With our policy loading out of the way, we're now going to open our dm-verity volumes the way dm-verity expects, and then mount them on the file system to get at the binaries we're trying to execute. The way we do that is with veritysetup open, passing the block device, in our case demo.squashfs; then the hash tree, demo.hashtree; then the root hash without the newline, which we get by cat'ing demo.roothash; and then the optional root hash signature parameter with the signature file, in our case demo.p7s. Once again, we do this two times, because we want two separate volumes with two separate hash trees and two separate root hashes for the purposes of our demo. veritysetup then creates a mapped device under /dev/mapper with whatever we passed as the name; we used demo and demo2, so we just mount /dev/mapper/demo on a particular directory. With all the setup out of the way, let's look at the demo files that we're going to be running as part of this experiment.
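Before the file walkthrough, the open-and-mount sequence just described, as a sketch (requires root and the dm-verity support enabled earlier; argument order follows veritysetup's data-device, name, hash-device, root-hash convention):

```shell
# Open the verified volume, checking the root hash signature, then mount it.
veritysetup open demo.squashfs demo demo.hashtree "$(cat demo.roothash)" \
    --root-hash-signature=demo.p7s
mount /dev/mapper/demo /mnt/demo
```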
The first one we have is exec, a simple binary; all it does is essentially the equivalent of hello world. Then we have kmodule, which contains fuse.ko, the module we compiled earlier during the kernel compilation; we're just going to try inserting that. Then there's ld_preload: this is a library that overrides the rand function to always return four, and we're going to see if we can block it from being loaded. The next one is lib, a simple binary that's linked against another library; we're going to demonstrate that if that library is unverified, it won't be allowed to load and the program will terminate. Then mprotect, which does one of two things: if we don't pass a first argument, it does an anonymous memory mapping, which will always be rejected by all of our policies, and we'll see why in a minute. If we include a file, it first maps that file read-write, and then mprotects it to executable. Finally, we have a script, a simple Python script, with which we're going to show one way that it's subject to IPE policy and another way where it completely circumvents IPE policy. First, we're going to show that because IPE has no active policy, it's allowing everything to load; we run through a couple of test cases and show that all these binaries successfully execute. Now we're going to activate the policy boot. As you remember, this is boot_verified=true, meaning anything that belongs to the initial superblock will be allowed to execute on the device. So right here we're going to execute our hello-world binary, and we're going to see that it's blocked, because it is not boot verified. Checking the audit log, IPE created an audit event: the operation is execute, under PID 272, the process was the shell, and it tried to execute a file whose inode is on device dm-0. Checking the rest of it: it was dm-verity signed, right there, so it was from a signed dm-verity volume, and it had a root hash.
That's the root hash. It was not boot verified, because it was not on the initial superblock, so IPE blocked it; it matched the rule DEFAULT op=EXECUTE action=DENY. This time, going through our kernel module, we try to insert the kernel module with insmod. We get permission denied. Let's check the audit event: it says PID 284, the operation is kmodule, the hook was kernel_read, and the process was insmod trying to insert fuse.ko. As we can see, all the properties are consistent with what we had before, and the rule that matched was DEFAULT op=KMODULE action=DENY. Now we try LD_PRELOAD. We're going to use the binary on the boot-verified volume, run it with LD_PRELOAD forcing the preload of shared.so from the dm-verity volume, and we see that the loader is denied loading that library. Checking the audit log, we see that the operation was execute, coming in through mmap from the preload, and the blocked file was shared.so, as we expected; everything else is consistent with the operation before. Here I'm just messing up a little: mprotect is denied because the binary is on the dm-verity volume, which totally makes sense, because the boot policy doesn't let us execute from there. So we copy it to the boot-verified volume and run it, and we see that the anonymous mapping failed with permission denied. Checking the audit record, we notice immediately that there's no audit path name; it's just completely empty. That's because there is no file object when it comes to anonymous memory, so we can't evaluate any of our properties, and it matches the default rule. Now let's try mprotect on a file. We try it with the binary itself, from the boot-verified volume. It works, because both the subject and the target are verified, and therefore it matches our rules. Now let's move on to scripts.
First I try to execute the script through ./, and that's denied. This is because the shell runs execve on the script, which subjects it to the hook that we use to check whether something can execute. The next thing I do is invoke it through the interpreter. This does not get blocked: the interpreter itself is verified, and the interpreter opens the Python script with read permissions, not execute permissions, and there is no equivalent of execute permissions for open. So the script just executes. This is an obvious gap in IPE, and if you're interested in helping us close it, I recommend you look at LKML for the O_MAYEXEC patch series, which is capable of addressing this problem. Now let's switch policies. We're going to switch to the root hash policy that we built before, which allows anything from one very specific dm-verity volume; I believe we have mounted that under signed. We just run through the same things: exec is now allowed to run, because it's launched from the volume with that root hash, and the kernel module can be inserted. I have to remove that kernel module, because we're going to use it again later; I'm going to use rmmod -f. Please don't do this on actual systems, because it taints the kernel, as the warning says. Now we LD_PRELOAD with the root hash policy, and we notice that it now succeeds: we get that statistically unlikely outcome, because it loaded the overridden rand function. Let's try mprotect. mprotect of anonymous memory still fails, as expected, because if we look at the audit event, as I scroll up here, we can see that it still has no file; no file means it can't be traced to a boot-verified volume or a dm-verity volume, so it matches the default rule. The mprotect of the file itself works, because the subject is dm-verity verified. Scripts still work the same way.
We're going to quickly switch policies to the policy that trusts everything by signature, which is called signature. So, sysctl ipe.active_policy=signature... oh wait, no, I want to switch back to the root hash policy first, to demonstrate that it was authorizing just an individual root hash. So we go to sign2, which has a different root hash. We try to execute something, and it's denied. We try to execute something on the original signed volume, and it's allowed. We check the audit event for our denial, and it shows that the file belongs to sign2: the root hash in it is different from what we had before; before, I believe, it started with a 7, and here it starts with a 5, so we match our default rule. All right, so now let's activate our signature policy. Both of the volumes will be trusted under this policy. So we go to signed and execute something off of signed: exec works. sign2: exec now works, where previously it did not, because we were trusting one single root hash, but now we are trusting every signed root hash. Both volumes are allowed to execute binaries and insert kernel modules with one rule. Now, for the final part of our demo, we're going to activate the signature-with-a-revocation policy. This is everything signed, except we explicitly deny that first dm-verity volume. So if we go to signed and execute, the execute fails, and rightfully so. If you check the audit log, we see that the rule that was matched was our denial rule; we have to go all the way to the right for this, and there it is: the rule dmverity_roothash equals blah blah blah, action equals deny. So now let's try preloading something off of that revoked volume. We run with LD_PRELOAD pointing at shared.so on that signed volume, and it fails; the preload is ignored. Next, we try to mprotect something off of that volume.
And it fails, because the subject also doesn't match policy. If we look at the journalctl audit logs, we'll see both of them showed up, one on the mmap hook and one on the mprotect hook. And we'll try to insert the kernel module. It fails for the same reason; if you look at the event, the hook is kernel_read and the operation is kernel module. If we go to signed2, we can insert modules just fine, because that's signed and it's not revoked, and we can also execute things off of signed2. So now that we've patched our security vulnerability with our newly updated policy, let's make sure that we can't roll back. We're going to try to activate our prior policy, signature. It fails with invalid argument, and we'll check why. If you remember, when we created the new policy we bumped the version to 0.0.1, but if we look at the policy version of signature, it's 0.0.0. This is because IPE does a version check when you try to activate a policy, which prevents rollback attacks. So now that our older policies are completely useless to us, let's delete them. All we do is echo the name of the policy into IPE's securityfs delete node, and the policies are unloaded from the kernel and freed. There are requirements around this: it can't be the active policy, and it can't be what's known as the boot policy, which we'll go into very shortly. But as you can see, the policies are cleared and can no longer be queried through the securityfs interface. Now we'll go into some of the sysctls that IPE provides. The first one is sysctl ipe.enforce. This switches IPE between permissive mode and enforce mode. This is very similar to SELinux: in permissive mode the policy is ultimately not enforced at the end of evaluation, but the audit records are still generated, allowing you to test policies in a relatively unobtrusive manner, without disturbing normal operation, while you refine them. You can show all the sysctls with sysctl -a | grep ipe.
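The rollback protection just demonstrated boils down to a simple version comparison at activation time. Here is a minimal sketch of that logic, assuming dotted numeric versions like the demo's `policy_version=0.0.1`; the function names are mine, not the kernel's.

```python
# Minimal sketch of a policy-version rollback check, assuming dotted
# numeric versions (e.g. "0.0.1"). Illustrative only, not IPE's code.

def parse_version(v: str) -> tuple:
    """Parse a dotted policy version like '0.0.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def can_activate(candidate: str, deployed: str) -> bool:
    """Refuse to activate a policy older than the one already deployed --
    this is what makes the previously deployed insecure policy useless."""
    return parse_version(candidate) >= parse_version(deployed)

assert can_activate("0.0.1", "0.0.0") is True   # the updated policy deploys
assert can_activate("0.0.0", "0.0.1") is False  # reactivating old "signature"
assert can_activate("1.0.0", "0.9.9") is True   # tuple compare, not string
```

Comparing tuples of integers rather than raw strings is what makes "0.10.0" correctly rank above "0.9.9".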
They're active_policy, enforce, strict_parse, and success_audit; the description of each can be found in IPE's documentation. So a little while ago I talked about a boot policy. A boot policy is a policy that is applied at startup, and to walk you through how to create one, we're just going to go back to our Linux kernel compilation process. We go back to Security, Integrity Policy Enforcement in menuconfig, and there's that option we skipped: the integrity policy to apply at system startup. Point that at your plain-text policy, which should apply on enforcement; in our case, we're going to use the boot policy, so we only trust boot-verified items when we bring the system up. Start the kernel compile, wait for it to complete, then start qemu with our new kernel. When we bring the system up, we see that the active policy is now "boot", and the content of the securityfs node for our active policy matches the plain-text policy that we compiled into the kernel. For the future, we're looking at the ability to indicate specific keys that the verification mechanism should resolve to, as opposed to just accepting any valid signature. We also want to support integrity verification for specific file reads. We're looking at accomplishing this through a userland-provided path to establish the intent of user space, as well as a new flag for openat2, similar to O_MAYEXEC. We also want to incorporate the O_MAYEXEC patch series into IPE to address the script interpreter gap indicated previously. And the final thing on our immediate list is resistance against rollback attacks. Right now, IPE has the option to compile an ideally minimal policy into the kernel to protect early user space. There exists a possible avenue of attack where, before the real policy is deployed in late user space, a previously deployed insecure policy is activated and leveraged to gain a foothold on the device.
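As a sketch, the compiled-in boot policy setup might look like the following. The Kconfig symbol name and the exact policy syntax are taken from IPE's proposed documentation and may differ between patch revisions, so treat this as an assumption rather than a guaranteed interface.

```
# Kernel .config fragment: build IPE in and point it at a plain-text
# boot policy that is compiled into the kernel image.
CONFIG_SECURITY_IPE=y
CONFIG_IPE_BOOT_POLICY="ipe/boot_policy.pol"

# ipe/boot_policy.pol: a minimal policy that trusts only boot-verified
# files, until user space activates a fuller policy after startup.
policy_name=boot policy_version=0.0.0
DEFAULT action=DENY
op=EXECUTE boot_verified=TRUE action=ALLOW
```

Because this policy ships inside the (verified-boot protected) kernel image, it cannot be deleted or swapped out at runtime, which is what makes it a safe anchor for early user space.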
We've noticed this has the potential to affect several policy-based security systems, and we're looking at implementing a generic interface for all of these systems to leverage protection against this attack. So, someone asked for the name of the patch series on LKML that I referenced: that is O_MAYEXEC. It was proposed by Mickaël Salaün, and forgive me if I butcher that name. Another person asked what the upstream status is. For this policy mechanism, dm-verity signatures have been in the kernel for a while; if you meant IPE in general, we're proposed and currently awaiting more review. Someone else asked whether all kernel modules have to be loaded from the first superblock, i.e. from the initramfs. This is all controlled by policy: you can extend your policy to allow kernel modules to be loaded from dm-verity volumes as well. And the last question I got was: do you know about the state of IMA directory appraisal? Unfortunately, I don't. The last I heard, in 2019, there was some to-do item regarding directory appraisal, but I'm not an IMA developer, so, sorry. Also, sorry, my video is down, in case anybody's wondering why you're looking at a black screen. All right, any more questions, anybody? Okay, well, I guess we can call it here. Thank you, everybody.