So, good morning, everyone. My name is Roberto Sassu, I'm a security engineer at Huawei, and I work mostly on IMA and the TPM. Today I would like to present a simple protocol for remote attestation, where simple means that it is simple to understand for the user; the solution itself is quite complex, but I think the complexity does not come from calling the TSS. The real problem is to define a state in which the system can be trusted to behave as expected, to perform its task in a good way. First I will talk about the problem we are trying to solve, then some background information, our proposal, and the conclusion.

Remote attestation is the process of verifying whether a system can accomplish its task as expected. But evaluating operating system integrity is very complex, because reference measurements and public verification services are not available, and because it is unclear which information must be included in the measurement list and how this information should be verified. Remote attestation is also difficult to integrate into a product, because a dedicated server must be added to the infrastructure, and we need to implement two separate protocols, one for remote attestation and one for secure communication.

In trusted computing, integrity evaluation is the process of determining whether a system or application behaves as intended by the software developer. Measurement starts from the core root of trust for measurement, which is usually in the CPU, and continues up to the application. At the operating system level, measurement is done by the Integrity Measurement Architecture (IMA), and the evaluation is done by comparing the actual measurements with reference measurements. We have two types of remote attestation, explicit and implicit.

With explicit remote attestation, we start from an initial value of a Platform Configuration Register (PCR), which stores the system state.
The PCR is initialized to zero. When IMA starts to perform measurements, it calls the TPM PCR extend operation, and the current PCR value is replaced with the hash of the current value concatenated with the digest of the measurement. This is done for each entry added to the measurement list. Later, a verifier that wants to attest the system contacts a remote attestation agent running on it, which provides both the measurement list and a signed PCR quote. The verifier replays the same operations that were done by the TPM to check whether the measurement list was tampered with; if the list is intact, it compares the measurements inside with reference measurements. If everything is recognized, the verifier concludes that the system is good.

Implicit remote attestation is a bit different. First we generate a TPM key, which is sealed to the desired system state. The PCR is updated as soon as IMA adds a new measurement to the list. Remote attestation then consists of establishing a secure communication channel between the management system and the local system: only if the current state is the same as the state used in the sealing policy of the key can the secure channel be established. The attestation is implicit because the fact that it is possible to establish communication at all means the system was in the desired state. The remaining task for the verifier is to check that the state set in the sealing policy is a good state. So implicit remote attestation is more suitable for integration into a product, because the product already uses secure communication, and it is easy to integrate the attestation into this existing protocol.
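The extend-and-replay mechanics just described can be sketched in a few lines. This is a minimal illustration, not real TSS code: the single SHA-256 PCR bank and the bare-digest log format are simplifying assumptions.

```python
import hashlib

PCR_SIZE = 32  # one SHA-256 bank, initialized to all zeros


def pcr_extend(pcr, digest):
    # New PCR value = H(old PCR value || measurement digest)
    return hashlib.sha256(pcr + digest).digest()


def replay(measurement_list):
    # The verifier replays every extend operation starting from zero;
    # the result must match the quoted PCR, otherwise the list was tampered with.
    pcr = b"\x00" * PCR_SIZE
    for digest in measurement_list:
        pcr = pcr_extend(pcr, digest)
    return pcr


log = [hashlib.sha256(b"/bin/bash contents").digest(),
       hashlib.sha256(b"/usr/bin/sshd contents").digest()]
quoted_pcr = replay(log)
```

Note that the final value depends both on which digests appear and on their order, which is exactly why the IMA PCR is normally unpredictable.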
What we require is to switch from a software key to a TPM key, plus an additional verification of the certificate, which tells us whether the key used for TLS is a TPM key sealed to a good state. The problem is that the IMA PCR is not predictable, because the final PCR value depends on which files have been measured and also on their temporal sequence. So our solution: we want to implement implicit remote attestation, and to do this we have to make the IMA PCR predictable. We introduce two new concepts. One is the IMA digest white list, and the second is an enhanced version of the Policy-Reduced Integrity Measurement Architecture that handles mutable files.

Normally, when a file is accessed on the system, IMA always puts the measurement in the measurement list. With our approach, at kernel initialization time we preload a white list into the kernel with all the reference measurements, for example the digest of /bin/bash. When bash is accessed, IMA first checks whether the actual digest is in the white list; if it is, it does not add the measurement to the list. Otherwise, if the file is unknown, it is added to the list. In this way, if all files are recognized, the only measurement we should have is the measurement of the white list itself. Unfortunately, some files cannot be recognized: the mutable files. These files change while the system is being used, so we cannot really compare them with reference measurements. An alternative approach is not to look into the content of a mutable file, but instead to look at which processes write mutable files: if all the processes that update mutable files are in a good state, then we can conclude that the mutable files are good, and we can exclude them from measurement.
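The white-list lookup just described amounts to a set-membership test at measurement time. This is only a sketch of the idea; the function name and in-memory structures are invented for illustration, not the kernel implementation.

```python
import hashlib

# Reference digests preloaded at kernel initialization (illustrative values)
white_list = {hashlib.sha256(b"/bin/bash contents").hexdigest()}

measurement_list = []  # entries that would extend the IMA PCR


def ima_measure(file_contents):
    digest = hashlib.sha256(file_contents).hexdigest()
    if digest in white_list:
        # Known-good file: no entry is added, so the PCR stays predictable.
        return
    # Unknown file (e.g. a tampered binary): it must appear in the list.
    measurement_list.append(digest)


ima_measure(b"/bin/bash contents")   # recognized, nothing added
ima_measure(b"tampered bash")        # unknown, added to the list
```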
However, if we measure the whole system, the problem is that we may end up trusting also applications which are poorly written and therefore more susceptible to attacks. The system then looks good to the verifier, because all the digests are recognized, but in fact an insecure application may be injecting a malicious sequence of bytes into mutable files, and this can be used to exploit existing vulnerabilities in other processes. So what we propose is to isolate the portion of the system that performs critical operations, and protect it with mandatory access control, for example SELinux or Smack. If mutable files are included in the TCB, the trusted computing base, then we achieve our objective: if mutable files can be written only by the TCB, then mutable files are good. The mandatory access control enforces an integrity policy, such as Biba or Clark-Wilson. The insecure app in this case is left outside the TCB, so it cannot inject a malicious sequence of bytes. The mandatory access control policy and the integrity policy are included in the evidence sent to the verifier during remote attestation. This solution was proposed by Trent Jaeger, and it's called the Policy-Reduced Integrity Measurement Architecture (PRIMA).

About the integrity policy: the Biba model is not very flexible, because it does not allow a high-integrity subject to read a low-integrity object, and does not allow a low-integrity subject to write a high-integrity object. The Clark-Wilson model instead allows a high-integrity subject to read a low-integrity object if the code is robust enough to handle potentially malicious data. So PRIMA seems the right solution to address the issue of mutable files: if we address this, our IMA PCR will be fully predictable.
However, there are some issues, because PRIMA is not very practical. Finding a TCB in the SELinux policy is difficult, because the generic policy takes into account all possible application usage scenarios, so there are a lot of permissions to consider, and it is very difficult to find a portion of the system which is isolated from the rest. And even if we are able to find a TCB inside the SELinux policy, it cannot be reused directly in other scenarios; it must be adapted, because the applications in use may be different. Also, PRIMA does not take offline attacks into account.

So now we come to the part about making PRIMA practical. To do that, instead of considering the whole SELinux policy, we consider only the observed process interactions. The SELinux policy has about 100,000 rules, and we saw that only 2.5% of the policy is really used on a system, so it will be much easier to find a TCB if we consider process interactions. We also want to protect against offline attacks, for example with IMA, PRIMA, and EVM, because one problem we have is that when we reboot the system, we don't know whether the SELinux protection was enabled in the previous boot.

For the reduction of the TCB using process interactions, here is an information flow analysis for the SSH server. The SSH server reads its private key and its configuration file, but it is also able to read the Kerberos configuration file, which according to the policy can also be written by REMD and FTPD. So if we want a TCB without an integrity violation, we need to add these subjects, REMD and FTPD, to the TCB. We could also exclude these subjects under the assumption that the Kerberos file is not used, but that is a manual process requiring a lot of effort.
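The 2.5% figure suggests the reduction can be pictured as intersecting the policy's allow rules with a runtime trace of what was actually exercised. The rule and trace formats below are hypothetical, chosen only to show the idea.

```python
# All (subject, object, permission) tuples allowed by the policy (toy subset)
policy_rules = {
    ("sshd_t", "sshd_key_t", "read"),
    ("sshd_t", "krb5_conf_t", "read"),
    ("remd_t", "krb5_conf_t", "write"),
    ("ftpd_t", "krb5_conf_t", "write"),
}

# Interactions actually recorded at runtime (e.g. by an LSM hook);
# the Kerberos file was never touched on this system.
observed = {
    ("sshd_t", "sshd_key_t", "read"),
}

# Only exercised rules need to be considered by the information flow
# analysis, so the unused Kerberos writers drop out automatically
# instead of requiring manual exclusion.
relevant = policy_rules & observed
```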
If instead we consider process interactions, we check whether the SSH server actually reads the Kerberos file, and if there is no record of the read, we can automatically exclude those subjects, and the analysis is done. So we solved the practicality problem of the information flow analysis: with process interactions, I think we should be able to find a TCB that satisfies the requirements.

The remaining problem to solve is the offline attack. What we want is to know whether mutable files were always protected by mandatory access control. Currently this is not possible, because the EVM key is not sealed to the operating system; it can be sealed only up to the kernel, because the IMA PCR is not predictable. Basically, the change we are making is to modify the sealing policy so that it also covers the operating system: we seal a key that can be unsealed only if SELinux is enabled and the integrity policy is enforced. This means that when we boot the system, we check whether the protection is enabled, and only then can the key be unsealed. If the key is not available, IMA obviously cannot produce a valid HMAC. When instead we have a valid HMAC, it means the key has been unsealed, and therefore the file was created while the protection was enabled. This is what we needed in order to exclude the file from measurement, because we know the file was updated by the TCB.

Now we have the last step to make the IMA PCR predictable. Currently, the validity of the HMAC is not taken into account when deciding whether to measure a file. But now that the key is sealed to the operating system, we can exclude files with a valid HMAC from measurement. So, finally, we have a measurement list which contains only the white list, that is, which files we allow to be accessed, and the integrity policy that is enforced on the system.
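The HMAC-gated measurement decision can be sketched as below. This is illustrative only: the real EVM stores the HMAC in a security xattr and the key is unsealed by the TPM, whereas here the key and metadata format are invented stand-ins.

```python
import hashlib
import hmac

# Hypothetical key: in the design above it is unsealed by the TPM only
# when SELinux and the integrity policy were enabled at boot.
EVM_KEY = b"unsealed-only-when-protection-was-enabled"


def evm_hmac(metadata, key):
    # No unsealed key (protection was off) means no valid HMAC can exist.
    if key is None:
        return None
    return hmac.new(key, metadata, hashlib.sha256).digest()


def must_measure(metadata, stored_hmac):
    # Measure the mutable file only when its HMAC is missing or invalid;
    # a valid HMAC proves it was written while the TCB was protected.
    if stored_hmac is None:
        return True
    expected = evm_hmac(metadata, EVM_KEY)
    return not hmac.compare_digest(stored_hmac, expected)


meta = b"inode=42 uid=0 selinux_label=etc_t"
good = evm_hmac(meta, EVM_KEY)
```

With this rule, mutable files written under protection contribute nothing to the PCR, which is what keeps it predictable.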
So, since we are sealing the key to the OS, we are also able to detect corruption from a previous boot. Suppose we have an untrusted administrator who may try a different type of attack: using an EVM key which is not sealed to the operating system. What we do is include in the measurement list also the parameters we used to seal the key, so the verifier is able to identify that the key is not associated with a good system, and the system does not pass verification. Also, TPM 2.0 allows sealing a key which is internally generated in the TPM, which means the key is never under the control of the administrator unless the system is in a good state. So we generate a good EVM key; this is done on a system which could potentially be compromised, but the TPM is tamper resistant, so the key stays inside the chip and is not available.

In the first boot, we run the good system with mandatory access control protection enabled, so the key can be unsealed, and the system writes a mutable file. But at some point we have an attack. An attack means, for example, that a file was not in the white list, or there is an integrity violation, for example a process outside the TCB trying to write a mutable file inside the TCB. Then the EVM key is deleted, which means the system is no longer able to produce a valid HMAC. So when we reboot, the good system accesses the mutable file, but the file does not have a valid HMAC. This means it will be included in the measurement list, because we measure all files with an invalid or missing HMAC, and the system does not pass verification. The attack did not happen in the current boot, it happened in the previous boot; but because the HMAC was not calculated correctly in the previous boot, we are able to detect it.
So, the last option is that the system was in a good state during the first boot. The system was able to calculate a valid HMAC, and when it rebooted, it is able to read the mutable file with the good HMAC. The measurement list is still good, because it contains only the white list and the good EVM key, so verification is successful only in this case. And as you see, we have only static measurements; mutable files produce no other measurements. This is what we needed for implicit remote attestation, because the key can be sealed to only one state: if the state in the TPM is different, the key cannot be used.

For implicit remote attestation we now have two possibilities for the verification. The target system first creates a certificate signing request, and as part of this also sends the event log and the components that were used on the system. The first option is that the CA, when it gets the certificate signing request, also performs the verification of the sealing policy used for the key; if the sealing policy is acceptable, it signs the certificate and returns it to the target system. Then, when implicit remote attestation is done and we establish a TLS session, the management system receives the certificate from the target system, sees from the issuer that it can trust the CA, and the verification terminates there. The other option is that the verifier also gets the measurement log and the list of components, extracts the sealing policy of the key from the certificate, and then performs the verification of this sealing policy itself.

Now, some information about the implementation. The Infoflow LSM is used to detect the process interactions and the integrity constraints. So we have the SELinux policy loaded, and we have an interaction: the SSH server is trying to read its private key.
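The two verification options differ only in who checks the sealing policy carried with the certificate: the CA at signing time, or the verifier afterward. A sketch with hypothetical certificate fields and policy strings:

```python
# Hypothetical set of sealing policies the relying party accepts:
# key usable only when the IMA PCR matches the white-list state and
# SELinux is enforcing the integrity policy.
GOOD_SEALING_POLICIES = {"pcr10==whitelist && selinux=enforcing"}


def verify_certificate(cert):
    # Option 1: the CA runs this check before signing the CSR.
    # Option 2: the verifier extracts the sealing policy from the
    # received certificate and runs the same check itself.
    return cert.get("sealing_policy") in GOOD_SEALING_POLICIES


trusted = verify_certificate(
    {"sealing_policy": "pcr10==whitelist && selinux=enforcing"})
rejected = verify_certificate({"sealing_policy": "boot-pcrs-only"})
```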
The Infoflow LSM intercepts this request and records the interaction. The interactions are exported to user space, so we have the full list of operations that were performed by the system. Later, to find the TCB of the system, the administrator passes the discovered interactions to the information flow analyzer, which performs the analysis and tries to identify a TCB that meets the requirements of Biba or Clark-Wilson. The output of the analyzer is the list of subjects and objects in the TCB.

During the deployment phase, SELinux is initialized with the policy. Now, when the SSH server tries to read the private key, the first operation we do is to determine whether the subject and the object are in the TCB or not. This is similar to what Matthew Garrett explained: they are using an EVM extended attribute, while we have a list of subjects and objects in the TCB, so we do it in a slightly different way. If the object is in the TCB, then we check the validity of the metadata: we verify that the label attached to the file has not been modified, so that we are sure the label seen by the process is the expected one, and then we enforce the integrity policy, so a subject in the TCB can read only objects in the TCB. Last step: if the subject is in the TCB, since we also have to report which code has been executed as part of the TCB, we measure the code and immutable files with IMA. Mutable files, as I said, we protect with the integrity policy. So in this case we are not using the IMA policy to select which subjects must be measured; this information is provided by the Infoflow LSM, and it is this new LSM which tells IMA which files should be measured.
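The per-access decision just described boils down to a Biba-style check against the analyzer's subject and object lists. The names below are invented for illustration; the real check happens in the kernel with verified file labels.

```python
# Output of the (hypothetical) information flow analyzer
tcb_subjects = {"sshd_t"}
tcb_objects = {"sshd_key_t", "etc_t"}


def allow_read(subject, obj):
    # Biba: a high-integrity (TCB) subject must not read a
    # low-integrity (non-TCB) object.
    if subject in tcb_subjects:
        return obj in tcb_objects
    return True  # subjects outside the TCB are not constrained here


def allow_write(subject, obj):
    # Only TCB subjects may write TCB objects, so mutable files
    # inside the TCB stay good.
    if obj in tcb_objects:
        return subject in tcb_subjects
    return True
```

For Clark-Wilson, `allow_read` would additionally permit a TCB subject to read a non-TCB object through an interface declared robust against malicious input.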
We have some source code for the digest lists, available in our GitHub account; we also built binary packages for easy testing, and we provide an overview of the feature on the Trusted Computing Group developer portal.

To conclude: remote attestation is not widely used because evaluating the integrity of the operating system is very complex, and because of the requirement of adding a dedicated server and an additional protocol to have remote attestation in a product. The solution would be to use implicit remote attestation, but the problem is that currently the IMA PCR is not predictable, and if we want to protect mutable files with a TCB, this is currently not easy because the SELinux policy is very big. So we propose a solution that is comprehensive: mutable files, which today are a problem because they leave unknown digests in the measurement list, are fixed by our approach. Our solution is also more practical, because with implicit remote attestation the integration into a product is very easy. We are using this solution in the FutureTPM European project. That's it.

Question: I was just curious, no arguments that the Fedora SELinux policy is quite large; it's a general-purpose policy, it's meant for everything. I'm curious, did you investigate customizing that policy to shrink it down to alleviate some of those problems?
Answer: I know there are booleans, for example, that allow disabling some rules. I did even more: I tried to modify the security server in order to select only the part of the policy that was queried by the LSM, and we saw that even in this case there were very generic rules. For example, there was a rule that allows the PAM agent to read the log file, but then there was another, more generic rule which allows every domain to read the log file. This makes the information flow analysis very complex, because then you have an interaction not only from the files which should be able to read or write the log file, but also from any other domain, so from any type which is part of the domain attribute.

Question: I guess that was my question: did you consider going beyond booleans and actually replacing some of the policy modules with some of your own that would remove some of these accesses that you didn't want?

Answer: I thought that adding something on top of the SELinux enforcement was easier, because you leave the policy as it is and then you just use the part that you need for the information flow analysis. It is possible to modify the policy, but the problem is also what Matthew Garrett mentioned: we want the solution to be easy for people to understand, and having only a few interactions to analyze makes the solution more practical.

Moderator: More questions? If not, let's thank the speaker.