The TPM has been around for a long time, but only now, after all these years, is it becoming important, because people are actually starting to use it. And it's not just a DRM thing. The desktop adoption is increasing. I mean, all the Chromebooks are using the TPM for secure boot. And cloud services are beginning to provide this as well; I think Azure is the first one that will have it available. I don't really know the IoT space, but I've heard the TPM is becoming more important there too, for authentication. Here's some history. As I said, back in 1.0 there was basic key management, attestation, hashing, that kind of basic functionality. With 1.2 we got direct anonymous attestation, which is different from CA-based attestation in that you can have a single public key for a group and multiple private keys associated with that group, so you can attest to a service without identifying yourself. And TPM 2.0 has a lot of new features. These are the ones I've been dealing with; there is probably much more in it. I think the most important one is algorithm agility, not only because SHA-1 is getting old, but because Russia and China want to use their own encryption algorithms. Then there's policy-based authorization, which is kind of an enhanced version of just being able to seal stuff with PCRs and passwords: you can basically write a logical expression of the state the TPM must be in in order to unseal a secret. And there's also symmetric encryption exposed in the API. Okay. These slides were contributed by Peter. This one shows how many patches we have released; it's been quite steady, with some spikes here and there. That big spike is the TPM 2.0 support in 4.0. And this one shows the number of lines since the 2.6.11 release, so our size is rapidly increasing.
And the discussion is heating up all the time, mainly because of TPM 2.0, I think, and the interest in measured boot and things like that. People are actually starting to take advantage of the TPM. These are the features that we've been working on during the last couple of years. I listed from that time period because, when I looked at previous security summits, there hadn't been a TPM update for a while. When I started to develop the TPM 2.0 support, I think like 90% of the work was actually cleaning up the subsystem and fixing bugs. There were some race conditions with user space, especially related to sysfs attributes, that needed to be fixed before we could sanely add the TPM 2.0 support. We have now moved from the misc device class to our own device class, tpm, in order to have stable sysfs attributes. And the TPM 2.0 support is starting to be quite stable, except for IMA. We have trusted keys: you can seal them with passwords and PCRs, or with any kind of policy you want to define. And then there's the virtual TPM support, which is quite similar to pseudo-TTYs. We have this /dev/vtpmx device that you can use to create these virtual TPM devices: you get a file descriptor when you call the ioctl on that device, and you can have an emulator or whatever on the other side. I think the use case in production would be a cloud service, where you get the file descriptor from maybe a separate server for TPMs. This virtual TPM support — well, it's not really for virtualization; it's more for containers. It's almost exactly like the pseudo-TTY device: when you call the ioctl, you get a file descriptor and a device node, and then you can have an emulator on the other end, and when you open the device, the traffic goes to the emulator. Another larger change is the multi-backend support for the TIS driver. Previously, with 1.2, there was only the memory-mapped I/O version available, standardized by the TCG.
But recently the TCG has also standardized SPI- and I2C-based ways to write to the TIS registers. And there's been a lot of new hardware support. Then, the future developments. The first one is not really high priority, but it would be good to do at some point: making it possible to compile out the 1.2 support for embedded devices, in order to reduce the attack surface and the size of the kernel. The second one is something Peter suggested, and I kind of agree: we should remove all the TPM 1.1b drivers from the subsystem at some point, because every time we update the framework we need to mirror the changes to these drivers, and nobody has the hardware, so we cannot test any of those changes. The third bullet is kind of trivial: we now have the framework for multiple backends for TPM TIS, so someone should just write the code for the I2C backend. And then there's this internal access broker that I talked about yesterday: instead of having a daemon in user space, we would do the session swapping in the kernel. That's something I'm planning to work on next. And I think these last two are kind of related — they have to be implemented at about the same time, or maybe the event log support has to be implemented before we can add the algorithm agility to IMA. I promised to describe my plans for this internal access broker. The plan has been kind of in my head, and last night I quickly tried to draft it into three slides, so there might be some holes, but I'm going to present it anyway. The basic idea would be that when you boot the system, there's a root session that is there for the whole boot cycle. When you open /dev/tpm, by default it will always use the root session, and the keyring would always use the root session.
A session here means a collection of transient objects; that's the unit of swapping. Then we would add a new ioctl — actually there don't exist any ioctls for the TPM yet — so that when you call this new-session ioctl, it creates a new session that is completely isolated from the other sessions. It's a one-shot call, so that we can have a very thin daemon in user space that just does some access control and sends the file descriptor through a Unix socket to a client, and you cannot escape the created session. The TPM 2.0 spec defines the necessary tools to actually do the swapping: there are context save and load commands for saving and loading transient objects. And there's the get-capability command which, for each command, gives the number of transient handles taken by the command and returned in the response, so we can use that data when we virtualize the handles. At least for the first implementation, I'm planning to use one shared memory file per session for swapping the data out when the session is switched. Even for the simplest implementation, we need this virtual-to-physical handle substitution. For an efficient implementation, you would probably want to swap stuff lazily, so that you only swap something out when the space runs out. For a trivial implementation, you might think that if you just swap everything out and load everything back in every time, you wouldn't need this virtual-to-physical mapping. But I'm still going to use it even for the most trivial implementation, because the specification does not give any guarantees about the order in which transient handles are allocated. So when we have a session open and we create a transient object, the virtual handle and the physical handle are set to the same value at creation.
But then some other session comes in, the sessions get swapped, and when the object is loaded back, the physical handle might be different, so we replace the value of the physical handle with the value it got. Then we use this mapping to substitute the handles as user space sends commands and as we receive responses from the TPM. [Audience question.] I don't think so, because I keep this constraint here that, for one session, all the transient objects must fit into the TPM RAM at the same time. I don't think we have that use case, but we can take it offline; for the first proof-of-concept implementation I can try this approach, and maybe when we test it we will find such cases, and then we can define a different namespace for virtual handles. At this point I don't believe there could be such a case, but it's still in the proposal phase, so who knows. Okay, and there is at least the get-capability command with the handles parameter, which returns handles in the response body, so we need special handling for that command — we actually have to parse the body. But there are at most like three commands that need such special cases. Is [name] in the audience? Do you remember? I think there were like three commands. Okay, and then I have a sequence diagram. This would be the basic flow. I don't really know what the TSS message would be, but the basic idea is that we have this really, really tiny resource manager daemon that basically only does access control. The application asks for a new session; the daemon probably does some access control there — is this UID or Smack label or whatever allowed to have a TPM session? If it is, it opens the TPM device, calls the ioctl so we have a new session, sends the file descriptor back, and closes the original file descriptor. After that, there's no IPC communication with the resource manager.
So it would be a really, really tiny component. And actually, in this scheme, the TSS could be just a shared library that the application uses. We need some data structure that contains the mapping, the shared memory file, and all the data needed for a session; the file descriptor is basically a pointer to that session. That was it. Any questions? [Audience question.] Yes, but does it have to be in the kernel? I mean, 1.2 is like a legacy platform in a way, so would it really make sense to do such an implementation in the kernel at this point? Any more questions?