So I'm George Wilson, and I am the security development team lead in the IBM Linux Technology Center. I've been working in the LTC now for about 14 and a half years, and slowly we've been rolling out various security improvements. We are now primarily focused on the Power platform, and we've been working to think about how we might extend our boot security up into guest land. I promise you this will not be a very long presentation, because we haven't put a lot of thought into it yet. This may be the least-baked presentation that you see this entire conference, but I'd like to get some ideas before we go off and do something silly and stupid, and get some reaction from you. So, standard disclaimer: these are my views, not necessarily IBM's, and none of this stuff may ever make it into a product. Just so you're aware of that. I presented last year on adding secure and trusted boot to our OpenPOWER host platform, and that's all well and good. Claudio has talked about our work and how we're using the TPM as our root of trust for OSes, and Guerney has talked about secure virtual machines. But what about ordinary guest boot security? This is something that we have left unaddressed thus far, and we've just started having some internal conversations about it.
So this is really an externalization of some of those internal conversations we've had with our colleagues who work on the PowerVM security side of things. Guests should have boot security as well as hosts, and those keys should be manageable by administrators, much as they are for hosts. You may want to run your own kernel signed under your own key authority; there are any number of reasons that you may want to provide an experience in the virtual environment very much like the host environment. Secure boot is required by the Operating System Protection Profile for version 4.1 and higher, and that doesn't exclude guests, so we need to get our act together if we're going to meet the requirements of that protection profile. There has been some thought as to how we would do secure boot on PowerVM logical partitions. Whatever we do for Linux needs to be compatible with what we're doing for AIX, and that's an important consideration here. We also want to reuse the same work we're doing for the host environment; we don't want to have a different signature format, say, for these kernels than we do for the host kernels. So this diagram is sort of a condensation of what I had presented last year. We've implemented the entire firmware stack all the way up through skiroot at this point, which is an initramfs environment running Linux, and it has the Petitboot application that launches the final kernel. We've had our lower-firmware folks add instrumentation for trusted computing, we extend those measurements out to the TPM, and we're currently working on securely booting the OS kernel. As I said last year, it's more difficult than you might think, and it has proved to be more difficult than we even anticipated, but we're learning through the design process. I think we're pretty close to having a reasonable design, and to actually realizing a full end-to-end secure boot hopefully by the end of the year.
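As an aside on the trusted-computing instrumentation just mentioned: extending a measurement into a TPM PCR is essentially iterated hashing. Here is a minimal sketch of that semantic, assuming SHA-256 PCRs; it models the general TPM extend operation, not any IBM firmware API, and the component names are illustrative only.

```python
import hashlib

def pcr_extend(pcr: bytes, event: bytes) -> bytes:
    """TPM-style extend: new PCR value = H(old PCR || H(event)).
    A PCR can only be advanced, never set directly, so the final
    value commits to the whole ordered chain of measured components."""
    event_digest = hashlib.sha256(event).digest()
    return hashlib.sha256(pcr + event_digest).digest()

# Boot components measured in order, as a firmware stack would do.
pcr = bytes(32)  # PCRs start at all zeroes
for component in [b"low-level firmware", b"skiroot", b"petitboot", b"os-kernel"]:
    pcr = pcr_extend(pcr, component)
```

A verifier that holds the event log can replay the same sequence and compare the result against a quoted PCR value; changing or reordering any measured component produces a different final value.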
I think that's my tentative stake in the ground. Anyway, this is a fairly complex environment; there are a number of layers to it, and it's much more complex than the guest environment. This is kind of running off the page, but anyway: the PowerVM guest boot environment. This runs on top of the PowerVM hypervisor, which is a bare-metal hypervisor, IBM's proprietary hypervisor that's been running on our PowerPC servers since the early 90s. So it is already verified by the host secure boot mechanism. There is firmware that's provided for the logical partitions as just part of our firmware stack, and that's Open Firmware; we refer to it as partition firmware. The AIX bootloader and kernel are shipped together as a unit. The big difference on Linux here is that it's not just some blob; it's separated into a bootloader and a kernel. And this is how it's actually done for KVM: there's a different firmware component. We don't have partition firmware; we have the Linux host, QEMU, and then Slimline Open Firmware (SLOF), which is, as the name implies, a smaller version of partition firmware that provides hypercalls and all the things that partition firmware largely provides on PowerVM. And generally we use GRUB to load the guest kernel. So we have these two different environments, one provided by PowerVM and one provided by KVM running on OpenPOWER: two different hypervisors, two different firmware stacks. So there's a proposed scheme for securely booting these kernels on PowerVM. We already have this convenient container encapsulation format, not to be confused with userspace containers, just an encapsulation structure, and that's what we use for encapsulating the pieces of our firmware for host secure boot. So our PowerVM firmware folks said, well, we have this structure already.
Why don't we reuse it? Here's a picture of the structure from last year's presentation. Instead of having hardware keys and firmware keys, why don't we have firmware keys that sign software keys? That's how they're encapsulating the AIX bootloader and guest kernel. This is provided by IBM and carries an IBM public key, a very straightforward, complete IBM blue stack. Everything you need can be provided just with a simple firmware load, and then the bootloader and kernel will be verified when you boot. But we have a problem when we have to boot up our KVM guests on OpenPOWER, and we also have the problem of how we boot up Linux on PowerVM and how we make those two things compatible. So one thing that we could do is just port our host solution. Why wouldn't we do that? Well, our host solution uses Petitboot, and it's one of the only things that uses Petitboot; the distro QEMU guests generally boot with GRUB uniformly, and we don't want to make our Power guests an exception to that. Moreover, Petitboot runs in this Linux initramfs environment, and normally that'd be a really good thing, but it turns out that PowerVM negotiates capabilities with the guest kernel. Since Petitboot in this environment would be the first guest kernel to boot up, and it in turn boots the payload kernel, there is no ability to renegotiate capabilities with the firmware. Not easily, not without doing some surgery, so our firmware folks did not like that approach at all. We also want to make it easy to go between the two different hypervisors that we have. The guest is a much simpler boot environment; we don't think we really need the elaborate key structure that we've been working on for the host environment. So why don't we do something that's simpler?
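The firmware-keys-sign-software-keys idea described above can be sketched abstractly. This is a hypothetical model, not the actual container format: HMAC stands in for the real asymmetric signatures, and all field and function names here are made up for illustration.

```python
import hashlib
import hmac

# HMAC-SHA256 as a stand-in for a real asymmetric signature scheme.
def sign(key: bytes, blob: bytes) -> bytes:
    return hmac.new(key, blob, hashlib.sha256).digest()

def verify(key: bytes, blob: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sign(key, blob), sig)

def build_container(firmware_key: bytes, software_key: bytes, payload: bytes) -> dict:
    """Encapsulate a payload: the firmware key signs the software key,
    and the software key signs the payload (bootloader plus kernel)."""
    return {
        "software_key": software_key,
        "key_sig": sign(firmware_key, software_key),
        "payload": payload,
        "payload_sig": sign(software_key, payload),
    }

def verify_container(trusted_firmware_key: bytes, c: dict) -> bool:
    """Two-step check at boot: validate the software key against the
    trusted firmware key, then validate the payload against the software key."""
    return (verify(trusted_firmware_key, c["software_key"], c["key_sig"])
            and verify(c["software_key"], c["payload"], c["payload_sig"]))
```

The point of the extra level is that the firmware only ever needs to know the firmware key; the software key under it can be rotated, or delegated to another authority, without touching the root of trust.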
So the partition firmware, or SLOF, can just make use of an indirection, much like x86 has with the shim. Here's a simplified view of an x86 boot; it emulates the host solution, and the shim provides this layer of indirection: the shim can be signed, generally by Microsoft, and the OS can be signed by the distro provider or whatever key authority that you like. It's interesting to observe, and one of the problems I think we need to solve, that the hypervisor validating the firmware doesn't appear to be done on the x86 side nowadays. So here's maybe a guess at what this might look like for booting on PowerVM. It's very similar to the Microsoft secure boot problem: we need to be able to sign something that provides us a layer of indirection. We can't get an awful lot of reuse out of the x86 shim; it's a UEFI application, and it's very UEFI-specific. We can learn from it, and we could maybe use a similar variable format for compatibility, but there's not a lot of reuse there, so we would be implementing our own shim and basically just managing this sort of mock database directly. Then we could also make use of GRUB, which already has a convenient callback mechanism. Very much like on x86, we could have the shim verify GRUB, and GRUB call back into the shim to verify the guest kernel, and also get our trusted computing measurements via that route, since we can't build them into GRUB. On OpenPOWER KVM we have a little bit different problem: IBM wouldn't be providing SLOF or the shim directly, so we would need the ability to change the public key (the public/private key pair, actually) that we're using for signing the shim. Once again, I don't think we need an elaborate key management framework like the one we built on the host, where we tried to reuse some TianoCore concepts; I don't think we need as much of that mechanism, because the guest boot environment is so much simpler. We
could build key management into partition firmware or SLOF, and that's probably my favorite place for it, or we could put it into the shim. I think we already have some precedent for adding user interface features; I know that for trusted computing they've been added to SLOF, and it'd be easy to add to partition firmware as well. And of course you can still manage keys from the host environment if you want to, on QEMU. So what about verifying the firmware? Right now it looks like this is a gap. Maybe we don't need to address it for our first pass, but ultimately we would want a completely verified stack. One option might be to mandate IMA in the host environment and let IMA measure the firmware. That doesn't necessarily give control to the guest administrator, but it is probably a lot better than no verification at all, and that might be good for a first pass; then maybe we can think about how we could create a layer of indirection for that in the future. So, to summarize: as I said, we've only recently begun thinking about this, and I've put very much what our conversations have been so far internally into this slide set. We must have a common solution across our hypervisors. We want to reuse as much as we can and build upon the concepts and code that already exist, largely on the x86 side. We must have the whole stack, from bottom to top, verified, and we want to keep it as simple as possible for administrators. With that, I have some references here, largely ones that Claudio had already presented. I'd welcome any reactions, questions, or suggestions that anybody might have for us as we think about how we're going to securely boot guests. And if none, then I'll come back next year and tell you what we decided. This is a little bit off topic here, but I think we should all give a big round of applause for our master of ceremonies, James. All right.
Thank you, everyone. Well, that wraps it up for this year's event. We've had more talks and more people attending than ever, and at this rate, if it keeps continuing, then next year I guess we'll have even more content. One of the issues we've had this year, I think, is not having enough break time, not enough time for people to collaborate, and I guess we could have tried to reduce the number of talks we accepted. We actually did cut five minutes off the talks when we were struggling to fit things in, and that helped, stopping us running over by another hour and a half today or so. But one question, which we've discussed a few times in the past in the program committee, is whether people would be interested in going to a third day and having a more relaxed schedule: starting at nine and finishing at five, having longer breaks for hallway sessions, and then potentially having some time for workshops or hacking sessions and so on. Would that make everyone too exhausted after three days, or is it better to keep it to two days? Anyone who thinks three days is better, raise your hand. Okay, so that's something we can take back to the Linux Foundation folks. I'm not quite sure how we'd do multiple tracks; we've talked about that as well, but it seems everyone probably wants to go to just about every session, I think. Probably the next thing will be to talk to the Linux Foundation folks; it's quite difficult to get resources allocated. If anyone has any feedback, please let us know; our email addresses are on the website if you go down and find the program committee. Thanks, everyone, for the enthusiasm and all of the good ideas and discussions, and thanks to the presenters, and also obviously a great thanks to the sponsors and the Linux Foundation folks who did all the logistics. So thanks.