Good morning. The demo gods are not smiling fondly on me right now. I'm trying to actually show a demonstration on one of our latest proof-of-concept prototype systems, and I can either show you the slides or show that, so I guess I'm going to have to show the slides. Fortunately, we have some screenshots from the demo, so I can at least talk to them. And then you can come touch the pretty shinies if you really want to afterwards. I'm here from GE Global Research with my co-conspirators Bill Smith and Monty Wiseman. I think everybody's been talking to us, so I think they're pretty well introduced. A little explanation on the title: when putting our proposal in, there was a tiny, tiny space allowed for the abstract, and I just couldn't fit everything in. And like any good hacker, I said, I wonder how I can get around this length limitation. I noticed the title was not length limited. So if you want to limit the total thing, you need to limit not only the abstract but also the title. Talking a little bit about what we're doing: GE is a large industrial company in lots of different areas, and one thing that's common across all of them is embedded control systems and a really deep need for security. Whether it's the financial industry, transportation, medical, lighting, power generation, oil and gas, or jet engines, these are all big machines doing big, dangerous things, and they need control systems that are both robust and secure. Just to zero in on one of these industries, power generation: half of the world's installed power generation base is from GE. 10,000 gas and steam turbine generating units, over a million megawatts of installed capacity in 120 countries, a 40% share of the worldwide market for new power generation equipment, largest supplier of transmission and distribution equipment in the United States, top three worldwide. This comes, I think, under anybody's definition of critical infrastructure.
And of course, we know, not preaching to the choir here, just explaining how we understand the issue: critical infrastructure is under attack. We're definitely in the era of nation-state attacks; they have actually happened in the wild. Admiral Michael Rogers, head of the NSA, said the number one thing that keeps him awake at night is cyber attacks on critical infrastructure. I think we all know about Stuxnet. Stuxnet is interesting from the threat-model perspective for us, because we want to defend against nation-state attacks, and guess what? Stuxnet demonstrated that even air-gapped systems can be compromised. In fact, in that particular instance, and I'll come back to this theme later, the only thing the air gap did was keep the victims from knowing that they'd been hacked. So we really need to change our strategy there. Ukraine was also a very interesting attack. Again, this was on power distribution, a nation-state attack, and one of the most interesting aspects was that they went after the control systems and did everything they could to actually brick them. Not just wiping hard disks, which could be easily reloaded, but actually zeroing out firmware in the embedded systems such that they would not even boot and could not be reloaded locally; at the least, return-to-factory style recovery was necessary. So we need to look not only at integrity of systems; we also need to look at denial-of-service, bricking-style attacks, because those have actually been seen in the wild. A little bit about the industrial internet. In the past, the tradition has always been to keep the industrial device, such as a turbine generator, on its own network, with real-time operation between the controller and the sensors and actuators. This was an isolated network, and there was a local network talking to HMIs, the user or management interfaces that control the controllers that control the device. And then there would not be any external connection.
What's happening now, though, is our customers are interested in having operational data coming from the devices, in this case generators, up into a central location, into a cloud style, because there are lots of things you can do if you actually collect the data continuously. You can do analytics. You can deploy and manage the systems. But most importantly, there's this thing called MBOC, model-based optimizing control. With model-based controls, if you're continuously monitoring the devices, you can do a much better job of optimizing, and they're seeing returns with this centralized per-device monitoring. They're seeing optimizations in the 5% to 10% range, where 1% is millions of dollars. So the customers are deeply interested in this optimization to save lots and lots of money. And if you went to them and said, well, you need to air-gap your system, they'd say, well, OK, we'd be more secure; on the other hand, we'd lose hundreds of millions of dollars. You can guess who wins that argument. So we really need to look at how to connect our systems in a safe way and defend against these nation-state attacks that can get through air gaps and can actually brick devices, while still being able to centrally collect all our information. One of our basic concepts here is that security runs throughout the stack. It starts from the hardware device, through the firmware and operating systems, all the way up through the cloud, at all the different levels. But the overriding thing is, as much as we can, we want defense in depth to prevent them from getting in, with all the different types of mechanisms, while also recognizing that if a nation-state is coming after our controller, they will get in. What we want, at least, is to be monitoring continuously in real time and make sure that we detect when they have compromised our systems.
This is the information assurance framework from the NSA. There are lots of different things, the original Orange Book and Common Criteria and all the rest, saying how much assurance you need. Well, we're in the category of exceptionally grave damage to the security, safety, or financial posture of infrastructure, with an extremely sophisticated adversary with abundant resources who will take extreme risk: the nation-state. So we're really firmly down in this area here. Traditionally, if you go Orange Book, that's A1; in Common Criteria, that's EAL7. And it's the sort of thing where we really need to be doing everything as defense in depth, with monitoring all the way up through the stack. Specifically, what we've been doing, in terms of reference implementations that help our product groups create the next generation of control systems, is providing architecture, design, and reference implementation for them. So we actually have a reference implementation running on here that looks at all the different levels. If you start at the platform level: selection of processors with specific security features. Can your processor actually do DRTM? Does it have an IOMMU? Does it have virtualization? The next level up is security hardware, so this is board level: does it have a TPM? Is it a TPM 1.2 or a TPM 2.0? Do we have other hardware devices essential for roots of trust? Do we have the various different boots? I'll actually have a slide on these, but protected boot is the concept that the device can't be bricked remotely, which is very important. Then verified and measured boot in firmware, so for example UEFI Secure Boot or TrustedGRUB, and Tboot for DRTM. Encrypted disk, integrity measurement and appraisal, hardware-protected keys. At the trusted operating system level, we're looking at encryption, for example LUKS.
Key management with tpm-luks, so that, in fact, the only way of decrypting the file system is to boot the correct operating system with the correct measurements; only then is the key to the root file system unlocked, and the only place that key is kept is in the TPM. Integrity Measurement Architecture, with a client and the corresponding attestation server in the infrastructure. Things like trusted keys, where again we don't expose keys to user space. They're only in the kernel; the private keys are kept only in the TPM and never leave the TPM unencrypted, and even the symmetric keys stay in the kernel and are not exposed to user space. So there's a comprehensive set of things we're doing with Linux in the kernel. Data-in-motion encryption based on hardware-protected keys: again, we use the TPM not only for internal key management but also for application- and network-level key management. Security services: directory services, attestation server, public key infrastructure, security management. In this kind of environment, one of the most interesting, or most challenging, security services is this: if we're going to be signing all of our files for verification at all the different levels, how do we have a signing service that meets our needs? In cases like this, with generators, they're typically out in the field 30 to 35 years. How do we have a certificate authority and signing server that can even generate certificates that last that long? It's a real challenge. Current systems will typically say, well, maybe five years. No, we need 35 years. And part of that is recognizing that you may go through multiple generations of hardware key storage devices, HSMs, and you have to be able to move or migrate keys from one generation to the next to meet the requirements.
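The TPM-sealed disk-key idea above can be sketched in a few lines. This is a conceptual illustration only, not the real TPM interface or the tpm-luks tooling: a toy `seal`/`unseal` pair and a SHA-1 PCR extend stand in for the TPM's sealing operations, and the component names are made up.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new value = SHA-1(old value || measurement)."""
    return hashlib.sha1(pcr + measurement).digest()

def seal(key: bytes, expected_pcr: bytes) -> dict:
    """Toy 'seal': a real TPM keeps the key inside the chip; here we just
    record the PCR policy the key is bound to."""
    return {"key": key, "policy_pcr": expected_pcr}

def unseal(blob: dict, current_pcr: bytes) -> bytes:
    """Release the key only if the current PCR matches the sealing policy."""
    if blob["policy_pcr"] != current_pcr:
        raise PermissionError("PCR mismatch: wrong boot path, key stays sealed")
    return blob["key"]

# Boot the correct software stack: extend the PCR with each component's hash.
pcr = b"\x00" * 20
for component in (b"bootloader", b"kernel", b"initramfs"):
    pcr = extend(pcr, hashlib.sha1(component).digest())

blob = seal(b"root-fs-master-key", pcr)

# A tampered kernel yields a different PCR, so the disk key is never released.
bad_pcr = b"\x00" * 20
for component in (b"bootloader", b"evil-kernel", b"initramfs"):
    bad_pcr = extend(bad_pcr, hashlib.sha1(component).digest())
```

The point of the design is exactly what the sketch shows: the key is never available as data that could be copied off the disk; it only becomes usable when the measured boot path matches.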
Security development lifecycle, certification, penetration testing: again, to verify that when you've assembled all these things you don't have composition errors. The whole is greater than the sum of the parts, as the normal saying goes, but in security, the sum of the parts is often a hole. So you need to make sure that we've assembled these correctly. It's not any single technology; it's defense all the way down and detection all the way back up the stack. On the specific types of secure boot: I know the nomenclature, everybody calls things differently. Originally there was the secure boot that Bill Arbaugh described way back when. There's appraisal, and there's verified and secure boot. Anyway, we've been trying to standardize on a set of terms. Protected boot is all about saying that a remote attacker cannot erase the flash and essentially brick the device. That's a necessary requirement for anything on top of it. On top of that you can have verified boot, or UEFI Secure Boot, or locked boot loaders, or different types of appraisal. Then measured boot, with a static root of trust or a dynamic root of trust. And even better are combinations of these. So we think it's important to go beyond the traditional trusted computing model where you collect hashes; we also want, as much as possible, to do verification of signatures and attestation of those signatures, taking advantage of the TPM. The TPM is not just a measurement and attestation environment; it is also the perfect environment for attesting to the signatures on your files, which makes appraisal much, much easier. Another one of the challenges with our embedded control systems is that we cover a lot of different architectures and a lot of different requirements. Obviously Intel and AMD: these traditional controllers are basically ruggedized PCs. They're typically sealed and fanless and able to withstand rough operating temperatures, but largely a PC type of architecture.
So we do have available UEFI protected boot, trusted boot, measured boot, and verified boot. But there are other architectures: ARM, and some ARM plus FPGA. So we're looking at TI, Freescale, and Xilinx type platforms that are very useful in the embedded environment. And those work with different environments: they're typically Yocto-type environments, booting with U-Boot. U-Boot does have verified boot, thanks to Google, but there are still some issues and gaps there. Then there's CPU/ROM-based secure boot. A lot of these processors aimed at embedded have some sort of hardware secure boot, where a key is either e-fused into the part or sometimes actually metal-mask ROM'd into the chipset. Those are what are sometimes called locked boot loaders. So we have a lot of that in addition to our traditional boot style or U-Boot style of protection. They also tend to have other types of TPMs: SPI and I2C rather than the typical LPC-bus trusted platform modules. PowerPC: aviation actually does a lot of its work on PowerPC. One of the reasons is that when they do software, the FAA validation is extremely expensive. It can cost $150 to $200 million to get an FAA certification, and they really don't want to move that onto a new architecture, because then it would be very, very expensive to recertify. So we do have some legacy systems that need to run PowerPC, and we need solutions there: the same thing, U-Boot, some sort of verified boot, and typically SPI TPMs. We also need to do this for virtualized environments. As part of the architecture, we are looking on the mid-level systems at doing both virtual machines and containers, and we need some way to have support for measured and/or verified boot and attestation for the virtualized environments. So I'll talk a little bit about some of the issues there. I have a couple slides here.
And I've been updating them as the summit has gone on, because a lot of these are already being addressed, or at least discussed. Our biggest issue right now is TPM 2.0 support. These are hard and fast requirements from a lot of the standards bodies; our controllers have to get away from SHA-1 to get certification. Fortunately, we had a wonderful BOF last night with all the key people in it, so a lot of our issues are in the planning stage, and at least there's some consensus. That was already covered by Jarkko. Some of the issues: resource management, he talked about that. Another issue is getting the boot-time event log to the kernel. For PC architectures that was done with ACPI before; now they're looking at perhaps doing it through a UEFI table. On other architectures, doing it through the device tree is probably the way. But this is something that does have to be addressed, along with agreeing on APIs. Anyway, the BOF went a long way towards answering a lot of those questions, or at least getting the work started. Measured and verified boot on UEFI platforms, particularly with TPM 2.0: Matthew discussed a lot of those issues, and I think there's at least an outline of how to proceed forward with that. Container file systems: James talked a little bit about this. I'll talk a little bit more about some of the other container issues, though. In particular, when we're doing a strong measurement, attestation, and verification environment, we need some sort of solution that makes attestation reasonably tractable. If we're having containers come and go, if we're having VMs come and go on a mid-level system, how do we keep track, and how do we verify a measurement list? Currently, what happens is that all of the measurements for all of the containers go into the native measurement list.
And all of the measurements inside a VM would go into the guest kernel's measurement list, assuming we have a virtualized TPM to support it. That's fine for VMs, but for containers we need something more tractable. What we really want is to be able to say: this is the measurement list for container one; this is the list and attestation for container two; and so on. Not a collection of all possible measurements from all possible containers that have come and gone over the lifetime of the boot, because that would rapidly become intractable. On the other hand, we do need to make sure that running in a container is not an excuse not to measure and not to verify the files in there. So we have to have essentially a hierarchical policy of some type. It looks like one solution to that is what people have been discussing, which is namespacing IMA. I know of one project, by Yuqiong Sun, who has been working on a patch set for that, which is at an early stage, not quite ready to go upstream, not even at RFC level quite yet. But this patch set at least allows each container to have its own policy, hierarchically, its own measurement list, its own vTPM. And being hierarchical, it's easy to separately attest and separately verify. But this is just one effort. I don't know if there are other people working in that space; if there are, let's try to get together and coordinate, through Mimi, I guess, on an approach for that. A question that came up just recently with the talks this morning is: is there in fact a generic concept of namespacing security modules in general? In other words, if you want the same sort of hierarchical policy and the same sort of hierarchical reporting and auditing, is that something we want as a generic mechanism, not a separate one for each of the different modules? I think that's an open question. But we certainly can start with Yuqiong's patch set and see how that goes.
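The per-container measurement lists just described might look something like the following sketch. To be clear, the class and its layout are illustrative assumptions, not the actual IMA namespacing patch set: the idea is simply that each namespace keeps its own ordered entry list plus its own PCR-style aggregate, so it can be attested independently.

```python
import hashlib

class MeasurementList:
    """Toy per-namespace IMA-style measurement list: each container keeps
    its own ordered entries plus a PCR-style running aggregate, so it can
    be attested and verified independently of its peers."""
    def __init__(self):
        self.entries = []              # (path, hex digest) in measure order
        self.aggregate = b"\x00" * 20  # folds all entries, like a PCR

    def measure(self, path: str, content: bytes):
        digest = hashlib.sha1(content).digest()
        self.entries.append((path, digest.hex()))
        self.aggregate = hashlib.sha1(self.aggregate + digest).digest()

# Host and containers each get their own list, instead of one global list.
host = MeasurementList()
containers = {"c1": MeasurementList(), "c2": MeasurementList()}
host.measure("/sbin/init", b"init-binary")
containers["c1"].measure("/bin/app", b"app-v1")
containers["c2"].measure("/bin/app", b"app-v2")

# A verifier can now check container c1 alone, without wading through
# measurements from every container that ever ran on the box.
c1_report = (containers["c1"].entries, containers["c1"].aggregate.hex())
```

The hierarchical part is what makes this tractable: when a container exits, its list can be retired as a unit, rather than its entries lingering in one global, ever-growing native measurement list.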
Then hypervisor support for vTPM; I mentioned this under virtualization. There are patches out there, and we are using those patches currently. Certainly it would be nice to get those upstreamed in some sense. What we're using right now is Stefan Berger's patch set, which is the IBM software TPM, libtpms, with the CUSE driver and Stefan's patches to QEMU to handle the appropriate ioctls, to talk between QEMU and the software TPM for things like reset and so forth, the hardware-type operations that have to be emulated. Doing something more embedded in the kernel would be interesting; I'm not quite sure which approach we want to take. But this is one that we actually have running now, so at least it's something that works. And from either the container perspective or the virtual machine perspective, at least you can say it's outside the guest; the guest cannot directly attack the software TPM, so you at least have that level of protection. Yes, certainly for the VMs. I haven't really looked at that; I mean, you could use it for key management, I guess, in the containers also. It's a little bit different, but certainly in the VMs, yes, and we've actually prototyped some of that. So yeah, the summit has been wonderful, because we've gotten to talk to all the critical people face to face, and we've seen a lot of the progress that's going on here. But that's, I guess, our number one set of issues. Some minor ones: the SPI TPM driver, which is actually out in 4.8-rc now, which is great. The corresponding question, though, is how we get that backported into U-Boot. I don't know if we have any volunteers for that. Any experts on putting TPM drivers into U-Boot? We're certainly going to need it there. Then LUKS and systemd support for the kernel keyring. Currently what we're doing is LUKS with tpm-luks; I don't know if you're familiar with tpm-luks, but LUKS, by default, can accept keys only from a console or from a file.
Traditionally there has been support for that in the boot scripts, and recently in systemd, which has taken that over. But in our case, it's kind of ugly: in the initramfs, a utility reads the key out of the TPM, turns around and puts it in a file, hopefully in tmpfs, and then points the scripts at that file to unlock the root file system. It would be much more elegant if LUKS could just directly pull a key off the kernel keyring, and we could point to it with a kernel command-line option or something, to say which key to use. But that would require not only support in LUKS for this alternate keying, but also modification in systemd, because systemd is now hard-coded; it's not a script, and it would have to also understand that method of keying. That would be much better, because then the key would never get into user space; we wouldn't be doing this kludge of writing it to a file temporarily and then putting it back in, which is kind of kludgy. Another option might be ext4 encryption, because it already understands kernel keyrings, so that's one we need to start looking at. But currently we're using tpm-luks. Next, CPUs without public documentation on their processor verified boot. This is, to me, a really scary one. Most processors, most SoCs at least, have some sort of processor-based, hardware-based secure boot, and I've not found any of them that are publicly documented. We see them all under NDA. And if the OEMs don't properly configure them, they're gaping denial-of-service holes, because if the OEM hasn't properly said either "we support this" or "we turned it off," and it's left in this uninitialized state, then an attacker could come in, set some random key, and your system will never boot again. And that would be really bad, because that actually bricks the CPU, the actual processor, the actual SoC.
So I'm really concerned, particularly given the bricking attacks in Ukraine, that there's a whole area out here that we don't have any real visibility into unless we're under NDA. And a lot of these chipsets you would never think actually have that type of secure boot. The other one is CPUs with binary blobs. Here we had the presentation from AMD that there's yet another security processor in there with another opaque blob; the traditional ones are SMM and TrustZone. "Oh, this is good for you. This is good security, if you trust us." Okay, maybe. Package signing tools, and integration of those with signing servers: there are some issues we need to work on there. I know that Mimi has been kind of pushing that along and working with distros. It's a little bit easier for us, because in the embedded space we control the box, we control everything that's on it, and we sign it all. So we don't have to deal with third-party keys, or any of the other kinds of nasty things, or users arbitrarily loading new stuff in. So it's a little bit easier for us. Yeah, as I say, key management for third-party signed files is just not really so critical in the embedded space. Those are the gaps that I had that we're working on. There's one gap that we've actually filled ourselves, and we're obviously hoping to work on others. It's that currently, as released, Tboot and DRTM are specific to Intel processors; AMD never released a comparable package for the AMD processors. Well, Safayet in our group has redone Tboot to support both Intel and AMD processors. I wish I could show you the demo, because you can actually see the DRTM PCRs active on here; we actually have his version of Tboot running and providing the DRTM PCRs. While doing this, he also found a security bug in the existing Tboot, and we're working on upstreaming all of those changes. It's going to be interesting to upstream the AMD support in Tboot, because Tboot was originally an Intel-supported package.
So there might be some issues there, but hopefully Monty can help us with that. So that's one gap that we've closed. The proof-of-concept demonstration, which I wish I could show, but I can do either this screen or the demo, not both, for some reason: what I have on this is a controller, and the box has a TPM in it, a TPM 1.2. We have the kernel and IMA. We actually have protected boot; this is the AMD version with the SPI controls. Trusted boot: actually, no, I'm sorry, this is Safayet's Tboot, trusted boot. Encrypted disk tied to the TPM with tpm-luks. An IMA client running; all of that was actually running in this box. And on the laptop I had an IMA appraisal server, and then a Predix client and Predix cloud, to show what this would actually look like. And I have some screenshots at least that I can show. So the concept is: for all devices, all centralized in one cloud location, you can pull up an integrity report on every one of the controllers. The idea is that for the management types, green is good. For the techie types, we can show that everything was signed, everything had a valid signature, and all of the signatures are based on keys that we trust. And the signature that the TPM did over PCR10 actually matched, so we know that it's an untampered list. So this says that everything looks good, and that's actually evaluating the integrity of 1,019 files that had been run. Then, as part of the demo, I log in over the Ethernet. Of course, these don't run GUI front ends; they're embedded devices, so they just run a text-based console. I logged in through SSH and actually tried to run a program that was not signed. And even as root, I'm not allowed to run it, because it's not in policy for root to run it unless it's signed. And it actually stops it.
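That PCR10-quote check, recomputing the aggregate from the measurement list and comparing it against the TPM's signed quote, can be sketched roughly as follows. This is a simplified illustration, not the real IMA or TPM formats: an HMAC stands in for the TPM's asymmetric quote signature, and the file names are made up.

```python
import hashlib
import hmac

def aggregate(measurements):
    """Fold the measurement list into a PCR-style aggregate (PCR10 in IMA)."""
    pcr = b"\x00" * 20
    for digest in measurements:
        pcr = hashlib.sha1(pcr + digest).digest()
    return pcr

# Stand-in for the TPM's attestation key; a real quote is a signature
# computed inside the TPM, so a compromised kernel cannot forge it.
QUOTE_KEY = b"attestation-identity-key"

def quote(pcr):
    return hmac.new(QUOTE_KEY, pcr, hashlib.sha1).digest()

measurements = [hashlib.sha1(name.encode()).digest()
                for name in ("/sbin/init", "/bin/sh", "/usr/bin/rootkit")]
signed = quote(aggregate(measurements))

# A compromised kernel "sanitizes" the list to hide the bad entry...
cleaned = [d for d in measurements
           if d != hashlib.sha1(b"/usr/bin/rootkit").digest()]

# ...but the verifier recomputes the aggregate over the list it was given,
# and the quote no longer matches the tampered list.
list_ok = hmac.compare_digest(quote(aggregate(measurements)), signed)
tampered_ok = hmac.compare_digest(quote(aggregate(cleaned)), signed)
```

This is why tampering is so informative: the attacker can rewrite the measurement list, but cannot rewrite the quoted PCR value, so any mismatch between the two is itself evidence of kernel compromise.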
But then what's reported through the IMA attestation is actually a warning, because the system has not actually been compromised; the file was not actually run. But the question is: who's trying to run an unsigned file? That's a warning. The measurement list still has integrity. The next level: I actually put in a kernel backdoor that could be triggered, just as a way of easily demonstrating some sort of kernel compromise. What it did then was go in and try to sanitize the measurement list to remove any bad entries. But of course, as soon as it tampers with the measurement list, the TPM quote doesn't validate. And now we know something really bad has happened, because if they're able to tamper with the measurement list, then the kernel has actually been compromised. So: something simple for management, green, yellow, red, but all of the details of all of the files and the signatures on them actually reported. The bottom line on all of this is that we are facing a nation-state threat model. Air gaps really only keep you from knowing that you've been compromised; it's actually better to keep your communication open and do attestation from the devices to a central model. Industrial control systems need a security architecture at all levels, across all the different platforms and architectures: protected, verified, and measured boot, not just one or the other but all of them; trusted operating systems; all the necessary security services; cloud-based attestation and verification. And obviously a lot of work still remains to make this actually work. And with that, questions? In what way? It's pervasive in the programming community: go randomly download that little bit of software that you might need someday; we'll just get it from the internet. I mean, in our case, yeah, I understand.
So I put the four-letter acronym down there, SDLC, which is a big promise and hard to deliver, obviously, but we do at least have much better control, since we are embedded and do all of the development ourselves. We control all of the inputs, and we do have policies for that; I mean, people should be following them. We also have various static analysis tools in process, used to check for a lot of reasons, not only security but also licensing; we don't want to get sued for copyright infringement and so forth. So there are a lot of things in process that look at source code from a lot of different perspectives. Is that a guarantee? No, of course not. You can do some types of pen testing to look at the final result, but it's mainly the SDLC, and we certainly have control over that internally. We do take that seriously. I don't think... Bill, have we done anything with telco? No, I don't think so. That's what I was saying. Okay, sorry; so, as Mimi is saying, we know of certain gaps, and we know how to fill those gaps, and we're working on that. Interpreted files: we can actually measure those at file open, and we can apply policies to them. Typically on a general-purpose computer that's really hard to do, because you get lots of things created on the fly that are not signed, and you can't just blanketly ban all of them; but in our controlled embedded environment we can actually get away with it. We have statically configured networking, so we don't have to deal with resolv.conf and other types of files that change and are essentially impossible to sign. So it is a little bit easier for us. But the interpreted files, such as Python scripts, if they're in a file, we do sign them and we do validate them. There are still gaps in terms of in-memory attacks, and gaps in terms of executable data being injected across the communication link, which we don't measure.
So there are definitely some gaps, and I think that's a challenging area of research, to actually try to close every single possible gap. Well, I mentioned the one that's the worst: the FAA certification. Obviously they are very, very risk averse with aircraft systems, engine controllers, avionics, and so forth. Those regulatory requirements are really very strict, and actually pretty good. There are standards that are evolving, IEC standards, IEC 62443 and some others, that are starting to come in. They're not really hard and fast requirements; they're guidelines, I guess, or something like that. Different industries might have different things. But, Bill, any other ones? FIPS certification is also a big one. I mean, that's one of the issues with OpenSSL: it's not FIPS certified, and we have to do something about that. Thank you.