Hi, everyone. I hope you can hear me. We still have a few more minutes, so before I start, I just wanted to get a feel for what people in the room normally work on. Do we have many software programmers here? That's great. And how many of you focus on security? OK. How many of you have used ARM CPUs? Oh, that's nice. How many of you have heard of TrustZone? OK, you're going to make my life slightly easier today.

Hello, everyone. My name is Ashutosh. I work for ARM. I am a security architect, and I lead all the trusted firmware projects from ARM. Today, I'm going to be talking about some of the security challenges that we face with connected devices in this industry. I know every use case is different, and every device is going to be different, but there are some common underlying challenges that need to be solved across all of these different use cases. So I'm going to talk about some of these security challenges, then about the common security functions needed to solve them, and then I'll briefly talk about Platform Security Architecture and the Trusted Firmware project. When I initially submitted this talk, I thought I'd have enough time to talk about different workflows and then go in depth on different technical solutions. But then I realized I'd only have 35 minutes. So instead of talking about solutions, I'm mostly going to talk about problems.

So, security challenges. Let's first look at the supply chain model. This is a very simplified view of how the supply chain works. You'll typically have a silicon vendor that collects all sorts of different hardware IPs and builds an SoC. Then you have different OEMs. They license the SoC from the silicon vendor and build a device around it. Now, the different steps involved in the supply chain.
Again, this is a very simplistic view. It gets a lot more complex, and individual cases are completely different, but this is a simplified view of what happens at different stages. The silicon vendor would want to install some secure operating system. Why that is required, I'll try to cover in one of the later slides. The silicon vendor would also want to provision some cryptographic secrets, so it can have some control over the device at later stages of the lifecycle. And devices could have some hardware or software IPs where the silicon vendor might want to let OEMs use those IPs, but not necessarily let them see them. For example, if you have a software Bluetooth stack, it's very common for vendors to define APIs to let people use the stack, but they normally don't want to share the code base, not even in binary form. So the silicon vendor would want to protect those hardware and software IPs one way or the other.

Similarly, at the next stage, when the OEM is trying to build a device, what the OEM normally does is build a system around the silicon, attach all sorts of different peripherals, and create a system based on the use case they're trying to solve. And they'll have to do a similar kind of provisioning on their side. They might want to install some secure applications, which have to work with the trusted OS, and at the same time, you don't want to completely expose the trusted OS at the OEM stage. The OEMs will also have their own secrets, their own cryptographic assets, to provision on the device. And finally, when you want to install a rich operating system, you need to have some sort of agreement with the OS vendor to be able to use their operating system on your device.
And the OS vendors would want to provision some of their own secrets as well: some cryptographic keys or TLS certificates, or, if you're doing PSK TLS, maybe the key itself. Now, when you look at this supply chain model, there are many security challenges. You need to be able to securely inject the trusted OS so that no malicious code can run in your secure domain. You have a lot of secrets there, a lot of keys and protected IPs, so you don't want to run any malicious software. The code that you inject there needs to be verified; you need to ensure it's not coming from some malicious place. Similarly, when you do the provisioning of the device, the provisioning will involve a lot of secrets: your cryptographic keys, your device certificates. And when you provision those on the factory floor, those secrets can get exposed to everyone on the factory floor. You don't want that to happen, because once those secrets are leaked, your device is completely compromised.

Similar challenges arise when you move to the next stage. The OEM will want to provision their own secrets in a secure way. They'll want to ensure the integrity of the software they install. And at the same time, two different OEMs won't want to trust each other. I wonder why. And you need a secure way of installing the rich operating system. Traditionally, this has been done using secure facilities. If you want to provision your device at the silicon vendor stage, the silicon vendor will have some secure facility with physical locks and keys, where only certain people are allowed inside. But in a globally distributed supply chain, that doesn't work very well. It doesn't scale. So we need to find better solutions that do scale.
When you talk about a trillion connected devices, you can't use physical locks and keys to protect against these kinds of supply chain attacks. And as promised, I'm going to talk only about the challenges for now.

Now your device is provisioned. It has gone through the different lifecycle stages, and it is in the field. Each use case is going to be different; each will have its own quirks, its own security requirements. But if you look at the different connected-device use cases, they all share a set of common usage patterns. They all need some form of connectivity, which could be device-to-device, or between the device and a cloud entity. They all have to deal with some sort of data: it could be biometric data if you're doing financial transactions, or DRM content, or just sensor data in your home, or industrial automation data. They all need some sort of device management: things like firmware update, and things like incident management, meaning if some security incident happens, how do you report it? These are common challenges, no matter what use case you're building. And finally, there's the multi-vendor supply chain that we already talked about.

All of these different usage patterns have their own security challenges. When you talk about connectivity, the connectivity is usually over a non-secure physical medium, so you want to establish secure communication over that non-secure medium. And even if you establish secure communication, you still need the communicating entities to trust each other.
The communication itself might be secure, but I might be talking to a malicious actor in the first place. So how do you establish trust between the communicating entities? On the data management side, you sometimes have to deal with very sensitive content. If you're talking about the DRM use case, you don't want a malicious actor to be able to get their hands on the decrypted content. For example, if you have a Netflix subscription, Netflix wouldn't want that content to be freely available to everyone without paying for it. So you need some sort of protection on the content.

Similarly, for device management, you sometimes need to be able to remotely provision the device in a secure way, so you have some level of control over the device. And you need secure firmware updates if something goes wrong on the device. When you build a device, it will eventually have vulnerabilities. No matter how much effort you put into designing your hardware and software, eventually someone is going to break it, someone is going to find a flaw in it, and you need to be able to patch it. If it's a software flaw, it's easy: you update your software and you're good. Sometimes there will be hardware flaws, hardware vulnerabilities, that you need to find a workaround for and fix in software. So you need a secure way of updating the firmware. At the same time, an attacker shouldn't be able to roll back to a previous version of the firmware, the one which had the vulnerability in the first place.

When you look at these security challenges across all the different connected devices, there are some common security building blocks that apply to all of them. All of them need an immutable root of trust. This is the very starting point of your system; you could say it's probably the most trusted part of your system.
And if that gets compromised, all bets are off. This is the very starting point of security on a device. That leads to the next building block: chain of trust and software integrity. You need to be able to verify your software all the way from the boot ROM code up to the application you're running, maybe on top of Linux. The whole chain needs to be verifiable.

Principle of least privilege. This is an interesting one. In a given system, you should give any part of the software only just enough privileges, just enough access, for it to perform its job. Nothing more, nothing less. That way, a vulnerability in one part of your system doesn't compromise all the other parts. By the way, if you want to stop me along the way, ask a question, or heckle, feel free to do so.

Software updateability, we already talked about: we all know why software needs to be updateable. Device identification and authentication: you need a secure way of establishing the identity of the device, so that when it communicates with another entity, that entity can place some level of trust in the device and knows which entity it is actually talking to. And finally, lifecycle management. Lifecycle management is about compartmentalizing the different stages of your production as well. It's a kind of extension of the principle of least privilege into the supply chain flow. If you are at the silicon vendor stage, you get to access certain parts of the system, certain hardware keys. When you move to the next stage, the OEM stage, the OEM gets to access only certain parts of the system. So you have some hygiene in the system, and different entities in the supply chain don't end up compromising each other. Now, a bit more detail about the different building blocks.
I'll try to cover this as fast as I can. Root of trust and chain of trust. A typical root of trust looks something like this. You have ROM code, which is your starting point in the system. The ROM code is something you put in as part of the RTL itself, so when the device comes out of the fab, it has the ROM code. It also has an RTL key, known only to the silicon vendor, and that RTL key forms the basis of the rest of your provisioning security. The next stage in the chain of trust is a richer bootloader. Arguably, you could put the immutable root of trust and the updatable bootloader in the same place; you wouldn't need an updatable bootloader if you knew how to write perfect software. But we know we are not perfect software engineers. We make mistakes. And hardware engineers? Any hardware engineers here? No? OK, so it's safe to say that hardware engineers also make mistakes.

The thing is, the first part of the chain of trust is going to be part of your RTL itself. Once it is in the RTL, it's there; you can't change it. So if you put a lot of complex software and complex logic there, you run the risk that if someone finds a vulnerability, you can't do much about it. So what a really secure system would usually do is have very simple ROM code, as simple as you can actually get away with, and all of your richer functionality, your firmware updates, the rest of the system's chain of trust, gets pushed out to an updatable bootloader. Then if you find a flaw in that implementation, you can still update it. Finally, there is your runtime software. That is your business software, whatever it may be: Linux, some file system distribution, and your business applications on top. Principle of least privilege.
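The ROM-to-bootloader-to-runtime flow described above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration: real chains of trust verify asymmetric signatures with a public key anchored in the ROM or fuses, not bare hashes, and the image names here are invented.

```python
import hashlib

def verify_next_stage(image: bytes, expected_digest: str) -> bool:
    """Each boot stage checks the next stage's image against a digest
    (in practice, a signature verified with an anchored public key)
    before handing control over."""
    return hashlib.sha256(image).hexdigest() == expected_digest

# Hypothetical boot flow: ROM -> updatable bootloader -> runtime software.
bootloader_image = b"bootloader v2 code"
runtime_image = b"runtime software"

# The bootloader's reference digest is anchored in the immutable ROM stage;
# the runtime's reference digest is carried in a manifest the bootloader trusts.
rom_manifest = hashlib.sha256(bootloader_image).hexdigest()
bl_manifest = hashlib.sha256(runtime_image).hexdigest()

assert verify_next_stage(bootloader_image, rom_manifest)  # ROM verifies bootloader
assert verify_next_stage(runtime_image, bl_manifest)      # bootloader verifies runtime
```

The point of the structure is exactly what the talk describes: the ROM stage stays tiny and unchangeable, and only the reference it anchors needs to be trusted; everything richer lives in the updatable bootloader.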
I'm going to rush a bit because we don't have a lot of time. The principle of least privilege applies to both hardware and software. As I said, each part of the system should have access to just enough resources to do its job. This concept also extends to cryptographic keys. When you deal with different cryptographic keys, it's quite tempting to use the same key for different use cases. But from a cryptographic point of view, that's very poor practice. If one part of your system is compromised, it will end up compromising all of your system, because you're using the same key for every single use case. So you need some cryptographic key hygiene in your system: one key for one specific use.

Similarly, for firmware update, we already talked about why we need it. It's encouraged to use certificate-based image authentication, so that you don't put a lot of secrets on the device. It's possible to do symmetric-key-based image authentication, and it's typically faster as well, but if your key gets exposed, then all bets are off. So in most cases, it's better to use a public/private key setup, where you have just the public key on the device, stored in an immutable way so no one can change it, and you use the private key to sign the images and authenticate them on the device. And rollback protection: without rollback protection, firmware update becomes kind of pointless, because if you move to the next version of the firmware, which fixes some security vulnerabilities, an attacker can try to roll back to the previous version, which had those vulnerabilities in the first place.

Device identification and authentication. This is closely linked to establishing trust between two communicating entities.
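The rollback-protection idea can be made concrete with a small sketch. This assumes a fuse-backed monotonic counter, modeled here as a Python class; the version-numbering scheme and class names are illustrative, not any particular product's API.

```python
class NVCounter:
    """Models a fuse-backed non-volatile counter: it can only increase,
    just like burned fuses can never be cleared."""
    def __init__(self):
        self._value = 0

    @property
    def value(self) -> int:
        return self._value

    def advance_to(self, v: int):
        if v < self._value:
            raise ValueError("non-volatile counter cannot decrease")
        self._value = v

def accept_firmware(version: int, counter: NVCounter) -> bool:
    """Boot-time rollback check: reject any image older than the highest
    version ever recorded, then record the accepted version."""
    if version < counter.value:
        return False  # rollback attempt: this version was superseded
    counter.advance_to(version)
    return True

nv = NVCounter()
assert accept_firmware(3, nv) is True   # first boot of v3
assert accept_firmware(4, nv) is True   # legitimate update to v4
assert accept_firmware(3, nv) is False  # rollback to vulnerable v3 rejected
```

Image authentication (the signature check) and the counter check are complementary: the signature proves the image is genuine, while the counter proves it is not a genuine but outdated, vulnerable image.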
They need to securely identify each other. So each needs to carry an identity, and that identity needs to be provable, using a certificate. And finally, attestation. Attestation is a kind of report from your device that can go to a server. It contains information like what version of software you're running, what version of secure firmware is there, what your bootloader is, what the device's geographical location is. Based on that, the server can decide how much trust it should put in that particular device. For example, if the attestation report says this is an old firmware version which had security vulnerabilities, what the server, or the cloud entity, would do is say: fine, you have this old version with security vulnerabilities, so I'm not going to let you do anything more than a firmware update. Until you give me a report saying you have the latest, greatest firmware running on the device, I'm not going to let you do anything useful apart from the firmware update.

Again, lifecycle management. In lifecycle management, you start with the silicon manufacturer. If you want to do secure provisioning of the device, you need to have the RTL key in the system. That allows you to not rely on physically secure facilities for provisioning the device. Also, what most silicon vendors do is create the same device, distribute the same device to different OEMs, but control features on the same family of devices based on different licensing models. So you create the same SoC, but offer different licensing models, different price points, based on who is ready to pay how much and what they actually need. And those things you want to be able to control in a secure fashion, because they're directly linked to revenue. The next stage of the lifecycle, again, is the OEMs or application vendors.
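The attestation flow described above, a device report plus a server-side policy decision, can be sketched as follows. This is a toy model with invented claim names and a symmetric MAC standing in for the real signature; production tokens (for example, PSA attestation tokens) are CBOR-encoded and COSE-signed, usually with an asymmetric device key.

```python
import hashlib
import hmac
import json

# Hypothetical device attestation key. In practice this would be derived
# from the hardware unique key and never exposed to normal software.
ATTESTATION_KEY = b"device-attestation-key"

def make_attestation_report(claims: dict) -> dict:
    """Builds a toy attestation token: a claim set plus an integrity tag."""
    payload = json.dumps(claims, sort_keys=True).encode()
    tag = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "mac": tag}

def server_policy(report: dict) -> str:
    """The relying party verifies the token, then decides how much
    access the device gets based on its reported firmware version."""
    payload = json.dumps(report["claims"], sort_keys=True).encode()
    expected = hmac.new(ATTESTATION_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, report["mac"]):
        return "reject"
    if report["claims"]["fw_version"] < 4:  # versions below 4 assumed vulnerable
        return "firmware-update-only"
    return "full-access"

report = make_attestation_report({"fw_version": 3, "bootloader": "BL2 v1.1"})
assert server_policy(report) == "firmware-update-only"
```

This mirrors the policy from the talk: a device running known-vulnerable firmware is quarantined to the firmware-update path until it can attest to an up-to-date version.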
They would, again, want to provision the... actually, I'll come back to the lifecycle, because I am going to talk about at least one solution today, which is how you solve the supply chain problem. So let's skip this one and come back to it.

To support those different security functions, there are certain hardware building blocks that are required. Immutable root of trust, we talked about that already. We talked about the device identity as well; this needs to be immutable, so no one can come and change the device identity. Hardware unique key, this is an interesting one. As the name says, you need a unique key on each device. Not on each type of device, but on each individual sample of the device. That lets you bind the file system on that device to that specific device: no one should be able to take out the flash and put it in another device, because then you get all the grey-market attacks linked to that. Non-volatile counters: this is where you would fuse the firmware versions, so if a rollback attack happens, it can easily be detected at the boot stages and you can protect your device against it. Similarly, hardware isolation support, to support the principle of least privilege, so you can compartmentalize your system into much smaller segments. Root of trust keys: these will be part of your chain of trust, so you need to have them on the device in a secure fashion. And crypto accelerators and lifecycle management as well. Crypto accelerators I'm going to skip. Lifecycle management is, again, an interesting one, because to prevent all sorts of supply chain attacks, you want the lifecycle management to be unidirectional.
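The hardware unique key ties back to the "one key per use" rule from earlier: rather than using the HUK directly, firmware derives a separate key for each purpose. A minimal sketch, using a single HMAC step as a stand-in for a full KDF; the key value and purpose labels here are invented.

```python
import hashlib
import hmac

# Hypothetical hardware unique key: unique per chip instance, set at
# manufacture, readable only by the root-of-trust firmware.
HUK = bytes.fromhex("00112233445566778899aabbccddeeff")

def derive_key(purpose: str) -> bytes:
    """Derive a purpose-specific key from the HUK (HKDF-style, reduced
    to one HMAC step for brevity). Compromising one derived key does
    not expose the HUK or keys derived for other purposes."""
    return hmac.new(HUK, purpose.encode(), hashlib.sha256).digest()

storage_key = derive_key("secure-storage")
attest_key = derive_key("attestation")
assert storage_key != attest_key  # distinct keys for distinct uses
```

Because `storage_key` is derived from the per-device HUK, a flash image encrypted under it is useless when moved to another device, which is exactly the binding the talk describes.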
Once the device has gone from the silicon manufacturer's phase to the OEM phase, that should be a non-reversible process. Because if it isn't, the OEM could reverse engineer the device, try to get at all of the silicon manufacturer's secrets, and that opens a Pandora's box that would lead to OEMs compromising each other and all sorts of other problems.

So, as I said, I'm going to talk about at least one solution today. It's perfectly OK to have a physically secure facility to do the device provisioning; it just doesn't scale very well. Alternatively, you could sign the trusted operating system using the RTL key. Remember, we talked about the RTL key that you have baked into the silicon itself. So you sign and encrypt the trusted firmware, the trusted OS, using the RTL key. Now you can distribute that binary to your provisioning facility, and it could be anywhere in the world, because it's signed and encrypted. It doesn't matter who gets their hands on the trusted operating system and the rest of your provisioning assets, because they're encrypted, and only your device will be able to understand what's inside. So you sign and encrypt the content using the RTL key, it goes on the device, and the device can decipher that content because it has the RTL key. That allows provisioning of the secrets and of the trusted operating system you want to install. And from the silicon vendor's point of view, you're safe.

Before moving to the next stage, the silicon vendor creates an OEM provisioning key. There's a bit of a complex challenge here, because you need to solve this problem operationally as well as technologically. The silicon vendor creates a provisioning key for the OEM, but it needs to somehow transmit that key to the OEM. They need to establish some operational mechanism, maybe signing and encrypting it using PGP, and set up some operational communication between the silicon vendor and the OEM.
But the point is, the silicon vendor has to provision an OEM-specific provisioning key and change the lifecycle stage from the silicon vendor stage to the OEM stage. What this means is that the OEM now has this key, acquired somehow through operational mechanisms, and the lifecycle has moved to the next stage. So the silicon manufacturer is assured that no matter what the OEM does, the silicon manufacturer's secrets are safe. Again, the OEM signs and encrypts its own content using the OEM key. Before moving to the next stage, it needs to provision the rich OS signing and update key, and then change the lifecycle stage to the final, deployed state. What this means is that your device is provisioned, and you can let it go into the field and do its job. The key point here is that the OEM provisioning key generated here is going to be different for different OEMs. So even if OEM2 somehow gets their hands on this encrypted blob, they can't do much with it, because the blob can be understood only if you have the corresponding provisioning key. Any questions so far, or anyone who doesn't think this is the right thing to do? All good?

Platform Security Architecture. Platform Security Architecture is an initiative from ARM to drive some standardization around device security in our industry. So far, everyone has been left to fend for themselves, and there have been so many competing mechanisms and specifications. ARM is trying to bring some standardization around device security, so that it's a problem we solve in one place, rather than everyone spending parallel, duplicated effort trying to solve the same problem. Platform Security Architecture is a collection of generic specifications. In most cases, it is quite architecture- and hardware-independent. What it does is specify the security properties that you need to have in your system.
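The unidirectional lifecycle described across the last few paragraphs can be modeled as a tiny state machine. The stage names follow the talk; the fuse-word representation is a hypothetical simplification of how real hardware records stage transitions.

```python
# Lifecycle states in supply-chain order. A hypothetical fuse word records
# progress; fuses can only be burned, never cleared, so there is no way back.
STAGES = ["silicon_vendor", "oem", "deployed"]

class Lifecycle:
    def __init__(self):
        self._burned = 0  # number of burned stage-transition fuses

    @property
    def stage(self) -> str:
        return STAGES[self._burned]

    def advance(self):
        """Burn the next fuse. There is deliberately no reverse method:
        on real silicon, the hardware also masks off the previous
        stage's secrets (e.g. the RTL key) once the fuse is burned."""
        if self._burned >= len(STAGES) - 1:
            raise RuntimeError("already in final lifecycle stage")
        self._burned += 1

lc = Lifecycle()
assert lc.stage == "silicon_vendor"
lc.advance()
assert lc.stage == "oem"       # silicon vendor secrets now inaccessible
lc.advance()
assert lc.stage == "deployed"  # device ready for the field
```

Enforcing the one-way property in hardware, rather than in software that could be patched, is what gives the silicon vendor its assurance that the OEM cannot step back and read out the earlier stage's secrets.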
If you are building hardware, the hardware needs to have these different hardware building blocks with these different security properties. If you are writing software, the software should have these kinds of security properties. Now, whether you use ARM IPs to get those security properties, or build your own IP, or license it from someone else, that's a different story. At the same time, we also provide hardware and software implementations that comply with these standards. So if a device maker wants to make a device, they can quickly get on with building it, with the baseline security already taken care of, and focus on their business software rather than worrying about security from scratch. Of course, that's a very simplified statement; there will be some security work to be done by our partners as well. If you're an OEM or a silicon manufacturer, there will be some security-related work that you need to do. But as long as we are able to solve the problem in a common way, we are setting ourselves up for a future where we don't have to reinvent the same thing again and again.

Trusted Firmware-M is an implementation of PSA for Cortex-M devices. It provides the firmware implementation, so it provides all of the secure-side software as well. You don't have to spend a lot of time reinventing that software; it already takes care of all the different security properties I talked about. It provides some of the security endpoints as well: things like attestation, secure boot, and the cryptographic services you may need on your system. And then you can go on and build your system on top of this.
TrustedFirmware.org. This is, again, an initiative from ARM to move to a more collaborative environment. Instead of ARM pushing and saying this is what you should do, we try to get more people, more companies, on board, and have a collective discussion about how this should be done: how do we solve these challenges in terms of technology, and how do we solve some of the operational challenges in getting there? Collaboration is always good. Collaboration means we can solve the common problems together, and then everyone can focus on their specific business cases. The Trusted Firmware project, as of now, is not an ARM-owned project anymore. It's a collaborative project, hosted by TrustedFirmware.org. ARM still has a very big stake in it; we have quite a big development team there, and we are still the biggest contributor, but we want more people to get involved. To get involved, you don't have to be part of the TrustedFirmware.org board. There is a board, where companies can pay and become members, and that board decides the general direction of travel for the project. But to contribute or participate in the project, you don't need to be part of the board. Literally anyone can just go to TrustedFirmware.org, create a GitHub ID if you don't have one already, and start contributing.

That's all I had for today. Any questions?

Yeah. Under what circumstances does the bootloader not become trusted? So, if you have built your system following the rules of a chain of trust, then even if your bootloader is trusted, it may still have vulnerabilities. It's not about how trusted or untrusted a software component is, be it the bootloader or any other component higher up in the chain. It's about how mature it is.
So even if it is trusted, even if you're talking about the trusted OS, which is the ultimate root of trust in your device, that doesn't mean it will not have flaws, that it will not have security vulnerabilities. It's more about being able to identify those vulnerabilities and fix them than about declaring a part of the system trusted or not trusted. The bootloader usually forms part of the chain of trust, so you would typically place a lot of trust in it; it is a trusted component. Is that what you were asking? OK. Any other questions?

So this is where the hardware needs to back that up. It should be a non-reversible process. What that means is your hardware should have the capability to record that it has gone to the next stage. Typically, you would burn some fuses, some OTP or other fuses, and burning those fuses signifies that the device has moved to the next stage. Typically, the hardware itself will then mask off the assets belonging to the previous stage, so you can't go back. Burning fuses is something you will probably do in the factory to make this transition from one stage to another.

There is an element of trust here, and the trust flows in one direction. The OEM has to have some level of trust in the chip manufacturer, but the chip manufacturer doesn't have to have any trust in the OEM. You could potentially solve that trust issue, but that requires a lot of redundancy in the hardware itself, so it's a balancing act. You could say the OEM doesn't want to trust the silicon vendor at all, but how much are you ready to pay for that?

So this architecture focuses on open standards like CBOR and COSE. They specify how you structure your attestation token and how you sign it. As long as you are compliant with the CBOR and COSE standards, you're good. At the same time, it's an open source project.
So if there is some tailoring that you need to do for your specific use case, the Trusted Firmware project doesn't really stop you from doing that. It's an open source project with a permissive license; it's a BSD-licensed project. So basically, you can do whatever the hell you like. Any more questions? Thank you, everyone, for coming here today.