Hello everyone. My name is Ashutosh and I am part of the open source software group at Arm, and today I'm going to talk about some basic concepts of device security for connected devices. I'll start with some common use cases which apply to all connected devices and the security challenges they face, then some of the basic security principles which can be applied to all of those use cases. After that I'll briefly talk about the PSA program, the Platform Security Architecture program from Arm, and give a brief introduction to the Trusted Firmware-M project. At the end we'll have some time for questions and answers.

In the connected devices space, every device is unique and every use case is unique. However, there are some common usage patterns if you look deep enough; there is an underlying theme across all the different use cases. All of these devices need some form of connectivity. It could be device-to-device communication, communication between a device and a server, or communication between a device and a node in a mesh network. There is some form of data processing involved in all the use cases. The data could be sensor data being collected on a device and securely transmitted to a remote entity. It could be DRM data, if you're talking about multimedia content. It could be biometric data in the case of medical devices. The usage patterns of this data are very complex, and the ownership of this data becomes extremely complex to manage.

Then there is device management. The devices that get deployed are meant to be in the field for many years, and the scale of deployment is quite large, and it's going to be even larger in the future. They cannot be managed individually, on a manual basis; they need to be managed remotely somehow, in a more automated fashion. One might want to control certain features based on the licensing model for a particular use case.
One might want to revoke or issue certificates on a device based on the subscription the user has paid for. There will be firmware updates, because again, devices are going to be in the field for a very long time: there will be security fixes, feature updates, and bug fixes on the device. Then there is incident management. There will be security incidents. There will be cases where devices become vulnerable, where their software gets broken by security researchers or hackers, and they need to be fixed by a firmware update. And finally, vendor management. The ecosystem is going to be very complex, where different silicon vendors, different operating system vendors, and OEMs try to collaborate with each other while wanting to limit the trust they need to put in each other. So it's a very complex supply chain, where we want to make sure that the amount of trust each vendor needs to put in the others is contained and limited.

All of these user scenarios have some underlying common security challenges. All of the communicating entities, before they start any communication, want to establish trust. They want to make sure that they are talking to the right entity on the other end. If a server is talking to a device, there is certain implied trust, and that trust could mean that if your end device is compromised, it can compromise the rest of the network. And once the trust is established, the communication itself needs to be secured, because the physical medium on which they actually communicate can be a compromisable network; it could be a network which itself is not secure. When we talk about data management, this is probably the most complex bit, and it has a lot of socioeconomic aspects as well. Who owns the data if you're talking about a biometric device?
If you're talking about DRM license management, that's even more complex: you would want to leave the content on a device for a limited period of time, and once the subscription expires, you would want to be able to ensure that the content cannot be used, reused, or misused beyond the given time. Similarly for device provisioning: since there are monetary aspects attached to device provisioning and feature enablement and disablement, we would want to make it secure, so that the subscription models and revenue models people build are supported by the underlying security basics. And finally, vendor management and firmware updates. I think it is quite evident that managing different vendors and their mutual trust is a very complex scenario. Firmware updates are even more complex, because the software could be coming from many different places: the secure-side software vendor could be one entity, the non-secure-side, business-case software vendor could be another entity, and you may want to install applications coming from multiple other partners who don't want to trust each other.

To address all these different security scenarios and use cases, there are some underlying basic principles, some underlying building blocks, which can be applied to all of the use cases. While this is not an exhaustive list, it provides the initial building blocks that you need to secure a device in a connected mesh. First, an immutable root of trust. This is the absolutely trusted part of your device, and when someone creates a threat model for their use case, they need to ensure that this initial part of the system is immutable. If the initial part is compromised, all bets are off. So for any use case, you need to have the absolute starting point for your trust in the device itself. That leads to the chain of trust and software integrity.
Each link in the chain needs to validate the next link in the chain and make sure that the next entity is certified, is validated, and is not compromised. By creating this chain, we ensure that none of the software running on the device is compromised. Hardware and software will have bugs; they will have issues. And since the devices are going to be in the field for a very long time, we want to contain the scope of every vulnerability that gets exposed, be it in the hardware or the software. So the principle of least privilege means that your system should be divided into the smallest possible pieces, which do not need to trust each other. Your applications should be given just enough privilege so that they can function, and nothing else. The same principle applies to the rest of the system as well. The software should be updatable: if a device is deployed, we want to make sure that any vulnerability exposed in the future, in the hardware or the software, can be mitigated by providing a software update.

Device identification and authentication. This is interesting, because when you talk about secure communication, it is not sufficient to establish a secure link between two devices or two entities. It is also important to make sure that they recognize each other, for which you need a unique identifier which ties the secure communication to a unique device, so that the server and your IoT device can authenticate each other and trust each other. Finally, lifecycle management. When you talk about a device manufacturing process, there are multiple vendors involved: the silicon could be provided by one partner, the OEM could be building the whole system around the silicon, and the software could be coming from many different places. To be able to secure the supply chain and its different stages, it is important to compartmentalize that aspect as well.
It's important to limit the hardware and software resources which are visible in the different parts of the product lifecycle. Let's look at these different building blocks in a bit more detail, one by one.

Root of trust and chain of trust. These usually go together. Most of the time, the root of trust needs to be implemented in the RTL itself; to compromise the device, you would then need to compromise the RTL, which is normally quite difficult and involves hardware-level, deeply invasive attacks. The immutable root of trust is responsible for initiating the chain of trust. It is responsible for authenticating the next stage of software which is going to run on the device, and it should be able to do that securely. That guarantee is provided by cryptography: the next stage of software should be signed with the private key of the software vendor, and the device should be able to authenticate the software running on it by verifying the signature of the binary against the corresponding public key. The immutable root of trust is also the very first entry point in the system, so it may also need to assist in factory-floor provisioning. When the device is coming out of fabrication, you need to provision keys, you need to provision software, you need to provision hardware keys coming from different vendors, and that part might also require some level of guarantee, so that the provisioning process itself is not compromised on the factory floor.

Then comes the next stage, the updatable bootloader, which is a logical separation. In some use cases, this stage could be clubbed into the immutable root of trust, or it could be clubbed with the runtime software.
So for simple use cases, this block can essentially be clubbed with the immutable root of trust, because you may not want to spend the additional RAM/ROM and the hardware resources that come with an actual physical separation of the different boot stages. The updatable bootloader should be able to authenticate the final, business-case software which is going to run on the device. It also needs to participate in the firmware update process; I'll cover an example firmware update process in one of the later slides, where it will become slightly clearer why the bootloader gets involved at this stage. Finally, the runtime software. The runtime software is where you implement your business use case, your actual final use case, which is very device specific. It also needs to support the firmware update process and provide the final, use-case-level compartmentalization of the system.

The next building block is the principle of least privilege. Compartmentalization is not limited to just the hardware or the software; this is a general principle that one should follow throughout the product development lifecycle, asking whether there are two aspects of the system which can be compartmentalized so that they cannot compromise or interfere with each other. Hardware/software compartmentalization is a key aspect of it, because that's where most of the complexity of the system is going to lie. Compartmentalization also applies to cryptographic keys, especially the hardware keys. If the same key is used for deriving multiple cryptographic keys for different use cases, one would want to make sure that the key derivation tree is very clean, and that the key hierarchy is set up in a way that no one can work their way back to deriving a key for a different use case.
If you look at this block diagram, this is the conventional system that most of us are familiar with: you have some sort of OS kernel, scheduler, and privileged code handlers which run close to the hardware and provide certain OS features, and on top of it all you have the application firmware. Now, in conventional systems, it's possible that you have a lot of security-aware software as well, either in the application part or in the OS part, and the more complex the application software becomes, the more complicated the OS kernel becomes, and the harder it becomes to contain security vulnerabilities. What this means is that we should separate the security-aware aspects of the system and put them in a different sandbox. You separate the system into the business-use-case-specific software on one side, while all of the security-aware software is implemented and handled on the right-hand side of this picture, and this boundary should be enforced by the hardware itself.

Now, once that separation is done, there are scenarios where the secure-side software itself could be coming from different vendors. There could be very use-case-specific functionality, for example DRM. If you're talking about a DRM use case with multiple DRM vendors, each would have some level of functionality in the secure world, and if there are different vendors, again, they would not want to trust each other. They would want some level of guarantee that content provider A is not able to see the certificates and keys of content provider B. For that reason, there is a need to have compartmentalization on the secure side as well, which is what these green boxes represent.
You need the boundary between the non-secure-side software and the secure-side software, but the secure-side software itself needs compartmentalization, to ensure that different vendors can implement their software without worrying about someone stealing their data or their software IP.

Firmware update. I already mentioned that hardware and software will have vulnerabilities. The devices are going to be deployed for a very long time, and someone will break them; we need to be able to react to that in some cases, and in other cases proactively identify issues and fix them. Once the issues are identified and fixed, we need a secure mechanism to deliver those updates to millions of devices in the field, so the devices should have an automated way of being updated. Firmware update again becomes very complex in multi-vendor scenarios: the secure-side software could be coming from one place, the non-secure-side software from a different place, and the device should be able to receive firmware from the different vendors, compartmentalize it, and assemble it together without compromising the originally intended separation between the different entities.

So this is an example implementation of how a secure firmware update can be performed on a device. You have the separation between the secure world and the non-secure world. The update client on the non-secure side downloads the binary from the server or from another connected device. It talks to its peer in the secure world and asks it to authenticate the downloaded binary. Once the authentication has passed, the binary is written into flash, after which the device is reset, the bootloader comes up, and it sees that there is a new binary in the system.
It needs to perform some cryptographic checks to make sure that the image is not a tampered image and that this is not a rollback attack, where some old binary is being provided to the device again. All those actions can be performed by the bootloader to secure the whole process.

Device identification and authentication. I already talked about the immutable unique identity. It is important to have a device identity which cannot be spoofed or tampered with, to ensure that the communicating entities are able to absolutely trust each other. The communication itself also needs to be secured between the communicating entities, and before the communication can start, the trust establishment can be done through cryptographic certificates. Having said that, it's not always as simple as that. In very simple devices, you wouldn't want to put very complex certificate-parsing software. For example, in a light bulb, I wouldn't want to implement a lot of software or hardware logic to perform an RSA or ECDSA signature check. In those cases, we need a different approach, and in some cases it's okay to have a shared secret between your light bulb and the router you may have installed in your home: if they have a pre-provisioned symmetric key, they can use that symmetric key to establish trust with each other.

Finally, device attestation. Attestation is a concept which is being quite heavily standardized in the industry right now. It is a health report, or a report card, of the device that gets sent to a remote entity, the remote entity typically being a cloud service. The report contains information such as: what is the boot signature? What was the hash of the image which is running on the device? What is its physical location? What is its identity?
Based on the report that gets sent by the device to the server, the server can decide whether it should trust the device, whether it should allow it certain features, or whether it should block it completely because the software version it is running has vulnerabilities.

For lifecycle management, instead of trying to focus on the solution, I'm going to highlight some of the problems, because it's a very complex topic and I would not be able to cover it in the given time. The first stage is the silicon manufacturer. The silicon manufacturer needs to be able to secure the device provisioning: to make sure that on the factory floor, when the keys or the identities of the device are being programmed, the whole process is secure and guaranteed to work in a certain way, and that the manual, human interaction in the process doesn't affect the guarantees that the system is going to provide in the later phases of the lifecycle. The silicon vendor might also want to follow a certain licensing model. That means they might want to create a single device with multiple peripherals or features, but based on the subscription model the OEM has paid for, they may want to limit those features, with an absolute guarantee that the features cannot be hacked or maliciously enabled or disabled. Similarly, when it comes to the OS vendor or the OEM, there is a need for a central entity which integrates all the software coming from different places and makes sure it all works together, while ensuring that the different vendors do not end up compromising each other's software IP or hardware IP. Finally, when the device is deployed, the complexity there is: how do you manage the fragmentation across the different deployed devices?
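The server-side decision just described can be sketched as a simple policy check over the reported claims. The field names and policy below are hypothetical; real attestation tokens, such as the PSA attestation token, are signed CBOR/COSE structures carrying similar claims, so the server can also verify that the report genuinely comes from that device.

```c
#include <stdint.h>
#include <stdbool.h>

/* Illustrative claims a device might report in an attestation token. */
struct attestation_report {
    uint64_t boot_digest;       /* measured hash of the running image */
    uint32_t security_version;  /* firmware security version counter  */
    uint32_t lifecycle_state;   /* e.g. 0 = secured, 1 = debug open   */
};

/* Server-side policy: trust the device only if it runs a known-good
 * image, its firmware is recent enough, and debug is not open. */
bool server_trusts(const struct attestation_report *r,
                   uint64_t known_good_digest, uint32_t min_version)
{
    return r->boot_digest == known_good_digest
        && r->security_version >= min_version
        && r->lifecycle_state == 0;
}
```

A real deployment would layer signature verification and freshness (nonce) checks on top of this before the policy is even consulted.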
And if a device is compromised beyond a repairable state, how do you make sure that the device is not usable anymore? This is a very small snippet of the lifecycle management problem, and that's where I will leave lifecycle management.

To support these different security building blocks, there is a need for hardware building blocks which can support these security functionalities. This is, again, not an exhaustive list, but a quick look at the various pieces that most devices would need. The immutable root of trust we already talked about; this is mostly going to be the RTL code, the ROM code that people put in devices. Having said that, if for a certain use case physical guarantees are made about the silicon itself, and there are certain threat vectors the final use case owner doesn't need to worry about, then the immutable root of trust can be a programmable entity: you could have OTPs in the system that are programmed at provisioning time, and that becomes the programmed immutable root of trust. The hardware unique key allows binding the file system, and whatever else you store on a device, to that specific device. By making sure that there is a unique key programmed into the hardware, you ensure that the file system on the device is tied to that device. So if someone tries to pluck the storage out of the device and play it back on a different device, it will fail, because the file system will have been encrypted with the hardware unique key, which is different for different samples of the same device. Device identity we talked about as well; it is important to have in a system so that the communicating entities can identify each other.
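The hardware-unique-key binding just described can be sketched as follows. The one-byte derived key and the XOR "cipher" are toy stand-ins for a real KDF (e.g. HKDF over the HUK) and real authenticated encryption; the only point being demonstrated is that ciphertext produced with one device's HUK does not decrypt on a device with a different HUK.

```c
#include <stdint.h>
#include <stddef.h>

/* Toy KDF: fold the 64-bit hardware unique key into a 1-byte storage
 * key. A real device would run something like HKDF(huk, "storage"). */
static uint8_t derive_storage_key(uint64_t huk)
{
    uint8_t k = 0;
    for (int i = 0; i < 8; i++)
        k ^= (uint8_t)(huk >> (8 * i));
    return k;
}

/* Toy symmetric transform: the same call encrypts and decrypts. */
static void xor_crypt(uint8_t *buf, size_t len, uint8_t key)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key;
}
```

Because the storage key never exists outside the device that derived it, cloning the flash contents onto another unit yields only undecryptable data.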
Non-volatile counters are required to protect against rollback attacks: using these counters, you can verify that a new image, coming from the server or from external storage, or the contents of the file system, are not content from a previous iteration, which in the case of the file system may have expired certificates, or in the case of a binary may have vulnerabilities. So non-volatile counters are needed on a system to ensure that these rollback attacks cannot be performed. Then comes hardware isolation support. We talked about software and hardware compartmentalization, but to support that, we need support on the device itself, in the hardware, to facilitate the compartmentalization. Root-of-trust keys on a device ensure that different players in multi-vendor scenarios can control their assets separately. For example, a silicon vendor can use their root-of-trust keys to enable and disable their features, while the OEM can have their own root of trust to enforce their own content management policy. A crypto accelerator is not a mandatory requirement; you could perform cryptography in software. But a lot of the time it's better to have the cryptography in hardware, and to not allow the software to see the hardware keys at all, instead letting the cryptographic accelerator make use of those keys. That way, if there is a compromise in the software, the compromise doesn't affect the originally provisioned hardware keys: you can create a different key derivation tree, discard the old software and old keys, and rely on a new key derivation policy. And finally, lifecycle management. There are aspects of lifecycle management which need to be handled and enforced by the hardware, built into the silicon itself. Any questions so far?
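The non-volatile counter check can be sketched as below. The names are hypothetical, and the static variable stands in for a hardware monotonic counter or fuse bank that software can advance but never rewind.

```c
#include <stdint.h>
#include <stdbool.h>

/* Simulated non-volatile rollback counter; on silicon this would be
 * a monotonic hardware counter that can only ever increase. */
static uint32_t nv_counter = 5;

/* Accept an update only if its security version has not gone backwards,
 * then advance the counter so older images can never be re-installed. */
bool accept_update(uint32_t image_security_version)
{
    if (image_security_version < nv_counter)
        return false;                   /* rollback attempt: reject */
    nv_counter = image_security_version;
    return true;
}

uint32_t current_counter(void) { return nv_counter; }
```

Note that once version 7 has been accepted, even version 6, which was once valid, is rejected; that is exactly the property that defeats the "replay an old signed binary" attack mentioned earlier.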
So the question is how many secure partitions can be supported in a system? This is very use-case specific. I'll come to some of the software building blocks that we provide in one of the later slides, but this depends on the use case. If you're talking about a very simple use case, again, a light bulb, you probably need two building blocks on the secure side, cryptography and secure storage, and that's probably good enough. In some of the more complex use cases, for example DRM, certain software needs to run on the secure side as well. In DRM, if you are using GPUs and display processors to do the decoding, then you need software which can handle the interfaces for the graphics hardware or the video driver, and software which can do the final overlay between the differently generated layers. That software can become quite complex, and it might itself be compartmentalized into different parts on the secure side. So it's very hard to say in general how many blocks you need; it's very use-case specific. Framework-wise, there is no imposed limit. Well, you have limited memory and limited hardware resources, and at the extremes, 32-bit calculations might overflow if you're talking about more than four billion partitions, but those are the extremes.

Platform Security Architecture. This is an initiative from Arm which got launched sometime in 2017, and just last week it became public: all of the specifications, documentation, and the philosophy behind it are now publicly available. The Platform Security Architecture is an overarching program which covers not only what I just talked about; it has a much bigger scope in general.
As you see here, it has three major aspects: analyzing what the different use cases are and understanding what kind of threats they are exposed to; based on that analysis, architecting the specifications, the different parts of the hardware and software, to ensure that the identified threat vectors can be mitigated; and finally the software implementation, which is what the Trusted Firmware-M project is. Trusted Firmware-M provides an open source implementation of the PSA architecture.

This is a very high-level view of what the Trusted Firmware-M project is. It's an open source project governed by a governance body; I think we announced the governance sometime last week at TechCon as well. What we have today is some of these building blocks in the system. The TF-M software provides a bootloader which takes care of the initial root of trust and the chain of trust. It provides the isolation between the non-secure world and the security-aware part of the system. Then there is a framework which enforces the separation between the different worlds and, on the secure side, between the different parts of the security-aware software. In absolute terms, the bootloader has the highest amount of privilege and the highest amount of access to the system resources. In some cases, the bootloader will lock down certain resources and only allow the rest of the software to use the others, one of them being the hardware unique key. At the bootloader stage itself, we can derive a key from the hardware unique key and then block access to the hardware key, so that beyond this point no one can make direct use of the hardware keys and everyone has to use the derived keys given by the bootloader. In that case, all compromise paths to the hardware keys are blocked off. Then comes the framework and the SPM; this is the part which provides sandboxing for the runtime software.
Now the initial boot process is finished and the initial root of trust has been verified. The SPM here, the secure partition manager, provides the sandboxing between the different parts. Finally, there are some common building blocks which apply across different use cases. Some of these, things like crypto and secure storage, and in some cases attestation as well, apply no matter what the use case is. You need to have secure communication, a way of securely transmitting data between two entities, for which you need cryptographic support to be able to do TLS securely. One of these services, the audit log, provides a trace of what happened on a device in case there is a security incident. It provides mitigation against repudiation attacks, where if a certain transaction was requested by an entity, that entity cannot later deny having made the request in the first place, for example with financial transactions. It creates a log of the security-critical events on the system, so that it can be used to verify claims later on, on behalf of the device vendor. Finally, we know that every use case is going to have different software; they need different building blocks, and there will be use-case-specific software which needs a different sandbox altogether. The Trusted Firmware project provides a way to create these use-case-specific sandboxes as well.

I'll walk you through a very simple example to showcase how it all fits together in terms of software. This is a use case where a secure TLS connection is established between a remote entity and the device. On this side is the conventional software; you might have the TLS stack here. Whenever a TLS transaction is initiated, there will be some cryptographic operations to be done.
Now, the TLS use case needs cryptography, but the protocol itself doesn't need to see the exact cryptographic key being used for encryption, decryption, or authentication. So by following the principle of least privilege, we compartmentalize the system into multiple smaller parts, one small part being secure storage, which actually holds the TLS keys and certificates. The crypto service is a cryptographic engine which can perform requests on behalf of all the callers on the non-secure side, or callers on the secure side as well. For the TLS use case, TLS makes a request for encryption, decryption, or authentication. The request comes to the crypto service. The crypto service fetches the key from secure storage in plain text, and then it performs a check, based on this framework, to see whether the calling entity is allowed to make use of this key, or whether this key belongs to another entity. If that check passes, it makes use of that key within its own domain, performs the cryptography, and returns the result back to the caller in the non-secure world. The key aspect is that the cryptographic keys and certificates never leave the secure domain, and their visibility is limited to a very small part of the software in the bigger scheme of things. So any vulnerability in the rest of the system will still leave your keys, which are probably among the most important assets on a device, uncompromised.

That is all I wanted to cover today. We have another talk by one of my colleagues on Wednesday afternoon, where he's going to talk about compartmentalization in TF-M in much greater detail. Many of our team members are here at this conference; if you would like to talk about PSA or TF-M or anything in general, please find us.
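As a rough model of that flow, here is a self-contained sketch of a secure-side crypto service in which callers hold only opaque key handles and get results back, never key bytes; the ownership check mirrors the framework check described above. All names are hypothetical, and the one-byte XOR cipher stands in for real cryptography (in TF-M this role is played by the PSA Crypto service).

```c
#include <stdint.h>
#include <stdbool.h>

#define MAX_KEYS 4

/* Key material lives only inside the secure service; callers never
 * see it, they only see the integer handle. */
struct key_slot {
    bool     in_use;
    uint32_t owner;   /* the caller/partition allowed to use this key */
    uint8_t  key;     /* toy 1-byte key held in "secure" memory        */
};

static struct key_slot vault[MAX_KEYS];

/* Provision a key on the secure side; returns an opaque handle. */
int crypto_import_key(uint32_t owner, uint8_t key)
{
    for (int h = 0; h < MAX_KEYS; h++) {
        if (!vault[h].in_use) {
            vault[h] = (struct key_slot){ true, owner, key };
            return h;
        }
    }
    return -1;  /* no free slot */
}

/* Encrypt on behalf of a caller: the key is used inside the service
 * and only the ciphertext byte crosses the boundary back. */
int crypto_encrypt(uint32_t caller, int handle, uint8_t plain,
                   uint8_t *cipher)
{
    if (handle < 0 || handle >= MAX_KEYS || !vault[handle].in_use)
        return -1;                    /* invalid handle */
    if (vault[handle].owner != caller)
        return -2;                    /* key belongs to another entity */
    *cipher = plain ^ vault[handle].key;  /* toy cipher */
    return 0;
}
```

The TLS stack would hold the handle returned by `crypto_import_key` and call `crypto_encrypt` per record; a compromise of the TLS stack leaks the handle, which is useless on any other device or to any other caller, but never the key itself.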
We have an Arm booth as well, so most of us will be hanging around that booth. Questions? Does someone have a mic? I'm not sure if it is on. Yeah, that works. Yeah, so I've heard of the FIDO stack, but I have not gone into the technical details of the FIDO software stack, so I can't really comment on that. [Inaudible follow-up question.] The PSA program is slightly wider than just securing the device, just securing some parts and bits. At some point it will cover things like certification, so it could become much wider than just the firmware framework. In that context, FIDO could be a PSA-compliant entity at some point. What we have is a specification and an open source implementation. That doesn't mean that this is the only implementation that can exist, as long as it complies with the specifications that we provide. Me, as a TF-M engineer, as the TF-M tech lead, I would like everyone to use TF-M, but from the program point of view, as long as any software solution complies with the specifications, it is still PSA compliant. Any last question? In which case, thank you everyone.