The Internet of Things, or IoT, is a growing trend of taking devices not normally thought of as computers and connecting them to a network. A consequence of this connectivity is that the security of these devices becomes more important. If you do a Google search for phrases such as "IoT hacks" or "IoT vulnerabilities", you can see there are plenty of people talking about this. One way that many of these attacks happen is that the attacker is able to pretend to be someone they are not. Maybe I can convince your Internet light bulbs that I am the cloud service that is supposed to control them, or perhaps I can convince the cloud service that I am a legitimate user who should be able to control your lights. There are many aspects to the security of IoT; today I want to talk about identity and trust. Security isn't really a new problem for computers. In 1979, Kevin Mitnick accessed a DEC computer system and was able to copy software that was being developed. A lot of work has been put into making computers more secure. If you are watching this video now on YouTube, all of the data between YouTube's servers and your computer is encrypted and protected through a system known as TLS, or Transport Layer Security. TLS goes back as far as 1994, when it was called SSL. In the many years since 1994, computers have gotten more powerful. My desktop computer has many gigabytes of RAM, and its processor runs at several gigahertz. IoT devices, on the other hand, are usually much less powerful. It is common to have maybe only a few hundred kilobytes of memory, and the processor may run at a hundred megahertz. Since these devices are now being connected to the Internet, however, security becomes that much more important. The core of much of what is done for security is cryptography. Wikipedia defines cryptography as "the practice and study of techniques for secure communication in the presence of third parties called adversaries."
Before we move on, though, let's make sure we understand this word, communication. It is sometimes easy to think of communication as a live thing, maybe talking on the phone or texting someone, but communication can also be something that is stored. You are watching this video sometime after I made it, and while you are watching it, I will be doing something else. Cryptographers really like Alice and Bob: as humans, it helps us to think about abstract things, such as computers exchanging messages, as people with friendly names talking to each other instead. An easy way to think about security is to think about what things we would like to know are true. One thing we would like to be true is that someone outside of Alice and Bob (we'll call her Eve here) won't be able to learn the contents of the messages that Alice and Bob are sending to each other. Determining what Eve can do based on what messages she is able to see is the topic of threat modeling, which would make another good video topic. A less obvious thing we might want to be true is that Alice and Bob are actually talking to each other. Our friend Mallory might try to pretend to be Bob when talking to Alice, and might pretend to be Alice when talking with Bob. Alice and Bob might not even realize that Mallory is doing that. It's probably a little hard to imagine if you think of them as communicating, say, over the phone, but maybe they are writing letters and Mallory works at the post office, or they are sending emails and Mallory has broken into the mail server. How does cryptography solve these problems? It develops something known as a cryptographic protocol. Wikipedia defines this as "an abstract or concrete protocol that performs a security-related function and applies cryptographic methods, often as sequences of cryptographic primitives." A protocol is simply a formal way of communicating. It defines the exact messages that each party can send and what must be in them.
The protocols themselves are built around what are known as cryptographic primitives. These building blocks are things such as hash functions and symmetric and asymmetric encryption. There are usually not a large number of these primitives, because they are really hard to come up with. Most of the primitives in use now were developed over many years, often as a contest, with cryptographers scrutinizing each other's work. The end goal is to make sure that these operations have the desired security properties and that the implementations really do implement these operations. But the primitives themselves don't really give you security. That's why the protocol is so important: it tells you how to use them. If you look at the changes that have been made to something such as TLS, some of those changes are indeed due to weaknesses found in the primitives, but a lot of the changes came about because vulnerabilities were found in the protocols themselves. To wrap all of this back up to our topic, what is trust in regards to cryptography? Trust essentially means that a party will behave as it is expected to. If you look back to Alice and Bob, it means that Alice needs to be able to know that she's actually talking with Bob and that what she says is what Bob will get. Trust is usually implemented using digital signatures. In my previous video, I covered some basic concepts and definitions concerning the use of cryptography as it relates to IoT devices. Before diving further into how this applies to IoT devices, I want to spend a little time discussing digital signatures themselves. Before we even get into digital signatures, let's take a few minutes and think about where the term comes from: signatures. In this instance, Alice is signing her name on a contract, and what this signature means is something along the lines of her indicating that she agrees with the document.
Other times the signer of a document is indicating their approval of what is described in the document; think of somebody writing a check. The meaning of the signature has a lot to do with the contents of the document itself, with the signature being a kind of testament that the person signing it approves of or agrees with the document. Unfortunately, paper signatures aren't very robust. It isn't that difficult a skill to learn to forge someone's signature. The signature is also not really tied to the document contents: the document can be modified, and the signature stays the same. From this, we can see two important properties that digital signatures will need to have. First, they need to be difficult to forge. And second, they need to be based on the contents of the document being signed. Of these two requirements, let's address the second one first. In cryptography, we have something known as a hash function. Two important properties of these hash functions are that they take an arbitrary-size document as input, and from that input they produce a fixed-size output. Often the size of the output is right in the name of the hash function; for example, SHA-256 outputs 256 bits. These properties allow us to build our digital signature off of something fixed in size. In order for the hash function to be useful to us, it needs a few more properties, though. One, it must be difficult to forge: if I'm given the output of a hash function, it should not be feasible for me to come up with a document that generates that hash. To see why this is so important, even though our whole document will be visible, we have to consider the compression aspect: the large input, fixed output. The nature of this compression means that there will always be multiple documents that can produce the same output. If it were easy to come up with a document that generates a given hash, it would be easy to come up with a second document that produces the same hash.
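The two properties just described, arbitrary-size input compressed to a fixed-size output, and a drastic change in output for any change in input, are easy to see for yourself with Python's standard library (the messages here are just made up for illustration):

```python
import hashlib

# SHA-256 always produces a 256-bit (32-byte) digest,
# regardless of the input size.
short = hashlib.sha256(b"hello").hexdigest()
long_ = hashlib.sha256(b"hello" * 100_000).hexdigest()
print(len(short), len(long_))  # both 64 hex characters

# Changing even one character of the input produces a completely
# different, unpredictable digest (the "avalanche effect").
a = hashlib.sha256(b"Pay Alice $100").hexdigest()
b = hashlib.sha256(b"Pay Alice $900").hexdigest()
print(a == b)  # False
```

It is this fixed-size, tamper-sensitive digest that the signature algorithm will actually operate on.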
Collision resistance is also a requirement in its own right. The details aren't that important now, but there are cryptographic attacks on hash functions that focus on generating two documents that have the same hash. The older MD5 hash function is no longer used, because algorithms have been found that can quickly modify a second document so that its hash matches that of a first document. This would be the equivalent of modifying the document that somebody placed a physical signature on. A common hash function these days is SHA-256. Some of the older functions, such as SHA-1 and MD5, are now considered weak and shouldn't be used in new designs. Hash functions are one part of making digital signatures; the other is what is known as a digital signature algorithm. Digital signatures use something known as asymmetric keys. Now, asymmetric keys are used for both signing as well as encryption, but we'll just be looking at the signing. We begin by generating what is known as a key pair. The key generator takes pseudo-random numbers from a source of entropy and produces two keys. The structure and the format of the keys depend on the algorithm, which we'll get to after we go over the process in general. One of these will be referred to as a private key and the other as a public key. The names are descriptive. The private key is something that the owner of the key needs to keep private; for example, on a device it should be protected from being read out. The public key is just that: it can be known by anyone. With these two parts of the key pair, we can make a digital signature. The digital signature algorithm takes the private key and the hash of the message that we described earlier and produces a signature. The signature is itself a small message. The other important part of a digital signature scheme is being able to verify that a signature is correct.
The verify operation takes the digital signature generated before, the public key, and the digest of the message, and tells us simply whether the signature is correct or not. If any part of this were tampered with, the signature check would fail. For example, if the message were tampered with, the recipient would compute a different hash, and the signature check would fail. What I haven't explained is how the recipient knows that the public key is the right one. An attacker might be able to change the message, but also substitute a different public key. Ensuring that the right public key is used will be the topic of the fourth video in this series; for now, let's just assume we have a way to ensure this public key is correct. Until now, I've shown the aspects of the algorithms abstractly. There are several commonly used digital signature algorithms, and each has some details and even caveats that are important to know. Now, the oldest commonly used digital signature algorithm is based on RSA. RSA, which are the initials of its inventors, Rivest, Shamir, and Adleman, is a relatively simple algorithm, but it's important to be careful when using it. An RSA private key, explained simply, is just two large prime numbers. For 2048-bit RSA, a commonly used size, these values, typically called p and q, would be 1024-bit prime numbers. Their product is the 2048-bit modulus that forms the public key. Choosing the two prime numbers has to be done carefully, or the keys are vulnerable. And if the same prime number is ever used to generate two different public keys, the private keys behind both can be easily computed (the shared prime can be recovered by taking the GCD of the two moduli). It's important to use established and well-tested implementations, and it's crucial to have a good random number generator to avoid these problems. Although RSA is able to sign small messages, meaning it would be possible to directly use RSA to sign the hash, there are numerous weaknesses with this approach. Instead, a scheme known as RSA-PSS, the RSA Probabilistic Signature Scheme, is used.
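The sign/verify flow can be illustrated with textbook RSA and deliberately tiny primes. This is a toy for intuition only: real RSA uses 2048-bit-plus moduli and a padding scheme such as RSA-PSS, and signing a bare hash like this would be insecure in practice.

```python
import hashlib

# Toy, textbook RSA with tiny primes -- for illustration only.
p, q = 61, 53                      # the two secret primes
n = p * q                          # public modulus (3233)
e = 17                             # public exponent
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

def digest(msg: bytes) -> int:
    # Reduce the SHA-256 digest into the signing range of the toy key.
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

def sign(msg: bytes) -> int:
    return pow(digest(msg), d, n)          # uses the PRIVATE key

def verify(msg: bytes, sig: int) -> bool:
    return pow(sig, e, n) == digest(msg)   # uses only the PUBLIC key

sig = sign(b"firmware v1.2")
print(verify(b"firmware v1.2", sig))   # True
print(verify(b"firmware v1.3", sig))   # False: message was altered
```

Note that anyone holding only `n` and `e` can verify, but producing a signature that verifies requires `d`, which in turn requires knowing the factorization of `n`.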
There is an older signature scheme, usually just called PKCS#1 v1.5, that, although it's still used, shouldn't be designed into anything new. To summarize RSA-PSS: the keys are difficult to make, requiring quite a bit of CPU time, and care needs to be taken. The keys and the signatures are also fairly large. For RSA-2048, a public key may be about 300 bytes, whereas the private key can be much larger, perhaps 1200 bytes, and signatures will be 256 bytes. RSA does have the advantage that signature verification is fairly fast. Another commonly used digital signature algorithm is ECDSA. ECDSA is based on Elliptic Curve Cryptography, known as ECC, which I'm not going to try to explain in this video. The algorithm tends to be more complex than RSA, as well as slower. However, there are some distinct advantages over RSA. First, both keys and signatures in elliptic curves are much smaller, typically an eighth to a quarter the size of an RSA key of similar security. It's also much easier to generate private keys: typically this is done by selecting some number of random bits and performing a simple sanity test. Rather than most random choices being rejected, as with RSA, most of the choices end up being fine as a key. One bit of complexity with ECDSA, and elliptic curves in general, is that it is necessary to choose the curve to use. The choice of the curve has a lot of impact on the security of the algorithms using it, and there are a lot of choices. NIST publishes a set of recommended curves, and a common choice is to use the one known as NIST P-256, also called secp256r1. Like RSA, there are some caveats when using ECDSA. The signing algorithm itself requires some random bits to produce a nonce known as k. If the same value of k is ever used to sign two different messages, the private key can be calculated. It's therefore crucial that any application that generates signatures, meaning any party in the communication that possesses a private key, have a good, unpredictable source of random numbers.
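The nonce-reuse danger follows directly from the ECDSA signing equation, s = k⁻¹(z + r·d) mod n, and we can demonstrate the key recovery with plain modular arithmetic, no curve math required. All of the values below (group order, key, nonce, hashes) are made up for illustration:

```python
# Why ECDSA nonce reuse is fatal, shown with the signing equation alone.
n = 2**31 - 1          # stand-in for the curve's prime group order
d = 123456789          # the private key we will "steal"
k = 987654321          # the nonce -- reused for two signatures (the bug)
r = 55555555           # in real ECDSA this value is derived from k

z1 = 111111            # hash of message 1
z2 = 222222            # hash of message 2

# The ECDSA signing equation: s = k^-1 * (z + r*d) mod n
s1 = pow(k, -1, n) * (z1 + r * d) % n
s2 = pow(k, -1, n) * (z2 + r * d) % n

# An attacker who sees both signatures recovers k, then d:
#   s1 - s2 = k^-1 * (z1 - z2), so k = (z1 - z2) / (s1 - s2)
k_rec = (z1 - z2) * pow(s1 - s2, -1, n) % n
#   s1 * k = z1 + r*d, so d = (s1*k - z1) / r
d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n
print(k_rec == k, d_rec == d)   # True True
```

This exact failure is how the PlayStation 3 firmware signing key was extracted, which is why deterministic nonce derivation (RFC 6979) or a strong RNG is essential.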
I'd also like to speak of another specific variant of digital signatures using elliptic curves, a signature algorithm known as Ed25519. This is a signature algorithm like ECDSA that also defines other aspects of the system as well. It is specified for a single elliptic curve, known as Curve25519. This curve was designed to be easy to implement correctly and in a way that doesn't have side channels. Because the curve is fixed and was chosen to be easy to implement, the implementations tend to be faster than ECDSA over other curves; there are even implementations that are faster than RSA, although these tend to require a lot of code space for pre-computed tables. One thing about Ed25519 that's different from the other signature algorithms is that it takes the entire message rather than just the hash of the message. It still ends up doing the operation on a hash of the message, but it prefixes the message with some secret data to reduce the likelihood of certain attacks on the hash function being able to weaken signatures made using that hash. It's still possible to give the signature scheme a hash of the message, but this advantage will then be given up. Given these choices, then, how do we choose which digital signature scheme we should use for an application? There are several things to consider, such as code size and performance. There are also considerations of long-term security and how the algorithms might be resistant to future advances in quantum computing. This overview of digital signatures has shown how digital signatures can be used, like a physical signature, to convey some type of assurance about a particular message. In the next video, we will look into the practical aspects of how digital signatures are used within IoT devices. Last time, we talked about digital signatures and how they can be used to convey some type of assurance about a particular message. I've decided to split the next topic into two parts.
Today, we will cover how these signatures are used to protect firmware. In the second part, we'll cover how signatures can be used to protect the external network communication of the device. Before getting too far into how signatures help firmware, it's probably a good idea to make sure we have a good understanding of what firmware is. With a conventional PC, firmware is a fairly small amount of code, usually referred to as the BIOS (incorrectly these days, I might add). This code is responsible for initializing the hardware on the computer and then loading the first part of the operating system that will run. Most of the code that you interact with on the computer comes from files on some kind of disk. These typical computers have several kinds of memory. I won't go into details about caches and such, but it is helpful to understand that the main memory of the computer, what someone is referring to when they say their computer has 16 gigabytes of RAM, is a type of memory that can be randomly written as well as randomly read. This memory is also fairly fast. The computer will also have a larger amount of secondary storage, often called disk, even when it's not. This might be a terabyte or more in a modern computer. Traditional disks are just that: a metal or glass disk with an oxide coating that stores data through changing magnetic fields, quite literally spinning rust. More and more, these spinning disks are being replaced with flash memory, which has no moving parts. Data is stored as an effective charge within a transistor, designed in such a way that the charge largely remains present even when power is removed. These large devices are often implemented with a flash technology called NAND. NAND allows for great storage density, but it's fairly complex to use. It generally has to be read and written in particular ways.
The system will have a flash controller that tries to make the flash device look more like a hard disk, with fixed-size sectors that can be read and written. An older form of flash memory, known as NOR, is what is typically used to hold firmware. The densities are much lower, and a device may have a few megabytes of NOR flash, contrasted with the hundreds or thousands of megabytes of NAND. NOR has the advantage that it can be memory mapped, and the processor is able to execute instructions directly out of NOR flash. This distinguishes it from most other code, which has to be copied from disk into RAM in order to be executed. I'm oversimplifying things a little here, and there's some diversity in how real devices work. NOR flash is still somewhat complicated to write, but this is usually managed in software. All of this introduction to memory types leads us to what I consider the primary distinction of firmware: that its code runs directly out of flash memory instead of being loaded into RAM. Now, smaller IoT devices do not have disks in them, and usually not even NAND flash that acts like a disk. The systems that do have NAND are usually managed similarly to how a desktop computer would be. But for a smaller device, it's possible that the code has to fit into this one or two megabytes of NOR flash. Because software generally has bugs waiting to be found, it is important to be able to update these programs. Being able to do this upgrade reliably, for example so that it doesn't break the device if power fails in the middle of an update, is the job of the bootloader. I'm not going to talk today about how the bootloader does these updates safely, but instead about how it knows that a new image coming in, or even the one that it finds in flash, is valid. Now before that, you might ask, how do we know that the bootloader itself is the correct code? This is a very good question.
Usually the bootloader has to be stored in the device in a way that keeps it from being modified. Sometimes this is done through a write-once memory; other times, NOR flash will have the ability to set protections on part of the memory that allow it to behave as if it were write-once. If we divide the memory addresses of NOR flash into several partitions, and put the bootloader itself into one of these partitions that we are able to protect, we can then divide the rest of the flash into two areas of memory we'll call slots. This can be configured in several ways, but let's assume that each slot can potentially contain a run-in-place version of the application. It's important that it be difficult for an attacker to replace or even just modify this image. There have been real attacks on IoT devices that were possible because their bootloader was willing to run whatever code the attacker gave it. To make this secure, each of these images will have a digital signature attached to it. The public key used for this signature will be stored in the unchangeable part of the bootloader itself. The manufacturer will keep the private key and use it to sign the image before sending it to the device. This way the bootloader can decide whether the image is valid or whether it's been tampered with. Before we wrap this up, I want to make sure we understand the weakness in this whole approach. We said before that the public key will be embedded right in the bootloader. This means that it can't ever be changed. If the manufacturer were to either lose the private key, or if that private key got out, becoming a not-very-private key, they might be unable to update the firmware, or unauthorized users might be able to update the firmware. There are some things that can be done about this, and in an upcoming video we will discuss how certificates might be helpful in managing at least some of this problem.
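The bootloader's decision logic can be sketched as follows. This is a minimal illustration, not a real bootloader: `verify_signature` here is a stand-in that just compares a SHA-256 digest, where a real implementation (such as MCUboot) would run RSA-PSS or ECDSA verification over the image hash using the public key baked into the protected region. All names and slot contents are made up.

```python
import hashlib

MANUFACTURER_PUBLIC_KEY = b"...baked in at the factory..."  # illustrative

def verify_signature(image: bytes, signature: bytes, public_key: bytes) -> bool:
    # Stand-in for real signature verification: a real bootloader would
    # check an RSA/ECDSA signature over SHA-256(image) with public_key.
    # Here we simply model "signature == expected digest".
    return signature == hashlib.sha256(image).digest()

def choose_slot(slots):
    """Return the first slot whose image carries a valid signature."""
    for name, (image, signature) in slots.items():
        if verify_signature(image, signature, MANUFACTURER_PUBLIC_KEY):
            return name
    return None  # no valid image: stay in recovery, refuse to boot

good = b"\x01\x02 firmware v2"
slots = {
    "slot0": (b"tampered image", b"\x00" * 32),              # fails the check
    "slot1": (good, hashlib.sha256(good).digest()),          # passes
}
print(choose_slot(slots))  # slot1
```

The key point is the fail-closed behavior: if no slot verifies, the device boots nothing rather than running untrusted code.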
For now, however, hopefully we have a better understanding of how digital signatures can be used to protect the firmware running in a device: a combination of hardware protecting the bootloader itself and the use of digital signatures can make sure that the rest of the firmware image is valid and untampered with. Previously, we covered the use of digital signatures in IoT devices as it relates to the firmware of the device. I want to also cover how the device can use digital signatures to ensure that the network communications it makes can also be trusted. Let's start with what is probably the most familiar network application, the web browser. When you type the address of a website into your browser, generally one of two protocols will be used. The more basic of these, HTTP, creates an unencrypted connection to the server, and the request and web pages are sent back to you in plain text, free for anyone to observe and even tamper with. The other, HTTPS, uses Transport Layer Security, or TLS, to establish an encrypted connection to the server. The same data will be sent back and forth, but it is encrypted and protected by the TLS protocol. In just a few years, the internet has gone from most requests being unencrypted to a large majority of internet traffic being encrypted. If you remember back to Mallory, however, someone intercepting this communication is still a possibility. I may think I'm connecting to www.mybank.com, but I am in fact connecting to Mallory, who then makes a connection to my bank for me. Everything I send he decrypts, observes, and then re-encrypts on the connection to the bank. Likewise, he will decrypt the data coming back from the bank and re-encrypt it for my browser, and I would be none the wiser. By watching, he learns about my accounts, probably sees my login and password, and might even be able to modify the requests to redirect money to someone else. Of course, real banking doesn't normally work this way.
So what mechanisms are in place to stop it? Essentially, my browser and the bank use digital signatures to ensure that they are talking to each other without another party involved. Let's assume for a minute that we have some way of knowing what the public key for the bank is. As part of the initial communication with TLS, the bank will sign part of the message with its private key. My browser can verify this signature and thereby tell that the message came from the bank without modification. We'll cover how we can know we have the right public key a bit later; for now, just know that it is through a system called public key infrastructure. Normal web traffic only has this assurance in one direction. The public key infrastructure helps my browser know that it is talking to the legitimate server for the bank, but the bank doesn't have any assurance about me. Rather than having users manage public and private keys, users tend to just remember passwords, which they also tend to do poorly. When the connection is established, I as the bank customer have an assurance that I'm communicating with the right server for my bank. The bank knows nothing about me, though, and asks me for various secrets to establish that I'm a legitimate customer. Since the rest of the TLS connection is protected by symmetric encryption keyed during the handshake, the bank is at least assured that whatever party sent the passwords, meaning my browser, is the same party that the rest of the communication is with. How does all of this work with IoT? IoT devices will still need a protocol to communicate, and TLS is still a good choice. For a protocol based on UDP, there is a variant known as DTLS. It can be possible for the IoT device to just have a secret password that it sends after the connection is established. It's also possible for the IoT device to have a private key. If the server has the public key, the device can sign a message as part of the protocol and use this to assure the server that it is a legitimate device.
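For a client, getting TLS's server-authentication guarantees mostly means turning the verification machinery on. Python's standard library shows what the defaults look like; `www.example.com` below is just a placeholder host:

```python
import ssl

# A default client context loads the system's trusted root certificates
# and enables both certificate verification and hostname checking.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: server must present a valid cert
print(ctx.check_hostname)                    # True: cert must match the hostname

# An actual connection would then look like this (network required):
# import socket
# with socket.create_connection(("www.example.com", 443)) as sock:
#     with ctx.wrap_socket(sock, server_hostname="www.example.com") as tls:
#         print(tls.version())
```

Disabling either of these checks (as IoT firmware sometimes does to "make it work") reopens exactly the Mallory-in-the-middle attack described above.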
If it just used a secret password, the server would also have to store this password, or something derived from it. Since back-end password leaks seem to make the news regularly, we can have more security by only storing the public key on the server. There does need to be a mechanism for the server to learn each device's public key, though. One way to do this is to install the private key during some kind of factory provisioning operation and export a list of the public keys from the factory. As we will see in an upcoming video, we can also use something like the public key infrastructure to manage these secrets as well. There is one other area where IoT devices can make use of digital signatures: something known as attestation. With the above use of signatures and a protocol such as TLS, the device and the cloud server can have some confidence that both parties are legitimate. However, an attacker may be able to modify the code running on the IoT device. Although it was a legitimate device earlier, it may now do illegitimate things, such as forging sensor data, generating invalid user requests, and many other types of attacks. One way to help mitigate this is for the device to gather what are known as measurements. These can be things such as hashes of the software running on the device and other information about the boot process. This attestation message is then signed with a private key, and the public key is shared with the server using a mechanism similar to what we saw before with authentication. It turns out that this is really hard to do if it's just done in software on the device: the attacker can merely replace the code that hashes the image with code that just uses a predetermined hash. To make this effective, the attestation itself needs to be done and signed by a separate piece of hardware, such as a TPM, or by software that is protected from the rest of the system, such as with TrustZone.
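The "measurement" idea can be sketched with the extend operation used by TPM PCRs: each boot stage folds the hash of the next component into a running value, so the final value depends on every stage, in order. The stage names below are made up, and a real TPM would do this in hardware and sign the result with an attestation key:

```python
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    """new_pcr = SHA-256(old_pcr || SHA-256(component)) -- PCR-extend style."""
    return hashlib.sha256(pcr + hashlib.sha256(component).digest()).digest()

# Measured boot: each stage is folded into the running measurement.
pcr = b"\x00" * 32  # reset state at power-on
for stage in (b"bootloader v3", b"kernel 5.15", b"app image v1.2"):
    pcr = extend(pcr, stage)

# The device would sign this final value (plus a server-supplied nonce,
# to prevent replay) with a key held in a TPM or TrustZone. If any stage
# is modified, the final measurement changes:
pcr2 = b"\x00" * 32
for stage in (b"bootloader v3", b"evil kernel", b"app image v1.2"):
    pcr2 = extend(pcr2, stage)
print(pcr == pcr2)  # False
```

Because extend is one-way, compromised software running late in boot cannot "rewind" the measurement to hide the stages that loaded it.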
Done correctly, however, this can give an increased assurance that the software running on the device has not been tampered with. In the next video, we'll finally get to the topic of how public keys are managed, which gets us into certificates and X.509. Up until now, whenever we've encountered a use of digital signatures, we've made an assumption that the party verifying the signature had the proper public key, as well as an assurance that they had the right key. It turns out that ensuring this is a fairly complex topic that I want to spend a little time on. There are several aspects of managing these public keys that we need to cover in order to understand how it is possible for various parties to trust public keys and to be able to associate them with a particular identity. As I mentioned before, one possible way of doing this is to communicate the public key through some channel that itself can be trusted. For securing a firmware image, the signing public key can be embedded directly into the bootloader in the factory. Since firmware upgrades are relatively rare compared to transferring something like sensor data, and are likely always going to be generated by the same party, this approach works reasonably well. In fact, this is currently the only method supported by MCUboot. This can also work to some measure for the device identity key. The key can be installed in the factory, and the public key communicated via some out-of-band mechanism, say writing it to a secure device and delivering that by courier. Even when a more complex approach is needed, there will still usually be a need to have an initial key that is stored this way in order to establish the beginning of the trust of these keys. When things get more complicated than this, say when there are multiple parties participating and one party may want to delegate this trust, we need a more complex mechanism. And this is where what are known as certificates come in.
I'll start by going over the concept of a certificate in general, and then we'll look at some of the specifics of X.509, the most commonly used certificate format. After this, we'll discuss the infrastructure built around X.509 certificates and some other mechanisms that can be used. Recall from when we covered digital signatures that each key pair is made up of two parts, a public key and a private key. Each of these is encoded as a small block of data. The private key must be kept secret by the party that is signing the messages. The other can be, and in fact needs to be, somewhat public. But these keys are just numbers, and there's nothing inherent about a given key to associate it with a given party. What a certificate does is bundle a public key together with a block of identifying data, with that whole message then signed with a digital signature. In the simplest case, a certificate can be what is known as a self-signed certificate. Here the private key associated with the attached public key is used to sign the message. Since anyone can generate a key pair, attach whatever data they wish to it, and sign it, it might be hard to see what the value is in this kind of signature. Indeed, there isn't much value in this, except that it provides a convenient way to associate the identifying information with the given public key. Generally these self-signed certificates are delivered in an out-of-band manner, similar to just storing a public key as I described earlier. This helps identify at least who the key belongs to, as long as it actually came from a trusted source. Where certificates become powerful is when a certificate is signed with a different key pair than the one contained within the certificate. This allows the party owning the self-signed certificate to vouch for this additional certificate.
Now, this chain can be arbitrarily long, although it's generally kept to just one or two levels; beyond this, trust gets complicated to reason about, and the keys are easier to mismanage. What this certificate chain allows for, for example, is that a device only needs to store a small number of these so-called root certificates in the device itself. As long as the certificate chain sent during the handshake is ultimately signed by one of these trusted root certificates, the device is able to trust the key at the other end of the chain. Now, I don't have time here to go into all of the details of X.509 itself, other than some key points. Essentially, X.509 defines a specific format and encoding of the information in the certificate, which includes the public key, the identity of the certificate owner in a format known as a distinguished name, and attributes describing how the key in the certificate can be used, such as whether it's allowed to sign other certificates. Over the years that X.509 has been around, numerous extensions have been defined, some made mandatory, to allow keys to be associated with host names, IP addresses, email addresses, and other things like this. The certificate, including its signature, is encoded using a format known as ASN.1, specifically with its Distinguished Encoding Rules (DER), which ensure that the given data will always be encoded the same way. An advantage of using X.509 is that there are several implementations of it available that address a lot of the difficulties of implementing it securely, such as all the rules necessary to determine whether a certificate chain is valid. The format itself is somewhat archaic, and although there have been some efforts to make something more modern, none have really gained much traction. I think one of the reasons for X.509's persistence is that it is used in what is known as the public key infrastructure. When you connect to a website using your browser, the website will send your browser a certificate chain.
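The structure of chain validation, walk up the chain checking each certificate against its issuer's key, then check that the top is a trusted root, can be sketched as below. The "signature" here is a deliberately fake placeholder (a hash tied to the issuer's key string) purely so the chain-walking logic runs; real X.509 validation uses RSA/ECDSA signatures and many more checks (validity dates, key-usage flags, name constraints, revocation). All names are invented.

```python
import hashlib

def toy_sign(issuer_key: str, payload: str) -> str:
    # Placeholder "signature": NOT cryptographically meaningful.
    return hashlib.sha256(f"{issuer_key}|{payload}".encode()).hexdigest()

def make_cert(subject, pubkey, issuer, issuer_key):
    payload = f"{subject},{pubkey},{issuer}"
    return {"subject": subject, "pubkey": pubkey,
            "issuer": issuer, "sig": toy_sign(issuer_key, payload)}

def validate_chain(chain, trusted_roots):
    """Each cert must verify under the next cert's key; the top must be trusted."""
    for cert, issuer in zip(chain, chain[1:]):
        payload = f"{cert['subject']},{cert['pubkey']},{cert['issuer']}"
        if cert["sig"] != toy_sign(issuer["pubkey"], payload):
            return False
    return chain[-1]["subject"] in trusted_roots

root = make_cert("ExampleRoot", "root-key", "ExampleRoot", "root-key")  # self-signed
leaf = make_cert("device-1234", "device-key", "ExampleRoot", "root-key")
print(validate_chain([leaf, root], {"ExampleRoot"}))  # True
```

Note how the device only has to hold `trusted_roots`; any number of leaf certificates can then be vouched for without touching the device again.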
The browser has a small list of certificates owned by trusted parties that it requires these certificate chains to ultimately be signed by. As such, to be able to have an HTTPS website, you have to go through one of these trusted parties. A small plug for Let's Encrypt, by the way: they're one of the trusted parties in most browsers and have an automated mechanism that allows you to generate certificates for websites, all without charge. This has been a large part of why so much of the web is now accessible via HTTPS. It's possible to use certificates and certificate chains without using this primary PKI. Doing this will generally save money, sometimes a considerable amount, but it does require that the entity running their own PKI establish and maintain the necessary trust. If this is entirely within a single organization, this is certainly possible, but when multiple parties become involved, these external trust vendors generally provide a useful service. So we've covered certificates and X.509, but really only related them to such uses as secured websites. In the final part, I'll cover how certificates can be used within IoT devices. So far we've covered digital signatures, how these fit in with IoT devices, and then certificates and how they can be used to make an association between public keys and particular identities. I want to try and wrap this up by tying some of these things together and discussing how certificates, specifically X.509 certificates, fit within IoT devices. One of the more obvious complaints about X.509 is that it is complex and requires quite a bit of code to process. This is indeed true. However, X.509 support is needed for TLS and DTLS, and a lot of these devices are able to use these protocols. Adding X.509 support for other uses generally doesn't add that much code to the system. Smaller devices where TLS and DTLS might not be practical are also probably not talking directly to the internet, but going through some kind of gateway.
Generally there will need to be some kind of pairing between the device and this gateway, and this pairing usually involves shared secrets. The gateway will be a more powerful system that is able to use TLS and any necessary certificates. Another aspect of using X.509, specifically the public key infrastructure, is the size of the root certificate list. As of the 2020.2.40 version of the commonly used CA certificates list, there are 306 certificates, which when encoded in the most compact form are 224,389 bytes. Although this size isn't too significant compared to a modern web browser, it would be a significant amount of space for an IoT device, where the code is often smaller than this. Fortunately, this isn't usually a difficult problem to solve. IoT devices don't generally connect to arbitrary internet sites. If the server or cloud service is known, that service can restrict itself to a smaller number of root certificates, maybe even one, and that certificate can instead be installed within the application or device. One thing that is often overlooked with these certificates is that certificates expire. Part of the block of data associated with a certificate is a validity period. When a certificate expires, it shouldn't be used anymore. In order to handle this, the device will need to be able to periodically update the certificate list. If the list is hard coded into the application, the application itself will need to be updated. Certificate expiration dates are often on the order of many months or years, so this is usually not a significant problem. On the other hand, if certificates are used to protect firmware, these will either need to not expire or have extremely long expiration times. By its nature, the bootloader needs to be difficult or impossible to update. It would be possible to store the root certificate somewhere else that could be updated later, but this becomes a fairly easy attack vector.
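The validity-period check itself is simple; the operational side (updating the stored list) is the hard part. A minimal sketch, with field names mirroring X.509's notBefore/notAfter and purely illustrative dates:

```python
from datetime import datetime, timezone

def is_time_valid(now, not_before, not_after):
    # a verifier must reject a certificate outside [notBefore, notAfter]
    return not_before <= now <= not_after

# Illustrative dates for a long-lived certificate, such as one
# protecting firmware images.
nb = datetime(2020, 1, 1, tzinfo=timezone.utc)
na = datetime(2050, 1, 1, tzinfo=timezone.utc)

print(is_time_valid(datetime(2025, 6, 1, tzinfo=timezone.utc), nb, na))  # True
print(is_time_valid(datetime(2055, 6, 1, tzinfo=timezone.utc), nb, na))  # False
```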
An attacker that is able to update the root certificate can replace it with one that they have a private key to, which will allow them to sign whatever firmware images they wish. Normally certificates are made to expire to reduce the risk from private keys that leak. In this case, however, adding an expiration to the certificate would probably result in a less secure system. The last thing I want to cover is the idea of provisioning the devices. This is really a topic for another presentation, but I will cover it a little bit here. It will generally be necessary to get some initial private key into the device through an out-of-band mechanism. For example, it could be written in the factory, and the certificate or public key stored on an external device which is then delivered to the OEM. This is an area that is ripe for attack, where a single point of attack can compromise a large number of devices. As long as the implementation is careful to never allow the private key to leave the device, or even a protected area of the device, the best an attacker should be able to do is get a list of public keys or certificates for these devices. There may be some loss of privacy here, but it shouldn't be usable to compromise the security of the devices themselves. Beyond that, there are some ways of adding new keys to a device. One way of doing this is to have the device generate its own key pair internally. It then uses this to generate what is known as a certificate signing request. This bundles together the information that will appear in a certificate, but instead of it being directly signed by a trusted entity, it's self-signed. It could also be signed by an existing trusted key, or communicated over a channel secured by one of these keys. A trusted entity would then make a signed certificate out of this information and deliver it back to the device. It's also possible to generate the private key off-device, although there is an increased risk of the key being accessed in the process.
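Here's a toy sketch of that flow: the device generates its key pair internally, exports only the signing request, and a trusted entity returns a certificate. The hashing is a stand-in for real key generation and signing (a real CSR is a PKCS#10 structure), and all the names are invented.

```python
import hashlib
import secrets

def device_keygen():
    private_key = secrets.token_hex(32)   # stays inside the device
    # stand-in derivation; a real public key comes from real key math
    public_key = hashlib.sha256(private_key.encode()).hexdigest()
    return private_key, public_key

def make_csr(public_key, identity):
    # bundle identity + public key; self-signed in the real protocol
    body = f"{identity}|{public_key}"
    return {"subject": identity, "public_key": public_key,
            "sig": hashlib.sha256(body.encode()).hexdigest()}

def ca_sign(csr, ca_name):
    # a trusted entity turns the request into a certificate it vouches for
    body = f"{ca_name}|{csr['subject']}|{csr['public_key']}"
    return {"subject": csr["subject"], "public_key": csr["public_key"],
            "issuer": ca_name,
            "sig": hashlib.sha256(body.encode()).hexdigest()}

priv, pub = device_keygen()
cert = ca_sign(make_csr(pub, "sensor-1234"), "FactoryCA")

# The private key never appears in anything that left the device:
print(priv in str(cert))  # False
```

The point of the shape, not the toy math: only `pub` and the request ever cross the trust boundary, so compromising the signing service yields public material, not device keys.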
In conclusion, making IoT devices both self-secure and secure in their network communication requires the use of digital signatures. Using certificates can help manage public and private keys, but often trade-offs have to be made, especially with lower-powered devices. Okay, I've been told that I'm muted; hopefully this is better now. I was just mentioning that after this Q&A time here, I also should be available throughout the day on the Slack channel for the Internet of Things track. Let's just get right to the first question. Someone asks: are you seeing any common protocols for over-the-air updates? I'm familiar with SUIT and Uptane, but they are both nascent. Unfortunately, no, not really. I know of one in the Zephyr tree that's called UpdateHub. It's an open source implementation, and they also provide the hosting for the service. Beyond that, I'm not actually aware of anything else that isn't also just something in development. Let me know if you have any more questions about that. I don't know if I do something to make these questions go away after I've answered them. Feel free to ask any other questions. I did notice, listening to the talk, that there are occasional little mistakes in there. I will be posting the slides that go with this talk to the schedule. And what time do we have? We still have about four minutes. If there aren't any other questions, I can try to ask my own questions. If anyone is interested, please let me know: I did do some benchmarking, specifically of the different signature algorithms, and I have some results on a particular platform, with RSA being the fastest, and some examples of code size with that.
Ed25519 is quite interesting because there are several implementations that differ drastically in code size, from one known as TweetNaCl, which is a couple of kilobytes, to the NaCl reference implementation, which is about 130 kilobytes of code, larger than the memory in a lot of these devices. The reference implementation is significantly faster than even RSA. In the MCUboot project we did find one that's a kind of medium size: it's about 10 kilobytes of code, and it's nearly as fast, so if anyone's interested in that, I do have some references to it. I think that's all that I have for questions here. We do have a couple minutes left. Feel free to... oh, I need to scroll down. What are your thoughts on browser moves to limit accepted certs to those with a one-year maximum expiration? I don't know if I really have solid thoughts on that question. It is kind of a trend, and it's a trade-off. Let's Encrypt only issues certificates with a 90-day expiration. I think for IoT use, especially for bootloader certificates, you definitely need longer expiration dates, when there has to be some kind of thing that doesn't ever expire. All right, next question: how do you feel about external crypto providers, like the ATECC608 and such? I don't know enough about them to really comment, so I guess I don't really know how to answer the question. I can look into it more if you're interested. Another question says: near the end you mentioned self-signed for the bootloader. I'm not sure exactly how to answer that. The bootloader generally would be self-signed, or just not signed at all, since the signature is embedded in the bootloader itself. Someone asks: are you aware of any solutions for platforms without secure environments? Well, we kind of have this already with MCUboot. There's still kind of a requirement to be able to protect the bootloader itself, but we are able to verify signatures.
It's all about trade-offs in security. If you don't have a secure environment, there isn't a place to store things, so you may have to hard code them into the bootloader, for example. One more question, and we're getting low on time here. The question asks... oh, so this is the second part of the self-signed bootloader question: is that a typical situation? I struggle with how to manage keys for secure boot, etc. Other than saying yeah, I feel the pain, I think key management is something that's not very well solved for these situations, and it's something that we need to work on. With MCUboot we have a Python script that manages the signatures, which just live in files, and for the typical kind of device you would probably want to have an HSM that manages your keys. I don't know if that quite answers the question, but I think I'm out of time; I believe we're at the end. So thank you all for joining. As I mentioned, I will be on the Slack channel if you have any additional questions. Thank you all very much.