My name is Andy Doan, and this is Ricardo Salveti. We work at Foundries.io, where we help device makers get secure IoT and edge devices to market faster, and then help them manage those devices through the lifetime of their product. Before I begin, I'm going to call out a couple of names; hold your hand up if I say yours. Is Bruce Schneier here? Daniel Bernstein? All right, so if you're not holding your hand up right now, you're in the group of people who probably shouldn't be inventing crypto or our own security ideas. That's what this talk is about today: how can ordinary people start to handle sensitive information on embedded devices? We go through a lot of different topics, so feel free to interrupt and raise your hand if you've got questions; it may be hard to have the context later on if you wait until the very end. We're happy to answer as we go. The presentation is one of those classic internet memes: how it started, how it's going. We had a customer with what I thought was a relatively simple question about how they wanted to interact with a big cloud service securely. I looked around, asked in a support ticket, and the answer came back: well, you just need an unencrypted credential file on the device to work with this. To be fair, I wasn't talking to their A-plus tech support team, so maybe there was a better answer, but I think it's an industry-wide thing: this tends to be what things boil down to, and it kind of drove me nuts. I think there are better ways we need to do this, and that's the mission of our company, to do these things better. So that's how we landed on this talk. Before we get going, some people may ask: what's the big deal about putting sensitive information on devices? I'm going to claim this as my own law of internet security.
I'm kind of crazy and hyperbolic like that, but my bigger point is: credentials always get leaked. Sometimes you think you're doing something clever, or you've got some idea of how you're going to make your device secure. There are just people out there like Matthew Garrett who are going to see your device and, after a couple of days, find where you went wrong, or they're going to hook an oscilloscope up and start pulling keys off your TPM or something. If you don't do these things the right way, you're just at risk. Some of these security talks get bad; it's all doom and gloom and we're all going to die. On a positive note, I think our industry has a superpower: the "how hard could it be?" mindset, or, being Texan here, "hold my beer." That mindset has served us really well. You get individuals who create some open source project on GitHub and topple an entire proprietary software company. We've done really neat things. It made us naive enough to create a startup company. When we did, I said, man, I don't like Jenkins, I'm going to create a better open source version of it, and today that's a backbone of our company. But when you start talking about security, it is hard, and the "how hard could it be?" mindset starts to fail us. Security is necessarily complicated; you can't really simplify it. You can streamline security, but you've got to do it right. A common thing: I work on software-over-the-air update stuff on the client and server side, so I tend to bump into people who have ideas like, oh, here's how you could do a secure OTA. I love pointing them at the TUF specification, The Update Framework, and everything it has gone through, all the things you have to do to make an update reliable and secure. The point being: unless you're one of a small handful of experts, your attempt at custom security is probably going to fail.
So then how are we going to do this? A problem I hit every time is that security means different things to different people. There are a lot of layers to security and different dimensions to it, and a lot of times I see people in meetings trying to talk about security and talking past each other. You might have a hardware guy whose view of security is, well, I'm building this HSM and it supports these elliptic curve algorithms. Or you talk to the ops guy, and he's worried about how TLS is secured between the device and the cloud. And then maybe, more like the people in this room, you're interested in the firmware side and you're talking about the keys for signing the bootloader and the operating system, but you're talking to the DevOps guy, who's talking about keys for TLS. All of a sudden words like "keys" and "security" get confusing for everyone. So I'm going to present some building blocks here. This is possibly my worst slide; that's why I put this silly picture of Nick Jonas up there. Some people are going to say this slide is completely obvious, and some people are going to say it's too much information. For the people who think this is a little boring, of course I know it, I'll give my sports analogy: professional baseball players hit off a tee every day. So every now and then, like professionals, we've got to look at some building blocks. Just to get us all level set: I think the first big building block when we're talking about security is public key infrastructure, PKI. I'm not a big fan of it; you can see my snarky comments about X.509 and ASN.1. The good thing about PKI is that it has been scrutinized and looked over for a long time, and the stuff works.
To explain PKI at a really high level (I'll get into it a little more in a second): this is how we're going to establish trust between different entities, say devices and the cloud trusting one another. Once you have your PKI in place and you've got these credentials, you can start doing some neat stuff like encrypting things. Most people start with symmetric encryption: both parties share a key and can encrypt and decrypt with it, which is okay. But then you get into some cool stuff, asymmetric encryption, which we'll talk about. And maybe even more primitive to PKI is this notion of signing things, so you can prove that you're the person who possesses a private key. We're talking about all these keys, and all of a sudden, where a key lives becomes critical to your product. This is where HSMs come in: they're able to keep a key in a way that no one else is going to be able to see it. When you put all that together, you make Nick Jonas happy with his chef's kiss. So here's a PKI at a high level. You start with a root certificate authority. As I start talking about this, we're talking about keys and certificates; they come together in what we call a key pair. Your root key pair is going to be a private key and a certificate that's signed with it, with some metadata that says when it was created, how long it's going to be valid, things like that. Because this is the root of everything you have, this key becomes essentially too big to fail, and I would say there are very few people whose levels of paranoia aren't justified for protecting it. Because this key is so important, you can't keep it around on ordinary computers, so you create what we call intermediate certificate authorities.
These intermediate certificate authorities have a private key and a certificate, and inside that certificate you can see, hey, this was signed by the root certificate. That CA may have the ability to create keys for other parties. The way these certificates get created and traded around is with certificate signing requests. A certificate signing request is a pretty simple thing: a device can say, hey, I need a certificate, here's some metadata I'm hoping you'll put in it, and here's the public side of my private key. The intermediate CA can look at it and say, yeah, I like what you want, I'll sign that. These certificates are okay to share around in public, because there are ways you can verify that a peer actually possesses the private key behind a certificate. Some other interesting things you can do with certificates: the X.509 metadata has a subject field that says who the owner is, and you can put things like the device's UUID in there. Now, suddenly, a device is cryptographically pinned to that name. So as it tries to access things on the web and says, hey, I'm Bob, you can say, no, actually, you're cryptographically Alice. Once we get PKI set up, really the first place we see it used for embedded devices is mutual TLS. You've got this key infrastructure, and essentially everyone in your infrastructure knows who the root is. Mutual TLS is a great way for clients and servers to talk to one another. A client can say to the server, hey, I want to talk to you, who are you? And the server can say, I'm this guy, I've got this TLS cert, and it can do some cool things with its private key to prove, yeah, this is my cert.
And the device can say, okay, you've got a valid cert, and it was signed by the root that I trust, so let's talk. Then the server says, but who are you? The device says, here's my cert. The server says, that's right, and I trust your root, so we're good to go, we can talk. The nice thing when we're doing embedded development is that mutual TLS is essentially universally adopted. Everyone has it, especially the big cloud providers: Amazon, Azure, Google all have something usually called an API gateway. You can give that API gateway your root cert, it will give you a certificate signing request, you sign it, and then their API gateway can start handling TLS traffic for you that you've authorized. They've never seen your private key, you've never seen theirs, but all of a sudden you've got this trust, and devices that know that root can just start magically talking to their services and trusting them. The next thing we tend to talk about, and where this talk started, is encrypting content. Symmetric encryption is not bad, and a lot of times you need it. But what always makes me uncomfortable about symmetric encryption is that there are a lot of people sharing this one thing that becomes super powerful. So some really smart math folks came up with another technique called asymmetric encryption. It's pretty basic: if you know someone's public key, you can encrypt data in a way that only the person who possesses the matching private key can decrypt. This is pretty handy. I like it when our own customers want to give us sensitive configuration for their devices: they can give it to us, and I don't know its contents, and if something bad happens, they don't have to worry. You don't have to trust the person hosting it for your devices; only the device is going to be able to decrypt it.
A little pro tip on the bottom, and this may be obvious to some people: usually you're going to need to keep that config data stored persistently on the device. Keep the encrypted copy wherever you want on disk, but when you decrypt it, because a lot of services are still going to want to read a file from disk, put the plaintext in tmpfs. That way, when you reboot, it's gone, and when someone tries to analyze the device powered off, you're always safe. And here's something you'll bump into as you start doing things the right way: ECIES. Asymmetric encryption with RSA is well known and well supported; if you're encrypting payloads of certain sizes, it's slam-dunk easy, and for bigger ones there are a couple of ways to do it, but it's pretty much a set-in-stone thing that everyone does well. Elliptic curve is a little less so. And the thing is, you need to be using elliptic curves for your encryption these days; it's the better way, RSA is not so great anymore. So the problem is: how do we do encryption with elliptic curves? It's different from the RSA technique. A standard has emerged called ECIES. I don't want to go on a standards rant, but this is one of those standards where implementations aren't interoperable with one another, even though each is doing things in a way you can more or less trust. Here's the experience I had in our product. We have a command line tool called fioctl, written in Go, and I was trying to find a way to do ECIES encryption in Go. At the time, the first place doing this was Ethereum; they have a small library inside their overall Ethereum project. We were using that, and it works great, but I get GitHub warnings on my repo all the time: there's a security issue in Ethereum, you've got to update your version, and you start looking around.
And those warnings are never about the encryption stuff we're interested in. The problem is there's now an ECIES Go library that showed up afterwards. It works fine, but it's not interoperable with the way we do it. So all of our devices with these encrypted payloads, we can only decrypt them with the Ethereum implementation. That's just a heads-up as you start going down your own encryption journey. Now we'll get into HSMs. These things are really cool: essentially a magic piece of hardware where a private key is handled correctly; the mathematicians have done really neat things and it all works. The HSMs we like, and I think more or less everyone does this, support some common libraries. The main one that interests me is called PKCS#11, a library that exposes some simple crypto operations, like "sign this piece of data." Once you have PKCS#11, and it's such a standard, things like OpenSSL know how to use it as an engine, and if you're using OpenSSL, now you can use curl. You've enabled your entire ecosystem through this. All of a sudden, your device, with an HSM holding a private key that only exists inside that piece of hardware, can talk to anything in the world in a secure way. Now I'm going to hand it over to Ricardo to give you the bad news about all this. So what I wanted to cover is looking more at the embedded side. As Andy was saying, you have the PKI, you're using asymmetric keys, you're using the best practices, even an HSM. Unfortunately, life is not that easy even when you're already there, and the reason is simple: security is only as strong as your weakest link. You might have HSMs, you might have a lot of protection, but if you have an open door somewhere, people can exploit that.
And from there, you basically lose access to whatever you're trying to protect. This is also something Linus said in the keynote when talking about security in the kernel: the only way to do security right is to have multiple layers of it, so that if one layer has an issue, there are always additional layers protecting you. Using an HSM is good in that sense; it's one layer. But there are other things we can do on the platform side, on the embedded side of things. And if you look at security in embedded Linux in general, for those of you who have been around and working on embedded products for more than 15 years: security is not taken seriously most of the time, and it's just really bad. It's not like what we're seeing at these conferences, where the cloud is pushing ahead, trying to be more protective, with new methodologies and so on; adoption on the embedded side keeps lagging behind. It's pretty common to see devices running as root by default, exposing UART, leaving data exposed, shipping unsigned images, running SSH by default, services exposing ports, things like that. So yeah, it's pretty bad overall. Of course we're improving, and the idea of this presentation is to show some of the things you should be concerned with. So, looking at the platform and OS side, there are additional layers you should think about in your final implementation. Some are pretty simple. For example, make sure you have secure boot, in the sense that the hardware is only capable of booting images that are signed with, for example, the product owner's key. That's one kind of protection.
The other thing you can do is measured boot. With secure boot alone, after you boot, the only thing you know is that you booted a validly signed image; you don't know whether something was compromised afterwards. So there are ways to measure the boot, and with the measured data you can even perform remote attestation. For example: you have secure boot, you have a bunch of things you've added in there, and when the device tries to reach a remote server, you can ask the device to attest itself: show me what is actually running. If it doesn't match what's expected, you can block the device or disable it. So you add another layer, even though you might have HSMs and so on: an attacker might not be able to extract the key, but they can still sign with it if they control the element. Another basic thing is disk encryption, which unfortunately is not that common; on a bunch of devices out there it's pretty easy to extract content from the eMMC and so on. That's another level of protection you can add. And of course, with disk encryption, or any encryption you have, you need to control the key: there are ways to bind the key to secure boot or to whatever secure element you have, so you don't leak that key either. That's another good thing to do. And then there's general hardening and attack-surface reduction. If you're not familiar with threat modeling, you should be: for whatever product you're designing, think through the threat model and all the things exposed on that device, and try to reduce them as much as you can. JTAG, for example: maybe you need it, but if you don't, you can just remove it; it's a big issue.
If you don't block that, people can easily extract things from memory on the running system. And one minor thing on the application side: we've covered the OS, but suppose you're now thinking about the application. A common approach these days is to contain the application, for example in a container, and there are several different ways of doing that. Sergio Prado is doing a presentation covering security aspects of containers, so if you want to have a look, I suggest going there on Friday. And even then, just talking about HSMs: even when dealing with an HSM, you need to know and be aware of some good practices. As I said, hopefully nobody is going to be able to extract the key, even with physical access to the hardware, but there are things you should be aware of. For example, how do you communicate with the element? Is that communication encrypted? There's a protocol called Secure Channel Protocol 03 (SCP03) that you use to establish communication with the secure element and encrypt that communication. Of course, there's a key used for that encryption, and I'll get to how to better handle it in a bit. The other thing is reducing access to the element. You don't need to give access to all the users on your system: you can reduce permissions with groups, or go through abstractions. For example, as Andy was saying, you can use the PKCS#11 library. Or there's a project ARM has been pushing quite a bit lately called Parsec, the Platform Abstraction for Security. The idea is to have a service that controls access to secure elements, TPMs as well, and then your application only has to deal with the service.
So you can have multiple containers talking through that service without having direct access to the element, and the service is now responsible for abstracting the assets; then you can have policies and so on. The other thing, especially on Arm SoCs, is the trusted execution environment, in which you can run a secure OS. That's another good abstraction: have the secure OS take control of the HSM. Going a bit further into that, let me cover some of the work we decided to do. We had customers who were really looking forward to using the NXP secure element, the SE050 (there's the SE051 now). We decided to do the implementation in a way that abstracts access to the element by leveraging the TrustZone technology. So what we did, working upstream, was add support in OP-TEE to manipulate and take control of the element. You boot the system, and OP-TEE, which is the secure OS, initializes communication with the element; then, whenever the rest of the stack needs to communicate with the element, it has to go through OP-TEE. So you add a new level of protection. The good thing about that is you can extend the hardware root of trust: when you're performing secure boot, OP-TEE is one of the binaries you start in early boot, so you can extend the chain of trust to it as well. As for encrypting the communication with the element: even if you're leveraging OP-TEE and the implementation we did, you still need to store that encryption key somewhere.
By using OP-TEE, there's a nice piece of functionality: if the hardware provides a way to generate a hardware unique key at runtime, you can automatically derive a key from it and use that derived key for the actual encryption. This is nice because it binds the element to the SoC. If you remove the element and replace it, the hardware unique key isn't going to match the element, and the same goes for the SoC: if you swap the SoC to abuse it, it's not going to work, because the key is dynamically derived from the hardware unique key. The other great thing, and we helped a bit on the PKCS#11 implementation in OP-TEE: because we have this integration done in OP-TEE, and OP-TEE itself has a library and a trusted application for PKCS#11, you can simply use that library; underneath, OP-TEE is going to use the element, but you don't need to know about that, it's transparent to the application. The presentation is on the schedule too, in case you want to have a look: Jorge Ramirez, who works with us, gave a talk a couple of weeks ago at Embedded Recipes about how the implementation and integration were done, so I suggest looking at that if you're interested in the topic. And for any of these individual items, we could do a presentation of its own, so if you have a particular question, or want to see how we did it on our OS, which is all open source, we're happy to provide details. Now I'm giving it back to Andy. There we go. So yeah, NXP, as Ricardo was saying, has a pretty cool product, the SE050, now the SE051; you can put it on an embedded product right now and it works.
They also have a really neat product called EdgeLock 2GO. It's a software-as-a-service offering in the cloud that helps you manage keys. The thing with PKI is how you're going to distribute all these keys, and EdgeLock 2GO is a cloud service where you can start to delegate different things: it will create intermediate CAs, which you can sign yourself if you want to own that root, and then those intermediate certificate authorities can create certificates and hand them out to all the devices in your fleet that have these SE050s. So instead of building your own PKI-distribution software, you could leverage EdgeLock 2GO if you'd like. As for me, as I was telling someone earlier, I'm trying to work my way up into the cloud and not get into the weeds and solder stuff anymore. SoftHSM is a project you can use on any device; it acts like an HSM, but it's pure software. It's a great way to prototype, to make sure things are theoretically going to work according to your assumptions. Sometimes people ask, well, what about my Raspberry Pi? And here's my snarky answer: you can have a Raspberry Pi, or you can be secure; you've got to pick one. It's not going to work for you. So now we're getting to the end, summarizing things. There are things we can start doing to be secure with our devices. Use mutual TLS, and once you have mutual TLS sorted out for your fleet, keep those keys in your secure element. And now that we know your device's public key, we can start giving it configuration data, however you want to manage that for your fleet; you can encrypt it asymmetrically in a really nice way.
If you're using third-party systems, like I was saying earlier, the good ones all support mutual TLS hooks for doing whatever you want. And as Ricardo was talking about earlier, we can start building our software trust up from the hardware root of trust, through all the layers of the operating system. And things we can stop doing: let's stop putting stuff in clear text on devices. There are ways to prevent that, so we have to take it more seriously. In particular, Amazon, and I'll be honest, I'm not a big fan of AWS. I joke that they're the C++ of the cloud, and I mean that in all the bad ways of C++, but also all the good ways: it's some really powerful stuff. Amazon has AWS IoT, which is a pretty cool thing, and they have a process called just-in-time provisioning where, using mutual TLS, devices can automatically connect into AWS IoT and start being managed there. It's a little tricky to do. At the risk of mixing metaphors, their identity and access management system, IAM, is kind of AWS's version of SELinux: it's really hard, and it takes great restraint not to just disable SELinux instead of doing the right thing. But you can work through it all. I actually had a lot of trouble figuring this out when I was writing a blog post, so if you want to follow my blog, if you ever get interested in AWS IoT, it walks you through how to do it. And it's all command lines you can copy and paste, because the problem with AWS tutorials, especially anything with IAM, is it's click this, click this, click this in a UI, and you never remember how you got there at the end. And if not AWS, like I say, mutual TLS is supported by all the big players.
So here's my more generic advice, now that I'm telling you to stop putting these credentials on a device: whatever your cloud provider is, they're going to have some API gateway; set up mutual TLS on that. And then, if you want to be one of the cool kids, put a serverless Lambda function behind it that accepts traffic from your devices. Maybe a device says, hey, can you give me temporary credentials to talk to this service? And that Lambda function can hand back a credential that's good for two minutes and then goes away. That's how I would start thinking about talking to cloud services in a safe way. And then, as Ricardo added toward the end: we've got to stop treating security as an afterthought. You've got to do this early in the project; you can't bolt it on at the end. So that's everything we had. If you've got questions, if you want to ask a question, I can repeat it. Okay, yep. So the concept behind key rolling would be: maybe you rotate the root of trust, or maybe, for a device that has minimal or no network connectivity, it has only one hardware root of trust, and then you subdivide the system into very small modules, each of which contains the key that validates the next one. And they're never the same, right? Is that a waste of time in your opinion? Because it's a lot of overhead, a lot of work, but is the payoff there for all of that work? So, I think so. The thing is, as Linus said, and I quoted it in there: you add a bunch of layers. Of course it complicates development as well; it's annoying, you have to handle a bunch of things and so on.
But I believe it pays off in the end, because you have so many different layers and abstractions, and even if one of them is compromised, you can still be reasonably confident that nothing necessarily bad is going to happen. Of course it's a pain, but I think it's worth it. You know, when we were creating our company, one thing we did early on was take the mentality: things that are hard, let's just do them more often. This is one of those things; it gets easier if you force yourself to do it. We did a lot of hard things that were painful, but it's paid off for us. One thing we suggest to customers, for example with secure boot, where you have to fuse keys and so on: right at the beginning of development, just create a development key, fuse it, and play with it, get comfortable. Then later on you just swap the keys; you don't need to redo anything, and you're more comfortable with the stack as you go. Any other questions? No problem. I was just curious, when you talk about disk encryption, are you thinking full-system disk encryption, or just the data partition, the modifiable parts? That usually depends. If you don't need to hide anything, if you don't have proprietary information in the root file system, just open source content, what we suggest is not necessarily encrypting it. But most companies are concerned about leaking even what they're actually using, so usually what I see is that people prefer to go full encryption for everything, not only the system but the data as well. Of course, there's a performance penalty for doing that.
Some devices offer crypto acceleration — NXP, for example, with the CAAM — and you can use that acceleration for disk encryption as well, to offset some of the cost. But it depends on the kind of customers you have. My initial thinking is that as long as you don't need to protect IP, you shouldn't necessarily be encrypting; it just creates another layer of complication. But for private data and so on, of course, you encrypt.

A short one: what are your thoughts on secure JTAG? You mentioned that disabling JTAG is one thing we should be doing, but lots of SoC vendors today offer some kind of secure JTAG functionality. Mike, did you ever play with secure JTAG?

Secure JTAG is great if you can afford it. It can be fairly expensive — I don't know if you've ever looked into the licensing of those units and how you use them in production — but if you can afford it, use it. It's another layer of protection. If you don't want to disable JTAG — if you want to be able to recover devices in the field — you can leave it enabled and have keys protecting JTAG access. But oftentimes it's cost-prohibitive, and you'll see companies use it only at the very beginning of product development and then quickly move to something else.

You've mentioned elliptic curves a lot as part of the encryption. Every now and then I get a flow-down from the bosses of, well, how do we start protecting against quantum computers? I know NIST has an ongoing project to find the right algorithms, but is there a general timeline you've seen for when people have to start migrating from elliptic curves to the next algorithm?

I'm not up to date on that stuff. I tend to just go with what they're saying now.
And yeah, we do Ed25519 — it's kind of the curve people use. Usually, at least with the customers we have, of course they want to protect the product and its IP, but they're not that paranoid about this at the moment, even though they're thinking of a product with maybe a five- to twenty-year lifetime, and hoping we create something that's able to resist that kind of attack later on. Which is why — and we put this in the "afterthought" point — one thing you should start doing is making the whole solution updatable: not only the OS and applications, but everything. Even, for example, the NXP HSM has, I think, a Java-based OS, and you can update the OS itself inside the secure element. So if you have the possibility of updating the system at any point in time, then once there's a post-quantum solution you can try to roll it out, and then of course create new certificates, new keys, and roll all of that. So yeah.

I think we're out of time now. Can we take one more question? One more question? Oh yeah. Yeah.

Could you please explain why you decided to go with OP-TEE instead of Trusty? That's the first one. And the libraries you use with OP-TEE — I guess you use mbedTLS, right? Or something else?

We went with OP-TEE mostly because we came from Linaro, and when we were looking at the more generic TEE OS for Linux in general, that was kind of the one to go with, so we decided to stay with it. I don't remember if Trusty is even being used on Android these days. But OP-TEE is at least the one we have familiarity with — we've contributed to it before.
And it seems to be gaining ground: at least for SoCs from NXP, Xilinx, and even TI now, they're all supporting OP-TEE officially, so we decided to go that way. As for the library: with OP-TEE there's a PKCS#11-based library that talks to the trusted application, and then we just use, for example, the OpenSSL PKCS#11 engine, and everything goes through there. So we don't have to do a specific TLS implementation — mbedTLS or whatever — we can simply use the OpenSSL engine.

All right, thanks a lot everyone.
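On the application side of that flow, a key behind the PKCS#11 engine is typically referenced by an RFC 7512 `pkcs11:` URI rather than a file path. Here is a small, hedged sketch of building such a URI — the token and object names are made up, and real deployments may also need `pin-value` or module attributes depending on the engine configuration:

```python
from urllib.parse import quote


def pkcs11_uri(token: str, obj: str, obj_type: str = "private") -> str:
    """Build an RFC 7512 PKCS#11 URI for a key held in a token
    (for example, one exposed by OP-TEE's PKCS#11 trusted application).
    Attribute values are percent-encoded as the RFC requires."""
    enc = lambda s: quote(s, safe="")
    return f"pkcs11:token={enc(token)};object={enc(obj)};type={obj_type}"


# An application would hand this URI to the OpenSSL pkcs11 engine, e.g.:
#   openssl req -new -engine pkcs11 -keyform engine \
#       -key "pkcs11:token=fio;object=device%20key;type=private" ...
```

The point the speakers make holds here: because OpenSSL resolves the URI through the engine, the private key never leaves the secure world, and the application code does not change when the key moves between a soft token and real hardware.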