All right, I'm not used to leaving that off somewhere; it's a bit weird being the only one without the mask. Okay, thank you very much everyone for joining. My name is Adrien Leravat, I'm a senior software architect at Witekio, and the presentation today is about five success factors to deploy Yocto for production-grade embedded IoT devices. That's a mouthful of a title, so you can guess I wasn't the one writing it; the goal was more to catch the keywords. But I will explain the rationale behind it anyway, and why it actually makes sense. First, a brief presentation of who we are. Witekio is a software services company; we've been in business for almost 20 years now, with offices in the US and in Europe. What we do is essentially help companies design software solutions from the low level, whether you have an MCU or an embedded Linux device, from the kernel and drivers up to the application and everything that's going to run in the cloud. So essentially working on software across that full stack for embedded IoT devices. Now, to this definition. The idea here is to define a bit the difference between this and this. Well, one looks nicer; that's one difference. We can say that both work, I suppose; I assume they work the same. But the one on the left is going to be harder to maintain, and probably not super safe. All these criteria, these attributes, are what we're going to look at: how to go from that thing on the left to the solution on the right, and to provide a sort of checklist of important things to check when you're designing a product, to make it production ready. So we came up with five different attributes: making sure that it is future proof, easy to maintain, secure, fast and reliable, visible or observable, and controllable.
The intent is that if you do all this, at the end your product should not only work, but also make sense in production and be easy to maintain. If you look at these five attributes, they actually fall within different groups in your team, and that's one of the points I want to make here: some of the elements we will discuss might be outside of your job description. But just knowing that they exist, and that they're going to help another team and help your product be successful, means you'll hopefully learn something and be able to bring that knowledge back, check or ask about it, and make sure that overall the product makes sense and is production ready, or however we want to name that. So throughout the presentation we'll look at these five areas, and you'll see little checkboxes for the different points we recommend; some are nice to have. The goal is really to provide a 360-degree view of what we recommend on the device. Let's start with the first one, which is making it easy to maintain. I'll start by saying that Yocto is not the easiest tool to work with. If you've used anything else, it's a steep learning curve; it's big, and so are embedded devices, and so is Linux. There are a lot of things to know. So everything we can do to make that easier for anyone that jumps on the project is going to make life much easier for them and for everyone overall. The first recommendation is to create and provide the development environment: Git, Python, repo if you're using it to clone your meta layers, and all that. You can very easily put that in a container and have a few companion scripts to be able to build it and have it work really quickly.
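As a rough sketch of such a containerized development environment, a Dockerfile along these lines could work; the base image, package list, and user name here are assumptions, not a definitive setup:

```dockerfile
# Hypothetical build container for a Yocto development environment.
# Pin the base image (ideally by digest) so the environment stays stable.
FROM ubuntu:22.04

# Typical host dependencies for BitBake builds, plus repo for manifests
RUN apt-get update && apt-get install -y --no-install-recommends \
        gawk wget git diffstat unzip texinfo gcc build-essential chrpath \
        socat cpio python3 python3-pip xz-utils debianutils iputils-ping \
        file locales repo ca-certificates && \
    rm -rf /var/lib/apt/lists/*

# BitBake requires a UTF-8 locale
RUN locale-gen en_US.UTF-8
ENV LANG=en_US.UTF-8

# Build as a regular user; BitBake refuses to run as root
RUN useradd -m build
USER build
WORKDIR /home/build
```

A small companion script that runs `docker run` with the workspace bind-mounted is usually enough to get a new developer building on day one.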
One thing we'd recommend is, if you can pre-build that container image, to provide it; that's actually going to save you some time, because even if you have a Dockerfile and you're using a specific version of a specific distribution, after some time the packages are going to be different, the versions are going to be slightly different, and overall it might just not work exactly the same, or not at all. If you pre-build that image and provide it to your developers, you just don't have to worry about it. The second one, maybe a bit boring, is really about documenting the main development workflows that everyone will have to follow when working on your Yocto distribution and meta layers. That's of course essential for newcomers joining the project. It's also going to be a great tool to share knowledge within your team, because if you have someone write up, say, the build process for some component, they'll write a few things, and then someone reviewing it will say, oh, I didn't know you could do that, or why do you recommend this? So it's also a great way to share knowledge, just having that basic documentation. And when you're in crunch time, when your project has launched and there are, well, hopefully not too many bugs to solve, but if you're in crunch time at some point and you need external help, you won't have time to create that documentation. If it's there and ready, you're good. Another one is trying to shrink build times. Again, Yocto takes a few hours when it starts from scratch. It's a bit harder now that we're working from home, but what you can do is share your downloads directory, your sstate cache, mirrors, et cetera, which basically enables your developers to download pre-built packages and the like without having to wait for Yocto to rebuild everything.
If you have a server within your team where you can easily download and push these, it's going to save you a lot of build time. The alternative is, of course, you can always ask for a beefier machine. I did. The fourth point here is more for the application developers: provide an application SDK. They don't need to rebuild a full Yocto distribution. For some people maybe that's obvious, but some developers build all of Yocto just to be able to compile their application, and you don't have to do that. You can provide an SDK that lets them build their application and just deploy it, and you can deploy it as a package to save even more time on your device. There are a few other best practices and recommendations that I put at the end of the presentation, just to try to balance it out. The second part here is about providing reproducible builds overall, because that will be crucial once you launch your device. You don't only want that thing working reliably; you want to be able to track the releases, and to be able to go back in time to a specific point, rebuild exactly the same image, and test a fix, for example, against it. The first point to accomplish that is of course to version your OS and to pin everything: use Git to version your meta layers, pin the specific versions and specific commits of everything, tag your OS for each version, and override os-release, for example, to provide a build time, whether it's a production build or a production secure build, or other kinds of information, just to have that information available within your distribution.
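Sharing the downloads and sstate cache is done from `local.conf`; a minimal sketch, assuming a hypothetical internal server at `build-mirror.example.com`, might look like:

```conf
# Hypothetical local.conf fragment: reuse team-wide mirrors so developers
# fetch prebuilt artifacts instead of rebuilding everything from scratch.
SSTATE_MIRRORS = "file://.* http://build-mirror.example.com/sstate/PATH;downloadfilename=PATH"
SOURCE_MIRROR_URL = "http://build-mirror.example.com/downloads/"
INHERIT += "own-mirrors"
```

The application SDK mentioned above is generated with `bitbake <your-image> -c populate_sdk`, which produces a self-extracting installer that application developers can use to cross-compile without ever touching BitBake.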
From there, automate your Yocto builds. It's fairly easy to have nightlies, building Yocto every night or so from your main branches; it will at least give you feedback on whether your main image and main branch build all right. You can do that with Jenkins and a bunch of tools. What I would personally recommend is tools like GitLab and Azure DevOps, because they provide runners that you can run on premise on a specific machine. That machine will have everything cached, the downloads and everything; you can build on a specific platform but still have all the feedback of the builds and the results within your cloud environment, where you have all your repos. That enables the third bullet point here, which is pull request validation, something we've played with. What you really want is, if you're hacking on U-Boot, say, before you merge that onto your main branch, you want to know whether U-Boot on your platform builds and everything works fine. If you wait for the nightlies it's going to be a bit late; it's already merged. With that setup, you can have your pull request trigger a build, on GitLab here for example, pass in the package and the commit that you want to build, and have that build machine build Yocto with that specific commit of U-Boot, build it, deploy it, and you can easily feed that result back to the pull request: yep, that looks good, or something is wrong with it. And finally, archive your release builds and environment. I talked about that just before, but again, even if you have a Dockerfile, if you wait a year from now and try to rebuild your specific version, chances are you won't be able to rebuild it the same way.
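A sketch of what the nightly plus pull-request validation could look like on GitLab, assuming a runner tagged `yocto-builder` and an image named `my-production-image` (both names are assumptions):

```yaml
# Hypothetical .gitlab-ci.yml: nightly and merge-request Yocto builds
# on an on-premise runner that keeps the downloads and sstate caches warm.
stages: [build]

yocto-build:
  stage: build
  tags: [yocto-builder]
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
    - if: $CI_PIPELINE_SOURCE == "schedule"   # nightly from the main branch
  script:
    - source poky/oe-init-build-env build
    - bitbake my-production-image
  artifacts:
    paths: [build/tmp/deploy/images/]
    expire_in: 1 week
```

The merge-request rule is what gives you the "yep, that looks good" feedback on the pull request before anything lands on the main branch.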
So an easy way to make sure you're fully reproducible is to have a pre-built image, a VM or something that really freezes your development environment at a specific point in time, so you can restart it the same way. I'll note here that Yocto participates in the reproducible builds effort and gives you that guarantee at its level, but that doesn't give you any guarantee regarding the tools and everything you're using at your level. Yocto does its part; you just need to do yours. From there we can move to security. The usual disclaimer: I'm not a security expert, but hopefully this will give you some elements you can use to improve security on your device. Let's look first at a few OS security features. The first is really just a placeholder for a lot of common things that you know you have to do: having a specific production image, or a production secure image if you're using secure boot; not including debug tools and that kind of thing; not allowing root login. It's really the basic things you'd expect to be there. Additionally, make sure to disable all unused interfaces. That also applies to USB: if you have a USB port and USB enabled in your kernel, you don't want it open for anything to be inserted into that port, and a lot of things can be inserted into a USB port. So either disable USB completely for the production image, or only authorize specific USB devices. And use secure protocols, of course, HTTPS and all that. Once we're done with that, secure boot is also something everybody recommends, but I have to recommend it here too. I would definitely recommend validating as much as possible; the leaner your system is, the easier it is to validate everything and to go as far as possible. So don't just stop at U-Boot and the kernel.
If you can validate more than that, your initramfs of course, your DTB, and part of your file system, really go that extra mile, because it gives you a good extra guarantee. And one thing that's absolutely crucial here is to make sure that you have backup secure boot keys available, because if the key you use to sign your image is lost, or if it is compromised, then for all the devices in the field you're going to have a problem. Just have a couple of backup keys; they are pretty easy to set up, and you can use eFuses to revoke a key on your device; you just need a path from the cloud or elsewhere to trigger that. Provide a secure secret store; again, that's pretty basic, but it's something you need to have: either a TPM, a secure element, or a TrustZone-based solution to store your device keys, et cetera. A nice to have here is to encrypt your disk if that's an option, and to prevent writes altogether, either using read-only file systems or mounting partitions read-only until the point where you need to write to them. A few other things: for applications and everything in general, you want to follow the least privilege principle, so you can use SELinux, you can use AppArmor, you can also use containers and isolation to make sure these are restricted. It's a general recommendation; SELinux is not the easiest thing to set up, but it's definitely a secure one. Just after that: monitor and address vulnerabilities when you detect them. This is a really broad topic that deserves, and probably has, its own presentation, so I'm just doing a very quick snapshot here.
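Several of the basics above map directly onto image features in `local.conf`; a minimal hardening sketch, assuming a recent Yocto release with the `:remove` override syntax:

```conf
# Hypothetical local.conf fragment for a hardened production image.
# "debug-tweaks" enables empty-password root login; never ship it, so make
# sure it is absent from EXTRA_IMAGE_FEATURES and force-remove it anyway.
IMAGE_FEATURES:remove = "debug-tweaks"
# Mount the root file system read-only by default
EXTRA_IMAGE_FEATURES = "read-only-rootfs"
# Drop development conveniences from the production image (example list)
IMAGE_INSTALL:remove = "gdbserver strace"
```

Anything writable (logs, state) then lives on a dedicated partition that is mounted read-write only where needed.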
Yocto provides cve-check, which will show you vulnerabilities and patches based on the package version only, which is sometimes not enough and may give you some false positives, but at least it's a first pass, and you can run it on the specific packages you're interested in. Like I said, it's a time-consuming process. Meta layers like meta-timesys provide a few tools, and there are also commercial solutions; we provide some support, for example, to monitor and address these vulnerabilities. And the best package is the one you don't include: no vulnerabilities if you don't have that package. A few nice to haves: run confidential or critical code in a secure environment. Nowadays almost all processors have support for a trust zone, so you can just use OP-TEE, run your secure application there, and communicate with your main, rich environment. And consider the standards and regulations that are coming; I named two here, one from Europe and one from the US, which basically say your device needs to be secure, to communicate securely with the cloud, to have a unique identity, and so on. And that's the next security concern: device identity. Nowadays we should use a unique X.509 certificate per device, except if you're really in a specific situation. The easy way to do that is to create it and sign it from an intermediate certificate that's derived from your root certificate. An alternative is to use secure elements that are pre-provisioned with that identity. This is usually a service that manufacturers can provide, where they say: we'll provide you a chip that already holds that unique certificate, we can provision it in Asia or somewhere else, you just have to solder it on, and you don't have to deal with it.
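Issuing a per-device certificate from an intermediate CA can be sketched with plain OpenSSL; the names (`device-0001`, file names) are illustrative only, and in production the intermediate key would of course live in an HSM, not on disk:

```shell
set -e
# Throwaway intermediate CA, for illustration only
openssl genpkey -algorithm EC -pkeyopt ec_paramgen_curve:P-256 \
  -out intermediate-ca.key
openssl req -new -x509 -key intermediate-ca.key \
  -subj "/CN=Example Intermediate CA" -days 3650 -out intermediate-ca.crt

# Unique key and CSR for one device, signed by the intermediate
openssl genpkey -algorithm EC -pkeyopt ec_paramgen_curve:P-256 -out device.key
openssl req -new -key device.key -subj "/CN=device-0001" -out device.csr
openssl x509 -req -in device.csr -CA intermediate-ca.crt \
  -CAkey intermediate-ca.key -CAcreateserial -days 365 -out device.crt

# Sanity check: the device certificate chains back to the intermediate
openssl verify -CAfile intermediate-ca.crt device.crt
```

The cloud side then only needs to trust the intermediate certificate to admit every device signed under it.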
Both are good; the latter is a bit more secure and less work on your part, which is always nice to have, at a cost obviously. And once you have that device certificate, you want to push it to the cloud; we're talking IoT devices, so you need to connect them somewhere. If we're talking Azure or AWS: Azure has DPS, and it's very easy to say, authorize all the devices that fall under these intermediate certificates. If you're using AWS, you can do similar things using Lambda functions. So either authorize an intermediate certificate, pre-provision, or, as I discussed, use a third party that's going to do that secure work for you, or do it yourself; you just have to automate it and do it securely. And the last part: make sure you support rolling the device certificates and updating root CAs. Root CAs have a relatively long lifespan, but I think Let's Encrypt has one that's expiring or will expire soon, for example. You don't want to be in a situation where you're relying on root CAs that have expired on your device. Moving on to fast and reliable. This is a generic presentation, so I will try to share some general recommendations for the fast part. Again, this could easily be its own topic, its own book even. But I'll give you some elements here, the first one being, of course: optimize only if required, and if you do so, measure and profile the result. If we look first at boot time, you want to start from a minimal image and distribution, and not include anything you're not going to use; the less there is, the faster it's going to be. Compiling for size, linking statically, and using a different version of the libc are all ways to optimize that, as well as postponing drivers and services and using more specialized or faster file systems.
And one way for your users to feel like there's a better user experience is to put a small logo or animation on the screen as soon as possible, because just having something moving tells them the device is alive, and they can wait a few more seconds. So you might actually not have to do any optimization at all if you just show a small animation that you run from U-Boot; there are libraries that you can easily clone and use from Yocto to do exactly that. The second point is, now that you've booted, make your application as fast as possible: compile for speed, pretty basic stuff, and leverage the different cores, CPU instructions, and priorities. Really, the main bulk of the gains you're going to get here is from hardware acceleration, whether that's 2D, 3D, or crypto: having that enabled on your hardware, on your SoM, and having the right libraries that are going to make use of that acceleration. The second part is about making it reliable. If we look at Yocto itself, Yocto uses Autobuilder 2, built on Buildbot. That setup is open source, so you can use it and deploy it if you want, but it's not necessarily a good fit for you, because its goal is really to test a broad range of different configurations and all the different metas and make sure that everything works. The good news is there are tons of alternatives for automating tests. The first thing you can do is leverage ptest, the package test feature built into packages and recipes in Yocto. Using ptest and ptest-runner, you can run tests for specific packages. You can also have a look at the Linux Test Project, which provides a very extensive set of tests for everything from the kernel to memory and networking.
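Enabling ptest is again a `local.conf` matter; a minimal sketch (the exact set of features to install is a judgment call per image):

```conf
# Hypothetical local.conf fragment to enable package tests (ptest).
DISTRO_FEATURES:append = " ptest"
# Install each package's test suite plus the runner into the image
IMAGE_FEATURES += "ptest-pkgs"
IMAGE_INSTALL:append = " ptest-runner"
```

On the target, `ptest-runner` with no arguments runs every installed package test, or `ptest-runner <package>` runs a single package's suite; note that the test payload makes the image noticeably bigger, so this belongs in a test image, not the production one.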
Probably a bit too much, actually, depending on what you do, so you'll probably want to restrict it, but it's a great way to test your device. And then there's going to be a set of device-specific tests that you can write in different languages or in different ways depending on the tool you're using. I listed a few here: LabGrid, Fuego, and Pluma, which is a tool that we're developing ourselves; in general, there are a lot of different options. And going back to GitLab and Azure DevOps and having access to everything online, I would again really recommend that. I'm not trying to sell their solutions, but knowing that you can run code on dedicated machines on premise, actually on multiple dedicated machines, targeting different hardware versions or multiple units of the same product, and put the results online, is very convenient. You can easily create a test plan that runs, I don't know, 100 tests on 10 different devices, with build machines and runners that are tagged specifically and just connected to your devices using an Ethernet cable or whatever you want. You'll be able to target these specific devices and runners, run the tests on the hardware, and have everything in a nice web dashboard where you can see everything that's going on and the results for the latest builds. And finally, if you can, automate the tagging and the deployment: tagging your release, archiving, and deploying to your OTA backend, because the fewer manual steps there are, the less work and the less risk of error; you'll have a more consistent build output. Now we're moving on to visible and observable. This is more specific to IoT connected devices, because when you have a fleet of connected devices, you always want to know what's happening.
You can't have a thousand devices in the field and not know whether they're connected or whether anything goes wrong. So the goal is to provide some recommendations here, some boxes to tick, to make the fleet easy to observe. The first one is to expose the device state. Just as a question here: how many of you know the concept of a device twin? A few raised hands, okay. So device twins, there are different names, device twins, device shadows, digital twins, which are a bit different. Actually, Rick, did you raise your hand? You did not? You did? Okay, I'm sure. So this is basically a JSON document with shared access between the cloud and your device, with essentially two sections: the cloud can write to one section of that JSON, the device to another. It's a very simple, efficient, reliable way to share stateful information about your device, or about what the cloud wants it to do. If your device is disconnected, it downloads the latest version from the cloud when it reconnects, and the same document is available both locally and in the cloud, so even while your device is disconnected, you can still read it from the cloud. I was going to say it's recent; it's not that recent, but it's really a great thing to leverage. Some things you can share there are typically the battery level, the specific boot slot that you're using, or the health of your storage; there are tons of information you can put there. The point is, once you set that in the JSON, it's available in the cloud for your team to use, to filter devices, to see what's happening, and to create dashboards from.
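To make the two-section idea concrete, here is what an Azure IoT Hub-style device twin could look like; the `desired`/`reported` split is how Azure structures twins, but every field name below is a hypothetical example:

```json
{
  "properties": {
    "desired": {
      "logLevel": "info",
      "telemetryIntervalSec": 300
    },
    "reported": {
      "batteryLevel": 87,
      "bootSlot": "B",
      "storageHealthPct": 96,
      "osVersion": "2.4.1-prod"
    }
  }
}
```

The cloud writes `desired`, the device writes `reported`, and each side reads the other's section whenever it next synchronizes.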
And if there's some information that you think might be useful but you don't know if anyone will leverage it, you can always push it to just a file on the file system and tell your team: hey, there's that information, for example the boot slot; I don't know if you want to use it, but it's available there; if you want to surface it, feel free to just read it from here and make it available. And then you have tools to monitor that. So that's the first, and a very important, point: exposing your device state. There's the telemetry part, which I don't cover here because it depends on your use case: if you need to send telemetry and private information, you already know how to do that, and you can use MQTT and all that. But another point that can sometimes go unchecked, let's say, is sending logs and usage information. The device state tells you if your device is connected, if it is working, or if something is wrong; the logs give you live, or live-ish, feedback on what's happening on the device. This is going to be a critical piece for your reliability team, your DevOps team, as well as your security team, to see if there is anything going on in the system that's unusual, that shouldn't occur, or just to identify patterns in general. You can actually extend that and generate your own Google Analytics events from the device, to then build marketing KPIs and dashboards; it's really not that difficult to do. On a basic Linux system, you'll have journald if you're using systemd, or syslog; these provide some capabilities for getting logs to the cloud, but they're a bit limited. I listed two options here, and there are also ways to do it with Azure and AWS.
syslog-ng and Filebeat are a bit more modern and can connect to your existing logs and send them efficiently to your cloud through different tools. Two nice to haves: the first one is providing operational and BI dashboards; depending on your fleet, that might actually be quite mandatory, because you want to see what's happening. I listed a few options, but you'll want to know, for example, the percentage of your fleet that's connected or disconnected, and whether something is wrong; it's mostly going to depend on what your business is and what you want to highlight on the dashboard. The second one is supporting big data scenarios, or at least having the basis for that. I would recommend at least building the beginning of the data pipeline, down to storage, even if you don't do any processing of the data you send, because then you can come back to it when you're actually interested, process it, use machine learning or whatever, and deploy that at the edge, et cetera. So just build the beginning of the solution: send the telemetry out and have it stored. And there's one piece that I didn't know exactly where to put, maybe more in the first part, but it also balanced the slides: making sure that what you're building is compliant with open source licenses. Another disclaimer, I'm not a lawyer, but I put here a few tools and a few things you can already do within Yocto, because Yocto provides ways to make this easier for you: creating a manifest with all the different licenses, exporting the licenses of the packages, and creating a package that contains those licenses. You can also archive the sources of a specific release, and archive the patches of a specific release. As a reference, the Yocto manual is great in general, and there's a great chapter on this.
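The license manifest, license packages, and source/patch archiving mentioned above are all built into Yocto and enabled from configuration; a sketch:

```conf
# Hypothetical local.conf fragment for license-compliance artifacts.
# Copy the license manifest and the license texts into the image:
COPY_LIC_MANIFEST = "1"
COPY_LIC_DIRS = "1"
# Also ship each package's licenses as installable packages:
LICENSE_CREATE_PACKAGE = "1"
# Archive upstream sources and the applied patches for each release:
INHERIT += "archiver"
ARCHIVER_MODE[src] = "patched"
ARCHIVER_MODE[diff] = "1"
```

The archiver output under `tmp/deploy/sources` is exactly what you want to attach to a tagged release so you can satisfy source-availability obligations later.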
There's also ISO 5230, and FOSSology, an open source tool that lets you analyze your code and see what's going on, whether you're missing something, and get a view of all the different licenses to make sure you do the right thing. And of course there are commercial tools as well. And we're done with this part; moving on to making it controllable. Again, this is very much specific to IoT and connected devices. And I don't know what's written on the small bandana that robot has; I don't know what that is, but it was a cute robot, so I put it here. So, making devices controllable, and the first part of this is really OTA updates: making sure that you can update your OS over the air. You have a bunch of different tools for that: OSTree, RAUC, Mender, SWUpdate, and swupd. All of these have their respective merits and are pretty well integrated; they have small differences, but they're all good solutions in general. I would say the go-to for OS updates is probably going to be OSTree or Mender, because they're the two most used, and then you can also work with SWUpdate and swupd, or even RAUC. Their goal is to update the kernel and the packages. Make sure to sign your updates, obviously. And look also at delta updates, because depending on what you deploy, the size of your file system can be pretty big; if you want to limit the bandwidth you use, some of these tools provide delta-based capabilities where you send just the difference between the current version and the next one, which saves quite a lot of bandwidth. Support application updates separately, because the frequency at which you'll deploy them is much different from the frequency of OS updates. Your OS updates will mainly fix vulnerabilities, possibly add some new kernel drivers or things like that, but most of the time they're going to be fixes for vulnerabilities or issues on your system.
The goal of application updates is really to add new features and evolve the product over time, so you need to deploy them much more frequently; separating them also makes sure you limit the bandwidth you use. The same names come up in terms of deployment, and if you're using Azure IoT Edge or AWS Greengrass to deploy containers, for example, then most of that is taken care of for you. Have a dashboard for it; again, you have a few options here to make sure you can see what's going on and target specific devices for deployment. And have a fallback mechanism, which is something you always want in case something goes wrong. I put a small set of options here: a recovery initramfs, A/B partitions, or A/B plus a factory partition with a golden or base image. You basically need to be 100, or 200, percent sure that your device is not going to be brickable, unless you're fine with having it shipped back to you and manually fixing it. You can use a combination of these, of course, and just test and retest to make sure it works properly. Then, remote device configuration. The device twins are back here, but in a different way: we talked about exposing the state of your device, and you can also use the device twin to have the cloud request things from your device. Your device twin can contain things like the desired network configuration, the logging mode, whether you want to throttle the CPU, these kinds of things. And because it's synchronized and always available on both sides, you can use it to reconfigure the device locally in a very reliable way. A nice to have that I added here is supporting arbitrary operations. What I mean by that is: let's say you have a hundred devices that you know have a specific issue; you don't want to manually go onto those hundred devices and fix them.
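As one concrete shape for the A/B fallback scheme, RAUC (one of the OTA tools mentioned earlier) describes its slot layout in a `system.conf`; the device paths and compatible string below are assumptions for a hypothetical board:

```ini
# Hypothetical RAUC system.conf for an A/B rootfs layout
[system]
compatible=my-board
bootloader=uboot

[keyring]
# Updates are rejected unless the bundle is signed by this CA
path=/etc/rauc/ca.cert.pem

[slot.rootfs.0]
device=/dev/mmcblk0p2
type=ext4
bootname=A

[slot.rootfs.1]
device=/dev/mmcblk0p3
type=ext4
bootname=B
```

RAUC always installs into the inactive slot and only switches the bootloader over on success, which is what makes the rollback path automatic.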
If you have a way to run arbitrary functions on the device, whether that's AWS IoT jobs, or a container with IoT Edge, or any way to automate a script, it will save you time when that happens and you want to run something on multiple devices. That can be to fix an issue, to test something, to progressively deploy a feature, or to see what the impact of a change or of a problem is on these devices. The last piece here is to provide manual control and debugging. The question is the same: is it acceptable to have your device shipped back? If you have an issue, you can provide remote and/or local access, but make sure to have at least one of them. Remote is usually the preferred one, especially now. You can use remote SSH, a VPN, ngrok, and so on, but make sure the mechanism is disabled by default and enabled only on demand and for a specific duration, for security reasons, because this is going to fight with your security requirements. You can also do it locally: that can be a simple graphical interface, or it can be a USB stick that carries a script, where that script is signed in a specific way so you can verify it's something you created and not someone else; when the USB stick is plugged in, you execute what's on it and run that specific script on the machine. On the right, there is a small section regarding remote troubleshooting and debugging. You can include loggers and tracers that are meant for production, like LTTng, that you can enable on demand with very low overhead. You can also generate debug symbols from your Yocto image and store them, and the day you have an issue on a specific device and want to debug it, you can load the debug symbols that you generated for that specific version, connect to the device, and debug it even though it's a production build that doesn't ship with all this information.
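Generating that companion set of debug symbols is a two-line configuration in Yocto; a sketch:

```conf
# Hypothetical local.conf fragment: produce a companion debug file system
# with the symbols for every image build, to archive alongside each release.
IMAGE_GEN_DEBUGFS = "1"
IMAGE_FSTYPES_DEBUGFS = "tar.gz"
```

The resulting `*-dbg` tarball stays on your build server; when a field issue comes in, you extract it next to the matching release and point gdb's symbol search path at it while attached to the production device.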
That's a nice way to be able to debug a device even though it's in the field and obviously doesn't ship with these debug symbols. And now we're at our last part, which is making it reusable and future proof, with nice yellow boots and blue boots too. Here, I'm probably not the only one saying this, but limit hacky customization when you're developing your Yocto distro: limit the number of bbappends, and limit extensive patching of the components of a recipe. If you have a lot of that, it probably means either that something is really wrong in the first place, or that this code should be contributed back into that package or feature because it's genuinely broken. Try to avoid it, because it will make your life harder: as the packages and recipes evolve, those patches are going to break, and that's going to be a pain to maintain. It's much preferable — the second point — to have different layers, layers that will scale when your product evolves or when you add a different device. Maybe in the future you'll have a product that doesn't have any display, or one that's split into two different pieces. Separating the features of your device from what is expected of a specific device type will make it easier to reuse the right metas, instead of packaging everything into one layer with tons of flags or configuration options to enable this or that, which is, again, very hard to manage. Use configurable formats like WKS for partitioned disks, because that's a nice way to do it; use device tree includes and overlays; and, in general, anything that makes things slightly more configurable. And, if it makes sense, use an LTS version of Yocto. I'm saying "if it makes sense" here because the next point is preparing by upgrading Yocto if you're moving to a new device.
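To illustrate the configurable-partitioning point, a minimal WKS (wic kickstart) file might look like this. Every size, label, and the exact layout are invented for the example; the point is that the disk layout lives in one declarative file instead of hand-rolled scripts:

```
# Hypothetical example.wks: a boot partition plus an A/B rootfs layout.
part /boot --source bootimg-partition --fstype=vfat --label boot  --active --size 64
part /     --source rootfs            --fstype=ext4 --label rootA --size 512
part       --source rootfs            --fstype=ext4 --label rootB --size 512
bootloader --ptable msdos
```

Switching a product variant to a different flash size or partition scheme then means pointing the machine at a different `.wks` file rather than patching image recipes.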
What is very hard is to maintain a set of meta layers that has to work across different versions of Yocto. So if your goal is to reuse the same recipes, the same metas and everything, one way to do that is to prepare the current product or products you have by upgrading them to the target Yocto version; from there, reuse becomes much easier. It doesn't mean it's going to be easy, but if your goal is to reuse them, that's probably really the way to go, because otherwise you're going to run into a lot of trouble dealing with these different versions. So prepare by upgrading to that target version, and once you're there, you can have that same version of Yocto reused across different products and really share these metas and recipes. The next one is contribute — we're at an Open Source Summit, so I think that's appropriate. If there is anything that can be reused by someone else, it's actually really easy to contribute layers, recipes and so on at layers.openembedded.org. Same for Yocto features: if you see an issue or want to contribute something, there are a bunch of scripts and tools to help you do that. And the last slide before our summary: at the application level, abstract your device and OS specificities. I'm not going to go into too much detail here, but make sure your application is modular — again, because maybe in the future you'll have no display — so making it modular at the source, component, plugin, or service level will make it much easier to change just one piece: unplug a block and plug in something different.
If it's possible and makes sense, use self-contained application packages or services, because then you don't need to deal with dependencies, conflicts, and everything around that, or with having to add those same things in Yocto. There are quite a few options for this now, and it also makes reuse easier. And finally, use standard tools and protocols. It depends on your sector and on what you do, but when we're dealing with embedded devices we all tend to use the same tools, or at least to see the same ones come up, and these are the ones that are the most maintained, the most reliable, et cetera. Attending this kind of conference is probably the best way to learn about what we call the industry standards. And, when possible, use interoperable standards like Matter, for example, which is a recent standard for IoT devices that makes them easy to interconnect and play nicely with each other. Okay, so that's the end of this checklist. We looked at these five different areas and tried to cover all of them. I'd recommend using it as a checklist, and of course the sooner the better. There are probably some things you would want to add or change, so I'd be more than happy to hear your feedback and the things you think are missing here. And there's one more factor, which is of course you. Like I said, some of these items fall under different jobs, different positions, but just knowing about them, knowing that the end goal is to cover all of these, will make it easier for you to make sure that the overall solution makes sense: talk with people, review, ask whether something is there, and if something is missing and it makes sense, adding it may make your life a lot easier in the future. And like everything, it's a learning opportunity, which is something I love. So that's probably a great thing to have. Thank you very much.
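The modularity point above can be sketched with a tiny plugin-style abstraction (all names here are invented for the example): the application talks to a small interface rather than to the display stack directly, so a future headless variant just swaps in a no-op implementation instead of forcing changes throughout the code.

```python
from abc import ABC, abstractmethod

# Illustrative abstraction: the app depends on this interface, not on any
# concrete display stack, so hardware variants swap in cleanly.
class Display(ABC):
    @abstractmethod
    def show(self, message: str) -> str: ...

class LcdDisplay(Display):
    def show(self, message: str) -> str:
        return f"[LCD] {message}"     # would drive the real panel

class NullDisplay(Display):
    def show(self, message: str) -> str:
        return ""                      # headless product: silently drop

def make_display(has_display: bool) -> Display:
    """One switch point for the product variant, instead of flags everywhere."""
    return LcdDisplay() if has_display else NullDisplay()

app_output = make_display(has_display=True).show("boot ok")
headless_output = make_display(has_display=False).show("boot ok")
print(app_output)
```

The same pattern applies at the service level: ship the display service only in the image for the variant that has a screen, and the rest of the application never notices.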
I think we're done here. If you want, we have a booth on the third floor, I think — we have a Witekio booth — so feel free to come by and talk, or just ask questions now. Thank you. [Audience question] Yeah. So, let me go back. The question was about the slide on software updates. There was a mention that apt is not atomic. What can happen is, if you have a power loss in the middle of an apt update, you end up with a file system that's half updated, and apt is likely going to be in a state where it doesn't really know what to do, et cetera. With the tools that were listed here — OSTree, RAUC, Mender, et cetera — if you lose power at any time, either you fall back (with OSTree, for example, you fall back to the previous deployment) or the partial update is discarded entirely. RAUC, for example, when it's doing an update, makes sure everything is written, and only when it's done does it swap over to the new version. So you can never end up in an in-between state with a partially applied update, which is not something you want to have. [Audience question] Do you remember which slide that was? So the question was about QEMU and whether we use it. Yeah, sure. If you don't have hardware yet and you want to start working on the system in advance, without a development board in front of you, you can always use QEMU and test whether things are working. It's obviously not the same as having the hardware, but it works. Have we used it? Yeah, we use it, and not only for specific difficulties — you can see it as a virtual machine, a machine emulator. It's actually relatively easy to use: you just start it and you mount a specific file system.
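The atomicity point can be illustrated at file granularity with a sketch; image-level updaters like RAUC or OSTree do the equivalent with whole partitions or deployments. The idea: stage the complete new content somewhere else first, then perform a single atomic swap, so a power loss leaves either the old version or the new one, never a mix.

```python
import os
import tempfile

def atomic_write(path: str, data: bytes) -> None:
    """Stage the full new content, fsync it, then swap it in with one rename."""
    directory = os.path.dirname(path) or "."
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)              # a crash here leaves 'path' untouched
            f.flush()
            os.fsync(f.fileno())       # make sure the staged data hit the disk
        os.replace(tmp, path)          # atomic on POSIX: old or new, never half
    except BaseException:
        os.unlink(tmp)                 # discard the partial staging on failure
        raise

with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "config.txt")
    atomic_write(target, b"v1\n")
    atomic_write(target, b"v2\n")      # interrupting this never corrupts v1
    with open(target, "rb") as f:
        final = f.read()
print(final)
```

In contrast, a package manager that unpacks files in place has a window where the tree is half old and half new, which is exactly the failure mode described above.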
You configure it for a specific architecture and everything, and then it just runs. So like I said, you're still simulating things, but it's a pretty convenient tool, either for BSP development or for application development. And — I'll take your question just after — Yocto does provide configuration for this. There is actually a command, I don't remember its name but people here probably know it, that you can call once your machine is configured, to kick off a QEMU machine if it's supported. It maps basically everything in your machine description to a QEMU configuration so it can start. Okay, so your question was? Okay, it's not a question — don't worry, yes, don't worry about the flash. Yes, that will mitigate it, so you're right. There are two things you have to do: analyze how many writes you're going to perform over the life of your product — if it's five years, for example — and compare that to the expected endurance of your flash. You really have to do that, because if you're writing gigabytes every day, that adds up to quite a lot, and chances are the flash will die. You do have file systems that do wear leveling to make sure the flash is used uniformly, but wear leveling only makes the wear uniform — at some point you can still reach the end of life of your flash. So yeah, you're right, that's a good point to add. Okay? Well, thank you very much, and enjoy the coffee break.
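The back-of-the-envelope calculation that question calls for can be sketched like this. All numbers are invented for illustration (check your part's datasheet for real P/E cycle ratings), and the model assumes wear leveling spreads writes evenly:

```python
# Rough flash-endurance estimate: how long until the rated program/erase
# cycles are exhausted, assuming wear leveling spreads writes evenly.
def flash_lifetime_years(capacity_gb: float, pe_cycles: int,
                         writes_gb_per_day: float,
                         write_amplification: float = 2.0) -> float:
    total_writable_gb = capacity_gb * pe_cycles            # TBW-style budget
    effective_daily_gb = writes_gb_per_day * write_amplification
    return total_writable_gb / effective_daily_gb / 365

# Illustrative part: 8 GB eMMC rated for 3000 P/E cycles, 1 GB written per day.
years = flash_lifetime_years(capacity_gb=8, pe_cycles=3000, writes_gb_per_day=1.0)
print(round(years, 1))   # comfortably past a 5-year product life
```

If the same math comes out near or below your product's planned lifetime, that's the signal to cut write volume (batch logs, move telemetry to RAM buffers) or pick a higher-endurance part.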