So, I'm just going to get started. First of all, thank you for joining my session. Today I'm going to talk about a new open source solution called FullMetalUpdate. I think it's pretty cool, but of course I'm slightly biased, since I'm one of the main maintainers of this solution. So, FullMetalUpdate, spoiler: you can see it as a mix between Fedora Atomic and Flatpak, but with a twist: it's optimized for embedded devices. That's the idea of FullMetalUpdate.

So, how did we start this whole project? What was the inception of the project? Basically, it started with a beer. I was talking with a friend about how Docker containers completely changed the way we deploy applications to data centers. Then we started to think: could we reuse the same kind of solution to deploy applications to embedded devices?

Moving on with wine, we started to talk about an architecture: what could we do if we started to put our applications within containers? In that example, we have one container running a Qt application, and this application can access the GPU and the touchscreen. Then we have one container running an MQTT stack, and only that container can access the network. These two containers communicate with each other over a virtual network. So that was the idea: trying to split the different features of your application into multiple boxes, or containers.

One of the advantages of doing that is that for each container, you can use cgroups. For instance, for the container running the Qt application, you could choose to limit the memory usage to, say, 100 megabytes. Or for the container running the MQTT stack, you could limit the bandwidth that can actually be used by that container. The big plus of doing that is from a security standpoint.
If someone manages to break into one of your containers and to hack one of your applications, for instance the MQTT stack, and they want to run, say, a denial of service attack, they are going to be limited by the cgroups and they will never be able to use more than 10 megabits per second. So that was the idea behind using cgroups with containers.

Of course, using containers brings even more advantages. For instance, you are solving all the dependency issues that you could have when you are designing your system. You could have, in theory, two applications running Qt, one on Qt 5 and one on Qt 4, and since they are both running in their own container, you cannot have any dependency issues.

Moving on, we were talking about security: how by using seccomp we could limit access to some syscalls. We could use rootless containers as well, to limit what can be done inside the container, so that even if someone hacks your container, they are limited in what they can do while they are still stuck in the box; mandatory access control policies; and so on.

In the early morning, we were still discussing whether using Docker for such a system would be a good idea. Our conclusion was: probably not. Why not? One of the cool features of Docker is that it does delta updates: if you are updating between two versions of a container, it's just going to transfer the delta, the differences between these two versions. But to do that, Docker uses a specific file system. In the past it was AUFS, nowadays it's overlay2, and these are not resistant to power failure. So it's definitely a no-go if you are developing a solution to update devices that might be in a box somewhere on the other side of the world. If somehow there is a power failure and it completely bricks the device, that is absolutely not acceptable.
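To make the cgroups and seccomp ideas above concrete, here is a minimal sketch of what the relevant parts of an OCI runtime `config.json` could look like. The memory limit (100 MB, in bytes) and the syscall allowlist are purely illustrative values, not FullMetalUpdate's actual defaults, and per-container bandwidth limiting would additionally involve the network cgroup and traffic shaping, which the OCI config does not cover directly:

```json
{
  "ociVersion": "1.0.0",
  "linux": {
    "resources": {
      "memory": {
        "limit": 104857600
      }
    },
    "seccomp": {
      "defaultAction": "SCMP_ACT_ERRNO",
      "architectures": ["SCMP_ARCH_ARM", "SCMP_ARCH_X86_64"],
      "syscalls": [
        {
          "names": ["read", "write", "openat", "close", "exit_group"],
          "action": "SCMP_ACT_ALLOW"
        }
      ]
    }
  }
}
```

With `defaultAction` set to `SCMP_ACT_ERRNO`, any syscall not in the allowlist fails with an error, which is what keeps a compromised container "stuck in the box".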
Another thing: when you are using Docker, you are pulling in multiple layers of software that might not be needed if you just want to use containers on embedded devices. So we thought: Docker, maybe not, for such a solution.

The next morning, we decided to check: OK, that sounds like a great idea to use containers to update devices, but does it already exist? It does, actually. This discussion, I had it about two years ago, and it took a while before we managed to finish something, but it does exist. Two years ago it was called resin.io, nowadays it's called balena, and they have a solution based on Docker containers to do updates. They replaced the regular way of doing the delta updates with something called librsync, a bit like SWUpdate, and they use a dual-partition system for the Linux OS. So it's a pretty neat solution. We use it at Witekio on a couple of projects, but it has one thing we don't really like, or at least which is not acceptable for most of our customers: when you want to deploy new software, you need to push it to their private Git repository, it gets built into a container there, and then it's shipped to your targets. For most of our customers, this is not really acceptable.

So we decided: maybe we could do something else based on containers and replace the delta update system of Docker with, again, something else. We started to think: OK, what do we need if we want to develop our own solution? We need a runtime to run the containers; a build system to create the containers, because if we are not using Docker, we cannot use Dockerfiles anymore; a system to do delta updates; ideally, a backend to manage the different updates; and on the embedded side, a client to download the updates and a tool to manage the containers' lifecycle. That's basically our shopping list. Let's look at step one.
If you want a runtime for a container, you have about 20 options. So we asked ourselves: what could we use that would still exist in 10 years? There is the Open Container Initiative, and they developed a runtime specification for containers. We thought that at least with that, we could be fairly sure that in 10 years there would still be one implementation of that standard. Right now there are three, as far as I know. One of them is runc; basically, you can see runc as the low level of Docker. If you are using Docker containers, you are actually using runc. There is another implementation from Intel, which is Clear Containers. And finally, there is one implementation run by Oracle, which is called Railcar. These three implementations support exactly the same standard. We could not go for Clear Containers because it's from Intel and it's only compatible with x86; not a big surprise. Railcar is not maintained anymore; for about a year they have not been doing anything new. So runc was kind of an obvious choice. And on top of that, it's just the low level of Docker, so we know that it works.

Step two: we needed a way to create the containers. The people from the meta-virtualization layer are doing a great job, but since we started our project even before they really started to work on that, we have our own implementation. It's based on Yocto, because Yocto is just great at building your application and pulling in all the dependencies for your application. One requirement as well was something that could work on ARM and x86; with Yocto, since we build everything from scratch, that's no problem at all. And we also decided to put the build system itself, the Yocto build system, within a Docker container.
The good thing as well: since we are using Yocto to build the container, it's specifically optimized for your processor, which is not the case if you use a Dockerfile with a generic architecture and build your container on top of, I don't know, an Alpine or an Ubuntu distribution. And it's very lightweight, because with Yocto it's really just pulling in the minimum set of dependencies needed to build your application.

Then we needed a way to handle delta updates, to replace that specific part of Docker. We decided to use OSTree, because OSTree is used by Red Hat, by Automotive Grade Linux, GENIVI, HERE, Flatpak. There are really a couple of big projects using that solution when it comes to updates. So OSTree, what is it? It's basically a git for binaries. It's fully atomic and it manages delta updates, and we are using it to update both the different containers running on the system and the operating system itself.

Basically, what we did is reuse the meta-updater, which already implements everything you need to manage updates for the Linux host system. But if you use this meta-updater, every time you do an update you need to restart the target, because it's updating the host system; it can update the kernel, for instance, so there is not really a choice there. Since we wanted to be able to update our containers independently of the Linux host system, we decided to have a second OSTree repository that we use just to manage the containers themselves. Another thing: everything happens in the background when we use OSTree, which means that your customers can still use their product even while an update is ongoing.

So, if we look at the architecture of, sorry for that, if we look at the architecture, basically, what do we have?
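To give an intuition for the "git for binaries" idea, here is a small self-contained Python sketch (a toy, not OSTree's actual code or on-disk format) of a content-addressed object store: committing a second version of a tree only stores the objects whose content changed, which is what keeps the delta between two versions small:

```python
import hashlib

class TinyStore:
    """Toy content-addressed store: objects are keyed by the SHA-256 of their content."""
    def __init__(self):
        self.objects = {}   # digest -> content

    def commit(self, tree):
        """Store each file's content; return {path: digest} and how many new objects were written."""
        new = 0
        manifest = {}
        for path, content in tree.items():
            digest = hashlib.sha256(content).hexdigest()
            if digest not in self.objects:   # identical content is stored only once
                self.objects[digest] = content
                new += 1
            manifest[path] = digest
        return manifest, new

store = TinyStore()
v1 = {"app/main": b"binary v1", "app/lib.so": b"big shared library"}
_, wrote_v1 = store.commit(v1)                # first commit stores everything
v2 = dict(v1, **{"app/main": b"binary v2"})   # only one file changed
_, wrote_v2 = store.commit(v2)                # second commit stores only the changed object
print(wrote_v1, wrote_v2)                     # -> 2 1
```

Pulling an update then only needs the objects the client does not already have, and switching versions is atomic because it is just a matter of pointing at a different manifest.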
On the left, we have Yocto, which is building the operating system and the containers, and it's pushing the result of the build to an OSTree server; in this case, it's running locally on your computer. On the other side, on your embedded device, you basically have an OSTree client that can pull the modifications; it's just going to pull the delta between two versions of the software, and then you can independently update the operating system or the containers.

Step four: we wanted to have a back end and a front end to manage the different updates. The obvious choice for us was just to use hawkBit. But what we did, in our Yocto build system, is have a bbclass so that every time you build a container or an operating system image, the result of the build is automatically pushed to hawkBit. How does this work? Basically, when you build something, first it's pushed to OSTree; we then get the commit ID, and we put this commit ID and the name of the image that you just built into the metadata of the package that we push to hawkBit. When we receive this information on the client side, we can easily know what we need to update, and update it. And it's cloud agnostic: for the servers, hawkBit and the OSTree server, we just provide you the setup in a Docker Compose file, and you can run it on AWS, Google Cloud or Azure; it's really your choice. It works out of the box.

Steps five and six: we needed the client, something to run on the embedded device. We decided to use the hawkBit client from RAUC, which is kind of a competitor of SWUpdate. Why did we choose specifically this client? Because it's developed in Python and we just wanted to do a bit of Python, basically. So we just went for that. What we did is integrate OSTree and systemd into that client, to manage the lifecycles of the different containers.
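The metadata round-trip described above can be sketched in a few lines of Python. The JSON shape mirrors hawkBit's DDI `deploymentBase` response, but the metadata keys (`rev`, `image_name`) and the sample values are illustrative assumptions, not FullMetalUpdate's exact ones:

```python
def extract_update_info(deployment):
    """Walk a hawkBit DDI deploymentBase-style response and pull out, per software
    module, the OSTree commit ID and image name that the build pushed as metadata."""
    updates = []
    for module in deployment["deployment"]["chunks"]:
        # hawkBit metadata is a list of {"key": ..., "value": ...} pairs
        meta = {m["key"]: m["value"] for m in module.get("metadata", [])}
        updates.append({"name": module["name"],
                        "commit": meta.get("rev"),
                        "image": meta.get("image_name")})
    return updates

# A trimmed example of what the server might send (hypothetical values):
response = {"deployment": {"chunks": [
    {"name": "container-qt-demo",
     "metadata": [{"key": "rev", "value": "3a7f0c"},
                  {"key": "image_name", "value": "qt-demo"}]}]}}
info = extract_update_info(response)
print(info[0]["commit"])   # the client now knows which OSTree commit to pull
```

Once the client has the commit ID, the actual download is a plain `ostree pull` of that commit from the OSTree server.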
So right now we have a system that can pull the containers and then use systemd to start and stop them. If we look at the full picture, the full architecture of the system: on the left side we have Yocto, which is building the different containers and the operating system. This is committed to a local OSTree repository, then pushed to the remote OSTree server. We get back the commit ID, and then we push the information about the commit ID and the container or OS that was built to hawkBit. So you don't need to do anything in hawkBit to add a new image; it's all done automatically. When you build something, if you build a container, you are going to see the new container popping up in the hawkBit UI, and then you can just create a new distribution and install it on the target.

On the other side, we have the FMU client, which is polling the hawkBit server. If there is a new update available, it uses OSTree to pull the delta between the two versions of your application or of the Linux operating system. If it was an update for a container, it then uses systemd to stop the container and restart it. If it's the operating system, it simply restarts the whole Linux operating system.

You might have noticed that I did not talk about security, or really not much, when I was explaining FullMetalUpdate. That's because there is not much integration; we did not integrate that many things about security by default in FullMetalUpdate. Why? Because I really think that security should be custom for each project. You are not going to invest the same amount of money to secure a water tank as to secure a car; that does not make any sense. For instance, if you want to secure an automotive system, you can use Uptane, which is great for that: you have the whole public key infrastructure, it's just perfect, but it's super heavy to manage.
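As an illustration of the systemd-managed lifecycle, a unit along these lines is enough to let the client start, stop and restart a runc container. The unit name, bundle path and options are assumptions for the sketch, not FullMetalUpdate's actual generated unit:

```ini
# /etc/systemd/system/container-qt-demo.service  (hypothetical path and name)
[Unit]
Description=runc container: qt-demo
After=network.target

[Service]
# The OCI bundle (config.json + rootfs) is assumed to be checked out by OSTree.
ExecStart=/usr/bin/runc run --bundle /apps/qt-demo qt-demo
ExecStop=/usr/bin/runc kill qt-demo SIGTERM
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Applying a container update then boils down to `systemctl stop container-qt-demo`, checking out the new OSTree commit into the bundle, and `systemctl start container-qt-demo`, which is why the switch is nearly invisible to the user.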
So if you then use that for your water tank, you are going to spend so much money just managing your certificates that it might not make sense anymore. So I really think that every time you talk about security with a customer, you should first talk about threat modeling, and then about: OK, what do you really need, and what makes sense for your project?

So, what's available? Actually, everything is available on GitHub. The first part you need to run if you want to try FullMetalUpdate is the cloud demo; basically it's a Docker Compose file that starts a hawkBit server instance and an OSTree instance in containers. Then you have the FullMetalUpdate Yocto demo: it's a Docker container which embeds the whole build system. You have a couple of commands you can run; you can select the machine and sync with the different repos that we offer. We are compatible right now with the i.MX6, with the Raspberry Pi 3, with the STM32MP1. You can build an OS image, build all the containers in one go, or build a specific container.

The different Yocto metas that we provide come in two parts. First, there is meta-fullmetalupdate, which contains all the parts that are not hardware specific: the container framework, so basically the recipe for runc; the Python client for FullMetalUpdate; and the different bbclasses that we use to push the containers and the OS images to OSTree and to hawkBit. Then we have meta-fullmetalupdate-extra, which includes everything that is specific to a machine. We have all the GPU stuff as dynamic layers per machine, just to avoid spreading all the configuration for a specific machine a bit everywhere in different recipes. The container recipes, we also had to put in meta-fullmetalupdate-extra, because in some cases they can be specific to the hardware.
For instance, if you are running a container on an i.MX6 and you want to access the GPU, that's specific: you need to give your container access to specific hardware. And of course, we have the image creation scripts in meta-fullmetalupdate-extra.

So, I thought I would be trendy and start with a bit of machine learning. What I'm going to show you, I wanted to do on stage at the beginning, but I realized it would not be convenient, so I'm going to show you a video where I'm running a container including a TensorFlow Lite model and doing an inference. If you are not familiar with it, that's the hawkBit UI. I already created a distribution with TensorFlow and deployed it; I'm going to skip ahead a bit. Here I'm deploying a completely new container on the target from scratch. That's the cool thing as well: even if it was not on your target at all at the beginning, you can just decide to add new containers to the target. In this case, I just decided to add a new container with a deep learning model and to run it with TensorFlow Lite. Now I'm connecting over SSH to the target and I'm just going to use systemd to run that container. So that's done, and you get your inference: I just did an inference on this image, and you can see that on an i.MX6 the inference was done in 200 milliseconds.

There is a very cool thing being developed right now by the people from Google, if you are interested in deep learning: what they call quantized networks. Instead of using floating-point values, they use integer values to run the deep learning algorithm. On processors like the i.MX6, where you cannot use any hardware acceleration, you can still use these networks and get very, very short inference times. Right now we are at 200 milliseconds, but if you take the lightest model, you can run the same inference in 35 milliseconds, on a processor which is 12 years old, I think, more or less.
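The quantization idea mentioned above can be sketched in a few lines of plain Python. This is a generic 8-bit affine quantization, not TensorFlow Lite's exact scheme: floats are mapped to int8 with a scale and a zero point, the heavy math then runs as integer arithmetic, and dequantizing recovers the values to within about one quantization step:

```python
def make_quantizer(lo, hi):
    """Build 8-bit affine quantize/dequantize functions for floats in [lo, hi].
    A sketch of the general technique, not TFLite's exact scheme."""
    scale = (hi - lo) / 255.0                 # one int8 step, in float units
    zero_point = -128 - round(lo / scale)     # integer that stands for the float `lo`

    def quantize(x):
        q = round(x / scale) + zero_point
        return max(-128, min(127, q))         # clamp into the int8 range

    def dequantize(q):
        return (q - zero_point) * scale

    return quantize, dequantize

quantize, dequantize = make_quantizer(-1.0, 1.0)
xs = [-1.0, -0.25, 0.0, 0.5, 1.0]
roundtrip = [dequantize(quantize(x)) for x in xs]
max_err = max(abs(a - b) for a, b in zip(xs, roundtrip))
print(max_err <= (1.0 - (-1.0)) / 255.0)      # error stays within one step -> True
```

On a CPU without floating-point acceleration, replacing float multiply-accumulates with int8 ones is where the large inference speedups come from.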
So it's super interesting, and if you are interested in that topic, you can also try the stuff from Facebook: they are doing one thing called QNNPACK, and it's said to be two to four times faster. I did not get the chance to try it, but it looks super interesting.

So that was how we can add a completely new container to an existing image. Now let's look at updating the operating system. In this example, what I'm doing, basically, is taking a Linux image and just adding stress to that image. Right now you can see that stress is not available on the target, and I'm just going to use the meta-updater and all the work that was done by the people from HERE to update my target. The thing that is different from their system is that we provide a connector that you can use to connect it to hawkBit. Right now you can see the client running at the top: it's polling hawkBit, and it just got a new thing to pull, so it's pulling it, installing it, and restarting the target. We already received a notification that the update was successful. Of course, we did not test anything, so just saying it was successful does not make that much sense, but anyway. Then we reconnect to the target, and now you can see that stress was installed on the target. So that was another demo.

And finally, the last demo is about Qt containers. We are running a Qt application within a container, a runc container, and what we are going to do is update that application with FullMetalUpdate. What's interesting is that even though there is no notification for the user, you can see that the switch between the two versions is really super fast. Right now we are just creating the different distributions in hawkBit to be able to update our target; I'm just going to skip that step.
OK, so what we are doing right now is assigning a new distribution to our target; we are just moving to step two, and I'm just going to pause. If you look at the top right of the UI, next to the map symbol, you see there is nothing. What we are going to do now is add the possibility to use different languages in this specific UI. You can see it's really seamless: it takes one, two seconds, and then you have the new version up and running. Basically, what happened is that it was downloaded in the background, and when it was ready, with systemd we stopped the old container and started the new container, and you have your new Qt application up and running on your target and you did not see anything. The good thing as well: we just updated the delta between these two versions. So when you have a Qt application, which is quite often 50 megabytes, if you just modify a couple of lines of source code, you are just going to transfer those couple of lines of source code.

Then, in the second part of this demo, first we play a bit; of course I switched to French, because, I guess you got it, I'm French. And in the second part we are adding support for a credit card. It's a demo application that was developed by Qt for an electric vehicle charging station. In that version, you can set up how much to charge, then you can put in your credit card number and charge your car, basically. So this is what we did.

So, if we look at the roadmap, what are we planning to do in the very near future? First, the port to the STM32MP1 is already done. Port the full solution to the i.MX8. As I already told you, it's already working on the Raspberry Pi 3 and on the i.MX6 SABRE SD. Soon I'm going to do the port to the Raspberry Pi 4. What I also want to do, I don't know if you know the tool skopeo, is the integration with that tool; it was already partially done.
skopeo is kind of cool because it allows you to pull containers from an OCI registry and to store them in an OSTree repository. So you get the advantages of using an OCI registry, so regular Docker containers, but you are still using OSTree to store them: the advantages of the OCI registry and the advantages of OSTree. I think it would be pretty cool. I also want to do the integration with a container network interface, because right now, if you want to enable communication over a virtual network between two containers, it's not easy to implement, and with CNI it should be quite straightforward. And then we need to port FullMetalUpdate to more Yocto versions. Right now we are supporting Rocko and Thud, but we would like to support Warrior and Sumo as well. Everything is available on GitHub; we have a website with the first version of the documentation and a couple of rules if you want to contribute.

So, do you have any questions?

Actually, it's just about the bootloader and how you start OSTree. So it's basically rather porting the meta-updater than porting FullMetalUpdate; FullMetalUpdate is absolutely not dependent on the hardware.

It's up to you. I mean, what we provide is a Docker Compose file which is going to run an OSTree server and a hawkBit server. What's happening in the background on the Yocto side, if you have more than one developer working on the project, is that every time we are about to push something to the remote OSTree repository, we pull what's available, to avoid having inconsistencies. But the management of the OSTree server, if you want to remove versions at some point because they don't make sense anymore, that you have to manage by yourself.

Actually, the bigger part is Python, which, maybe, if I think about it now, was a mistake, because we need to include a couple of libraries just to run Python, and that's what takes most of the space on the image. Because the host Linux, the Linux to run the containers, is super small besides that.
I mean, we just have runc, a Linux kernel, and a very simple root file system, and that's about it. But all the stuff we need to bring in just to run the FMU client is quite a bit, because it's in Python. Right now, I would say the image is about 200 megabytes for the host Linux system, but we did not put any effort into that; it could be reduced quite a bit just by working on it a little.

So the root file system, since we are using OSTree, is read-only for the most part. There are a couple of parts that are read-write, but most of it is read-only. And the containers are running on a second partition. So the containers are on an application partition, and then we have a rootfs with the host Linux, basically, and both of them have their own OSTree repository.

No, none, because that's the OSTree architecture, basically. It's done at boot. This is how OSTree boots: it uses an initramfs. It's the meta-updater, basically. First, the bootloader starts; then the initramfs starts. The initramfs makes the decision about which part of the file system should be used, which commit in the file system will be used, and then it uses hard links to recreate a file system on the fly. If you want to learn more about that, in 2016 at the Embedded Linux Conference there was a very, very good talk about how OSTree works, given by the people who developed the meta-updater.

Yeah, I mean, we just stuck with what was done in the meta-updater. The only twist in our approach is to have a different partition for the containers, with its own OSTree repository. And we are using a specific mode to check out the different containers, which is called bare-user, which was made specifically for that by the people who developed OSTree. Because if you use the meta-updater, so the regular OSTree setup, you have to restart the system if you want the update to be applied.
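The hard-link checkout mentioned above can be demonstrated with a tiny self-contained Python sketch (a toy, not OSTree's real repository layout): two "deployments" checked out from the same stored object share a single inode, so unchanged files cost no extra disk space and a checkout is nearly instantaneous:

```python
import os, tempfile

root = tempfile.mkdtemp()
objects = os.path.join(root, "objects")
os.makedirs(objects)

# One content-addressed object in the "repository" (the file name is illustrative).
obj = os.path.join(objects, "3a7f0c.file")
with open(obj, "wb") as f:
    f.write(b"shared library contents")

# Check the same object out into two deployments via hard links, not copies.
for deploy in ("deploy-v1", "deploy-v2"):
    d = os.path.join(root, deploy)
    os.makedirs(d)
    os.link(obj, os.path.join(d, "lib.so"))

a = os.stat(os.path.join(root, "deploy-v1", "lib.so"))
b = os.stat(os.path.join(root, "deploy-v2", "lib.so"))
print(a.st_ino == b.st_ino)   # both checkouts point at the same inode
```

This is also why the checked-out file system should stay read-only: writing through one hard link would silently change every deployment that shares the object.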
So that's why we needed a second partition with a different OSTree repository: we wanted to be able to restart an application without restarting the full system.

I'm sorry, I did not get the question. Yeah, we use OSTree for that. So far, we never had to do that, but, from my perspective, talking about customers, I would probably use a specific OSTree repository per customer. Yeah, it could be your choice, but it's really about the architecture of your system, somehow.

That's the /var directory, yeah. Exactly, the /var directory is just kept; it's read-write. It's there for that, basically.

Yeah, sorry. That's an excellent question. I never checked, but that's a good idea. It looks quite fast, but I'm a human being, so it would need to be measured. Yeah, it's just the meta-updater; it's the time spent in the initramfs. Then, of course, with systemd, there is the time for the FullMetalUpdate service to be started, but that part is not critical, I would say.

OK, that's all I have, guys. Thanks a lot.