or maybe you have to implement it yourself. In this case, the pros are that, if you have the source code, you can potentially integrate it into any embedded device. The cons are that you have to do it: it requires some integration work, and in some cases the updates are limited to the application level. The examples here are the big guys: Azure, AWS, Google, and Eclipse hawkBit. And this group has something in common with the first one: as I was saying, the cloud is strongly deciding what's running on the device, and we are going to try to avoid that.

Then the third group is the agentless one, in which the server is in charge of communicating with the device using the SSH protocol. The update manager here is the set of scripts that are sent from the cloud, and the cloud client is represented by the SSH server. Pros: the only dependency can be SSH, although maybe you need something else to do the installation itself. The con is that you don't get any feedback to the cloud, because here the cloud is the master of the communication. An example is Ansible.

So now, how can we make this more decoupled? First, I'm going to take a moment to show how we approach the problem of reproducible states. For this, we use containers and a state JSON. The state JSON points to one or multiple containers, and each container has a root filesystem attached, so you are able to know what's running on the device at any given time, and, if you update it, what is going to be on the device after the update. Here you have an example of our state JSON. It has the BSP with just a bunch of binaries, the kernel image, the modules, and the update manager, which is Pantavisor. And then we have one application, a single container, with its root filesystem in a compressed SquashFS and, again, a bunch of configuration files.
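The idea of a state JSON pointing at containers and their artifacts can be sketched like this. This is only an illustration: the field names and fake digests below are invented for the example, not Pantavisor's real schema.

```python
import json

# Hypothetical state JSON: each component maps artifact names to digests.
# The names and digests here are made up for illustration.
state = {
    "bsp": {
        "kernel.img": "sha256:aa11",
        "modules.squashfs": "sha256:bb22",
    },
    "app": {
        "root.squashfs": "sha256:cc33",
        "lxc.container.conf": "sha256:dd44",
    },
}

def referenced_artifacts(state):
    """Collect every artifact a state revision points to."""
    artifacts = set()
    for component in state.values():
        artifacts.update(component.keys())
    return artifacts

print(json.dumps(state, indent=2))
print(sorted(referenced_artifacts(state)))
```

Because the whole revision is described by one document like this, knowing what runs on the device before and after an update reduces to reading the state JSON.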
So this is the diagram now, where I have divided the application level into a set of containers, and I have changed the name of the update manager to container orchestrator, because I think it's more appropriate for this example. So what's the container orchestrator? Well, it is the update manager, so it's in charge of installing the updates. But in this case, it's going to install and uninstall containers and start and stop them. We have an implementation of this container orchestrator, which is Pantavisor. I have put the link in there, so you can check it out later.

And now we are going to take advantage of the use of containers. Instead of having the cloud client and the container orchestrator in the same base system with all their dependencies, we can move the cloud client to its own container. This cloud client has to communicate somehow with the container orchestrator; it has to control it. And for that, we are going to need an open update protocol.

But first, I'm going to talk about a couple of examples that we have implemented. One is an Azure Device Update agent client. It supports updates from Azure IoT to devices that have the container orchestrator. You have the link in there too. Another example is our own client, the Pantacor Hub client, which can be used as a reference for other implementations, because most of the cloud providers offer a REST API. So you are going to have to implement a client for each one of them, but they are all going to look pretty much the same. And you have the source code in there as well. I haven't prepared any demo for this presentation because there is no time, but if you want to see some demos, we are at booth 33 on level 4, so you can come later or tomorrow. We are going to be there all week. We can show you a demo of the Pantacor Hub client, maybe the Azure client, or we can talk about how it is implemented in more detail.
So this also opens up other possibilities that don't use the cloud, which can be convenient in some cases, for example for development: just switch the cloud client with a local client that can be managed from your host computer. We also have an example of that, the pvr-sdk container, plus an example of a tool that manages it from the host computer, pvr. So, as you can see, you don't only get the local experience here; it's all about being able to switch the cloud container with an update. You always get the same base system with the container orchestrator, which is a minimal implementation of an update manager. In our case it's pretty small, less than two megabytes, so it can fit on those devices with 16 megabytes of storage.

As I was saying, to allow control from a container, we need an update protocol. We do that by offering a small server that interprets HTTP requests over a Unix socket, and it can be set up so that only some of the containers can control the container orchestrator. We set that from the state JSON, so you cannot mess with the socket from the application level.

This would be the minimal update flow. It's pretty basic. First, we check for new updates in the cloud; then we download the update; then the cloud client sends the state JSON to the container orchestrator; then the cloud client sends the artifacts related to that state JSON; and then the new update is run. For that, we're going to need at least these four endpoints: steps, to install the state JSONs; objects, to install the artifacts; commands, to tell the orchestrator to run a given update, which can open other possibilities, like containers controlling other aspects of the device, rebooting or powering off or whatever; and then some metadata, so the container can send feedback information to the cloud.
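The minimal flow above can be sketched as a client driving those endpoints in order. This is only an illustration: the endpoint paths and the injected transport callable are assumptions, not the orchestrator's real API over the Unix socket.

```python
import json

def push_update(send, revision, state, artifacts):
    """Drive a minimal update through an injected transport.

    `send(method, path, body)` stands in for whatever speaks HTTP over
    the orchestrator's Unix socket; the paths below are hypothetical.
    """
    # 1. Install the state JSON for the new revision.
    send("PUT", f"/steps/{revision}", json.dumps(state).encode())
    # 2. Install every artifact the state JSON points to.
    for name, data in artifacts.items():
        send("PUT", f"/objects/{name}", data)
    # 3. Tell the orchestrator to run the new revision.
    send("POST", "/commands", json.dumps({"op": "run", "rev": revision}).encode())

# A fake transport that just records requests, for demonstration.
log = []
def fake_send(method, path, body):
    log.append((method, path))

push_update(fake_send, "2", {"app": {}}, {"root.squashfs": b"\x00"})
for entry in log:
    print(entry)
```

Injecting the transport keeps the flow itself independent of how the socket is actually reached, which mirrors the point of the talk: the cloud client container can be swapped without touching the orchestrator.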
Following the UNIX philosophy, which teaches us to do one thing and do it right, we have decomposed the device-side software to get the simplest possible container orchestrator, Pantavisor. More functionality can be added to this basic, versatile setup in the form of containers with privileges that can control the orchestrator using an open protocol, like we did with our cloud client container. So that pretty much sums up the idea. With this setup, you can use pretty much any cloud that offers a REST API, so you don't have to marry your cloud provider. You can still do it, but it will be a healthier relationship, because you have a way out. And then you get all the advantages of a local experience. In our case, we have a CLI tool so you can update your device from your host, but we have also worked on some other examples, like a web user interface. So you can update a web user interface that is running on the device, and you can use that to update the device too, and a curses-like tool for the same. It's very powerful, in our opinion.

I have put here some references with the links that I mentioned before, and some more, like our documentation, if you want to take a look. As I said before, you can come and visit us at booth 33. If you liked the idea, we can talk about it. If you hated it, we can talk about it too; some heated discussion is not bad if you are talking about embedded devices. And, well, I have a lot of time left, so we can do a lot of questions.

Yes, well, I'm going to repeat the question so the people on the stream can hear us. You have asked if we have some kind of mechanism to sign the updates with a certificate and check the integrity of the updates. Well, as I explained before, we have the state JSON to make reproducible states, but it can also be extended to sign the updates, because in the end a version of the device is only a JSON and a bunch of artifacts.
So we can use asymmetric encryption to sign the update with a certificate whose key can be in a secure place on the device, like a TPM. And we can also check the integrity of the artifacts. Let me go back to the state JSON, because it's a good question and I forgot to explain some things. The JSON points to these hashes, and the hashes correspond to the artifacts that come with the update. So, before running the update, the container orchestrator, Pantavisor, can check the integrity of the artifacts before starting the containers. I don't know if that answers your question.

Yes, as we have a version history, the device can roll back if something goes wrong. It is continuously checking whether the containers that should be running for that version are actually running, and if something fails, it restarts the device. Then the bootloader has the information it needs to roll back to a previous version that was marked as a good one, because it could be started. And I forgot to repeat your question, sorry, but I think with the context anyone can understand it.

You had another one, I think? Well, he has asked, in my own words, how we have ended up using containers in a world with such constrained specifications. Well, as I said, the container orchestrator in the end is less than two megabytes. When I say the words "container orchestrator", I guess you might think about Kubernetes or Docker, and we are trying to make something similar to what Kubernetes did with the cloud, which is that developers could forget about infrastructure. But we are doing it in small devices, like 16 megabytes.
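That integrity check can be sketched like this. The state layout is the same hypothetical one as before; the idea is just comparing each downloaded artifact against the hash the state JSON declares before any container is started.

```python
import hashlib

# Hypothetical state fragment: artifact name -> expected SHA-256 digest.
expected = {
    "root.squashfs": hashlib.sha256(b"root contents").hexdigest(),
}

def verify_artifacts(expected, downloaded):
    """Return True only if every artifact matches its declared hash."""
    for name, digest in expected.items():
        data = downloaded.get(name)
        if data is None:
            return False  # artifact missing from the update
        if hashlib.sha256(data).hexdigest() != digest:
            return False  # artifact corrupted or tampered with
    return True

print(verify_artifacts(expected, {"root.squashfs": b"root contents"}))  # True
print(verify_artifacts(expected, {"root.squashfs": b"tampered"}))       # False
```

Signing then only has to cover the state JSON itself: once its signature checks out, the per-artifact hashes extend that trust to every object in the update.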
There are people who have thought about using Kubernetes and Docker and all that stuff, but if you have to fit that in 16 megabytes, you are going to have a hard time, because you have to leave space for the application level, which is the level that is going to take most of the storage, for sure. So, the short answer is that we use containers to get reproducible states, because we don't know any better way of doing it, and because we could make it somewhat light. It has some overhead, of course, but it's not that bad. And we have tested examples on devices, small routers, with 16 megabytes or more.

It's LXC. He asked which container runtime we were using. Each application is a SquashFS. In the state JSON I have only added one container, because there wasn't room to show two, but each container is just these things: an LXC configuration file, the rootfs SquashFS, a Docker digest (this is for pvr, the tool from the host), and a manifest that is going to be parsed by Pantavisor. So, yeah, we use LXC.

Okay, how are the containers cleaned up after a successful update? You mean the containers from a previous version? Well, it's configurable, but we have a garbage collector that cleans up the artifacts that are not being used by the current version or by the version we would roll back to if something goes wrong, plus the logs, et cetera. As it's configurable, you can set it to always clean up, or to do it only when you reach a storage threshold.

How do we manage the list of versions? On the device side, it's just a bunch of JSONs and artifacts. On the cloud side, you get more visual stuff. It's not that I want to oversell our own cloud, since the talk was about staying independent from cloud providers, but we have a cloud UI that is amazing, called Pantacor Hub.
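That garbage-collection rule can be sketched in a few lines, under the assumption that it simply keeps whatever the current and rollback revisions reference and deletes the rest:

```python
def collect_garbage(stored, current_refs, rollback_refs):
    """Return the artifacts that can be deleted: everything on disk
    that neither the current revision nor the rollback revision uses."""
    keep = set(current_refs) | set(rollback_refs)
    return set(stored) - keep

# Illustrative artifact names; a real device would read these from its
# stored state JSONs rather than hard-coding them.
stored = {"kernel.img", "root-v1.squashfs", "root-v2.squashfs", "root-v3.squashfs"}
deletable = collect_garbage(
    stored,
    current_refs={"kernel.img", "root-v3.squashfs"},
    rollback_refs={"kernel.img", "root-v2.squashfs"},
)
print(sorted(deletable))  # ['root-v1.squashfs']
```

The threshold mode he mentions would just gate when this function runs (on every successful update versus only when free storage drops below a limit), not change which artifacts it selects.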
And there you have a version list; you have all the artifacts from all versions in the cloud. So, if you are trying to go back to a revision that was deleted by the garbage collector, you can download the artifacts again. I don't know if I'm answering your question. Sorry, I'm not sure I know how to answer this one, but I can describe the cloud object storage. For a new update, we download the new artifacts, or they are sent by the cloud container using the Unix socket. Once everything is there, we check the integrity, and the signature if there is any, and we try to progress to the new update with the new artifacts.

Better now? They communicate through a file. We have, in a text file, the version that we would roll back to and the current version. So, in case the device is rebooted, whether unexpectedly, because of an update that needs a reboot, or for any other reason, Pantavisor writes in the bootloader file the revision that goes next. We support U-Boot and GRUB.

Which? I can't answer that question because I wasn't there from the beginning. But I guess, taking into account the solutions available at the time, they wanted to get something for these small routers with low storage. I think that was the starting point, the inspiration for the idea.

Okay, so the question was: how does this scale when we have thousands of devices to manage, right? Sorry, what? Billions. Well, my mind cannot think that big. But we have another web user interface, called PantaFleet, that is more focused on that kind of task, managing big fleets of devices. And from the device point of view, we don't have any mechanism to avoid flooding our cloud API right now, but if we get to that point, we will implement it for sure, when we have the resources. We have a small team.
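The bootloader handshake he describes could look something like this tiny key-value file. The file format and the `pv_try`/`pv_rev` variable names are invented for illustration; they are not the real variables Pantavisor, U-Boot, or GRUB use.

```python
# Sketch of communicating with the bootloader through a text file.
# Variable names and format are hypothetical.
def write_boot_state(path, current, rollback):
    with open(path, "w") as f:
        f.write(f"pv_try={current}\n")   # revision to attempt on next boot
        f.write(f"pv_rev={rollback}\n")  # last revision known to be good

def read_boot_state(path):
    state = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.strip().partition("=")
            state[key] = value
    return state

write_boot_state("boot_state.txt", current="3", rollback="2")
print(read_boot_state("boot_state.txt"))  # {'pv_try': '3', 'pv_rev': '2'}
```

On a failed boot, the bootloader side of the handshake would fall back to the known-good revision, which is how the rollback described earlier survives an unexpected reboot.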
I mean, if you want to meet most of the team, you can go to the booth. You could also run your own PantaFleet-based system, and that kind of thing scales on the cloud side, while what happens on the device stays the same. That's how I'm going to answer that question. Okay, anything else? Okay, I guess we can leave it here. Thank you for coming. And, as I said, we will most probably be at our booth all the time, so go there if you are interested in demos. Thank you.