So now, welcome to today's last talk here in Janssaint, by Drew Moseley on Mender. So yeah, welcome. Thank you so much. I guess the microphone is working? Very good. All right, so my name is Drew Moseley. I'm here to talk a little bit about Mender. I've talked to a lot of you today, so hopefully this will be review. If I did my job right at the booth, I was answering questions properly for you, but hopefully we'll get a little bit deeper here and give you a little bit more information. We've about made it to the end of the day, which is good. That means there's no speaker after me to get annoyed if I speak too long. The downside is everybody looking at me will probably be annoyed if I speak too long, because I'm certainly in the way of dinner, drinks or something much more interesting. So I'll try to keep on time, and I'm sure the folks in the orange t-shirts will do their best to make sure that I do so. So just a brief overview of the things we're going to be talking about: some challenges and motivations that led us to develop Mender as a project, specifically focusing on the unique needs of the embedded marketplace. Then we're going to dig into some of the requirements that we put together, some non-functional requirements as well as some installer strategies, specifically related to the functioning of the update client that we've developed. And finally, we'll dig into Mender specifically and address how we implemented certain things based on the requirements outlined in bullet point number two here. If the demo gods are on my side, I'll be able to at least demo a small portion of the system so that you'll get a better idea of what the workflow is like when using Mender. Then we'll go into a little bit more technical detail on some of the device requirements and things needed to actually integrate Mender into your embedded device.
I'll also briefly mention some of the testing and the general community, open-source type of things that we have involved in the project. So briefly about me: I've been in the embedded Linux and Yocto development space for about ten years now, and longer than I'd like to admit in general embedded. In my current role I am the project lead and technical solutions architect for the Mender product, focused on customer-facing solutions and that kind of thing. You see some of the details of Mender over here, but we're going to dig into those, so we'll just move right on past that. Real quick sales job: we are hiring for some of the positions you see here on the screen, and we do have a table out over that way somewhere. So if this looks of interest to you, feel free to stop by and talk to us tomorrow, or catch us on our way out this evening. So briefly, I just want to motivate why OTA is needed. I suspect the majority of the folks in this room don't need a whole lot of convincing, but there are a couple of examples on this slide of things that have happened specifically in the IoT space that were a result of either non-existent or poorly implemented update technologies. There were some smart locks that were used as part of certain home-sharing sites to allow owners to remotely open and close their house when their guests were arriving. At some point they tried to push out an update; however, they pushed the wrong update, it was for a different version of the hardware, and it resulted in these expensive door locks being completely bricked and unable to let people in or out of the homes until somebody physically went on site and was able to actually update the devices by hand. There's been a lot of talk in the automotive space of head units running infotainment systems that get caught in a reboot loop due to poorly implemented updates. And one that I'm sure we've all heard about is the botnets that are out there. I think the biggest one that made the headlines was Mirai.
This was a couple of years ago now. Some claim it peaked at about 600,000 infections; I've seen reports as high as a million and a half, and I'm sure the more recent numbers are even higher. It was used to execute a DDoS against major internet sites in late 2016, taking down many large internet brands that I'm sure we all use day to day. And the intent of that, as far as we know at this point, was profit. The folks that were eventually caught and prosecuted for this, my understanding is, were running Minecraft servers and wanted to be able to take their competitors offline. So they developed this botnet to be able to actually attack their competitors in that space. Another one that came out shortly after that was called BrickerBot. The author claimed 10 million infections; that number seems a bit high to me, but it certainly did hit some large number of devices. The assumed author officially, quote, retired in November of 2016. The intent of this was what has been termed a permanent denial of service. Once a device was added to this botnet, basically all the block storage devices were overwritten with random data to ensure that the device could no longer function at all and was taken offline. The author's claimed intent was to address devices that were on the network, potentially vulnerable, and causing havoc in the internet ecosystem, so the point was very much like chemotherapy: destroying the devices to save the rest of the system. And there are newer botnets that come out every day that get even more sophisticated. The early botnets were very much about leaked credentials and things like that, but the newer ones are using more insidious mechanisms of getting access to the system. So this is an ongoing problem. So, a couple of characteristics of the embedded environment make it a little bit unique compared to, say, server-side programming or web-based programming. Typically the devices are remote.
It can be pretty expensive to send a technician on site to take the magic USB key and install an update, so you want some means of accessing devices and doing the updates remotely. Product lifetimes vary widely in the embedded space. Some markets are as long as 5 to 10 years if you're in the automotive space; if you're in the consumer electronics space it could be 6 to 12 months. So there's quite a variety there. Typically they're in a very hostile deployment environment, typically outside of your control. Think about the Wi-Fi routers and things that are in your local coffee shop. They're not exactly in a controlled environment. Power issues have to be dealt with. A lot of these devices are running on battery power, and even if they're not, there's no guarantee the owner of the device won't reach over and yank the power cord out of the wall at any given moment. So you've got to be able to handle unsafe power loss in your update system. And finally, the network typically is going to be a little bit different than your typical data center. The connectivity might be intermittent. You might be on a 3G or 4G connection or something even slower than that. And you may or may not have secure connectivity if you're going through public networks. So what are some of the requirements that we laid out when we were designing Mender in the first place? One: robustness and security. Kind of vague terms, but specifically in terms of updating capabilities, we wanted to make sure that rollback was supported, that we always had the ability to roll back to a known-good configuration. This is to avoid bricked devices in the field. We wanted to be able to have signed and trusted images, standard industry best practices as far as cryptographic signatures and things like that. And we wanted the ability to have integrity checks of the images, to make sure that you didn't have a download failure or a man in the middle trying to inject a bad image.
Then compatibility checks, and this is specifically to address the issues mentioned on a previous slide related to the smart locks. You want to make sure that the image you're installing is specifically for the device you're installing it on. Another requirement is atomic updates. This is very important when you're talking about large device fleets. If an update can crash halfway through, leaving an update that's half installed, and you have a device fleet of a million devices, that means you've got a lot of different possibilities for what the actual software installed on your devices is. So ideally, no component in the system will be aware of a partial update except for the update client itself. If there is a failure in the update, the update client detects that and makes sure that none of the rest of the system is impacted by it. Obviously we want to be able to support updating everything that we can in the target system image: kernel, apps, libraries, device trees, that kind of thing. There has been some discussion of updating bootloaders, and the only way to do that robustly would be to set up some kind of multi-stage bootloader, because there has to be some piece of code on the board that is immutable, to avoid any windows for getting bricked devices. It needs to integrate well with existing environments. Most of the designs that come to Mender are already pretty far along, whether they're using Yocto or Debian and that kind of thing. We wanted a system that integrated well with the existing environments and their existing workflows, making it easy to get started. And bandwidth consumption obviously is a concern. Many of these devices are on lower-speed networks, so we wanted to make sure that we used compression where it made sense. We will be implementing delta (differential) updates here soon, so that should make things even better along the bandwidth lines.
And finally, the downtime during the update: we wanted to minimize that. We're all familiar with your phone coming up saying there's an update, do you want to install it? You say yes, and three hours later you're waiting to get your phone back. So we wanted to find ways to minimize that. Given that list of requirements, we started looking at the various mechanisms and strategies for actually installing new or updated software on a device. The first one we looked at is called in-place. This is very much like what you're used to today in your typical desktop Linux operating system; this would be something like apt-get update, that kind of thing. The updater itself runs in user space and updates part of its own user space. So that's a fairly common mechanism that we're all used to. The next strategy we looked at is maintenance mode. That's very similar to the one I mentioned with your Android phones or your iPhones, where the device actually boots into a separate mode and from there installs the update. The main downside of this is fairly large downtimes for your device, plus there's an issue of redundancy in the bootloader and the update client. And then the third installer strategy we looked at was the dual root file system approach. This is a fairly common approach in the embedded space where you're actually using two fully redundant root file systems, each of which contains the kernel, any initramfs, anything that's needed to bring the system up, anything at a higher level than the bootloader. Note that in this case, again, there is a single bootloader; there's no redundancy at that level. And finally, we have proxy updates. This is something that we're getting ready to support in a more general fashion. The idea is that a lot of these designs these days are more than just a single system.
They might have a gateway and then a whole host of home automation things, maybe smart lighting, that kind of thing. And those devices themselves may not have enough capabilities on board to handle redundant root file system updates and that kind of thing. But the gateway is able to actually run the update client and proxy the updates over to the device. So that's another mechanism that we're looking at supporting here in the very near future. So to dig in a little bit: of course, we did choose image-based updates as the first mechanism for updating that we wanted to support in Mender. Why did we do that? As mentioned, this will in general increase the robustness of your fleet. The idea is that the software installed on your device is exactly what was tested in your CI environment. We know that. There's no ability for packages to be updated one by one on the device in the field. So if I do a full image update, I know that what is on your device in the field is exactly what was tested in my environment. It does reduce your testing matrix. Obviously, if I only have one image at a time that is known good, I only have to test that image. If I open up the software stack so that the devices in the field can apply package-based updates, I've got to, in theory, test every possible combination. And ease of rollback, this is another big one. We have full redundancy in the root file system, so that when we boot into an update and there's an issue with that update, on the next boot the bootloader can simply instruct the system to roll back to the previous known-good configuration. And this has a couple of advantages from the update system perspective. One is that we know we can always connect to the server and install another update. If the update client is unable to connect to the server for any reason, that's considered a failed update, and we will roll back to the previous known-good configuration.
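To make the rollback mechanics concrete, here is a small, self-contained mock of the boot decision a bootloader makes in a dual-rootfs scheme. The variable and function names are simplified stand-ins for illustration, not the exact environment variables the Mender integration uses.

```shell
# Mock of the A/B boot decision in a dual-rootfs scheme. Variable names are
# simplified stand-ins for the bootloader environment, not Mender's exact ones.

BOOT_PART="A"          # partition the bootloader will boot next
UPGRADE_AVAILABLE=0    # 1 while a new image is staged but not yet committed
BOOTCOUNT=0            # attempts made to boot the staged image

select_boot_partition() {
    # The first boot into a staged update is conditional: if we already
    # tried once and the update was never committed, fall back.
    if [ "$UPGRADE_AVAILABLE" -eq 1 ] && [ "$BOOTCOUNT" -gt 0 ]; then
        if [ "$BOOT_PART" = "A" ]; then BOOT_PART="B"; else BOOT_PART="A"; fi
        UPGRADE_AVAILABLE=0    # rollback: boot the known-good partition again
    fi
    BOOTCOUNT=$((BOOTCOUNT + 1))
}

commit_update() {
    # Run by the update client once its post-boot sanity checks pass
    UPGRADE_AVAILABLE=0
    BOOTCOUNT=0
}
```

A kernel panic or watchdog reset before `commit_update` runs simply means `select_boot_partition` sees a non-zero boot count on the next attempt and flips back to the old partition, which is the behavior described above.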
And atomic updates, I mentioned that previously, but that's a pretty important component. That means your code doesn't need to know anything about the fact that there's an update process going on. There might be some integration at some levels if you're moving forward to new versions of database schemas and things like that, but that's typically handled at the boundaries of an update, either right after the update is installed or right after the system has rebooted into the new update. So the fact that the update system is completely atomic means that your code runs and doesn't even need to know an update is going on until it's ready to actually move forward into the next state and reboot into the new update. And finally, the deployments are reproducible. Say I have a set of devices in the field and I'm trying to deploy update version 3. All my devices are on version 2, and for 80% of my devices the deployment succeeds; they're off and running happily. But for the other 20% there was some issue, maybe a power issue or some other problem with the deployment, so they rolled back to version 2. Now I have 80% on 3 and 20% on 2. I can simply issue another deployment to version 3, and all the devices that are on version 3 will simply ignore the deployment because they're already there, and the devices that failed the first time around will get another shot at updating to that version. At that point, you will have completely reproduced the previous deployment without interrupting any existing devices that are running on the new version. And just to drive the point home a bit, some of the challenges with package-based updates, and this is the primary reason we chose to avoid package updates, at least as our first supported methodology, really come down to the total number of combinations that come out of any kind of package-based update.
A lot of the package systems, I know, are getting better about handling atomic updates and rollbacks, but there's no installation order enforced by the package system. So if I update one set of packages on this device over here, and this device over there for whatever reason decides to start with a different set of packages, it's very possible that my devices, even though we think they're running the same version, might be slightly different. So there are the dependencies between them, and then the fact that if there is an issue that stops a package-based update, it's not always obvious what needs to be done to clean it up, and that can cause issues with further installations down the line and block installs of new updates that may be out there. So, this is an open-source conference; I probably don't need to do a whole lot of selling on the open-source side of things, but a couple of things to point out. Our biggest competitor is those that want to build their own update system. We talk to a lot of embedded IoT developers, and many of them thought: oh, it's an update system, okay, I'll just write a couple of shell scripts, it shouldn't be hard. I'll grab a disk image over HTTP, I'll write it to the block device, and I'll be done. Conceptually, that's what needs to be done, but it gets much more complicated when you start looking at the details and the ways things can go wrong. And how much time and effort are you spending if you are designing your own update system? How much time and effort over the lifetime of your product are you spending maintaining the update system versus maintaining your value add, your specific use case, and your expertise?
So that was part of the reason we wanted to release this as a separate product and an open-source product: so that it could get out there and become a de facto standard in the community, help answer this problem that everyone in the embedded space deals with, and provide one solution that can be reused across many different designs. So, let's talk a little bit about what components need to go into any embedded update system. First, upper left: obviously, you need some means to detect the update. In our system, we have a server and a client. The client just opens a connection over TLS and, at a specified polling interval, asks the server: is there an update available for me? That's pretty standard. Compatibility check: it's environment-specific, and some would say optional, but given the risks of not doing it, I would say it really should be promoted to a must-have. Download: obviously, that one's pretty straightforward; over TLS, just download the data blob. And then a number of checks from there. Integrity check with a checksum: this is, of course, to detect download failures and that kind of thing. Authentication: this gives you somewhat greater guarantees, but it also decouples your infrastructure from your actual development system. The signature is applied in your CI system or by your developers, and so that gives you guarantees that are independent of the security of the transport layer. And then from there, moving into things like decrypting, if it's important that your payload is encrypted, and extracting, if you are using compression, which I would imagine everybody is. At that point, we can do things like pre-install actions. This is going to be very dependent on your application and your device and what is necessary. Do you need to flush databases? Do you need to shut anything down? Those kinds of things might be necessary depending on your exact application. And from here, obviously, the installation.
And, you know, in our case, it's simply doing a block-based write to the inactive partition. Then we move into the post-install actions; these are the kinds of things that would happen before the reboot, if there are any that are appropriate for your device. Then we do a reboot, come back up, and do post-install and sanity checks. As I mentioned, just the fact that the Mender client is running as an application is a pretty good sanity check that the kernel is up and running. And if the Mender client is able to actually make contact with the server, we know that the network functionality is still there. Then you, as the system designer, will plug in sanity checks that are appropriate for your environment. Once all those checks have completed, we can commit the update and move forward. If any of those checks fail for any reason, then ideally there will be some mechanism for doing rollback and failure recovery. And that's where we get the best benefit out of having the fully redundant root file systems: in that final step. So, a little bit of a high-level view of Mender specifically. We have both the client and the server, written in Golang. It's all open-source under the Apache 2 license, and all our sources are up on GitHub. We also distribute all the server-side components as Docker containers in a Docker Compose environment that will allow you to spin up your own instance of the Mender server on any system that's capable of launching Docker containers. The client itself is written in Golang as well. It's fully open-source: the server as well as the web UI and the client that runs on the target. And all the tooling, QA, everything is available through GitHub and through our docs website that you see there.
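The authentication and compatibility checks from the flow above can be sketched with stock openssl. The key and file names, the file layout, and the device-type strings here are illustrative assumptions, not Mender's actual artifact format.

```shell
# Verify a CI-applied signature and the device-type match before installing.
# Key/file names and the device-type strings are illustrative only.
check_artifact() {
    artifact="$1"; sig="$2"; pubkey="$3"
    artifact_device_type="$4"; device_type="$5"

    # Authentication: the signature is made in CI with the private key, so
    # this guarantee holds even if the transport layer were compromised.
    openssl dgst -sha256 -verify "$pubkey" -signature "$sig" "$artifact" \
        >/dev/null 2>&1 || { echo "bad signature" >&2; return 1; }

    # Compatibility: refuse images built for different hardware
    # (the bricked smart-lock failure mode).
    [ "$artifact_device_type" = "$device_type" ] || {
        echo "artifact is for $artifact_device_type, device is $device_type" >&2
        return 1
    }
}
```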
And there's a lot of information in the docs website that will talk you through the APIs, the architecture of the server, the client, what's needed to integrate it into your design, and that kind of thing. So moving forward, a little bit more detail on Mender, AB Image Update, TLS Communications, all things I mentioned. One thing to mention, because we have two root file system partitions, this gives us the capability of allowing the Mender client to install an update, stream it directly to the inactive partition, and that reduces the amount of storage space that's needed in the active partition because the new image is never stored persistently on that partition, so it just goes directly to the inactive partition. And then the other things that I mentioned, deployment management, obviously, that's through the server, cryptographic signing and verification, standard industry best practices there. And then we have what we call a state script mechanism, and this is effectively the means with which the system designers can customize the Mender flow to their particular use case. So at every interesting state change within the Mender client, you can plug in a script that is expected to run to either allow or reject the particular state change. So for instance, if you have a dodgy Wi-Fi connection, you might have a state script plugged in at pre-download to check the strength of your Wi-Fi or the strength of your battery to either allow the download or reject it or to allow it to be tried again later when the device is maybe plugged into a more reliable connection. And post-install sanity checks and things are implemented with the same mechanism. So there's about nine or ten different state changes where the system designers are able to plug in their own customization scripts to be able to control the flow of the states through the Mender Updater workflow. So this is kind of an image going into a bit more detail on the AB dual root file system approach. 
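Before looking at that diagram, a pre-download state script of the kind just described might look like the sketch below. The exit-code convention (0 to continue, 21 to ask the client to retry later) follows Mender's documented state-script interface; the battery threshold and sysfs path are assumptions for a particular board.

```shell
# Sketch of a pre-download state script (e.g. Download_Enter_01_check-battery).
# The sysfs path and the 30% threshold are illustrative assumptions.
check_battery() {
    capacity=$(cat "${1:-/sys/class/power_supply/battery/capacity}" 2>/dev/null || echo 100)
    if [ "$capacity" -lt 30 ]; then
        echo "battery at ${capacity}%, postponing download" >&2
        return 21    # Mender convention: retry this state later
    fi
    return 0         # allow the state transition
}
# As an installed state script this would simply end with:  check_battery; exit $?
```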
So on the left-hand side, we've got our bootloader. This is the immutable piece of code that has to be programmed in at the factory or on your desk. Typically it's U-Boot or GRUB, and with one of our recent changes, we've actually started running GRUB on top of U-Boot in the ARM environment. This allows us to use both U-Boot and GRUB unmodified, and we are able to implement the logic we need as scripts on top of GRUB. So it makes it much easier to get that integrated into your environment. In a running system, you have, in this case, the green image, image A, as the active image. That's a full root file system. It contains, obviously, the Mender client and the full Linux environment: kernel, DTBs, kernel modules, libraries, everything. And then we've got the image B partition, which at this point is inactive. The contents of this partition should be considered unknown, because there could have been an update deployment started that failed for some reason. Some people ask: well, can I, two weeks later, decide I want to roll back to that previous version? That's not something we support, simply because there's no way to know if anything has happened on that partition; that's all information within the Mender client, and it's not something that we choose to expose. Additionally, one piece I did not mention previously is this blue partition here. That's the data partition. That's where all the persistent data needs to go. At runtime and during updates, we don't touch the data partition. So this is where you would configure things like Wi-Fi credentials and other things that you're going to want to persist across the various updates. Once we have the system updated here, we actually just switch the roles: now image B is active and image A is inactive. And the first time we boot into image B in this case, it's considered a conditional boot. If, for any reason, the system reboots, we simply jump back here.
The logic in the bootloader will know that that update had not been committed, and it will automatically switch us back to image A. If, in the failed image, the system comes up far enough that the Mender client application code is actually running and can detect a failure, we will automatically trigger the reboot. If your deployment, for instance, has a bad kernel and you get a kernel crash before the Mender client can get up and running, obviously we can't trigger the reboot at that point, and you'll be relying on watchdog timers or other standard mechanisms to force the system to reboot. But even in that case, we would not have committed the update yet. So when that watchdog timer fired and rebooted your system, it would simply move back to the known-good configuration. So moving on a little bit, we'll talk about the Mender server. This slide is kind of an eye chart; those that are used to microservice architectures are probably used to lots of boxes on the screen like this. The only exposed ports on the server are port 443 and port 9000. Obviously 443 is the TLS port that the client communicates over, and port 9000 is the storage port. So when you're actually downloading the artifact itself, that comes across port 9000. The APIs between all the microservices are exposed over a RESTful API, so you can make calls into them. We do have the web UI, but many users that already have a device management infrastructure will actually plug the Mender server into that infrastructure using the API rather than using our web UI. So you can trigger deployments, you can group devices, and that kind of thing, all over the API. And all the components in green are stateless, which means they scale very well horizontally. So if you have a very large device fleet and, for your particular use case, it's your inventory subsystem that is your bottleneck.
Well, you can just scale the containers running the inventory service, just add more of those, and that should help alleviate the bottleneck. Obviously, the persistent storage, databases and that kind of thing, can become the bottleneck, and there are well-established best practices for scaling databases. If you get to the point where that's the case, then presumably you have database experts who can guide you in implementing sharding and things like that. The link down here on the lower left of the screen is our API docs, and that will give you an idea of some of the functionality that's available and provided by our server-side microservices. So, a little bit about what's needed on the client side when you're actually integrating into a new device. Typically, obviously, there are the A and B root file system partitions and the data partition; we mentioned those already. Sometimes there's a bootloader partition. It depends on exactly how it's implemented on your device, whether there's actually a separate partition for that or the bootloader is just stored in a separate SPI flash or some other device. So that's going to depend very closely on the design and architecture of your hardware. The bootloader integration, as mentioned, is what controls the boot process. Depending on which bootloader you're actually using, whether it's U-Boot, GRUB, or GRUB on top of U-Boot, we just use standard U-Boot scripts, and GRUB has a scripting API that we've implemented this logic in. This is the logic that determines whether to boot the A or the B partition, and it's typically the main communication point in the Mender system between the Mender client and the bootloader. As far as actually implementing the bootloader integration, that tends to be the most development-heavy activity when you are integrating Mender into your build. So if you're using U-Boot...
If you're using GRUB, or GRUB on top of U-Boot, that's pretty much automatic. It doesn't require any source code modification of either U-Boot or GRUB, which is great, because that's where it gets problematic. We also have some automatic patching of U-Boot: if the U-Boot support for your board follows some of the newer functionality within U-Boot and is implemented similarly to other boards, we're actually able to detect and patch the right files automatically. If, for your particular platform, maybe you're on an older platform that hasn't been updated with all of the U-Boot features that have been added recently, we can always fall back to manual patching, and we have documented very well exactly what needs to be done and what functionality needs to be added to U-Boot. So integrating the bootloader is likely where you're going to spend most of your time, but we've done everything we can to make that as easy as possible. As for the runtime integration, our requirements on the actual runtime are fairly minimal. It's a fairly simple application, about 8 to 10 megabytes in size, that runs as a daemon in the Linux runtime. We support eMMC and SD cards, typically with ext4/ext3-type file systems, and we also support raw flash with UBI. As far as the target OS goes, Yocto/OpenEmbedded is our primary out-of-the-box supported platform. We have a standard Yocto meta layer that you can integrate into your build; set a few configuration parameters in your local.conf file, and you can pretty much get Mender integrated into your device without too much hassle. We've done some work with Buildroot and OpenWrt. In both cases, some of that has been submitted upstream and is able to be reused, and some of it is going to require customization if you decide you want to use one of those systems. But we have done that through our services group on a few occasions, and we know that it is possible to support them both.
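For the Yocto path just described, the local.conf additions typically look something like the fragment below. The variable names come from the Mender meta layer, but the exact names and required values should be checked against the Mender documentation for your release and board.

```conf
# Illustrative local.conf additions for a Yocto build with the Mender meta layer
INHERIT += "mender-full"
MENDER_ARTIFACT_NAME = "release-1.0"
MENDER_SERVER_URL = "https://my.mender.example.com"
MENDER_STORAGE_DEVICE = "/dev/mmcblk0"
MENDER_STORAGE_TOTAL_SIZE_MB = "2048"
ARTIFACTIMG_FSTYPE = "ext4"
```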
And finally, recently we released a utility we call mender-convert. This allows us to support things like Debian, Ubuntu, and Raspbian, things that are fairly common in the embedded IoT space but typically have their own package-based updaters and that kind of thing associated with them. The way this works is that mender-convert is actually a post-processing step. Once you have a Debian image or an Ubuntu image with all the packages and all your code and everything, your golden image that works for you, you post-process that image: we loopback-mount the file system image, inject the Mender client and all the configuration files, create the multiple-partition structure, and build the artifact, based on that image, that is needed to be deployed over the air. So it's an extra step to do it this way, and we are certainly looking at getting better integration with the build systems for these kinds of operating systems, but pretty much any other target operating system can be supported with this mechanism. So if you have something that's not listed here, feel free to reach out or jump on our mailing list; we'd be happy to work with you on getting additional operating systems supported. And what's coming soon? This is the big thing that a lot of people have been asking about, and we've been talking about it quite a bit over at our desk. We've got a new framework called Update Modules coming, probably in a two-to-three-month time frame, and what this allows us to do is support updating microcontrollers, sensors, and other small devices. The idea here would be for these devices to be updated in the proxy mode that we talked about before.
So your embedded Linux system would function as the gateway running the Mender client, and then we can write plugins that would allow you to, say, update an Arduino that is connected to your device, or some kind of sensor that's connected over LoRa, or some other kind of small device. The client in this case will not be running on the device that's being updated; it'll actually be running on the gateway device. With Update Modules, we will also be able to better support in-place updates, such as running apt-get update and that kind of thing. There will likely be some limitations there. It's unclear if we'll support kernel updates, because typically the distribution-provided kernel model is a bit different from what Mender's expecting, so there may be some extra steps involved if we want to support updating the kernel through this in-place update mechanism. We can also support simpler things such as configuration and calibration data: if you just want to update a few files in /etc and don't want to download a full root file system to do that, this framework will allow it. Container-based updates: this is something that a lot of folks ask about. Today we don't directly support this, but with Update Modules we should be able to support running various container subsystems on your devices and delivering the updates to the containers from there. Differential or delta updates: this is a very common request, especially when you're dealing with large root file system images. When only minor things change between them, you don't want to re-download all the things that are already there, so we should be able to provide better support for that. And as mentioned, this is a framework: the idea is that it allows us to create the common modules that we know people are going to want, but it can also be used for very specific use cases.
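As a rough illustration of what such a plugin could look like, here is a minimal sketch in the spirit of the Update Module design described above: a shell handler the client invokes with a state name and a directory containing the payload files. The state names and calling convention are assumptions modeled on the announced framework (the feature was still upcoming at the time of the talk), and the "flashing" is just a placeholder echo.

```shell
# Sketch of an Update Module handler. Assumed interface: the Mender client
# invokes the module as `<module> <STATE> <FILES_DIR>`, with payload files
# under $FILES_DIR/files. State names here are illustrative.
handle_state() {
    state="$1"
    files_dir="$2"
    case "$state" in
        ArtifactInstall)
            # On a real gateway this would flash the payload to the
            # attached microcontroller (e.g. over serial or SWD).
            for fw in "$files_dir"/files/*; do
                echo "flash $fw"
            done
            ;;
        ArtifactRollback)
            echo "restore previous firmware"
            ;;
        *)
            # Ignore states this module does not need to act on.
            ;;
    esac
}

# Entry point on a real device would be: handle_state "$1" "$2"
```

The framework then takes care of downloading the artifact, sequencing the states, and reporting success or failure back to the server; the module only implements the device-specific install and rollback steps.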
If you have some subset of devices where you want to implement something very custom, say updating Linux on two of your cores while maybe some RTOSes like Zephyr or FreeRTOS run on other cores, with a very custom mechanism to update all those systems together, all of that should be possible with this Update Module framework. So, real quick: if you want to get started with Mender, these three commands are all you need. If the demo gods are on my side, I'll actually be able to do this and at least show the web UI here. The link down at the bottom, on our docs page, will walk you through it. This basically downloads a Docker Compose environment and launches the Mender server on whatever machine you're running on, and it also launches a QEMU-based emulated device already running the Mender client and able to connect to your server. So let's see how well this is going to work. Those fonts are no good. Is that reasonable? So, password, there we go. Okay, I'm just going to let this scroll. This is actually launching the Mender demo environment here, which, as mentioned, is a Docker Compose environment, so we're launching a number of Docker containers. In a moment, we'll start to see all the logs from each of the containers go scrolling by. At this point, I can go ahead and make sure I've got a user set up so I'm able to log in. And now we can see the logs from the emulated device that's running here. Just for good measure, I'll go ahead and SSH into this device. This is obviously an x86-64 QEMU device. So I can display the Mender logs here, and at the same time I can jump over here. This is the Mender web UI. You can't really see that very well, but we have one device here, which is our emulated device, running version 1.7. We have an artifact, which is a version of software to install. And we can trigger a deployment here.
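For readers following along, the three quick-start commands referenced on the slide look roughly like the sketch below. The branch name and repository layout are assumptions based on the 1.7 release used in the demo; the docs page the speaker mentions has the authoritative version.

```
# Fetch the demo environment and bring up the server plus an
# emulated QEMU device. Branch/paths are illustrative for the 1.7 era.
git clone -b 1.7.0 https://github.com/mendersoftware/integration.git mender-server
cd mender-server
./demo up
```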
We'll select the artifact, and in this case I'm just going to select all devices. It does the compatibility check here; you see the device type is qemux86-64. And we trigger our deployment. Now, this is where I am fairly certain that it's going to fail. But if we look over here, we can actually see that it attempted an installation, and here we get an "update install failed" because the image sizes are not right. So we have a bug in our environment right now, and the bandwidth wasn't sufficient for me to do a full build from scratch today. After a few minutes, this will be detected as failed, and this particular deployment will be marked as failed because of that. But that should at least give you an idea of what the workflow is like when you're working with the Mender system. And just to share a bit more: we do have a Google Groups mailing list out there, a pretty standard open source working model, and we love contributions. If you have any ideas, feel free to jump on our mailing list. Jump on hub.mender.io; that's our community site, and we're actually migrating all of our mailing lists there. It's a site to share tutorials and integrations; if you've added Mender to a specific board that somebody else might benefit from, we'd love to have you post that there. And obviously we have an IRC channel and a developer portal. The very bottom link there is pretty much the main link to get all of this information. And with that, I think we've got some time for some questions. Hello. Thank you for the presentation. My question is, do you have any plans to support updating devices that have encrypted storage? What I'm thinking about especially: usually, when you want to protect deployed devices against tampering, what you would do is encrypt the storage, use a TPM, and so on. So if you update the software, you have to inform the TPM, and that can be very tricky. Is it a challenge you've already looked at? Certainly we want to support TPMs and things like that.
And we do have some one-off services work that we're doing toward that. Exactly how that will roll out into the generic product is unclear, but there's nothing inherent in the Mender architecture that would preclude it; there's obviously extra work that needs to be done to implement it. So if you have ideas, again, feel free to reach out. We'd love to have more people contributing there. How does the system deal with TrustZone? In the China market, many vendors just use the Android way, which can handle TrustZone, but it's not ideal. The A/B method combined with the bootloader and TrustZone is very important in OTA, and producers need OTA. I'm not following the question. How does this system deal with TrustZone? With TrustZone, you cannot just write a fresh image into the B partition or the A partition; you have some key problems and some security problems there. Have you thought about it? The answer is: we don't deal with that today. May I ask a second question? How do you handle something like a Wi-Fi password in the data partition? Since you use image updates, not package updates, some configuration files would be overwritten in the new version with the original file, and then the data partition is not usable anymore. That would cause some problems. Basically, for things like configuration files, Wi-Fi credentials for instance, they're typically stored in the /etc directory, and obviously that's overwritten with an image-based update. It's really up to the system designer at that point to arrange for those files to be stored in the /data partition, and either replace them with a symlink in /etc or change the configuration so that it knows to look for the credentials file over in the /data partition. That's definitely a system integrator step; it's not something we can do in any kind of automated fashion, though scripting can certainly help there.
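The symlink approach described in the answer can be sketched as a small first-boot script. This is an illustration only: the file name is hypothetical, and `ETC_DIR`/`DATA_DIR` are parameterized (defaulting to `/etc` and `/data`) so the logic can be exercised outside a real device.

```shell
# Sketch: keep Wi-Fi credentials on the persistent data partition so an
# image-based update does not wipe them. On a real device ETC_DIR/DATA_DIR
# default to /etc and /data; the file name below is illustrative.
persist_config() {
    file="$1"
    etc="${ETC_DIR:-/etc}"
    data="${DATA_DIR:-/data}"
    mkdir -p "$data/etc"
    # First boot only: move the original file into persistent storage.
    if [ ! -e "$data/etc/$file" ] && [ -f "$etc/$file" ] && [ ! -L "$etc/$file" ]; then
        mv "$etc/$file" "$data/etc/$file"
    fi
    # Point the path in the (update-overwritten) rootfs at the persistent copy.
    ln -sf "$data/etc/$file" "$etc/$file"
}

# On a real device: persist_config wpa_supplicant.conf
```

Because /data lives outside the A/B root file system partitions, the symlink target survives every image update, and re-running the script after an update simply recreates the link.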
I think we've got a question up here. Hello. Hi, thank you. It seems from the entire system that you have everything there to potentially support recoverability. For instance, if you're running on A and you do something silly, is it possible over time to create snapshots to B, so that at some point you can say, oh, I've messed up my A, can I just switch over to B instead? That's certainly something that could be added; it's not a model that we support today. The expectation is that your active partition is known good and your passive partition is completely undefined. So the idea of having multiple snapshots and that kind of thing is not there; we have two partitions, and that's all we have today. But conceptually it certainly could be expanded that way. Hi, thank you for your presentation. How hard would it be to deploy Mender over an existing MQTT network? I'm sorry, I can barely hear you. Would it be possible to deploy Mender over an existing MQTT network? Implement Mender... let me come a little closer. Implement Mender over MQTT. As long as you've got a pipe for data, we could potentially do that. Today we download the images and all the communication is over TLS, but as long as there's some data pipe available, we could certainly do that. Full image updates might be a bit heavy for MQTT; they might stretch the protocol a little too much, but sure, conceptually that could be done. Easily? I'm not going to promise that. Hi. A lot of these devices often have limited onboard storage, and maybe they don't even have a separate storage device or partition, and sacrificing half of your existing storage in the name of reliability and reduced downtime during an update might be a little excessive. Have you considered alternative schemes, like using a minimal partition with a base system and your Mender client running just to perform an update, and then returning from that partition? So you're talking about...
Something like the maintenance mode, where you have an update client that will update part of the system, and then you boot back into that and kind of bootstrap on top of it? Yeah, except instead of having to do it in the bootloader, you can still have a root FS. And actually, that picks up on the previous question, which was about using the separate partition to recover functionality. So instead of having that functionality embedded in the bootloader, which, as you mentioned, has some problems, it could be a separate root FS partition. Again, it's not something that we've put a whole lot of thought into in terms of the specifics of how to implement it. That Update Module framework that I mentioned at the very end could potentially be used for it, but we'd obviously have to dig into the specific requirements to see how well it would match up. Although it's the last talk, thank you very much for your talk. Thank you so much.