Okay, I think we can get started; we are about on time. Thank you for joining my session. I'm the guy who stands between you and lunch, so I'll try to get through this quickly. Today I'm going to be talking about strategies for developing and deploying embedded applications and images. If you were at the previous talk in this room from my colleague Drew, there is some overlap; I'm going to go into more detail on some of the features he mentioned. The scope of what we're talking about today is this loop: while you're developing applications, or developing your embedded system, you build some software, you deploy it to your device, and you test it. In this session we'll take a look at a few of the tools that are available to us for that loop. It's also partly the story of what I've been through in my embedded Linux career: the tools I've used, and what I use today as my primary tools when developing systems and applications. So, a quick overview. We'll look at the desktop environment and how that works when you develop applications for embedded Linux systems. We'll look at the embedded environment that's available to us as well. We'll look at development workflows using package managers, including Yocto's package management support; we'll get into that a bit later. We'll also take a look at network booting, and at using a software update solution as a development tool. A little bit about me: my name is Mirza. I've been working with embedded Linux for the last seven years. I started out more on the hardware side, doing hardware development, and also low-level work like U-Boot and Linux kernel development. The last two or three years it's mostly been Yocto and Buildroot: build systems, creating custom distributions, and things like that.
I work as an embedded solutions architect at Northern.tech, and I also work on the Mender.io project, which is an over-the-air updater for embedded Linux. We have a booth upstairs, so if you have any questions about that, you're welcome to join us. Let's start with the desktop environment: developing applications on a desktop or laptop system running, say, Ubuntu. Generally, everything is available to you through apt-get install. If you're missing some dependency library, or you need development tools like CMake or make, it's all available pretty easily; it's just apt-get install and you're good to go. You also have high availability of tracing and debugging tools like GDB, strace, and others, so you have a lot more control when you're running your binary. You can easily debug it, since you're building, running, and testing on the same machine. And when you're able to do this, you have very short cycles from making code changes, to building, to testing. I would try to keep development on a desktop or a PC as much as possible, and even go to some lengths to mock certain hardware features, to be able to do basic sanity testing in the desktop environment, just because you get these short cycles of changing code, building, and deploying. But be aware that it's mostly basic testing, because you're not testing on the dedicated hardware where this will eventually run. You may also have constraints: if your embedded device is less powerful, that has an effect on your binaries and so on. Now, moving to the embedded system itself, you can probably reproduce most of this workflow there: on many embedded devices you have access to Ubuntu or Raspbian, which give you an environment similar to the one on your laptop.
You can install development tools there as well, like make and CMake, and use the board as a development workstation. But you shouldn't, really, because it's generally very slow; compile times are terrible. And you cannot really set up the exact same environment that you have on your PC, so it's not viable in the long run. So what we need to start doing is cross-device development. We can't really use our embedded devices as development stations, so we use our laptops or desktops, which are much faster, but then we need to start cross-compiling applications, and that's when it starts to get a bit complicated. Still, this is the accepted approach nowadays, and it's how most people generally do it: using Yocto, Buildroot, or another build system to cross-compile on a more powerful machine, and then transfer only the binary to the device. But it also introduces complexity, because now you compile the code in one place and need to transfer it to one device, or to multiple devices, and then run testing. This is where it gets a bit messy. The entry point that most people in this room have probably experienced is transferring files between your laptop and your device. You can easily do that with secure copy (scp) over SSH, for example, or by moving a USB stick. It's surprising how many people stop here; this is how they develop applications for embedded devices. But it is, of course, error-prone, and it's a lot of manual work. People do try to automate it, but it's not really recommended.
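As a sketch, that manual entry-point loop looks something like this; the IP address, user, and binary name here are placeholders, not anything from a real setup:

```shell
# Cross-compile on the host, then push the binary to the target over SSH.
# 192.168.1.50, root, and /usr/bin/myapp are hypothetical values.
make myapp
scp myapp root@192.168.1.50:/usr/bin/myapp
ssh root@192.168.1.50 '/usr/bin/myapp --help'
```

Every iteration repeats all three steps by hand, which is exactly where the errors creep in.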
Then, if you're using an IDE, an integrated development environment, they usually have plug-ins to cross-compile, and usually some kind of post-build hooks that will transfer your binary to the device and run it, so from the IDE you can run your binary on the device. For example, with Qt Creator you can launch the debug server on the device, connect, and debug remotely. But it's not very common; I don't use IDEs that much, at least. The next step would be package managers, which are pretty familiar to all of us; that's how we install applications on our laptops. With a package manager you get more sanity checks and more control: you can state dependencies, for example that the binary you're installing depends on this version of this library. So you get more safeguards against the error-prone manual transfer to devices. There are three popular package formats: the Debian one (deb), RPM from the Fedora world, and opkg's ipk. To utilize this, you would probably add a step to your build system when you're compiling your application. If you're using make, say a "make deb-package" target that packages your binary, along with any extra configuration files your application ships, into a .deb file that you can then transfer to your device and install. This, again, gives you dependency tracking and sanity checks, and it gets a bit less error-prone. You can also provide custom package feeds, where you upload the Debian packages you create for your application and make them easily available to your devices. Package managers are mostly useful during development or prototyping, when you're working with the embedded device.
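As a minimal sketch of that packaging step, here is how a binary plus its control metadata can be assembled into a .deb by hand with dpkg-deb; the package name, version, and paths are made up for illustration:

```shell
# Build a throwaway package tree: control metadata plus the files to install.
mkdir -p hello-pkg/DEBIAN hello-pkg/usr/bin
cat > hello-pkg/DEBIAN/control <<'EOF'
Package: hello-app
Version: 1.0
Architecture: all
Maintainer: dev@example.com
Description: Example application package
EOF
# Stand-in for the cross-compiled application binary.
printf '#!/bin/sh\necho hello\n' > hello-pkg/usr/bin/hello-app
chmod 755 hello-pkg/usr/bin/hello-app
# Pack the tree into a .deb that can be copied to the device and
# installed there with 'dpkg -i hello-app_1.0_all.deb'.
dpkg-deb --build hello-pkg hello-app_1.0_all.deb
```

In practice you would hang this off a build-system target (the "make deb-package" idea) so the package is produced on every build.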
You're trying to debug something, you notice you're missing a tool like strace or tcpdump or iperf, and you just run apt-get install and continue debugging whatever problem you have right now. But this is not always available. You don't have package managers on all embedded devices; generally only if you're running Debian, Ubuntu, Raspbian, or something like that on your embedded system. If you don't have that, let's compare. I'm going to focus a bit more on the Yocto Project and how to create package feeds in that environment, similar to what you'd have on a Debian-style system. There's the Angstrom distribution, which maintains package feeds. They have a meta-angstrom layer, and it seems it's still maintained; I haven't used it in a while, but I checked, and they have a sumo branch from this year. What you typically do is include meta-angstrom in your Yocto builds and set your distribution to angstrom, and then you get the package feeds that the Angstrom distribution maintains. This is also helpful while doing prototyping or early development of your embedded platform. But Angstrom is a lot more than package feeds, so maybe you don't want to use it for other reasons; the package feeds are nice, but in the long run you're probably trying to create something more customized, based on Poky or similar. There's a way to do that with Yocto as well, and it's how meta-angstrom does it too: Yocto basically generates package feeds for you. When you do an image build, one of the output directories contains packages. You can choose between RPM, DEB, and IPK, and in the IPK case they're stored under tmp/deploy/ipk. But those are just packages; it's not yet a complete package feed.
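In local.conf terms, the choices above might look roughly like this; a sketch, and the angstrom line only applies if you have actually added meta-angstrom to your bblayers.conf:

```conf
# local.conf sketch: choose ipk as the packaging backend...
PACKAGE_CLASSES ?= "package_ipk"

# ...and, optionally, build against the Angstrom distribution to use
# its maintained package feeds (requires the meta-angstrom layer).
DISTRO = "angstrom"
```

Without the DISTRO line you stay on your own (or Poky-based) distribution and serve the packages yourself, which is what the next steps cover.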
What you normally need to do next is run a command, "bitbake package-index", which generates all the index files that the package manager running on the device needs. After you've run package-index, you can simply expose the deploy folder, where all your packages are, over a simple HTTP server, which makes it accessible to your device on the local network. But this only fixes the server side of the package feed; you also need the tools on your embedded device. There's an image feature you can set, "package-management", which will install the device-side tools needed to fetch packages for whichever package manager you chose (opkg if you're using ipk, for example). There's also a recipe in meta-openembedded called distro-feed-configs. If you include that in your build, it generates the configuration files needed by the package manager running on your device. There are some configuration parameters you can set, and the interesting one is the URL: the address on the local network where your package feed lives. If we look at what that recipe creates, in the opkg case it creates all the feed configuration files, per architecture, plus the generic-package feeds; this is all output from Yocto. You can see that the URL is configurable, so if you change it to your local IP address, you basically have a local package feed that you can utilize. If you set this up in your development workflow, it becomes easy to make changes in your build environment: you rebuild the image, rebuild the package index, and then everything is available for your device to fetch. You don't have to reflash the device with a disk image every time you make a change.
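Putting those pieces together might look something like this; a sketch only, with a made-up IP address and port, assuming the ipk backend and the distro-feed-configs recipe from meta-openembedded:

```shell
# In local.conf (build-time configuration):
#   EXTRA_IMAGE_FEATURES += "package-management"
#   IMAGE_INSTALL_append = " distro-feed-configs"
#   DISTRO_FEED_URI = "http://192.168.1.10:8000"

# On the build host: regenerate the index and serve the deploy folder.
bitbake package-index
cd tmp/deploy/ipk
python3 -m http.server 8000

# On the device: refresh the feed and pull in a tool you're missing.
opkg update
opkg install tcpdump
```

The point is that after a rebuild, the device only fetches the changed packages instead of a whole new disk image.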
You can also run "bitbake world", which will build packages for everything that's buildable in your environment. That takes a while, but it's a way of providing a more complete package feed: even if, when you build the image, you don't yet know which packages you'll need, bitbake world builds everything, and then it's all available to your device through the package feed while you're developing. Any questions? Feel free to interrupt me. Okay. The next approach is something I've never done myself, but apparently people are doing it. In the server and enterprise world, it's very common to have some kind of configuration management tool to manage a fleet of servers, and there are a lot of tools available: CFEngine, Puppet, Ansible, Chef. These all automate configuration over a large fleet of machines, and some people apply these workflows to embedded devices as well. That means you have some kind of golden image that you install on all your devices, and you need to install the configuration management agent on each device; that's usually manual work done during provisioning. You also have to set up connectivity and trust between your configuration management server and the device. Then you're able to script the configuration of your device using these tools, which is an interesting approach as well. But usually, when working with embedded systems, you have dedicated hardware built for a certain use case. You have a custom kernel, with custom kernel options you depend on, and you write applications that integrate with other components in the system. So it's not a single binary anymore; it's starting to become a full system that you're developing.
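Going back to the configuration-management approach for a moment, scripting a deployment with one of those tools might look roughly like this minimal Ansible playbook; the inventory group, paths, and service name are hypothetical:

```yaml
# deploy-app.yml: push a cross-compiled binary to every board in the
# 'boards' inventory group and restart its service.
- hosts: boards
  become: true
  tasks:
    - name: Install the application binary
      copy:
        src: build/myapp
        dest: /usr/bin/myapp
        mode: "0755"

    - name: Restart the application service
      service:
        name: myapp
        state: restarted
```

This scales across a fleet, but note it only manages files on top of the golden image; it doesn't help you swap out the kernel or the rest of the system, which is the limitation discussed next.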
So testing usually means you have to deploy everything to test reliably; otherwise you're just testing a single binary out of a complete system, so to speak. When you move on to this more system-level testing, when your application grows and integrates with other components in the system, there are other ways of doing it. One is network booting, which is pretty common in continuous integration systems as well. What you do is provide all the resources necessary to boot your device over the network: on boot, the device fetches the Linux kernel, the device tree, and the root file system. That means you just need to reboot the device to update the software. There is some complexity involved in setting up network booting, but it's also easily extended to multiple devices. If you have ten devices that you want to deploy the same software to, maybe devices that interact with each other, this is a way of doing that. For network booting it's pretty common to use PXELINUX, or PXE boot in U-Boot if you're using U-Boot as your bootloader; PXE boot in U-Boot is almost a derivative of PXELINUX. What you're using underneath is TFTP: you have a TFTP server somewhere, which can be your laptop or desktop, or a central system where you put the build artifacts you want to deploy to your devices. This is what a typical PXE boot configuration file looks like. There is more complexity in setting it up, but what it means is that every time the device reboots, it fetches its software from the network, which makes deployment somewhat easier. You can also script it yourself in the U-Boot (Hush) shell: just TFTP the kernel image and the device tree, and then set up an NFS mount of the root file system. Again, this is commonly used in continuous integration systems.
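For illustration, a PXE configuration file and the equivalent hand-rolled U-Boot commands might look roughly like this; file names, the server IP, and load addresses are placeholders and board-specific:

```conf
# pxelinux.cfg/default -- fetched by the device over TFTP
default linux
label linux
    kernel zImage
    fdt board.dtb
    append root=/dev/nfs nfsroot=192.168.1.10:/srv/nfs/rootfs ip=dhcp

# Or scripted by hand in the U-Boot (Hush) shell:
#   dhcp
#   tftp ${kernel_addr_r} 192.168.1.10:zImage
#   tftp ${fdt_addr_r} 192.168.1.10:board.dtb
#   setenv bootargs root=/dev/nfs nfsroot=192.168.1.10:/srv/nfs/rootfs ip=dhcp
#   bootz ${kernel_addr_r} - ${fdt_addr_r}
```

Either way, dropping new build artifacts into the TFTP/NFS directories and rebooting the board is the whole deployment step.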
You can also set this up for your development workflow, to optimize the time you spend waiting for software to be transferred. And finally, similar to network booting, when you need to test a complete system, both the Linux kernel and your application binaries and configuration files, with custom configuration applied all across the device: there are a lot of update solutions nowadays. I work with one of them, Mender, but there's RAUC, there's SWUpdate, and there are more. If you integrate one of these early in the development process, you can use it as a development tool to deploy your builds to devices reliably, including to many devices if that's what you're testing on. One benefit is that if you're deploying the same system in production for doing software updates, you've tested it throughout the whole development process; using the same tool builds confidence that everything works and is appropriately configured. Some of these solutions use image-based updates, which is really nice when doing testing, because image-based means your devices are stateless. When you flash one device, or ten devices, with the same image, you're pretty sure what software you're running on them; there's no risk that someone has logged into a device and changed a configuration file that affects your tests. Another benefit is that these update solutions generally have robustness built in, so you're not bricking devices. That can be quite tedious: you're doing development work, you do an update, you break your device, and you have to pull out all your tools to reflash it. So that also helps ease the development flow, and it fits well into developer workflows.
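As a sketch of what driving such a tool from the command line can look like, here is a Mender-style standalone deployment; the board name, artifact name, and server address are made up, and the exact client invocation varies between Mender versions:

```shell
# On the build host: expose the build output (including the .mender
# artifact produced by the Yocto integration) on the local network.
cd tmp/deploy/images/myboard
python3 -m http.server 8000

# On the device: fetch and apply the artifact in standalone mode,
# then reboot into the updated copy of the system.
mender install http://192.168.1.10:8000/core-image-base-myboard.mender
reboot
```

The same artifact can instead be uploaded to a management server and rolled out to a whole fleet, which is the CI-driven variant described next.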
With build system integration, you have something that produces an image or binaries that you need to transfer to your device: build artifacts. And if you have an update solution that can do over-the-air updates, you generally have a management server as a central place, and you can utilize this in your development cycles to update, and test on, larger fleets reliably. Since I work with Mender, I use Mender a lot as a development tool as well. Mender uses an A/B update strategy, which means you have two full copies of the operating system and you do image-based updates; you always update the whole system. Mender works in a standalone command-line mode, so I can expose my artifacts on the local network, and if I have terminal access to my device I can apply an update with a simple command. But I can also integrate the Mender server with my CI (continuous integration) loop, so that when I push changes they're automatically built, and they can be automatically deployed to certain devices as well. This is generally what the command looks like on the client terminal to fetch an artifact and apply it. I think that's it; I went through that pretty fast. Any questions?

Q: On the slide where you were talking about package feeds with Yocto, how broadly applicable are those packages? Are they very specific to the particular board you built for, or can they be used on other kinds of platforms, say any ARM core?

A: The recipes generally specify the architecture: if a recipe says this is an ARM-specific package, it's going to end up in an ARM-specific folder in the package feed, so if you're running x86 you won't get access to it. That's one of the benefits of using that distro-feed-configs recipe.
It generates configuration files based on the architecture you're building your image for. So you can have multiple architectures, both ARM and x86 feeds, in the same folder, as long as the configuration files point to the correct locations.

Q: Thank you. Have you considered using Yocto's testimage feature to create a test/integration image for Mender?

A: We're not using that. Our testing framework is based on Python, pytest, and Fabric, I think. But I have used the Yocto test framework in other projects, and it's pretty nice and fairly easy to set up.

Q: Is your framework available to users of Mender, or is it just for developing Mender?

A: All our QA is open source. It's only the Jenkins server that runs the builds that's not publicly accessible; all the scripts we're using are publicly available.

Q: Hi. You're promoting packages; is this just for development, with A/B images for production?

A: Yes, package managers are generally really good while developing, but not something I would use for deploying.

Q: So for production you'd prefer an A/B image?

A: I prefer image-based updates, generally.

Q: Could you please explain at what level Mender works? Is it an extension to U-Boot, or your bootloader, or is it a user-space application?

A: It's everything you mentioned. There's a client in user space, the Mender client, and there's a management server. But the Mender client requires U-Boot integration, because you need control of the boot command: which partition am I booting right now? That decision happens in U-Boot.

Okay, if there are no more questions: thank you for listening, and have a great lunch.