Morning everyone. Welcome to this early session. So today we are going to talk about building embedded Debian images with ISAR. Our talk will give the motivation for this work, obviously, and introduce what ISAR is. We have a short demo part as well. And we would like to talk about where we are moving, what needs to be done, and primarily we're interested in having a discussion around our approach, your ideas, and your comments on this.

So first of all, who are we? My name is Baurzhan. I work at ilbers GmbH in Munich, Germany. I am a software developer for Linux and other systems. We do drivers, applications, virtualization. And for fun, I have been a Linux user since 1.2.13 and a Debian user since 4.0, and I like low-level stuff.

Yeah, my name is Jan Kiszka. I've been working for 10 years now at Siemens Corporate Technology. We are doing in-house consultancy and development for embedded Linux. So I'm kind of proud to say that I haven't written any significant line of non-free software in these 10 years. I'm mostly involved in upstream activities regarding the kernel, virtualization, real-time, and these days also base systems and distributions. For fun, I've been doing Linux for 20 years now. I started the hard way, with low-level stuff as well. And yeah, I also enjoy hacking on hardware, like these two devices here.

So Linux has been used in industrial scenarios, embedded scenarios, for quite a while. I don't want to go into all the details, but as a company, Siemens, we're active in all these areas, and as a community of industrial users we are interested in all these kinds of devices. They are running various Linux versions, various Linux variants, and, well, for example, in the left column, the device in the middle, that's actually also running Debian.

So these embedded devices, what makes them special compared to, well, your desktop system or the server, the cloud? First of all, there's a variety of hardware, even more than we have on standard systems. But that alone doesn't make them special. It's also about the customizations that we apply on these systems, for example to the boot process, for various reasons. We still have special kernels: although we are all happy to use upstream kernels, there's always some need to adjust things. Optimizations are often done to reduce the image size or the number of packages for these devices, or to speed certain things up. That sometimes also leads to deviating packages, so you can't take identically what is available from upstream distributions. And last but not least, hardware is always special, and you have to do some interesting tricks to make it work in practice, so there's also a need for workarounds.

What also makes our domain special is the long life of the systems we deploy. Most of the devices we support start with 10 years of support. The complete installations sometimes live up to half a century. If you're building a power plant, you don't throw it away after a couple of years because the software is no longer up to date. Of course, the devices themselves change over the lifetime of these long-living systems, but the core you want to keep for a long time, and that means special measures to maintain them at all. At the same time, if you look at embedded devices today, they're becoming more and more the same as what we have in the enterprise world. There's a consolidation of hardware going on.
So for us, for example, it's mostly about x86 and ARM these days, modern ARM architectures. ARM is becoming ever more powerful, so that's no longer an issue; the need for optimization goes down a little bit. You have more CPU power, more RAM, more storage available. At the same time, we're being asked to introduce more standard features into the embedded devices as well. Connectivity is a big topic, IoT. That comes, of course, with the need for even more security, more hardening. We have a high demand for new, modern languages on these devices, and last but not least, there are all these nice virtualization and container features these days that should be available just the same on embedded devices.

So if you have an embedded device, the question for us is always how to get to a bootable image. It's about generating an image that in the end runs on the device as it is: you're not installing anything, you're generating this image. The standard answer today for getting to a bootable image is to build it yourself, from scratch. This is what these two projects are doing, and there are way more than that, which basically create their own distribution from source on demand, with all the customizations included. But the question is, do we really need to do 100% from-scratch generation just because we need maybe 1% of customization? If you look at the software stack, it's something like that: maybe it's 10%, maybe it's even less than 1%, depending on the device. But you are actually paying quite a bit if you do a complete rebuild just for this customization. Why not just take Debian for it?

So why Debian specifically? First of all, it's a well-tested, major binary package feed, which is very interesting for us, and it has broad hardware support, even though we need less of that these days. It scales up and down, meaning we can apply the same thing on high-end devices as on low-end devices. It's long-term maintained, which is very important. It's still not maintained as long as we would need, but it's on the right way. It's also very interesting for us because we are shipping those devices: we become a distributor of free software, and therefore the licenses included in the packages need to be very well documented. That's something of high importance for Debian as well, so we are on the same page there. And we already have a history of using Debian for products, so it's just about extending it a bit more.

And that's where our requirements come in: how do we want to build these images for our devices? As I said, we need ready-to-run images. There is basically no installation; this is just about getting the image that is flashed as-is onto the device at the end. They need to be reproducible, at least at the image level, though of course we are also very excited to follow the great progress on binary-reproducible packages as well. And we're not just shipping individual devices; we are often shipping a large variety of variations of these devices. So there's a lot of commonality in how we describe the images for our systems, and that needs to be supported by the tooling for image generation, so that we can layer things.
Like you see on the right side: there are common features that many devices get, and there are some specializations, and you don't want to describe everything from scratch for each and every device. And since quite a few of us will still also use the from-source approach that's the industry standard, there should be a smooth way for developers, for engineers, to move between both worlds. So we want to share the concepts, and ideally the artifacts, with the from-source systems.

Yeah, that's where our Integration System for Automated Root filesystem generation comes into play. So what is that? First of all, ISAR uses the well-known embedded build automation tool called BitBake. This tool combines the building of packages where that is still needed - there are cases of new packages, custom packages - the bootstrapping of the system, and the customizations, all in one thing. It supports the layering concept by being able to append, prepend, override, and replace descriptions in your configuration, depending on where you are in the layer stack. It also parallelizes the build of multiple targets, which is important if you don't just build for one device. So ISAR - why that strange acronym on the slide? The Isar is well known to people who have been in Munich, down at the riverside: the Isar is the river in Munich, and that's where the name comes from.

So how does it work? We first of all have the upstream Debian repositories, and from those - this is the case of an ARM device - we create a buildchroot environment. Within this environment we can then generate our custom packages, usually the so-called business logic, your own proprietary software on top. That is done in the second step, and you get a normal Debian package out of it - though of course nothing you could upload. Furthermore, you then create the root file system itself, with the normal means of debootstrap, or multistrap in our case right now. You also add further artifacts like the bootloader or the kernel, either from source or from existing Debian packages. And you install your custom packages on top of that, and you have your bootable system for the embedded device.

Currently we primarily focus on native compilation, because that is the standard way of doing things, and we don't want to deviate much from the standard, because that creates interesting effects. But of course it's interesting that Debian is moving towards cross-building again; that will be beneficial for us as well. Right now, builds are done within the QEMU user-space emulation environment. That is of course not very fast if you have to build a lot, but, well, we have powerful machines, so it's still okay. And the buildchroot we have up there is also our SDK for developers to write applications in - a devshell/SDK combination. So with that, I hand over to Baurzhan for a demo and further insights.

Okay, thank you. We need to switch the device, unfortunately. So first of all, a question: who has already worked with Yocto? Oh yeah, okay, quite a number. Let me do a short introduction - nothing to see yet; yes, now it's better - or rather a short tour of how ISAR works. I'm currently in the ISAR directory, and we see here the main directory structure. bitbake is the tool that executes our customization scripts.
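For orientation, the top-level layout as described in the demo looks roughly like this (a sketch reconstructed from the talk; details vary between ISAR versions):

    isar/
      bitbake/      # the BitBake build automation tool itself
      meta/         # ISAR core: classes and recipes shared by all projects
      meta-isar/    # example/template layer to copy and customize per product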
Basically, Debian provides the build infrastructure, the artifacts that come out of it, and all the tools needed to generate the file system and the target image. However, we need some glue tooling for scripting, and we would like to do this in a maintainable and structured way. We started with plain shell scripts, but as you see, we needed to handle product lines - multiple products that share code but also have differences - from the very beginning, and those shell scripts of course quickly become unmaintainable. So what BitBake does for us is provide the means to structure individual tasks in recipes, which are then executed in a certain order according to their dependencies.

What we also see here is a meta directory: that's the ISAR core, which provides the core functionality used by all projects. And meta-isar is a template that you can copy into your project, customize, and develop your own product from.

To give you an idea of how it looks: we have tasks that are packed into so-called recipes, which in BitBake have the extension .bb, and we can see four recipes here. buildchroot is the recipe that creates a build chroot for us, in which we build our own packages. hello is a sample package for our sample product, and isar-image-base specifies how to build the complete image.

To get a feeling for what a recipe looks like, we can look at buildchroot. What we see is a number of variable assignments. Some of them have a predefined meaning, like the description, the license, or the package version; some have a more local meaning, in that they are used later in this or other recipes. Here we see, for example, the BUILDCHROOT_PREINSTALL variable, which will be used later to install certain packages into the build chroot. A recipe also allows you to define so-called tasks. They look pretty much like shell functions, and they can be implemented either in shell or in Python. The rest of the recipe looks very much like shell, but strictly speaking it is not shell, it is BitBake syntax. What we see here, do_build, is the standard task of a recipe, defined by BitBake, and what it does, after some preparations, is bootstrap a build environment for building our own packages.

Regarding packages: we have our hello package, and what we see here is the SRC_URI - I changed it to a local file in case the network doesn't work. We see that it doesn't define any actual tasks, and this is intentional, because we want to build several different packages, and building them always just means calling dpkg-buildpackage. So this whole logic is hidden in the dpkg class, which is shared by all package recipes. Classes in BitBake have the extension .bbclass, and if we have a look at the dpkg one, we see a number of tasks that implement certain things: fetch clones the source tree into a local download directory, and unpack copies these sources into the build chroot. Of course, the build chroot already has to exist at this moment, and that is specified in the DEPENDS variable. The whole thing is then built in the do_build task, and the build.sh script more or less changes to the package directory and executes dpkg-buildpackage.
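To make that concrete, here is a minimal sketch of what such a package recipe could look like, reconstructed from the description above (field values are illustrative, not the actual demo file):

    # hello.bb -- illustrative ISAR package recipe (values are made up)
    DESCRIPTION = "Sample application for the demo product"
    LICENSE = "gpl-2.0"
    PV = "1.0"

    # A local tarball instead of a network URI, as in the demo
    SRC_URI = "file://hello.tar.gz"

    # The build chroot must exist before we can build in it
    DEPENDS = "buildchroot"

    # All the real work (fetch, unpack, run dpkg-buildpackage) lives in dpkg.bbclass
    inherit dpkg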
To actually build an image, I have to prepare the execution environment - to set PATH and other variables needed by BitBake - and specify a build directory. So now it creates a build directory with some defaults; they can be customized, or in our case we can use them as-is. We can then create our base image for the amd64 architecture with stretch. So - okay, this was already built previously, so I will remove the build directory and start from scratch. Okay, this can take a while, so whoever is interested, I can show the demo after the presentation. What we can do here is generate the tree of the tasks to be executed, and we can see the actual sequence in which this is done: we have buildchroot creation and target image creation, which can be done in parallel, and after that we have further tasks.

Slides, yes. To give you an idea of how we structure this into different layers: basically, you would like to have repetitive stuff in one place and not copy-paste it into every project. The picture on the right, which you have seen on the previous slides, would be implemented in the following way: we layer the respective recipes in directories, like meta-platform, which carries the platform packages, meta-board for specific board support, and the concrete products - device A, A prime, and B - each in its own directory. In this way we can combine hardware-dependent parts and hardware-independent parts with product-specific parts, and get our final images as a combination of those. These directories can live in one repository or in separate repositories. For example, if you have some library or hardware BSP vendor, they could have their own repository that you pull manually or using a tool like repo or kas.

And this is the way we handle the variability: we have the machine layer, the distro layer, and the application layer. For example, meta-isar includes a sample implementation for the Raspberry Pi, and we can see the settings in the respective configuration file. What is interesting here is perhaps the image type: it specifies the partition layout, the sizes, and so on for this particular device. Then the distribution: we can have products that use different Debian versions, and this is reflected in the distribution configuration file. And of course, any product uses its own applications. This is handled at the image level: we provide our packages and install them into the image, and we can specify that with IMAGE_INSTALL, for example. So as we have seen, we have a directory with the application recipe - this is basically where we build our package - and this application recipe is later included in IMAGE_INSTALL. This is how you add your own applications in ISAR: you build the package in a recipe and list it in IMAGE_INSTALL.

In this way we can provide any kind of customization: updating the kernel, updating the bootloader, or, for example, creating additional users and so on. The goal here is to have a state of the system that is completely described by the list of packages. Of course, these customizations can also be done in an ad hoc manner; this is a trade-off that everyone can decide to their own taste. My approach is always to package things into a proper package, so that the package list and versions alone specify the intended state of the system, and if some files get overwritten, I can compare them with the package and see how the actual system deviates from what I intended.
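Stock dpkg can already do that comparison; a minimal sketch, run on the target or inside the image root:

    # The package list plus versions describes the intended system state
    dpkg-query -W -f='${Package} ${Version}\n' > intended-state.txt

    # Verify installed files against dpkg's recorded checksums to spot
    # ad hoc modifications (no output means no deviations found)
    dpkg --verify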
And regarding ad hoc customizations: multistrap, which we are currently using, provides a number of hooks - setup and configure scripts - and it's possible to do such things in a setup script.

So, a couple of words regarding future work. We would like to do things in a more Debian way. Currently, what we do is build packages, copy them into the target file system, and install them with dpkg. Of course, this has the problem that we have to install all runtime dependencies of those packages in advance, or later in some way. The right answer to this is using apt, and this is what we have in our development branch: basically, we build our custom packages into an apt repository and install them from there. This is expected to be merged into master soon.

What is next? Internally, we use Debian source packages described by .dsc files. This is not yet released; we have to move it from our internal repository to the public one. And we rely on sudo for running debootstrap and other tasks. This works well, however it creates problems in environments where the servers are managed, so we are working on removing this requirement. We have tried to - yeah, I see Neil shaking his head; let's discuss this afterwards. But as far as I know, we have a prototype implementation of that. We did try fakeroot, and it didn't work well for us, so it would be interesting to discuss this. Okay, Neil says "please don't do this", and he knows what he's talking about, so I'm glad to have a discussion about this afterwards.

As you have also seen, we specify dependencies in the recipes: if my package build-depends on some other library or package, then I have to specify this in the BitBake recipe. However, this is of course a duplication, because debian/control and other files already provide this information. My ideal solution would be a BitBake backend that could directly work with the Debian meta information. Whether this is possible, we have to see, or we look for other solutions - if there are any suggestions, I will be glad to brainstorm on that.

What we do every day is provide customizations, and the current solution for providing a customized version of a package is to clone that package into our own Git repository, or create a branch of it, make our two-line change, create a tag, and then use this in a recipe. This is quite a bit of overhead for such a small task, and what we are aiming at is easy source package patching, so that we can provide a source package that is quickly modified and archived in the apt repository.

And one of the last things is version pinning. Debian basically provides version pinning, and this is good. Our question is how we can do this in a way that doesn't fork the complete distribution, and in a more or less maintainable way.
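For reference, the apt pinning mechanism this refers to is a preferences entry; a minimal sketch (package name and version are placeholders):

    Explanation: hold our validated custom build of the hello package
    Package: hello
    Pin: version 1.0-1
    Pin-Priority: 1001

A priority above 1000 makes apt keep the pinned version even if that means a downgrade, while unpinned packages keep tracking the configured Debian suites, which is exactly the "don't fork the whole distribution" property.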
So now, to summarize - I already see a lineup for the questions. It's very important for us to emphasize that ISAR, although you may see it as an embedded BitBake thing, is Debian: there's a shim on top, but at the core the majority is Debian. That's very important for us. We're not completely married to this approach in all details; we see it as a tool to increase the sharing of concepts and implementation between communities, between the embedded communities and the Debian communities. Tasks like image generation are done the same way independent of where the binaries for the images come from; optimizations, customizations, these things are shareable to a certain degree, as are templates for them. We already look into aspects where we can contribute to Debian where we have common requirements, specifically packaging improvements - topics like extra CFLAGS or cross-building are of common interest for us. We will possibly also improve the tools we use; we see deficits, and while we currently don't have a hit list for this, it may come. In the end, if we are using it, we are interested in strengthening the community, in attracting more embedded developers to Debian. There's huge potential right now, because a lot of people are not very happy with how the from-source image generation works, and that is also visible if you talk to some of our suppliers, the hardware suppliers: they all provide their ecosystems for Yocto and co., but quite a few of them are also looking at alternatives like Debian, and possibly this could be a tool, a mechanism, to open things up for them as well. So with that, thank you for your attention, and I'm looking forward to the questions and your opinions on this.

Where do I start? I did all this about seven years ago, all the way up from BitBake, all the way through fakeroot, and that's where multistrap came from. Multistrap was actually part of a system that predates this, and which we burned in fire with a lot of celebration when we finally replaced it. Everything you're actually doing with BitBake can and should be done with standard Debian tools. gbp buildpackage, and that can work with sbuild, straight from your Git source tree with the debian directory: whichever tag or branch you're on, gbp buildpackage can build your package from there, without pre-building everything. You run debootstrap once - you may want one for stretch, one for jessie, one for buster, one for sid - leave them alone, stick them into schroots, and use them when you need them, and then you do the rest of the work on top of that, just by running inside the schroot. That also allows you to have multiple repositories without running multistrap all the time. Because the problem I see with this - and again, this is what we had in our own version of the same system - is that you're running multistrap too often. You're rebuilding stuff all the time, and that harks back to the BitBake methodology; BitBake is built on that kind of principle of rebuilding a lot, all the time.

Getting rid of sudo is a red herring. I know there are issues with certain managed servers when you're trying to do that, but what you actually need to be thinking about is using virtualization on those servers, so that you've got a virtual environment and a root file system that is disposable. If something goes wrong, you don't blow away the main system. That's the big problem with multistrap, and that was the big problem with the system we had that tried to run as a local user and then use sudo. When things go wrong - and trust me, they will with this kind of system - the first thing that happens is that something like multistrap blows away the /etc directory of the system it's running on. And the reason it picks /etc first is because that's what debootstrap creates first, so it's the lowest inode. And that's the worst possible way to lose a system: suddenly all your configuration is gone, and trust me, I've rebuilt several systems from that, and it's a lot of work. So stick all the building in virtualization; then you don't need to worry about getting rid of sudo, because you've got a virtual root - a root in a container, a root in a KVM - and everything works really nicely inside it.
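As a rough sketch of the flow described here - bootstrap a suite once into an schroot, then build each package from Git with sbuild (suite, paths, and mirror are illustrative):

    # One-time per suite: create an schroot-managed build chroot
    sbuild-createchroot stretch /srv/chroot/stretch-amd64 \
        http://deb.debian.org/debian

    # Per package, from the packaging branch or tag in Git:
    cd my-package && gbp buildpackage --git-builder=sbuild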
Basically, what I'm recommending is that you follow the way Debian builds its own packages. You build natively; don't worry about cross-compilation except when you actually have to, when you rebuild a kernel yourself and that kind of thing. You build in a virtual environment that is safe, that you can throw away and retry. You build once, and you use a repository tool like reprepro - there are numerous ways of building a Debian archive. Don't rely on flat file systems or just your own directories; use tools with a database, with a way of handling proper directories and Release files, signing the Release files, building suites and pools and dists and all this kind of thing - you want all of it in the chain. And you can do all of that even in an embedded setup; we've done that, it just needs a little bit of scripting to tie existing bits together. Then at the end you build your image just by putting these together: you debootstrap the root once, you add the packages using apt, just by chrooting in and specifying the apt sources you want, and you can have exactly what you need from whichever repository you've got everything organized in - even a local one, if you're making local changes. Once you've got that, you just dump it into an image and you're away. Version pinning then becomes trivial, because you've got a virtual environment and you've got all the tools to do that inside it.

The biggest worry I've got with things like ISAR is that you tend to end up in a situation where you've ruled out the possibility of getting security updates from the usual source, and you have to go around and rebuild everything from scratch. That's a big cost. And it's a worry, because you're potentially customizing one step too far with some of the core packages; how much of a worry depends a lot on exactly what you're doing with the lower-level packages. But if you take the standard debootstrap and then add your own stuff on top, you will always be able to add security updates as they come from Debian. And on the kinds of long-life machines you're actually putting this on, long-term security, over-the-years upgrades, that kind of stuff is going to become critical. We can't afford to avoid or just ignore the security problems, or the need to keep these up to date; no matter where they are, they need all of the updates. So - I realize that's a lot, and it may sound as if it's pulling everything apart, but actually you can still use a lot of the glue that you've got there, just by switching things out bit by bit, putting in something that's virtualized and running sbuild, or any other way of building source packages from Git.
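The archive side of that suggestion can be as small as a reprepro configuration plus one command (codename, key ID, and file name are placeholders):

    # conf/distributions in the archive's base directory
    Codename: stretch
    Components: main
    Architectures: amd64 armhf source
    SignWith: ABCD1234

    # Add a locally built package; reprepro maintains pool/, dists/
    # and signed Release files for you
    reprepro -b /srv/repo includedeb stretch hello_1.0-1_amd64.deb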
Okay, thank you for your comments; there were quite a number of topics, so let me go through them. The easier one is the virtual machine: we have this requirement, and we are considering what to do in this direction. It is less of a problem, because currently the chroot solves our problem, but a virtual machine definitely has its advantages.

Regarding gbp buildpackage: well, we already use dpkg-buildpackage, because not all of our sources are directly in Git; some are packaged as source packages and tarballs. That is why we use this generic tool, but this part is already done in the Debian way.

Regarding rebuilding: the apt branch I was talking about, the development branch, already contains the fix for the rebuilding problem. Our intention is definitely not to do it the Yocto way and rebuild everything every time. It happened to be like this because we used stock Debian, and our applications were modified from build to build anyway, so we didn't lose any time. However, right now, as we move more into the infrastructure direction and want to provide infrastructure packages to other departments that are going to use this, we make sure we pack things into the apt repository. This already works in the development branch: we build packages, use reprepro to create the pool and all the needed Packages files et cetera, and then install from there with apt, with all dependencies and the most up-to-date packages. That already gets us a fair way along the path you describe.

Yeah, and we use BitBake not to replace what is already in Debian; it is just a means of structuring the customizations. We have many customizations, and we started with a huge shell script, which is of course unmaintainable. You see here basically three recipes that provide the build system, the base system, and the application. What you don't see here: in a real project, this part stays pretty small, because Debian handles it perfectly, I would say; that was the reason why we chose Debian in this regard. However, on top of that we have 20, 30, 50 recipes that do certain customizations. So BitBake is a glue tool that allows us to structure this, because meta-layering is something proven in use in other communities, and we want to reuse it. We don't want to invent a new tool for that; two years ago, Rico already presented a multitude of tools that are available for this. So we are using one of the existing tools, and it does its job efficiently: it can parallelize tasks where they can be parallelized, and so on.

Yeah, so, on the recipes type of thing: if you follow through and put all of these into actual packages - all your little customizations become dedicated packages - then the recipe just becomes a list of package names. And you can extend that further, so the list is coalesced into a set of what Debian still calls tasks, and these are simple meta packages. You can have a meta package with the product name that has all the dependencies - it can specify versions - you pass that to apt, and apt will do all of the dependency resolution for you. So that's something worth thinking about: moving the recipes into packages themselves, sticking those packages into the repository, and then saying: right, debootstrap, chroot into the root, apt install product-A1, done.

Yes, yes, this is what we're aiming for. So thank you for your feedback; I think it was really helpful.
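That meta-package idea can be tried out with equivs; a sketch, with all package names hypothetical:

    # product-a1.control -- input for equivs-build
    Package: product-a1
    Version: 1.0
    Depends: hello (= 1.0-1), openssh-server
    Description: product A1 image contents
     Meta package pulling in everything device A1 needs.

    # Build the .deb and put it into the archive; an image then becomes
    # debootstrap plus "apt install product-a1"
    equivs-build product-a1.control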
So, coming back to you wanting to get rid of your current use of sudo: I kind of agree with Neil that fakeroot and friends are a trap, having tried them for, like, Flatpak and OSTree things; they work, except when they don't. I've had quite good success, somewhat counter-intuitively, with using the virtualization backends of autopkgtest, which provide a fairly abstract interface for doing interactive things in an environment where, over there, you are root. That actually works quite well as a way of having the business logic run normally, as your user, and having the bits of the work that need root happen in a virtual machine or a container or whatever. So that might be something worth considering.

Yeah, thank you. And thank you again, everyone, for the feedback. I think we'll have to follow up on these questions offline. Thank you.