So, welcome everyone to my talk about yet another build system for embedded systems. This talk is going to be a mixture of looking at it from a user perspective, which we did and are still doing, but also a little bit of promoting it, because we are of course interested in this thing working out.

This is roughly the structure of my talk. First of all I am going to explain why we are doing yet another build system for embedded devices. Then I would like to introduce you to this system called Isar — you will learn later where the name comes from — present the first steps to get it running, and then look into the customizations you usually do when you build your own embedded system and how these can be done in Isar. From that I will derive some to-dos for the project, give an outlook and summarize the talk.

If you look at how embedded systems are built from their inputs these days, there are basically two directions. One is the roll-your-own approach, building everything from source, like OpenEmbedded does, like Yocto does, like Buildroot, and you can extend the list for quite a while. That usually implies cross-building if you target a different architecture, but even on the same architecture you may need to cross-build because your toolchain may differ, so often a toolchain bootstrap is included. It gives you high flexibility regarding the customization of your system, of course. But with the increasingly complex systems we have these days, which come closer and closer to desktop and server installations, production times also go up significantly, because you build everything from source code. And one thing people tend to forget — well, not the experts — is that you still have a certain dependency on the host system your build runs on, so this approach does not solve that problem either.

Then there are more and more distribution-based systems for embedded devices these days. Their approach is basically to take what we already have, the ecosystems from the desktop and server area, use the distributions established there and try to fit them into embedded devices. That is getting easier with the standardization of the hardware. It is actually not that new: if you look at the list here, there are many of these systems, and some of them, like Elbe, have been around for a while. The distributions themselves also realize these days that they could scale down; if you think of IoT scenarios, there is more and more interest in enabling these distributions to run on smaller devices. How small may vary, but there is a trend, so to say. The idea is to install from pre-built binaries rather than building everything from source. That may lead to larger images, and it may mean slower boot times unless you apply certain customizations again, so you do some kind of post-processing on the normal distribution installation.

Between both there is, interestingly, a hybrid approach, represented by meta-debian here. Maybe there are others, but this is what I am aware of. It uses distribution packages in their source form, but rebuilds them using a classic from-source distribution build system, Yocto in this case.
Well, so they try to combine the benefits of both sides, but of course they also have some downsides. It means, for example, that you have to write your own recipes to import these sources.

If you look at what we really need, I would say this is a list of general requirements on embedded system builders. In the end, you want a ready-to-use image for your device, be it on an SD card or flashable directly into the internal flash or something like that. You want this done in one step, more or less; at least the output should be ready, unlike with some server systems where you still have to do post-processing on the device. It should be something you can replicate easily, deploy to your embedded device in the production phase and be done with it.

It has to be reproducible, naturally. If you are doing this professionally, you have to be able to replicate the results, and to reproduce them after a longer period. For us that can often be a decade or even longer over which you have to be able to reproduce what you did in the past.

You have to integrate further sources, not just the distribution. That is real life: you have your business logic, your own application, possibly running on this, and you have third-party code coming in. Not everything comes from this one source.

And, looking at our domain, we usually do not do just one device, we do many. They are similar, they have some overlap, so you have to deal with that similarity; you do not want to do everything from scratch over and over again. Product-line development, reusable components and configuration artifacts — that is one goal for this. And in the end, of course, you want a quick bootstrap, a quick start for the beginners among your developers who have to deal with it, but it also has to be powerful and extensible for the experts building a really complex product out of it.

We at Siemens specifically have some additional requirements, but I guess they are shared by other companies as well. It turned out that, in the long run, we really would like to avoid building everything from source. There is already a pool of pre-built packages available, and we want to reuse them as far as possible, because one of the things you lose if you do everything from source yourself is the QA that the upstream distribution already did on its packages. You can easily lose that if you vary just a little bit in your toolchain, your build process and so on. You basically come up with your own distribution, yes, and that is an advantage, but it is often also a disadvantage, at least from the QA perspective.

And as our systems are getting increasingly complex, the requirements and features they include, the number of packages and the dependencies they pull in also increase. A simple system can already contain a few hundred packages, or even up to some thousand packages. This keeps getting bigger: if you look at the kind of components you pull in from the server or mobile development world, it just keeps adding up, and eventually you have longer and longer production times for your system.
Furthermore, as we are in markets where we have to support the products for a very long time — as I mentioned, ten years is just the lower bound, so to say — we want to use more of the established long-term maintenance processes that typically exist with distributions, and which are not that common yet for the source-based distribution builders, often because they cannot really handle all the variation that you could build out of them.

And last but not least, also very important for us, is OSS license compliance. That means you follow the obligations the open source licenses put on you, and that of course means you first have to understand which licenses are actually involved. Often that is not trivial: just looking at the package and its top-level COPYING file may reflect the truth, but it may also be only part of the truth. So it is very important to have a source — upstream, your distribution, wherever it comes from — where this kind of information has already been gathered very carefully, so you can build on it and do not have to redo the work.

So if we want to build on top of a distribution, the question is of course which one to pick. That is a bit like asking which editor to pick; we can discuss these things for a long time, but let's say we picked one for the time being, and that is Debian. That does not mean that others are bad or cannot fulfill all these requirements as well; this is simply the way to go for now.

Why Debian? First of all, it is a large, community-driven ecosystem, very established, proven to be there and to stay there. It is increasingly popular in embedded as well: if you look at Raspbian, but also others like the Armbian flavors of Debian, they become more and more popular on these embedded devices. Also not to underestimate: we have prior experience with Debian. We are shipping embedded products where Debian is included. That does not mean all our Linux products are Debian-based, but some of them are, so this apparently works, even without all the nice features on top like a standardized build system for these kinds of devices. They all do their image production in a way that is efficient for the specific product, but always a little bit differently.

Another advantage of Debian is long-term support. This is of course something other distributions have as well — not all, but many. Very interesting, as I mentioned before, is the strict license checking that Debian implies. Debian does it because they want to ensure they only include real free software, nothing which violates these goals. That implies they have to check the licenses and find out whether there are inconsistencies or packages with unacceptable licenses. So they do work which is valuable for us as well, from the perspective that we want to build on top of a well-worked-out license description of a package.

And last but not least, Debian can scale down to small sizes — maybe not as small as a build-your-own-Linux-from-scratch system — and it can also scale up. If your embedded system gets bigger and gains features, you basically grow with the packaging system that you have.

So this is where the Isar project comes into play.
Isar is actually a new project, released around October last year, but it has a longer history. The history starts with something which was called SLIND — depending on your point of view, the Siemens or the small Linux distribution — a Debian-based attempt at that time to get Linux from a distribution source into embedded devices. Colleagues worked on that for quite a while, even before I joined Siemens. SLIND made it into some products, but eventually the Debian community moved faster than what we were able to keep up with, so SLIND basically faded out. Still, it remained present in one product series, and it evolved a little, also regarding how the image was produced: starting from a script in the early days, later on someone implemented a BitBake layer on top of it for those specific products. At some point, as SLIND — a cross-building version of Debian — became less interesting because of the maintenance effort, this product series switched over to pure Debian, but kept the BitBake part.

And finally, in the last years, when we as a central department of Siemens started to look into options for getting Debian integrated into more than one Siemens device or product series, we got in touch again with the people who did the early stages of this development. That is a company called ilbers, who were doing this kind of development for a specific Siemens division, and they said: why wait, we already have something which is conceptually interesting for you, we just have to make it open source. That is basically the point when we came together and said, okay, let's try to evolve this product-specific development into something which could be useful as an open source project and could be shared with others. Of course, that means a lot of things have to be done a little differently, because so far they were done for a specific purpose and not for a generic use case. So we started from the open source release, which was done technically by ilbers, and it was called Isar.

So what is Isar? It stands for "Integration System for Automated Root filesystem generation", and you can already see that the acronym does not come out of that entirely naturally. It is also a nice place to have a barbecue in Munich, down by the river banks: the Isar is the river flowing through Munich, and that is basically where the name comes from.

Isar tries to combine the best of three worlds: a Debian-based system that delivers all the packages you want to include, or at least most of them; an integration tool, BitBake, which is well established for building distributions and highly flexible, as we will see later; plus the way Yocto structures the description of embedded systems and enables certain workflows. So it pulls these things together and builds something useful out of them.

How does that look, using an ARM target as an example? First of all there is the upstream Debian repository, and out of it we build a buildchroot.
In this buildchroot we are then able to build the further elements of your target device — specifically those parts which do not come from binary packages but actually have to be built. In this case that is a "hello", your example application, but it can of course be more complex. This comes in as an input, for example a git repository or another source repository, gets built in this environment, and a Debian package is generated out of it — basically filling the gap that upstream Debian does not provide.

We also use Debian tooling to create the root filesystem for the target; that is multistrap in this case. It pulls together the standard root filesystem parts from the binary packages plus those packages which are custom-made. For typical embedded systems, besides the business logic, that is maybe a bootloader and most often the kernel. These things are then put together and installed, creating a root filesystem image. And last but not least in this chain there is the generation of the bootable image; that is also part of the Isar logic, and at the end of this production chain we end up with a directly bootable image.

If you want to try out Isar, these are the first steps to get something running, in this case on a QEMU machine; a concrete command sequence is sketched below. It requires, first of all, a Debian build environment as the host environment — that can be the host itself or a virtual machine. Clone the repository, it is on GitHub, and then bootstrap your environment, similar to what you do in OpenEmbedded. Then fire up the build process with BitBake, specifying the image you want to build. With the multiconfig feature you can also specify on the command line the machine you are building for, or you do it the normal way via configuration files. And for this demo, start up a QEMU emulator where this image is then booted.

That is the one case. There is also a case for testing on physical hardware; this was a Raspberry Pi in this case. It uses the Raspbian repository directly as its source, not plain Debian. Same approach basically, just a different target to build, and then you can write directly to an SD card and have your bootable image.

How does Isar look internally, the top-level structure? If you check it out, you basically find these folders and files in the repository. First of all there is bitbake, the well-known tool, maintained out of tree, so we just copy it in and update it once in a while; there are no patches on it, it is just the standard version. Then there is the core layer, meta. There is a template layer, meta-isar, which provides some examples and also enables the bootstrap process I showed on the previous slide. There is a scripts folder with some additional helper scripts, and then there is the environment setup script available at the top level.

If you want to start your own project with this, one way is to just clone the repository as it is, take the meta-isar layer you find there and use it as a template: copy it, modify it, and add what you normally add when building your own device — your own image description with your own list of packages, and possibly also your own board or machine description for your target. That is one way.
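To make those first steps concrete, the demo flow boils down to roughly the following commands. This is a minimal sketch only: the setup script and the multiconfig target name follow the upstream README of that time and may differ in newer versions.

    # on a Debian-based build host; the image generation step currently needs sudo rights
    git clone https://github.com/ilbers/isar.git
    cd isar

    # set up the build directory, analogous to oe-init-build-env in OpenEmbedded
    . ./isar-init-build-env ../build

    # build the demo image for the QEMU ARM machine via the multiconfig feature
    bitbake multiconfig:qemuarm-wheezy:isar-image-base

    # a helper script in scripts/ can then boot the resulting image in QEMU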
The alternative to copying the template layer — and what we are actually targeting for more complex systems — is to create your own repository with your own layer, or your own set of layers, which then simply includes Isar as an upstream source, just as you would do with Yocto, where you keep the unmodified poky repository and add additional layers on top. That may mean you need some configuration management for this set of repositories, like repo or other tools. Then you can use the normal layer mechanisms you possibly know from Yocto. You have different kinds of input: board support package layers, layers with specific libraries coming in from third parties, maybe your own company, division or unit layer which adds features common to all your devices in a series or a department, and then product layers where you describe specifically the differences of the products in their configuration.

Adding your own image: as I said, a first step is to derive from the template recipes that are there. A couple of these demo images are already provided; there is also a variant which adds additional debug packages — also a normal case, that you have a production release image and a debug image. You can basically extend the base image that is available there with your own things, as sketched further below. A typical task is adding packages to the list: the IMAGE_PREINSTALL variable describes packages coming from binary sources, the upstream repositories; if you have self-built packages, you use the IMAGE_INSTALL variable for them.

If you want to add files to the root filesystem of the target, you can add a task which does exactly that. Shown on the right side is a task which copies a host key for the target device out of your layer and drops it into the rootfs. It is as trivial as it looks on the slide. You can also post-process your image, removing stuff — for example the package database, if you do not want it on the target device. There is a post-processing script available, the Debian configscript, where you can add your own things, fork it, or modify it so that you can reuse parts of it; this is basically plain scripting. That way you can shape the target system further to your needs, strip it down, and so on.

The next step might be adding your own applications. There are basically two options. One is to let Isar do the build while the image is being produced. The other, which is also used in the internal projects Isar grew out of, is to have a separate build process for those applications, so that they come into a central repository as ready-made Debian packages and are consumed just as if they were upstream packages. Both are possible, but let us look at the source-based approach now. The typical approach is to Debianize these sources: you add a debian/ folder with the necessary metadata according to the Debian format, and then you basically let Debian do the work for you.
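Two small sketches to make this concrete. Both are illustrations only: the variable semantics are the ones described above, while all package names, paths and task details are assumptions rather than something taken from a real product. First the image side — a recipe derived from the provided base image, adding binary and self-built packages plus a trivial rootfs customization task:

    # my-product-image.bb -- sketch of a derived image recipe
    require recipes-core/images/isar-image-base.bb   # path of the base image assumed

    # packages taken as pre-built binaries from the Debian/Raspbian repositories
    IMAGE_PREINSTALL += "openssh-server"

    # packages built from source by Isar (they have their own recipes)
    IMAGE_INSTALL += "hello"

    # illustrative customization task: drop a prepared host key into the rootfs;
    # the rootfs location variable and task ordering differ between Isar versions
    SRC_URI += "file://ssh_host_rsa_key"

    do_deploy_host_key() {
        install -m 0600 ${WORKDIR}/ssh_host_rsa_key ${IMAGE_ROOTFS}/etc/ssh/
    }
    addtask deploy_host_key after do_rootfs

And second, the kind of recipe the last paragraph talks about — a Debianized application whose source tree already carries the debian/ metadata:

    # hello_1.0.bb -- sketch of a recipe for a Debianized application
    DESCRIPTION = "Example application, built from a source tree with a debian/ folder"

    SRC_URI = "git://github.com/example/hello.git"   # hypothetical repository
    SRCREV  = "a1b2c3d"                              # pin the commit you want to ship

    # the dpkg class wraps the regular Debian package build; the debian/
    # metadata inside the source tree drives everything else
    inherit dpkg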
If you are building natively, you are lucky and do not have to do any kind of cross-building — and in fact Isar currently does not do cross-building at all. Building for the target architecture happens in a QEMU-emulated environment, precisely so that we do not need a cross toolchain at this point. Then, of course, you have to add this additional package to the image; for experiments that can be done in local.conf, or you add it to your own image recipe as a fixed part of it.

On the right you see the example of such an application recipe. It is not overly complex, and this code alone actually describes everything needed for this specific package build: the git repository to pull from and the revision you want to use. And, as you see at the bottom, it inherits the dpkg class, which describes how to build this package the Debian way. It basically wraps the Debian build process, and that's it.

So now you have your own application, but normally what you also customize in your embedded system is the kernel, for various reasons. What to do with the kernel? Similar approach: you can Debianize the kernel, so you add the debian/ folder describing how the kernel is to be built, and let Isar do the work. There is an example branch available on the upstream project, custom_kernel; it only needs a small fix-up of the URI used in the description, and then it pulls in the demonstration kernel and builds it within the Isar build process for a target image. Or, as an alternative, you can build it separately, just like the application: do it outside of the normal image production process and just pull in the already-built Debian package from a repository. That is one way to do it.

But if you want to pull in kernel sources in an unmodified form, another mechanism is possible; that is what I was playing with these days. The idea is to carry the meta files needed to Debianize the kernel inside the recipe layer, and only apply them while preparing the kernel sources for the Debian build. That allows you to pull from unmodified upstream git repositories, or from different branches, stable branches for example. And of course we do not want to do this over and over again for every kernel variant we pull in, so the next step would be to make this pattern reusable for your own kernel. We could possibly carry the pattern over to applications as well: if you have, say, an autotools-based build process for your application, you can write the same set of Debian meta files once for that kind of build process and reuse it as well.

So how does it look if you split this build process into a reusable part? I patched the meta layer a little and added a folder for building Linux kernels. It consists of the set of Debian meta files needed to produce the package the Debian way, and an include file which describes the common part of the build. This include file describes the standard build process: it includes the debian/ folder you see on the left side, and it expects the final recipe to provide a defconfig for the kernel.
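Jumping ahead for a moment, a minimal sketch of what such a leaf recipe could look like — the include name, fetcher details and kernel version here are assumptions for illustration, not the upstream naming:

    # linux-stable_4.13.bb -- sketch only; include name and paths are assumed
    require linux.inc                       # the shared "Debianize a kernel" logic

    SRC_URI = "git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git;branch=linux-4.13.y \
               file://defconfig"            # kernel config shipped next to the recipe
    SRCREV  = "${AUTOREV}"                  # or pin a concrete commit for reproducibility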
Back in the include file: last but not least there is the instruction you see below, which is the step needed to copy over the debian/ folder before the build process starts, and which also copies in the defconfig for the build. And that is basically all you have to do now if you want to use this pattern: you create, in your own layer, a little folder containing the defconfig and a little recipe. The recipe looks like the one shown: it just pulls in the include file and then describes which repository should provide the kernel, which branch, which revision, and you are done. That's it.

There is another example available for customizing the bootloader, in this case U-Boot. It is also available in the custom branch, now pushed upstream, so just look into it. It is basically the same pattern. In this case, what was done upstream is setting up a U-Boot fork which contains just the additional debian/ files, but of course the pattern I just presented for the kernel could be applied here as well — it is just the same, so to say.

The last step before you have a fully bootable image is describing how the image should look: the layout, possibly the partitions, and how these things should be laid out in the final image file. There are good examples available for this, in this case for the Raspberry Pi. First of all, the machine configuration defines what kind of image should be built for this machine: the image type variable defines where to look for the class that describes the image build process. That class file then contains a list of shell commands in an additional task, here the Raspberry Pi SD image generation task. I did not list the commands here, they are a bit longer; basically they create the partition structure in your SD card image file, put the root filesystem image in there, put the bootloader at the right location, and set up the remaining bits. At the end you have the image — the same steps you would otherwise do manually, but encoded in the task. And you add the task in the BitBake way: before the actual build is done, and after the root filesystem has been produced; that is the dependency expression here.

So those are basically the steps if you want to build your own images. While we played with this, I was hoping to do a really complete bootstrap of a new device this way, but I did not quite finish in time for the presentation. We did, of course, try the individual steps and learned some things from that.

Oh, I skipped something, an important point: one thing you see here is that this general description of the image production is of course not specific to Debian. If you produce a Raspberry Pi image, you basically have to do the same steps independent of what sources you are pulling in. So the vision for this part is to reuse it and share it with other projects. For example, we are thinking of reusing wic for the image production step, pulling it in rather than open-coding it manually like it is done right now.
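To make that image-description side a bit more concrete, here is a minimal sketch of the pattern just described; the variable, class and task names are assumptions along the lines of the meta-isar examples of that time, not a definitive interface:

    # conf/machine/rpi.conf (sketch) -- select the image packing class
    IMAGE_TYPE = "rpi-sdimg"

    # classes/rpi-sdimg.bbclass (sketch) -- turn the rootfs into a bootable SD image
    do_rpi_sdimg() {
        # partition the image file, install the bootloader, copy in the root
        # filesystem -- the real class carries the full list of shell commands here
        :
    }
    # run after the root filesystem exists and before the overall build finishes
    addtask rpi_sdimg before do_build after do_rootfs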
So, back to the top: what lessons did we learn from the experiments? First the good things about Isar. The similarity between Yocto/OpenEmbedded and Isar helps a lot when writing the recipes; you do not have to learn a completely new build system if you come over from Yocto, for example. It is the same language, the same structuring into layers — that is a benefit you carry over. I am not an expert in either of them, so for me it was pretty easy to take my little knowledge of Yocto builds and apply it to the Isar build. The recipes, as you have seen, can be very simple, because all the magic is in the back end: it is the distribution, the distribution's build process. The image generation itself, out of the binary packages, is nicely fast; it is done in maybe ten minutes or even less for a complete embedded system — try that with a sufficiently complex source-based build system. And the structure of Isar itself is so far rather simple: the differentiating part is just around 300 lines of code. The non-differentiating part, BitBake, is of course a bit bigger, but it is unchanged, just existing code. If someone packaged BitBake as a distribution package, we could maybe just use it as is and would not have to carry our own copy.

But of course there are also shadow sides, and we found some things there as well. One thing that came up for me when I first looked at it: you currently need root privileges for generating the image. That is not really nice if you have a build server which is supposed to execute this, and you have to set up some sudo rules to enable the build process. This is probably fixable — or you just put the whole thing into a virtual machine, like I did for this — but still, it should be done better, and other systems show that this is technically possible.

There is definitely some room for improvement in the recipe development workflow itself. The first thing is that some recipes are not rebuilt when you change them; that is apparently a BitBake issue, and there may be workarounds or solutions for it in recent BitBake versions that are not applied here yet. So I had to fight a little while playing with the recipes to get things rebuilt: manually deleting some of the stamp files which mark the completed stages and trying to trigger things the right way. There is also no clean task implemented yet. Trivial things, but still, if you do it for the first time it is not the best experience. Well, it is on the to-do list by now.

Another thing you quickly learn: we do not do cross-builds so far. That is nice because you can use the upstream toolchain as is; it is not so nice if you are sitting there waiting for your kernel. These QEMU user-mode based build processes typically take about ten times as long as a native or cross build. That may be OK — and it is actually used this way for building the existing in-house system — it is OK if you have a large server farm and just let it run overnight; it is not OK if you are sitting there as a developer in front of your console waiting for the image. One approach to overcome this is to switch back to cross-building, at least for the kernel; the kernel is nicely cross-buildable, so not a big issue, but it still has to be done. The alternative: ARM servers are coming, some of them are already in the racks, and they of course offer different performance when they have to build for ARM systems, at least natively.

So what is next, what is in the queue for changes? The findings we made during the evaluation are on the to-do list now and should be resolved soon; some were resolved just before this presentation. x86 support is something we are looking into; there are some traces of it already.
Bits for this already exist for QEMU, but a real system is basically waiting on our desk to be bootstrapped this way; we definitely want to add an x86 reference board. Jessie is being enabled: the Jessie version of Debian is enabled in the development branch, it has to be integrated into master, and some things still have to be done there. And the image creation, as I mentioned before, is something that could nicely be shared between existing build systems and Isar. wic itself has its pros and cons, but we are already working on it for the Yocto side, so we will probably reuse it here as well. And documentation can always be improved — it has already been improved recently.

So you may wonder now: problem solved, one size fits all, dump all the builds from source? No, not really. One of the devices we are shipping now is a nice example which shows that you still need source builds: the SIMATIC IOT2000, an industrial IoT platform device. It contains a processor with an erratum that we need to work around in the toolchain — one reason to change the toolchain, and one reason why you cannot use pre-built distro packages, because no distribution out there supports this kind of processor out of the box. So what we did for the product release is create a Yocto layer for it, which enables you as a customer, as a user of this device, to build your own distribution for it. That is the normal way if you have to build from source.

But of course there are also other reasons to do this: highly optimized systems, typically when you go down in size for whatever reason — with our devices we are usually not in the domain where you really have to count the bits in your flash, but such devices exist in the market — or when you have to optimize for performance, be it that you have to apply compiler switches to squeeze the last bit of performance out of your hardware, or that you have to tune the boot times of your system. Then you may also go for the source build and drop certain approaches the distributions take which are not applicable to your specific embedded devices.

To summarize: despite all the small itches we still have, I think Isar is a promising framework for building embedded Debian images. The rough edges we found so far are fixable, and they will probably be fixed soon, so that is not a blocking point. A very important point about Isar is that code and configuration sharing is at its center. That means you can build your product lines by reusing the descriptions you made this way; you do not have to rewrite your bootstrap script all the time and copy it around. But it also enables sharing common steps of building embedded systems with other solutions, may they be source-based or distribution-based. So there is room for collaboration, and Isar is already reaching out to the Elbe and meta-debian people mentioned here; they are in contact, basically trying to define what the next steps to work on together are. For example, Elbe is thinking about using BitBake internally, and meta-debian is also thinking about what to do with the bootstrapping and image generation process. And I specifically think it is a very nice tool if you are living in both worlds, and that is what we are going to do in the future, definitely.
If you have your source-based — sorry, your Yocto-based — devices there and your Debian-based devices, you can now use a very similar language to describe the build process of both. You do not have to learn two completely different approaches for building embedded devices, and that is of course valuable if you have to deal with a large set of embedded systems. So, here are some resources if you want to dig deeper. Otherwise, thank you for your attention — and I am taking questions, of course.

Yeah, OK, the question was how to express dependencies between the packages. In my examples here for the self-built application there were no dependencies expressed, but you have the same mechanism to express them as in Yocto: the DEPENDS and RDEPENDS variables, which you can add where the dependencies are not already part of the upstream Debian packages. If you take an upstream Debian package, they are already encoded in it; if you say "install me x", x pulls in its dependencies automatically and the normal Debian dependency resolution works. But if you describe your own source-based package, you may have to add these dependencies explicitly in variable form. There is an example of this in the meta-isar repository; it is just not part of the slide set yet. Does that answer your question? OK.

As for the Debian package built out of this process: if you want the dependencies in there, you have to put them into the package metadata, just like you do with any Debian package. There is currently no automatic process; if you express a dependency in the recipe, there is no automatic translation into the Debian meta files. It could be done technically — I do not think anything like that is in place yet — but it is possible and probably makes sense in certain scenarios.

Yes — so the question is how consistency is ensured between the upstream binary packages we pull in and the packages we build ourselves during the Isar build process. This relies on the fact that we use the upstream toolchain, available from the binary sources, for the build of our own packages. If Debian did not fulfill this requirement, well, then we would be lost. But this is something we can, I think, reasonably expect from Debian: that if you rebuild your own packages with the upstream toolchain, you get a consistent package result.
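Coming back to the dependency question for a moment, a hedged sketch of what that answer looks like in recipe form — the package names are hypothetical, and the exact variable handling may differ between Isar versions:

    # in the recipe of a self-built application (names are hypothetical)
    DEPENDS  += "libfoo"        # another self-built package that must be built first
    RDEPENDS += "libfoo"        # runtime relationship expressed on the recipe level

The runtime dependencies of the resulting .deb itself are still declared the Debian way, in the Depends: field of the package's debian/control file, as noted above.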
The next question was whether we thought in more detail about the cross-compilation option I brought up. I personally developed the feeling that someone should look into this, and I briefly discussed it with the Isar maintainer, and he said: yes, possible, but it has to be done. What technically has to be done for this? First of all, yes, there are cross toolchains available upstream, for ARMv7 and ARMv8 for example; using those would be the first step. But it has to be examined in more detail what the restrictions are. And probably, I think, the pattern will remain that your production build does native builds, or runs in QEMU, just to have the assurance that the result is consistent. For the development phase, though, where the kernel developers sit there and want to build an image while hacking on the kernel, they may go down this fast path — it is just probably not the version you want to use for the production build. This was actually picked up by a colleague of mine who looked into it: he immediately switched the kernel build over to an out-of-tree cross build on his desktop and just used Isar for generating the root filesystem. The kernel is easy, and we do not want to open that can of worms for all the packages. If you look at Vajran's presentation from FOSDEM, for example, he also showed a pattern — still to be developed further, but conceptually it is there — for how to deal with modified upstream packages. If you have to open an upstream package, modify something, apply a patch or the like, the question arises how to deal with that as well, and I think it is probably most reasonable to keep it in the QEMU environment and build it with the same toolchain it was originally built with, not to go for cross-building in that case.

Yeah — oh, sorry. OK, the question is what our feeling is about reproducibility when using the Debian toolchain over a longer period, plus the variation that we have. Well, this is currently based on the experience we have with basically rolling our own Debian version over a long period, and so far it works. There might be corner cases that we did not run into with the existing Debian-based products; that is something to keep in mind. And whether it really turns out to be viable to use the upstream Debian toolchain for a fifteen-year-old product probably needs further thought — I would not apply it blindly. On the other hand, these kinds of systems exist, not for ten years yet, but for at least five years or so, and they did not explode yet, so it is not completely infeasible. And if we identify problems — and that is also part of the approach here: we want to go upstream, we want to talk to upstream — so if we identify problems in the Debian way of maintaining toolchains or other elements for that long (Debian is targeting, I think, seven years or longer for the maintenance of their releases), we would of course report or even fix the issues upstream, to make them usable for us again.

Any further questions? Otherwise, thank you all, and enjoy lunch.