So, hello, my name is Baurzhan, and I'm glad to see you today at this presentation about Isar, a Debian-based product build system built on BitBake. Today we are going to see which problem we wanted to solve, how we solved it, and how this compares to other approaches.

A couple of words about us: we are a small team of developers, we help companies implement Linux in the right way, and in doing so we also contribute to the Linux kernel and other open source projects.

So Linux is very attractive for building embedded systems, and what we wanted to have is a product build system with one-command, on-demand building that produces complete, ready-to-use firmware images. You start one command, it produces an image, you flash it and it works; no post-processing involved. We are not in the business of creating a Linux distribution, so what we definitely want to avoid here is building our own distribution or massively modifying some existing distribution. That is why it has to be low effort: we reuse as much upstream as possible. We also have efficiency in mind, meaning that once the whole thing is up and running, it has to work quickly. Some use cases we wanted to cover were adjusting upstream packages, building several products from one repository, sharing components between them, and working with several vendors that provide code to us. There are also specific customer requirements that we had to fulfill in the system, for example native compilation: we started with cross compilation and ended up with native compilation. We have to have security updates, because typical products are maintained for more than 10 years. For some industries there is also a legal aspect: support for 10 years after end of life, and legal clearing of the licenses.

So basically there are many systems that are suitable for embedded use. Two huge candidates are Debian and Yocto. Let me see who is using Debian. Okay, who is using Yocto? Yeah, that's great. So I think we can go through some slides quickly, because you will already know how it works.

The green parts here are only for your information, to emphasize which features of the Debian distribution were interesting for us. First, the very conservative version selection, so that we get mature, pre-tested packages for our systems. License clearing: every package in Debian has clearly specified licensing, and in recent versions it is even machine-parsable. Long-term maintenance and security updates from upstream. And last but not least, its usage scales between individual projects and product lines. One thing we are missing there is one-command, on-demand building of the whole project; it's not there out of the box, and the processes to do that are not really standardized.

The alternative is Yocto. It is roughly 10 times smaller than Debian. It is a source-based distribution, which means that the toolchain and all the packages have to be rebuilt from scratch for every developer. But it does provide one-command, on-demand building of the whole project. It has BitBake, which provides modularity and full customization of the build process, and it has layers, which enable collaboration between vendors. Also, what is different from Debian: as a source distribution, you can fine-tune your toolchain and your build flags to exactly match the platform you are using.
As you can see, since Debian, for example, is pre-built, you basically have to choose from two variants: either armel, with a pretty low architecture version and no FPU, more or less the lowest common denominator of ARM systems; or armhf, which requires a higher architecture version and vector floating point with 16 registers. Why am I mentioning this? Because some SoCs fall between these two. For example, the SoC of the Raspberry Pi is ARM architecture version 6; it has vector floating point, but with 32 registers, and it only has Thumb. So with Debian you would have to use the armel variant for it.

So what we did, to combine the advantages of both systems, is use Debian binary packages and use BitBake to generate the root file system and to build our own packages. Isar stands for Integration System for Automated Root filesystem generation, and we also reuse the workflow and layering structure of the Yocto Project here. At a glance, Isar installs Debian binary packages as the base system, builds and installs our own applications, drivers, libraries, whatever, and creates the images. It consists of a set of scripts, namely BitBake recipes, to do that, and it also ships a product template that you can copy as your project to start developing your product.

So where is it useful? Basically we see it as useful for any Linux-based embedded device, especially if you want to share components between projects or even departments or business units. The benefits are easy code reuse and quick, automatic, repeatable builds.

We started in 2004 with a Siemens Linux distribution and a shell script that built the whole thing. After a few product changes we saw that this approach doesn't scale and switched to BitBake. And last year we decided to move to Debian and started open sourcing things. So from this you can see that I'm definitely late open sourcing the whole thing, but I'm doing it hereby.

So how does it work? We basically have five steps. The things here are external repositories, the boxes represent the steps, and on the right side we have the outputs of every step. First we create a chroot environment, an ARM (target) chroot, which is created automatically by the tool. Then we clone our custom packages and build them natively with the Debian infrastructure, that is, with dpkg-buildpackage, which produces binary Debian packages. This runs in the target chroot under QEMU; in this way we avoid cross compiling the packages. After that we create the target root file system, make customizations to it, install our packages and create the target image. The target image may be a simple file system image, or it may be a complete partitioned image, for example an SD card image.

This we already know: BitBake is a build system. We provide recipes for BitBake; recipes are files, files contain tasks, and recipes are organized in layers that group different recipes together. The whole set of recipes and configuration files is called metadata, to distinguish it from the software proper, like coreutils and so on.

Here we see what you get after you clone the Isar repository: bitbake, the interpreter; meta, the core layer; and meta-isar, the product template that you can use for your products. There is also an init script that prepares a build directory, just like in Yocto.
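As a rough orientation, a fresh checkout looks something like the listing below. Treat this as a sketch rather than an authoritative listing; the script name and exact layout may differ between versions.

    isar/
    ├── bitbake/              # the BitBake interpreter shipped with the repository
    ├── meta/                 # Isar core layer (classes and base recipes)
    ├── meta-isar/            # product template layer, copy it for your own product
    └── isar-init-build-env   # script that prepares and enters the build directory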
To give a feeling for how recipes are organized: they end in .bb and they contain variable and task definitions in a shell-like language. It is the BitBake language, not shell, and the task bodies can be implemented in shell or in Python. What we see here is a variable that defines which packages we want to pre-install in our buildchroot, and this is the task body which actually creates the chroot. I think we can skip some slides and return to them later, after we see how the whole thing works in motion.

So here we see our repositories. We need a Debian apt repository; we have our applications, drivers, libraries, and there are many of them; we have a copy of the Isar core layer; and the meta-product layer is our product. The boxes here are the same boxes as on the previous slide, and we see some dependencies between them, which means that BitBake can start tasks in parallel. Some boxes are numbered with one: these have no dependencies and can be started in parallel. The boxes that do have dependencies have to wait until those tasks are finished. For example, the edge from four to three means that do_populate of the image build depends on do_install of the application recipe. In this way tasks can be parallelized while still waiting, where necessary, for the results of previous operations. Like this we generate a bootable image on a PC with a hard drive in 10 minutes, for example.

What we currently provide is the core framework, a template, and an example that builds several images from one repository. The use case behind this: first we had a board that used some SoC; after some years the SoC went end of life and we moved the board to another SoC; and we also added other boards using other SoCs. This was our motivation to provide this variability. The example provides two products, product A and product B. Product A runs on QEMU and Raspberry Pi, product B runs only on QEMU, and they share basically 90% of their components. Only the differing parts are built separately for every product and for every board, which is called a machine in BitBake parlance. The peculiarity here is that the Raspberry Pi uses its own distribution, Raspbian, and we provide a different buildchroot for each.

Okay, some examples of how we use it; these will be very familiar to Yocto users. We clone our user repository, change into the directory, initialize the build environment with the source command, and it automatically changes into the build directory. You can use your own name for the build directory, and then we say bitbake plus the image name. We can also specify several images here; in that case the syntax gets a little more complicated, as in the last line of the callout. This is a new feature in BitBake that was merged in September.

So what we want to do is develop our product. We are using Debian as our base system and we have an application that we want to install on the system. What we have to do is create a repository for it; we name it hello.git in this example. The sources have to be Debianized. You can do this with, for example, dh_make, which creates the debian directory with templates that you can edit to generate the Debian package of your application. And then we have to create a recipe for our application.
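A minimal sketch of what such a recipe could look like is shown below. The repository URL, revision and file path are placeholders, and the exact variable set Isar expects should be checked against the example recipes in meta-isar.

    # meta-myproduct/recipes-app/hello/hello.bb (illustrative)
    DESCRIPTION = "Hello application, built as a Debian binary package"

    # debianized sources, pinned to a tag or commit ID
    SRC_URI = "git://git.example.com/hello.git"
    SRCREV  = "v1.0"

    # the dpkg class runs dpkg-buildpackage inside the target buildchroot under QEMU
    inherit dpkg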
So in the recipe we specify where we get the sources from and which commit ID or tag name to use, and then we inherit the dpkg class, which builds the binary package from the sources with dpkg-buildpackage. After that, we list our package name in IMAGE_INSTALL and it will be installed into the file system.

A couple of words about classes. We can extract tasks that are used repeatedly in recipes into classes and put them in specific directories in our layers. Then, if we say inherit plus the class name, they are reused in every recipe we like. Here is an overview of the classes we have in the core layer. The dpkg class builds binary Debian packages from a Git repository. We also have some image classes that depend on each other to create images: for example, the ext4-img class creates an ext4 file system image, and image.bbclass is a generic class that uses, for example, ext4-img to generate one partition of the whole image.

To create a new product, we copy the template to our meta-product layer, we add packages, we add boards (which are called machines), and we add or modify images.

A couple of words about layers. We can group different recipes into layers. A layer can be seen as a group of recipes that has a name and is organized according to code ownership, region or function. They are usually called meta-something, for metadata, and they must be configured with a layer configuration file. This file lists, among other things, where to look for the recipes.

Okay, an example of how to override an upstream package. The quick and dirty way is to do it in the image recipe: after the whole file system has been installed, we just hack some files directly in there. The clean way would be to attach the change to an existing package. The current way of doing this is to fork the respective package and make the change there. For example, if we want to modify inittab from upstream, we fork the whole sysvinit package and do our modification there. Of course, for one modification of one config file this is overkill. What we envision is patching the Debian source package without having to fork the whole package into our repository. This could look like this: we specify the Debian source package as the source URI, we provide the MD5 sum, and we provide our patch that patches the respective file; the result is then built and installed as a package. Our modified package has to have a different version than the Debian one. What we see here is the Debian version of the upstream package, and the +myproject2 suffix is my modification; so I start with +myproject1, then +myproject2, and so on.

So how would we typically develop our products? First we have to produce something that works at all. We create repositories for our components: Debian, applications, Isar and so on. Because we want to be able to rebuild everything in the future, also for an older release, we set up, for example, an internal company Debian mirror and archive it together with the whole project, so that we are able to do this even after years. This is also a requirement for any safety certification, for instance. Then we develop our applications, what makes up our product's business logic, in the master branches of, for example, app.git. And if we need to change upstream packages, we fork them into something like package.git, but not in master: in branches. The reason is that whenever we want to update such a package, master is updated from upstream and we create a new branch to develop our changes further.
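A minimal sketch of that branch handling, with hypothetical repository and branch names, could look like this:

    # master tracks upstream only; our changes live on project branches
    git clone https://git.example.com/upstream/sysvinit.git package.git
    cd package.git
    git checkout -b myproject/1.0     # our modifications for the 1.0 release go here
    # ... commit the change, tag it, reference the tag from the recipe ...

    # later, when a new upstream version arrives:
    git checkout master
    git pull                          # master is updated from upstream, untouched by us
    git checkout -b myproject/2.0 myproject/1.0
    git rebase master                 # replay our 1.0 changes onto the new upstream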
Then we tag all components, use those tags in our recipes in meta-product, and tag the whole meta-product. This becomes our 1.0 release; we can branch from it and maintain it further. In master we then have the development branch and we can proceed with the same procedure, with the only difference that whenever we have to update some upstream package, we update it in master, rebase our 1.0 changes onto a new 2.0 branch, and tag the respective components again. So here we see that whenever you modify anything, it effectively becomes your code and you are responsible for maintaining it. The goal is to minimize the number of packages that you have to fork and therefore maintain. If we follow these steps, we can rebuild an older release by checking out our Isar or product repository at the tag we gave it at that time and calling BitBake with that revision. Of course, for this to work, we have to specify the source revisions in the recipes in the correct way: not branches, but tags or commit IDs.

The whole idea behind layering is collaboration. What we can have here are board support packages from one vendor, libraries or codecs from another vendor, our own product-specific code, and our own code that is shared at the department or company level. We make this explicit by assigning a different meta layer to every level of development. Then we have to think about who is responsible for what, because we can fetch Debian source packages and modify all of them, but when we have to deliver version two of our product, we have to think: okay, how much would it cost to update these old packages to their new versions? So the rule is to avoid massively changing upstream code and, if possible, to override small changes in your own meta layer. Also, if there is something shared between two products, it can go into the department meta or into the company meta. This is then also strictly separated from the Isar core, which can in turn be updated independently of the company's own stuff.

Okay, we are actually ahead of time, so I can show a couple of the levers you can use in Isar. In our core layer we have the global BitBake configuration. It contains mostly paths, but it is important to know that it is copied by the build environment initialization script into the build directory and that it includes the local configs to create a single global environment. This means that anything specified in any configuration file or recipe has global scope and is globally visible. Also created in the build directory is bblayers.conf. It is created from the sample provided in the product template layer, and among other things it lists the layers that will be used to build the system. After you add your own layer, you will want to remove meta-isar here and add your meta-product layer to this configuration file. And there is a configuration file for local, developer-level changes, called local.conf, where you can set variables to tune the build process. For example, by default BitBake creates only one image, and of course this image has to target a specific board, which is specified here: the default machine in the local.conf sample is the ARM QEMU machine, and it uses the Debian Wheezy distribution for the buildchroot and to bootstrap the target.
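A minimal sketch of such a local.conf, assuming the usual Yocto-style variable names; check the local.conf.sample shipped with the template layer for the exact set Isar expects:

    # conf/local.conf (illustrative)
    MACHINE ??= "qemuarm"            # which board to build the image for
    DISTRO  ??= "debian-wheezy"      # which Debian release to bootstrap
    IMAGE_INSTALL += "hello"         # our own packages to install into the image
    BB_NUMBER_THREADS ?= "4"         # how many BitBake tasks to run in parallel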
We also have the IMAGE_INSTALL setting here, which lists all the packages we want installed on the target. This means that if I create two drivers, three libraries and one application package, I list them all here and they get installed; unless, of course, they have interdependencies, in which case I don't have to list all of them, although I can. A further setting is BB_NUMBER_THREADS: if you have a multi-core CPU, you can set this to a higher value and BitBake will run that many tasks in parallel.

So how do we handle our distributions? We have a distro directory in the product layer which contains the different distributions we want to use, because it may happen that one product uses an older Debian release and a newer product uses the next one. Then you would have, for example, Debian Wheezy and Debian Jessie here, and they would be bootstrapped into different buildchroots and different target root file systems. The same goes for machines. What we call a machine is basically a board that our hardware is based on; typically it holds settings like which U-Boot sources and version to use, which kernel, and so on (a sketch of how such a layout might look follows at the end of this passage). In this way we can have the same product for three different boards that are based on three different SoCs and use different U-Boots and different kernels, but the same application, the same business logic.

Okay, so where are we now? There are, of course, alternatives to this approach; it is not the only one. One talk we had today in this room was about ELBE, the Embedded Linux Build Environment. This is a project with the same goals as Isar and it produces similar results, but it has a different philosophy. The main difference is that it has one configuration file per product, and if you want to generate several products, you have to use several files. It provides many features out of the box. The other approach is meta-debian. It also uses Debian and BitBake; however, it is a different type of project with a different focus. As I see it, it is a Debian-based source distribution that can be used to create products. Isar uses a pre-built distribution such as Debian, or potentially meta-debian, to build products, whereas meta-debian is in the business of creating that very distribution. There are also many other tools that do this. There is a quite entertaining presentation by Riku that you can have a look at, and what he says is basically that each tool is tailored to its developer's use case. What I can add is that product development is more than creating a root file system.

To summarize what we do and what we do not: we have small tools for well-defined tasks. These small tools have small configuration files; they are not consolidated, they are split up and have dependencies between them to provide variability. We try to reuse as much as possible, for example Debian binaries, and also the familiar tools of Yocto and its workflows regarding layers. Because once you have layers, you have to actively think: okay, who is responsible for this code? Do I really want to maintain this myself if I modify it? And if I do, how do I do it for version 2.0? These are the questions to answer if you are concerned about the cost of release 2.0. Last but not least, we design things with performance in mind, so we try to parallelize everything right from the beginning, because in the end it is developer time that is spent waiting for builds.

Okay, so there are some ideas about where to go further, what to do.
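Here is the sketch promised above of how the distro and machine configuration could be laid out in a product layer. The directory and file names are illustrative and not copied from the actual template:

    meta-myproduct/
    ├── conf/
    │   ├── distro/
    │   │   ├── debian-wheezy.conf    # older product: Wheezy buildchroot and rootfs
    │   │   └── debian-jessie.conf    # newer product: Jessie
    │   └── machine/
    │       ├── qemuarm.conf          # per-board settings: kernel, U-Boot, image type
    │       └── rpi.conf
    └── recipes-app/
        └── hello/
            └── hello.bb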
Coming back to those ideas, there are actually two major points related to Debian building. One is to be able to build Debian source packages and to put the binary results back into an apt repository; this would be a major thing once it works. After that, of course, one could optimize further and check whether a package has already been built and is available in the apt repository, in which case we could skip building it. Another thing is about BitBake. Currently we have to specify dependencies in the BitBake recipes too, which means we duplicate dependencies that are already specified in the Debian control file. I haven't studied the subject in detail, but I envision creating a DSC (Debian source package) backend for BitBake that could understand the .dsc files directly; that would be a direct alternative to .bb recipes.

So you are invited to try the stuff, to provide suggestions and, of course, patches. We are also interested in collaboration with other projects like meta-debian and ELBE. Let's see how it goes.

To summarize what you get if you choose Isar: you get a quick project startup, because many people are familiar with the tools; you get a product template with default images that you can quickly reuse; and we provide the basic recipes necessary to build the system, so you only have to add your own recipes and it will just work, and work quickly. Also, layering is very important for collaboration with vendors and with the community: you clearly specify from the beginning in which layer you put things, and it forces you to think about further releases from the start.

Here are the references. We have the code on GitHub, we have a user manual, and for now we invite communication on debian-embedded, unless there are objections; in the future we will see whether we provide our own mailing list infrastructure. So that's it about Isar. Any questions?

Yes, we do compile in... No, it's created automatically by the build system. Everything you see here is done automatically, and the very first step is to create a chroot that is then executed under QEMU. Excuse me? I don't remember what it's called: qemu-user-static.

This is currently the same chroot. That means if we have two distributions, say one Debian and one Raspbian, then we create one Debian chroot for building and one Raspbian chroot for building. So we don't rebuild a clean-room chroot for every package; we reuse the same buildchroot for the same type of distribution. Well, actually, it shouldn't happen, but we haven't specifically looked at that. Until now we have never had problems with that, either with Debian or with Raspbian; that's why we haven't looked into it.

Well, currently we don't use wic, and it's, let's say, a matter of changing the recipe. Debian provides several alternatives for this, and right now we do this manually in our recipe, let's say. But it's actually a good policy question whether we want to proceed with the Debian solutions or with wic; we have to do a more detailed evaluation here. My feeling is that wic is probably going to get more acceptance than the existing Debian tools, because Debian has a very rich tool environment, but it is centered around apt, building, dpkg and so on. It's not very embedded-oriented, in my personal opinion.

Okay, so the first question was whether it's possible to build an SDK.
Well, this is an SDK, and you can build your own SDK: you build your own layer with your own files for your developers or for external developers. So the answer is yes, you can; this is actually what the tool is about. You don't have to end up with images. You can provide, let's say, half of the solution to your users in the form of an SDK. And regarding timestamps and caching, yes, they are used. If a developer has built the system once, the packages are cached and they are not rebuilt in a subsequent build.

Well, currently what we do is git clone Isar, and you have meta-isar there, which is the product template. So if this is an acceptable delivery form, then you can do this, or perhaps you want to elaborate. Okay, so the question is how to distribute this layer. Yes? Okay. If you don't want to distribute it via, let's say, Git or whatever, you can always pack it into some zip file and provide that. I'm not aware of any Yocto-specific ways to do that. Yeah, and it is? Ah, okay. Yeah, we can look at that; that would be an interesting idea. I haven't looked at it until now, and currently I don't see any obstacles that would speak against it. For me personally it would be interesting to do that.

That's a good point, because we don't provide a toolchain; we use the native Debian toolchain. In the end, if you want to build an SDK, you will be providing this part, let's say this is your SDK, but this and this you won't be providing, so they have to come from somewhere external. For our company builds, for example, we mirror external repositories like Debian internally in-house. So it's a good question how one could proceed here, because Yocto is self-contained, but at the price that you have to rebuild everything from scratch. And Yocto runs in theory on any distribution; in practice there are issues with systems that are not tested that much. Our approach is to reuse. Then the question is, of course, whether we pack the whole Debian repository into it or assume you somehow have access to it. This is a good question, and I think we will not have a one-size-fits-all answer for every customer, because one will say "I don't want to duplicate it, we have it on the Internet" and the other will say "Okay, I want to have the whole thing completely here". But it comes from the repositories, that's the point. The compiler comes from Debian apt; it's just the native standard GCC of that particular Debian distribution.

Okay, okay, now I get it. So you are talking about setting up the host build environment to be able to run this. Well, the standard answer is that we assume you run Debian, any Debian version, it doesn't matter which, but with the tools that are necessary to bootstrap the whole thing. For other distributions, what we currently do: one of our developers is using Fedora and he uses a chroot. So it works, but it's a good question to discuss how to provide this out of the box, let's say. Okay. I haven't tried it; I'm not aware of that feature. I'll discuss it with our Yocto guru, and to me it definitely sounds interesting. Okay, if there are no more questions, then thank you.