Hello, my name is Manuel Traut. I'm from Linutronix, and over the last couple of years I did a lot of work in the area of VM-based build systems and Yocto BSPs. I learned that both sides of the world have their pros and their cons, so I thought: isn't there any possibility to combine those two worlds? That is what I want to talk with you about today.

So what's next? First I want to show you the reasons why people out there are using Yocto and why they like it. Then I want to talk about Debian, and especially about Debian for embedded systems, and then I will highlight the benefits of a combination of these two worlds. Because it might be a good combination, there are already existing solutions out in the field that combine those two techniques, or frameworks, or distributions, or however you want to call them. Finally I will give you an idea of what is in my head about the perfect combination of those two worlds.

So let's talk about why we are using Yocto first: let's see what Yocto is, then have a look at its typical usage, and learn about its limitations.

Yocto is basically tooling for building your own Linux distribution, plus a development environment for building applications for this new distribution you created. It defines a format that eases sharing the recipes that describe your packages and the content of your board support packages and root filesystems. This is quite a powerful format, because you can just say: OK, I grab some layer from my hardware vendor, I use this one from my chip vendor, and then I add my own layer. All those layers contain different recipes, and you even have the possibility to overload, in your own layer, the recipes that describe how a certain software package is built, even if the package is described in, for example, the layer of your hardware vendor. So it's also a big management tool.

Beside that, you can not only do that for one BSP, you can do that for all the BSPs in your company. So if you
have ten similar BSPs, for example for different types of hardware, one with a large display, another with a smaller one, maybe one with a high-end application and one with a stripped-down application, then you can build all those combination images. Of course they share a lot of recipes and packages with each other, so you need to maintain this combined set of packages only once.

All of that is based on a project called OpenEmbedded, and the OpenEmbedded and Yocto projects work closely together. If you look, for example, at the OpenEmbedded git repository and at the Poky git repository, Poky being the example distribution of the Yocto Project, you will see nearly the same commits in both repositories; Poky is just a merge of the example distribution into its git repository.

So how do people typically use it? They take the Poky example distribution, then they add some meta layers from the chip or hardware vendor, maybe they add third-party layers, for example for adding Qt5, and then on top they add their own layer with image customizations and with recipes for their own applications.

So basically it's a really nice thing, but all nice things have limitations. What's the problem? The problem is that you basically build your own distribution, so you need to maintain your own distribution, and maintaining is hard work. You get those recipes from the different layers, but you need to check: are the patches I'd like to have applied to these software components actually applied?
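To make the layer mechanism from the beginning of the talk concrete: overriding or extending a vendor recipe from your own layer is usually done with a .bbappend file. A made-up sketch (layer name, patch name and the trimmed feature are purely illustrative):

```conf
# meta-mycompany/recipes-core/busybox/busybox_%.bbappend
# Extend the vendor's busybox recipe from our own, higher-priority layer.

# Let BitBake find our local files next to this append:
FILESEXTRAPATHS_prepend := "${THISDIR}/files:"

# Apply a patch the upstream layer does not carry (hypothetical):
SRC_URI += "file://my-local-fix.patch"
```

The `%` wildcard makes the append apply to any version of the recipe, so a vendor-side version bump does not silently drop your changes.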
Are those layers compatible with each other? The BitBake build engine might depend on the version of the recipes you use, and so on. And then you have, for example, three layers: one is available for the Morty release, another for the Krogoth release, and you need to combine them. So you really have to do some work to get your distribution compiling and working.

If all of this works, then you get to the maintenance part of your distribution: you need to check whether there are security updates, and so on. Also, the quality of the recipes is quite hard to verify. Of course there are a lot of layers out there, but if I integrate them into my project, I need to review those recipes, I need to check which patches they apply and whether that is OK for me. So there is some work to do.

Also, there is no long-term support for the example distribution. If you use the Poky recipes, they maintain the current and the last branch, but if you need longer support, you need to upgrade all recipes. Also, reproducibility is not completely given, because there are always some small tools that depend on your host machine, so you might get different build results if you build on a different host machine.

So there are quite some limitations you need to solve, and some people think: oh, there are other distributions out there that can do the work of creating a distribution for me. One example of those distributions is Debian.

Debian's slogan calls it "the universal OS", so you might ask: is it so universal that I can even use it for embedded? Let's see. We will have a look at the usage of Debian for embedded projects, and of course even Debian has limitations.
So let's see what they are. Some of those limitations we at Linutronix tried to solve with a tool called Elbe, and I want to say a few words about what Elbe is doing and how it can help you.

The universal OS is, in the end, more than a pure operating system, because it comes with over 51,000 open source packages pre-built in binary format for different architectures. Also, the whole infrastructure of this operating system, the documentation and the build tools, is available as open source. So if you want, you can set up an environment in your own company, in your own lab, to rebuild everything from scratch; it's documented, and it's available as open source.

Another thing is that Debian takes security very seriously. You have update channels for the current stable and the old stable release of Debian, which cover up to about six years of security updates. Many of those security updates are coordinated with other free software vendors like Red Hat, Ubuntu, SUSE and so on, and they are published on the same day a vulnerability is made public. Basically, if you maintain your own distribution, the day the vulnerability is made public is the day you can start working on the security issue; if you use one of those big distros, the binary package is already available that day.

So is it possible to use Debian in embedded systems? Basically it is: you have packages available for a lot of different architectures, and cross-toolchains are available since Stretch for several architectures. So yes, you might use it. For example, in the last talk, about the Civil Infrastructure Platform, there was the commitment that they want to use Debian as the source for their distribution. So there are other people as well who think about using Debian in the embedded world.

The typical usage, if you want to use the binary Debian distribution, is this:
You use debootstrap to get an embedded root filesystem, for example an ARM bootstrap into a single directory on your computer. Then you might use tools like pbuilder or sbuild, with a cross compiler or a QEMU-emulated compiler, to build your own application inside this root filesystem directory. After that, you typically remove all the unneeded files, like man pages, internationalization and so on, to make your embedded system even smaller. Then you build some filesystem images, ext4, UBI and so on, whatever you need on your target. And then you have to extract all the license information and retrieve the source code of all the packages you used in your embedded system, to publish it to your customers.

So what are the limitations if you want to use Debian in your embedded product? Only a limited number of hardware architectures is supported, as I described two slides before. If you have another architecture that is not listed there, you would need to bootstrap Debian from sources, and this is something we talk about later. Also, there are no hardware-specific binary packages available in Debian: for example, if you need a GStreamer plugin for i.MX, you don't get a binary package, you need to build it yourself. You need to do the filesystem and UBI image generation yourself. You even need to generate the SDK yourself: you get the toolchains, and with multi-arch support in Stretch you can cross compile on your PC farm, but if you want to ensure that you have exactly the same versions of the libraries in your toolchain, there is additional effort. Another thing is reducing the image footprint: you may need to purge even essential packages, and so on. And then the integration of your own application needs to be done somehow. So you need some tooling around Debian that solves those issues.

This is the place where we thought about developing something called Elbe.
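The manual workflow at the start of this section might look roughly like this on the command line (suite, mirror and paths are only examples; debootstrap needs root, and the second stage for a foreign architecture typically needs qemu-user-static):

```shell
# 1. Bootstrap a minimal Debian root filesystem for armhf:
sudo debootstrap --arch=armhf --foreign stretch ./rootfs http://deb.debian.org/debian

# 2. Finish the bootstrap under QEMU user emulation:
sudo cp /usr/bin/qemu-arm-static ./rootfs/usr/bin/
sudo chroot ./rootfs /debootstrap/debootstrap --second-stage

# 3. Strip documentation to shrink the footprint:
sudo rm -rf ./rootfs/usr/share/man ./rootfs/usr/share/doc

# 4. Pack the result into a filesystem image, e.g. ext4:
truncate -s 256M rootfs.ext4
mkfs.ext4 -d ./rootfs rootfs.ext4
```

This only covers the root filesystem and image steps; license extraction and source collection, as mentioned, are further manual work on top.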
Elbe goes back to 2007, when we had the first idea of using Debian for embedded and building some tooling around it. What we have at the moment is this: we describe our board support package in a kind of XML file, where we say what the result should be. Do you want UBI images, ext4 disk images, SD card images and so on? We describe which packages you want to have, which fine-tuning rules should be applied after the root filesystem is generated, and so on. Then we feed this into something that is also automatically generated by Elbe, called the Elbe initvm, a virtual machine that ensures that everything happens reproducibly.

So we push the Elbe XML into this virtual machine. Inside runs an Elbe daemon that builds our images, extracts the license information, and creates the source-code CD that we need to give to our customers, containing all the source code used inside our BSP. It generates, for example, a sysroot that you can add to any toolchain to build your application against this image. And it generates a rebuild CD containing all the Debian binary packages that were used to generate the virtual machine and your BSP, so you can just point elbe at that image and it builds the whole environment up again; that way you can catch up with the development even a couple of years later.

The virtual machine also includes a pbuilder, which is likewise generated from the Elbe XML file, so it is ensured that you have a package builder in there that generates Debian packages exactly for your target image. You can feed any Debianized sources into this package builder, and out comes a Debian binary package.

So once again: what are the limitations of using Debian if you combine Elbe and Debian?
So we have all those topics below that we can solve with Elbe, but we still support only a limited number of hardware architectures. We are now able to build packages like GStreamer plugins from source, but there are still no binaries available for them, because we don't run a distribution either; we just give you a build environment. So Elbe also has its limits.

So what would be the benefit of a combination of Yocto and Debian? My point is that there is quite some stuff that is good in Yocto, for example the task scheduling. Even if you use Elbe to build your source packages with the pbuilder, you need to know the order: which package builds first, which next. If you just change one package: OK, what are all the dependents I need to rebuild, and so on? Yocto, or rather BitBake, is really great here, because you can model those dependencies. So task scheduling is really good in Yocto; I'd like to have that, together with Debian, for building an embedded root filesystem image.

Also the configuration management: if I have a few similar boards, I need several shell scripts or something like that to generate the images for the different targets. Even if you use Elbe, you need one XML file for each target, and if they share a lot, you need to maintain the shared parts in five different XML files.
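The rebuild-ordering problem just described, which BitBake solves with its task scheduler, boils down to a topological sort over the package dependency graph: given one changed package, find everything that transitively depends on it and emit a valid build order. A small self-contained sketch (package names and dependencies are invented):

```python
from collections import defaultdict

# Hypothetical dependency graph: package -> packages it depends on.
DEPS = {
    "qtbase": ["openssl", "zlib"],
    "qtdeclarative": ["qtbase"],
    "myapp": ["qtdeclarative", "zlib"],
    "openssl": [],
    "zlib": [],
}

def rebuild_order(changed):
    """Return the packages affected by a change, in valid build order."""
    # Invert the graph: package -> packages that depend on it.
    rdeps = defaultdict(set)
    for pkg, deps in DEPS.items():
        for d in deps:
            rdeps[d].add(pkg)
    # Collect everything transitively affected by the change.
    affected, stack = set(), [changed]
    while stack:
        pkg = stack.pop()
        if pkg not in affected:
            affected.add(pkg)
            stack.extend(rdeps[pkg])
    # Topologically sort the affected set: build a package only
    # after all of its affected dependencies are built.
    order, done = [], set()
    def visit(pkg):
        if pkg in done:
            return
        done.add(pkg)
        for d in DEPS[pkg]:
            if d in affected:
                visit(d)
        order.append(pkg)
    for pkg in sorted(affected):
        visit(pkg)
    return order

print(rebuild_order("zlib"))  # → ['zlib', 'qtbase', 'qtdeclarative', 'myapp']
```

BitBake does much more (task-level granularity, checksums, parallelism), but this is the core question it answers on every incremental build.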
So that's also a limitation.

Then I'd like to cross compile from source. Why might cross compiling be interesting? For really big frameworks: for example, when we build Qt, we have really big machines with 100-plus cores, but if we build Qt in a QEMU emulator, it takes a couple of hours until it's finished. If you use a cross compiler, it's done in less than an hour. So during development it is often really interesting to have a cross compiler available.

Also the SDK generation, so that I get a cross-toolchain for my application developers, is really helpful. Even the SDK generation for Eclipse, where I say: OK, take this setup file, include it in Eclipse, and everything is set up correctly so that you can start cross-compiling in Eclipse, is a really cool thing in Yocto.

On the other hand, I want to use some stuff from Debian. I want to use the well-maintained packages. I want to use the security tracking. I want to use the binary packages whenever useful, because they are the ones that get the most testing in the world, since everybody uses the same binaries. And I want to have the possibility to use the Debian sources if necessary: if I need to rebuild something with another configuration, or apply a patch, I want to use the Debian sources.

So there are already existing solutions giving you this flexibility.
There is a layer project called meta-debian, there is one called Isar, and I just did some kind of proof-of-concept hack called neta-elbe, which is a layer that can also be included in Yocto and uses Elbe as the build back end. These are the three things I want to introduce to you in the next couple of minutes, and I have made a comparison table of these three tools: what is possible with which tool, and where are the limitations.

I think the biggest project is meta-debian. It has about 600 BitBake recipes that use Debian source code from the Jessie distribution, with build rules optimized for using those Debian packages in embedded systems. Basically they take the options from debian/rules for configuring the packages, but if it doesn't make sense for an embedded product to have a feature enabled, they just disable it, to lower the dependencies between the packages and get smaller board support packages. They also use a long-term Linux kernel from the Civil Infrastructure Platform project, because one of the main goals of the CIP project, which is using this meta-debian layer, is to have really long-term support, counted in decades, not only in years. It also supports generating an SDK and cross-toolchain based on Debian sources, and it's a very active project: it has about 2,000 commits on GitHub.

But if you specify your own compile options and so on, the result is of course not compatible with the existing Debian binary packages. If you use packages modified with different rule settings and other compiler settings than in Debian, then you're not compatible.
So you can't install some packages from meta-debian and others from Debian.

Another project out there is called Isar. Isar uses Debian binary packages from different Debian releases, and it uses BitBake as a build engine and as a configuration tool. You even have the option to build Debian packages from source inside a chroot, and if you do it for a foreign architecture, for example if you run Isar on a PC and build for an ARM target, they use QEMU user emulation to emulate the target and build your package natively. But there are also some limitations. It needs sudo for several of the tasks that are executed, so they recommend running sudo bitbake with the image name, and a lot happens with root privileges there; this is something people don't like about it. Then you have a default image size of about 300 megabytes, because it's just the Debian essential set plus your extra packages. And it's not that active: it has only about 100 commits on GitHub.

So it is a nice thing, but I thought we can more or less do the same with Elbe. So I thought about doing the same as Isar does, with Elbe, by implementing something called neta-elbe. I called it "neta" because if you write the characters close enough together, it almost reads like "meta", but we are not really a meta layer like the other projects: in our neta-elbe layer we don't really have BitBake recipes for compiling packages from source, we just have some kind of wrapping around BitBake to use it as a scheduler, a scheduler for jobs in Elbe. This is just a proof-of-concept hack with about nine commits on GitHub from me, but it uses the Elbe project, and Elbe, like meta-debian, has about 2,000 commits on GitHub and is quite big.

I have only tested it with Stretch binary packages and armhf, but I think it should also work with other combinations. I added the option to also build binary packages from Debian source packages using the Elbe pbuilder, and the output is a signed Debian
repository containing all the self-built packages.

How we do all that: BitBake generates an Elbe XML file describing your root filesystem and schedules Elbe image build jobs and Elbe pbuilder jobs inside the Elbe initvm. neta-elbe also generates the license information. SDK generation is currently not implemented, but it's quite easy to do, because we can generate those sysroots for different toolchains with Elbe, and then you can add them to any existing armhf or similar toolchain.

So let's talk a bit about the architecture of this thing. I re-implemented the base bbclass, because this is something completely different from Yocto: I need a known ordering of the tasks, and so on. Then I have a bbclass for the Elbe project setup, which just ensures that my project, described by files like the machine config and my image definition, is set up inside the initvm. For that I have a macro template for the XML file that is filled in with the information from the BitBake configuration file and from the image file. I also implemented a new image bbclass and recipe: my simple example image just inherits this image bbclass, which builds an XML file from the source XML, puts it into the initvm, and triggers the image build.

Then I have written a pbuilder class which can be used to write your own recipes, like I showed in the extension layer. For example, you can have a simple BitBake recipe containing just the URI of your software that inherits from the pbuilder class. You don't have to define any build rules, and all that happens is that this project is put into the Elbe pbuilder, and we use the Debianization inside the project to build it from source. So this is also quite nice.

Then we have a machine configuration where you can specify the architecture and so on that you want to use from Debian. But of course this also has some limitations. We are still not able to build for an
architecture that is not supported by Debian, but on the other hand you have, I would say, the same flexibility as in Isar.

So let's have a look at the table. We have the three candidates here, and of course they share the same goal, using Debian for embedded, so they also have a lot of similarities. All three support Yocto-style configuration management and application integration. Hardware-specific software like the kernel and the bootloader is buildable in some way in all three approaches: Isar and neta-elbe build it natively inside a QEMU instance, and meta-debian does a cross compile. So all of them support it, but in different ways.

In all three projects you can use the Debian sources to rebuild a package. In meta-debian this is also a cross build; the other two use the QEMU approach, so building takes a bit longer there. But in neta-elbe, for example, it's super reproducible, because we always generate a new chroot, install the build dependencies, build the package there, and extract the information. Isar has one chroot where all packages are built, so you might get some influence from earlier package builds, because they leave their files there, and the next package is uploaded into the same root filesystem and built there. So maybe you can forget some build-dependency specifiers and it will still work in Isar, but not in neta-elbe. And meta-debian just supports the cross-build approach.

Another row is the default footprint. It's basically the same for Isar and neta-elbe, because they use the same technique to generate the root filesystem. neta-elbe doesn't support shrinking the root filesystem at the moment.
So we have LB fine-tuning To shrink it or LB copy modes, but they are not implemented in net I'll be at the moment These are Supports us with the yachter mode methods so you can do something like do root image Append and remove some files This is not possible in net I'll be because all the image generation is done inside I'll be and The default footprint of meta Debian is already pretty small if it just is built Although the non-debian are also meta Debian is the only Approach that allows us building non-debian architecture. So you can Basically bootstrap a new Debian from sources with modified settings What is not possible with either and not possible with an entire LB? Also, of course using architecture not supported by Debian is the same thing. Yeah, so it's also the same result Then you the next topic is about Exporting the use to offscode so in meta Debian You have all the sources that are used during the build in the download directory like in yachto And you can read it is to put them In net I'll be it would be easy to develop because the source CD rom Is a can be generated by LB. I just didn't activate this feature at the moment to increase the build To decrease the build time, but it could be make optional that for release builds. 
For release builds, for example, the CD would then be generated. In Isar, as far as I have seen, we would need to add some code that looks at which binary packages were used and downloads their source code, but this is also possible to implement.

Then we have the Yocto-style SDKs using cross-toolchains, which, as explained, can be added to Eclipse or similar for cross-compiling your application. This is generatable with meta-debian; it is not generatable with Isar, because they say everything should be built inside their chroot with native QEMU; and it would be easy to develop with neta-elbe, because Elbe basically has this support already.

Also, generating license information: in meta-debian, a CSV file containing all the license information is generated; Isar doesn't cover this point; and in neta-elbe, an XML file and plain-text files are generated containing the license information from Debian.

The next interesting point is reproducibility: how can I build the same image again after a couple of years? The meta-debian people came up with quite a clever concept. They said every Debian source package needs to be in a git repository, so they have a Docker container, I think, that can be used to clone all the referenced Debian packages into git repositories automatically, and on each build they tag the version they have built inside each package's git repository. If I want to rebuild a certain version, I can just specify this tag again, and all versions are used as in the last build. It's still only yellow in my table, because you still have the usual Yocto problem: if you're not running in a VM, you have those host dependencies, and so on.

In Isar it's much the same: it's not running in a VM, so you still have all those host dependencies, including debootstrap, because they call debootstrap to retrieve the packages. So you should put it into a VM and use a snapshot for each build, for example; then it would be quite safe. But then they have another problem.
It's the shared chroot they use for the package builds, so there is a dependency on the order in which you trigger the builds of your source packages. With neta-elbe you should be quite safe, because everything is scheduled in a VM that is reproducible, and we use the VM's pbuilder, which always starts with a new chroot, just installs the build dependencies, and builds your package.

Another thing you might think about: there are solutions where you need one BitBake file per Debian source package. This is the case in meta-debian, because there you need to specify all the options needed for cross compiling the package, and so on. This is not needed for Isar or neta-elbe, because we use the information from the Debianization to rebuild the package.

Another point is using the Debian binary packages: that is only safely possible with Isar and neta-elbe, and not possible with meta-debian.

Another difference is the number of available Debian packages. In meta-debian you need to write those BitBake files for each Debian source package, so the number of available packages is limited to about 600 source packages. But remember, one source package sometimes generates multiple binary packages, so this is not the number of binary packages. I talked to them; it should cover the most important packages for embedded Linux: they looked at what's needed in embedded Linux and packaged that first. With Isar and neta-elbe, of course, you can use all the available Debian binary packages.

The thing resulting from that is the effort needed to adapt the build system to a new Debian release. If you use Jessie and want to go to Stretch, for meta-debian you need to adapt all those 600 BitBake recipes, or at least most of them, to work with the new release. In Isar and neta-elbe it's basically for free, because we just use binary packages: you just need to change a string, and typically it works.
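The per-package clean-chroot build that neta-elbe relies on corresponds to the standard Debian workflow: the packaging metadata in debian/ drives the build, so no per-package recipe is needed. A rough sketch with plain pbuilder (suite and package name are just examples; these commands need root and network access, and apt-get source needs deb-src entries):

```shell
# Create a base chroot tarball for the target suite (done once).
sudo pbuilder create --distribution stretch

# Fetch a Debian source package: this unpacks the .dsc, the
# upstream tarball, and the debian/ packaging.
apt-get source foo

# Build it in a fresh chroot: pbuilder unpacks the base tarball,
# installs only the declared Build-Depends, builds the package,
# and throws the chroot away again.
sudo pbuilder build foo_1.0-1.dsc
```

Because every build starts from the same pristine base, a forgotten build dependency fails loudly here, instead of slipping through as it can in a shared, long-lived chroot.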
So you see, there are basically three solutions out there, but only two use cases: meta-debian is good for architectures that are not available in Debian, and Isar and neta-elbe can only be used with architectures that are available in Debian. So if you need a special architecture or special compile flags, meta-debian is really interesting. neta-elbe is in a proof-of-concept state, but I think it's already very powerful, because it uses the established Elbe as a back end.

So I come to my personal wish list, then I will finish with the conclusion, and hopefully we have time to discuss some ideas, maybe outside, because we're running quite out of time.

There is a script in Debian called rebootstrap, for bootstrapping complete Debian architectures. I think we should definitely collaborate with this project, because they have already done a lot of work automating the bootstrapping of Debian. Then we should support multi-arch for cross compiling any modified source package in Debian; then we could also do this for self-bootstrapped architectures. And where possible, we should allow a mixed usage of cross-built Debian packages, via BitBake, with the official Debian binary packages.

I also think we should have reproducible builds all over Debian. There is another project in Debian that cares about reproducibility inside the package builds.
It takes care of things like timestamps in binaries, and so on.

I think we should continually try to reduce the number of existing tools in this field, and we should collaborate here wherever possible. My dream is having one layer that is able to cross build Debian packages, to bootstrap Debian from source, to allow using binary packages, and to use something like the pbuilder approach to build natively in QEMU. Also, the bootstrapping part might be interesting to port to BitBake, because BitBake is really good at this. I just started this discussion with the rebootstrap author, who wrote this script, about whether he could imagine porting it to something like BitBake; we are still in progress here. Then it would be a really cool combination of Debian and Yocto.

So now I think we have run out of time. If you have any questions or ideas, please let's meet just outside the room. The slides are available on the download site of the congress, and all the references to the different projects are in there as well. And now I'd like to thank you for your attention.