Good afternoon. First of all, I apologize for my voice: I lost it on the flights to the US, so my voice is a bit unusual, but I guess it will do for this afternoon. Welcome to my session, and thanks for coming. If you attended this talk at ELCE in Prague last October, this is going to be pretty much the same talk, so there's still time for you to leave and attend another session. If not, then thank you for joining. I'm going to be talking about Buildroot and what's new in this project over the last two years. I work for a company called Bootlin, formerly known as Free Electrons. We recently changed our name to this brand new name, Bootlin, but we're still the same company. We do embedded Linux engineering and training, I personally work on kernel stuff as well as Buildroot, and I come from France.

Before we get started, a short poll. Who already knows about Buildroot in the room? Almost everybody. Who is already using it? A good half of the room. Who is using OE or Yocto? Another half of the room, with some intersection between the two, interestingly. OpenWrt or one of its relatives? OK, a few more people. Another build system? OK, good. So for most of you, Buildroot is probably something that's already known.

A short introduction to what Buildroot is. It's an embedded Linux build system. The point is to build from source a cross-compilation toolchain, a root filesystem with a number of libraries and applications all built by cross-compilation, a kernel image, and potentially bootloader images as well. One of its strong selling points is that it's reasonably fast and pretty simple, and it allows you to build a simple root filesystem in a matter of minutes. It's easy to use and understand, and it's all based on standard technologies: Kconfig, like the Linux kernel, for defining the configuration, and makefiles for actually describing what the build is going to do. It easily allows you to generate small root filesystems: the default root filesystem that is built is just two megabytes, and then you can add more libraries and more applications, but at least the baseline is already small. You can optimize further down if you need, but it's a reasonable baseline. We've got more than 2,300 packages, and that's growing pretty much every day. We generate filesystem images, not a distribution, which is one of the big differences with OE/Yocto, which builds a distribution with binary packages. We don't do that in Buildroot: we build just a root filesystem image that is kind of fixed in stone, and if you want to build a new version of it, you just re-run the tool to rebuild a new image, so we don't have any package management system integrated. It's a vendor-neutral project, fully open source, with contributors coming from different companies and from hobbyists, and it's completely independent from any specific company. The community is pretty active, and I have a bunch of slides about that in a moment. We ship stable releases every three months, and I'll talk more about the release schedule, because there have been some changes and improvements recently. And it started in 2001, which means it's probably the oldest still-maintained build system. I'm not sure, so I'm going to say maybe, but I think it's the oldest still-maintained build system.
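To give an idea of the workflow, here is a minimal sketch of a typical Buildroot session. The defconfig name is just an example (make list-defconfigs shows the current ones), and the git URL may differ:

    $ git clone git://git.buildroot.net/buildroot && cd buildroot
    $ make qemu_arm_versatile_defconfig   # start from a sample configuration
    $ make menuconfig                     # optionally enable more packages
    $ make                                # builds toolchain, rootfs and kernel
    $ ls output/images/                   # the generated images end up here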
So I gave this talk four years ago here at ELC, a "what's new" talk, and since it's been four years, I thought it was time to refresh things for the people interested in Buildroot. A number of things have changed, so we'll talk about the activity of the project, the release schedule, architecture support, toolchain support, a number of infrastructure improvements in the project, testing improvements, and a bunch of other details.

The activity of the project is shown on this first slide: the commit activity per release. We've got approximately 1,000 to 1,500 commits per release, and we ship one release every three months, so it's fairly stable over time, with some variations. We've got about 100 to 110 contributors per release, which is also fairly stable over time: you can see it has grown from 2012 all the way up to now, and we've reached a fairly stable point. The mailing list is fairly active as well, with about 2,000 to 3,000 messages a month. And the number of packages, as I've said, has grown over time, so we're now up to 2,300 packages integrated, and every day people are contributing more packages to suit their own specific needs.

In terms of release schedule, the basics haven't changed much over the last years. We still do one release every three months: one release in February, in May, in August, and in November, and we have never skipped a release or missed a release date, or maybe by two or three days, but not much more than that, which is a pretty impressive achievement. Next year we'll be celebrating ten years of this release cycle, because it started in 2009: 2009.02 was our first stable release.

The change that was made last year is to introduce the concept of a release maintained for a longer period of time. Until 2017.02, we were not doing any really serious maintenance of past releases: as soon as a new release was out, that was the one you had to use if you wanted any sort of support, and we were simply dropping support for any past release. Since 2017.02, we decided to have one LTS release. "Long term" is maybe a little bit of a stretch here, because it's only a one-year maintenance period, but it's still better than just three months. So 2017.02 has been maintained for a year, until 2018.02, which was released just two weeks ago or so, and we will do that every year: now that 2018.02 has been released, we've stopped the maintenance of 2017.02 and started the maintenance of 2018.02 for one year, until 2019.02, obviously. For 2017.02, we made ten point releases, so almost every month there was a point release integrating mainly security fixes and bug fixes. We try to avoid upgrading packages, to avoid breaking people with existing systems: the idea is just to backport security fixes, build fixes, or fixes to license information, things that should normally not break anything for users. We've had almost 100 commits in this branch, whose maintenance stopped two weeks ago, and we've started doing the same on the new release. That's something we will keep doing moving forward, and we hope that users will help us in this effort by reporting the issues they face when using the maintained branch.
So if you are using Buildroot and not upgrading on a three-month rhythm, I would encourage you to pick one of the .02 releases, so that you benefit from this longer-term maintenance effort and can plan on yearly updates of your Buildroot infrastructure. That's, I think, one of the big changes that occurred last year.

In terms of maintenance, we used to have a single-committer project maintainer model, a little bit like the Linux kernel, but of course at a different scale. We have now added two additional committers with the same, let's say, power, so there are three people who can commit to the official repository. I've been part of this team of three, and that has helped integrate more patches, review more work, and get more stuff merged in a more reasonable time frame. We still do physical meetings twice a year, one at ELCE and one at FOSDEM, so it's very European-centric, but maybe one day we'll have one in the US. Once in a while we also have more private hackathons with just the core team, while the meetings at ELCE and FOSDEM are open to anyone who wants to participate. That's a picture from the meeting at FOSDEM in Brussels last month; we had, I think, 14 participants, which was nice and really allowed us to make progress on a number of topics.

Speaking of architecture support, we are probably the build system supporting the largest number of architectures. (I'm not sure what you're seeing here, let me check. Yeah, you have the full slide, OK.) We've got the big, obvious ones like Intel and ARM, but also more specific CPU architectures like m68k, or FPGA-based CPU architectures like MicroBlaze or Nios II. Yes? [Audience question about RISC-V.] So there have been people saying, hey, it would be nice to have support for RISC-V. We were kind of waiting for its support to land upstream in GCC, binutils, the kernel, and glibc, and that has happened, which now paves the way for adding RISC-V support. So it's just a matter of someone being sufficiently motivated to do the few patches that are required; the effort needed to add support for a new architecture is very limited. I think the main effort is not actually introducing the architecture, but maintaining it over the long run. We'll see that we do quite some amount of build testing, and that is what takes the biggest amount of time when you maintain a CPU architecture in Buildroot. I'll get back to that, but yes, RISC-V is definitely on the radar at some point. Yeah, and thanks for doing it.

What we've done in terms of CPU architecture improvements over the last years is mentioned here. We've got noMMU ARM support, for people doing Cortex-M3 and M4, and I should mention that M7 has been added recently as well. We've done a little bit of reorganization around the ARM and ARM64 options, so that if you have an ARM64 SoC but want to build a 32-bit ARM system, you can select that you want to build for, let's say, a Cortex-A53, but still in 32-bit mode. So that's been added.
We've had a lot of work from IBM on PPC64, both big-endian and little-endian support, so it's nice to see contributions from the manufacturer directly. MIPS has been improved quite a bit as well by Imagination, although activity has reduced recently due to, obviously, Imagination changing its strategy a little bit. We've added OpenRISC, C-SKY, and SPARC64 support, and a bunch of other architectures have been improved: m68k, Blackfin, MicroBlaze. We've also dropped a bunch of architectures, and I think Blackfin is on the list to be removed in the near future as well, because it's going to be dropped from the upstream kernel fairly soon.

On the toolchain side, which is obviously an important part of a build system, Buildroot has long supported two models for providing a toolchain. It can build its own: it builds binutils and GCC and glibc, or whatever C library you like, in the right order with the right dependencies; that's what we call the internal toolchain back-end. But it can also reuse existing toolchains; that's what we call the external toolchain back-end. So if you have your Linaro toolchain or your vendor-provided toolchain for your favorite CPU architecture, you can tell Buildroot: please use it, because I trust that toolchain more than what you're going to build.

On the internal toolchain side, we've added support for musl, which is kind of the new kid in town in terms of C libraries. We've moved from uClibc, which was pretty much dead, to its fork called uClibc-ng, which is basically the same project, but maintained by another person who has done lots of work to clean it up, improve the testing, and merge lots of patches that were carried out-of-tree by a number of build systems. We do regular updates of the toolchain components, so pretty much whenever there is a new GCC, binutils, or gdb release, we have patches flowing in to update to that latest version. We have a policy of using not the latest version but the one before as our default, while offering the option of using the latest one; we don't directly switch the default to the latest GCC version, to give some time for us to test the packages and for people to test them as well. So we have support for, for example, GCC 7.x, but our default is 6.x at the moment, and the same goes for binutils, GDB, and so forth; all of these components have been updated whenever needed. We've added link-time optimization support and Fortran support; we sometimes get surprising contributions, but it's there. We used to have a toolchain wrapper, basically a wrapper program that calls GCC, which we were using only for external toolchains, and which we now also use for the internal toolchain, especially to check for a number of bogus flags. If you're cross-compiling but referring to libraries of your host machine, or headers of your host machine, you're probably doing something wrong, so the wrapper catches that and says: ooh, something bad is happening. And we've dropped eglibc, because eglibc no longer exists; it has all been merged back into glibc nowadays.
On the external toolchain side, we've done a bit of internal reorganization in the way external toolchains are handled. It used to be one big single package handling all of the external toolchain logic, which was pretty difficult to maintain, so we split that into individual packages per external toolchain family: one package for the Linaro toolchains, one package for that other external toolchain, and so on. So we've got maybe 10 or 12 external toolchain packages, and it's a little bit easier to maintain. We've improved the wrapper with the include and library path checking, which is also used for the internal toolchain, and, just like for internal toolchains, we've updated to the newer toolchain versions that became available.

As a side note, not directly Buildroot but related: we've started this toolchains.bootlin.com service, which provides a wide range of pre-built toolchains for basically every CPU architecture that Buildroot supports, with the three C library variants, each time in a stable version, which is basically the default version of the toolchain components that Buildroot uses, and a bleeding-edge version, which is the latest of them. So that makes 34 CPU architectures multiplied by three C libraries (they are not all available on all CPU architectures, but when available, they're provided), multiplied by two, one stable and one bleeding edge; I think that's 180 or so pre-built toolchains that we provide, and we regularly update them as Buildroot gets updated. And we don't just build those toolchains: when we build them, we build the toolchain, then a minimal Linux system with a Linux kernel and minimal user space, and we boot that under QEMU to do a minimal validation that the toolchain is reasonably working. It's not a full test, but it's still better than nothing, and only if a toolchain passes all those tests is it published on the site. So if you're looking for a pre-built toolchain, that may be useful. [Audience question: can those toolchains be used outside of Buildroot?] Yes, we do support that; I've been doing that for quite a while. There were a few gotchas, and my next slide talks a little more about that, but yes, those toolchains are built by Buildroot and made to be reusable by anything else. You can reuse them as external toolchains in Buildroot, but you can also use them elsewhere; I use them regularly for building my own kernel or my own bootloader on the side when I do kernel or bootloader work, so they should behave like any regular pre-built toolchain that you can find.
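For illustration, pointing Buildroot at such a pre-built toolchain is a handful of configuration options. This is a hypothetical defconfig fragment: the symbols are Buildroot's external-toolchain options, but the exact set depends on the release and architecture, and the URL here is a made-up example:

    # Hypothetical fragment: aarch64 build using a custom external toolchain
    BR2_aarch64=y
    BR2_TOOLCHAIN_EXTERNAL=y
    BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
    BR2_TOOLCHAIN_EXTERNAL_DOWNLOAD=y
    # example URL only; e.g. a tarball from toolchains.bootlin.com
    BR2_TOOLCHAIN_EXTERNAL_URL="https://example.org/aarch64--glibc--stable.tar.bz2"
    # describe what the toolchain provides, so Buildroot can check it
    BR2_TOOLCHAIN_EXTERNAL_GCC_6=y
    BR2_TOOLCHAIN_EXTERNAL_HEADERS_4_9=y
    BR2_TOOLCHAIN_EXTERNAL_CUSTOM_GLIBC=y
    BR2_TOOLCHAIN_EXTERNAL_CXX=y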
One of the things we've improved, related to that, is the relocatable SDK. When you build with Buildroot, you get a bunch of folders, and one of them is output/host, which is where we install all the native tools: that's where you've got your cross-compiler and all the other tools compiled for the host machine that are necessary for the build to proceed. It also contains the toolchain sysroot, which is where all the libraries and headers that have been cross-compiled for your target are located, so that the cross-compiler can find them. This output/host folder can basically be used as an SDK: it has the compiler, all the libraries, all the headers, so you can give it to application developers, and they can use it to build applications that will run on the target produced by Buildroot. But that SDK was not relocatable until now: if you had built it in, I don't know, /home/foo/buildroot/output/host, it had to be installed at that very same location, which is obviously annoying. We fixed that up with a new "make sdk" target: after the build is finished, you can run "make sdk", and it does a number of things in the output, replacing paths, generating a shell script, and so on, that make the SDK mostly relocatable. Not everything has been made relocatable, but a shell script is generated that people using the SDK have to run once the SDK has been installed, to fix up the remaining absolute paths. That's quite interesting; it's used for the toolchain work, obviously, and you can also use it to provide SDKs to your users.

We've added hashes to packages to verify downloads: to verify that the tarballs and patches being downloaded have not been modified, and also that the license files have not been modified compared to what you expect. It's basically a very simple file that sits next to every package makefile and provides the hashes (a sketch of such a file appears below). The tarball and patch hashes are verified when you extract the tarballs and apply the patches, and the license file hashes are verified when you generate the licensing report. There is a nice "make legal-info" target that produces a manifest to help you with license compliance, and as part of that, it verifies that the COPYING file or the LICENSE file is still the same as what we expect. It's a pretty nice feature, and almost all our packages now have hashes available. It for example allows us to detect when an upstream re-uploads a new tarball under the same name without making it a new release: the hash has changed, and you don't know what has changed in the tarball.

Speaking of the licensing report: as I've said, it's been there for more than four years, but we've done a bunch of improvements. We're now using SPDX license identifiers more consistently, so all the license information is encoded in a more standardized format. We've added hashes for license files, as I mentioned. We've added support for storing the source code of binary artifacts, which is especially useful for pre-built toolchains: when you download a pre-built toolchain, you download binaries, but as part of your license compliance process you also want to ship the corresponding source code. So we have a new variable in packages where they can specify: this is the binary to use for the build, but for license compliance, here is the actual source tarball that contains all the GCC and binutils source code, so that you can provide a set of tarballs that meet your license compliance requirements.
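Here is the sketch of a package hash file, for a hypothetical libfoo package, with placeholder digests standing in for the real sha256 values:

    # package/libfoo/libfoo.hash (hypothetical)
    # locally computed
    sha256  <sha256-of-the-tarball>  libfoo-1.2.3.tar.xz
    # hash of the license file, checked by make legal-info
    sha256  <sha256-of-COPYING>      COPYING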
And last but not least, we've added license details to a large number of packages, to the point where almost all packages have license information nowadays; there are fewer than a hundred packages still left in the queue, and your patches are welcome to help in that direction.

Another thing we've added is BR2_EXTERNAL. If you're familiar with the concept of a layer that Yocto/OE has, and to some extent OpenWrt as well, it's kind of a simplified form of that; it's somewhat simplified compared to what OE is capable of doing, but it does help in a number of situations. Basically, BR2_EXTERNAL allows you to point Buildroot to another folder which contains package definitions, defconfigs, and other build-related files and artifacts that you need for your build. In companies, it helps separate the open source Buildroot, with its open source packages, recipes, and makefiles, from your own in-house stuff, in two clearly identified locations. Some people don't necessarily use that and prefer to use git and maintain branches to separate their own work from the mainline Buildroot work, but some people felt it was clearer to have two really separate folders, two separate git repositories, and BR2_EXTERNAL helps in doing that. It's been available for about four years now, and over time we've improved it to support multiple BR2_EXTERNAL trees, so you've got multiple places where you can put your custom package definitions if you need to, and we've improved it so you can implement bootloader packages or filesystem image formats in your own external tree as well.

Next, package infrastructures. Buildroot factorizes a lot of the build logic when your package uses a standardized build system. If you're using autotools or CMake, or you're building a Python package, it doesn't make a lot of sense to repeat in hundreds of packages the same configure/make/make install logic with all the variables that you have to pass, so we have these package infrastructures that factorize that common logic. We already had autotools, CMake, and Python infrastructures, and probably a few more, but we've added many more over the last few years. We've extended the Python package infrastructure to support Python 3; we've added Perl (and I'm intentionally skipping "virtual" for a moment), waf, rebar for Erlang packages, kconfig for Kconfig-based packages, and a bunch of others. The virtual package infrastructure allows creating virtual packages; it's not really a package infrastructure like the others, but it helps us support OpenGL or JPEG or udev, those packages that provide an API but potentially have multiple different implementations behind the scenes. We've got a kernel-module infrastructure to help packages that build kernel modules; it does very few things, but they were repeated in many packages, so we factorized that. And we're adding more: on the radar, I think we have a Go package infrastructure and a Meson package infrastructure, and probably a bunch of others that may come in the future. As an example, here is roughly what a package built with one of these infrastructures looks like.
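This is a minimal sketch of a hypothetical package using the autotools infrastructure; libfoo, its site, and its version are made up, but the variables follow Buildroot's naming conventions:

    # package/libfoo/libfoo.mk (hypothetical package)
    # Buildroot derives the download, extract, configure, build and
    # install steps from these variables.
    LIBFOO_VERSION = 1.2.3
    LIBFOO_SOURCE = libfoo-$(LIBFOO_VERSION).tar.xz
    LIBFOO_SITE = https://example.org/releases
    LIBFOO_LICENSE = MIT
    LIBFOO_LICENSE_FILES = COPYING
    LIBFOO_DEPENDENCIES = zlib
    LIBFOO_CONF_OPTS = --disable-examples

    $(eval $(autotools-package))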
Graphing: around the core job of building your system, we also provide a number of tools to analyze what your build looks like. We can build graphs of the dependencies between packages, and of the time it takes to build the system, to analyze why your build is so long. We've added facilities to graph the size of the filesystem on a per-package basis (this was supposed to be two slides here, but I'm not sure how it shows up), and we've also added a way of graphing the reverse dependencies of packages. That's pretty nice to analyze who is bringing a given package into the dependency tree, or why your root filesystem is so big: which package is contributing this amount of kilobytes or megabytes to the overall root filesystem, which helps in optimizing the root filesystem footprint.

On the infrastructure level, we've restructured the skeleton a little bit. The skeleton is, in Buildroot speak, the base of the root filesystem: it doesn't contain any program or anything compiled, just a basic directory hierarchy, a few init scripts, config files, and things like that, which form the base of every root filesystem. It used to be handled in a very special way, and we've changed it to just be a normal package, which all the packages that you build depend on, so it's now part of the normal build and package dependency logic. That package has actually been split into several sub-packages to handle the different init systems we support: if you're using a SysV-based init system or a systemd-based init system, you get different skeletons. I mean, if you're using systemd, having init scripts in /etc/init.d doesn't make a lot of sense, so we split that up, factorized the common part into a common package, and reorganized all those things, so it's much better supported these days. This allowed us to add support for a read-only root filesystem with systemd, which was not nicely supported until then. We also support the merged /usr, where /usr/bin and /bin, and /usr/lib and /lib, are the same folders, which is also used by systemd. So there's been a bunch of, basically, better support for systemd-based systems in Buildroot over the last few years.

On the filesystem side, there haven't been that many major improvements, mainly minor improvements here and there. For the ext2/3/4 filesystem formats, we now use mkfs.ext2/3/4, which upstream has grown the capability of generating filesystem images, not just making an empty filesystem on an existing block device, instead of the previously used genext2fs, which was a little bit limited. We've added support for AXFS, kind of a weird filesystem, but it was used by some Buildroot users. The ISO9660 filesystem support has been completely rewritten: we support GRUB 2 and ISOLINUX as bootloaders, and initramfs as well as pure ISO9660 scenarios, so all of that has been made more flexible and extensible. We now use a tool called genimage more extensively; it's made by the folks at Pengutronix, and it helps automate the process of creating an SD card image composed of multiple partitions containing different filesystem formats. You can say with genimage: I want a first partition in FAT32 format with those files inside, another ext4 partition of that size with those files inside, and yet another ext4 partition with these other files, and it generates a completely ready-to-use image. I say SD card image, but it can obviously be used for eMMC as well, or any other block device. This lets Buildroot produce an SD card image that you can just dd onto your eMMC or SD card and be completely ready to go, which is pretty nice, and we've updated a number of our defconfigs, the default configurations that support popular development boards, to use this genimage tool.
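To make that concrete, a genimage configuration looks roughly like this; a hypothetical sketch with made-up file names and sizes, in the style of the genimage.cfg files shipped with Buildroot's board defconfigs:

    # hypothetical genimage.cfg: FAT32 boot partition plus ext4 rootfs
    image boot.vfat {
        vfat {
            files = { "zImage", "board.dtb" }
        }
        size = 32M
    }

    image sdcard.img {
        hdimage {
        }

        partition boot {
            partition-type = 0xC
            bootable = "true"
            image = "boot.vfat"
        }

        partition rootfs {
            partition-type = 0x83
            image = "rootfs.ext4"
        }
    }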
The way we handle customization in Buildroot is very often by calling scripts at various points of the build: we don't support very funky use cases in Buildroot itself, but we give people who have those funky use cases the possibility to plug in their own custom secret sauce at various points in the build. We used to have, and still have, a hook at the end of the build, when all packages have been built but before the image is created, and a hook at the very end of the build, when all images have been created, where you can call any number of shell, Python, or Perl scripts to do your secret sauce. And we've added one inside the fakeroot environment: while the image is being built, we use fakeroot to make Buildroot believe it is root while building the root filesystem image, and you can now call a custom script at that point as well, which allows for more customization (a configuration sketch follows at the end of this section).

Another topic, not specific to Buildroot but kind of a trend in a number of other build-related projects, is build reproducibility: given a certain configuration, having the ability to reproduce the exact same build down to the byte level, or the bit level. You do your build, you hash your filesystem image, then you do the same build six months later, and you get the exact same filesystem image with the same hash. We're not there yet; what has been added are the basics. We've got an option that says "I want my build to be reproducible"; it sets a variable that is observed by GCC and a number of packages to avoid using timestamps, which obviously break the reproducibility of the build, and a number of other things have been tweaked here and there. But a lot more remains to be done, and if there's one area where contributions are welcome, that would be it: the people who started that effort are no longer really active, so help is welcome to push it further.

On the package side, which is obviously where the majority of the contributions are made and the majority of the activity happens, it's kind of hard to summarize what has happened: we've added more than 1,000 packages in those four years, lots of things, from small and not very commonly used to bigger items. But I tried to come up with the big ones. We've added SELinux support, contributed by people in the aerospace industry. Qt5 was already there, but it's been upgraded to 5.9 with many different components, and GTK and EFL have been upgraded. OpenCV and Kodi have been added. The support for languages has been improved, with Go, Mono, and Rust being added, so if you want to use one of those languages on your embedded system, that's possible, and we've added gazillions of Python modules, Perl modules, Erlang modules; lots of people are now using not only C and C++ but many more languages, and we have support for that in Buildroot. Docker, AUFS, all those container technologies have also been added to Buildroot, and there are still patches pending that add more of those packages. The system upgrade field is also covered, with solutions like SWUpdate and RAUC; I think we still don't have a package for Mender, but hopefully someone will contribute that soon. Another area where a lot of work has been done is hardware support: more and more people have been enabling Buildroot on their platforms, adding the corresponding packages to support OpenGL and other more hardware-specific aspects, PRUs for the BeagleBone platforms, and so on, and networking and other things have been added as well. So there are really plenty of packages that have been added.
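Here is the configuration sketch mentioned above, wiring the three hook points and the basic reproducibility switch into a .config; the option names are Buildroot's, while the board/acme scripts are hypothetical:

    # custom scripts run after the build, inside fakeroot, and after
    # image generation (hypothetical paths)
    BR2_ROOTFS_POST_BUILD_SCRIPT="board/acme/post-build.sh"
    BR2_ROOTFS_POST_FAKEROOT_SCRIPT="board/acme/fakeroot.sh"
    BR2_ROOTFS_POST_IMAGE_SCRIPT="board/acme/post-image.sh"
    # ask for a reproducible build (timestamps suppressed, etc.)
    BR2_REPRODUCIBLE=y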
Another aspect where we've done a lot of work is everything around, I would say, QA: helping maintain the overall build system in a fairly decent shape. As part of that, we've added a runtime testing infrastructure: a way to describe test cases, run them under QEMU, and check that things run as expected. There's a very small test here that says: build a configuration that has Dropbear, boot it under QEMU, and verify that there is an SSH server running on port 22 (I'll show a sketch of such a test below). We've added more and more tests; we still don't have as many as we'd like, but we're adding more, and that has helped catch a number of issues. Nowadays, for some specific bugs that we get, we add the corresponding test cases to catch the problem in the future, which is really nice, and more and more people are relying on that testing infrastructure to write changes in Buildroot and test them.

On the CI side, we already had a CI effort called autobuild.buildroot.org. Basically, what we do there is: we have a fixed set of about 50 architecture/toolchain configurations, we pick one of those at random, generate a random selection of packages, build that, and see whether it builds or not. That helps a lot to catch missing dependencies, specific combinations of packages that have not been handled properly, or "oh, we've upgraded that library, but it's no longer compatible with that other package", and things like that. This is running 24/7 on multiple build machines, I would say five or six machines, and it's amazing the number of problems this has allowed us to figure out and fix; it's still running as we speak. And as part of bringing up a new CPU architecture, looking back at the RISC-V question: when we add a new CPU architecture, we add a configuration for that architecture to this system, which means all our 2,000-plus packages start being built on that architecture. So if your GCC or binutils support is not up to speed, you'll get tons and tons of build failures, because your GCC port is not good enough or your binutils support is not good enough, and that's what we expect from people maintaining a CPU architecture: that they help us fix all those problems.

In terms of other build testing, there have been a few improvements, but mainly it's been the same. What we've added: we're testing our defconfigs, the ones that build minimal systems for a number of popular development boards, on GitLab CI, so we make sure they build, and the runtime tests I was describing a moment ago are also run on GitLab CI. We added support on autobuild.buildroot.org for testing multiple branches: until then, we were only testing the master branch of Buildroot, but now we can also test the long-term support release, so when we make new commits to the maintenance branch, it continues to be tested, and we can figure out whether a minor update to a package causes any build breakage. Another thing we've added, which has helped a lot in improving the results, is notifications sent to the relevant developers whenever there's a package failure; I'll get to that in the next slide.
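For illustration, the Dropbear runtime test described above looks roughly like this; a sketch based on Buildroot's Python test infrastructure in support/testing, written from memory, so details may differ:

    import os
    import infra.basetest

    class TestDropbear(infra.basetest.BRTest):
        # configuration fragment that the test builds before booting it
        config = infra.basetest.BASIC_TOOLCHAIN_CONFIG + \
            """
            BR2_PACKAGE_DROPBEAR=y
            BR2_TARGET_ROOTFS_CPIO=y
            """

        def test_run(self):
            # boot the generated initramfs under QEMU
            img = os.path.join(self.builddir, "images", "rootfs.cpio")
            self.emulator.boot(arch="armv5", kernel="builtin",
                               options=["-initrd", img])
            self.emulator.login()
            # check that an SSH server is listening on port 22
            _, exit_code = self.emulator.run(
                "netstat -ltn 2>/dev/null | grep 0.0.0.0:22")
            self.assertEqual(exit_code, 0)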
A little bit like the Linux kernel has its MAINTAINERS file, which says "for this driver, this is the set of people in charge", we have a file called DEVELOPERS. It's not really maintainers, but basically people who care about some part of Buildroot: some set of packages, some architectures, the documentation, or anything like that. It's used, like in the Linux kernel, to send patches: you can look up that file, and we have a small tool that does it for you; you feed it a patch, and it tells you to send that patch to this person, that other person, and the mailing list. But we also use it in conjunction with the autobuilders: whenever there's a failure for a given package, the autobuilder looks it up in that file and says "this person is likely to be interested in being notified about failures on that package", and that person gets notified. Developers in that file receive every day a summary of the issues that affect their packages or their architecture, if there have been any failures, of course. That has helped a lot in raising the attention of people who are not actively monitoring the build results: they realize "oh, my package has a problem, I can probably spend half an hour looking into it" and submit the corresponding fixes.

We've added the check-package script: a little bit like the kernel has checkpatch to verify your patches, check-package verifies a bunch of very basic and silly rules we have about what your package should look like to meet the coding style, which also helps avoid, well, stupid review cycles on the mailing list. We've added the test-pkg script, whose output you can see on the bottom right of the slide; it's basically a small script that will build-test your package on all the toolchain/architecture configurations that we test in the autobuilders, so it shows you whether your package is going to break the autobuilders or not if it gets merged. That's nice to run before submitting a new package. And we've got some generator tools: I mentioned scanpypi, but I should mention scancpan as well, which use respectively PyPI for Python and CPAN for Perl, and automatically generate the corresponding Buildroot packages. You say "scanpypi" plus the name of a Python package, and it generates all the Buildroot packages you need to build that Python package into your Buildroot system, which is pretty nice and helps maintain Buildroot packages for those interpreted languages (a hypothetical session with these helpers is shown at the end of this section).

Moving on to other miscellaneous improvements. The Linux kernel package has been improved to support what we call extensions: basically, features that require patching the kernel, things like the Xenomai or RT kernel patches; sometimes specific drivers are not standalone kernel modules but really need to patch the kernel itself, so we have ways to express that and properly package those extensions. We've also created a small infrastructure for Linux tools: it's been kind of a trend in the kernel over the last five years or so to have not only the kernel itself but a bunch of user-space tools whose source code is part of the kernel tree, which from a packaging point of view means that the kernel tree is not only the kernel anymore, but also a number of user-space applications, like perf or the GPIO or IIO tools, and we've added mechanisms in Buildroot to build those user-space applications if you need them. We've reworked the gettext handling; that was a fairly big effort, not that visible to the final user, but internally it made a lot of things much clearer and solved a number of build issues. We've added checks on the binaries that Buildroot produces, to verify that they are really built for your target architecture and you did not build them for the host architecture.
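A hypothetical session with these helper scripts might look like this; libfoo is made up, and the exact paths and options may vary between releases:

    # coding-style check before posting a new package to the list
    $ ./utils/check-package package/libfoo/*

    # build-test the package against the autobuilder toolchain configs
    # (libfoo.cfg is a config fragment enabling BR2_PACKAGE_LIBFOO)
    $ ./utils/test-pkg -c libfoo.cfg -p libfoo

    # generate Buildroot packaging for a Python module from PyPI
    $ ./utils/scanpypi flask -o package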
And recently, support for hardening features like RELRO and FORTIFY_SOURCE has been added; we've got more and more people interested in security hardening in general. The LTS effort is part of that, as are SELinux and RELRO/FORTIFY, so there's a pretty strong set of people looking into improving security at the build system level.

So what's coming up next, what do we have on the radar? We have a git download cache. Right now, if you fetch a package from a git repository, we just do a clone and then generate a tarball on the side, so that if you do the build again with the same git repo and the same version, it reuses the tarball you have locally available, and you can do an offline build. But if you change the version of that package, it does a clone again from scratch and really downloads everything, which means that if you're just bumping your kernel version from one tag to the next, every time you do this update you pay the price of a complete git clone, which isn't really nice. We have patches in the backlog that avoid that by keeping a cache of the git repositories, so you only download the new git objects instead of re-downloading everything. Hopefully that should land at some point in the near future, but it takes a while to integrate that kind of core functionality.

We're also looking into adding per-package out-of-tree builds. Right now, when Buildroot builds a package, the source code gets extracted into a folder called output/build/<package>-<version>, and we do the build in there. So if a package gets built two times, once for the host and once for the target, we extract it twice and build each copy in its own source folder. Also, if you use a feature like local packages or override-srcdir, to say "my source code is already locally available somewhere else on my system", Buildroot has to rsync the source code into the tree and build it there, which is a little bit annoying. So we want to bring per-package out-of-tree builds; in theory it's not very complicated, but it requires a lot of cleanup in different places, so it's more effort than you might initially think, but hopefully we'll get there.

Another topic I was working on at the end of last year, and that I hope to get back to in the next few weeks or months, is top-level parallel build. Right now, Buildroot builds each and every package in a completely sequential way: inside each package we use make -j<N> to make use of your multiple CPU cores, but the packages themselves are built sequentially with respect to each other, so that we can guarantee that the build is reproducible. We've started doing some work to achieve the same reproducibility even when building packages in parallel; patches have been posted on the mailing list, and people are invited to test them and see what it gives in terms of build results. In practice, we have seen up to a two-times reduction in build time in most use cases, and it's pretty nice to divide your build time by two.

There's also an effort to improve package tooling. I've recently contributed a mechanism to track upstream releases using release-monitoring.org, a really great web service that tracks upstream projects; I think it tracks 16,000 of them. It allows you to notice: oh, BusyBox has made a new release, maybe we need to upgrade our package as well, because the new release might have interesting bug fixes, security updates, or things like that.
It's nice to have some automated way of doing that when you have more than 2,000 packages. Some other folks are looking at tracking CVEs using the NIST database, in the same kind of direction: my system is using BusyBox 1.28 or whatever; are there known CVEs affecting this BusyBox version, should I upgrade, or should I backport some security fixes? That's being worked on as well. And as I mentioned, new package infrastructures, Go, Meson, and perhaps others, will come; at least for Go, patches have been submitted, and Meson should be done in the near future, as people are interested in that build system and we're gradually converting some packages to it.

So, to wrap up: the project is active, still releasing every three months; we have this new LTS thing and the relocatable SDK; the package set is richer and richer and kept updated, and we're adding tooling to keep it even more up to date; the testing effort has been significantly increased, and it could be improved even further, obviously, but it's still better than what it was; and I think we have interesting new features on the roadmap. If you're new to Buildroot and have never used it, there is a tutorial organized as part of the Embedded Apprentice Linux Engineer track, I think I got the acronym right, which is going to take place on Wednesday at 2:30 p.m. It comes with practical hands-on labs on the PocketBeagle platform; I think the seats for actually doing the lab with the PocketBeagle are sold out, because they don't have enough boards, but you can still join the session, see what people are doing, and look at the slides, if you're interested.

With that, I think I'm done, and I apparently have a minute and thirty seconds for questions. Questions? Yes, please. Sorry, can you repeat? I'm not sure the question makes sense: I mean, Buildroot is an alternative to Yocto, or to OpenWrt, or to OE, right? So, yes? Either I'm not understanding the question, or you're confusing what Buildroot and OpenWrt are: Buildroot is an alternative to OpenWrt and to OE, or to, I don't know, PTXdist or something like that, so you're going to use one or the other, right? No, no: OpenWrt was, a very long time ago, based on Buildroot; it's a fork of Buildroot, but that dates back 15 years or something like that, and nowadays, besides the fact that both of them use Kconfig from the kernel, they have pretty much nothing in common anymore. Another question perhaps? Yes, please. [Question about the availability of source tarballs.] So we have this sources.buildroot.net mirror where we keep the tarballs, and we never delete them; so far we have never deleted any file from there, so we've got tarballs dating back 10 years. The only packages for which that is not true are packages where you can define an arbitrary version, like the kernel: we can't mirror every possible kernel version. But for all the packages where the version is fixed in the package recipe, we have all the tarballs; I think we have a cron job that runs every day and downloads the tarballs for all the packages, and we never delete them.

There was a question, yes, Olaf? [Question about why the toolchain sysroot makes top-level parallel builds hard.] So, the host directory contains the sysroot, right? What we want to avoid is having a package building while the sysroot seen by the compiler is being modified by more libraries being added, right?
So if you have a configure script that runs, and while it runs, at some step you have library A not installed, and then suddenly library A shows up in the sysroot: you check whether a header is there, it's not, then you check whether the library is there, and suddenly it is, right? So, in an ideal world, if the dependency annotations are all correct, you don't need that, but the ideal world doesn't exist, and in practice it's almost impossible to catch all the optional dependencies that packages are checking for, and they change all the time. I mean, you upgrade to a new version of package foo, and the upstream developers decide "oh, if you have library blah in your sysroot, I'm going to use it", and checking all of that every time we do a version bump is unrealistic. So in an ideal world, yes, but in practice it's really not possible to do that.

Any other question? Are we done? I've got plenty of Buildroot stickers here. Thank you for joining. Don't hesitate to come and pick up as many stickers as you want. Thank you.