Hello, good morning, good afternoon, good evening, or good night depending on where you are in the world, and welcome to my talk about Buildroot. Today I'll be presenting what's new in Buildroot. My name is Thomas Petazzoni and I work for Bootlin, where I am the Chief Technical Officer. We are an embedded Linux services company based in France. We do a lot of embedded Linux development in the areas of bootloaders, Linux kernel drivers, Yocto Project integration, Buildroot integration, and complete Linux BSP development. So we do engineering services, but also trainings, and all our training materials are freely available online; many of you may already have come across them. I happen to be one of the co-maintainers of Buildroot, a project I've been contributing to for the past 10 or 11 years now. And I'm currently living in Toulouse, in the southwest of France. Before we get into the actual topic of what's new in Buildroot, let's review a little bit what Buildroot actually is. It is an embedded Linux build system which allows everyone to build, from source code, a cross-compilation toolchain and a root filesystem with pretty much as many libraries and applications as you want, all built using cross-compilation, so you can leverage a fast build machine to build a Linux system for your relatively not very powerful target. It also allows you to build, of course, a Linux kernel image and as many bootloader images as you want, such as U-Boot, Barebox, or other bootloaders. It allows you to build a simple system in a relatively small amount of time: our default configuration can build in a few minutes. And it's easy to use and understand thanks to the use of kconfig, which is the same configuration system as the one used in the Linux kernel, and the use of make for describing all the internal logic of Buildroot and how to build the different packages.
It allows you to generate very small root filesystems: our default root filesystem is only 2 megabytes in size, containing just BusyBox and a C library. Of course, you can add many more packages using the more than 2,500 packages we have, but at least the default is minimal and small. We generate only filesystem images, not a complete distribution with binary packages that you can add, remove, or upgrade on an individual basis. We really only generate filesystem images. We are a vendor-neutral project: we don't have any single company behind us. It's really an open-source, community-developed project which has been around for a long time. It started in 2001, so it's probably the oldest still-maintained embedded Linux build system. The community is very active, as we will see in some of the next slides, and we ship stable releases every three months. If you want to learn more, of course, buildroot.org is the place to go.

So today, we're going to discuss what's new in Buildroot within the last two years. It's the kind of talk I give regularly to update the embedded Linux community about what's changing and improving in Buildroot. We'll be covering what we have improved in Buildroot from the release 2018.05 to the recently released Buildroot 2020.05. More specifically, we'll review some community activity metrics, the release schedule, some architecture support changes, some toolchain support improvements, package infrastructure improvements, improvements to our download infrastructure, and some interesting package updates and additions. We'll talk about reproducible builds, about top-level parallel builds, and also about some important tooling improvements that we've made over the past two years.

Let's start with the activity of the community. This graph shows the number of commits per release, and we do one release every three months. We can see on that slide that the number of commits is pretty consistent from one release to the other.
We're between 1,400 and 1,600 commits in every release, with a good spike recently. So that's showing a good level of activity in the Buildroot community. The number of contributors is also an important metric in every open-source community, and here we can see that even in the last three releases we slightly increased our number of contributors. We have approximately 120 to 140 contributors for every release, which is nice. The mailing list activity is also a good metric. We can see it's pretty constant over time as well, with between 2,000 and 3,000 emails per month on the mailing list. That's a pretty significant amount of traffic, which is in part due to the fact that all patches and reviews go through the mailing list, just like the Linux kernel is doing.

Our release schedule changed a little bit recently. What we already had was four releases a year, in February, May, August, and November. So we have a three-month release cycle, with two months of development and one month of stabilization, and this release schedule has been in place since 2009, so we've been doing that for over 10 years now. What we've more recently added is the long-term support release. Every release made in February, so 2020.02 for example, or 2019.02, is going to be supported for one year, which is an improvement over the support we had before, which lasted just three months, until the next release was made. In these LTS branches, we provide security updates and bug fixes. This is very useful if you're doing embedded Linux products, so that you can more easily have access to security updates and bug fixes. To achieve that, we have a maintenance branch open for each of those LTS releases. We started that with 2017.02, three years ago, and we had 11 point releases for that LTS branch with approximately 800 commits, and then we continued with 2018.02 and 2019.02. Each of them had between 11 and 12 point releases, which we make approximately every month.
And you can see the number of commits increasing as we track more and more security vulnerabilities and their fixes. Today, the currently maintained LTS release is 2020.02. We've already made three point releases with approximately 340 commits so far, which of course is increasing as we speak and as we find security issues and bugs to fix.

In terms of CPU architecture support, we've added support for RISC-V, 32-bit and 64-bit, obviously a very popular CPU architecture these days. We also added support for NDS32, a CPU architecture made by Andes Technology, and they contributed the support for this new CPU architecture themselves. Support for new variants of existing architectures was added, things like new ARM Cortex cores, x86 cores, MIPS cores, and so on and so forth. And the Blackfin CPU architecture support was removed: it was removed from the upstream Linux kernel, so it of course makes sense to remove it from Buildroot as well. Overall, we have support for a really wide range of CPU architectures: ARC, ARM, AArch64, C-SKY, m68k, MicroBlaze, MIPS, NDS32, Nios II, OpenRISC, PowerPC, RISC-V, SuperH, SPARC, x86, and Xtensa, which probably makes Buildroot the build system that has the widest CPU architecture support.

These architectures of course need to be supported by a toolchain so that we can cross-compile code for your CPU architecture, and we have two toolchain backends in Buildroot. The first one is called the internal toolchain backend. This is the backend that allows Buildroot to build its own toolchain from source. We haven't had a lot of significant changes there, mainly regular updates: we've updated GCC to GCC 8 first, then GCC 9, and we have patches for GCC 10, so this is coming up soon. We've removed support for older versions of GCC, such as 4.9 and 5, so these are really our regular updates.
binutils was updated, and the C libraries have been updated as well: uClibc-ng, musl, and glibc, which we all support. We are doing some really nice testing of these toolchain capabilities using the toolchain-builder project; especially Romain Naour from Smile is doing a lot of QA and CI work, which allows us to test these toolchain components and even report bugs to the upstream projects.

Moving forward, the second backend we have for a toolchain is the external toolchain backend. It allows you to use an existing pre-built toolchain that you have from your hardware vendor or other third parties. In there, we added support for more ARM toolchains: AArch64 big-endian toolchains from ARM and Linaro were added; since the NDS32 architecture was added and Andes provides a toolchain for that CPU architecture, we have support for that as well; and we did many updates to the other existing toolchains. Another thing we did is allow declaring external toolchains in BR2_EXTERNAL trees. BR2_EXTERNAL is the mechanism that Buildroot provides to allow you to store your own custom packages, recipes, and configurations outside of Buildroot itself, to make it easier to update Buildroot in the future and to more clearly identify what is custom and specific to your project versus what comes from mainline, upstream Buildroot.

The package infrastructures in Buildroot are really key. They factorize the common logic that describes how to configure, build, and install packages that use some kind of standardized build system. A good example is autotools-based packages. You build them by doing ./configure, make, make install, but repeating that logic in each and every package that uses autotools as its build system would be annoying and difficult to maintain. So we have the concept of package infrastructures in Buildroot, which factorize that logic in a common place.
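To make this concrete, here is a sketch of what an autotools-based package file looks like; the package name, version, and URL below are hypothetical, but the variable names follow the usual Buildroot conventions:

```makefile
# package/libfoo/libfoo.mk -- hypothetical autotools-based package
LIBFOO_VERSION = 1.2.3
LIBFOO_SOURCE = libfoo-$(LIBFOO_VERSION).tar.xz
LIBFOO_SITE = https://example.com/releases
LIBFOO_LICENSE = LGPL-2.1+
LIBFOO_LICENSE_FILES = COPYING
# Options passed to ./configure; the infrastructure itself takes care
# of running configure, make and make install.
LIBFOO_CONF_OPTS = --disable-examples
$(eval $(autotools-package))
```

The last line is what pulls in the shared logic: the package file only provides metadata and options, and the autotools-package infrastructure does the rest.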
And we are adding more and more of those package infrastructures for new build systems that appear, or at least that gain support in Buildroot. Over the past two years we added support for three new package infrastructures: golang-package, which as the name suggests is for Go-based packages; meson-package, which supports the Meson build system, which is becoming very, very popular; and, very recently, a qmake-package infrastructure for qmake-based packages. qmake is the build system mainly used in the Qt world. Of course, we already had support for autotools, CMake, kconfig, LuaRocks, Perl, Python, Erlang, waf, and kernel modules, so we are simply extending that with support for more package infrastructures.

To illustrate that, I have an example of a golang package here, docker-cli, which is the command-line tool to communicate with the Docker daemon. As you can see in this example, we describe in this package makefile how to build the docker-cli project: we describe that it is available from GitHub at a given version, that it has a certain license, and then we have a few variables that describe how to build it. But the crux of the logic is built into the golang-package infrastructure, which is invoked on the very last line of the example, and this is really where all the logic happens. We don't have to describe step by step how to configure, build, and install this package; this is all encapsulated in the golang-package infrastructure. Another example is libmpdclient, which this time is using the Meson build system. You can see that the package makefile is very simple. We don't have to describe how to configure, build, or install this package; we only have to provide metadata such as the version, the location of the tarball, its license, and a few other things, and that is sufficient for Buildroot to build this package.

Our download infrastructure was improved as well.
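Before moving on, here is a sketch of the docker-cli and libmpdclient package files just described; the versions, URLs, and license values below are illustrative stand-ins rather than the exact contents of the Buildroot tree:

```makefile
# package/docker-cli/docker-cli.mk -- golang-package example (values illustrative)
DOCKER_CLI_VERSION = 19.03.11
DOCKER_CLI_SITE = $(call github,docker,cli,v$(DOCKER_CLI_VERSION))
DOCKER_CLI_LICENSE = Apache-2.0
DOCKER_CLI_LICENSE_FILES = LICENSE
# The golang-package infrastructure invokes the Go compiler itself.
$(eval $(golang-package))

# package/libmpdclient/libmpdclient.mk -- meson-package example (values illustrative)
LIBMPDCLIENT_VERSION = 2.19
LIBMPDCLIENT_SOURCE = libmpdclient-$(LIBMPDCLIENT_VERSION).tar.xz
LIBMPDCLIENT_SITE = https://www.musicpd.org/download/libmpdclient/2
LIBMPDCLIENT_LICENSE = BSD-3-Clause
LIBMPDCLIENT_INSTALL_STAGING = YES
$(eval $(meson-package))
```

In both cases the final `$(eval ...)` line hands everything over to the relevant infrastructure, which is why the package files stay so short.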
This download infrastructure is the code in Buildroot that downloads the source code of the different packages that we are going to build: the source code for your Linux kernel, for U-Boot, for Qt, for all the userspace packages or kernel modules that you are going to build. This download infrastructure already had capabilities to download from Git, HTTP, FTP, Mercurial, CVS, Subversion, and others. But one really key thing we changed is the addition of Git caching. When you were fetching the source code for a package from Git, we used to do a complete Git clone, then retrieve the specific version you were interested in, create a tarball out of that, and throw away the Git clone. That meant that each time you wanted to fetch a new version of the same project, such as the Linux kernel, you would have to do a complete clone again, which was very long, bandwidth-consuming, and so on. What we are doing now is keeping that clone of the Git repository for every package in the download cache, so that it can be reused for other downloads in the future. This is illustrated on the right side of the slide, where we can see the uboot folder, which is in your download directory. We have the different tarballs; you can see uboot-2018.11 and uboot-2019.04, for example. And next to that we have a git subfolder where we store a Git clone of the project. So whenever you are going to retrieve U-Boot releases using Git, it's going to use that clone to avoid re-downloading everything. As you can see on that slide, our download directory was also reorganized to have per-package subdirectories. We used to have a flat organization where all tarballs were thrown together, with no subdirectories, into your download folder; now we have per-package subdirectories.

Another thing that we're constantly doing in Buildroot is of course adding more packages and updating the existing ones.
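Coming back to the download directory for a moment: its reorganized layout with the Git cache looks roughly like this (package and version names here are just examples):

```text
dl/
├── linux/
│   └── linux-5.4.43.tar.xz
└── uboot/
    ├── uboot-2018.11.tar.bz2
    ├── uboot-2019.04.tar.bz2
    └── git/          # cached Git clone, reused for future Git fetches
```

Each package gets its own subdirectory holding both the generated tarballs and, when the package is fetched over Git, the persistent clone.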
Between Buildroot 2018.05 and 2020.05 we've added a bit more than 400 packages, which is quite a lot. We've removed a few packages, but that rarely happens: we removed the individual x.org proto packages because they've all been merged into a single project, Qt4 has been removed because Qt5 has been around for long enough, and GStreamer 0.10 has been removed because GStreamer 1 has been around for long enough. In terms of significant package additions, we've added Rust support, with the compiler and the Cargo package management system. We've added support for LLVM/Clang, not yet as a compiler but as a library that can be used, for example, by the Mesa 3D OpenGL implementation. We've added support for Mender, an over-the-air update system; for OpenJDK, a Java implementation; and for the OpenRC init system, which originates from the Gentoo distribution but can now be used in Buildroot instead of systemd or the BusyBox init. We've added support for OP-TEE, the secure trusted execution environment that is used mainly on ARM platforms. We've added support for GObject introspection, for the AppArmor security modules, and for a whole pile of Perl and Python modules. Also, as I said, we've done many updates to existing packages: Qt was updated, X.org was updated, GStreamer, Wayland, Weston, Kodi, and many, many more. At the bottom of the slide, you can see how many packages have been updated and how many updates we've done: I've counted over 4,000 updates over the past two years to the various packages that we have.

In terms of hardening and security, we received some contributions from Collins Aerospace to be able to build the entire set of userspace packages with a number of security hardening features available at the toolchain level. We've improved support for stack protection, we've added support for RELRO, and we've added support for buffer overflow detection using the _FORTIFY_SOURCE option provided by some C libraries.
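These hardening features are regular configuration options; a defconfig fragment enabling them might look like this (option names as I understand them from current Buildroot, so double-check against your version):

```makefile
# Defconfig fragment (illustrative): toolchain-level hardening
BR2_SSP_STRONG=y           # -fstack-protector-strong for all packages
BR2_RELRO_FULL=y           # full read-only relocations
BR2_FORTIFY_SOURCE_2=y     # -D_FORTIFY_SOURCE=2 buffer overflow detection
```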
This is now tested in our CI so that we can verify that as many packages as possible build properly with those different hardening capabilities.

We've also added a new make target. We have plenty of make targets in Buildroot, because everything is written in make, to query information about the build or to start the build, and the new target we've added is make show-info. It outputs a JSON blob that provides a lot of metadata about the packages that are currently enabled in your configuration. It tells you the name of the packages, of course, their version, their license, their original location, their dependencies, and many other things. This is really meant to be used by your own tooling to analyze what is in your configuration, do some post-processing, verify licenses, verify that the upstream location is still available, or anything else that you might need. This complements some existing analysis tools we already had in Buildroot, such as make legal-info, which outputs a set of manifests and collects all the tarballs and all the patches of the source code Buildroot is building for your configuration, to help you be in compliance with the open-source licenses. We also already had make graph-build to generate graphs of the build time, and make graph-size to generate graphs of the filesystem size, so that you can analyze why your filesystem is so big and what could be improved and optimized.

Another area of effort has been the reproducible builds work. In 2019 we had a Google Summer of Code with Atharva Lele working as a student for the Buildroot project, mentored by two Buildroot co-maintainers, Arnout Vandecappelle and Yann E. Morin. The idea was to improve the existing support we had for reproducible builds, where the goal is to be able to guarantee that if you do the same build two times in a row, with the same configuration and the same Buildroot version, you get a bit-identical result.
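From the user's point of view, reproducible builds are requested through a single configuration option; a minimal fragment would be:

```makefile
# Defconfig fragment (illustrative): ask Buildroot to strive for
# bit-identical results across rebuilds of the same configuration
BR2_REPRODUCIBLE=y
```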
So Buildroot already had some good reproducibility properties, in the sense that when you rebuild a system with Buildroot, we are going to build exactly the same versions of the different software components, with the same configuration, and so on. But we did not yet have something where the result is bit-identical, and so the BR2_REPRODUCIBLE option that we have enables more mechanisms to increase the chance that the final result will be fully reproducible at the bit level. It is not perfect yet; there are still areas where reproducibility is not there, but we've made good improvements, and especially what this Google Summer of Code has allowed us to do is automated testing of that reproducibility. In our autobuilder infrastructure we now have some builds that we do twice in a row, and once the two builds are done for the same configuration, we compare the results and check whether they are bit-identical. If they are not, we compare the differences and use that to analyze the reproducibility issues and hopefully fix them. Beyond the improvement on the testing side, this allowed us to discover some of those reproducibility issues, which were fixed in tar, gzip, and cpio, around timestamp issues. We of course need more work in this area, and contributions are welcome, but there have been some interesting improvements. In my next slide I have an example of a report that diffoscope is giving us: we are comparing two tarballs of the root filesystem generated by Buildroot, and it shows that we have one small difference in one file inside the tarball. The app_agent_pool shared library from the asterisk package has a small difference, in the sense that it contains the absolute path to the build directory, which is different from one build to the other, and so that is something we have to address in this package: it probably shouldn't include in its binary a reference to the absolute location of the build directory.
Another area that was really recently improved is top-level parallel build. The goal is to be able to build several packages in parallel. Indeed, until recently, Buildroot was building each package sequentially: whenever it was building one package, it would use make -jN to benefit from parallelism within the build of that package, such as when building the Linux kernel, but the packages themselves were built sequentially, one after the other. This is of course a bottleneck on modern systems that have a lot of CPU cores, so we want to be able to build multiple packages in parallel, and we've merged experimental support for this functionality in Buildroot 2020.02. It takes the form of an option called BR2_PER_PACKAGE_DIRECTORIES. What this option does is enable per-package builds: it creates for each package its own host directory and its own target directory, so that each package is nicely isolated in its own environment and therefore we can build multiple packages in parallel. That guarantees that the dependencies seen by a package are always consistent and cannot change during the build due to parallelism. If you have this option enabled, you can then run make -j4 or make -j8 at the top level when invoking the Buildroot build, and that will really build multiple packages in parallel. We still have some limitations: for example, Qt5 does not support this yet; there is already a patch series pending, but it requires some review and effort. We also have issues with the <pkg>-rebuild, <pkg>-reconfigure, and <pkg>-reinstall targets, which are not working yet, but we have some ideas on how to fix that. In the next slides we illustrate the effect of top-level parallel build. In this first slide we have a given build that is not using top-level parallel build, so we can really see each package being built one after the other. Conversely, on the next slide we have the exact same configuration being built with top-level parallel build enabled.
We can see multiple packages being built in parallel, and therefore the overall build time being reduced. This configuration was relatively small, but on more practical configurations we have seen build time reductions of two times, or even sometimes three times, so this is really a great feature to reduce the build time.

Another area of work was runtime testing. We added infrastructure for runtime testing in 2017.02. What we call runtime testing is the fact that we not only build a given configuration, but we also boot it under QEMU and verify within QEMU a number of assertions. For example, we might start a Python interpreter and run some test case; we might start an HTTP server and verify that it is running and replying to requests; and things like that. This really complements the autobuilder testing we were already doing, which was only build testing: what we're doing now is not only build but also runtime testing. Since 2017.11 we've added many, many new test cases, and it has become a more usual practice in the Buildroot community to add test cases when new packages are added. We've added test cases for Python modules, Perl modules, and Lua modules especially, because for those interpreted languages most of the problems occur at runtime rather than at build time, but we also have test cases for a number of other functionalities.

Another area of improvement was the tooling for the maintenance of the project, and here we had an internship with Victor Huesca as a student working at Bootlin with me in December 2019. The topic of his internship was to improve the Buildroot maintenance tooling, and more specifically what we worked on during this internship was the use of release-monitoring.org for tracking upstream releases, improving the notifications sent to Buildroot developers in relation to their packages, and improving as well the search capabilities of our autobuilder infrastructure. I'm going to give more details on these different topics.
So release-monitoring.org is a service from the Fedora community that tracks a lot of open-source projects and their upstream releases: it tracks over 27,000 projects. In Buildroot we have above 2,500 packages, so it is difficult for us to make sure they are all kept up to date with the latest upstream releases. In Buildroot we already had a script called pkg-stats that produces a big table where, for each package, we have information about the state of that package, how many patches we have, and so on and so forth. What we wanted to add is having the current version of the package in Buildroot and comparing that to the latest upstream version of that package. So the improvements during the internship were to add a lot of mappings between Buildroot packages and release-monitoring.org packages (indeed, the naming is sometimes slightly different, so we had to accommodate for that), to make some fixes to Buildroot packages so that the package version would match better with what upstream is using, to add a JSON output to pkg-stats so that we can do more tooling around it, and to significantly improve the speed of pkg-stats. The release-monitoring.org site looks like this: we can see here the BusyBox project and all the releases that were made over time. It is regularly polling the busybox.net website to see if there are new releases, and at the bottom left we can see the mappings, which is a feature of the release-monitoring.org website that allows each distribution to document the name of the package corresponding to BusyBox in their distribution. For example, for Buildroot, the package that we use for BusyBox is also called busybox, but there are a number of cases where we have differences between the release-monitoring.org name and the name used in Buildroot.

Another thing we've added recently is CVE checking. The idea this time is not to make sure that we are up to date with the latest upstream release, but that we don't have any
known CVEs affecting our packages. For that, we are using the NVD, the National Vulnerability Database provided by the NIST, which lists all known CVEs. We've improved the same pkg-stats script to do a matching between the Buildroot packages that we have on one side and the list of packages and software components known by the NVD database on the other. Based on that, on the versions affected by the different CVEs, and on the version currently packaged in Buildroot, we are able to determine whether a given CVE is affecting one of our packages or not. Together with that, we've added a variable to our package makefiles, <PKG>_IGNORE_CVES, which allows a package to explicitly say: yes, I know there is this CVE in the NVD database, but I am not affected by it. Usually we are not affected by some CVEs because we fix them locally with a patch, so our version technically is still affected when you compare it against the NVD database, but because we backported the security fix, the CVE is no longer affecting us. This matching allows Buildroot to notify package maintainers when there are CVEs affecting their packages. Here is an example of the pkg-stats output. We can see in the middle column the current version of the package in Buildroot: for example, ccache is at version 3.7.9 in Buildroot. The next column, where it is found by the distro, is the information retrieved from release-monitoring.org: we can see that release-monitoring.org knows about ccache also being at version 3.7.9, so there is nothing to do in terms of a Buildroot update. The next package, CCID, is a bit different: we have version 1.4.31 in Buildroot, but release-monitoring.org knows about version 1.4.32, which is newer, so we should probably update that package. Moving further down, the serial package is up to date with the latest upstream version, but apparently, according to the NVD database, there are two CVEs from 2020 reported against this package, so we should probably investigate that and see if upstream has the appropriate
vulnerability fixes.

Around these release-monitoring.org checks and CVE checks, we improved the notifications sent to developers. A little bit like the Linux kernel has a MAINTAINERS file, Buildroot has a DEVELOPERS file, which says which developer is responsible for which package, which defconfig for a given platform, or which CPU architecture. We were already sending a notification to Buildroot developers when there were failures related to their packages in our autobuilders, and as part of the internship we improved that notification to cover more aspects: we now notify developers about packages not being up to date, about CVEs that are not fixed, about build failures of their defconfigs in our GitLab continuous integration, and about failures to run our runtime tests in the same GitLab continuous integration infrastructure. These notifications look like this: for example, packages having a newer version; this is information coming from release-monitoring.org, telling a contributor, OK, this package you are taking care of is no longer up to date with upstream. We have the same for packages having CVEs, as can be seen at the bottom of that slide. Moving on, we also notify developers of failures of their defconfigs, so we can see here at the top of the slide a number of defconfigs for different platforms that apparently do not build, and some failures in our runtime tests as well, which need to be looked at by their maintainers.

Another aspect that was improved as part of this internship is the search capability on autobuild.buildroot.org. This is our autobuilder infrastructure, which we use to build random configurations of Buildroot 24/7 and report the results in a central location at autobuild.buildroot.org. This has been in place for many years in the Buildroot community and has helped us detect and fix many, many dependency problems, version compatibility issues, toolchain problems, and more. What we wanted is to be able to query the database for things like: hey, can
you tell me what are all the successful builds that have BR2_PACKAGE_BUSYBOX enabled, on ARM, with uClibc? That kind of query is sometimes useful to understand why a given failure is happening, since when it has been happening, under what conditions it happens, and so on. So our intern improved the search capabilities to make that sort of query possible.

We've done a number of other, smaller improvements as well. We've added a make <pkg>-diff-config target for kconfig-based packages. The kconfig-based packages are, for example, the Buildroot packages for Linux, U-Boot, BusyBox: all those packages that use the typical menuconfig/xconfig configuration interface. The make linux-diff-config target allows you to compute the difference between the currently stored configuration for Linux and the one you're actually using to build your Linux kernel. Indeed, when you run make linux-menuconfig you can change the Linux kernel configuration, but it might diverge from the one you have stored, so this computes the difference and helps you update your stored Linux kernel configuration. We've added support for generating root filesystem images in more formats: we obviously already supported generating ext4 filesystem images, squashfs, ubifs, and plenty more, but we've added support for f2fs, btrfs, and erofs as well. Another nice contribution that we got was the addition of support for gettext-tiny as an alternative to the full-blown GNU gettext. gettext is used mainly for message translation, and in a number of embedded systems message translation is not always necessary, but you still had to use the full-blown GNU gettext, which is quite long to build and has a certain footprint on the target. There is a replacement project called gettext-tiny, so now we have the two as alternatives, which is really nice to create more lightweight embedded Linux systems.

So, to conclude this talk: Buildroot is a very active project, as you can see from both the activity of the community
and the number of things that have evolved and improved over the past two years. We are now doing an LTS release each year, with a one-year maintenance window; perhaps in the future we will extend that if we receive enough interest and contributions, but for now it's a one-year duration. We've added support for new CPU architectures and new package infrastructures, we have Git caching, and we have kept a lot of packages up to date and added more than 400 packages. Top-level parallel build has made good progress, we've also made progress in the reproducible builds effort, and, most importantly, the maintenance tooling has been significantly improved. Overall, if you're interested in learning more about Buildroot, I'm of course available in the chat following this talk. We're also going to teach a 16-hour online training course to dive into Buildroot, so if you're interested, it's going to take place online from July 28 to July 31, and you can register online on the bootlin.com website. Thanks a lot for your attention, and now if you have any questions, I'm available in the chat to discuss anything related to Buildroot with you. Thanks a lot, and enjoy working with Buildroot!

Hello! Good morning again, good afternoon, good evening, good night, depending on where you are. As I said at the beginning of the talk, it was nice to see a few familiar names in the list of attendees, and in the list of questions as well, and obviously a lot more new people. I'm just going to talk a little bit about the different questions that I had; I already answered most of them, and I'm just going to hopefully say a bit more. My colleague Michael Opdenacker asked about Clang support in Buildroot. We do have a clang package already, but it's only used on the target, within the context of Mesa 3D for OpenGL: there are some OpenGL implementations that rely on Clang, so we have that already merged in Buildroot. And we have a patch series pending to add Clang as a host package, towards a Clang-based cross-compiler that could potentially
replace GCC but it's not ready yet and I think the road towards having full support for clang is going to be quite long but we have initial steps in that direction so I think this is hopefully something we can make progress in the next let's say months maybe a bit short but at least in the next few years hopefully then there was the recurring question on package management systems as said again at the beginning of the talk Beardwood generates a file system image and we don't have any package management system on the target itself so we don't have any things like OPKG or APT or DNF or anything like that and it's been a design choice for Beardwood for quite some time as I replied to the question in the Yocto Open Embedded where there is needed support for packages but when you ask even like Open Embedded and Yocto developers and experts they rarely use packages because they really like that concept of like one single image that you have tested and that you then flash on your device and then when you ship an update you update the whole thing at once so you know that what you are flashing on the device is exactly what you have tested and so packages on the target look like a nice idea but in lots of the industrial embedded context and environments having like this full image update is often the safest thing you have so that being said another reason for not having that in Beardwood is because it would add a lot of complexity to create proper packages you really have to be able to understand what are the runtime dependencies between the packages we have lots of support for optional dependencies we make most of the time as many dependencies as possible optional and this is making support for packages on the target too complicated compared to the benefit that we perceive and one of the key things of Beardwood is to be relatively simple so if we can break that property of Beardwood miss the point Nishan then asked a question about having something to host the downloaded 
artifacts in some kind of of central location in a company so by default obviously Beardwood downloads from the upstream location like you know canal.org or buzzybox.net and so on and then as a local cache on your machine of the downloaded tarbles but optionally you can specify what we call a primary site which is an HTTP server in your company for example which Beardwood would query before going to the upstream site so that would create like a second level cache your local cache on your machine then you have a second level cache in your company and then as a fallback you have the upstream location and in fact there's even another fallback sources.Buildwood.net which is a backup mirror that the Beardwood community maintain with all the tarbles for all the versions of all the packages we have ever supported and we never remove anything from there so you can add this primary site locally in your company if you want to. You can even tell Beardwood please do not ever go to the upstream location by ticking this BR2 primary site only option and that would prevent Beardwood from querying anything but your local cache and the cache in your company the primary site. 
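To make that concrete, such a setup might look like the following .config fragment; the mirror URL is of course a made-up example, and the exact layout of the mirror is up to you:

```
# Query this company-internal mirror before the upstream download locations
BR2_PRIMARY_SITE="http://mirror.example.com/buildroot-sources"

# Optional: never contact the upstream sites at all
BR2_PRIMARY_SITE_ONLY=y

# Community-maintained fallback mirror (this is the default value)
BR2_BACKUP_SITE="http://sources.buildroot.net"
```

Note that with BR2_PRIMARY_SITE_ONLY enabled, even the backup mirror is skipped, so your internal mirror must carry every tarball your configuration needs.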
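Going back to the make <package>-diffconfig target mentioned earlier, a typical session might look like this command sketch (to be run from the top of a Buildroot tree, with the linux package as the example):

```
# Tweak the kernel configuration through the usual kconfig interface
make linux-menuconfig

# Show how the configuration you are building with now differs from
# the configuration stored in your Buildroot setup
make linux-diffconfig
```

The printed delta is what you would fold back into your stored kernel configuration file to keep the two in sync.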
So that at least partly answers what Nishan was asking. Nishan was also asking whether you would pick Buildroot for a new board bring-up, and why. Yes, I think Buildroot is really relevant there, compared to a more complex build system such as Yocto/OE, which makes a lot of sense for complex products, when you have a whole product family with differences between machines and images and a pretty complex setup. When you're more in the prototyping phase, or doing kernel development and kernel bring-up, what you want is not something final for your product, but just to quickly get a small filesystem that has the tools to test audio, camera, video, display or something else. Often you need a few user space tools, and it's kind of annoying to cross-compile them by hand, so Buildroot fits in really well here. I know lots of companies doing kernel development that do exactly that, and in fact most of my colleagues at Bootlin who do kernel development use Buildroot a lot for building a small, tailored root filesystem that has just the tools they need for their kernel development.

Another attendee asked about C# support. I'm not really well versed in the C# ecosystem, but we do have a package for Mono, and we do have a package for GTK#, which is the C# binding for the GTK toolkit library. It's been in Buildroot for quite a while; it was contributed by a company called Amarula Solutions, and they've been using it for some products with their customers, I suspect. It's being maintained and updated regularly, so it's definitely there, and I suppose it does work.
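As an illustration of the bring-up use case, a minimal defconfig-style fragment might look like the following; the architecture and package selections shown here are purely illustrative assumptions, to be adapted to your board and to the hardware you want to exercise:

```
# Hypothetical minimal rootfs for kernel bring-up on an ARM board
BR2_arm=y

# BusyBox provides the shell and basic userland
BR2_PACKAGE_BUSYBOX=y

# A couple of example test tools (illustrative choices)
BR2_PACKAGE_ALSA_UTILS=y   # aplay/arecord for audio testing
BR2_PACKAGE_EVTEST=y       # exercise input devices

# Generate an ext2/ext4 filesystem image
BR2_TARGET_ROOTFS_EXT2=y
```

The point is that the whole configuration stays small enough to rebuild in minutes while you iterate on the kernel.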
Sergio asked about the roadmap, the mid- and long-term features planned for Buildroot, and I replied that, as in most open source projects, we don't really have a roadmap: the roadmap is whatever gets contributed, whatever is ready and that we can merge. So the best way to get a sense of the roadmap is to look at the backlog of patches we have in patchwork. There are about 500 patches pending in patchwork at the moment, so we have a pretty significant backlog, which is good in the sense that it means we have lots of contributions, but not so good in the sense that we don't have the maintainer bandwidth to review all of it. I guess that's pretty common in many open source projects. But I'd say the big things that I see are, indeed, improvements to top-level parallel build. We also have some work ongoing on improving the CVE tooling, making it more useful for users, so that you can get not just a list of CVEs for the entire package set in Buildroot, but specifically for your Buildroot configuration; so if you have an older Buildroot, you can still run that tool and check whether the system you produce is affected by any known CVE. There will definitely be some work in this area. Supporting the Clang compiler is definitely a big thing as well. To answer the question, I quickly browsed through patchwork, and we also have a patch series pending for the Chromium engine, so that would be another big thing that we could merge. But most of the patches, if you look at the backlog, are really new packages and package updates. I think the base Buildroot infrastructure is pretty solid now; of course it needs improvements here and there, but the foundations don't change that much, so most of the activity is really focused on adding more packages and keeping the existing packages updated.

Then there was a question about the slides, but they will be available, as for all talks. I don't really understand why people keep asking about the slides: it's been a tradition at ELC that all the slides are put online after the event, and they will be, for this talk and for every other talk at ELC.

There was another question that I did not answer, which came later on: "My Buildroot builds are taking around an hour and I am interested in parallel build. When do you think the Qt5 issues will be resolved? Also, what other issues might I encounter?" I believe this attendee is referring to top-level parallel build here. For the Qt5 issues with top-level parallel build, we do have a patch series pending in patchwork; it's one of those cases where existing work just needs to be reviewed and integrated. It looks pretty good, it just needs a bit of time for review and merging, so I think this could be resolved in a not too distant future. If you want to help, feel free to pick up the patch series, give your Tested-by and participate in the discussion; we recently had someone do exactly that, and that kind of feedback from users and from the community is always useful.

In terms of other issues with top-level parallel build, I see two that you might encounter at the moment. One issue comes from the way top-level parallel build is organized (there are only five minutes left, which is too short to explain all the details): it is no longer possible for one package to override a file installed by another package. You can't do something like: package A installs a file, and then package B comes along and does some sed replacements in that file, because due to how the build is organized, at the end of the build you may have a version of the file that doesn't have the changes the other package made. So we really have to have each package install separate sets of files, and possibly, at the end of the build, have some logic that concatenates a number of files together to produce the final result, or something like that; that cannot be done within the parallel build itself. On this topic, we actually have a patch series from me pending that detects that kind of overwrite and aborts the build if one package overwrites a file installed by another, so it should be pretty easy to detect and fix once that work is merged in Buildroot.

The other thing that isn't working at the moment with top-level parallel build is the per-package reconfigure, rebuild and reinstall targets. You can run, for example, make linux-rebuild, which forces Buildroot to rebuild the Linux kernel even if it was already built; maybe you've made a change to the source code and want to rebuild that specific package, so you force it to be rebuilt. That works fine without top-level parallel build, but with top-level parallel build, again due to internal implementation details, it doesn't work at the moment. Fixing it requires tracking which file is installed by which package, which will need a bit of work; we have some ideas there, but it's probably going to require more effort than the overwritten-files issue I was referring to before.

So I think I've pretty much covered all the questions, at least the ones I had until now. I'll be available on Slack; there is a channel, I think it's called 2-track-embedded-linux. I've been around for the past two days, and I'm going to be around later today and in the next few days as well. The slides will be online, and I guess the video will be online too. Buildroot is a very open community: you can join our mailing list, and our IRC channel is also very active, with lots of Buildroot contributors and maintainers there. So if you have any questions, don't hesitate to come; we have lots of users joining and asking questions, and we help them, so really don't hesitate to join us. Thanks for attending my talk and staying for the Q&A session, and I hope to see you soon in the Buildroot community. Enjoy the rest of ELC, bye bye!