All right, welcome everybody, good afternoon. Thanks for joining my talk today. My name is Thomas, and as you can imagine from what's written on the slide, I'll be talking about Buildroot. Before we get started, I'll introduce myself briefly. I work at Bootlin. We are a consulting company providing embedded Linux expertise. We do engineering services: we help our customers develop Linux BSPs for their custom hardware, do Linux kernel driver development, a lot of Buildroot or Yocto integration, real-time, boot time, security, multimedia, pretty much all things embedded Linux related. I think it's important to point out that we're not a Buildroot-only consulting company. Sometimes we've been seen that way, but we also do a lot of Yocto work, and my colleague Alex, who is in the room, will be talking about some of the work we do on Yocto tomorrow at this conference. We also provide training services around pretty much the same topics. The reason I'm talking to you today about Buildroot is that I'm one of its co-maintainers. I've been working on this project since 2008, contributed thousands of patches to it, and I've been deeply involved in its community for a long time. I used to do some amount of work on the Linux kernel, I speak regularly here at ELC, and I'm also part of the program committee of this conference. And since you've heard me speaking for two minutes, you've realized I'm French: I indeed come from the southwest of France, namely the city of Toulouse. So today, I want to start with the real basics of what Buildroot is, just to make sure that everybody in the room is on the same page. Then I'll briefly compare it with Yocto, because that's the question that always comes up; rather than waiting for the Q&A session, I thought, okay, let's address it right at the start of the talk. And then we'll get to the bulk of the talk, which is what's new: the changes and improvements that took place in Buildroot over the past two years. You can already see a list of the things I'll be covering. So what is Buildroot? It's an embedded Linux build system. For those of you who do embedded Linux work, that's probably familiar wording. It's a tool that automates the process of cross-compiling the different software components you need to build a fully functional embedded Linux system. Buildroot can build your toolchain, in other words a cross-compiler. It can build one or several bootloaders. It can build your Linux kernel image, and it can build a complete root filesystem with an arbitrary number of user space applications and libraries. All of that is done by cross-compiling directly from source code, which gives you a lot of flexibility: you can upgrade or customize any component in your system, and you can optimize how they are compiled. So compared to binary distributions, which are more or less used as-is, a build system lets you tune much more finely what goes into your embedded Linux system. It has a similar aim as Yocto/OpenEmbedded, OpenWrt or PTXdist, with, of course, some differences, but the general aim is kind of the same. Buildroot relies on well-known technologies; sometimes this is one of the reasons people like Buildroot. Buildroot is written in Make, which is a technology that's pretty ubiquitous in the embedded Linux space. Not necessarily the easiest and simplest, but it's ubiquitous at least.
And it uses Kconfig for the configuration, which is also widely used in the Linux kernel, in U-Boot, and in many other embedded Linux, let's say low-level, software components. It's simple to use and learn. That's the other reason a lot of people tend to like Buildroot. The typical use case, of course, gets a bit more complex, but the very basic starting point is: you run make menuconfig, you configure your system and what you want in it, you run make, it builds, and you profit. So the learning curve of Buildroot is really smooth. There are over 2,800 built-in packages in Buildroot, which means we have lots of major software stacks pre-packaged: things like GStreamer, Wayland, Go, or Node.js, all these big software stacks are pre-packaged, so you don't have to worry about packaging them yourself. Of course, that can be extended with more packages, and new packages get added, maybe not every day, but on a regular basis; that's part of the things we'll be covering later. It's driven by a very active community of developers and users, and I'll have some numbers later in the talk, and it's used by many companies. We see contributions from silicon vendors and from companies making final embedded products. We also have a lot of hobbyists contributing to Buildroot, so it's really a diverse community that is actively maintaining this build system. And I think we're probably the oldest still-maintained build system: Buildroot was started in 2001, so even before OpenEmbedded was a thing, and we're still actively maintaining and extending the tool. So now, the one question that everybody asks, as I said, is: okay, why would I use Buildroot? Everybody is talking about Yocto. So I wanted to summarize some of the differences. If you want to learn more about that, Alex and myself gave a talk exactly on that topic, I don't know, many years ago at ELC, so you can find the video and slides online if that's relevant to you. But here is a summary of the main differences, at least from my perspective; everybody can have a different opinion on that. The first main difference is in what it builds: what is the product, the result, that Yocto/OpenEmbedded and Buildroot give you. Of course, at the end of the day, both produce an embedded Linux system, but the actual result is slightly different. Yocto/OpenEmbedded really builds you a distribution, with the concept of binary packages and a package management system. Out of OpenEmbedded, you get a number of binary packages; you can select some of them to be directly installed into a root filesystem image that you flash on your device, but then on your device, you have access to a package management system that allows you to install, remove, or update individual software components, very much like you would on your Ubuntu or Fedora desktop distribution. Buildroot does not support anything like that: there is no concept of binary packages at all. Buildroot generates a fixed-functionality root filesystem. It spits out, let's say, a SquashFS image or an ext4 image or whatever filesystem format you like, and there's no package management system built in. And that's kind of a feature; it's the way we think it should be. We don't think binary packages are really needed in most embedded Linux systems, so that's not something that's supported, and that's part of what makes Buildroot somewhat simpler.
The configuration is the second difference: how you tell those tools what to build, and how you choose what to build, is done in very different ways. In OpenEmbedded/Yocto, you do that by filling in a number of individual configuration files with a specialized syntax, which is extremely powerful: it allows you to describe in a very fine-grained way what you want to build. But the downside is that it's quite complex to get into, to understand what is going to be built, and to customize that configuration exactly to your needs. Buildroot, as I said, uses Kconfig, so all the configuration takes place from the usual menuconfig or xconfig interfaces that many Linux developers are familiar with, so you feel really at ease with it; it's really easy to get started. But the downside is that it's sometimes a little bit limited in what you can express in the configuration. Then the build strategy is also somewhat different. OpenEmbedded has a very complex and, I want to say, heavy build strategy: if you look at the disk space that a Yocto/OpenEmbedded build takes, it's fairly heavy. But thanks to that complexity, it delivers really interesting features. It is able to cache build artifacts, so that if you build something once, you don't have to rebuild it again if you do a similar build for a slightly different platform that uses, let's say, the same CPU core. It also has lots of mechanisms to rebuild only what's needed when you make a change in the description of your system. At the opposite end of the spectrum, Buildroot takes a much simpler but also dumber approach. There's really just a Makefile that builds things from A to Z, and there's no mechanism to cache build artifacts. So if you redo a build, you do the full build from scratch again. If you build a different system that is fairly similar, it's going to redo the full build anyway, because it's completely dumb. So full rebuilds are quite often needed for some configuration changes. As you gain experience, you usually need to do that less and less, but when you get started it can be a bit annoying, and it may be annoying for some big projects. The ecosystem is also organized a little bit differently. The OpenEmbedded project has this concept of layers, which are collections of recipes to build packages and images. There is a common base in the OpenEmbedded project itself, but then there are many third-party layers provided by silicon vendors, by board vendors, and by other communities that package Python or virtualization technologies and other things like that. Which is great: it means lots of different communities can provide extra recipes, extra layers. The downside is that the quality and the maintenance vary depending on who is providing the layer. Some layers are really well maintained and of really high quality, and some others are not so great; they do weird integration things which can cause problems. In Buildroot, we take an approach that's more similar to the Linux kernel, in that we encourage people to bring everything into the main tree. The support for all the platforms and all the packages goes into the same tree, which means there's, yes, a bit of friction to get in, but there's more review going on, more consistency, and more uniform maintenance of what goes into the tree. So, two different approaches; I don't think one is good or bad, it's just different trade-offs being made. The complexity of the learning curve is another big difference.
OpenEmbedded has a somewhat steep learning curve. The tool that orchestrates the build, BitBake, remains a magic black box for a number of people; it's kind of harder to get into what is really happening. Buildroot has a much smoother and shorter learning curve: the tool is simpler to approach and reasonably simple to understand, but it has its own limits, as I've shown earlier. So it's a different trade-off: for some projects Buildroot will be more appropriate, and sometimes Yocto will be more appropriate. And there's also a very important aspect: personal taste or preference. Some people will feel better with one or the other tool, not really because of any objective criteria, but just because it works better for them, and that's also an important thing. So with that said, I wanted to talk about what has been going on over the past two years in the Buildroot community. First, I drew some graphs on the activity of the community. Here it's not over the past two years but over the past 10 to 12 years. And what we can see fairly easily is that the community is acting in a very stable and mature way. There's not much change in the number of commits per release: we are at about 1,500 commits per release on average, and it's been like that for many years. So it seems we've reached our cruising altitude, I should say. We can see pretty much the same with the number of contributors in every single release: we have about 100 to 120, sometimes a little more, individual contributors, and that has been stable for many years, which matches what we've seen with the number of commits per release. So it looks like a mature community maintaining an open source project. Traffic on the mailing list shows pretty much the same thing: since 2014 or so, the number of emails on the mailing list has been pretty much stable. There is pretty heavy traffic. We use a contribution model similar to the Linux kernel: all the patches are posted over email to the mailing list, and they get reviewed there. Some people will call that an old-style contribution model, some people will call it a good-style one, it depends on the perspective, but that's the one we use, and it also explains the traffic on the mailing list, because lots of discussion takes place over email. But yeah, the traffic on the mailing list, which reflects the activity of the project, is pretty stable. The last graph I have is the number of packages, which, I don't know if it's an important metric, but it is a metric, especially because we pull all the packages into the tree rather than encouraging people to keep them on their side. It's growing progressively; you can see it's almost a straight line, which shows it's progressing, and more and more people are contributing more and more packages. There's not an explosion in the number of packages, just normal growth based on what people use in their Buildroot-based projects. One thing that we started doing a little over two years ago is having a slightly longer maintenance period for some releases. We have releases every three months, in February, May, August, and November of every year, and it's been like that for 12 years or so. So this is a very well-followed development model that we've had.
About three years ago, we decided to pick one release every year, the February release, to be maintained for a little over 12 months. Actually it's more like 13 or 14 months, just to have some overlap. That's what the graph shows here: you can see that the 2022.02 release, which was made in February this year, is going to be maintained until March or April of next year, so that there's a bit of overlap with the next long-term maintenance release. Of course, some people can consider 12 months as not being long-term, and I would tend to agree, but that's a starting point, and we hope to extend it later; for now, that's where we are. So this has already been going on for 2019.02, 2020.02 and 2021.02, and we've recently started the cycle for 2022.02. That process works fairly well now. The way it works is that one of the maintainers of the project reviews all the commits that go into the master branch and decides whether each commit is applicable to the LTS branch. Whether it's applicable is mainly related to whether it's a security fix or a bug fix; those are really the two main criteria, and if that's the case, then that commit is going to be backported. So there's a fairly significant effort going on to review all those commits and decide which ones can or should go into the LTS branch. I just picked some numbers here, from 2020.02 and 2021.02, on the number of commits we have backported. I was actually surprised to see that it was less in 2021.02; I don't really have much of an explanation there. Maybe fewer security fixes, but other than that I don't really see much other reason. We also make point releases throughout the life of those LTS releases, usually one per month, but even in between those point releases the branch is publicly available, so if you want to pick up security fixes earlier than the next point release, those branches are pushed pretty much every day with the latest fixes that have been backported. So right now, as I said, 2022.02 is our current LTS branch. The previous one reached end of life on April 6th, so we gave a window of about two months for people to move on; and since the schedule is known in advance, a number of our users can plan for those migrations ahead of time. The other thing we've added over the past three years, also related to security, is help with matching the set of packages that you have in your system against the list of known security vulnerabilities in the NIST database. There are two databases published by NIST, or probably more, but the two we're using are the well-known CVE database, which lists the known security vulnerabilities, and another one that's somewhat less known, called CPE, for Common Platform Enumeration, which is a database of identifiers for software releases. By itself it's not security related, but those identifiers are used in the CVE database to identify which software is impacted by a given security problem. So we have added this make pkg-stats target, which takes the set of packages in your current configuration, with their versions, matches that against those databases, and produces HTML and JSON output that tells you which of your packages have known security issues and which don't.
So as I said, it checks which packages are affected by known CVEs, and also whether their CPE identifier, and I'm going to get a bit more into that, is known in the CPE database, because if it's not known, then we may be missing the CVEs for your package, because it's identified with a slightly different name in Buildroot than in the NIST database. This works thanks to extra metadata provided by the Buildroot packages. In Buildroot, every package has a makefile that describes where the source code can be fetched, how to configure it, how to build it, et cetera, with lots of different variables, and we extended that with more variables. There are, let's say, two main ones; it's a bit more than two, but there are the two bullet points here. The first one is IGNORE_CVES, which allows us to tell Buildroot to ignore a particular CVE for one package. You might be wondering why you would do that. It's mainly when we backport a security fix for a problem locally into Buildroot. Let's say version 1.2.1 of a software package is affected by a security issue; then make pkg-stats will tell you: oh, this is affected by CVE-2021-1234. We backport the security fix, so the version is still 1.2.1, but we have the security fix locally. So we want to tell the tool: okay, this CVE you can ignore, because we know we have fixed that issue. So we can write in the package that it should ignore CVE-2021-1234, and make pkg-stats will stop reporting that CVE. Of course, next time we update the package to, let's say, version 1.2.2, hopefully the security fix is in, we can drop the patch, we can drop that line, and we're happy. The other variable set is the package CPE ID, and there are a number of those variables, for vendor, product, version, and so on. They allow you to override the default CPE identifier, because as I said, the CVE database uses CPE identifiers to identify which software package is affected by a security problem. Those CPE identifiers look like this: cpe:2.3:a:, then the name of the organization that provides the software package, then the name of the software package itself and its version, plus a whole bunch of other metadata. By default, Buildroot is going to come up with a default value for that, but it may be wrong, right? For the OpenSSL package, the default would be cpe:2.3:a:openssl_project:openssl: followed by its version, which probably is not the one used in the NIST CPE database. So we have extra variables in the OpenSSL makefile saying: for that package, the vendor value and the product value are this and that. That allows us to match better with the CPE database.
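To make that concrete, here is a rough sketch of what this metadata looks like in a package makefile. The package name, version, site, and CVE number are all invented for illustration; only the variable naming convention follows what I just described:

    # package/libfoo/libfoo.mk (hypothetical package, for illustration)
    LIBFOO_VERSION = 1.2.1
    LIBFOO_SOURCE = libfoo-$(LIBFOO_VERSION).tar.gz
    LIBFOO_SITE = https://example.com/releases
    LIBFOO_LICENSE = MIT
    LIBFOO_LICENSE_FILES = LICENSE

    # We carry a local backport of the security fix, so tell
    # "make pkg-stats" to stop reporting this CVE for this package.
    LIBFOO_IGNORE_CVES += CVE-2021-1234

    # Override the default cpe:2.3:a:libfoo_project:libfoo:... identifier
    # so it matches the entry actually used in the NIST CPE database.
    LIBFOO_CPE_ID_VENDOR = foo_software
    LIBFOO_CPE_ID_PRODUCT = libfoo

    $(eval $(generic-package))

Once the version is bumped to a release that contains the fix, both the local patch and the IGNORE_CVES line can be dropped.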
And that allows us to produce an output like this. Obviously the real one is much bigger; here I just picked four packages to illustrate the different situations that can occur. We have attr, acl, atop and BusyBox, which illustrate the four main situations that can take place. It's mainly the last two columns that are relevant here; the rest is also interesting, but for other reasons. The last two columns are really what matters from a security point of view. The first example is the attr package. This package has at least some of the CPE ID variables defined, which means a Buildroot developer has already done the effort of adding this metadata to the package. That developer has verified that, yes, in CPE terminology, this package is referred to with those particular values. And the CPE identifier actually matches: there is a match in the CPE database. So when we tell you there are no CVEs, we're relatively confident it's the case, because we know this identifier is really the one used in the CPE database. The second example, acl, does not have any CPE ID variable defined. So we try to find a match in the CPE database based on a default, generated CPE identifier, but we cannot really be sure it's the correct one. And if it's not the correct one, we may be entirely missing the CPE information. So if you see something like that, you should be a bit cautious and say: ooh, this is a bit strange, maybe I should dig into the CPE database and find the right value for the acl package. And if it turns out to be different from the default value Buildroot comes up with, you need to fix that up to make sure you detect the CVEs. We've done a lot of work to add this CPE metadata to a number of packages, but it's not done yet. Here we have a different situation: you can see in the column before last that there's a matching CVE, CVE-2011-3618 here. So we found a CVE matching our version, but the CPE apparently is not known: we have some CPE ID variables defined in Buildroot, but the identifier is not known in the CPE database. You might wonder: that doesn't make any sense; if a Buildroot developer has gone through the effort of adding CPE identifiers, why isn't there a match? It's because the CPE database contains one entry per release of the software, so whenever a new version is published, the CPE database needs to be updated. NIST does that on a regular basis, but not necessarily for all open source packages on an extremely regular basis. So that may be what's happening there: probably the CPE database has other entries for atop, but not one matching the 2.6.0 version, all right? So here what we could and should do is contribute to the CPE database, and we've already done that a number of times; NIST welcomes contributions and updates their database based on that feedback. And here, for BusyBox, we have a similar situation for the CPE identifier, but in this particular case there was no matching CVE in the CVE database either. So that's the sort of thing we can now produce out of your Buildroot configuration and the NIST databases, and make pkg-stats automatically downloads the NIST databases. So if you run it in a cron job every day, then every day you're going to match against the latest NIST data, and you will know if you need to address some new CVEs. Security-wise, we also changed some default settings. There were a number of features that we already supported, but that are now enabled by default, so all new Buildroot configurations will get those things enabled right out of the box. Those are, I would say, some of the basic security mechanisms available at the toolchain level that help harden the user space code that you have: things such as position-independent code, which is needed for some of the other security features on that slide; stack smashing protection, the -fstack-protector feature of GCC; RELRO, which makes more parts of an ELF binary read-only to prevent them from being overwritten by exploits; or FORTIFY_SOURCE, which is implemented in the C library and adds more checks for buffer overflows and that kind of thing. All of that is now enabled by default, which means more people hopefully are going to make use of those security features.
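In configuration terms, those defaults correspond to toolchain hardening options along the following lines; this is a minimal sketch assuming the usual Buildroot option names, so check menuconfig for the exact variants available in your release:

    # Fragment of a Buildroot configuration with the hardening options
    BR2_PIC_PIE=y             # position-independent code and executables
    BR2_SSP_STRONG=y          # stack smashing protection (-fstack-protector-strong)
    BR2_RELRO_FULL=y          # make more of the ELF binary read-only
    BR2_FORTIFY_SOURCE_2=y    # extra C library checks against buffer overflows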
Also security related, there's been quite a bit of work on SELinux integration, because some of our users deploy Buildroot-based systems in critical environments where SELinux is mandatory. Here are some of the improvements we've made. First, we've made it possible to set the SELinux file security contexts at build time, not at runtime. Until then, it was necessary to do a first boot to set the contexts of all the files in the filesystem at boot time, which obviously prevents a read-only root filesystem, and is anyway not really good. Now we do that at build time, as part of the build process, so it's much, much better. We've also done a lot of work on the SELinux policy, which is kind of the database that defines which entity in the system is allowed to do what, on what objects in the system; so basically, it defines the security policy. The default policy provided by the SELinux project is quite huge, so we've made it possible to keep only the base SELinux policy modules by default, and then extend that. This strips the size of the refpolicy down by a factor of 10, which not only saves space but also makes it a little bit more manageable, let's put it that way. Then there is the refpolicy package, which is the Buildroot package that downloads the SELinux reference policy and builds it; it's not code exactly, but it gets built into the binary format that's expected by the SELinux tools. We've made it possible to enable additional modules in it, because now that the default policy is much more minimal, depending on which software packages you integrate, you will have to enable more modules to allow those extra packages to do their work without facing SELinux denials. And we've made it possible for that package to provide additional custom modules: modules that are not in the reference policy, but that you have written for your own system; they can be integrated and built as part of the refpolicy. And then we've also allowed individual packages to provide their own additional SELinux modules. Packages like nginx or systemd, or other system services that have their own SELinux policies, can either enable modules that are part of the standard refpolicy, using the SELINUX_MODULES variable in their package, which is going to extend the refpolicy with some of those standard modules, or they can provide their own custom SELinux modules in a selinux sub-directory of their package. So there are a lot of ways to customize the refpolicy. We've gone from something completely monolithic, where we were building the refpolicy as-is with all modules enabled, to something that's much more modular and that adjusts depending on the packages you enable in your configuration. We've annotated many packages with this SELINUX_MODULES variable. Here is an example in systemd; it's a line that comes from package/systemd/systemd.mk, and you can see that we enable the systemd, udev, and xdg modules from the refpolicy.
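For reference, that slide line, together with a sketch of where a hypothetical package's custom policy would live, looks like this:

    # package/systemd/systemd.mk: enable standard refpolicy modules
    # needed by this package (the line shown on the slide)
    SYSTEMD_SELINUX_MODULES = systemd udev xdg

    # A package can also ship its own custom SELinux modules as policy
    # source files under its selinux/ sub-directory, for example
    # (hypothetical package):
    #   package/libfoo/selinux/libfoo.te
    #   package/libfoo/selinux/libfoo.fc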
The other thing that we've done, which is not in Buildroot itself: we also made contributions to the upstream SELinux refpolicy to make it work with Buildroot. It was making assumptions that were not true in a generic Linux system such as the ones Buildroot builds. So we made extra contributions there, which have gone upstream. All right, so that was about SELinux. Another big area of work has been around Go and Rust support, and I probably don't need to tell you that they are becoming more and more widespread in embedded Linux systems. Those languages, and they are not the only ones, but those two specifically, have language-specific package managers, which usually the communities behind those languages love, and which people doing build systems hate, because they pose some challenges for build systems. Those package managers do a lot of different things, but one of the things they do, and the part where they conflict a little with what the build system is doing, is that they automatically download dependencies. In the Go world, you have Go modules, which can download tons of other modules that you depend on. In the Rust world, you have crates, described in Cargo.toml, which lists the extra libraries or modules you need to build your Rust application. This is all great, but it breaks some fundamental features of build systems, and that's not specific to Buildroot; I'm pretty sure OpenEmbedded has faced the same challenges, and OpenWrt as well, and others. They all have some sort of download infrastructure that caches downloads locally to avoid re-downloading things. They also have reproducibility concerns: we want to be sure that if you do a build today and a build in a year, you'll get the same result. You don't want dependencies to depend on the day you build, getting a slightly different version of a dependency, for example. Build systems also do legal license information collection: they collect the license information from the software that you integrate in your embedded system, so that you can comply with the open source licenses in the proper way. And that is made a bit difficult if some random Go or Cargo package downloads a random set of dependencies out of nowhere, for which we don't really control the licenses. So they represent a unique challenge. In Buildroot we have, let's say, an initial solution; I don't know if it's perfect yet, but it has at least allowed us to move forward. We've extended our download infrastructure to be able to inject some specific actions depending on the type of package. Until now, the download logic was only based on where we are downloading from: we're downloading a tarball over HTTP, or we're cloning from a Git repository, or we're checking out some Subversion thing, or whatever; it did not depend on what we were getting. Now we have what we call post-download helpers, which allow us to inject a little more logic into the download step. We now have two of those helpers, one for Go and one for Cargo, and they run within the download step, so that we can download the actual source code of the package, tell the package manager, either go or cargo, to download the dependencies, and bundle all of that into the tarball, which will then contain not just the source code of the software package, but also the source code of all its dependencies and all the license files. That means that locally, we have a cache with everything we need to do the build. We can have a hash that makes sure this tarball is always the same; if it changes, we have a reproducibility problem. It also ensures that we have all the license files of all the dependencies. All right, so that's what we've done. Hopefully that's still working.
So in the past, we were just downloading a tarball and putting it aside, or cloning a Git repo, creating a tarball out of it, and putting it aside. Now, we have an intermediate step in between, where we can invoke the package manager to download the dependencies. So it looks like this. Actually, in packages it's almost invisible, right? Here is a package called TinyFire, which is implemented in Go. It uses what we call the golang-package infrastructure, and we just describe where we want to download it from, so here it's from GitHub, what its license is, and where the go.mod file is located inside the source tree. So it's totally invisible, but what's going to happen under the hood is: we clone that Git repository, then we run go mod something, whatever magic Go needs to download the dependencies, and all those dependencies end up in the tarball that gets extracted into the build directory before doing the build. It's a similar thing for Rust: here the Buildroot package infrastructure is called cargo-package, but exactly the same thing happens. We're going to clone that GitHub repository, call cargo to retrieve the dependencies, then create a tarball out of that, and verify that this tarball has the hash we expect it to have, so that you and me, when we do the build, know we are building at least from the same source code. And then we can start the build.
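As a rough sketch, a Go package and a Rust package in this scheme look something like the following; the names, URLs and versions are invented for illustration, and only the infrastructure names (golang-package, cargo-package) and the variable conventions are the real ones:

    # Hypothetical Go package using the golang-package infrastructure;
    # modules listed in go.mod are fetched at download time and bundled
    # into the package tarball, together with their license files.
    FOO_VERSION = 1.0.0
    FOO_SITE = $(call github,example,foo,v$(FOO_VERSION))
    FOO_LICENSE = Apache-2.0
    FOO_LICENSE_FILES = LICENSE
    $(eval $(golang-package))

    # Hypothetical Rust package using the cargo-package infrastructure;
    # the crates from Cargo.toml are vendored into the tarball the same way.
    BAR_VERSION = 0.3.2
    BAR_SITE = $(call github,example,bar,v$(BAR_VERSION))
    BAR_LICENSE = MIT
    BAR_LICENSE_FILES = LICENSE-MIT
    $(eval $(cargo-package))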
All right, switching completely to a different topic: Python. You might wonder: Python also has a package manager, pip for example, but in Buildroot, for Python, we've decided to create individual packages for every Python module. So we have many Buildroot packages that each package a Python module. So we have a different strategy between Go and Rust on one side, where we rely on their package managers, and Python on the other side, where we create individual packages for the different Python modules. On the Python side, one important thing we did is that we finally removed Python 2.x. I think lots of Linux distributions had to go through that process; same here. It was finally removed in 2022.02. We probably kept it a little longer than other Linux distributions, because we know embedded people may be a bit slower at moving to new technologies, and in that case, to the new version of Python. But we really wanted to get that done before 2022.02, because that's our new LTS, onto which we do a lot of backports, so it was really great to get rid of it before we entered that maintenance period. That also allowed us to remove a lot of complexity that we had internally. It wasn't necessarily super visible to users, but internally it was a bit messy to handle Python 2 and Python 3: sometimes you have Python 3 on your target, but you need Python 2 on your host to build all the things; sometimes it's the opposite; there are all sorts of cases to handle. Now it's basically Python 3 everywhere, which really simplifies things a lot. The other thing that's been added more recently is support for, I don't know how you call that, PEP 517 build systems, really a crappy name, but that's what they call it. If you've done Python, I'm sure you know setup.py build install, which was kind of the traditional way, using either distutils or setuptools. There's been a standardization around a new way of describing how to build a Python module, using a pyproject.toml file. So instead of being a Python script, it's now more metadata-oriented. And in Buildroot, we've added support for Python modules that use flit-based build systems. PEP 517, if I understood correctly, kind of mandates this pyproject.toml, but multiple build systems can be used, one of them being flit, which we now support. This SETUP_TYPE variable, I don't know if my laser is working, yep, here, this SETUP_TYPE variable: we already had support for distutils and setuptools, so that Buildroot knew what needed to be built before your module so that it could be successfully installed. But now we also have support for flit, which brings in the dependencies that are needed before building your particular module, using the appropriate invocation. So that's now supported. We don't have that many packages using it yet, but there is a fairly strong trend in the Python community to move over to that, so we expect to see broader adoption of this in more and more packages in the future.
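A minimal sketch of what that looks like in a package makefile, with an invented module name, version, and site; the SETUP_TYPE values and the python-package infrastructure are the ones just discussed:

    # Hypothetical Python module built through its pyproject.toml with flit
    PYTHON_FOO_VERSION = 2.1.0
    PYTHON_FOO_SOURCE = foo-$(PYTHON_FOO_VERSION).tar.gz
    PYTHON_FOO_SITE = https://files.pythonhosted.org/packages/source/f/foo
    PYTHON_FOO_LICENSE = BSD-3-Clause
    PYTHON_FOO_LICENSE_FILES = LICENSE
    # previously "distutils" or "setuptools"; "flit" selects the new
    # pyproject.toml-based build
    PYTHON_FOO_SETUP_TYPE = flit
    $(eval $(python-package))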
Obviously, one of the things that we've done, and you've seen the curve, that steadily growing line of new package additions over time, is add more packages over the past two years; there's no mystery about that. We've added about 290 new packages over two years. I tried to make an extract of the significant ones, but of course, what is significant is very subjective, and I had to skim over a lot of new libraries that looked interesting but were maybe not that relevant. So here I took an extract, so you can see: we have tracing utilities, that's pretty strong; liburing for doing io_uring things with the kernel; Zabbix for monitoring; WirePlumber for audio, together with, what's the name, I'm going to forget it; OpenCVE; new Qt5 modules; and obviously a lot of Python packages as well. So that's it for packages. We've done CI improvements as well. We were already doing a lot of CI around build testing and runtime testing. We've extended our build-time testing to test more random configurations. They were already somewhat random, but only partially randomized; now we test fully random configurations all the time, 24/7, to detect incorrect dependencies. On the architecture side, we support many, many CPU architectures, probably more than any other build system, at least that I know of. We've added support for s390x; yeah, that's pretty crazy, not really embedded, but people at IBM contributed it, as they use Buildroot for some of their work. And we've added support for RISC-V 64-bit noMMU: we already had RISC-V support, but noMMU support came in. And we recently dropped NDS32, because it got dropped from the Linux kernel, so we followed that as well. My last slide, because I see I'm progressively running out of time, is on toolchain support. For the cross-compiler, Buildroot supports two mechanisms: either we build it from source, or we use a pre-compiled compiler that you get from, I don't know, your hardware vendor. Here the main improvements on the internal side, when we build the toolchain ourselves, were basically keeping up to date with the latest versions of GCC, binutils, glibc, uClibc, musl, and all those things. On the external toolchain side, what we mainly did was the integration of the toolchains that we provide at Bootlin. We have a separate website, unrelated to Buildroot, called toolchains.bootlin.com; it has almost 200 pre-compiled toolchains for many different CPU architectures, and now there is built-in support for them in the Buildroot tree. So if you look at the screenshots at the bottom, you can, directly from Buildroot, choose: okay, I want this ARM64 toolchain, bleeding edge or stable, in different versions, with different C libraries, and that's readily available in the tree. All right, time is up, so that's my last slide. I'm teaching a course on Buildroot in September. Our materials are freely available; we're a fully open source company, so all our training slides are free on our site. There's even a GitHub repo with the source code for our materials. But if you're interested in learning more, I will be teaching this course, so if you liked this talk and you're interested in Buildroot, that's maybe a good opportunity. And with that said, because time is up, I'm going to open up for questions now. Questions? Yeah, please. You can maybe find a mic somewhere. Oh, or I might have it. Oh, okay, you're handling it. I'm curious why you made the design decision to do Python packages differently than the Rust and Go packages. Yeah, that's a good question. It's somewhat tied to how people expect things to be built. In the Python world, the build process of one module expects the modules it depends on to already be installed, right? It doesn't take care of downloading the other things itself. While in the Go and Rust world, grabbing the dependencies is really part of the build process. And also, in the Rust and Go world, packages are more often tied to one particular version of their dependencies, which means that one package may need a dependency in version A and another package may need that same dependency in version B, which would be a challenge. It's not perfect, right? But it's kind of the line we've so far been able to draw between the two camps. Yep, go ahead. SBOM support, question mark? Yeah, so we already have a tool for collecting the license information of all the packages, because we have that metadata: we collect all the license files, and we have hashes for them. So I believe we have pretty much what would be needed for a proper SBOM, but we don't yet generate something that complies with an SBOM format itself. That would definitely be something to look at. Yep, so I think that's it. Yeah, we're running over time, so I can take further questions in the hallway. Thank you very much for attending, and yeah, enjoy the rest of the conference.