Good afternoon and welcome to this talk. Oh, this is really loud. So, welcome to this Buildroot talk. My name is Thomas, and let me introduce myself: I'm the CTO and one of the embedded Linux engineers at Free Electrons. We do embedded Linux consulting, Linux kernel development, bootloader development and build system work. I work on the Linux kernel, mainly around Marvell platforms, and I also contribute quite a bit to Buildroot, which is going to be the topic of today's talk. I live in the southwest of France, as you've already noticed from my terrible accent, and when I'm not doing Buildroot stuff, I do windsurfing and snowboarding.

Before we get started, let's have a quick poll: who already knows about Buildroot? OK, almost the entire room. Who is already using Buildroot? OK, quite a few people. Who is using OpenEmbedded? Still half of the room. OpenWrt? A few people. OK. Another build system? A few people. OK, thanks.

So most of you know Buildroot, so I'm going to go pretty quickly over that. It's an embedded Linux build system. The idea is that we have source code for a number of software components: the Linux kernel, the bootloader, perhaps BusyBox, graphical libraries, network libraries, a bunch of applications, and we want to cross-compile everything and generate a root filesystem image that you can put on your embedded device. That's what a build system is all about. Buildroot tries to be fast and simple, those are clearly the two main goals, and it tries to be easy to use and understand. We use Kconfig to describe the configuration of the system, so you run make menuconfig and you can set what your target architecture is, what components you want in your system, which kernel version you want, the kernel configuration. It's all saved in a nice .config file, so it's very familiar to people doing a little bit of Linux kernel development, or at least building the Linux kernel. Then you run make, and it goes and downloads everything that's needed, builds a toolchain, builds your kernel, builds all the userspace components, puts all that together and creates a root filesystem image out of it.

By default we generate a pretty small filesystem, two megs. If you do the default build, you get a two-megabyte filesystem with just BusyBox and uClibc. So you can start small and then, based on that, add whatever packages you want: we've got more than 2,000 packages nowadays, ranging from small things like BusyBox all the way up to a full X.org stack, GStreamer, Qt and many other things. We generate filesystem images: contrary to OE or Yocto, which generate complete distributions with binary packages, we really only generate an ext2 image, or a UBIFS image, or whatever filesystem you like, without any package management system. So if you want to do upgrades, you do a full system upgrade. It's a vendor-neutral tool.
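[To make the workflow just described concrete, here is a rough sketch of a first build; the clone URL is the one advertised on the project site, and qemu_arm_versatile_defconfig is just one of the many sample configurations shipped with Buildroot:]

```
# clone, pick a sample configuration (or start from scratch with menuconfig),
# then build; this follows standard Buildroot conventions
git clone git://git.buildroot.net/buildroot
cd buildroot
make qemu_arm_versatile_defconfig   # or: make menuconfig
make                                # downloads, builds toolchain, kernel, rootfs
ls output/images/                   # the generated kernel and filesystem images
```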
It's maintained by an open-source community, with lots of contributors originating from different companies. The community is very active, and I have some graphs about that in a few slides. We do stable releases every three months, and we've been doing that since 2009, so it's been quite a while. The project was started in 2001, which makes it, I believe, the oldest still-maintained build system. So we've been around for a while.

I gave this talk about three and a half years ago, at ELC in the US in 2014, and I thought, well, it's been a while since we presented what's new in Buildroot, and when you make the summary of the new features and improvements, there's a fair number of them. So I submitted the talk and luckily it got accepted. I wanted to share a few details about the project activity, the release schedule, the improvements in terms of architecture support, toolchain support, the core build infrastructure, testing improvements as well, and a few other details.

Moving on with the first topic, project activity. This slide shows the number of commits per month, grouped per release here. For every release we've got approximately between 1,000 and 1,500 commits, and as I said we produce a release every three months, so it's a fairly active project with lots of activity and contributors. The contributor count is about 100 people contributing to each release; it's nowadays a little bit more than that, 110 or 120 sometimes, contributing every three months to the release. It's clearly a different scale than the Linux kernel, but it makes it a fairly active, medium-sized open-source project. We've got pretty serious mailing list activity, over 2,000 emails per month, so if you subscribe to the mailing list you get a good and steady flow of emails in your inbox every day. Looking at the number of packages over the last five years: we started with less than a thousand packages five years ago, and we now have more than 2,000 of them, and we encourage a lot of people to contribute their packages upstream. We don't really encourage the model of separate layers that OpenEmbedded is encouraging; we prefer to have all the packages upstream, to increase the review and improve their quality, which is why this package number is growing and continues to grow over time.

Speaking of the release schedule: since 2009 we've been doing releases every three months, so it's pretty easy, one in February, one in May, one in August and one in November. And we never skipped a release or missed a release date, except by a few days at most, which is pretty impressive for a purely open-source, community-driven project. Until now, we were sometimes doing point releases for the latest stable release as a way of fixing a few bugs, a few issues, but there were no long-term maintained versions. So if you wanted to get security fixes, bug fixes, build fixes, basically your only option was either to do the backport yourself, or to upgrade to a completely new release, which means upgrading everything in your system, which is not always possible.
So since 2017.02 we decided to have an LTS release: every .02 release is going to be maintained for one year, and we'll see if people volunteer to extend that, but that's a start. It is maintained with security, build and bug fixes. There have already been six point releases since 2017.02, so 2017.02.1 through 2017.02.6, in April, May, June, July, and two in September, so almost one per month. We've done about 500 commits there, amongst which roughly a third were security fixes. This is mainly done by the original project maintainer, Peter Korsgaard, who sits right here. We're seeing more and more people interested in that, so if you use Buildroot in devices that you don't want to fully upgrade every few months, looking at this LTS release is interesting. And obviously the next one will be 2018.02.

In terms of maintenance, there have been a few changes over the last years. We used to have a single committer acting as the project maintainer, that's Peter, still sitting here, he hasn't moved. Because of this increase in contributors and contributions, we added two other committers: it was me first, and then Arnout, who is sitting just here. We also now have physical meetings, which we hold pretty much three times a year. We had a meeting just last weekend before this conference, and you can see a few of the people here who work on Buildroot; we have one after FOSDEM, and we now have one more private hackathon between the core developers in December. That helps make Buildroot move forward.

Architecture support: I think, although I haven't really checked, that we're probably the build system supporting the largest number of architectures, ranging from the well-known ones, ARM of course, x86, and then PowerPC and MIPS, but also more specialized or less well-known architectures, things like NIOS II, Microblaze, OpenRISC, ARC, SuperH and a bunch of others. So it's pretty impressive, and we've got a number of contributors interested in those more specialized architectures, so it's a nice thing.

What has been improved in recent years on the architecture side is the addition of noMMU ARM support, so we can now build systems for the Cortex-M3 and Cortex-M4 microcontrollers, which can run a Linux system. We've done a bunch of improvements to the ARM 64-bit support, so that you can now select which ARM64 core you're using, and decide whether you want to run a 64-bit system or a 32-bit system on it. IBM has contributed support for PowerPC64, both little endian and big endian, and it's nice to see the company behind the architecture directly contributing to the project. There have also been a lot of MIPS-related improvements, with Imagination Technologies making some efforts to push this architecture forward; we've received lots of contributions from them, adding MIPS32 R6 and MIPS64 R6 support, more fine-grained MIPS core selection, and things like that. We've added support for architectures that were new to Buildroot: OpenRISC and SPARC64 are completely new. The support for m68k was kind of re-enabled: it was there, but it had been broken for ages, and it was fixed and then re-enabled. We've re-enabled support for Blackfin and Microblaze with the uClibc-ng support, and I'm going to get back to that in the next slides. And we also tend to drop features that are no longer being used, or architectures like AVR32, which was also dropped from the kernel recently, and SH64, which never really materialized in the real world; both were dropped. So quite a few changes there, extending our architecture support.
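[To give an idea of what this finer-grained selection looks like in practice, here is a minimal defconfig fragment; a sketch, where BR2_cortex_a53 is just one example of the available core options, and the exact list varies by release:]

```
# 64-bit ARM system with an explicit core selection;
# BR2_arm=y instead of BR2_aarch64=y would build a 32-bit system
BR2_aarch64=y
BR2_cortex_a53=y
```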
On the toolchain side, Buildroot supports two kinds of mechanisms to either produce or use a toolchain. We can build our own toolchain, in which case Buildroot will go and build binutils, then a first-stage GCC, then a C library, then the final GCC and all the related libraries. That's one way of doing things, which we call the internal toolchain backend. On the other side, Buildroot can use an existing toolchain: if you have a Linaro toolchain around, or a toolchain provided by a vendor, you can just tell Buildroot "here is my toolchain" and it can use it. That's what we call the external toolchain backend.

On the internal toolchain side, we added support for musl, the C library that is, I think, growing in popularity. We moved away from uClibc, which was pretty much a dead project, to uClibc-ng, a fork of uClibc that is actively maintained, and a number of the improvements in architecture support that I mentioned on the previous slides were contributed to Buildroot by the uClibc-ng maintainer, so that's also a nice thing. All the different components of the toolchain are regularly updated, so we now have support for gcc 7, binutils 2.29, gdb 8 and glibc 2.26, which are basically the latest versions you can find. But we are a little bit conservative, and by default we always use the version that's one before the latest, so our defaults are gcc 6, binutils 2.28 and gdb 7.12. We've added LTO, so link-time optimization support, and Fortran support; yes, there are still some people interested in that.

We have a toolchain wrapper: it's a small program that replaces gcc and calls gcc itself, but does additional checks. We already had it for the external toolchains, and we extended it to the internal toolchain. One of the things it does is check that you don't have header paths or library paths pointing at host libraries, which helps detect cross-compilation issues before they happen: if you're cross-compiling something but trying to link with host libraries, or using host headers, there's something wrong going on, so the wrapper says "ooh, something bad is happening here". We removed support for eglibc, because nowadays glibc has picked up everything that eglibc was doing.

On the external toolchain side, I think there were far fewer improvements. One big change is more internal, in how it's organized in Buildroot: the external toolchain support used to be one big package supporting all the possible toolchains, but it started to be a little bit of a mess, so we split it into multiple packages, one per external toolchain family. So we have one for Linaro ARM, one for Linaro ARM64, one for, I don't know, CodeSourcery toolchains, and so on and so forth.
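[For reference, this is roughly what pointing Buildroot at a vendor-provided toolchain looks like; a sketch, where the path and prefix are made-up examples, and a custom toolchain needs a few more options describing its properties (gcc version, kernel headers, C library):]

```
BR2_TOOLCHAIN_EXTERNAL=y
BR2_TOOLCHAIN_EXTERNAL_CUSTOM=y
BR2_TOOLCHAIN_EXTERNAL_PATH="/opt/vendor-toolchain"
BR2_TOOLCHAIN_EXTERNAL_CUSTOM_PREFIX="arm-vendor-linux-gnueabihf"
```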
So with this split it's more easily maintainable, but it doesn't change the functionality visible to the user much. We improved the wrapper, which as I said already existed for other reasons, to do the same sanity checking of header and library paths as for the internal backend, and we updated the toolchains to use more recent versions of the Linaro and Sourcery toolchains. We've got a toolchain from Imagination Technologies, and one from Synopsys for the ARC architecture, and we removed a bunch of old toolchains that were no longer maintained. So, the usual maintenance work.

A side project that is somewhat independent from Buildroot, but uses it, is related to this toolchain work: toolchains.free-electrons.com. It's a website where you can select your architecture, select the C library that you want, so it can be glibc, uClibc or musl, and it has a lot of pre-compiled toolchains that are freely available. We have 34 different architectures and variants supported at the moment, multiplied by more or less three C libraries, multiplied by two versions, since we have a stable version and a bleeding-edge version of each toolchain. That makes, I think, a total of 130 or so toolchains available: you click, and you get a pre-built toolchain for that platform. Those toolchains are almost all tested by building a Linux kernel and booting it in QEMU, and all of that is done automatically in a CI environment, on GitLab CI. So it's a new source of freely available pre-built toolchains that you can leverage for your projects to save build time.

On the infrastructure side, I think one of the most interesting changes is the relocatable SDK. In Buildroot, when you build, one of the output folders is output/host, which contains two things. It contains the native tools, the binary programs that run on your build machine, which include the cross-compiler and a bunch of other programs. And it also contains the toolchain's sysroot, which holds all the headers and libraries that have been cross-compiled for the target, so that the cross-compiler can find them when building other libraries or applications for the target. So basically, if you take this output/host and give it to someone else, they have the cross-compiler and all the libraries and files that allow them to compile applications that can run on the root filesystem produced by Buildroot. Essentially, it is an SDK, a software development kit. The problem we had so far is that this output/host thing was not relocatable: you could use it if you left it at the same absolute path, but if you moved it around on your system or to another machine, it wouldn't work. We've had a bunch of contributions to improve that situation, and we now have a target called make sdk that post-processes output/host and makes it ready to be relocated. It adjusts the RPATHs encoded in the native binaries to be relative RPATHs, which allows them to be moved around, and it also installs a shell script that SDK users have to run once they have installed the SDK on their system, to fix up the remaining absolute paths, because there are still a bunch of them. But at least we have fix-up logic happening here, so that's pretty nice.
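[A quick sketch of that workflow, assuming a recent release; relocate-sdk.sh is the fix-up script mentioned above:]

```
make sdk                 # post-processes output/host into a relocatable SDK
# ship output/host to the application developer's machine, then, once, there:
cd /opt/my-project-sdk
./relocate-sdk.sh        # fixes up the remaining absolute paths
```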
Related to that, we used to have all the native tools and the sysroot down in host/usr, with nothing directly under host besides this usr folder. So we moved everything up one level, so that the SDK now looks more like any other toolchain that you can find elsewhere. And since we were cleaning up the native binaries' RPATHs, we took the opportunity to do a bunch of cleanup on the target binaries' RPATHs as well. That's not related to being relocatable, it's just about avoiding RPATHs referring to build-machine locations, which don't make sense on the target.

Another useful improvement is the introduction of hashes to validate the integrity of downloaded files. Each package can contain a package.hash file, next to the Config.in file that describes the config options for the package and the .mk file that describes how to build it. In this hash file you can put the hashes for the tarball and for the patches that are downloaded by the package, if there are any, and you can also put the hashes of the license files that are inside the tarball itself, so that we can detect if there's a change in the license text. The hashes are checked when the package is extracted, so every time you do a build, it checks the hash before extracting. Even if the tarball was downloaded correctly but was later modified for some reason on your filesystem, it's going to be detected at build time. The license file hashes are checked when you generate the licensing information: Buildroot has, as you may know, a license reporting infrastructure; you run make legal-info and it collects the license information for all the packages you have enabled and produces everything needed for license compliance: it puts all the source code in one place, the license texts in another, and then you can give that to your customers to comply with the different licenses.

So, as I said, it allows checking the integrity of downloads, checking that locally stored tarballs have not been modified, and detecting that license terms have changed. It also allows us to detect when upstream re-uploads a tarball that is different but has the same name; some open source projects do these terrible things, and we can detect that and tell upstream: "you're doing something wrong here, you re-uploaded, I don't know, foobar 1.0.9; you should make a new release instead of replacing the old one".
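[For illustration, a package hash file looks roughly like this; a sketch, where the digests are placeholders, not real checksums:]

```
# package/foo/foo.hash -- one entry per downloaded file, plus license files
# Locally computed:
sha256  aabbccdd...  foo-1.0.9.tar.xz
sha256  11223344...  COPYING
```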
And we now have hash files for almost all of the packages; the numbers are on the slide. There are just a few dozen packages missing, but the vast majority have hash files by now.

We do licensing reports, as I said; this already existed three years ago, but there have been a few improvements there. We now use SPDX license codes, to make this information more easily parseable; SPDX is basically a kind of standard for describing licenses. As I said, we added hashes for license files. We added a feature to support storing the source code of binary artifacts; typically that's the case for a pre-built toolchain, which is a bunch of binaries that you download. The package source points to something that is in fact binary, but to comply with the license you want to also provide the source code for it. So there's a new package variable, ACTUAL_SOURCE, which you can point at the actual source code: if there is a tarball containing the toolchain binaries and a tarball containing the toolchain sources, you can tell Buildroot about both, and it will use the second one for license compliance. And we've added a lot of license annotations to our packages, to the point where almost all of them have one; I think there are fewer than a hundred that still lack a license annotation, and people are continuously working on that.

Continuing on the infrastructure side: BR2_EXTERNAL is a feature that allows users to store package recipes, defconfigs or Buildroot configurations, and other build-related files, outside of the Buildroot tree. So you can keep the Buildroot tree pretty much unchanged and keep all your modifications separate, which can be convenient in a number of situations: you can keep your project- or company-specific stuff separate from the Buildroot tree, you can update Buildroot more easily this way, and you can perhaps separate things more cleanly. It's kind of a simplified form of the layer concept of the OE and Yocto projects, and I think also OpenWrt. It isn't as powerful as what Yocto and OE allow you to do, but it provides some of the same features. It's been available for a bit more than three years, but it has been improved. The main improvement is the ability to have multiple BR2_EXTERNAL directories: it used to be that you could have only one, and now you can have several, so if you want to separate things in a more fine-grained way than just "Buildroot and the rest", you now can. We've also extended the mechanism so you can have not only regular packages, but also bootloader packages and filesystem image formats in your BR2_EXTERNAL, which makes the feature a little bit more usable.

On the package infrastructure side, lots of things have improved. What we call package infrastructures is the makefile logic that controls how packages are built. There's a base infrastructure that handles how packages are downloaded, extracted and patched, and you can use this base infrastructure directly if your package has a kind of weird, non-standardized build system, like a hand-written makefile or shell scripts; in that case you have to describe manually how to configure, build and install the package. But fortunately, most open source software uses well-defined build systems, autotools, CMake or other things, so we have specialized package infrastructures that define how to configure, build and install such packages, and you don't have to repeat this description for each and every package.
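[As an illustration, a minimal recipe using one of these specialized infrastructures might look like this; a sketch, where the foo package, its URL and its dependency are invented for the example:]

```
# package/foo/foo.mk
FOO_VERSION = 1.0
FOO_SITE = http://example.com/downloads
FOO_SOURCE = foo-$(FOO_VERSION).tar.xz
FOO_LICENSE = GPL-2.0
FOO_LICENSE_FILES = COPYING
FOO_DEPENDENCIES = libglib2

# the autotools infrastructure supplies the configure/build/install logic
$(eval $(autotools-package))
```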
We already had a number of these, as I said: autotools, CMake and Python infrastructures already existed, but a number of them were improved or added. We improved the Python package infrastructure to support Python 3.x, and we added a number of other package infrastructures; I'll mention perl, waf and rebar, for, respectively, Perl, waf and Erlang packages. The virtual package infrastructure is kind of special: it's a package infrastructure to describe virtual packages, and it's typically used for OpenGL, because we have multiple OpenGL implementations, typically provided by hardware vendors. We wanted to create an interaction between the consumers of the OpenGL API and the providers of the OpenGL API, so that each consumer of the OpenGL API doesn't have to know about every possible provider. A consumer says "I need OpenGL", a provider says "I provide OpenGL", and the virtual package infrastructure in the middle makes sure that everybody finds each other. It works well, and we've had more and more OpenGL implementations packaged in Buildroot for a number of platforms. I should also mention kconfig-package, a small infrastructure that complements generic-package to support running make menuconfig and make savedefconfig for all those well-known software components, such as Linux, BusyBox, uClibc-ng, Barebox and U-Boot, that use Kconfig. Another one was added to help with building kernel modules: there are a bunch of packages that not only build userspace code but also kernel modules, so that could be standardized a little bit. All of those things happened over the last few years.

[Inaudible question from the audience]

Yes, true. So the plans, pretty much like in every open source project, are defined by the patches we receive, and there have already been patches sent for this; they have been going through a number of iterations, and I hope at some point it will settle and end up as something that can be merged. So yes, it's somewhere on the radar, but it's not too actively pushed at the moment. If there is some interest, I believe more help would definitely be welcome. Yep, thanks.

Moving on: graphing. We already had a bunch of graphing capabilities to analyze the system that you produce with Buildroot, mainly dependency graphs and build-time graphs. We added filesystem size graphs, looking like this, so you can know, per package, what the contribution of each package to the whole filesystem size is. So if you want to reduce the size of your filesystem, you know: "oh, Qt is the one at fault, obviously". And you can also do reverse dependency graphs, answering questions like: who is depending on libglib2? Those are the packages that require libglib2, so if I want to get rid of libglib2, because it's taking up too much space for example, I have to figure out whether I really need all those packages. It's pretty much the reverse of the dependency graphs. That can be really helpful to analyze what's in your system, especially when it becomes a slightly complicated system.
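[These graphs are generated with dedicated make targets; the output lands under output/graphs/ by convention:]

```
make graph-depends    # dependency graph of the current configuration
make graph-size       # per-package contribution to the filesystem size
make graph-build      # time spent building each package
```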
It's the basically the base unit hierarchy plus a bunch of Inletscripts and then files in ETC that gets copied to the target before any other package adds binaries and then libraries in there and We split that into multiple packages Mainly to support more correctly various in its systems bit root supports The fuzzy box in it, which is used by default CSV in its System D as an in system and so we split the skeleton into a common part That's common to all in its systems and then split it into Separate packages the part that's more system D specific or more fuzzy box CSV in its specific So this allowed to avoid having CSV related crafts in a system d-enabled system or the opposite it allowed to implement properly read only support for Read only root FS support with system D Which was something that was not working properly back then and we also added support for merge user So we're where a user bean is the same as been and user is being is the same as been which is kind of a Recormant for system D. So it was added as well as part of this like overall effort So it's pretty good. And I think the last piece is just landed in in that summer Yeah, thank you for the precision Five system supports I think there's been less things going on in this area. So this is the part that Takes place at the very end of the build you have built all your packages They're in the target directory and you want to create the final five system image that you can deploy on your embedded system So we now use MKFS dot exd 234 to generate those corresponding file system instead of Jenny x2 to FS It allows to support somewhat I would say simplify better exd 3xd for images Someone contributed support for a xfs. So apparently using that We improve the ISO 9660 support for people who generate bootable USB keys or CD-ROMs and But I think the main thing that's changed is the generalization of using gen image So it's a tool developed by Pengutronics that allows to generate easily a complete SD card or MMC image for a System so you can describe the different partitions what they should contain it just creates that and this way You can just DD that image to your SD card without having to manually create the partitions and they're put their content So it's pretty nice and we also added a Way of having a custom script that runs within the fake root run environment So that's an environment in in in which we create the five system image So it pretends we run as root which allows us to adjust permissions and then various things on the files And and we tend have an effect on the five system image that's being produced until now it was like very fixed and Thanks to that it's possible for people to have some custom logic edits Inside the fake written environment to adjust for their adjust permissions ownership I don't know extended attributes and stuff in inside the the fake written environment So it's more flexibility at it We already had a script that runs before the five system image is created After the five system image is created and we know of one running when the image Five-stem image is created for adding flexibility Reproducible build support was added So the idea is to you make two builds of the same configuration and you get binary identical results So it's only the beginning that was done I'm making sure that timestamps don't creep into the binaries and making sure that the order of the files is always the Same and stuff like that. 
Reproducible build support was added. The idea is that you make two builds of the same configuration and you get binary-identical results. Only the beginning of that work was done: making sure that timestamps don't creep into the binaries, making sure that the order of the files is always the same, and things like that. So we are far from having something complete that will in all cases generate a reproducible build, but it's a first step, and we very much welcome additional contributions in this area; the developers who started this effort are no longer active, so there's room for improvement there.

Packages have been updated a lot: we've added a thousand packages in the last three years, and there have been improvements in many areas. Things like SELinux support were added; Kodi, Go and Mono were added; a gazillion packages for Python modules, Perl modules and many other things. Hardware support was improved, mainly by enabling OpenGL, and lots of other things as well.

Another big area where we improved things is testing, CI and quality. We've added a runtime testing infrastructure; that's pretty new, I think it was merged this spring and then improved this summer. The idea of runtime testing is that we were only doing build-time testing so far: take a Buildroot configuration, it builds, cool, but perhaps it doesn't run at all. So what we've done is write a small Python testing infrastructure which allows us to describe a Buildroot configuration; this one just builds the Dropbear SSH client and server. Then we describe what we want to do with it: we boot it in QEMU and we make sure that an SSH server is actually running. This test is very, very simple, and some of the other tests we have are more complicated. We're trying to make this testing infrastructure grow a little bit, to test more features of Buildroot and make sure they don't break.

We already had autobuild.buildroot.org, which had been running for a while, but interestingly suffered a hard disk crash on the Friday before I left for Prague. We used to have something like 200,000 build results on there, accumulated over the years, and now it's down to a few hundred, because it started again from scratch on Friday. The idea here is that we choose a random architecture and toolchain configuration and a random selection of packages, we build that, and we see whether it works or not. That allows us to detect a lot of dependency problems, a lot of architecture-specific issues and things like that, so it has really helped us improve the quality of Buildroot. That's still running, but we've done a bunch of improvements, mainly building all of our defconfigs: we have defconfigs for a lot of development boards and evaluation boards from various vendors, so people can just build a well-known working system for, you know, a Raspberry Pi or a BeagleBone or QEMU or a bunch of other platforms. We build all of them weekly on GitLab CI. We also run the runtime tests on GitLab CI, same thing.
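[For the curious, the runtime tests live in the Buildroot tree and can be run locally. A sketch of invoking the Dropbear test mentioned above; the test name follows the infrastructure's Python module layout:]

```
# from the top of the Buildroot tree; builds the configuration,
# boots it under QEMU and checks that the SSH server answers
./support/testing/run-tests tests.package.test_dropbear
```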
We're trying to improve the CI. We are preparing autobuilder support for testing multiple branches, mainly to cover the LTS branch; that's not running at the moment, and it's something we want to fix, so there has already been some preparation work and more is going to happen. We've also improved the autobuild effort by sending notifications to the specific developers responsible for given packages or architectures. That's related to the DEVELOPERS file: for those of you who work on the kernel, we have a MAINTAINERS file in the kernel, and the DEVELOPERS file in Buildroot is pretty much the same. It says who is interested in, or in charge of, a given architecture or package, and thanks to that the autobuilders know: OK, if this package breaks, I can email that person and say "your package broke on that architecture, in that condition, can you please fix it?". And we introduced a number of other tools: to detect coding style mistakes, to easily test a package against a large number of toolchain and architecture combinations, a tool to generate Python packages, so there's a lot of tooling going on around Buildroot itself.

Some other improvements came to my mind that couldn't really fit in any of the other categories. We've improved support for what we call Linux extensions: features that are not in upstream Linux but require patching Linux, like Xenomai or RTAI or some specific drivers. We've improved a little bit how this is handled, and it should be a little better now. We've added support for userspace tools that are part of the kernel tree itself, things like perf, tmon or the selftests, so it's now easier to build those as well. We've completely reorganized how gettext is handled; it was a bit messy, and now it's much clearer. We have a system-wide boolean that says "I want to support translations" or "I don't", which is off by default, but if you really need translations in your system you can enable it. That has implications for lots of packages, and it helped us solve a number of build issues we had. We've also added checks on the architecture of cross-compiled binaries: if you build a system for ARM, we make sure that each and every binary on the root filesystem is really built for ARM, and that also helped detect a small number of packages that were a little bit broken in that respect.
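[Speaking of tooling: those helper scripts ship in the Buildroot tree. A sketch of what using them looks like; the tool locations are per recent Buildroot trees, and the package names are examples:]

```
utils/check-package package/foo/*   # coding style checks on a package
utils/test-pkg -p foo               # build one package against many toolchains
utils/scanpypi flask                # generate a package from a PyPI module
```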
What's on the radar? On the radar we have of course lots of other things; these are the main features that I find useful. The git download cache: today, if a Buildroot package says "I want to download from a git repository", it will do a clone, but only keep the version that you selected, in a tarball. And if you change the version, it will basically clone the git repository entirely again. So you do an upgrade of just one commit, and you re-download the entire kernel tree, which is super annoying. What we want to do is keep a local cache of the git repository, so that when you update the tag or the commit hash, it can reuse all the objects it already has locally. Patches have already been posted for that; they are not completely ready for merging, but it's a very good start.

We want to do per-package out-of-tree builds. You can already do a complete out-of-tree build in Buildroot, where you have the Buildroot source code on one side and then multiple projects side by side, but we want to do that inside Buildroot, on a per-package basis. The main motivation is to avoid rsyncing the source tree when you're using a feature like OVERRIDE_SRCDIR (we were discussing this feature right before the talk), which is a nice feature when you're doing active development on a package: you don't want Buildroot to download the package, you want to be able to use the source code that is locally available on your machine. Right now Buildroot rsyncs the entire source tree, which is a little bit annoying, so we want out-of-tree builds for such situations.

Another big feature that we discussed at the meeting this weekend is top-level parallel build. Right now, Buildroot builds the different packages sequentially: it uses make -j inside the build of each package to take advantage of multiple CPU cores, but it doesn't build different packages in parallel. That's something we want to do, but we want to do it right, and doing it right is not as easy as it sounds: we need per-package staging and host directories, probably per-package target directories, and some locking in some places. So it's not that easy, but hopefully we'll get there at some point. Another thing on the radar is more package infrastructures; the two that come to mind, which have already been posted, are package infrastructures for Go and Meson. Rust was not on my list, but support for that language has also been posted a while ago.

So basically, I think the key takeaways are: it's an active project, with emails every day on the mailing list and patches applied pretty much every day. We now have LTS releases, which I think is a very big improvement for the usefulness of Buildroot in embedded devices. A relocatable SDK for application developers. Updates to our package set, both in the number of packages and in the fact that they are being constantly updated. A better testing effort: it could of course be better, like all testing efforts, I guess, but the improvements have been pretty interesting in this area. And interesting new features on the roadmap: top-level parallel build, the git cache, things like that, which I think are really nice. Hopefully that leaves a little bit of time for questions. Do you have any questions, anyone? I have a microphone here.

Question: You already support building a modified existing package out of tree. How are you going to be supporting this? It was one of your last points.

So, what we support today, let me go back to this slide: OVERRIDE_SRCDIR. We support that for every package. You can write a file called local.mk, with as many OVERRIDE_SRCDIR statements as you want. If you write, for example, LINUX_OVERRIDE_SRCDIR = /home/.../linux, what Buildroot is going to do when it builds the Linux package is, instead of doing the normal download, extract and patch steps, it skips those three steps and replaces them with an rsync from the folder that you specified to the package's build directory under output/build, and then it moves on with the regular configure, build and install steps. And then, if you run something like make linux-rebuild, it does the rsync again; since it has already rsynced once, it just copies the few files that you modified, and runs the build step and the install step again. So if you're doing a development workflow where you make a change in your Linux tree, make linux-rebuild rebuilds just the file that you've changed, regenerates the kernel image, and you can regenerate your root filesystem image if you want to. That's the development workflow you can have, and your Linux tree is unchanged: Buildroot will not touch it, so it can be under version control with git, and you can move from one branch to another, do commits, whatever you want there.
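[To spell out that example; a sketch, where the source path is made up, and local.mk at the top of the Buildroot tree is the default location (it can also be pointed to with BR2_PACKAGE_OVERRIDE_FILE):]

```
# local.mk
LINUX_OVERRIDE_SRCDIR = /home/jane/src/linux
```

```
make linux-rebuild   # rsyncs the changed files, re-runs build and install
```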
Another question here.

Question: First off, thank you very much for Buildroot; I've been using it for years, love it, and it keeps getting better, so a big thank you. One thing I struggle with quite a lot is the sort of convergence between a package built for the target, and a package built because you want to use it as part of your boot infrastructure. For example, for a bootable USB key you want to use ISOLINUX or something like that, but I may actually want to build ISOLINUX for the target as well. It sounds like you've made some good improvements to the package configuration system here, particularly with BR2_EXTERNAL and moving bootloader support into it, but I know people have put forward patches for grub2 to build for the target as well as building for boot. You've never seemed to want to accept those patches, but you've been happy to send them out to people who ask for them. Have you got a plan to handle that consistently, or is that still a no-go from you guys?
So, for the grub2 case, indeed there have been patches, and one of the people who participated in the meeting last weekend was working precisely on that topic: separating more cleanly the tools that we build for the host from the tools that we build for the target. The build systems of bootloaders are always a little bit messy in that respect, so perhaps the syslinux state of affairs is not completely great today, and it is definitely possible to improve it, by having a host-syslinux package that builds the host tools, and a target syslinux package that builds only the target tools. I think that's definitely doable; someone just needs to do it, or to send the patches to do it, but in principle it's doable.

Question: Just as a follow-up, that's exactly what I'm doing at the moment: I've basically cloned the package out of boot/, put it into my own BR2_EXTERNAL, so I'm then having to track the upstream changes that you're making in boot/ just to tweak the target directory flags. It'd be nice if that was integrated a bit better, but I'm not a complete expert in Buildroot, so I don't quite know how to start submitting patches. I'll take it offline.

Yeah, please send some patches, we're definitely interested. Don't keep that kind of thing on your side; it's much better if it can be upstreamed and maintained. Any other question? There's a question in the back.

Question: Thank you. Since there are no more questions, let me come back to question one. My day job usually is to build the kernel myself, and then use Buildroot to build an image based on a kernel built out of tree. Could this OVERRIDE_SRCDIR be extended to not use the kernel package at all, but use an already-built bzImage of the kernel?

Well, the bzImage part is trivial: you can write a post-build script that takes the bzImage from wherever you want on your filesystem, puts it next to the target filesystem that Buildroot has created, and spits out the filesystem image; that's really a one-line post-build script. Where it gets more complicated is with kernel modules, because you have to install them into the target, but that's possibly also doable with a post-build script. So with a little bit of integration and a short post-build script, I don't see why it wouldn't be possible.

Question: Yeah, I've got a lot of lights here, so I can't see you, but I can hear you. I've been using Buildroot for quite a long time, though not actively on the mailing list, but I have a question: can we directly use linux-next? Because most of the patches in linux-next may not be in the stable version that you support, for the specific header files.

Yeah, that's right. Buildroot doesn't enforce any kernel version: you have a field where you say which git repository you want to use, and which commit or tag you want to use. So you can use linux-next, preempt-RT, your vendor-specific git tree, whatever you want; there is nothing in Buildroot that forces you to use one specific kernel version. The only place where we may have something like that is the kernel headers, where by default we just have a list of the stable kernels, but you now have an option to say "I want to use the same kernel headers as the kernel I'm building", in which case it will be your linux-next tree, if that's the one you've chosen to build. OK? Thanks.
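[A sketch of the configuration this answer refers to; the option names are Buildroot's kernel options, and the URL and tag are just an example pointing at linux-next:]

```
BR2_LINUX_KERNEL=y
BR2_LINUX_KERNEL_CUSTOM_GIT=y
BR2_LINUX_KERNEL_CUSTOM_REPO_URL="https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git"
BR2_LINUX_KERNEL_CUSTOM_REPO_VERSION="next-20171030"
# use the headers from the kernel being built, instead of a fixed stable series
BR2_KERNEL_HEADERS_AS_KERNEL=y
```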
You're welcome. OK, I get the sign that says it's over. I will be around for the conference, and I will leave the microphone to Jan, who will also be talking about Buildroot. So if you're interested in Buildroot, you can stay in the room, and again, I'll be around. Thank you for your attention.