Okay, good afternoon everyone, and thank you to my predecessor for not overshooting your time slot. I'll be talking about Buildroot and how we've been using it for a project I've been working on for the past four years. My name is Yann Morin; I work for Orange, which is the historical French telco, and in this context we've been using Buildroot. Just a few words about myself: on the left is the Yann Morin you have in front of you today, doing embedded Linux, security, networking and free/libre open source software at work; and on the right is the other Yann Morin, who contributes to Buildroot in his spare time and has basically the same interests in life.

So, the context of the project: the team I'm working on is essentially targeting set-top boxes, the IPTV decoders — both production devices and R&D devices. We have various generations of decoders with various performance points, and one of the most critical issues we have is that we inherit constraints from legacy. One of those constraints is that we only provide part of the firmware: the main part of the firmware is provided by a third party, this third party aggregates others, and in the end it provides us with a complete firmware. The part of the firmware I'm talking about is a complete rewrite from scratch of an existing application. Three teams have been doing that rewrite, about 30 developers, for the past four years: one team in Toulouse, in the south of France, and two other teams in Rennes, in the west of France, where I am located. They are mostly application developers — definitely not Linux experts; if you talk to them about syscalls, they will just look at you in a weird way — and they are definitely not embedded experts either. So I have to provide them with tools they can use.

Following that, we had to choose a build system for our project, and we had a few requirements. First, we want to use generic solutions.
We don't want to reinvent the wheel; we want to use something that is not dependent on the target, not dependent on the SoC, not dependent on the runtime or the type of middleware we have. We want it to be easy to use, and we want no build-time overhead. We want to use an existing solution: we tried to do it with our own build system, and I can't say it was a complete success. And most importantly, the choice was not mine — it was a colleague of mine, who has since moved on to other things.

So the first thing we did was investigate the SDK provided by the SoC vendor, and it is what it is: it is dedicated to the production devices, and it's not even exactly the same SDK for all the production devices — they are slightly different. So it is very specific to the production device, very specific to the SoC, and we can't do research and development on reference designs with that. So it was dismissed.

The other solution that was investigated — and I will stress that this is from our point of view; it's probably not the whole story — is OpenEmbedded / Yocto, which is mostly a distribution generator. It generates firmware images, but as a side effect of being a distribution generator. It is, however, very versatile and highly customizable, but the learning curve is pretty steep, and because the developers are not Linux experts, that's not very nice for them. And we had mostly no in-house knowledge about OpenEmbedded, so we put that to one side.

The other solution was Buildroot, which advertises itself as a firmware generator, and that's what we are looking for. It's definitely not a distribution generator; however, it's pretty flexible and pretty extendable.
We'll see how we've been using br2-external — which Thomas introduced previously — to extend all that. The learning curve is pretty moderate, and some of my colleagues even say it was pretty easy. We had some pretty good in-house knowledge of Buildroot — yeah, myself. And we quickly dismissed a few other solutions because they did not fit our requirements, mostly because they had much smaller communities, except maybe OpenWrt.

So, a quick overview of Buildroot — Thomas has covered a lot of this before. "A simple, efficient and easy-to-use tool to generate embedded Linux systems through cross-compilation": quite a long sentence, but yes, that's what we wanted — to be able to do cross-compilation with an easy-to-use tool. Basically, using Buildroot is just a matter of running `make defconfig` and `make`, and that's all: what you get is your result. It's entirely community driven, which is a good option for us because it is not a custom solution. It uses Kconfig and makefiles, it has a website and a big manual, and I find it pretty fun to work with.

To show that writing a package in Buildroot is pretty easy, we've got two packages here. The first one, lua-lpeg, is a LuaRocks package, so you only need to state the version, and Buildroot will automatically know how to download, extract, patch and build it, because it is a LuaRocks package.
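To give an idea of the scale, a LuaRocks-based package boils down to a few lines of make. This is a simplified sketch modelled on Buildroot's real `package/lua-lpeg/lua-lpeg.mk`; the actual file carries a couple more variables:

```makefile
################################################################################
#
# lua-lpeg
#
################################################################################

LUA_LPEG_VERSION = 1.0.2-1
LUA_LPEG_LICENSE = MIT

# Download, extract, patch, build and install are all inherited from the
# luarocks package infrastructure; nothing else to write.
$(eval $(luarocks-package))
```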
The licensing information is technically not relevant to the build, but you need it for the legal-info manifest that is generated at the end, and that's very important. lua-lpeg is the simplest package in Buildroot, the smallest one, and fpng is the smallest autotools-based package in Buildroot: you just provide the version and the website, and Buildroot will know how to download, extract, patch, configure, build and install it, because it is an autotools package. So unless your package is doing something very weird, writing a package in Buildroot is pretty easy.

Once you have your packages, Buildroot will configure and build them in sequence, one after the other, respecting the dependency chain between your packages. At the very end, it will do a kind of cleanup pass on the target directory — removing headers and static files, because you don't need them on the target, stripping executables, and stuff like that. Once this is done, it generates a filesystem image, for example a tarball, which is a filesystem image in Buildroot parlance.

An interesting point is that you can hook into the infrastructure: there are hooks at the beginning of the target-finalize step; you can provide post-build scripts that are run right after the target-finalize hooks and right before generating the images; and you can provide post-image scripts that are run after your images are generated. I'm not going into the details, because that's not the purpose of the talk, but we'll see how we've been using them. And in each package's standard build procedure — which is again interesting — every step has a pre-hook and a post-hook.

When you're doing actual development on your machine, you can use what we call the override-srcdir mechanism: instead of downloading the package, you have your source tree locally on your laptop or computer, and Buildroot will use that to build your package, so you can do active development with that. And Buildroot also provides what we call a
br2-external. (OK, the slides are back — good, thank you.) br2-external allows you to provide extensions to Buildroot: new defconfigs, new packages, new filesystems, and stuff like that, and we've been using that quite heavily in our project.

So, the first, basic thing in our setup: we're using a br2-external tree, as a git tree where each commit is buildable — this is the reference of our project. First you create a br2-external tree; the very minimal one just needs an external.desc plus an (even empty) Config.in and external.mk, and that's it, you've got a br2-external tree. Then we've been adding a git submodule to contain Buildroot, so every commit of our br2-external tree will use the correct version of Buildroot. That does not gain us much so far, except that we can add new configurations for all our boards. We have a few boards here: development configurations, end-to-end configurations, production configurations — a little bit simplified, of course, but you get the idea.

Some people are using snippets: basic defconfig files describing a board, plus snippets describing their software stack. We've tried to do that, and in the end it's more interesting to have various defconfig files that you update when needed, because they are not updated that often. Manually managing a defconfig file is pretty easy, because it's a two-way mechanism: you load the defconfig, you modify the configuration, and you save it back. Using snippets, you can split a configuration into snippets and assemble snippets into a configuration, but Kconfig does not know how to do the opposite, and that's a problem.

We are also adding new packages: for example, a package for our live application, which renders live streams to the user; a recorder — PVR, personal video recorder; stuff like that. Totally standard Buildroot packages.
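Putting those pieces together, the kind of br2-external tree described here could look roughly like this (board, package and file names are invented; only `external.desc`, `Config.in` and `external.mk` are mandated by Buildroot):

```
my-br2-external/
├── external.desc           # name of this tree (defines BR2_EXTERNAL_<NAME>_PATH)
├── Config.in               # top-level Kconfig entry point (may be empty)
├── external.mk             # top-level make entry point (may be empty)
├── buildroot/              # git submodule pinning the Buildroot version
├── configs/
│   ├── boardA_dev_defconfig
│   └── boardA_prod_defconfig
└── package/
    ├── orange-live/
    │   ├── Config.in
    │   └── orange-live.mk
    └── orange-pvr/
        ├── Config.in
        └── orange-pvr.mk
```

With the submodule layout, a build is then typically something like `make -C buildroot BR2_EXTERNAL=$(pwd) O=$(pwd)/output boardA_dev_defconfig` followed by `make -C buildroot O=$(pwd)/output`.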
You just have to register them in Config.in and in external.mk — sourcing the Config.in files and including the .mk files — the totally standard Buildroot mechanism. (Oops, sorry.)

Something that is also interesting: if your device uses a specific kind of firmware image, with a specific format, you can just create a new type of filesystem. This is exactly how our filesystem is generated in Buildroot; Buildroot itself uses the same syntax to create its filesystems, so there is nothing specific here — it's again totally standard Buildroot. Here we are using our "GP" tools, which are the tools to manipulate our GP image, and we define the command to generate the filesystem: it takes the rootfs.tar as input and generates the output, and this registers the filesystem type with Buildroot. Nothing very fancy so far.

One thing you can store in your br2-external tree — and we are doing that as well — is what we call board files. Board files are basically whatever does not fit in a package: for example, the content of the basic skeleton (not the skeleton package, only its content), some post-build scripts — one for production, one for tests — and some kind of overlays that Buildroot will copy as-is to your target directory at the very end.

If you have overlays, try to avoid them: move files out of overlays into packages. If you have fonts, create a package that installs your fonts; if you have a dataset, create a package that installs that dataset. Files that come from an overlay are not accounted for in various parts of the Buildroot infrastructure, so for example you don't know why your target is big: maybe all your packages install small files, but your dataset is big, and the graphing infrastructure in Buildroot will not help you. There are other reasons, which I will explain later.

And one big thing: because this external.mk file is included by the Buildroot infrastructure,
it has access to all the variables and all the infrastructure in Buildroot. So you can add extra makefile logic, and you can add additional infrastructure.

The first thing you can do is add new rules: you just write a new make rule in your external.mk. It's not shown in full here, and all those variables are standard Buildroot variables. This one is just an example that checks that all the packages in the current configuration build without depending on the build order — which means that all the dependencies of those packages are correct, except maybe inherited dependencies, but that's not a problem for us. You can provide whatever you want here, as long as it does not clash with the existing infrastructure.

There are other places where it is interesting to provide new things, for example the target-finalize hooks, which are run at the end, when all your packages have been installed: you know your target directory is complete, so you can run hooks to do things with that directory. For example, we have a tool that cleans up shared libraries: at runtime you don't need the symlinks to a shared library, you only need the file that is named after the library's soname, so this tool just gets rid of the symlinks and renames the libraries to their soname. You can — and you should — offload this kind of functionality to a helper script, in Python or shell or whatever you want, because writing it in makefile is not very maintainable, and your editor will not help you with syntax highlighting; so move it to helper scripts.

Now, we have a few requirements for our packages. We want them to do stuff, but we don't want developers to write the same code again and again and again in their .mk files. So we decided to introduce a kind of package infrastructure, which allows developers to write standard Buildroot packages without worrying about how the extra features we need are implemented.
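To make that concrete before looking at the slides, here is a heavily simplified sketch of what such an infrastructure can look like in an external.mk. All the names here (`orange-package`, `_HAS_DOC`, the doc hook, the `doc/` subdirectory) are invented to mirror the talk; the real code is more elaborate:

```makefile
# Generic hook, defined ONCE at top level, so it needs no dollar-escaping.
# Buildroot sets $(PKG) to the package name when it runs a hook.
define ORANGE_BUILD_DOC
	$(MAKE) -C $($(PKG)_SRCDIR)/doc
endef

# Per-package glue; a package calls $(eval $(call orange-package,ORANGE_LIVE))
# at the end of its .mk file. Note the $$: this body is expanded twice (once
# by call, once by eval) — the talk's "too many dollars" remark.
define orange-package
ifeq ($$($(1)_HAS_DOC),YES)
$(1)_POST_BUILD_HOOKS += ORANGE_BUILD_DOC
endif
endef
```

A package then only carries declarations, such as `ORANGE_LIVE_HAS_DOC = YES`, and the behaviour lives in one single place.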
So they just have to write a standard Buildroot package — here, a CMake package, with a version, a website where to get it, the license information, whether it installs to staging, the dependencies, and stuff like that — and call the orange-package macro. So far it does not provide anything very interesting, except that from now on packages will build their documentation automatically: by the mere fact of calling this and setting a variable saying that the package has documentation, all this code will be added as if it were written in the .mk file. Developers don't have to write a hook to build their documentation; it is done automatically.

In the first version, we defined a new macro for each package: it calls make, changing directory to the package source tree /doc — so it supposedly builds the documentation — and registers itself as a post-build hook; and the same for a macro to install the documentation into a specific directory. This is not very interesting, because you define the macro many times, once for each package that needs to build documentation, which means you have to escape the dollars here, because the definition is inside a macro that is evaluated twice: you want the variables to be evaluated only the second time, not the first, so you have to double the dollars. So we have a first-world problem: we have too many dollars.

It's also bad for performance, because as you add new macros you create collisions in the hash tables internal to make, which it uses to find a variable definition; the more macros and the more variables, the slower it gets. Instead, you should do as we've been doing for the translations: define a macro that is absolutely generic, and call that same macro for all packages that define translations. So, again, a package that specifies that it has translations:
It makes only a declarative statement — no code is written by the developer — and it automatically gets that code expanded. If the Qt5 linguist tools are enabled and the package has translations, it inherits those two macros: one, registered as a post-build hook, to build the translations, and one, as a post-install-target hook, to actually install the translation files. And the package automatically gains a dependency on qt5tools, without having to write it itself. That is, if we change the way we handle translations, the package will have nothing to do, because we change that code once, here. And because the macros are defined only once, we don't have the too-many-dollars problem, and it is more readable.

So, our applications need to be run by something else, which we call our application manager. An application registers with the application manager by installing two configuration files: one for the application manager itself, and one for our window manager, where it tells whether it is a full-screen or an overlay application, or a pop-up, or whatever. Those two files are just static files, installed from the package directory to the target directory by a post-install-target hook, which is again a single macro expanded for all packages. This macro just copies the files from the current package directory to a specific folder in the target directory. And actually, while reviewing and writing these slides, I noticed we had a bug here that I had to fix in our code — so do give talks at conferences, you get to fix your bugs! I think Steven Rostedt made the same comment this morning, or yesterday.

We also have various types of applications: those that are run at init time, those that are run as services, or as system daemons. Developers just define the type of application they are installing: for example, here the orange-live application declares that it installs "live".
So "live" here corresponds to the "live" here and here, and this means that the application is to be started automatically at boot. There are also system applications and services; the difference between them is when they are started: system applications are started earliest, services are started early but after system, init applications are started last, and all the other applications are just started later, on demand.

What we have here is quite complex code that generates a JSON file containing the list of init applications, services and system daemons. Don't write that — it's totally unreadable. You could probably write a template and substitute your values into that template; it would be much easier to read. This code is registered as a post-build hook of our application-manager package, and the file is installed into the target directory as a post-install-target hook. What this means, again, is that developers do not have to write code, and we don't have to maintain static files describing all the applications: developers just write, in their .mk file, whether their package installs applications or not, and this is again very easy for them. Writing this kind of code is definitely not easy, and for me, as the maintainer of this packaging, writing static files is not very nice, because I would need one static file for when the live and the PVR are enabled, another one where only the live is enabled — that's not doable in the long term.

This application-manager package starts with an "a", which means it is parsed by make very early in the scanning process; however, those variables are only set when a package is parsed, so packages that are scanned later will register their applications after it. But that's not a problem, because those variables are used as part of a post-build hook,
which means they are evaluated very late, so you can still use variables even though their values are not yet known: that's make's syntax for lazy evaluation.

Buildroot has a mechanism to define users, and all of our applications run as a specific user. That's good for security, because when an application creates a file on the filesystem, the file belongs to that application, and other applications can't read it. So you have to have various users. However, Buildroot allows you to define a user without specifying its UID, and Buildroot will assign one for you, which means the live application here would get, say, UID 10042 if it is automatic. You run that on the target, the live application creates a file that belongs to the live user, which is UID 10042; then you add a new package — let's call it joystick — that declares a joystick user, and because "j" sorts before "l", the joystick user now gets UID 10042. When you update your device, the live user is no longer 10042, it is 10043: it no longer has access to its own files, and the joystick user can access the live user's files. So we have some code that ensures that all UIDs are explicit. Those are just variables, so they are known at parsing time, and you can just do some checks on them. Again, this is only declarative code: the developer of the package just has to declare a user, using the normal Buildroot mechanism.

Our applications call each other through D-Bus, and because they are running as non-root users, we have to generate an authorization file for each user, which says, for example: this user is allowed to talk to the bus and call methods on that interface. If the user is not allowed to call a specific interface, the call will fail. However, maintaining this kind of file by hand is very tedious, because developers may add new calls or new interfaces, and they may remove old ones, so you would get bit-rot in these authorization files.
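For reference, these per-user authorization files are ordinary D-Bus policy files; a hand-maintained one looks roughly like this (the user, bus name and interface are invented for the example) — and every line of it is a candidate for bit-rot:

```xml
<!DOCTYPE busconfig PUBLIC "-//freedesktop//DTD D-BUS Bus Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/dbus/1.0/busconfig.dtd">
<busconfig>
  <policy user="live">
    <!-- the 'live' user may own its own bus name... -->
    <allow own="com.example.Live"/>
    <!-- ...and call one specific interface of the data-model service -->
    <allow send_destination="com.example.DataModel"
           send_interface="com.example.DataModel.Streams"/>
  </policy>
</busconfig>
```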
So we decided that, because we are using the D-Bus APIs through QDBus, we could scan the code for various known patterns and generate the associated authorization files. OK, the code is a little bit complex, but what it basically does is this: as a post-build hook, it calls this macro, which just calls a tool that scans the package source tree for a specific user, and generates a system D-Bus .conf file — I'll cover this a bit later. So, as a post-build hook, we scan the source code and generate the configuration files for this specific user; and at install time, we install them: in target for the session bus, and in staging for the system bus.

Remember that we are not the main part of the system, so the system bus does not belong to us: it belongs to the main part. We are not running it; we are not even aware of how it is run. We just know there is a system bus, to which we must provide authorization files. That's why we provide them in staging, so that we can extract them more easily at the end of the build. But our session bus, we manage, so we install those authorization files in the target directory. Only the part of the code that deals with the system bus is shown here, but the session one is about the same.
It would not fit on the slide. And again, because all our packages call this orange-package macro, we extend that macro with a new call here, which expands all this code, and more, for a specific package. Then the package just has to say: yes, it is using the system D-Bus API — nothing more.

With a slight exception: when two applications want to talk to each other, each one needs the XML interface file of the other, and whichever is built first loses, because the XML of the other is not present yet. So we have to break the loop, and an application can just declare that it uses extra D-Bus interfaces, ones that are not found by scanning the code. That is what we have here: "extra D-Bus interfaces" — scanning the code covers the package's own D-Bus API use, and the declared interfaces cover the unknown and unscannable interface usage.

And there is another exception: when an application uses a library, and that library makes calls over D-Bus, there is no way to scan for that in the application's code, because it happens in the code of the library. So we have a way for libraries to export the interfaces they use; and because the orange-live application depends on the data-model library, it automatically gains an authorization on this interface. That is done by this code here, which scans the exported interfaces of all inherited dependencies. So we generate the D-Bus authorization config files automatically, without the user even knowing what's going on, just by the mere fact of writing this one line.

Something very similar to D-Bus is AppArmor. AppArmor is a way to constrain an application to a specific set of files it is allowed to use, with various rights on each file, like read, write, lock, execute, map, and stuff like that.
It is allowed to use with various writes on a file like read or write lock exact map and stuff like that and This is tedious to write because it is a huge list that depends on what libraries you are linked with for example your application may Use slash proc slash months and varies absolutely no reason that it does Because when you look at your application code this path is never mentioned in the code and why does it need that? It needs it because if you are using some kind of shared memory six things the gilip see will try to find a TMP FS mount and for that it needs to scan slash proc slash months But the application developer does not know about that and Even himself is not using share memory. It's being used for example for well-owned and He's just writing an application so Yeah It's great to just to work to write but most importantly. It's very prone to beat rot in case a library changes or developer uses new Files or removes access to a file from his application So It's pretty difficult to maintain a security feature Which is which gets contrary to security itself So we've added a way for an application to define that it installs an executable to be protected by a promo and that's Specific snippet for that application is provided here and this application just requires an access to a file in red mode and an access to another file in read write and lock mode and that's all in a developer has to write however When an application has a dependency on a library which itself needs some access to files libraries just define What library? Provide a par most snippets and this library that I'm a little needed to read its co-fiction file and access to its Socket and bike because the live application is linked to the live data model library. 
it will pull in this snippet automatically. The same goes for data files: for example, the fonts package here specifies that it installs data files to be used by other AppArmor-protected executables, and in this case all the TTF files in that directory are allowed in read mode. So we've added a new kind of dependency: a data dependency, here on the orange-fonts package. And this is the reason why you should not ship your datasets — whether they are fonts, images or something else — in an overlay: there is no package associated with an overlay, so you would not be able to declare anything here. Having the dataset in a package enables you to do such things.

So: with a very small snippet of code that handles the AppArmor side of things — a few macros that install things from the current package directory to staging or to target, plus a few hooks that are registered automatically — we've got something that handles pretty much everything. And at the very end of the build, in a target-finalize hook, we scan all the ELF executables with a tool we wrote, which uses a root directory, staging, output, and so on, for the current binary. We can only do that at the very end, because data dependencies are not actual build dependencies: when the executable is installed, not all the snippets it depends on are installed yet, so we have to postpone the scanning of an executable to the end of the build, and that's where we register it. But we don't just register it in the target-finalize hooks: we register it at a very specific moment during the build, because it must come after the shared libraries have been sanitized, but before two other hooks we are calling.

I'm almost finished — only one slide and then I can go to the conclusion. So far we've seen how to hook into the various steps of the build of a package, or into the target-finalize hooks of
the infrastructure. But for some configurations, for some boards, you will want to apply some local customizations. For example, we have a post-build script that will generate version files, and remove some files that we do not want on the target: for example, all the D-Bus XML descriptions are absolutely useless on the target, so remove them. We also remove the .empty placeholders — a git tree does not track directories, so we have .empty files in there — and a lot of other things. You can use a post-build script to do last-minute fixes or cleanups in your target directory. And you can provide many post-build scripts: you can have a production one that is always run, and one that is only run in your test configurations. For example, we have one that opens the D-Bus up to TCP, so that we can call testing things over D-Bus from a remote Jenkins job; and this is conditional — if it is not already done, we do it.

So, as a conclusion: we've been adding a lot of infrastructure — we've added more than I have shown, and I'm already short on time. Adding infrastructure allows things to be done automatically, without the developer calling specific tools: it's just part of the make process. The developer calls make, and he gets a firmware where all that stuff has been executed. Most importantly, it's systematic, because it's done the same way for all packages. And of course, because it is done automatically, it is reproducible, which is very important for embedded systems. And it is maintainable, because it is written in a single location: if we need to fix it, we don't have to hunt down all the users of a specific tool to fix it.
We just fix it in a single location. Finally, it is extendable, because calling the orange-package macro we have introduced will automatically pull in the new features we add to this macro. And as a last point: I asked my colleagues to come up with a wishlist for Buildroot, for extending Buildroot, and I got no reply. So I think Buildroot is OK, and there is nothing more to do with Buildroot — we can stop working.

I think I have to be very fast: if you have questions, please speak now, or never speak again.

Q: Can you make the slides available somewhere?

A: Yes, I will.

Q: I'm a make idiot, and I have tried to write your built-in macros and they never work for me, so some sort of template that I can learn from would be really cool.

A: Yeah, so I will be making the slides available on the schedule page after the talk — maybe not right now, but before the end of the week. And also, just a side note: not all the code I've been showing here is complete; there is other stuff, but it gives you a view of what is possible.

Q: If you make a small change to your application — for example, you add some debug output — how long does the rebuild of the image take?

A: So you mean: if I just add debug output to the application that I want to package into the image, how long does the rebuild take?
Well, it really depends on what you mean, because if you just add a few lines of code to your application, maybe only the files that were modified are rebuilt: we are using the override-srcdir feature provided by Buildroot here. When a developer changes the code, he has the code in a specific directory, which is copied by Buildroot; if you just change that code, you can ask Buildroot to rebuild the package, and it will just copy — rsync, I think — the code to the build directory, and only the files that have been modified are copied. So if the build system of your application is correctly written, like CMake or things like that, only the modified files will be rebuilt, so it's pretty fast. Thank you. Any other questions?

OK, so let's call it a day. Thank you very much, and enjoy the rest of the conference. Thank you.