Hello everybody. I'm here to talk about a tool called Buildroot, which has existed for quite a long time in the embedded world, but which I think could be interesting for people working on containers, in data centers, and that sort of stuff.

For a start, I would like to give a little warning. I'm a person from the embedded world, and more precisely from the industrial embedded world. That's totally different from mobile phones, and a bit different from consumer embedded stuff: we don't have exactly the same problems, it's pretty similar but still a bit different. And it's totally different from the data center. So again, I think I have a tool that might be interesting for other people, and that's why I wanted to talk about it, but I may be wrong. If it does not solve your problems, that's fine; that's why we are here.

The second thing is that we're going to talk about root filesystem generation, and only root filesystem generation. There is nothing about namespaces, control groups, or that sort of stuff. I love that stuff, it's pretty cool, but it's not today's subject.

Before we get started, a little word on why I'm interested in containers and what they mean for the embedded world. Containers in the embedded world are kind of a great technical tool that's still looking for a problem to solve, because we don't have anything like thin provisioning, dynamic deployment, or live migration; those don't exist in the embedded world. Our applications tend to be highly hardware-dependent, so isolating them from the hardware makes no sense. We do have some use cases for multi-versioning, different versions of libraries, that sort of thing, but it's mainly linked to old or closed-source software that absolutely wants that particular version of that particular library; it's usually more of a hack when we need it.

There are also limited use cases for containers as a sort of weak package manager. I won't go deep into why, but in the embedded world we generally try to avoid packages at almost any cost, because they break devices, so we usually reinstall from scratch when we upgrade. But that's a different subject.

More and more, people are trying to push the DevOps approach into the embedded/IoT world, but it's not that easy, because rapid deployment does not work there. A typical upgrade of an embedded device can take months or years, because devices stay offline, because people don't upgrade them, for various reasons. This means that whenever you do an upgrade, you have one more version of the software floating around that you need to deal with, and that will have a different upgrade path. So we try to limit the number of versions, and since upgrading even one package in an image means a new version, we tend to do big upgrades.

We also need to archive all source code, for legal reasons. If you're in the data center, you're usually providing a web service, and you don't need to redistribute your code; you should do it, but you're not legally forced to. In the embedded world, we are. Which means that whenever we do an upgrade, it's one more version
that we need to archive and distribute, and that's a lot of work. And the last point: rollbacks are really frightening. We do have rollback mechanisms, and they're very robust, because they're not allowed to fail. But you need to remember that we have no access to our devices once they are sold. We might have remote access, depending on the use case and what's actually implemented, but that's very limited, and if things go wrong, we're toast. So we tend to be very, very cautious.

The last use case that remains is containers as a security feature, which again is not that simple. Why? Mainly because it's hard to split an embedded system into components. There is usually one userspace program that does the bulk of the job, and it needs network access, because it's basically getting stuff from the network; all hardware access, because it's driving the hardware; and all data access, because the data is what it needs to drive the hardware. So splitting into frontend, backend, database, and so on doesn't really work for the embedded world. But we are still trying to look into it, to see how to use it and what's going on.

So what do we want? Because of those legal requirements, and also because we need to manage software in the very long term, like 20 years, we need complete traceability down to the line of code. Everything needs to be checksummed, and we need to make absolutely sure that every line of code that ends up in our product is archived. So: complete archival of the source code, meaning all source tarballs; all patches, because when we have to add a patch on top of upstream we need to keep that patch; all overlays, because we add files to our images just like everybody; and all the scripts we run during the build need to be archived as well. We also need to be very independent from our host. Why is that? Because in 20 years, Ubuntu 26 will not work with the packages we've been building now. So we also need to archive all our tools: the compiler, the source of the compiler, the autotools, the source of the autotools, all that sort of stuff.

So, Buildroot. Buildroot is an image building system; it's basically an automated Linux From Scratch. It has big Makefiles that handle dependencies for you: if you ask it to build Apache, it will find the dependencies and build all of them for you. It will download everything, with all checksums verified by Buildroot directly; if you provide patches, it will patch your software; then it runs configure, the build step, and the install.

It's also an image generation tool. Once everything is installed, we have to transform that into something usable. That means tweaking permissions, like adding setuid bits or root ownership on the various files; adding customized files and content; creating filesystem images; packing filesystem images into disk images, because we also do whole systems, it's not just containers; collecting all the licenses and making sure they haven't changed; and collecting all the source code for archival. That's all the things it does.

It's cross-compilation friendly, because most embedded ARM devices are way too small to compile their own software, so we need to cross-compile. And it's a very old project: it's been around since 2001. The overall philosophy we have in the embedded world is one command to do everything; we type make. Buildroot is make-based, so you type make, and at the end you have an image, and everything is automated.

Quick example: how does it work?
You clone Buildroot, you use menuconfig to customize your build (here I just changed the architecture to x86-64, mainly as an example), you type make, and you have an image. At this point you start needing to be root, because we're going to unpack the root filesystem and just launch it with nspawn. I use nspawn because it's the one I know, but there is nothing nspawn-specific in Buildroot.

So what do we have in this image? It's already a working image. It has BusyBox, so a basic shell and utilities, and uClibc; no kernel, since we did not enable kernel building. And it did not only build uClibc and BusyBox: it did the whole toolchain generation, plus the autotools (which are needed for BusyBox), make, makedevs, fakeroot, and a few other tools that are needed to build the image. It's technically a heavy container, because it has an init system; it's the System V-style init system that comes with BusyBox, but it's in there. It took me 12 minutes to build on this laptop, with everything pre-downloaded, and it's 1.6 megabytes in total, so it's very small. As a comparison, I also used a pre-compiled GNU C library toolchain; in that case it takes only one minute to compile, but the result is 6.3 megabytes. The GNU C library is quite a bit larger. Is that a lot? It depends on your use case, but it's like three times larger, so it's an important difference.

Now something a little bit more realistic: we're going to do a light container with Apache. Light container means no init system, and we'll try to remove everything we can remove. So what did I do? I changed a few more options. I was tired of recompiling my toolchains, so I took a pre-compiled toolchain from Bootlin, which provides them as a sort of service. I've added Apache, and I've removed anything I could remove. I removed BusyBox entirely, which means we have no shell, no standard utilities, no nothing. I told Buildroot that this new system would have no init system, so it removed any init.d or systemd services that might be lying around, and I also told it that there was no symlink for /bin/sh, because there is no shell. We do the same thing, we rebuild, and we have a functional light container: no shell, no init system, no customization whatsoever. You get the Apache "It works!" page, and that's it. It took me three minutes to build, and it's 11 megabytes, so Apache is a bit big, but it's still reasonable.

Then you need to learn how to use it, because it's not that obvious. apachectl, which is the standard way to start an Apache server, doesn't work, because it's a shell script and we have no shell. So you have to launch it manually. Again, I use nspawn, which is the one I know: you run httpd as PID 2 and you tell it to run in the foreground, because by default it forks, and when it forks the main process dies, and when the main process dies in a container, the container terminates. So you need to keep it around. Very easy to use, that's the whole point: very small and very easy.
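To make that concrete, here is a rough sketch of the whole session. The repository URL and the /usr/sbin/httpd path are from memory, so double-check them; the output paths are Buildroot's defaults.

```sh
# fetch and configure
git clone https://git.buildroot.net/buildroot
cd buildroot
make menuconfig                 # e.g. set Target Architecture to x86_64
make                            # builds the toolchain, the packages, the rootfs

# unpacking is where root becomes necessary (ownership, device nodes)
mkdir /tmp/rootfs
sudo tar -C /tmp/rootfs -xf output/images/rootfs.tar

# heavy container: boot the BusyBox init system
sudo systemd-nspawn -D /tmp/rootfs -b

# light Apache container: no shell, hence no apachectl; run httpd
# directly, in the foreground so the main process does not exit
sudo systemd-nspawn -D /tmp/rootfs --as-pid2 /usr/sbin/httpd -DFOREGROUND
```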
So now, how would you customize this? Because right now we just took a generic Apache, and there is nothing of ours in it. The easiest and most common way to do it is with overlays. I guess most of you are familiar with overlays; that's more or less how you do it with Dockerfiles too, so it should be okay. Basically, you give Buildroot a directory and you tell it: this is stuff that you have to put on top of the filesystem, and it will just put it on top of the filesystem after everything else. So you tell it where the overlay directory is, you create the directory and the subdirectories as you want them on your target, and you simply put the files in there. And I'm still a normal user, I do nothing as root: you run your make, you unpack your filesystem, and you run it.

In this example, I did a portable service. I don't know if most of you are familiar with portable services? A portable service allows you to package a binary and its dependencies in an image and run it as a systemd service; it's basically a light container plus a service file, so it can be used outside of the container itself. Weird way to define it, but it'll do. So the only thing I add is the apache.service file, and then I run it. Again, I don't care about container configuration here, so I just used the "trusted" profile from systemd and did not go into the details of the configuration. And it works. That's it.

The whole point is that using overlay directories is great for archival, because it's trivial to use with git: you see when your files have changed, you can trivially archive them, you can trivially work with them. And overall, it's always the same thing. The whole point of Buildroot is that anybody who wants to play with it can just clone the stuff, make menuconfig, build, and it works. Just play with it and you'll see how easy it is to use.

There are more customization tools. We have a patch directory: you create a subdirectory for any package you want, you put a patch in there, and Buildroot will automatically apply the patch to the sources. So you have this chain where Buildroot downloads the source, checks the checksum for you, applies any patches you give it, and builds the whole thing; the whole thing is checked, and the patches are again in directories that are easy to store in git. So it's pretty easy to have everything you put in your image stored in a single place, and trivial to find.
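As a sketch, a hypothetical layout combining both mechanisms could look like this. The board/demo paths, the patch file name, and the unit file contents are made up for the example; the two options at the end are normally set through make menuconfig.

```sh
# overlay: files copied verbatim on top of the root filesystem,
# here the service file for the Apache portable service
mkdir -p board/demo/overlay/etc/systemd/system
cat > board/demo/overlay/etc/systemd/system/apache.service <<'EOF'
[Unit]
Description=Apache httpd

[Service]
ExecStart=/usr/sbin/httpd -DFOREGROUND
EOF

# patches: one subdirectory per package, applied automatically
# between download and build
mkdir -p board/demo/patches/apache
cp 0001-local-fix.patch board/demo/patches/apache/

# the corresponding options, as they end up in the .config file:
#   BR2_ROOTFS_OVERLAY="board/demo/overlay"
#   BR2_GLOBAL_PATCH_DIR="board/demo/patches"
```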
Then there are the post-rootfs scripts, which are scripts run within fakeroot. Do people here know a little bit about how fakeroot works? OK, I'll do a quick reminder. fakeroot basically allows you to run any process, and it uses LD_PRELOAD to slip in between the C library and the process and emulate any call that would require root permissions. For instance, if you try to change the ownership of a file to root, it will write down somewhere that you tried to do that and tell you that it worked, and whenever you ask it about the ownership again later, it will tell you the file is owned by root. But it won't make the actual change; it just simulates it for you.

That's good enough to build an image as a normal user, because that's how you create your image. Pretending to be root, still inside that same fakeroot universe, you run tar, and you create a tar file with the fake permissions: inside the tar file you actually have the right UIDs. That's how you can create a whole tar file containing device nodes, or files owned by root, without being root yourself. Linux won't allow you to untar that file, because you need the right permissions to untar it, but you can create it. And after that, still within fakeroot, you can take the tar file and create a disk image containing root-owned files, without being root.

So these post-rootfs scripts allow you to run shell scripts within fakeroot, to do the kind of adjustments where you would normally need to be root.
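Here's a minimal sketch of that trick by hand, assuming the usual fakeroot tool and an unpacked target directory to play in. Note that the recorded ownership only lives for the duration of the fakeroot session, which is why the tar runs inside it.

```sh
cd target/                    # the root filesystem being assembled
fakeroot -- sh -c '
  chown root:root etc/passwd  # recorded by fakeroot, not really done
  ls -ln etc/passwd           # ...but reported back as UID/GID 0
  mknod dev/console c 5 1     # device node, no real privileges involved
  tar -cf ../rootfs.tar .     # the archive stores the pretended owners
'
ls -ln etc/passwd             # outside the session: still owned by you
```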
Then you have a less-used option, the post-image scripts, which are more for the stage where you assemble the various partitions and filesystem images into a disk image. That's less useful for containers, I guess, but when we're doing actual embedded stuff we use that a lot.

So yeah, this whole thing has been around for a long time, and it has solved the problems of archival and of reliability, which is the whole point. So why use Buildroot, what does it bring?

It's source-based: you have the sources of everything around. The hashes are included in the recipes: whenever Buildroot upstream upgrades the version of a package available in Buildroot, it includes the hash of the sources in the Buildroot git tree. So when you build, your system will go and download, say, a git repository on GitHub, and then compare it with the hash coming from the Buildroot project. Everything is checked automatically.

No need to be root to build images. This morning I learned about all the different approaches to rootless container building; this one is also completely rootless. It does not have any setuid binaries anywhere; we just do everything as a normal user. You don't need any root-only call to build an image.

Complete archival, complete traceability, complete license compliance. These are really, really needed in the embedded world; we kind of can't work without them.

It's highly independent from the host system. That's kind of obvious when you think about cross-compilation, but we do need to rebuild every tool. We need to rebuild the autotools; we need to rebuild make, because different versions of make produce different builds, so if you want your build to be reproducible, you have to have the same version of make. And more importantly, in 30 years' time, who knows what version of make we will have? If you have to rebuild an old piece of software in 30 years, it's very important to have saved all the source code. If you want a good source of stories where open source saves the day, go and find some very early Linux users, because those are the ones that had serious problems with hardware and could solve them, because they could patch kernels that were 20 years old. And it works, and it's awesome, and you can save your customers' lives this way; I did it a couple of times.

All patches are visible and easy to manage; same idea, we have all the source code around.

Reduced attack surface: with a binary-based distribution, you cannot easily (or at all) change compilation options. With this system, if it doesn't find an optional dependency for a piece of software, it will compile the software without support for that optional dependency. So: reduced attack surface.

Reproducible builds: they are marked as experimental, but in practice they work really well. Buildroot and Yocto, and the embedded world in general, have been pushing for reproducible builds for a long time, and we're really glad that people like Red Hat and Debian have started pushing too, because it helps a lot.

It's very easy to build portable services and light containers, because again, you can have everything from scratch, and all the dependencies are managed. You don't even need to know what your dependencies are; it will just build them.

It's easy to debug and hard to debug. It's easy to debug because you have everything available: all the sources, all the debug symbols; it keeps all the build directories, so all the build artifacts are there. So in a way, it's easy to debug. It's hard to debug because you don't have a shell. Well, usually you have a shell, but if you start removing every tool from your image, you won't have any debugging tools, and sometimes it's very disturbing to discover that nothing works. So you have to know about remote debugging, or you have to tweak your image to have a shell to debug; there's all sorts of stuff which makes it a bit tricky.

You need to understand Linux. That's maybe surprising, but there's all sorts of stuff in Linux nobody knows about, so it can be frightening: PAM, for example, or TTY management, all those sorts of subjects. Your distro does a great job of handling that for you, except that when you're working with really minimal containers, you won't have that, so you will have to tweak some areas where nobody is very comfortable. So know about it.

And build time? Yes and no. Build times on modern machines are very short, especially when you already have everything downloaded; of course it doesn't download things multiple times, it just downloads them the first time. So build time might be an issue; it's for you to measure.

That's the big idea with Buildroot: it's easy, it traces everything, and you have all the source code available.
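For the archival side specifically, a quick sketch of the make targets involved; treat the exact output layout as an assumption and check the Buildroot manual.

```sh
make source             # pre-download and checksum every source archive the build needs
make legal-info         # collect licenses, source archives and a compliance manifest
ls output/legal-info/   # e.g. licenses/, sources/, manifest.csv
```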
Thank you. So, any questions?

I've also been working in the embedded world, more specifically with medical devices. The development flow was a bit different, because you wouldn't want to flash the device each time you made a change to the code; you would just have an NFS mount of the filesystem, and then maybe a staging folder where you install your artifacts before syncing them over. What is your development flow using Buildroot? Do you need to flash each time?

Buildroot is just about making the files that will go on the target, so it's really the step before. In this example I've only generated tarballs, but you can generate filesystem images too. When I use it with a remote root on NFS, like you do, I usually have some scripts, in this case they need to be setuid, that just untar the tarball into whatever directory is mounted remotely; you reboot the device and you're good. It's a process you have to set up, but Buildroot doesn't care about that. One caveat: back then we weren't able to use fakeroot for generating the tarballs, we used sudo tar at the very end, because we had some binaries with capabilities set, like CAP_NET_RAW. I think newer versions of fakeroot handle capabilities, but do retest; don't take my word for it. OK, thanks.

A follow-up question on that: Yocto, I think, gives you the ability to wrap the resulting artifacts in debs and RPMs, so you have a way to serve the image other than flashing, with individual package upgrade operations. Any plans for that in Buildroot?

No. That's pretty much the opposite of the Buildroot philosophy. Buildroot is Linux as a firmware. Yocto builds you a complete distribution, which includes images to install that distribution, but it's a distribution before being an image generation system. They're really opposite, philosophically, so no, it would be against what Buildroot is trying to do. If you want packages, you should go with Yocto; it's the right tool for that job. On the other hand, I can explain Buildroot in 20 minutes with a couple of slides; I cannot do that with Yocto.

Nowadays containers run on multiple platforms; you can run a container on your Raspberry Pi or any other ARM platform. But if you want to build an ARM container with Docker, the FROM image will itself be an ARM image, so my understanding is that you have to build on an ARM machine; I never built one. So my question is: can Buildroot, sitting on an x86 machine, use its cross-compiler and build for another target?

Well, there are multiple answers. The underlying question is: why cross-compile when you can compile directly on the ARM system? The problem is that it's slow hardware, so it takes more time. That's one of the reasons; well, you have server ARMs and you have embedded ARMs, and the embedded ARMs nowadays could compile that sort of stuff, but it would take hours.

So what I'm asking is: sitting on an x86 architecture, can I use Buildroot to build for ARM?

Yes.
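As a sketch of what that answer means in practice, using one of the board defconfigs that ship with Buildroot (raspberrypi3_defconfig is one example):

```sh
# on the x86-64 build host
make raspberrypi3_defconfig   # select an ARM target configuration
make                          # cross-compiles toolchain, packages, images
ls output/images/             # cross-built kernel, DTB, root filesystem
```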
And does Buildroot support some kind of packaging for OCI-compliant images?

It does not handle OCI images, no. It would be rather easy to add; it's just that nobody did it, as far as I know. Basically, I think you can just take the tarball and put it in the container, but again, OCI and containers, I don't know them well enough. So if you can answer that, it's a good question for me.

I think that's how the people doing this with Yocto do it.

OK. Then you would build the light container like we just described, so you only get the stuff you need, and you unpack that using Docker or whatever you use. You can probably repackage the root filesystem into Docker; I don't know enough about Docker to do it. I do it with nspawn, because that's the one I know, but it's the only one I know.

Just a question about the licensing and the version traceability: do you check your whole Buildroot repository, with all the settings, into git, and that's it? How do you do it?

There are various philosophies, depending on who you ask. The way we do it: we have a git repository where we have Buildroot as a submodule. We work upstream for packages that are in Buildroot, and everything that is specific to us, and that we think cannot be upstreamed, we keep in that separate git repository, using submodules and some Makefile tricks. So our repository has all the overlays and all the scripts that are specific to our build, and in the Buildroot submodule we have everything that is upstream, or that needs to go upstream. Everything is checksummed by the git commit of the outer repository, which includes the git commit of Buildroot, which includes all the files in Buildroot, which are all the recipes, and each recipe contains a checksum for the sources it downloads. So you have recursive checking of all the hashes. I have not checked the complete security model, but there is no trivial way to inject something: you would have to inject it into the project, and then you would have to trick all the automated build systems the project uses for checking, which are spread around the world (I don't know where they are), into missing whatever checksum you've put in there, and so on, recursively. It seems kind of solid to me, but I'm not a security expert, so I can't say much more than that.

Last question. Not sure it's the right question, but can you also use statically linked binaries to reduce the size of the image, rather than the light container?

Yes. Buildroot has, somewhere in the compilation options, a switch to build statically linked binaries. Caveat: most software is not tested when statically linked, so sometimes it breaks. But Buildroot knows how to do it.

All right. Thank you very much, Jeremy!