Hello everybody. Welcome. My name is Mark Corbin, and I'm going to talk to you today about my work using Buildroot to create embedded Linux systems for 64-bit RISC-V.

I'll start off with a little bit about myself. I work for Embecosm as embedded operating systems lead. I've spent all of my career working in embedded systems, particularly very low-level, resource-constrained, real-time systems. The majority of that time I've actually worked on intelligent transportation systems: roadside tolling equipment, vehicle monitoring, number plate recognition cameras, and developing operating systems for bespoke hardware to support those applications. I've been developing on Linux as a development system and rolling embedded Linux distributions since 1996, and I'm currently the RISC-V maintainer for the Buildroot project.

Today's presentation is a very high-level one. I wanted to present Buildroot as a quick-to-market way of evaluating a RISC-V system and getting an embedded Linux distribution up and running. I'm going to tell you a little bit about RISC-V briefly, because I don't like to presume that everybody here is an expert; some people might just come along to see what it's all about. Then a quick comparison with the other popular build system for embedded Linux, Yocto, and then I'll talk you through the process I went through of adding RISC-V support to Buildroot, and finally a worked example to show you how you can get up and running and build your own embedded Linux system for RISC-V.

A little bit about Buildroot now. The description from the Buildroot website really sums it up: it's a simple, efficient and easy-to-use tool to generate embedded Linux systems. It builds everything you need in one complete package: a cross-toolchain you can use for an embedded build environment, bootloaders, a kernel, and a root filesystem image.
It supports a wide range of architectures, as you can see I've listed, and specific boards within those architectures have default configuration definitions that you can use. If you're interested in more detail on Buildroot, you can visit the Buildroot website at the bottom, or you can find various material on the internet, particularly from the Bootlin website, which goes into greater detail about the inner workings of Buildroot.

A quick overview of RISC-V, just to summarise the main features and to point out that Embecosm is one of the member organisations of the RISC-V Foundation. You can find particularly good information about RISC-V, especially the event proceedings, on the RISC-V Foundation website; they give you a good overview of the RISC-V architecture.

OK, a comparison between Buildroot and Yocto. I've worked with both systems and they really do offer different things. Buildroot is particularly focused on speed and simplicity: it builds sort of cut-down systems and is quick and easy to use, and therefore it's quite easy to understand, work with, and expand as you need. Buildroot builds you a root filesystem image, so that's a binary image that you can burn to an SD card or put directly onto your board. Yocto, by contrast, is very, very flexible and customisable, which makes it quite difficult to get to grips with for a beginner, in terms of the number of configuration files and settings that you can use. A quick search on the internet suggests that there are about 2,300 packages available for Buildroot, compared with about 8,000 for Yocto. Yocto actually builds a package feed rather than a filesystem image. You can go on to create a filesystem image, but typically it generates RPMs or Debian packages, which you can then deploy with a package manager on your target system.
The other thing to note is that Buildroot is self-contained in terms of the features it supports, so you download Buildroot and you get everything together, whereas Yocto follows a layers model, where you can expand what you have on your system by checking out individual layers for different hardware or different applications that you might need.

OK, adding RISC-V support. The goals for adding RISC-V support were, as I said: at Embecosm we've been doing a lot of work in the RISC-V software ecosystem, and it struck me as a good way to get something up and running as an in-house tool, really, to quickly put together and evaluate embedded Linux on various systems. It also helped that nobody had actually added RISC-V support to Buildroot officially yet, so it was an available project to work with. The choice of 64-bit, as we'll see, was really made by what had already been ported for RISC-V. One of the other goals was to work towards bringing some of the bespoke RISC-V repositories into more standardised packages. So, for example, rather than having to go and pull RISC-V tools from different repositories and perhaps put together a system by hand from the various components, it's a nice idea to wrap up the whole process into Buildroot, so that you only have one place to go. One of the priorities as well, not out of laziness, was to minimise the work in terms of customising Buildroot or having to add any special features just to cater for RISC-V. The idea was to use as much upstream software as possible and customise Buildroot as little as needed.

The choices at the time — this was back in August last year. The most obvious choice to begin with was the target: QEMU was the initial target. I do now have a SiFive HiFive Unleashed board, but at the time QEMU was the best route to get something up and running to allow people to evaluate and test.
As you can see, RISC-V support has been in QEMU since version 2.12, so it's upstream and stable. Toolchains: the toolchain was good to go. GCC has had RISC-V support since GCC 7.1, and I did find that it needed binutils 2.30 or later to be able to build a bootable kernel. C library: Buildroot is designed to offer you some flexibility in your choice of C library, so that you can use lighter-weight C libraries such as uClibc or musl, but at the time only glibc had upstream RISC-V support, and that was for 64-bit only. Then I looked at the bootloader. Now, I would have preferred to add U-Boot support, but at the time, without a hardware target, it didn't make sense to do that, so the choice was made to go with the Berkeley Boot Loader (BBL), which is the RISC-V bootloader, and it wasn't too much work to add that as a bootloader package to Buildroot. Finally, the kernel. There was RISC-V support in the mainline kernel back in August, at 4.15, but it wasn't able to boot under QEMU, so I've been using the riscv-linux Git repository and its 4.15 branch. That has actually just recently been bumped, using the same repository, but it's now on kernel 4.19.

OK, an overview. What do you need to do to put together an embedded Linux system with Buildroot? It's pretty straightforward. The steps, in overview: get yourself a copy of the source, either by cloning the Git repository or by downloading one of the stable release tarballs. Configuring Buildroot is very straightforward: it uses the Kconfig system, so if you've configured a Linux kernel before, it will look very familiar to you. You can either run the configuration manually, or you can use one of the predefined default configurations; typically you'd run make and specify the name of the defconfig set up for your target hardware. Building is just as simple as running make.
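The steps just described can be sketched as a short shell session. This is an illustrative sketch, not taken from the talk itself: the defconfig name `qemu_riscv64_virt_defconfig` is an assumption based on the QEMU virt target in Buildroot, so check `make list-defconfigs` in your checkout for the exact name:

```shell
# Fetch Buildroot (a stable release tarball works just as well)
git clone https://git.buildroot.org/buildroot
cd buildroot

# List the available default configurations and look for RISC-V ones
make list-defconfigs | grep -i riscv

# Select the predefined configuration for the QEMU RISC-V 64-bit target
# (name assumed; adjust to whatever list-defconfigs actually shows)
make qemu_riscv64_virt_defconfig

# Optionally fine-tune the configuration via the Kconfig menus
make menuconfig

# Build everything: toolchain, bootloader, kernel and root filesystem.
# Results end up in output/images/.
make
```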
And then, once the process is complete, you'll find your files ready to deploy in the output/images directory. That can be a tarball or an image, a kernel, a bootloader, depending on what's been specified for your target. The final stage is to test your deployment. In my case this was tested with QEMU, but at that stage you might instead copy your files to an SD card or memory device for your target board and then run it.

OK, we'll start with the first step and get hold of the source code. Straightforward, as shown there: just clone the Git repository. On my machine it took less than 30 seconds to check out, and on disk the total size of Buildroot was 136 megabytes of code. As a parallel process, I actually ran the same exercise through with the Yocto RISC-V meta layer, and initially the sizes for the amount of code were similar.

Configuring. This is what you get if you run make menuconfig once you've checked out Buildroot. It's very straightforward: menus, up and down, you select options. On the left you have the top-level menu showing you the high-level choices of toolchains, kernels, packages, bootloaders, etc. The screenshot on the right shows what you get if you select target options, which gives you some choices for RISC-V. You can actually select the different architecture extensions for RISC-V — the screenshot is showing the general-purpose set — but if you want to individually specify that you've got atomics, floating point, multiplication, etc., you can specify those.

The next stage, building a system, is just as simple as running make at the command line. On my system we're talking 22 minutes from start to completed build. The kernel came out at 6.5 megabytes with the BBL wrapper around it, and the minimal root filesystem was 3.9 megabytes. The figures I had for Yocto were marginally larger, but comparable.
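As a rough illustration of what those target options boil down to in the saved `.config`, a 64-bit general-purpose selection looks something like the fragment below. The `BR2_*` symbol names are drawn from Buildroot's RISC-V architecture configuration as I understand it and may differ slightly between versions, so treat this as a sketch rather than something to paste in verbatim:

```
BR2_riscv=y
# 64-bit target with the general-purpose "G" extension set
BR2_RISCV_64=y
BR2_riscv_g=y
# Or pick individual extensions instead of "G", for example:
#   BR2_RISCV_ISA_RVM=y   (integer multiplication/division)
#   BR2_RISCV_ISA_RVA=y   (atomics)
#   BR2_RISCV_ISA_RVF=y   (single-precision floating point)
#   BR2_RISCV_ISA_RVD=y   (double-precision floating point)
#   BR2_RISCV_ISA_RVC=y   (compressed instructions)
BR2_RISCV_ABI_LP64D=y
```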
And once it had finished downloading and doing a full build on my system, there was about 2.9 gigabytes of space used on disk, whereas the figure I had for Yocto was about ten times that — about 30 gigabytes after I'd run a Yocto build, to give you an idea.

OK, next stage: on to testing. The incantation at the top is what I ran QEMU with. You can see we're running the QEMU virtual machine, passing it a kernel by specifying the BBL image file, and in this case passing it a separate root filesystem image as a drive. And you can see there that it boots right down to a Buildroot login prompt.

A quick run-through of ongoing tasks and things that I need to look at in the future. Firstly, I've got work to do on 32-bit support. 32-bit support patches have been accepted into the Buildroot repository recently, and it currently builds 32-bit for QEMU, but some work needs to be done on the version of glibc currently used for building other packages. The screenshot you can see on the right is the output you get from the Buildroot autobuilder test system. It runs continuous builds and produces a report of targets, packages and issues, and you can go and look down at the build and configuration logs. As part of the ongoing process, I need to work down through those, now for both RISC-V 64-bit and RISC-V 32-bit.

The next thing to look at, as I said, is trying to migrate as many features as possible to upstream versions. So I really need to start looking at whether I can migrate to the mainline kernel, and also sort out 32-bit glibc, so that I can take the custom repositories out of the Buildroot configuration and get everything from upstream sources. There are other features that I'd like to look at adding to Buildroot, especially as the RISC-V ecosystem evolves.
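For reference, the QEMU incantation from the testing step earlier looks roughly like the following. This is a reconstruction based on the standard Buildroot readme for the QEMU RISC-V virt machine rather than a copy of the slide, so the exact file names under `output/images/` may vary:

```shell
qemu-system-riscv64 \
    -nographic \
    -machine virt \
    -kernel output/images/bbl \
    -append "root=/dev/vda ro console=ttyS0" \
    -drive file=output/images/rootfs.ext2,format=raw,id=hd0 \
    -device virtio-blk-device,drive=hd0
```

The kernel argument is the BBL image (which wraps the Linux kernel), and the root filesystem is passed as a separate virtio block device, matching the description in the talk.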
I would particularly like to look at U-Boot, to get to a more standard sort of bootloader and add features to it for particular boards, and also, as software libraries and other C libraries become available, to add those as well and increase the number of options available. One thing that we don't have at the moment, of course, is support for particular development boards — we've only got the default QEMU virtual configuration at this precise time. So, as more boards come onto the market, it would be good to add default hardware configurations for them. There's a page on GitHub where you can go and see an update on the status of the various ports and packages in the RISC-V ecosystem at the moment.

That's been a very quick overview, a very high-level view. I'm happy to take any questions.

[Audience question]

Well, as I've said, at the moment we're doing parallel work for customers on RISC-V targets and systems. So, from my point of view, it represents a good in-house tool to provide a toolchain and a way of testing applications — perhaps I'm not using Buildroot for the customer, but I want to write and cross-compile an application to run and test on their system. Hopefully we will get customers who come along and perhaps want us to add Buildroot support for their particular board and generate a system, but at the moment I see it as an in-house tool, really.

I've had a brief look at the Freedom U SDK; perhaps that could help with these things. There is a version of Buildroot that sits down underneath the Freedom U SDK. I believe it's a couple of years behind current Buildroot, and when you look at the configuration, it is very minimal — you're not able to specify any extensions or options, and I don't think you can specify the ABI. So it could possibly do with some work. Yeah.
[Audience question about choosing between Buildroot and Yocto]

Yeah. I mean, I've used both systems, and in my experience they're two different tools for two different applications. Buildroot, as I said, is perhaps good for a one-man project, for getting a system up and going — perhaps a slimmed-down system. I don't know whether Yocto easily produces, you know, smaller libraries, quick and fast systems. I see Yocto as a more powerful system that you might want for bigger projects. It's partly my background in smaller boards and lower-resource systems: I see Buildroot very much sitting towards that end of the market, and Yocto sitting perhaps with more powerful applications, more flexible systems that need more features. You'll find a lot of talk on the internet comparing the two, and I think they are two different tools for two different things. Does that sort of help?

[Audience] It does help, because I never really spent any time with board support...

Yes, of course, there's sort of corporate backing behind Yocto in terms of board support layers, and some of the members provide that sort of backup for board support packages. But I think it really depends on the type of system you're trying to produce, and maybe the number of people you have working on your project.

[Audience] Among the cousins of Buildroot there's also OpenWrt, which originally came from it, and there was an announcement a few months back of the first release — a pre-alpha release — of OpenWrt for RISC-V. Together with a colleague who just left, I did a Docker one-liner that takes this QEMU wrapper, and when you run it you have a shell and you can install packages, because the problem with Buildroot and Yocto is that — correct me if I'm wrong — you don't have a package manager. You build everything as one...

Yocto has a package manager.

[Audience] A package manager, yeah. But with Buildroot, you don't have one.

No.
You have to recompile again, so there's a little bit less flexibility. I think that's one of the advantages of Yocto: a partial upgrade rather than a full system deploy, so that's another choice there. As I said, coming back to resource limits, the systems I've worked on haven't really ever had the overhead to support a full package manager and database on the target, so I've worked with things that need to be quite slimline in that regard. Anybody else? No? OK, thank you very much.