Hello, everybody. I'm giving a talk about putting support for these nice little gadgets into the Build Service. My name is Martin Mohring, and I'm one of the external contributors to the openSUSE Build Service. First, a word about the barriers to joining such a project. I come from a small company that has worked with embedded systems for years, and I have known the Nuremberg R&D people from SUSE for a long time. About two years ago they decided they wanted to design a new build service system, and that was my chance to say: we also had a build service in place for embedded applications, so let's join forces, redesign it together, and merge our work in. The first barrier was that I was dealing with a lot of experienced people who had released many distributions over the years, so it was not a typical community project in that sense; there were entry barriers in terms of the know-how you had to bring. The second was that it started as a company open source project, which is something different from a community-driven open source project. And the third was that we both wanted to re-engineer existing things, improve them, and put together a new generation of this kind of system that had to solve new requirements. How did I join, then? With a pragmatic approach, as in many other open source projects: fill a gap, take on work that nobody else is doing, and gain credibility so that there is trust in what you can achieve. You learn what the others do and what they can achieve, you get a feeling for each other, and you become a team. The result is that I have now been the maintainer of the OBS development package for over a year, and we successfully merged all kinds of cross-build support into the project. So much for the social aspect of team building. Okay, let's start.
What kinds of cross-development systems exist? I have examples for each so that you know what I mean, not only by theory and category. We started by experimenting with what I call a type one cross-build environment: you put together something like BusyBox or Buildroot and build your complete system in one bunch. That was our first experiment, but I will explain later why it is not a good solution. The next variant, which was already successful and used in the field, was to implement a toolchain and modified packages inside the system and map that build environment onto the capabilities of the Build Service. That was the first practical approach where we had a real result, a real distro build in the end; that is what I call type two. I have two examples here: STLinux and OpenEmbedded. They modify packages and write their own build descriptions for every package. This was a first approach to get something working fast, but it has the disadvantage of a lot of work. That is acceptable for 400 packages, the usual size of an embedded distribution, but if you want to achieve something like openSUSE Factory with 4,000 packages, you need a lot of resources: you have to reckon with at least one day of fiddling per package, and multiplied by 4,000 you can easily calculate what that means. So that was no way to bring openSUSE and other existing systems in. Then there is what I call type three: use the original sources somehow, without an emulator, with as few modifications to the packages as possible. And what turned out to be the fastest method to cope with existing distributions is what I call type four: cross-building with an emulator.
Packages typically do things that are bad for cross-building, like building an executable and trying whether it runs. If you are on an x86 processor and the binary is built for ARM, that fails. The same goes for running test suites. And there are already lots of Linux distributions around that are more or less natively compiled, even on ARM and similar embedded processors, and we wanted to cope with them. So we implemented what I call type four: cross-build with an emulator. Let me summarize the requirements we had to solve before we started implementing. The cross-build service should be a feature where, just by adding a repository, you can build your application for a new distribution. It is what I call an orthogonal feature: one dimension is the processor architecture, and the other dimension is which distribution and release you build for. Our goal was to keep that approach, so that the user does not have to care about the internals of cross-building. The next thing is that we had to cope with existing distributions; it was not acceptable to rebuild everything from source. As with the current Build Service, we wanted to keep the paradigm that you can reuse existing binary distributions, be it Fedora, Ubuntu, or whatever. We had that for PowerPC and x86, but we wanted it also for architectures that our workers do not run natively. And something more internal: the execution path of the existing Build Service should not be disrupted by the implementation. That may not be of much interest to an outsider, but for the 12,000 users of the Build Service it was important not to disrupt the current service.
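The orthogonality described here can be pictured as a simple cartesian product: every (distribution, architecture) pair becomes an independent build target, and adding one repository or one architecture multiplies out against the other dimension. A minimal sketch of that idea (the names are illustrative, not the actual OBS scheduler code):

```python
from itertools import product

def build_targets(repositories, architectures):
    """Expand the two independent dimensions into concrete build targets."""
    return [f"{repo}/{arch}" for repo, arch in product(repositories, architectures)]

# illustrative repository and architecture names
repos = ["openSUSE_Factory", "Fedora_10", "Debian_Lenny"]
arches = ["i586", "x86_64", "armv5el"]

targets = build_targets(repos, arches)
# one independent build target per pair, e.g. "Debian_Lenny/armv5el"
```

Adding cross-build support then means adding new entries on the architecture axis without the user having to treat them specially.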
Another thing: as Adrian already told you, we have a means of putting load onto the server and distributing the work to our workers with the scheduler. We have a big setup currently, I think 250 nodes, something like that. The work should be distributed so that big loads of package builds can be handled, and we wanted to keep that. Users should also still be able to use the local build feature, so that had to be enhanced as well. Then, as a developer without a big disk array at hand to store 35 Linux distributions, I had the problem of how to test all this. The normal embedded developer does not buy a server with 20 terabytes of disk space and 60 nodes to do embedded development, so we had to work a bit on scalability. Keep in mind that a big distribution like openSUSE or Fedora needs 20 gigabytes of disk space per architecture. On the other hand we wanted compatibility, so we were forced to implement a way to download distributions on demand. We already have a feature in the system to couple build services to each other, but that was not usable here, because those 30 distributions were not in the main system. So we decided to implement a form of on-demand download of Linux distributions that are stored in FTP trees, and to pick up the metadata of a distribution from the original FTP tree as well. We called that feature download on demand: Debian or RPM packages are downloaded on demand, and the metadata is parsed when you create a new project from these FTP trees. Currently we support the three big metadata systems in this area. Okay, virtualization. This is a new form of virtualization for us: currently our workers use Xen, and for foreign processors I put QEMU on top of that. It is a mixed form of virtualization, used for maximum compatibility.
We also experimented with full system emulation, but that was considered too slow; you wait endlessly for a system to be set up inside the emulated machine. So we had to make a trade-off between compatibility and performance, and I ended up using user-mode emulation in the cross-build system. Okay, download on demand. Everybody who has used the Build Service locally has faced this problem: you want some distribution to build against and have to figure out how to bring all those DVDs into your system and onto your disk. Adrian has to cope with that every day, because people want more and more build targets. And I wanted to make progress with development, not with getting bigger internet pipes to download every distribution I could take care of. So we had to implement an on-demand system that does this without the developer having to care about it, working directly from the original FTP tree. Download on demand caches only the needed packages. Depending on your workload you can end up with the full 20 gigabytes downloaded, but you have to build against a lot of packages to reach that. On average, if you build some hundred packages, you only need up to about 500 packages per distribution; that is the usual subset you touch when you build an X application, or something for GTK, KDE, whatever. It's not that much. Many packages in a distribution are never used when you build against it, because they are leaf packages: only a user who wants to run them needs them, not the build. That's a good thing; otherwise what we designed would have been useless. And we implemented the three metadata systems that currently exist for distributions. The first is the Debian metadata format, which is used for all Debian and Ubuntu distributions.
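The download-on-demand idea for Debian-style repositories boils down to two steps: parse the `Packages` index once when the project is created, then fetch and cache individual `.deb` files only when a build actually needs them. A minimal sketch, assuming a plain HTTP/FTP mirror layout; the class and cache path are my illustration, not the actual OBS backend code:

```python
import os
import urllib.request

def parse_packages_index(text):
    """Parse a Debian 'Packages' index into {package name: relative .deb path}."""
    packages = {}
    name = None
    for line in text.splitlines():
        if line.startswith("Package:"):
            name = line.split(":", 1)[1].strip()
        elif line.startswith("Filename:") and name:
            packages[name] = line.split(":", 1)[1].strip()
    return packages

class DodCache:
    """Fetch packages lazily from a mirror and keep them in a local cache."""

    def __init__(self, mirror, index_text, cache_dir="/var/cache/dod"):
        self.mirror = mirror.rstrip("/")
        self.index = parse_packages_index(index_text)
        self.cache_dir = cache_dir

    def fetch(self, name):
        path = os.path.join(self.cache_dir, os.path.basename(self.index[name]))
        if not os.path.exists(path):  # download only on first use
            os.makedirs(self.cache_dir, exist_ok=True)
            urllib.request.urlretrieve(f"{self.mirror}/{self.index[name]}", path)
        return path
```

Because only the dependency closure of what you actually build is ever fetched, the cache stays far below the full 20 GB of a distribution tree.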
The second is rpm-md metadata, which SUSE used until recently, which Fedora uses, and which is the usual format for RPM-based distributions. And to work around a temporary problem, we also implemented the old SUSE YaST metadata format in addition. So with download on demand we can now handle any RPM- or Debian-based distribution, also for cross-build. As I said, it should be fire and forget, not "which version does this package have, I missed it, where do I get it?". I will also tell you something about the implementation and what we had to change in the system. First, a little overview slide you may have already seen in Adrian's talk. What you see here are the active components in the Build Service source code, the components that run when you set up a system on the server side; let me explain where my implementation sits. The Build Service backend is composed of several servers that take care of your package base, the scheduling, the dispatching of jobs, the jobs themselves, and generating the build results in the end, so that you can use them with your package manager and cascade them. The source server is one component we did not even touch. It is responsible for the work when you check a package into the Build Service: it handles source revisions and does what was explained in the previous talk in this track about branching and so on. There were no changes needed there to implement cross-build, or at least nothing worth noting. The repository server is what delivers your packages when you start a build; it calculates dependencies so that a package build knows where to get its dependencies from.
In that area we had to change things, especially to implement the download-on-demand service. The dispatcher was changed to handle the new architectures, there are new schedulers for the new architectures, and the workers now have to take care that emulation is started when an ARM package needs to be built or run on a worker. I won't go into more detail here because it needs too much internal know-how; ask me if you have questions, or start with Michael Schröder's talk on this area as a starter. If you want to know how this was implemented in detail: it is mostly the backend that we had to change. In the web client and so on, the only changes are things like making the new architectures known. Testing results. We wanted maximum compatibility, so I put together a large testing base, at the moment mostly for ARM, because QEMU is currently not in shape to run this completely for all architectures; help is welcome to change that. For PowerPC, which is used in embedded space, we have a faster solution anyway. We needed a starting point, and since ARM is widely used and the emulator runs well, we started with ARM. So, testing results: we have Debian, Ubuntu, Fedora, and even Maemo was put in. As an example of a type two build, we have also implemented the old STLinux distribution: working, running, building, everything. Debian means Etch as well as Lenny; Ubuntu means all the ports that exist for ARM; and for PowerPC, Fedora likewise. For Maemo I think there are only two versions, one for x86 and one for ARM. My colleague here managed to build packages for the Nokia N810 and to get csync running on his calculator. That is also implemented; it was only a test case and needed two days to implement. We just put the packages in and it worked.
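The talk does not show how a worker hands ARM binaries to the emulator, but user-mode QEMU is typically wired up through the kernel's binfmt_misc mechanism: the kernel matches the ELF header of a foreign binary and transparently runs it through an interpreter. A small sketch of composing such a registration entry for `qemu-arm`; the interpreter path and the registration step are assumptions about a typical setup, not the actual OBS worker code:

```python
def binfmt_register_line(name, magic, mask, interpreter):
    """Compose a binfmt_misc entry of the form ':name:M:offset:magic:mask:interpreter:flags'."""
    esc = lambda data: "".join(f"\\x{b:02x}" for b in data)
    return f":{name}:M::{esc(magic)}:{esc(mask)}:{interpreter}:"

# ELF header of a 32-bit little-endian ARM executable: e_machine == 40 (EM_ARM)
ARM_MAGIC = bytes([0x7F, ord("E"), ord("L"), ord("F"),
                   1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0,
                   2, 0, 0x28, 0])
# mask out the OS/ABI byte and part of e_type so shared objects also match
ARM_MASK = bytes([0xFF] * 7 + [0x00] + [0xFF] * 8 + [0xFE, 0xFF, 0xFF, 0xFF])

line = binfmt_register_line("qemu-arm", ARM_MAGIC, ARM_MASK, "/usr/bin/qemu-arm")
# writing 'line' to /proc/sys/fs/binfmt_misc/register (as root) makes the
# kernel invoke qemu-arm for every ARM ELF binary executed in the chroot
```

With such an entry in place, a build chroot full of ARM binaries behaves like a native one to the package build scripts, which is exactly what the type four approach relies on.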
For ARM processors we implemented all the processor levels you need to run the different types of ARM cores. There was a change in the ABI of Linux on ARM in between, which we also had to take care of: the so-called OABI, the old ABI used in Linux in earlier times, for example in Debian Etch. Today there is a new ABI, the EABI, which properly supports multi-threading and multi-processing on ARM. To handle the newer ARM cores as well, we implemented a way for the emulator to distinguish between them automatically, so that you can mix and match all the packages. The newest is, I think, ARMv7; those cores have a vector unit, which means floating point and multi-processing capability, and that is implemented. To check that it really works, we simply installed a Linux distribution on an ARM board and tried out what we had compiled. The next step is what Zonker already told you about, if you were on the main track at 14:00: we have now started building openSUSE for ARM with the Build Service. That is the natural choice, since openSUSE is already built with the Build Service on PowerPC and x86, and we want the same for ARM. So we started bootstrapping openSUSE as a little test to see if it works, and that test succeeded: we have the base set of a bootstrapped openSUSE Factory running. Now the roadmap. We want to put this into the public openSUSE Build Service as fast as possible, that's for sure, so that every one of you can use it and build ARM packages. I'm curious what Adrian's reaction to this will be; let's discuss it in the questions. Download on demand is a little error-prone at the moment, so you have to cross-check all the time to avoid mistakes, and we want to improve its user-friendliness. And you can of course use it not only for cross-build.
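The processor levels mentioned here form a simple compatibility ladder: within the same ABI, a core of a given level can run packages built for that level or lower, but an OABI package never mixes with an EABI system. A small sketch of such a scheduler-side check; the architecture names armv4l, armv5el, armv7el are the kind of names the Build Service uses, but treat the exact mapping as illustrative:

```python
# ABI and instruction-set level per scheduler architecture (illustrative mapping)
ARCH_LEVELS = {
    "armv4l":  ("oabi", 4),   # old ABI, no floating point
    "armv5el": ("eabi", 5),   # EABI, up to the ARMv5 instruction set
    "armv7el": ("eabi", 7),   # EABI, ARMv7 with vector unit
}

def can_run(host_arch, package_arch):
    """A host core runs a package if the ABI matches and its level covers the package's."""
    host_abi, host_level = ARCH_LEVELS[host_arch]
    pkg_abi, pkg_level = ARCH_LEVELS[package_arch]
    return host_abi == pkg_abi and host_level >= pkg_level
```

This is why an ARMv7 worker or device can consume armv5el packages, while Debian Etch packages (OABI) stay in their own world.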
So if you have a smaller build service and don't want big copies of all the Linux distributions, you can also use this for x86. Then optimizations: emulation is sometimes quite slow, and since we also implemented cross-compilation, our next step will be to optimize compile times by combining the two. And setting up an ARM version of openSUSE: I already said in the status where we are on this. Maybe, if all goes well, it is achievable that 11.2 could run on ARM, but that needs confirmation; we are discussing at the moment whether it is achievable and whether it will be achieved. Question: you didn't mention building the ARM openSUSE on ARM processors themselves as a type. Answer: okay, yeah, I didn't mention it. It works. But the usual mobile phone is not powerful enough to build OpenOffice with it; instead of 60 PC nodes you would maybe need a couple of hundred boards with ARM processors. We were discussing that. It's more a question of the memory needed to compile than of processor power, and it depends on the packages: I think Factory needs one gigabyte of memory to compile all the packages, though we could drop the packages that need that much. Adrian: I think once we have a build running with an emulator or with cross-build, building natively will be a rather trivial task, if you have a capable machine to build on. So it is actually harder to make it work with cross-compiling and with emulation, and it is the more interesting and more daring task to implement, and potentially more useful to lots of people, because there is probably a lot of unused PC processing power lying around. At my home, at least, there is more unused PC power than unused ARM or PowerPC power.
So it is probably a good idea to do it that way in the beginning; of course, if you have lots of ARM boards, you will be able to use them. Okay, roadmap. What happens when we put this into the public service is that lots of tiny little issues pop up: things that work 100 times, but when you do them 20,000 times, they fail once. These tiny bugs only surface when you run things broadly in the big service, so I expect some work here, as always. We had the same experience with virtualization; it took a while until it was really stable, and I expect the same here: work to fix the emulation in cases we could not even think of yet. The next thing is non-ARM architectures. As I said, for PowerPC there was not that much pressure to implement cross-build, but there may be for other embedded targets, and we need that. I have discussed this with the QEMU people already; we need some help in this area to improve the situation, but that is mostly a QEMU issue at the moment. And imaging and such things should be made more suitable for embedded use. At the moment image generation produces DVDs, bootable USB sticks, and the like, while embedded developers think of image types used in embedded areas: directly generating some form of root file system for flash storage or whatever. That is not a big issue; it should work already, it is more a matter of fiddling with KIWI. And in the end, what do we want to achieve? To assimilate, with the Build Service, all these tiny little computers you have in your pockets, with ARM cores and others. Consider that something like 1.2 billion ARM cores are sold per year. At the moment those are mostly mobile phones, and most of them are not running Linux, but that is changing drastically right now: from ARMv5 on there is no problem at all running Linux.
So there will be a huge demand for this new type of service, also in a build service. The Google phone, for example, is one of these devices with Linux inside; the Nokia is another, and I expect many others. There is a steep curve of improvement in horsepower in this area: at the moment we have embedded systems as big as my thumb that can decode HD video streams, have a 3D engine, and reach 1 GHz. That is, in principle, a PC. What do we do with that? That is what I mean by lots of embedded devices to assimilate, and here the Build Service comes to mind, right? Okay, questions. Andrew? Question: if you can compile on the ARM processor... Can you please close the door? I don't understand you. Question: once you compile for an ARM processor, can the result then be moved to any ARM processor, or only to a specific processor class, regardless of ARMv5 or ARMv7? Answer: there is already a kind of ABI in place for ARM, so when you compile something, it works on all ARM cores of that class. It's the same situation as with x86. Question: if you build a Linux kernel for ARM, you normally build it for a specific chip, because a lot of peripherals are integrated. How are you going to support the kernel? Will you only support the file system, or also the kernel? Answer: no, we will also provide kernels, let's say some generic types of kernels for the devices. That is no problem; we do that for other architectures too, though maybe not in such big numbers, because these devices differ a lot more. Comment: they differ a lot more. I work on that, and I think you would need to build maybe 10 different kernels. Answer: yeah, but that's no problem.
We have 50,000 packages inside the Build Service, so if we need 100 kernels for it, it is more a question of how to handle that in the process, and of who contributes and maintains them. We could solve it by having the silicon maker provide a kernel for its class of silicon. We should try to combine things and not make more kernel variants than needed, but in principle I would opt for kernels being provided by those who know their chip. Question: I think what you need to do then is specify some configurations the kernel needs to be built with. Answer: right, there need to be recommendations. But that will be a new frontier for our kernel team: to integrate that from a single-source kernel package with, let's say, 100 variants built. Okay, thanks. Comment: I just want to make one comment. ARM is an embedded platform, and I would personally expect that plenty of companies may want to re-roll the ARM distribution for their needs. So it is important that we have ARM in the openSUSE Build Service to show that it builds for ARM and works in general. But with the Build Service you can then easily re-roll, recompile the entire distribution with a different compiler flag, for example, to support your particular ARM chip. Question: that is similar to the question I had. If the door is open I don't understand you. For example, on embedded PowerPC there are at least a few different processors with slightly different compiler or machine options, and they are not all compatible; they are usually compatible in one direction, but not the other way around.
So can I handle this somehow without forking a complete new distribution, by saying: I want a PowerPC 405 GCC and a PowerPC 823 GCC, and specifying which one to use? Because basically, for the rest of the user space, I only need a different kernel and a different compiler, and then I recompile everything with that compiler. Is something like this easily possible? Answer: I would solve it with the same methods we use for x86 at the moment: we optimize for a certain class of processor target with specific compiler flags, but we don't necessarily use the full instruction set. I have already implemented at least the basic instruction levels as different schedulers, like x86 64-bit versus 32-bit, so that you can say: I want to run this on an ARM core capable of level X. We are at ARMv7 at the moment, and for Linux everything from v4 to v7 counts, so we have implemented three classes: basically the old ABI with no floating point, then everything up to the ARMv5 instruction set, and then everything up to the newest, ARMv7 with vector unit and everything. So I would usually build for one of those targets, pre-optimized. It is also a question of what you want to achieve by compiling with special compiler flags: we have benchmarked that, and generic flags usually helped more in improving speed than compiling for a specific processor. Comment: I think the question was not about recompiling the entire distribution, but about replacing a package, for example. Answer: that is something we can do at the moment, but not nicely; there will be a nicer way in the future to simply say: recompile openSUSE Factory, but use this compiler. Was there another question? I can give an example for that: with glibc we could provide five versions of glibc, which means you don't need to recompile 3,500 packages five times, but it already optimizes nicely.
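The per-class optimization described here boils down to picking one set of GCC flags per scheduler target instead of per chip, with room for a project to append its own. A small sketch of such a mapping; the flag choices are plausible for these ARM levels, but they are my illustration, not the flags the Build Service actually used:

```python
# illustrative per-class GCC flags, one entry per scheduler target
TARGET_CFLAGS = {
    "armv4l":  "-march=armv4 -msoft-float",        # old ABI, no FPU assumed
    "armv5el": "-march=armv5te -mfloat-abi=soft",  # EABI, ARMv5 instruction set
    "armv7el": "-march=armv7-a -mfpu=neon",        # EABI, vector unit available
}

def cflags_for(target, extra=""):
    """Look up the generic per-class flags; a project may append its own overrides."""
    return (TARGET_CFLAGS[target] + " " + extra).strip()
```

A project-level recompile with a different compiler then reduces to swapping the `extra` flags (or the toolchain itself) without forking the distribution.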
I think we have one last question; Jürgen says time is running out. Question: I saw the list of all the supported platforms. Will the Openmoko FreeRunner, the Neo FreeRunner, also be supported on the ARM platform? Answer: we have the SDK for Openmoko, and we have Openmoko, like Maemo, implemented and running. Question: is that Debian-based? Answer: there are a lot of distributions for it, and as far as I have seen they are Debian-based. If it's Debian-based, it should work; Debian packaging we can handle. Okay, then we close the talk here. Thank you, Martin.