Thank you everyone. The night wasn't too hard, or too short, whatever. May I introduce Baptiste Daroussin, a FreeBSD core team member and the author of Poudriere and pkg, speaking about cross-building the FreeBSD ports tree.

Right, thank you for attending. So we recently tried to see how complicated it would be to make the ports tree cross-buildable. A lot of people want to be able to cross-build things; we have been talking about it since, I think, the beginning of the ports tree, and we are finally starting to get something that can cross-build. The goal of cross-building is, first, to be able to build packages for new architectures, or for architectures that are not yet fully supported. If we can just start building and patching things so that FreeBSD knows about ARMv6, knows about MIPS64, whatever, we can have a first set of packages, and we need a way to build them. Embedded platforms are attracting more and more interest, and most of them are not very powerful yet, so you may want to build your packages on a fast machine, an amd64 box with a lot of cores and RAM, instead of building, I don't know, LibreOffice on your Raspberry Pi. It is also very useful because a number of packages need a bootstrap to be able to compile the first time. You get a chicken-and-egg problem where you cannot bootstrap because no bootstrap exists yet, as with Java, Haskell, and so on. So you need to be able to cross-build from somewhere to create that first bootstrap, and you do not want to do it in a complicated way.
And the last thing is: if you can cross-build, you can also cross-build for other operating systems. You might want that, for example, to build things for the Linux emulation layer; instead of relying on packages provided by another project like CentOS or SUSE, you can build your own packages for that other operating system. You would still use your own infrastructure to build your own packages, just targeting something else. So it is really important to be able to cross-build. It also lets us make sure the framework is clean enough: that it passes the right variables to the different build systems, that things respect CFLAGS and the compiler you pass in, and that nothing hard-codes the path to its own compiler one way or another. So cross-building is something we really want as quality assurance: if we can cross-build, then our package system is reasonably sane. But it is not something you want in production without testing. You can cross-build things, but you need to make sure you can run them as well; cross-building alone does not mean you actually support them.

We went several different ways to get there. The first way was not to cross-build at all: we decided that maybe we could emulate things instead. We used QEMU's user-mode emulation to say: OK, in this jail I will put only ARMv6 binaries, and we will run them through QEMU, not as full-system emulation but as user-mode emulation, meaning each time a binary is found to be ARMv6, it is run through the emulator; if it is amd64, we run it natively. So that was not cross-building. But the good point of this approach is that we build packages the regular way, so we are testing something that behaves the same way as a build done directly on your Raspberry Pi.
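Mechanically, the per-binary emulation described above is FreeBSD's imgact_binmisc(4) facility driven by binmiscctl(8): you register an interpreter for a given ELF magic, and the kernel transparently prepends it to matching executables. A sketch of registering qemu-arm; the magic/mask byte strings follow the usual qemu-user-static recipe (they match 32-bit little-endian ARM ELF executables) and the interpreter path is hypothetical, so double-check both against the port's own instructions:

```shell
# Load the miscellaneous-binary image activator and teach the kernel that
# ARM ELF executables should be run through qemu's user-mode emulator.
kldload imgact_binmisc
binmiscctl add armv6 \
    --interpreter "/usr/local/bin/qemu-arm-static" \
    --magic "\x7f\x45\x4c\x46\x01\x01\x01\x00\x00\x00\x00\x00\x00\x00\x00\x00\x02\x00\x28\x00" \
    --mask  "\xff\xff\xff\xff\xff\xff\xff\x00\xff\xff\xff\xff\xff\xff\xff\xff\xfe\xff\xff\xff" \
    --size 20 --set-enabled
```

After this, executing an ARMv6 binary inside the jail "just works": the kernel rewrites the exec into an invocation of the emulator.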
Sean Bruno has done a lot of work in that area, and he talked about it yesterday. The result is that this way we are able to build almost the entire ports tree for ARMv6; the ports that fail do so for two main reasons: either a missing bootstrap, as with OpenJDK, or because the software is simply not supposed to work on ARMv6. So we got pretty good results with that, and there have been a lot of improvements over the last two years. The problem with this approach is that QEMU user-mode emulation is a bit fragile. Basically you have a mapping between syscalls, from ARMv6 to their amd64 versions and back, so you have to implement everything in a huge switch: OK, the binary is trying to do this, how do I translate it to amd64 and translate the result back? The other thing is that it is slow: about three weeks to build all the packages, while on the same machine a native build takes something like fifteen hours. Nothing new there. So we tried another, hybrid way: keep user-mode emulation for everything being built, but use a cross-toolchain to natively run the binaries that are invoked all the time, like cc, strip, ld, and so on. I won't go further into this because it is not cross-compilation; it is just the background of where we come from. We added a new kernel module that can say: oh, this binary is actually an ARM binary, so I prepend the emulator to it and run it. That improved the speed a lot, but it is still slow: we went from three weeks down to one week to build the packages. We still wanted something faster. So the true way is to go through real cross-compilation. It's faster.
It's simpler, because if your framework is good enough it is just a matter of passing a different compiler and a couple of environment variables so that it knows where the ARM base libraries are, and everything runs at native speed. So it's cleaner. And it is easier to use for a regular user, since you do not have to know about binmiscctl, you do not have to know about QEMU, and you do not have to prepare a special environment. The overhead of this approach is on the ports tree side: if you want to cross-build packages, you have to build most of them twice. Every build dependency and library dependency has to be built in both its native version and its target version. Why? Think about libxslt, for example: it provides a library, so you want the target version so that you can link against it properly; but if something runs xsltproc during the build, you need the native version. So we had to figure out a way to install the same package twice, in two different binary formats, without conflicts.

First we had to look at how the build systems work, to make sure we can plug into them easily and make them do the right thing for cross-building. First, autotools: the usual configure, make, make install. Surprisingly, it really works out of the box. Most of the time you just pass it the right variables and you get good cross-built binaries, except that nobody knows how to use it correctly. You often end up with people who do not understand how cross-building should be done in autotools, building temporary binaries in the target format instead of the native format; then the build tries to run them to, I don't know, convert this file into something else or build this object, and it just fails. Still, the reality is that most of the usage you will find works out of the box. CMake is also basically one of the good players.
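For the autotools case above, "passing the right variables" boils down to the `--build`/`--host` triplets plus a cross `CC`. The triple names and sysroot path here are hypothetical, and `CC_FOR_BUILD` is only a convention honored by some packages for the temporary native tools mentioned above:

```shell
SYSROOT=/usr/local/freebsd-sysroot/armv6
# --build: the machine we compile ON; --host: the machine we compile FOR.
./configure \
    --build=x86_64-unknown-freebsd10.0 \
    --host=armv6-unknown-freebsd10.0 \
    CC="clang -target armv6-unknown-freebsd10.0 --sysroot=$SYSROOT" \
    CC_FOR_BUILD=cc
make
```

When a package gets this wrong, it is usually because some helper binary was compiled with `CC` (target format) and then executed on the build host, which is exactly the failure mode described above.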
I haven't had major problems with CMake, except with people trying to be smarter than what CMake proposes, extending it their own way instead of going through the CMake documentation. The BSD make files are mostly properly done for cross-compilation as well; it is more a matter of how people use them outside of /usr/src or /usr/ports. Someone figures out that they are quite nice and easy to use for building their own thing, so they use them, and we get the same problem as with CMake and autotools: people who don't understand cross-building, and targets that generate a binary and then run that binary directly instead of a native one.

Now the bad players. Please never, ever use SCons. It is probably the worst build system I have seen. It does not define a real framework for doing things, so it knows nothing of the basics everyone expects; everything depends on how the people using it implemented it. You can have an SCons setup that knows about the sysroot, or not; that lets you pass a different compiler than the one it expects, or not; that respects CFLAGS, or not. So, SCons: please don't do that.

"Building is simple: I take this C file, I take this compiler, and I generate an object out of it." Yes, that is simple. Real-world building is not that simple. If things like autotools, CMake, or the BSD make files are that complicated, it is on purpose: there are a lot of use cases you do not know about. So as for writing your own build files by hand: unless you have very good experience building things on different operating systems, different architectures, different versions, with and without cross-building, please do not try to be more clever than the people who spend a lot of time on this. Use something that already exists and does the right thing for you. Except SCons.
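For CMake, the supported way to cross-build (rather than the ad-hoc extensions criticized above) is a toolchain file. A minimal sketch, with hypothetical paths and triple; `CMAKE_SYSROOT` and `CMAKE_C_COMPILER_TARGET` are the standard variables for a Clang-based cross setup:

```shell
# Write a toolchain file describing the target, then point CMake at it.
cat > armv6-freebsd.cmake <<'EOF'
set(CMAKE_SYSTEM_NAME FreeBSD)
set(CMAKE_SYSTEM_PROCESSOR armv6)
set(CMAKE_C_COMPILER clang)
set(CMAKE_C_COMPILER_TARGET armv6-unknown-freebsd10.0)
set(CMAKE_SYSROOT /usr/local/freebsd-sysroot/armv6)
# Search libraries/headers only inside the sysroot, never on the host.
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
EOF
cmake -DCMAKE_TOOLCHAIN_FILE=$PWD/armv6-freebsd.cmake .
```

Everything beyond this file is supposed to be untouched project code, which is why CMake counts among the good players.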
And then there are the homemade build systems that do not even go through make. Please stop with those shell scripts that try to be clever as well: there are a number of them where you say "please build me", some magic happens, and you cannot control what is going on inside. Avoid those and you are in a sane environment.

The main complications we hit when trying to cross-build were Perl and Python. Perl is nice: it is cross-build friendly, if you look at the documentation. If you try it in real life, it is a bit different. When you say "I will cross-build this for this target", it requires you to actually have the target box somewhere: it asks you for an SSH user, a password or a key, then connects to that box, executes things over there, brings the results back into your system, and builds. That is really not what you want when you want to cross-build things.

The other problem is Python. Python is supposedly cross-build friendly; it is all supposed to do the right things, whatever. But there is one thing they forgot about: they build the Python binary and then they use it. If you cross-build, you get an ARM Python binary on an amd64 box; you try to run it, and the system just says: what is this? Can I do something with that? There have been patches in the Python bug tracker for a very, very long time, and almost every release they say "OK, we fixed that" and then forget part of the patch. I think now, in Python 3.4, it is fixed, but I haven't checked yet. And because in FreeBSD we love to complicate our lives, since others do not complicate them enough for us, I don't know how we ended up with the Python port built the way it was, but it was everything except how you should build Python.
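For reference, a cross-configure of CPython along the lines of those bug-tracker patches looks roughly like this. The `ac_cv_file` overrides and the `PYTHON_FOR_BUILD` hook are the usual cross-compile workarounds; the triples, version, and interpreter path are hypothetical:

```shell
# The ac_cv_file overrides answer "does the target have /dev/ptmx?"-style
# checks that configure cannot run against the target system.
./configure --build=x86_64-unknown-freebsd10.0 \
            --host=armv6-unknown-freebsd10.0 \
            ac_cv_file__dev_ptmx=yes ac_cv_file__dev_ptc=no
# Build steps that must *run* Python use a native interpreter instead of
# the freshly cross-built (and thus unrunnable) one.
make PYTHON_FOR_BUILD=/usr/local/bin/python3.4
```

The whole bug the talk describes is the absence of that second knob: without it, the build tries to execute the ARM `python` it just produced.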
Thanks to the Python team, it is now fixed, and we are using the regular build of Python, so I can apply the regular patches available to make Python cross-build, and I can cross-build Python properly. OpenJDK: this one is surprisingly cross-build friendly, for real. It really builds properly in a cross-build, and it works well; it generates a binary. I haven't tried to run it, but I have a binary in the end, so that means that with the cross-build framework I am able to generate the bootstrap. Somewhere in the middle of the development of the JDK they decided that being able to build Java without Java was something crazy, so now they enforce Java to build Java: you need that first bootstrap, and now we can produce it, because we can cross-build for ARMv6, for SPARC64, whatever. Except that to build OpenJDK you need Perl and Python, and without both of those you have pretty much nothing in the ports tree anyway, so we needed to figure out a fix for both of them.

Now, the toolchains. I say toolchains, plural, because we have different toolchains in the FreeBSD project: we have LLVM Clang, and we have the very old GCC. First, Clang. Clang is wonderful for us; it is one of the most cross-build friendly compilers. It is multi-target by default: you build it once, then you just specify your target, and it does the right thing to generate the right binaries and call the right binutils. So it's nice: you don't need ten different copies of Clang, one targeting ARM, one targeting MIPS, whatever. The problem is that Clang works properly on Intel boxes, and since FreeBSD 10 we have it working properly on ARM, except big-endian; the older targets are not working with Clang. And one of the points of cross-building is, for example, to be able to target MIPS, which is usually a very small, low-power CPU. So Clang is nice and allows us to do a lot of things, except that the targets we care about are not supported right now. MIPS should be supported quite soon, and I heard PowerPC is quite close to getting supported as well, and someone did the work for SPARC64, so there should be a SPARC64 backend soon.

GCC: well, first, what we have is GCC 4.2. If you try to do anything modern with GCC 4.2 you won't get very far, in particular in C++. If we want to do cross-building we cannot rely on GCC 4.2, so we need to move to a newer version of GCC and make that cross-build friendly. The second thing is, as I said, in FreeBSD we like to complicate our lives, and in this case I don't know whether the GNU people complicated ours, or we did, or a mix of both, each side being pissed off by the other. The thing is, we carry a couple of patches so that GCC knows about some of the targets we have; for example, we have patches to get GCC to create FreeBSD's different ARM binaries, and those patches were never upstreamed. So if you want to use a modern GCC to generate ARM targets, you won't be able to build ARM packages. Recently Andrew did the work of patching GCC 4.8; we need to do the same work for the 5 branch, because I think it is the last open branch, but we are slowly getting there. The other thing is that it is not just a matter of binary format: we have a couple of extensions needed to build the kernel with GCC, and those extensions we either have to upstream or maintain on top of the new GCC. To build packages I just need GCC to know the formats we use; but if I really want to cross-build a whole system with an external toolchain, then I also need to upstream more of the patches we have added recently to GCC. That will be a fair amount of work, because GCC is a fast-moving target, and since Clang came out it is moving even faster. If you do something for GCC 4.8, it is not that easy to port to GCC 5; you might have to change a lot of things, or not, you don't know, but it is still a lot of work. And GCC is not really a cross-build friendly compiler. Well, it knows how to cross-build, for sure; hopefully, otherwise there's a lot of things that won't work. The thing is, today you need one copy of GCC per target you want to aim at; you cannot have one single GCC installation and say "build this for MIPS". Well, you can, but it does not work exactly as you expect. And of course we have two compilers, and the new one tries to stay as close as possible to GCC, except sometimes, and in particular in the areas that interest us for cross-building. So you go through all the documentation you can find on the Internet about how to do a proper cross-build without having to prepare a special compiler for the target, and then you discover that it almost works, and you don't know why the wrong binutils are being used. So that means that in the framework I first need to discover whether you are building with GCC or with Clang, and pass the variables a bit differently depending on which compiler you are using.

Binutils: in the base system we have the latest GPLv2 binutils, plus a couple of patches on top. I decided not to use that version for cross-compilation; I use the most recent version of binutils, because it supports more targets, and because it is simpler to go through a cross-build version of binutils from the ports. The thing is, as with GCC, we didn't upstream our patches, in particular the ARM support: the old binutils we have in base came out before ARM was popular, so there was no proper ARM support and we had to write it ourselves, and of course we didn't upstream it. That was a problem. I don't remember who did the work of adapting our patches to the newer binutils, but it is fixed now, and the next upstream binutils release will have our patches, so it can cross-build for our targets. And binutils is cross-build friendly: you can have a multi-target binutils, saying I build these binutils once, and depending on the flags I pass, it will generate an ARM binary, a MIPS binary, or a native
binary. Except that GAS, the assembler, is not able to be multi-target, which makes things a bit more complicated: without GAS you don't get very far. So we ended up having to create one binutils per architecture we want to target, just because of GAS. Well, actually it makes things simpler: with a multi-target binutils you have to manually specify the LD flags you need for a given target, while a binutils that targets only one platform automatically knows which flags to use for that target.

Speaking of complicating our lives: I need a sysroot to be able to build packages. I need to tell the linker: here are my headers for ARM, here are my libraries for ARM, you need to link against them, and so on. So we need to be able to build a sysroot out of the regular FreeBSD sources. We used to have, and still have, something called "make xdev". It basically builds a cross-toolchain based on the toolchain we have in the base system, then cross-builds all the libraries and puts everything, in a nice fashion, into a directory where you can find everything you need to cross-build. It worked: it created a sysroot and a cross-compilation toolchain, and it worked pretty well. But it is inconsistent across versions. In the ports tree we support FreeBSD 8, FreeBSD 9, FreeBSD 10, and FreeBSD head, so we need to provide the same feature whatever version of FreeBSD you use; I need to be able to have a sysroot for FreeBSD 8 as well as for head. And "make xdev" is basically only working properly right now in what will be 10.1; it only works properly if you are targeting something that has Clang. It used to work properly on 9, but when Clang went in, some magic got involved in deciding "do I need to build this tool into the toolchain or not", and because everyone was happy to get rid of GCC, if you are targeting something that is still using GCC it just broke, because some parts were missing. We need to sort that out.

So the solution was: OK, we can use xdev to build a sysroot, but for the toolchain we will use Clang from the ports tree. I focused on Clang, and I am only recently switching to GCC to be able to reach the other architectures. I decided that the less building the user has to do, the better. In FreeBSD 10, Clang is good enough to cross-build, so I will use the base Clang if possible; if you are building on 9, we will use a Clang from ports. So we fall back, if your base version is too old, on the ports version; ports Clang 3.3 is enough. I will probably switch the default to a 3.4 version, and in that case, even on 10.0, it will automatically use the version from the ports tree. And we use binutils from ports all the time, because it is simpler for us to say "this is the version we want" than to hope base has the right binutils for the job. But using binutils from the ports tree has its quirks for us: the newer GAS is more pedantic in some areas, and most of our assembly files were missing some end-of-section directives that the newer GAS just died on. So we had to fix them; right now we have fixed the ARM files, and I need to check all the other targets, or get someone to fix the other targets. And we need to create ports that are able to build the sysroot, because the user just wants to go into a directory and say "I want to build this for FreeBSD 10 ARM"; the user does not want to have to prepare a lot of things beforehand. So we need a port that fetches the FreeBSD sources, for each version we support, and generates the sysroot out of them. OK, so, creating the sysroot: we have seen that "make xdev" is not really nice for us, and we don't have a make target just to create a sysroot. We need one, really; it is something users will often want. Actually, everyone building appliances on FreeBSD is, I guess, creating their own sysroot, so we need one
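What such a sysroot port or target boils down to can be sketched with the stock build targets; a full buildworld is the blunt version of it (a dedicated target would build far less), and the paths are hypothetical:

```shell
# Cross-build the FreeBSD userland for ARMv6 and install it, headers and
# libraries included, into a directory that will serve as the sysroot.
cd /usr/src
make TARGET=arm TARGET_ARCH=armv6 buildworld
make TARGET=arm TARGET_ARCH=armv6 \
     DESTDIR=/usr/local/freebsd-sysroot/armv6 installworld distribution

# Clang in base is multi-target, so the same binary can then produce ARM
# code against that sysroot:
clang -target armv6-unknown-freebsd10.0 \
      --sysroot=/usr/local/freebsd-sysroot/armv6 -o hello hello.c
```

The point of a dedicated target, as argued above, is to capture only the "libraries plus headers" subset of this, including the build-tool ordering quirks that no user should have to know about.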
But yeah, otherwise it's something really easy to do, you know: just this small command line, with a couple of magic bits in the middle where you say "I don't want this, but I do want that"... No, really, we need a target. Who knows that, for example, in our sources, ncurses first needs a build tool to generate its headers? No one. So we need a simple target: create a sysroot for me, please; take this compiler, take this linker, put everything in there. We need that; it is what "make xdev" was intended for at the beginning.

If we look at the ports infrastructure, adding cross-compilation was surprisingly non-intrusive. Basically you need to keep track of your host compiler, the native compiler, because you will need to pass it to anything with build tools that must run natively; you need to switch the default compiler to the cross-compiler; and you need to point the strip command, which we use a lot in the ports tree, at the one from the cross binutils. And I needed to modify pkg a bit, because pkg is not so much introspecting binaries as asking the host "what is your native architecture" when it builds a package, so that it can record "this package is for amd64" or whatever. You can override that in the ports tree, saying this is a no-arch package, but by default it picks the host's version. To determine it, I do not rely on the kernel, because you can run a recent kernel with an ancient userland; I rely on the userland, and I read the /bin/sh binary, because I guess everyone has /bin/sh. But when you cross-build you have a sysroot, and there is no /bin/sh in it, so I had to modify pkg so that it can read the ABI it should use from one of the files inside that sysroot, and I chose one of the files you will always have: crt1.o.

And we have a couple of variables around pkg-config. Well, we don't actually use pkg-config on FreeBSD, because pkg-config is in quite a bad state right now: you need glib to build pkg-config, and pkg-config to build glib. We use something called pkgconf instead, but it is basically exactly the same and supports the same syntax and features as pkg-config. You need to tell pkgconf that you have a sysroot somewhere: instead of querying the .pc files from the host, please query them from the sysroot. We also had to change a couple of behaviors. We have two kinds of dependency macros in the ports framework, the lib dependencies and the build dependencies, and I had to say: if something is a build dependency or a lib dependency, please build it twice, the native version and the cross-built one. And I needed to be able to install packages into a different destination. What we do is install the native dependencies on the host the normal way, and I added a --relocate option to "pkg register" so that I can install the target versions into the sysroot. That gives me two different, clean environments in which to put my binaries. And I needed a couple of knobs saying: for cross-compilation I have a dependency on the sysroot, which is this port; I have a dependency on this binutils; I have a dependency on this compiler. And then there are tweaks port by port, because we have a lot of people trying to be more clever than what the framework already offers, so you have to fix what they did.

So how did we fix Perl? We have two choices. There has been, for a very long time, a perl-cross project, which never found its way into mainline Perl, and which basically provides autotools to build Perl; it works very well if you want to do cross-building. The other way is to provide pre-seeded config.h files: you first run the Perl configure on the different targets you are aiming at, take back the headers, and then remove from them everything you know is version-dependent or architecture-dependent, and you basically do the configure job yourself instead of letting Perl do it. That is what is done, for example, in OpenWrt and other embedded Linux platforms that do cross-building. For Python, what we need to do is bring the needed patches to 2.7. We cannot patch it so that it builds both a native version and a target version, but we can patch it to say: I already have my own Python on the host, and it is the right version, so when the build, targeting only the cross side, would try to run Python, use the version from the native package I have already installed. That is what we need to bring back; and I think all of this has been committed for 3.4, but I need to check. SCons: well, there is no solution for SCons. I think somebody is using it now, and they are probably the only ones who managed to get something working with SCons; they can cross-build, and I don't know how they did it. For all the other projects, I never managed to get anything really reliable based on it.

OK, so from the port user's, the end user's, point of view: when you want to cross-build, all you have to do is go into a port, specify one magic macro saying "I will build for this version of FreeBSD, and this is the ABI we support; create a package for me". You can do that as a regular user; it works. What it does is check your compiler: OK, you have Clang in base, I will use it. Then it says: I need this cross binutils for ARM, whatever, so it installs it. Then it goes through a port, which I haven't imported yet, with quite a long name like FreeBSD-sysroot-ARMv6; it builds a sysroot and installs it. You don't have to know yourself how to build a FreeBSD sysroot; you just let the ports tree do the magic. And that was without a provided sysroot. With a provided sysroot: a lot of companies are using FreeBSD in their appliances, so they are already building their own sysroot, and they don't want the
overhead of building yet another sysroot on top of it. They still want all of this to work out of the box, but they want to tell the system "my sysroot is over there". So you can specify with another macro that your sysroot is at a given path, and the ports tree will do all the magic: all the dependencies, install the native one, install the target one, and create the package and everything. If you already have all the dependencies installed, everything is done as a regular user; you don't need any root credentials.

The limitations we have: I'm running a bit late, I think. The limitation we have now is that the versions of the base system that still use GCC bring in libstdc++ from GCC 4.2, and that gives us a nightmare, because you get a mix of the newer libstdc++ and the old one, and it gets complicated. I need to figure out a way to fix that. The other thing is that we need GCC anyway for everything that is not supported by Clang. OpenMP, for example, is not supported in Clang; everything that does not respect standard C++ relies on GCC; and we still have a lot of people using those weird nested functions in C, a GNU extension, which Clang does not support either. So we need to find a way to keep this clean. For OpenMP, by the way, we have a solution: use GCC, but without linking against GNU's standard C++ library. We tweak g++ a bit so that it uses libc++ instead; it is quite easy to do. And well, until yesterday I was not able to do anything on GCC-only platforms; yesterday I managed to get the first packages building for ARM big-endian, using GCC 4.8. So we are getting close. Thank you. Do you have any questions?

Q: It's not exactly a question, but since I'm from OpenBSD I don't have to be diplomatic about things: upstreaming patches to GCC is definitely GCC's fault. It is almost impossible to do real open-source work with the GCC people. You write a patch, you submit it upstream, and they tell you it's not against the latest version, so they don't accept it, and it takes at least a month to try again, usually. Then you try with the newest version of a branch that doesn't actually work on your operating system, you submit a bug report, you wait another month for somebody to fix it, then you realize your patch has changed and you need to rewrite it. You submit it again, you wait another month, and you realize you missed something in the incredible coding guidelines they have, so they send it back telling you to fix it because you missed a space somewhere. You wait another month, and you realize it no longer works with the current version. So it's not fun at all. Maybe you can do something about it, since you have lots of money in the FreeBSD Foundation, but if you really want to get patches upstream you have to pay somebody to do it, because it is not fun.

A: The main point for us is that we need to do something, so, in the end... Yes, Jordan?

Q (Jordan): Obviously, with this work you are seeing a lot more interdependencies between package building and the source tree. What is your long-term roadmap for essentially merging the notion of cross-building everything?

A: One of the nice things I got out of this is that I now have a prepared external toolchain, so I can build the base system with an external toolchain. So my roadmap is: first, get things working with GCC, because that is where we really need an external toolchain; once that is done, go back into the source tree, create the "make sysroot" target, and finish what I need to finish to use an external compiler in the base system. But basically, with this simple setup, you can already use an external toolchain entirely.

Q: So the sysroot does not need the base compiler at all?

A: Well, this is extracted from "make xdev": if you look at Makefile.inc1 in the base system, you have a cleaner and more compact version of this, which I just tweaked a bit, to pass, for example... it is not tweaked like this in the base system, but yes.

Q (Jordan): I could have stood up for this one. I've been talking to Glen a little bit about the notion of release profiles to drive the release-building process, with a profile that describes essentially all the pieces of base you want and all the packages you want, because essentially, to build an appliance, that is what you are doing: it's folding in the nanoBSD functionality and making it part of the release-building process. That's kind of what I was hinting at when I asked about your longer-term roadmap.

A: Ultimately I want to build it all together. I have another project for that, in fact, and it will use all of this, but it is separate, because the ports tree and the base system live in two different repositories, and what I end up needing is basically a third one. I use my Poudriere tool: I added a new subcommand, which will probably land in the fourth version of Poudriere, where you can say: I need base with these options and these packages, and please build me a FreeBSD release as it is done by the FreeBSD project; or please build me a USB stick with these packages installed; or please give me a pre-seeded jail I can deploy somewhere with those packages. It will do the magic of building everything, preparing the environment, and in the end creating the media. Any more questions?

Q: You said you are going to use Clang from ports for older versions of FreeBSD — for cross-building, right?

A: No, I will use the base version if it is recent enough, and fall back on the version from ports if the base version is not good enough. For example, right now I always use Clang when I can, because the oldest version we have in the base system is 3.3, which is good enough, but I am more and more tempted to switch to 3.4, which is much better; that would mean 10.1 uses the base version and 10.0 uses the ports version.

Q: Will you also use a newer version of libc++ in that case, or will you always rely on the one from the system?

A: I will rely on the one from the system, because you will be linked against a given version, so you need to keep that one. Thank you.