Okay. Good afternoon, everyone. Our next presentation: Making Build Systems Not Suck, by Jussi Pakkanen. Please make him feel welcome. Thank you very much. So, I am Jussi Pakkanen and I'm here to talk about build systems, but I'm going to start off with a small disclaimer. For my day job I work for Canonical on the Ubuntu phone, and I actually have one of them with me, so if anyone wants to ask me questions about that, please come see me afterwards. But this presentation is my own free-time project, so anything I say here does not necessarily reflect the opinion of anyone else, including my employer. Now, I was watching the CppCon presentations a few months ago, and there was a choice quote in one of them, by a Boost developer, that I wish to start this presentation with. He said: "Let's talk about build tools. All build tools suck. Let's just be upfront about it." And if you talk to people, this seems to be the general consensus, a sort of Stockholm syndrome: everyone has known for a while that the existing systems are not really good, yet for some reason nothing has happened. So let's look at some ways in which they actually suck. The main one, at least in my personal opinion, is that current build systems don't support the flow. Flow is a psychological phenomenon originally described by a researcher whose name I'm not even going to attempt to pronounce. The point is that whether you're solving problems or doing athletics or something like that, when you start working on something it takes you about 30 minutes to actually get into the problem. Then at some point the rest of the world disappears, it's just you and the problem, and you can totally focus on it. This state is very hard to achieve, taking about 30 minutes on average, and it's very easy to lose.
So if you work in an office and your manager comes and taps you on the shoulder, there you go: you just lost your flow. For programmers it looks like this. There are three phases in a programmer's life: when you edit stuff, when you debug stuff, and when you build stuff. Two of these are productive; one of them is not. And if building takes longer than five seconds, you lose your flow. Another way of saying this is that a running compiler holds a mutex on your brain: while the compiler is running, you can't really do anything else. But there are also some practical problems. The basic design goal of any system is that simple things must be simple and hard things must be possible. And if you can make the hard things easy as well, even better. So let's look at the simplest possible thing you could do: the Hello World C application. If you compile it with Autotools, you get to meet something like this. If you do the calculation, this diagram has more boxes and arrows than the Hello World application has characters. If this is the way your system works, you might have a complexity problem. Let's look at something slightly more difficult. Say you have an application which uses some sort of dependency, let's say GTK 3. If you look at the way people tend to write CMake files, you usually find something like this: a project definition, a minimum CMake version, you want to use pkg-config, you search for the package, and so on and so on. So this is seven lines of code. Fairly readable, it's not that difficult. Except that there's a bug in here: this line should have the word REQUIRED in it, because since it doesn't, if the system package isn't found, CMake will just continue on, and then when you try to use the result you get interesting error messages. Okay, so seven lines of code, one bug, it's not that bad, right? Well, there's a second bug, which is actually over here.
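The seven-line CMake file under discussion was shown on a slide; reconstructed from the description (the exact project and variable names here are my own guesses), it would look roughly like this, with the three bugs annotated:

```cmake
cmake_minimum_required(VERSION 2.8)
project(gtkapp C)
find_package(PkgConfig REQUIRED)
pkg_check_modules(GTK3 gtk+-3.0)                 # bug 1: REQUIRED is missing here
include_directories(${GTK3_INCLUDE_DIRS})        # bug 2: ${GTK3_CFLAGS_OTHER} is never added
add_executable(gtkapp main.c)
target_link_libraries(gtkapp ${GTK3_LIBRARIES})  # bug 3: ${GTK3_LDFLAGS} is never used
```

With `REQUIRED` omitted, `pkg_check_modules` simply leaves the `GTK3_*` variables empty when the package is absent, and the build fails much later with confusing errors.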
So when you use these things, you add the include directories so that the compiler can find your headers. But a pkg-config file might also provide extra compiler flags, and this doesn't add them. Most packages don't use them, but some do, and if you're used to writing things like this, you get interesting bugs to fix. So, okay, seven lines of code, two bugs, that's not that bad. Well, there's a third bug in here. Just like with the include directories, this only adds the libraries, but there might be other linker flags as well, usually link directories, so you need to use GTK3_LDFLAGS too. So, I don't know, it doesn't seem very simple. Let's do something harder: precompiled headers. If you don't know, this is a method for accelerating C++ compilation, sometimes by quite a wide margin. GCC has supported it for, I don't know, 10 or 15 years, and absolutely no one uses it, because it's really hard to set up. There's a bug on the CMake bug tracker, at this address, where the official stance is: sorry, but doing this right as a first-class feature is very non-trivial; every platform does PCH differently, so it's hard to define a common interface; it's probably possible, but we have no motivation, time or funding to do it. And okay, that's totally fine, because when you write any piece of software you decide what your system does and what it doesn't do. But then it goes on to say that CMake currently does provide enough primitives for projects to do it themselves on each platform. Now, if you actually read through this page, you find that people have added multiple different modules as attachments to this bug: hey, here's my implementation of precompiled headers. There are about five of them; I've written one, and there are a few others as well. And they all have one thing in common, which is the fact that none of them actually work.
They kind of work as long as you don't do anything too tricky, but if you try anything fancier, they fail in, again, interesting ways. Right. So let's imagine a world where we have a build system that doesn't suck. What would it look like? What design features would it have? Let's start with something simple: when you run your build command, you either get an error or you get a fully built thing. Silently producing something that's actually wrong is not something you should ever accept. And if your system has flags or options that allow you to build unsafely, possibly even on by default, that's not a good thing, because you really don't want to be debugging problems caused by stale files. Another thing: you should do the common thing by default and then allow the people who don't want the common thing to do something else. If you look at the way most makefiles and so on are written, they all do almost exactly the same thing, with very small variations around the edges. So let's make the common thing very simple, and if you need something else, you can do that too. Then there's this one, which I feel is pretty self-explanatory. Also, file names can have spaces in them, and that should just work; you, as a developer, shouldn't have to do quoting. Especially if you start quoting your quotes in order to get them through multiple layers of tools, then you are in inception land and you're not very happy. Also, the best possible build system would be invisible: it would just be your brain doing stuff. We're not there yet, technology-wise. But in the meantime, what we can do is minimize the time you need to spend writing your build definitions, and writing them should be very simple.
Because all that time you spend writing build definitions could be spent writing the actual code, which is much more fun. Then there's this one: the user really shouldn't need to tell the system things it can find out for itself, such as what the flag is to turn on debug mode. It's different in different compilers, and if you just use -g it will work almost all the time, except when it doesn't. These are things where you shouldn't have to deal with compiler flags. If you want to, go nuts, but you shouldn't have to. Global variables: we have come to the conclusion in software engineering that these are bad and global state is bad. But build systems consist almost entirely of nothing but global variables and global state. If you want to find out what affects the build options of this one thing you're about to build over here, you have to read through all of the code, because it's totally impossible to find out otherwise. Then there's build speed. Build speed is essential, because waiting is wasted time, and dirty tricks to make it faster are totally cool, just as long as they're not exposed by the implementation in any way. Having sane and sufficiently rich data types is always a plus: having arrays, which are surprisingly rare among build systems, or objects and that sort of thing, makes everyday development life so much easier. This next one came as a surprise to me when I started working on this project. One of the main problems with these systems is that you accidentally write a dependency loop, where something depends on itself through something else. It turned out that if you design your system properly, it's impossible to even express a loop in your dependency chain, and then massive amounts of complexity just go away. As for the user interface, here's a rough outline of what it should be: every build system should come with this one big red button.
The idea is that no matter what you've done, to get a build you just press it. It's the same operation every single time; you don't have to care, and the system takes care of everything in the background. None of this "okay, so I edited the build definition files, which means that I have to..."; no, no, let's not go there. Then, the not-invented-here syndrome is bad, so let's steal everything from the other build systems that is actually good, and there's quite a lot of that. As an example, the Autotools two-phase build is a very, very good design: first you do your configure step, then you do your build step, and this nicely separates the different kinds of work that need to happen. This is a good thing, so let's steal it. One of the best things about CMake is that the build is defined in terms of a virtual machine of sorts: the backend is not exposed to you, you just write against this abstraction, and it can be retargeted to Visual Studio or Xcode or whatever. This is a really nice thing, so let's steal that one too. SCons is basically a library for Python: you import it, write some Python code, and get your build out of that. And the reason people use Python instead of Perl is that it's aesthetically pleasing, it's nice, and it's really good to use. So this is something we should do also. Then there's GYP. GYP is the build system for Chromium, and basically it's a bunch of JSON files: you write out your JSON files, which describe the state of the system, and it just builds from there. The main point is that it's not a Turing-complete language. You cannot program in it, but it's still expressive enough to describe the entire build tree of Chromium. And whenever you can get away with something that's not Turing-complete, you should, because it makes everything so much easier. GYP is also all about scalability.
The scalability of your system should start at 10,000 files, not end there. Then there are qmake and Qbs, Qt's new build system. They have native Qt support, which is very popular and otherwise a bit tricky to set up, so we'll probably do that as well. So let's take all of these things together and try to make one build system that has all of them. What would it look like? Let's start with the Hello World project. It's two lines of code: first you define a project, which has some sort of name and the languages you want to use, and then you declare an executable with its name and the source files that go into it. And that's it. Two lines of code doesn't seem like much, but it actually gets you quite a lot. You can build this on Linux, FreeBSD, OS X, and Windows with Visual Studio or MinGW, and all those sorts of things. Compiler warnings are on by default, -Wall and -Wpedantic, because if you're not using compiler warnings, you're not doing software engineering, you're doing astrology. Then you have different build types, like CMake has: debug builds, optimized builds and so on, and you just say which type of build you want. Cross compilation is really just using a different compiler, so there's not that much difficulty. And the outputs you get are native binaries, executables produced by the native toolchain, that you can run directly under GDB or Valgrind, and not with libtool --whatever the command was. If you want to use a dependency, you start the same way, and in the middle you declare the dependency, gtk+-3.0. This uses pkg-config in the background and returns a kind of opaque object. Then, when you have an executable which wants to use GTK 3, you just say dependencies and give the list of dependencies you want, in this case the GTK one.
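As a sketch of what those two examples look like in Meson syntax (the file and target names here are my own assumptions, not from the slides):

```meson
# meson.build -- Hello World in two lines
project('hello', 'c')
executable('hello', 'hello.c')

# Adding a pkg-config dependency: the returned object is opaque,
# and its compiler and linker flags come along automatically.
gtk_dep = dependency('gtk+-3.0')
executable('gtkapp', 'main.c', dependencies : gtk_dep)
```

Building is then the usual two-phase flow: configure once into a separate build directory, then run ninja in it.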
And you don't have to babysit any of the compiler flags or anything like that; you just do it. For unit tests, you create an executable and then say: a test called "simple test" runs this executable. If the return value is zero, everything is fine. Then there are the precompiled headers, which you remember from earlier: supposedly it's really difficult to think of a common interface for them. Well, here's my suggestion for a common interface: you say that the C++ precompiled header file for this target is this one. As an example of the kind of performance gain you can get: I was working on a tool that uses Qt 5 D-Bus, and it took about two minutes to compile from scratch. Then I enabled precompiled headers for the Qt headers, and it went from two minutes to less than one minute; and it took me less than one minute to write the header file. So this is a very good use of your time. Let's look at a slightly more complicated example. Let's build a C++ shared library that uses GLib in the implementation, with unit tests, install it in the proper system directories, and create a pkg-config file so other people can use it. At the top level of the source tree we first declare the project, which is C++, then we add a global argument to use C++11, which means it's used in every single C++ compile that you have. The thing is that you can set global arguments but you cannot unset them, so there's never the question of whether one is valid here or not: if it's a global argument, it's always used, and you can't prevent that. Then we find the GLib dependency, and then we have an include directory that we need in the header search path, so we declare the include directory. Again, this doesn't set anything in a global header path; it just returns an opaque object, which we're going to use later.
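Putting the whole library project together, here is a sketch of the tree being described. The file names, library name and metadata are my assumptions, and the pkg-config helper is shown in its current module form, which may differ from what was on the slides:

```meson
# meson.build (top level)
project('foolib', 'cpp')
add_global_arguments('-std=c++11', language : 'cpp')

glib_dep = dependency('glib-2.0')
inc = include_directories('include')   # opaque object, used later

subdir('include')
subdir('src')
subdir('test')

# src/meson.build
foo_lib = shared_library('foo', 'foo.cpp', 'bar.cpp',
  include_directories : inc,
  dependencies : glib_dep,
  install : true)

pkg = import('pkgconfig')
pkg.generate(libraries : foo_lib,
  name : 'foo',
  description : 'A sample shared library.')

# test/meson.build
foo_test = executable('footest', 'footest.cpp',
  include_directories : inc,
  link_with : foo_lib)
test('foo test', foo_test)
```

Variables such as `inc` and `foo_lib` flow from one directory's file into the next, since `subdir` executes the files in order within the same scope.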
And subdir means: go into this subdirectory, execute the Meson definition file in there, and come back. So we do the include directory, the source directory and the test directory. Inside the include directory there are some header files that we just want to install. In the source directory we have our shared library, which has the name foo and these two source files. The include directories to use for this particular build come from the inc variable we made earlier; it uses the GLib dependency; and we want to install it as part of our install step, into the prefix's lib directory, which is the system default. Then there's a simple helper for pkg-config: libraries is the list of libraries you need to link against to use the thing, you specify a few pieces of metadata, and you're done. As part of ninja install, it will generate this file and put it in the proper pkg-config directory. Then we need to build a unit test. We have a test executable with a name and a source file; it has the same include directories, and you link it against the library we built earlier. Then you define a test with a specific name, and that's the build definition in its entirety. This is really the entire thing: you don't have to write anything else, and you can almost learn it once and then just do it from memory, without looking up all the manuals and stuff. And while we're on this subject, let's build a Qt 5 application. We need to find the Qt5 dependency, which has multiple different modules, like widgets and D-Bus and declarative and all those sorts of things; we just want to use widgets. So we have an executable: these are the source files it contains, and these are the headers that need to be preprocessed with the moc preprocessor.
These are the UI files that need to be processed with the uic compiler, these are the resource files for the resource compiler, and these are the dependencies you want when actually compiling the C++ files; and that's how you compile Qt 5 applications. Now, looking at performance: here's an ARM board that I have, with about a gig of RAM, running Debian unstable, and this is compiling the GLib library. I removed GIO out of laziness, because I didn't have time to convert GIO as well. The point is to compare building with the Autotools build system that they have against my Meson port. If you do the first configuration step, with optimizations disabled so the comparison is fair, running autogen takes about five minutes; doing the same with Meson takes about 24 seconds. It doesn't do quite as much, but it sets up everything you need in order to compile GLib and the unit tests and all that sort of stuff, so the work is roughly comparable. Then you do the actual build. It's a dual-core machine, so you build with two parallel jobs, and it takes about five minutes; the same build with Meson, which uses Ninja, is one minute and 28 seconds. I actually don't know why it's this fast. It really shouldn't be. In other tests I've run it's about 10% faster or something like that. If anyone has a good idea of why this is happening, please come talk to me, because it doesn't make any sense to me. It does build slightly less code, but only about 10% less. But this is the actually important number, because this is the one you deal with every day: how long does an incremental build take?
If there are no changes at all, the overhead of make and Autotools is three seconds, and slightly less for Meson. And if you simulate changing one file, so you just touch this one file and then rebuild, it takes one minute and 18 seconds with Autotools. Anyone in the audience want to guess how long it takes with Meson? How much? 40 seconds. Anyone going lower? 0.2 is a bit too low. It's 1.1 seconds. The reason is an optimization trick stolen from LibreOffice, who stole it from Chromium. What's actually happening here is that Autotools also takes only about a second to recompile the actual shared library, but then it relinks all the test applications. What Meson does instead relies on the fact that a shared library is defined by the list of symbols it exports. When you link the library, you extract the list of exported symbols; when you relink it, you extract the list again, and if it hasn't changed, you know you don't have to relink anything that depends on it, because nothing visible has changed. Using these sorts of tricks, day-to-day development becomes really much nicer, because it's a very common case to have one shared library and then a bunch of tests for it. On a desktop, configuration usually takes less than five seconds, depending on how many tests you have, and no-op build times are less than one second. I've never seen more than one, but that's mostly because it's using Ninja, which is made of awesome. And there's only one process, so it doesn't do the recursive-make kind of thing: it has the entire dependency graph in memory at the same time, and it can just saturate the CPUs at all times. Okay, so let's look at some advanced features now. One thing you often want to do is build a program, use it to generate more source code, and then build that. So how would you do that?
Well, first you build your code generator, and then you create this thing called a generator, which takes the binary and says: it produces these two files for each input file, and these are the command-line arguments to pass to it. Then you just tell it to process these files, and it again returns an opaque object containing the generated header and source files. Then you build an executable with that object in its list of sources, and you're done. Now, the problem here is: what if you're cross-compiling? Then you can't do this, because you can't run the generator executable on the build host. So what would it take to do this under cross compilation? There are two options. One is that you don't have to do anything at all, because there are cases where you can actually run your cross-compiled binaries natively, for example using Wine. If you're compiling with MinGW under Linux, you can tell Meson to use Wine as an executable wrapper so that it can run these binaries, and it will then use it automatically; you don't have to do anything at all. Even better, it will also do this for your unit tests, so if you have to target both Linux and Windows, you can test both from the Linux command line in one step, which is kind of nice. The other option is to mark the target as native, which means it's built with the local compiler and not the cross compiler. Then that particular target will always be built with the build machine's compiler, and it will work. You can't install that binary as part of the cross-compiled result, because it's for the wrong architecture, but you can use it during the build. Another thing is that projects usually need different kinds of options, so you can define options, and they are actually strongly typed.
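The option mechanism just mentioned lives in a separate file next to the build definition; a sketch with assumed names (note that Meson calls the enumerated type a combo):

```meson
# meson_options.txt
option('use_gui', type : 'boolean', value : true)
option('renderer', type : 'combo',
  choices : ['gl', 'vulkan', 'software'], value : 'gl')

# meson.build
if get_option('use_gui')
  # ... build the GUI parts only when the option is enabled ...
endif
```

Values can then be queried and overridden from the command line at configure time, e.g. with -Duse_gui=false.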
So you can have a string option, a boolean option, or in this case an enumerated option with multiple choices, of which you can select exactly one. From the command line you can query the values of these and set them, and inside your Meson definition files you can say: get me the value of this option, and you get it directly. As for the list of languages currently supported, these go in the order of how much I actually need to use them myself. C and C++ are the best supported, then there are Objective-C and Objective-C++. For Fortran, there was a guy in Spain who sent me patches for something like ten different Fortran compilers. And then there are the other ones, which kind of work but haven't been all that battle-tested yet. Development is very test-driven: there are over 100 unit tests, and each of them is also a sample project, showing how you would use Meson for a project with a static library, a shared library, and so on. All features must come with a unit test, so we can actually tell that refactoring doesn't break things. And there is one controversial feature: if you're really old-school, you need to be careful, because the system will not allow you to do in-source builds. You always have to have a separate build directory and put your stuff in there. This is not out of personal hatred for people who build in-source, but it turns out you can build either a system that provides in-source builds or one that provides out-of-source builds; if you try to do both, it will break in interesting ways, which I don't have time to get into. This is one of those things where, if you're used to building in-source and then actually try building out-of-source, it's like when you discover revision control for the first time: at first it's "yeah, I don't need that", and then you try it once and it's "join the dark side". It's actually pretty good.
And there are benefits to this. As an example, if you want to run the Clang static analyzer, the steps are always the same: you create a temporary throwaway directory and do your build steps in there, and it's guaranteed not to clobber any other build currently ongoing. Even better, you can create your own target for it, so you just type ninja with the static-analysis target name, and it runs the static analysis for you, guaranteed not to clash with anything else. If your system does allow in-source builds, this is something you cannot do, because you might have stuff in your source tree from somewhere else: you'd have to delete that, then do your build, then restore your state, and it gets really, really complicated. A bit of a side tour. There are quite a lot of build tools, and there are also quite a lot of IDEs, and these work together very poorly, because every build tool has to write an exporter for every single IDE, and some of them are really funky in the way they do things. Looking at this, the obvious solution is some sort of common format through which build tools can expose their state to the IDE; the IDE loads that, and things become simpler.
Now, the problem is that this doesn't exist, or at least it didn't, so I talked with one of the people working on Qt Creator, and he helped me create one. It's a very simple JSON file format that you can use for IDE integration: you can introspect basically everything, like project source files, unit tests, the environment variables you need to set in order to run your tests, and the command-line arguments you need in order to run your tests. With this we can finally reach the thing Java developers had in 1996 or thereabouts: you can click in an IDE to run your tests, and then if something fails, you can right-click on it, say "start this in the debugger", and it will set up everything for you. Now, I don't know of a single IDE that does this yet, but on the Meson side it's already there; so really, if you're an IDE developer, please do this, because I want it. So what can you build with it? The way Meson is developed is that I put in stuff that I think is interesting and useful, then I grab some projects that are large and try to compile them, adding all the features necessary to make them build. You already saw Qt Creator: it's several thousand files, and the scalability seems to be pretty good. MAME is huge, and it's built with just plain make, which is quite well done; it's interesting. Mesa 3D has an interesting thing: an XML definition of the OpenGL API, from which they generate code. That's quite interesting too. Now, the last thing is the eternal battle we heard about in the previous talk: there are people who really want to do everything with distro packages, and there are people who really want to embed everything themselves, and usually they start arguing about this and talking past each other, which is not very productive. So, is there a technical solution for this?
So in Meson, what you can do is take any project that builds with Meson and use it as a subproject, so it becomes a sandboxed part of the build you already have. If you're familiar with Go, Go has go get, which downloads stuff and puts it in place, and you can use this in pretty much the same way. How would you use it? First you try to find the dependency in the normal way, and if it's not found, then you just use the subproject command, which builds it as part of your own build, and then you can just use it. Here's what it looks like in practice. This is a simple SDL2 application running on three platforms at the same time. Starting from the left, there's Ubuntu 64-bit, using the system packages. Then there's Windows XP, using Visual Studio and a zip file of the SDL library that I just downloaded from the SDL homepage. And finally there's OS X, using a framework version of the same library, which you can also download. You can see the build definitions behind each one of these. The outer parts are actually exactly the same, and if you joined all of these together, it would be something on the order of ten lines of code. So, closing up: it's Apache licensed, and there's a reference implementation in Python 3, but the definition of the system doesn't expose Python in any way, so if you want to reimplement it in C++ or shell scripts or whatever, that's totally possible. It's available in Ubuntu 14.10 and in Debian Jessie, which should be released at some point, hopefully. And there's actual documentation. There was a documentation miniconf earlier on, and documentation is important. Any contributors are welcome, and if there are any questions, I'll be happy to take them. Questions?
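The dependency-fallback flow described above can be sketched like this (the project, file and variable names are assumptions for illustration):

```meson
# Try the system package first; fall back to building the bundled
# copy as a sandboxed subproject if it is not found.
sdl2_dep = dependency('sdl2', required : false)
if not sdl2_dep.found()
  sdl2_proj = subproject('sdl2')
  sdl2_dep = sdl2_proj.get_variable('sdl2_dep')
endif
executable('app', 'main.c', dependencies : sdl2_dep)
```

Either way, the rest of the build only ever sees the one opaque dependency object, so the distro-package and embedded cases stay identical downstream.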
Build systems are often divided into two categories: those that use timestamps as the fundamental test of whether something needs to be rebuilt, and those that use checksums of the source code. I wasn't sure if you mentioned which one Meson uses.

That depends on the backend you use. Currently there's only Ninja, and Ninja uses timestamps.

Is it possible to use a backend based on checksums?

If someone writes one, sure. The timestamp thing matters if you're using something like NFS, where your timestamps can vary wildly; but if you build on a local file system or local drive, I haven't had problems with it. But maybe you have.

The second question is the speed of builds on Windows. I often find that build systems can perform extremely well on Linux and other Unix hosts, but the same build system on Windows might perform extremely badly. For those trying to build software for both platforms: in the sort of tests you've described, how is the performance on Windows?

The performance of the actual build depends on the backend again, because I just generate the backend files, and Ninja was created by people working on Chrome. They use it to build Chrome on Windows as well, and they have done everything possible to make it as fast as possible; unless I'm mistaken, it's the fastest currently known way to build stuff on Windows.

You mentioned that you've got a set of languages you really support. I was wondering how difficult it is if, say, you're also generating some documentation in your project, or you want to use two languages. Say with Node.js you'd have JavaScript, which you might want to package up in some way, but you might also have some native C bindings to a library that you're building at the same time, in one module. Is it possible to add an additional language next to the main language of a project?

You can have as many languages as you want.
Okay, so a project isn't just one language. If you want to compile, let's say, Java with JNI, then you have your Java files and your shared library built together, and that's it.

A possibly related question then: is there a mechanism to add custom build commands, for, again, generating documentation, delegating to other build systems, that kind of thing?

So yes, you can have your own build targets which either just run some command, let's say clang-format or a static analysis tool, as a run command, and you can also specify your own targets, saying that this will produce this file and you run this command to create it.

Can you run build stages in parallel?

How do you mean?

Like if you had a sort of workflow where you do a compile, and then you have a testing stage, and the testing stage has several tests, many of which can be run in parallel safely.

So the unit testing system of Meson runs tests in parallel automatically, and you don't have to do anything. If there are tests which cannot be run in parallel, then you have to specify parallel: false.

We still have time for more questions.

Can your build configuration be spread across multiple files, or do you have to specify everything in one file?

So you can separate it out into multiple files. There's the subdir command, which descends into a directory and executes the build definition file in there. Currently you can only have one definition file per directory, and it always has to have the same name. This might change depending on how the development of the language goes, but so far it's been sufficient to have only one.

So how long have you been working on this, and do you have any feeling for how many users you have, or are there any very big projects that you can give us as a reference?

So I started this two years ago, and I know some people are using it. Now, here's a piece of advice for everyone.
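The features mentioned in these answers, mixed languages, run-only commands, custom targets, multi-file builds, and test parallelism, can be sketched together in one hypothetical meson.build. All target, script and file names here are made up for illustration; is_parallel is the keyword form of the "parallel false" flag mentioned above.

```meson
project('mixed', ['c', 'java'])

# More than one language in one project: a native shared library
# (e.g. JNI bindings) next to a plain Java target.
libnative = shared_library('native', 'native.c')
jar('app', 'Main.java')

# Run-only target: just executes a command (formatting, static analysis).
run_target('format', command : ['clang-format', '-i', 'native.c'])

# Custom target: declares the file it produces and the command creating it.
gen = custom_target('gen-header',
  output : 'generated.h',
  command : ['gen.sh', '@OUTPUT@'])

# Build definitions can be split across directories; this executes
# the meson.build file inside tests/.
subdir('tests')

# Tests run in parallel by default; serialize one explicitly.
unit = executable('unit', 'unit.c')
test('unit', unit)
test('db', unit, is_parallel : false)
```

This is a sketch of the shape of such a file, not a definitive build definition.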
If you ever create something new, give its file a name that is unique even without the period, without the suffix, because the definition file is meson.build, and you cannot search for this word in Google or GitHub: they will helpfully split the word at the period, and you can't search for the exact phrase. So it's hard to find these files. I get bug reports fairly consistently, and submissions of new features, so people are using it, hopefully more after this conference.

Is the port to build GLib with Meson available somewhere?

Sorry, can you say that again?

Did you change GLib to build with Meson? Is it available anywhere?

You mean these ports? Whenever I do one of these, I send an email to the mailing list of the project involved, saying hey, I did this thing, if you are interested in it. On the wiki page there is a link to the mailing list thread for each one of these.

We have time for one or two more questions.

Sorry, a follow-up question. That neat trick you mentioned, about looking at the symbols in shared C libraries: I don't suppose your Java module does anything similar?

No. But Java compiles so fast that usually you don't care.

I maintain AOSP, which is humongous, so it really helps when...

My condolences. But the thing is that I don't expose any sort of implementation detail, so if someone has an idea of how you can do this with Java and make it faster, I'm all for it. Send patches.

Any further questions? There's still time. There's one.

You listed off a number of platforms where it works, sort of the big three. How portable would you say it is to other places, like FreeBSD, Solaris and more exotic platforms?

So what you currently require is Python 3, and then you require Ninja, and these are all very portable. FreeBSD already works: when I ported it from Linux to FreeBSD, I had to add a few search directories for shared libraries. It took me a few hours.
The bigger problem is that if you want to use the Solaris compiler, then you need to write a compiler definition file, which is not that much work, but if the compiler does something crazy, then you might need to change the code. Thus far, adding new compilers has been fairly straightforward.

So we'll have time for one more question, if there's a question.

Have you done any work around building on systems where there are system libraries available, but you tell the build to ignore the system library and build with a separate copy of it that you have in a user directory or whatever?

I have done some. Do you have some sort of problem case in mind?

I'm specifically thinking of the Python scientific ecosystem, of building things in virtual environments and that kind of stuff, where not using the system binaries can actually be an interesting challenge.

I haven't actually looked into that, but if you have compiler flags to tell the compiler not to search somewhere, you can add arbitrary command line flags, so it's very simple. If you can do it with those, then it should be doable. If it needs some more magic, then I would be interested in hearing about it, because it's something that I would really want to support.

Okay, so that's time, but just to thank you for your presentation today, Jussi, we have a small gift. Thank you very much. Thank you.
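The arbitrary-flags approach suggested in that last answer could look roughly like this. The paths are hypothetical and stand in for a private copy of a library installed under a user directory rather than the system one.

```meson
# Sketch: steer one target away from the system libfoo and at a private
# copy in the user's home directory. Paths and names are made up.
executable('app', 'app.c',
  c_args : ['-I/home/user/private/include'],
  link_args : ['-L/home/user/private/lib', '-lfoo'])
```

Per-target flags like these keep the override local, so other targets in the same project can still pick up the system library.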