Is it actually time to start? Sorry? Yeah? Okay. One, two, three, four. Okay. Oh yeah? You don't like it? Okay. I don't think we can do that; there's a package that's got lots of links in. Okay, well. Do you want to cover that here, or should we talk about that later? All right. Okay, it's around starting time, I think. Yes. All right. Do I need to start? Nobody seems very interested. Right. Okay. Everyone shut up. Listen. This is a super low-tech presentation, I'm afraid; I've been busy all week, so I've not done any fancy slides. Thank you. Ladies and gentlemen, please welcome our host, who presents us a nice talk about bootstrapping Debian. So what I'm actually going to do is read my 50k email for this talk; I hope you enjoy that. I don't know how many of you read it diligently when it arrived at three o'clock this morning; I hope everyone is fully familiar with the material at hand. In case you aren't, I shall tell you which bits are important. So last week, just a week before DebConf, we had a bootstrap sprint in Paris with most of the people who've been taking an interest in this, especially the theory of it. And we got a surprising amount done, actually; it was quite productive. We came up with some fairly crazy ideas. Well, Helmut did, actually; blame Helmut for all the craziness, he's good at crazy ideas. So anyway, I will endeavour to tell you the good bits without being too tedious. Primarily, I hope some of you are familiar with the subject matter in general: the fact that bootstrapping Debian is difficult, primarily because of cyclic dependencies. We've been working our way through this for three or four years now, and it's been relatively slow to actually get to a finished thing, but we are making progress and we understand the problem a lot better. And there are various things we'd really quite like to have in jessie.
We haven't got much time left, which is one of the reasons we had this sprint just now: to give us a fighting chance of getting some of these things done once we'd agreed on what the hell it was we were trying to do. I hope you're all familiar with the GNU terminology: the build architecture is what you build on; the host architecture is what you're building for, well, what the code you build will run on; and the target architecture is, if it's a compiler or a tool like that, what it generates code for. So, unfortunately, this isn't in the ideal order. Helmut has been running this thing called rebootstrap, which is a really nasty hacky script, if I can find the right window. There is a web page all about it, but it basically does the whole bootstrap from scratch: it builds a cross-toolchain from the Debian packages and then starts cross-building the core 160-odd packages you need to bootstrap. At the moment it gets about 40 packages in before things break. That's running on the Debian Jenkins infrastructure, and it sends us a lot of tedious IRC messages every time it breaks, which is mostly. So that's a really useful thing, because you get to find out what's broken. Part of the stuff we discussed was all the things that go wrong. I'm going to come back to some of this because, to be honest, it's too detailed for most of you, but there are a lot of little things in the archive which are problematic. You don't actually know what the set of essential packages is until you've built them, because the information is in the binary packages; you can look at another architecture and hope it's the same, but nothing necessarily causes that to be true. And things which depend on build-essential in base packages don't say so, and whilst you're bootstrapping, you haven't got build-essential yet, so you need to know which parts of build-essential something actually needs, and we don't write that down anywhere either. And things depend on virtual packages, which is normally fine. So: Provides.
But again, during bootstrapping, you don't know which package is going to provide the Provides, because you haven't got the list yet, so you don't know what to build next in order to get the thing you need. There are four different ways we could fix that. You could provide the list of Provides in the Packages list. You could just not depend on virtual packages if you're in the essential set; we could just say that people shouldn't do that, because if you depend on the real package, so instead of depending on libexpat you'd depend on libexpat1, then we'd know what to build. The problem with that is that it makes people's transitions harder; I mean, that's the reason people do this. You could swap the names around in the lists. And also, if it's a real metapackage, like say libdb-dev, that's a default package which sets which version of libdb is actually the current libdb; that's fine, because it's a real package, so we know to build that, and then everything works. These all have pros and cons, and there are reasons we do things the way we do, but they are all slightly problematic for bootstrapping purposes. Basically, anyone who cares about this stuff: if you can read this... I don't want to go through it all in great detail here, because there isn't time and it's kind of dull, but if people can understand some of this stuff, it will be nice to know which of these options you really, really hate and which ones you prefer. Because we may not have understood the full situation; we're very much focused on this particular part of the problem, and there are reasons why people use virtual packages the way they do. Source versions and binary versions vary, blah blah blah. Another thing we talked about was partial architectures. This has been a subject for like four years.
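Going back to the virtual-package problem above, here is a tiny illustrative sketch (not real Debian tooling; the package names are invented) of why a dependency on a virtual package stalls a bootstrap unless the Provides information is shipped up front:

```python
# Illustrative sketch: a virtual dependency is only resolvable once the
# Provides metadata of the (not yet built) real packages is known.

def providers(dep, packages):
    """Real packages that satisfy `dep`, using each package's Provides."""
    return sorted(
        name for name, meta in packages.items()
        if name == dep or dep in meta.get("provides", ())
    )

packages = {
    "libfoo1": {"provides": ["libfoo"]},  # real package offering a virtual name
    "app":     {"depends": ["libfoo"]},   # depends only on the virtual name
}

# With a Provides index, the next package to build is clear:
print(providers("libfoo", packages))  # -> ['libfoo1']
# With no index yet (early in the bootstrap), the virtual name resolves
# to nothing, so an ordering tool cannot pick what to build next:
print(providers("libfoo", {}))        # -> []
```

Depending on the real, versioned name (option two in the talk) sidesteps the lookup entirely, which is exactly why it helps bootstrapping and hurts transitions.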
We worked out one way we could do this at UDS, I remember, years ago, but nobody's actually cared enough to go and make dpkg understand hardware capabilities: exactly which ISA something's built for. When we specify an architecture in Debian, that's an ABI, technically, not an instruction set. So you can build armhf for ARMv6 if it's Raspbian, or ARMv7 if it's Debian; it's still armhf, and it doesn't say anywhere in those packages that these packages will only run on v7 hardware and these will run on v6. So you can't do automated "please don't install stuff that will just explode on my computer". It would be nice if there was some metadata somewhere that said what it was expecting, so that things could either stop you installing stuff that won't run, or install optimized packages suitable for your ISA. We currently do that in the package namespace: for a few packages that care, like libc and mplayer and things, we have -i686 versions, but it would be nice if it was actually an orthogonal bit of the package metadata. So we went through that. The MIPS people particularly care, because they've got stupid instruction sets that don't just go forwards like they do on sensible architectures. On x86 you get the core stuff, then you get some extra instructions, and some more extra instructions, and some more, as we went i586, i686, whatever the hell we're on now. The MIPS people threw away old instructions when they added new ones, so the only thing that everything will run is now extremely old and rather inefficient, and they'd like to be able to use some newer stuff, but they need some kind of mechanism to really make that practical in Debian without just rebuilding everything and having a separate repository. So, as ever, somebody needs to do the work; the dpkg maintainer didn't object violently, so long as it wasn't part of the dependency tree.
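The hardware-capability metadata idea above can be sketched as a set comparison; this is a hypothetical illustration (the feature names and sets are invented, this is not dpkg behaviour):

```python
# Hypothetical sketch of ISA-capability metadata: a package records the
# instruction-set features it was built for, and the installer checks
# them against what the CPU offers. Feature names are made up.

ISA = {
    "armv6": frozenset({"arm-base"}),
    "armv7": frozenset({"arm-base", "thumb2", "vfpv3"}),
}

def installable(pkg_features, cpu_features):
    """A package can run only if the CPU has every feature it was built for."""
    return pkg_features <= cpu_features

# A Raspbian-style ARMv6 armhf build runs fine on ARMv7 hardware...
assert installable(ISA["armv6"], ISA["armv7"])
# ...but an ARMv7 build would "just explode" on an ARMv6 CPU:
assert not installable(ISA["armv7"], ISA["armv6"])
```

A subset check like this also copes with the MIPS situation described above, where newer ISAs dropped old instructions, which a simple "version is greater or equal" comparison could not express.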
So if you build the same package for two different ISAs, it's the same package as far as dpkg is concerned, in terms of what it satisfies in the dependency tree; it's just an optimization which ISA you built it for. As soon as you try and put the ISA into what makes these packages different, from dpkg's point of view this goes absolutely crazy, and you would not get that past the dpkg people. I think that's a reasonable stance to take, but yeah, I'm not sure anyone's actually going to do this work. We were hoping the Raspbian people would, because they had this problem pretty badly, but they didn't care enough; they just did a separate repo, because it's easier. Unfortunately that's always easier for any one case, but it doesn't really solve the general problem. So, a bigger subject: cross compilers in main. Everyone's been asking why we don't have cross compilers in main for some years now, because we've had them in Emdebian since 2004, and various people have made extra repositories. Well, because it's difficult is the main reason, or at least it's awkward. So we discussed various aspects of this. As was covered last year, the naming conventions are a bit odd, because <triplet>-gcc is the binary you run, but you install a package called gcc-<triplet>, and people who aren't familiar with this go "why?". Now would be a good time to change everything around, if we were going to try and normalize it and make it all orthogonal, but we decided it was too much like hard work: there's so much documentation already written, and given upstream conventions, it didn't seem worth the pain.
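The naming convention that confuses newcomers, described above, is easy to state mechanically; a small sketch (the triplet is just an example):

```python
# Sketch of the cross-toolchain naming convention: the command you run
# is "<triplet>-gcc", but the Debian package you install is "gcc-<triplet>".

def command_name(tool, triplet):
    # e.g. aarch64-linux-gnu-gcc (upstream GNU convention)
    return f"{triplet}-{tool}"

def package_name(tool, triplet):
    # e.g. gcc-aarch64-linux-gnu (Debian package namespace)
    return f"{tool}-{triplet}"

triplet = "aarch64-linux-gnu"
print(command_name("gcc", triplet))  # -> aarch64-linux-gnu-gcc
print(package_name("gcc", triplet))  # -> gcc-aarch64-linux-gnu
```

The mismatch exists because the command name follows the upstream GNU convention while the package name fits Debian's tool-first package namespace; the talk's conclusion was that Provides aliases are a cheaper fix than renaming everything.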
We could use some Provides to give extra names to packages to make it a bit less confusing; that's fairly painless. There's the problem of how many cross-toolchains you want: there are a lot of architectures, and the full matrix of everything-for-everything is an awful lot of compilers, most of which are nearly useless. So you only really want cross compilers on fast, or possibly popular, architectures, targeting other things. Pretty much you want amd64 to everything; you probably want ppc64el to everything now, as it's fastest, but most people haven't got one of those, so I guess there's not huge demand yet. And there's a question of where this data should live: you've got binutils and GCC and some other things like pkg-config, and you'd quite like your set of pieces to match up; if we wrote this all down in one place, that would make it a bit neater. Currently that's not done; each package decides for itself which flavours it builds. And I am quite interested to know which sets people want, so if there's anything unobvious... there's a hand up over there... please come and tell me that you want some particular obscure combination. The first thing I'd ask is: are you talking about cross-toolchains with the goal of building Debian packages, or just in general? So there's the set that we actually build in the archive, that are available to just install as binaries; obviously it should be very easy to build anything else that you want for some obscure purpose. What is it worth us building for everybody? Yeah, so I think there might be some of these... m68k? No, no, of course not. But I mean, you might want to target architectures which are not in Debian. Yes, that's absolutely true. It wasn't clear to me if you think that's out of scope or part of the scope. No, clearly that's useful, and indeed necessary, especially when an architecture is new; you can't use any of the multi-arch stuff, for example. So yeah, people need to do that. I don't know which architectures; now that we've just added a
couple more important ones, most of what we're interested in is now in Debian, but I don't know what's outside that we think is important. We should definitely do all the cases where there are sort of obvious pairs, or triplets I guess, with ARM. I want ppc64el to ppc, for instance, and that's because sometimes, often even, using a multi-arch cross-build is now more convenient than using traditional things like -m32. Exactly: multilib builds. Particularly... I think ppc64el to ppc, well, it kind of has -m32 support, but you have to be excruciatingly careful with it. I'm generally in favour of having a toolchain for one thing and as little multilibbing as we can get away with, but obviously on some architectures like x86 that's deeply ingrained and we can't throw it away. But yeah, we are definitely interested in also having cross compilers for architectures which are not in Debian right now; for example, for QEMU we need to build firmware, and we don't provide the firmware for alpha, for example, so qemu-system-alpha in Debian is basically useless without it. And yeah, that's true. Did anyone take any notes? Somebody take notes! And the same for sparc, sparc64; and even if you want to run ppc64el you need a 64-bit firmware, SLOF that is, and you need a powerpc64 cross compiler. I know it's solved, more or less, by building a cross compiler inside the package, but that's very ugly; it's the only solution we have now, and we would be very interested in having cross-toolchains instead. So you're probably the expert on weird packages that need to build bootloaders and firmware; what set... you know a few of them. Yes, it seems slightly perverse building a cross compiler just for that one thing, but as you say, the alternative is to build it in the package. So yeah, it's ugly, but at least it means a normal user can patch the package and rebuild it himself; up to now that was not possible, so if you don't have the hardware you cannot modify it, and I'm not sure it's very compliant with the DFSG in that
case. Yes, okay. So, is someone taking notes? Somebody please, come on, just volunteer... God, stop, work, because I'll forget. So yes, there's that general question. The next point is... yeah, I think things like Cortex-M and such would definitely be a valid target, of course, not a host; and that's basically covered by the bare-metal toolchain set, which, because it doesn't include libcs, is a whole lot simpler, and that already works and is already in the archive. So that's done. And then there's a related case: some people might also be interested in doing similar things for MIPS; bare-metal MIPS people might want an equivalent. Currently the ARM one is like 50 multilibs for every conceivable variety of ARM Cortex core, and I guess you could build another one for every conceivable flavour of MIPS widgetry. The other thing, actually, that's an architecture that we do need to target and can't be multi-arched, is mingw-w64, the Windows thing, because we build Windows tools; it's actually part of the bootstrap. So yes, there are lots of little interesting pieces to this. What else is there? At the moment, because not all the -dev packages are multi-arched, there's a lot of stuff where things still ship headers in /usr/include rather than an architecture-specific directory, and we don't really have good tools there; it would be useful to have a lintian check that told maintainers they were doing this, and could they please stop, and multi-arch things. Because we haven't been doing that much cross-building... if you're always building in a chroot, you don't notice this problem too much, because you only install the stuff you wanted; but as soon as people start using cross compilers willy-nilly, possibly on their real systems, which we don't really recommend, you'll get wrong headers until we have multi-arched everything. I'm not quite sure how far through that process we are; I know Ubuntu is further through it, and I think most of that's
come back into Debian. There's a little wrinkle with gcc-multilib, which provides an asm symlink to a particular native architecture, and if that's present when you cross-build you'll get wrong stuff. Doko: does this mean, that asm symlink thing, does that mean we can only build in a chroot, because you'll always have that thing present on a normal system? Or you don't need to install gcc-multilib? Okay, so when do you need to install gcc-multilib? Well, if I want to use multilibs... okay, you see, I don't know what I'm talking about; it turns out the expert is right down the front. Well, basically anything that uses -m32 or -m64 in a build process is going to need gcc-multilib as a build dependency. So, I mean, yes, anybody who isn't already doing all of their package builds in a clean chroot should have started doing so several years ago, but this is another reason why it's a good idea if you're also doing cross-builds. So if we can fix that, it'll stop people shooting themselves in the foot. I really wanted to say something: yes, at some point it would be nice if we could also directly get rid of multilib for architectures we already have in the archive. For example, if you want to build i386 on amd64 you can use a cross compiler; get rid of the biarch packages we have in the archive. They are very ugly, and they cause a lot of issues, because right now we don't support cross-architecture conflicts, so we end up with a lot of people installing libc6-amd64 for i386 on an amd64 system, because, for example, s3-64 depends on it, and people say "oh, I have a 64-bit system, so I will install the amd64 one instead of the i386 one", and that's a pain to handle, because both the native libc6 package and libc6-amd64 want to provide the ld.so linker. So yes, I don't disagree with you, um, but we really need to sort out cross-architecture multi-arch build-dependencies before we can do that, because we do have packages like this: grub-efi-amd64 builds with -m64 on i386, for people running i386 on
systems with 64-bit EFI firmware, which do exist. So this is... this is not very good, sorry; you are quite right to use this as an example of the problem. Go ahead... is that one of the things... so, I could just carry on with this. Marvellous. So yes, we already have a cross-binutils package in unstable, which has been there for a while; that's a relatively simple piece of work on its own. GCC is harder, and we have this problem that, because the cross compiler depends on libraries of the host architecture, you've either got to build those again, so you bootstrap, so you get a libgcc-cross package for the host architecture, or you've got to use the multi-arch libraries in the archive. So we either have to be able to do builds using multi-arch, which is not something we've ever done before, or we have to use the bootstrap toolchain method, for which packages have been in Ubuntu for some time now; at the moment those packages are broken, and we need to fix that, because we have to have some of those for at least the architectures outside the archive. In the meantime I generally like the multi-arch build method, because it's a lot simpler and already works, but it needs archive changes. Last night we got sbuild working, so sbuild now does multi-arch builds: if you have a dependency on something:arm64 it'll just stick it in and then do the build, which works nicely. Your cross-toolchain packages should only build for one host architecture... target architecture... rather than being the thing that builds for 17, which would just go wrong. I propose to upload, like, a cross-gcc-4.9-arm64 package, rather than a cross-gcc that builds 57 cross-compilers, because one of them is guaranteed to go wrong and the thing will never work. Maybe one day we could make them cleverer, but I think one architecture at a time is a wise plan. There is a git repo in the Alioth cross-toolchain project, which is where we're coordinating this stuff, which generates a load of source packages, one for each target
architecture, and that all works nicely. Those cross-toolchains are available in Dima Kogan's secretsource.net repository, which I have here somewhere, if you want to just try stuff out now. He's been building those for a while, and that works quite nicely; it's actually a one-liner to just build, as long as you can do downloads during the package build, which of course we don't allow in the archive, unfortunately. Associated with this you need some extra bits and pieces, like a cross pkg-config. So, part of Helmut's crazy scheme: at the moment, the way this is done is there's a cross-build-essential package which depends on the foreign libc and a <triplet>-pkg-config, to bring in the pieces you need, and dpkg-cross, to bring in the autoconf settings, and that works. But what you could do is split pkg-config into two halves: you put all of what's currently in there into a pkg-config-bin, and then have a little, nearly empty pkg-config, which is Multi-Arch: same and just contains the triplet wrapper. That way you don't have to specially install <triplet>-pkg-config, and it doesn't have to be something that's part of cross-build-essential; a multi-arch build will just cause the dependencies to pan out, and you'll get a <triplet>-pkg-config command whenever you build for an architecture, which is really rather neat. Unfortunately the pkg-config maintainer is not convinced; it's an argument we have to have. Similarly, we can do the same thing for GCC. So this was the really crazy part, if I scroll down a lot... come back to some of this... here we are. Now, you have to read this, because it does your head in; it took us, who all think about this stuff on a regular basis, two hours to convince ourselves that this isn't totally mad and does in fact work. We have a problem at the moment that packages depend on gcc-<version>: something depends on gcc-4.7 because it doesn't build with 4.8 or 4.9. But what they actually meant was gcc-for-host 4.7, because it's actually the compiler for the thing you're
trying to build for. So when you're cross compiling, that's a different package to when you're natively compiling, and there's this question of how you translate build dependencies when they change when you're crossing. We had six different schemes listed on a big web page, and we went through them all and invented a seventh one, which is, in fact, to use multiarch. If you make little fake multiarch packages and arrange things properly, you can now depend on a gcc-for-build or a gcc-for-host package, and they can be versioned, and that will magically install the right thing. That's correct. The point is, if you use this, you now have to use <triplet>-gcc everywhere; you can't just use gcc and expect to get the right one, because the question is, what does that mean, which gcc? So this is quite radical, but it's also quite clever, and we believe it works. These packages are, as of yesterday, or no, today, in the secretsource repo; we're just putting them in at the moment, and I'm not sure we've done it all yet. So it would be nice if people experimented with that, and if we believe it works, we should perhaps start using it. The transition plan is probably okay; we can leave the existing gcc packages more or less as they are. So anyway, that is quite a challenge; I'm not going to try to explain it in detail because I'd only get it wrong. Use multiarch: it just means the right thing gets installed. Steve wants to say something: "So I was reading this in your mail, and it was not altogether clear from my reading: by gcc-4-host you actually mean, for instance, gcc-4.7-for-host?" Exactly. "Okay, so just that comment, it's a little bit unclear. Also, you're saying it's in the secret repository and that people should try it out; I think we might want a more unambiguous pointer to that." Yes, I should... there possibly isn't one in this mail, because we forgot; I'm sorry. There: toolchains.secretsource.net. Oh, it actually is secretsource, yes. And I'd say there isn't a full set of packages there yet, because we were just working it out, but
there will be very soon, maybe later today if we are organised. They are also in my people.debian.org repo as well, but that one has probably got more stuff in at the moment, and Dima isn't going on holiday for the next two weeks, so if you do actually want to try it, his might still be working. What else is important? The other interesting idea we had: we thought about the autoconf problem. At the moment, if you cross-build with autoconf, something somewhere needs to tell you how big your ints are, and your longs, and a whole lot of other things like that which you can't determine with a live configure test, because it's the wrong architecture. Autoconf has good support for this: there's a whole load of magic cache variables, and that information is currently stored in dpkg-cross, and if you want to use it you have to set CONFIG_SITE=/etc/dpkg-cross/cross-config.<architecture> so that it gets pulled in. But actually, if you look in a modern configure file, it already lists a location under /usr/lib or /usr/share, and we should probably be using that. Collecting the config for all the packages in the world together in one bucket that we maintain badly is the wrong way to do it; a package should be supplying its own config settings, and we should have a mechanism to install them in a .d directory and suck them all in. That's kind of the right way to do this. So we thought that through and have come up with a scheme which ought to work, although we tried it for 15 minutes and it didn't seem to, so that could be improved. And then dpkg-cross can kind of go away, because that's its last important job. Whether it's going to get done right now, I don't know; it's not actually very hard: it's a shell script that takes some local overrides and copies in all the things from the directory, plus our own set, and we should be able to migrate fairly easily, as packages adding their own config would override the existing set of stuff. So that mechanism works well, and I think it better fits the way autoconf expects to find things: you
don't have to set a magic variable, which, if you forget it, won't work. It is possible to have co-installable toolchains, so you can have an i386 toolchain on an amd64 machine, and that fits orthogonally with all these other things, but it involves swapping all the symlinks around between <triplet>-gcc and gcc, which one's the symlink, which is a bit of a pain. As far as we can tell it is possible, and Helmut's quite keen to do it, and it is useful for things like trying to build anything Haskell-y on an i386 machine: because it uses so many billions of pointers, you use half as much memory if you do it with an i386 compiler. So that is still attractive to some people, and it would be kind of neat if we made it work, but I'm not sure many people apart from Helmut care; he's prepared to do the work, and Doko didn't complain too much about having everything changed around. What else? Cool, now, multi-arch builds. As I said, one way of getting cross compilers into the archive is to just build GCC against the existing libc and libgcc1 and libstdc++, which makes it a very simple build, because you haven't got to do the whole bootstrap dance; but it only works for architectures in the archive, and it only works if the archive can cope with not being self-consistent within an architecture, which is an assumption we've had since forever. I fixed the sbuild part; I assume some things in wanna-build, dak and britney will get confused if you have dependencies outside the architecture. Now, I don't know if anyone here is sufficiently familiar to know what will actually break. Colin knows... does somebody know britney better? Nobody else is volunteering. britney will, yeah, britney will just break: this can never migrate, and you're not going to get anything like this in unless it's manually overridden by the release team. So there is a faux packages scheme in britney, faux being French for false, so the release team can manually say "pretend that this package stanza exists". Is that how cross-build-essential works in Ubuntu? Yes, it's
hammered in with the big stick. You want to do it that way rather than forcing it in other ways, because otherwise britney may decide to trade off your installability against somebody else's, which is not what you want. So it can be done, but it will be a hassle. There aren't many packages that should need this; I guess some of these bootloader packages conceivably will... they tend to be standalone... they have cross-arch build-deps, but they tend to be very standalone at runtime. Okay, so it is only cross-compilers, basically? Cross-compilers, wine, a few other things like that. Okay. So, working out how much... at the moment the question is: is it more work to make that work, or to get the bootstrap toolchains actually working in Debian? I don't know; whichever one works first we can stick in, is the current thinking on this procedure. I am more interested in getting the multi-arch stuff working, but I've done a bit of both. We do have to get a move on, because we haven't got very long if we want to get any cross-compilers into jessie, and I'm away for the next two weeks. So yes, there's this question of... you can do this either way, and we need to make at least one of them work sharpish. If anyone wants to help with that: me and Dima did some useful work yesterday, and I don't know if there's time for more sitting down over the next couple of days. What else have we got? Actually building other stuff using your cross-compilers... yeah, we did that part. Yeah. So, things you shouldn't do when cross-building: depend on foreign binaries. Things like help2man, where you just run the program and then stick its output in a man page. It's very convenient, and it's very annoying for cross-builders, because you could just depend on the native version of help2man, but then it might be a different version and you'll get the wrong stuff. Does that matter? Maybe not, maybe... I think we don't have a mechanism for... that's right, we can't build-depend on the same version of the package in the other
architecture; it would be nice if we had a way of expressing that, because that's what you actually mean. It's also madness, self-depending... well, effectively lots of things self-depend already; they just don't say so, and it's not until you try cross-building that you discover that the thing self-depends. So if you wrote it down, that might be a good thing. Anyway, so that's the thing: help2man is very convenient, but it does annoy all the cross-builders. Now, in many cases just missing out the man page when cross-building is just fine; it'll work for all the purposes you care about. But that's one of the examples of why, when people say "well, we could just cross-build this architecture", you could, but you'd get different stuff, or you'd have all the docs missing, or whatever. So cross-built packages do not always come out the same, unless we do quite a lot of work to solve problems like this. One of the things we did do was fix libtool. Hooray! So libtool has been blocking multi-arch builds in Debian for two years, with the bug going "what the hell do we do about libtool?". The problem is that 99% of things that use libtool just use the shell script, and that's all architecture-independent, but a tiny fraction use the actual libtool binary, which is arch-dependent, and the question is how that fits with multi-arch: when somebody depends on libtool, which half of libtool did they want? We never expressed that. Doko actually did a load of testing a while ago, and the maintainer wasn't quite sure what to do, and kindly said "please just upload something, let's see what happens if it's totally broken in between, and then fix it". Yes, we did upload; Doko uploaded two broken versions before... well, one of them was my fault. Okay, cool. Anyway, so we've only done half the split so far: I think we've split it up, but later on we should separate the two halves and see just how many things actually break because they did want the binary half of libtool. Not very many, I think, is the answer. And we should probably actually change that dependency
in the packages, so they're saying what it is they're depending on. But the point about this is that an awful lot more things will now... if you do apt-get build-dep -a <architecture> <package>, stuff will install. Well, not yet, because libtool still depends on libtool-bin. Okay, so does that mean it doesn't work if you need it on the build architecture and the host architecture? Still... so the plan is to drop the dependency on libtool-bin after we identify all the... okay, good. Yes, I think having it explained in words is a lot clearer than trying to understand what's written down. Okay, so that's half done; we've made some progress. We noticed that Guile didn't cross-build if the target was arm64, which was annoying. This is why rebootstrap is useful: even though everything's broken, rebootstrap will still test stuff, and we can find out what's bust in time to fix it. That was actually quite easy to fix. Guile has to guess the word size and endianness of the architecture so it can cross-build all its own stuff; Guile's a Lisp-like, Scheme-y language thing, and yeah, it can cross easily, but it just needs to know how big the words are and what the endianness is on the target, and it guesses that from the triplet with some ugly string matching, which, oddly enough, was completely wrong for ARM. So that was easy to fix. Blah blah blah, stuff about running more tests: we already have the tests for whether packages are installable, but this was about build-dependency installability, adding it to Ralf's list; he'd already put our new architectures on his page within hours of the announcement, which was very helpful. It will be really useful to know how much of the archive does in fact cross-build. I did some tests a while back, and Colin's been doing it slightly more often in Ubuntu; they run a test every few months, I think; it depends how much work it has to do. So if the package tracker told you whether your package crossed, that would be useful.
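The kind of triplet string-matching Guile does, described above, can be sketched like this; this is an invented illustration, not Guile's actual code, but it shows why guessing word size and endianness from the CPU field of a GNU triplet is fragile:

```python
# A sketch of guessing target properties from a GNU triplet's CPU field,
# and of the kind of special-casing that such string matching needs.

def guess(triplet):
    cpu = triplet.split("-", 1)[0]
    bits = 64 if cpu in {"x86_64", "aarch64", "mips64el",
                         "ppc64le", "sparc64", "riscv64"} else 32
    # Naive rule: an "el"/"le" suffix, x86, or AArch64 means little-endian.
    # Plain "arm" carries no suffix at all and must be special-cased;
    # missing cases like this is exactly how such guessing goes wrong.
    little = (cpu.endswith(("el", "le"))
              or cpu in {"i386", "i686", "x86_64", "aarch64", "arm"})
    return bits, little

print(guess("aarch64-linux-gnu"))    # -> (64, True)
print(guess("mips-linux-gnu"))       # -> (32, False)  big-endian MIPS
print(guess("arm-linux-gnueabihf"))  # -> (32, True)   needs the special case
```

A lookup table per known CPU field, rather than suffix matching, would be the more robust fix, at the cost of needing an update for every new architecture.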
And if there was a field you could put in saying "this is never going to cross-build, it's a dumb idea, please stop badgering me", that would also be useful. I have a spare box, so I'm going to start up my cross-building infrastructure again. At the moment you mostly can't cross-install the dependencies for an awful lot of stuff, and we need to fix that. We're working on it.

What else? Bootstrappability, whether you are in fact blocking the bootstrap or not (I have five minutes left), should also be in the PTS. Similarly, if your packages have not been multi-arched, it would be useful to tell people that. For a long time now the multi-arch docs have told people not to worry too much about their -dev packages. That is wrong and out of date, but we haven't actually changed the docs, so, oddly enough, most maintainers are doing what they're told. We should probably fix that; in fact we should have fixed it ages ago. Someone please edit the wiki page.

Oh, build profiles: we changed it all again, I'm sorry. Once we got Guillem in a room with the man who had done all the patches, we were able to have a long, confusing argument about, basically, whether if you have specified two build profiles on a dependency, like stage1 and cross, it means both of those or either of those. What went into dpkg was not what Johannes originally envisaged, and we were finally able to sort this out and decide something. Then they changed it after I left: I left at about 6pm on the last day, and in the three hours after that it all got changed again. So the `profile.` namespacing has gone, which is quite radical, because after thinking about it for a long time we couldn't think of a situation where that namespacing was different from just adding another word to the profile name; it amounts to the same thing. So it's a hell of a lot simpler and shorter and nicer. This is what we've ended up with. It's taken a long time, but I think it's actually quite a sensible design in the end, which is good. So the patches for that
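A sketch of the restriction syntax this converged on, build profile terms in angle brackets on build dependencies; the package names are made up, and my reading of the semantics (all terms within one group must match, multiple groups are alternatives) is an assumption from the discussion above:

```
Source: example
Build-Depends: gcc,
 bar-dev <!stage1>,
 baz-doc <!nodoc>,
 quux-dev <!stage1 !cross>
```

Here `bar-dev` is skipped whenever the stage1 profile is active, `baz-doc` whenever nodoc is active, and `quux-dev` is only installed when neither stage1 nor cross is active. The active profiles are then selected at build time, e.g. via the `DEB_BUILD_PROFILES` environment variable.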
are done and currently being tested, I guess; I'm still not quite sure. So that should be in dpkg. Is dpkg going to be released with jessie? Yes, that's right: we can't get it into stable, but we can get it into what's going to be released, so we can use it from the release onwards. So the `profile.` prefix has gone; there's a big explanation of why, and it will seem sensible. Blah blah blah.

If you do a stage1 build, there's the question of how different it can be from the normal build: is it just dropping dependencies, or are you allowed to do something completely different? The dpkg way of thinking is that if something depends on your package, it still needs to provide the same interfaces; so if anything you drop is functional, the result really ought to have a different name. That's quite strict, but it is probably the only thing that allows proper automation, so that's what we will be telling people to do. You can leave the docs out, that's all fine, because documentation is hardly ever a functional interface.

What else? We'll make the nodoc thing official, which has been used in quite a lot of packages for years but was never written down in policy anywhere. That can be used either as a DEB_BUILD_OPTIONS option or as a profile.

Just about out of time. rebootstrap I've already told you about; it's this handy tool. And botch is now in the NEW queue as of about two days ago. That's the bootstrap ordering toolset, which is basically about 40 scripts that do interesting and useful things: whether things are installable, whether they're cross-buildable, drawing graphs of just how tied together everything is, and ranking things in order of how many other things they block. josch wrote 40 man pages in a day, which I thought was pretty impressive.

Anything else really important I should tell you in the last two minutes? Adding a target architecture to dpkg: we've been able to specify the build and host architectures since forever, but you could never
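A minimal sketch of how a debian/rules-style script typically honors the nodoc flag in DEB_BUILD_OPTIONS; the variable value and function name here are just for illustration:

```shell
#!/bin/sh
# DEB_BUILD_OPTIONS is a space-separated list of flags; "nodoc" asks the
# build to skip documentation. Example value for demonstration:
DEB_BUILD_OPTIONS="parallel=4 nodoc"

build_docs() {
    # Pad with spaces so we match the whole word "nodoc", not a substring.
    case " $DEB_BUILD_OPTIONS " in
        *" nodoc "*) echo "skipping documentation" ;;
        *)           echo "building documentation" ;;
    esac
}

build_docs   # prints "skipping documentation"
```

The same skip logic is what a `<!nodoc>` build profile restriction expresses declaratively on the dependency side.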
specify the target architecture. That was easy to add. Then there's the multi-arch interpreter problem. We had a massive discussion about this last year at DebConf and came up with a solution, but it turns out that it isn't going to work because of the way dpkg is. So I'll say something about this in the remaining 30 seconds. You could pretend that Architecture: all packages were in fact Architecture: any packages when they were involved in a dependency tree; it's basically things like Perl modules. You have Architecture: any and Architecture: all, and the way multi-arch works it doesn't care what's below that, but in fact the packages still have to match up. So if you pretended that those arch:all things were virtually Architecture: any, it would all still work, but dpkg would have to keep track, and Guillem didn't like that. So that's not what we're going to do; we don't have a solution yet, and this is still a problem. There are about 300 packages affected: the multiarch spec changes page lists all of them, and it's an awful lot of Perl and quite a lot of Mono. Never mind.

Yes, I should shut up because I've run out of time. If anybody cares about any of that stuff, please come and talk to me. Nearly fitted it into 45 minutes, sorry.
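A made-up control-style sketch of the shape of the problem: an arch-independent module sitting above arch-dependent packages in the dependency tree.

```
# Hypothetical Perl module: the package itself is Architecture: all...
Package: libfoo-perl
Architecture: all
Depends: perl

# ...but the perl underneath is Architecture: any. When libfoo-perl is
# pulled in on behalf of a foreign architecture, the interpreter below it
# still has to match up; treating the arch:all package as virtually
# arch:any would let the resolver track which architecture's perl it
# binds to, at the cost of dpkg having to keep track of that binding.
Package: perl
Architecture: any
```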