Hello. More talk about cross-compilers, our favourite subject. There's quite a lot in here. Now, the basic point is that we don't have cross-compilers in the Debian archive itself, and we should fix that. But I think it's actually useful, unless you're all serious experts already, to go over a little bit about the whole cross-building environment which we've developed over the last couple of years, just so you know what the hell we're talking about and how this is supposed to work. So, as ever, the first thing we have to do is explain what build, host, and target mean, because when talking about cross-compilers it's very important that when you say host, you mean the thing the compiler is going to run on, not the machine you're building on now; that's build. And target is the architecture you're generating code for. OK. Yes. Exactly. So a normal compiler has all three of these the same; life is very simple. A standard cross-compiler you build on one machine and run on that same machine, but it targets something else. And you can, of course, cross-build a normal compiler, in which case the build machine is different from the host and target. We actually do that when bootstrapping a new architecture: in order to get a compiler for the new architecture, the first thing you have to do is cross-build it. Bigger fonts, OK? Control-shift-plus doesn't work. Great. Bigger font. Is that good? Is that about right? Thank you. So one of the conventions which has existed for a long time, basically because of autoconf, is that if you want to run something which acts on the target, something that behaves differently according to what code you're about to generate, you prefix your command with the triplet. So arm-linux-gnueabi-gcc generates code for arm-linux-gnueabi, as opposed to just gcc, which is assumed to mean the build architecture's triplet-gcc. And in fact that now works everywhere in Debian, and has done for quite a long time.
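The prefix convention just described can be sketched in a few lines of shell. The triplet values here are illustrative; in a real build the host triplet would come from dpkg-architecture or a configure --host argument:

```shell
# Pick the compiler by triplet prefix. Cross builds use
# <host-triplet>-gcc; native builds can use the prefixed name too
# (the build architecture's own x86_64-linux-gnu-gcc exists as well),
# so the convention is uniform either way.
BUILD_TRIPLET=x86_64-linux-gnu                     # illustrative
HOST_TRIPLET=${HOST_TRIPLET:-arm-linux-gnueabihf}  # illustrative

CC="${HOST_TRIPLET}-gcc"   # e.g. arm-linux-gnueabihf-gcc
echo "CC=$CC"
```

When HOST_TRIPLET equals BUILD_TRIPLET this still resolves to a working compiler name, which is exactly the orthogonality the talk is describing.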
So you can always run <triplet>-gcc; if the triplet is the build architecture's native one, it still works, so this is nice and orthogonal now. All you have to do to make everything work is to make sure that, when you configure things or run commands, the triplet-prefixed commands are issued at the right time, so the right program gets run. This is the multiarch syntax for specifying the architecture something is installed on. You mentioned g-ir-scanner there. Has somebody actually done that? No, it's meant as an example of something a little bit further up the stack, which is the next thing you hit. Damn, you got me so hopeful. No. So this is another thing that has architecture-specific behaviour, and right now we can't cross-build anything with g-ir-scanner in it properly, because we need a cross g-ir-scanner and nobody has written one. I have looked at it, but not written code for it. It looks like the first step is probably to convert all of the stuff in it that detects type sizes into the really rather alarming autoconf macros for doing this. If you haven't seen it: autoconf has a thing for detecting the word size, endianness, and sizes of arbitrary C types for the machine you're building for, via an incredibly scary piece of M4 that goes off and bisects by repeated compile failures. It's a marvellous piece of code, but not for the faint-hearted. So, the GIR thing: that's GObject introspection, for anyone who hasn't come across it, and the whole of the GNOME stack now uses a lot of it to define its interfaces as well as its documentation. At the moment you can build quite a lot of stuff by just skipping the introspection, and it still works, but that really isn't going to last much longer; it's probably broken already. So for cross-building further up the stack than a base system, once you want GTK, you need that, and nobody's done it yet.
Sorry, just as a point of information, the package names are the other way around for historical reasons, I guess, so gcc-<triplet>. Yeah, indeed. It's annoying, isn't it? And I think pkg-config has an extra hyphen in it. The slide is supposed to be concepts rather than details, but I shall go and edit it. Yeah, sorry, it's in the gobby notes; just fix it. So, yeah, the tools understand the multiarch syntax. Could we use that? So we now have in dpkg, already in stable, the build-depends qualifiers :any, :native, and :<specific architecture>, which let you qualify dependencies. Basically, if you depend on something without specifying anything in particular, you get the architecture you're building for, and the multiarch stuff will magically go: ah, it's a library, which is Multi-Arch: same; therefore, when you depend on it while building for a particular architecture, it will go off and get the library for that architecture. And that all just works. :any and :native let you specify the exceptions, where you're depending on some library while building for ARM, but actually you wanted the native-architecture version of it, because it's something to do with the build and not the code you're going to run on the other machine. And we have the ability to depend on specific packages of another architecture, but in practice you can't use that because it doesn't work on the buildds yet; previous discussion. So we need to experiment with that and find out whether anything really bad happens if you enable it. It should work. Yes, that's right. But the point is that we got it into stable in time so that we can use it, which is good. So you can now just do apt-get build-dep -a <target architecture> <package>, and it will go off and get all the things you need to build on this machine and all the libraries you need for the other machine.
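As a sketch, a hypothetical package's debian/control could qualify its build dependencies like this (libfoo-dev is a made-up name; python3 and gperf are just illustrative tools):

```
Source: example
Build-Depends: debhelper (>= 9),
 libfoo-dev,
 python3:any,
 gperf:native
```

Here libfoo-dev, being an ordinary library package, resolves to the host architecture when cross-building; python3:any can be satisfied by any architecture's package that apt can install; gperf:native explicitly asks for the build-architecture version, since it is a tool that must run during the build.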
And this is actually very nice indeed; it just works. The thing you can't do yet is use build profiles from apt without some patches, mostly because we failed to agree the build-profile syntax in time for the last release. We had a nice thing with angle brackets that was very simple, but it didn't pass muster on debian-devel. We have a new proposal, which was mentioned yesterday, and we expect to actually implement it any second now. But this is very cool. It means you can do a build specifying a build profile, which is basically necessary for bootstrapping: to say, just the first time you build libc, you haven't got libselinux yet, or libaudit, both of which are now needed, and you have to have a way of saying "please build without that". Well, if you want to automate this stuff rather than do it all by hand. So I noticed that in your example you're showing build-profiles=stage1,cross, and I had argued throughout the discussion that we don't actually have a use case for more than one build profile, and that the only build profile we actually cared about was breaking circular dependencies, which basically only requires a stage1. So I wondered if you could... Me and doko both think it should be split by purpose. So I was going to ask for an example of why you need a cross build profile here and what that is legitimately used for. We'll get to one later. Well, maybe not a cross build profile, but something like a test profile. We did want to differentiate between build dependencies which are only needed for testing things and build dependencies which are only needed for cross-compiling, and things like that. So you're right, you can just do it with one profile. But once you start doing a bit of this, it becomes clear that you go: well, this bootstrap means I need to change this dependency, and if I were crossing, I would need to change that dependency. It's not actually the same thing.
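For concreteness, here is what the profile machinery looks like with the angle-bracket restriction syntax that was eventually adopted (at the time of the talk this was still the proposal being discussed, so treat the exact spelling as indicative):

```
# debian/control: dependencies dropped in the stage1 bootstrap build
Build-Depends: ..., libselinux1-dev <!stage1>, libaudit-dev <!stage1>

# driving such a bootstrap build:
$ DEB_BUILD_PROFILES="stage1 cross" dpkg-buildpackage -a armhf ...
```

The point being made above is that stage1 (break the dependency cycle) and cross (we are cross-compiling) are logically independent conditions, even though they often occur together during a bootstrap.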
They often go together, and you could just decree that, but I don't think we lose anything by being able to separate them. We lose some of the simplicity, and it introduces confusion; it makes it harder for maintainers to understand what's going on. And I think my brain rat-holed five levels deep from that comment from doko. So I think we should probably postpone that discussion until the end. Yes. The spec allows us to do more than one, but we don't have to. What else have we got? So, cross-compilers. What we currently have: for a long time, emdebian.org has been building a pile of cross-compilers for most, if not all, Debian architectures, except when some of them are broken, which is quite often. It got to the state, back a couple of years ago, that they were all broken, and Alvaro turned up one day saying "I need to build something, and your cross-compilers are knackered", and he fixed it. I think it was MIPS, which has three multilibs, and that was all wrong. And then he fixed all the others too. The man's a genius; I was very impressed. So that works to an extent, but it suffers a bit from tending to be out of sync with Debian. Right now, if you try it today (last time I looked, before I went on holiday), it was out of date, and so it wouldn't install. The common cross-compilers are in Ubuntu and have been for a while: armhf, armel, and now arm64, sorry, and powerpc. Okay. All from amd64 and from i386? Yes. Later on we get to discuss the matrix of cross-compilers we can actually be bothered to maintain; there's an enormous number of possible combinations, and most of them are a bit useless. The other part of this that makes it work is a magic package called crossbuild-essential, which does the same job as build-essential: to say, if you're cross-building, you always need these core things. A C cross-compiler, a C++ cross-compiler, cross pkg-config, and a libc-dev for the host architecture.
How does host, target... let's say target architecture. Is there one crossbuild-essential package for every target, then? Yes. So you just install crossbuild-essential-armhf, and it installs the right stuff. And better than that, sbuild already knows that if you ask it to cross-build something, it should install crossbuild-essential-<arch>, and so you just get all the right stuff pulled in and it just works; it's very nice. An sbuild profile? Oh, no? It doesn't need it. He was asking whether we have an sbuild profile for that. Right, we don't need it, because sbuild has a step where it installs all the build-essential stuff anyway, so it just installs crossbuild-essential-whatever at that point. Yes, that's right. Well, it's told what it's targeting, and it knows that build not equal to host means it needs crossbuild-essential. You do sbuild --host <arch> and... Right, sorry, I just meant to say that there's very good integration with sbuild here, such that sbuild knows about crossbuild-essential and wraps everything for you. So it already knew about build-essential, right? We just told it about crossbuild-essential as well. Yep. Now, of course, the LLVM people want some different packages in crossbuild-essential, so we might have to have, I think, maybe an LLVM crossbuild-essential. That would work. And that set is actually defined in the sbuildrc file, so it's not hard-coded. In fact, for the moment, there isn't a crossbuild-essential package in Debian, because it hasn't got any cross-compilers to depend on, so it's a bit useless. But you can still specify that you want libc:<arch> and some things, if you have got them. Those packages do exist in my bootstrap repo if you want to have a play. So this is actually all very nice and basically works beautifully, except in a few cases where it doesn't, which we'll get to later. These cross-compilers have been built in different ways over time.
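A session sketch of the workflow described above (package and option names as in Ubuntu; hello_1.0-1.dsc is a stand-in for whatever you are building):

```
$ sudo apt-get install crossbuild-essential-armhf   # C/C++ cross compilers,
                                                    # cross pkg-config, libc6-dev:armhf
$ sbuild --host=armhf hello_1.0-1.dsc               # sbuild installs
                                                    # crossbuild-essential-armhf itself
```

As noted, the set of core cross packages sbuild installs is configurable in its sbuildrc configuration rather than hard-coded, which is what makes an alternative LLVM-based crossbuild-essential feasible.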
So Emdebian has used a tool called buildcross, which has been in experimental for a long time. Actually, I tried using it recently; it works very well. It basically just goes through all the steps you need to build a cross-compiler using dpkg-cross: downloading the libc from the other architecture, blah, blah, blah, and it can do arbitrary combinations. It'll do a whole set: you say "build from these things to all these things, get on with it", and if you're really lucky, it'll build all of them and not fail halfway through. We did try building an ARM-to-ia64 cross-compiler a couple of weeks ago, because somebody wanted one, and it almost worked. Meanwhile, in Ubuntu, a couple of years ago, Marcin did some fine work to produce a cross-toolchain. So there's a per-architecture cross-toolchain-base package which uses linux-source, binutils-source, gcc-source, and eglibc-source to go through the whole process: build the Linux libc headers, build GCC stage one, build eglibc stage one, build GCC stage two, build eglibc stage two, build GCC stage three, i.e. the one you actually wanted. Oh, and binutils at the beginning of all that. And the gcc-defaults stuff; that's in the same package now, I think. Yeah, so that's nifty. The nice thing about that is that it works on existing buildd infrastructure, because it only build-depends on binary -source packages; all of those source packages produce a -source binary package. Ideally, you'd just use the gcc source and build it in a slightly different way with a profile, for example, but you can't do that at the moment, so that's why it was done like this. Um. Dun, dun, dun, dun. Well, so yeah, you get all those packages out of it. It uses the dpkg-cross mechanism to generate a C library and a libgcc for the host architecture (target architecture, I don't know), but installed as a binary package for the build architecture. So you're not actually using the one...
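The interleaved bootstrap sequence just narrated can be written down as a list; this is only an enumeration of the stages described above (stage names are illustrative, and each step is really driven from the corresponding -source package):

```shell
# The order a cross-toolchain-base style build walks through:
stages="binutils linux-headers gcc-stage1 eglibc-stage1 gcc-stage2 eglibc-stage2 gcc-final"
for stage in $stages; do
    echo "building: $stage"
done
```

The repeated gcc/eglibc alternation exists because a full GCC needs a libc to link against, while building the libc needs a (limited) compiler, so each side is built twice in cut-down form before the final compiler.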
You're not using the multiarch location for that library; it's got its own copy, which is okay except when it goes wrong. You end up with two copies of libc on your system, and just occasionally things link against the wrong one. That shouldn't happen, but I've seen a few cases of it. Now, you can build the cross toolchain a slightly different way, without going through the whole bootstrap process, by just building GCC against the libc and libgcc for the host architecture. That now works nicely; it was a GSoC project last year by Thibaut Girka, and doko has integrated all that. So there's this magic variable, with_deps_on_target_arch_pkgs=yes. If you set that, you just build GCC against the stuff you've already got. But that's a multiarch build: the build of the compiler now depends on the C library and the libgcc from the other architecture, so we can't do that in the archive until we've enabled such dependencies. And one of the questions for today is: do we wait and do it that way, or do we just do what Ubuntu is doing now, because we could upload that tomorrow? The main reason we're worried about it is the transition: whether we make life difficult for ourselves if we're going to move from one to the other. doko doesn't like this plan much, but I do; we've been having this argument for some time. Sure, whatever. One of the reasons I'd like something sooner rather than later is that I've been running, well, it kind of falls over a lot, but I've been running a cross-builder for Ubuntu for some time, and about a third of the packages just build. I would very much love to see what state all of Debian is in, and I can't really do that until we've got cross-compilers in the archive. I'd like that in place the sooner the better, so that we can start distributing the work of actually fixing packages rather better than we currently do. Yeah, it's true.
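A session sketch of that multiarch-dependent build path. The variable name is as given in the talk; check the gcc packaging for the exact spelling, and the apt lines assume multiarch is already enabled for armhf:

```
$ sudo apt-get install libc6-dev:armhf libgcc1:armhf   # host-arch build deps
$ export with_deps_on_target_arch_pkgs=yes
$ # ...then build the regular gcc source package targeting armhf
```

This is exactly why it cannot go to the archive yet: the compiler's build-dependencies now span two architectures, which the buildds do not support at this point.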
So the plan was in fact always to upload. We agreed at the sprint in February two years ago that we'd upload those packages forthwith, but for whatever reason, it never happened. It made a lot more sense then than it does now, but you're right, we should probably just do it. I just worry: if we do that, and then we try to transition to cross dependencies, do we have a problem with a lot of Replaces and Conflicts and things, or does it all just come out in the wash? I'm not sure. Are those cross build dependencies or cross runtime dependencies we're talking about? Your binary packages would be coming from different source packages, potentially, so you'd have the usual dance when you take over a binary package from one source package to another. Okay. Yeah, that's trivial. Right, okay, good. So the current way to build a cross-compiler can be simplified; I think it's exaggerated to speak of a seven-step bootstrap. If somebody wants to work on that, please contact me. I would like to see some simplification there, but I don't want to spend the time myself at the moment. I mean, it's trivial to take out the binutils parts; I'll get to what I think might be a sensible plan in a minute. Right, so one design decision for the cross-compilers was to have them stand alone. As it's currently done in Ubuntu, we are able to upload the cross-compilers to the archive and use them for cross-building, and I do not want to rely on foreign architectures to do that. Except that if you're multiarch cross-building, you do anyway, because you have to have a matching libgcc everywhere, so you've already version-synced to your... But in this case, I already have to have that architecture available, and that's not the case when I'm bootstrapping. No, so I agree; for bootstrapping, you'd have to do something different.
Also, there's always the risk that if things ever go slightly out of sync, you end up in the case where you have to re-bootstrap. That's the other consideration: bootstrapping your cross-compiler may not be a one-time operation if you get too far out of sync. I mean, obviously, we have the packaging to do both of these things, right? At the moment it's completely trivial, setting a variable, which way you build, and I think it's important that we keep that functionality. It's just a question of what the default build is, whether you do all of it every time, or just... okay, we'll carry this conversation on; see how far we get through this, and we'll come back to arguing about that. So yeah, I think binutils is nice and standalone and simple. You just have a binutils-cross source package that builds for all the architectures we're supporting. Does that seem sensible? We should probably leave doko with a microphone; I think he's going to have to answer all these questions. If we split the binutils part out of the toolchain-base package, we just have a thing that cycles through each architecture we're supporting and builds a binutils-<foo>. Well, with the current packaging, we only build the glibc parts and the kernel headers out of the base toolchain package. And, okay, binutils is already separate. Yes, and GCC is also separate. So what I was doing was to limit the bootstrap part to the glibc packages: I build this throwaway compiler just to build glibc, and then I don't have to care about interdependencies for GCC any more. Okay, yes, and there's also pkg-config. Cross pkg-config actually consists of nothing but a link: the actual pkg-config package contains pkg-config-crosswrapper, which basically sets a path and then runs pkg-config.
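A sketch of the crosswrapper idea just described, modelled on the behaviour explained above (paths and the fallback triplet are illustrative): derive the triplet from the name the script was invoked by, point pkg-config at that triplet's multiarch directories, and hand over to the native tool.

```shell
#!/bin/sh
# Invoked via a symlink named e.g. arm-linux-gnueabihf-pkg-config.
me=${0##*/}
triplet=${me%-pkg-config}
[ "$triplet" = "$me" ] && triplet=arm-linux-gnueabihf  # fallback for this sketch
PKG_CONFIG_LIBDIR="/usr/lib/$triplet/pkgconfig:/usr/share/pkgconfig"
export PKG_CONFIG_LIBDIR
echo "PKG_CONFIG_LIBDIR=$PKG_CONFIG_LIBDIR"
# exec pkg-config "$@"   # the real wrapper ends by exec'ing pkg-config
```

This is why each cross pkg-config package can be so trivial: it is one symlink to a wrapper, and all the real work is done by the native pkg-config reading the host architecture's multiarch .pc files.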
So each cross pkg-config thing is a rather trivial package, and they could all be made out of the pkg-config package, one for each architecture, or you can make one for each of them out of the toolchain-base thing, which is what happens at the moment. It doesn't really matter much; I'm not sure anyone cares. I spent a long time deciding that I couldn't decide which way was better. So if you're arguing that you would have the bit made out of the pkg-config package for each architecture, you mean you would still have to build it as a separate binary package then? Yeah, I think so, because... well, yeah, I have to. You could have a... You're not cross-installing pkg-config as part of your build system; you're installing the native version of pkg-config, not the cross version. So saying that you would ship the arm-linux-gnueabi pkg-config in the armhf version of pkg-config doesn't help you, because you've installed the amd64 version of pkg-config, not the cross one. Yes. I thought you were suggesting that the amd64 version, and indeed every other version, of pkg-config would include symlinks for every architecture we've ever heard of, and then we just make it Multi-Arch: foreign. It is already Multi-Arch: foreign. Right, but then it would actually work as Multi-Arch: foreign. So yeah, at the moment that's a slight lie. If anyone can think of a reason why it's better to do it one way or the other, I... I think it's better to do it the way we've already done it, because that way people who want to do their own targets aren't stuck arguing about whether their made-up architecture is important enough to include in the pkg-config package, and so forth. Yeah, I was going to say pretty much the same thing. It seems a bit of a shame to ship packages which just have symlinks... A symlink. Or a symlink, even, yeah. But it does mean that if you have your own package to do this stuff, then you can...
You're not tied to the pkg-config package lifecycle, basically; you can do your own thing on the side and then later get it all in. You get that independence. I'm not sure it's going to change much, mind you. So, one of the things that would simplify all this a bit is if we actually had source dependencies, which I saw someone mention on the list. It never occurred to me until someone said: we could have source dependencies. At the moment there's all that stuff to build binutils-source and gcc-source and linux-source, and linux-source is annoying because it's not quite the same as the actual Linux package, whereas the others are the same as their packages and have the Debian packaging in. I'm kind of conflicted about that. It seems theoretically nice, but given that you are always guaranteed access to a mirror with source on it during a build, why not just use apt-get source? Where I found it really annoying was doing the new-architecture bootstrap: in order to test anything using the -source mechanism, you've got to fettle the package so that it rebuilds the source package with your little patch in, so that you can do the build. I'm not saying that the -source packages are a nice way to do things. I'm saying that you can just use apt-get source. Yes. In your build. Yes, that's right. It may not... I thought that was kind of... is that not frowned upon? It may not actually be configured that way right now, but you are guaranteed to have access to a mirror with source on it. You can synthesize one if you need to. Yes. It is; sbuild fetches it. There's other stuff that could be nice beyond just getting the source: if you could declare a build-dependency on something's source, you might be able to pull in transitive build-depends then. Well... currently, if you build-depend on binutils-source or gcc-source, they also declare the dependencies and stuff that they need.
So if you just do the apt-get source thing, you don't get that, and you have to maintain your own set of dependencies. I'd naively assumed that you weren't allowed to do apt-get source in the middle of a build, and that it would be considered terribly shoddy packaging, but yeah, that would work just fine and actually be slightly easier from my point of view. Okay, so anyway, that's what would be nice, or maybe it would be nice; I don't know. It's not holding anything up. Yeah. Why not? So now I have to put myself in the queue first. The reason you're not allowed to do that is because a build is not guaranteed to be online. If you run apt-get source, you're assuming there's actually networking, or at least some source repository, which is an incorrect assumption. That's what I always thought. You're guaranteed to have a mirror, because your build dependencies are coming from the mirror. You're also guaranteed to be able to fetch the source of the package that you're building, so that has to come from somewhere. The buildd may not be allowed to access an external network, but there's no reason in principle that it would need to access any different network than the one it's pulling its binary build-dependencies from. However, I don't think we currently have anything in the design which requires that source be available on the mirror it's pulling the binaries from, because the source package you're building may not be in the archive yet anyway; it may be pulled from somewhere else, and sbuild may have some special way of getting it. While that's true, we're not talking about the source of the package you're building; we're talking about something you might be source-dependent on, which has got to come from the mirror. So I don't see the difference between apt-get source and a hypothetical future source dependency from that point of view. Okay.
Well, I think we'll have to continue this, because there are a number of other items to come. I would like to point out that there are plans to enable network namespaces on the Linux buildds to shut down any network access. So just because there's a mirror available at the time you're fetching the build dependencies and downloading the package doesn't mean you have any network left when you run the build. We're planning to use this for other things anyway, and there are already packages that rely on network access during the build; for instance, debian-installer will be unbuildable if you remove that capability. So I think the project might not like that very much. Okay, right. Well, we'll punt that for now and come back to it. The MinGW man over there would like partial architectures for MinGW targets. Stephen, do you want to mention just what's used and what would be useful? Yeah, so this is just in general about partial architectures when you're cross-building stuff. If we can have it as a multiarch partial architecture that's recognised by all the various tools, then we can ship libraries; this is what Steve was saying in the earlier BoF. The nice thing about multiarch is that you can install libraries on your host system that you're never going to use there, matching the target, and your cross-compilers can use them all. And then, because of the way the autoconf stuff works, there are a lot of packages in Debian that just do the right thing when you cross-compile them, and so you get tons of libraries on your new cross targets for free. The thing that's missing is that dpkg doesn't actually have a MinGW architecture defined yet, and it probably should, and then it pretty much just works, right? Yeah. It's the same problem as the arm-none thing from the previous BoF: other architectures that we're never going to run, but we'd like to build for. So one of the things to consider is how many cross-compilers we wish to support. Right?
So one of the practical problems, from doing this for years, turns out to be that the more targets you support, the more likely you are to have one of them not build, at which point you can't upload your package or whatever. Which is why doko wants nothing to do with this: he just has to worry about the native compilers working, and I think that's quite right. We want to keep the standard GCC native build separate from all the cross-compilers, because otherwise there are just way too many ways this can go wrong. So at the moment we pretty much build amd64 and i386 to everything, but people are going to want arm64 to everything soon. Why? arm64 server boxes to build stuff on, and nobody cares about that old i386 nonsense any more? I mean, you're right, this is slightly in the future, but not very far. Well... I... Let's talk about how long it takes to bootstrap arm64 right now, and how crazy it would be to actually want to cross-build from arm64 to anything else. It's possible that in the future arm64 hardware is actually going to be comparable to something else as a compiler host, but that's far enough in the future that it's clearly pie in the sky. It's not today, you know. Next year that will be quite... That remains to be seen. Sorry? Remains to be what? He just compared you to [inaudible]. Okay. So I would like Debian to just start on a very small subset, and this small subset should be maybe just amd64 for the host, or maybe amd64 and i386, because we know that all that stuff works. Right, and if somebody wants to extend that later, that's not a problem. We have got some stats from the Emdebian downloads to find out which compilers people actually download, and you're right: the overwhelming majority are amd64 to ARM-something. So it is true that I do want the native toolchain and the cross-toolchains built from separate sources.
What I do see is that, for all our supported cross-toolchains, it's just extra work to upload binutils four times, upload GCC-cross four times, and things like that. So maybe we want something like building all compilers targeting Debian architectures from one source package; that might save some time. So you'd have a source package that builds all the amd64-to-something binutils, say. The trade-off there is having a separate source for each part, versus having a lot of repetition and rebuilding rather pointlessly. If we just use what we already have, then they're separate. One of the things that's quite interesting is something Helmut's been talking about this week, which solves the GHC problem: being able to install both the i386 and amd64 native compilers on the same system, because they'll both run. We sat there and worked out that that's perfectly doable, and it actually doesn't conflict with the way the cross-compilers are built; in fact, it makes it all a bit more orthogonal. I actually quite like that idea, so we'll get to that later on, if we get that far; there's a whole load of stuff. From your point of view you still install gcc-4.7, but the packaging behind the scenes changes a bit. One of the things that came out of the Raspbian discussion this morning: the Raspbian people said one thing they'd really like changed is for there to be one place to change the compiler defaults, because otherwise there's an awful lot of source packages where they have to change the runes. But I don't think GCC makes that easy, does it? Is there a place? Could we do that?
My impression was that the Raspbian people want a package where they could specify different target defaults for the compilers; that doesn't quite match what you put in the wiki. Well, for GCC we only have the gcc-defaults packages, and that's about which GCC gets run. But their point is that they need to set the default build options, for v6 instead of v7, and then rebuild everything, and they don't want to have to edit eight source packages. Well, I wondered, will that work? It should, if everybody's doing their packages correctly; it should be a dpkg-buildflags thing. But the point is they want the compiler's out-of-the-box behaviour, when it's used outside of a package, to use whichever flags you're talking about; it's set when you build the compiler. They accept rebuilding the compiler; they just don't want to have to edit lots of sources to do it. Okay, so anyway, if we think we could do that, it would be very helpful to them. Well, I guess, in the case of Raspbian: do they actually care about anything they're configuring in the compiler, other than setting the default target, that's different from the way we're configuring it? The compiler itself would need to be built to not use v7 instructions, otherwise you can't run it on Raspbian. Everyone get that?
So yes, the compiler has to be built for v6, as well as all the stuff it builds. Anyway, we should have a look at that; it would be nice. So there are these various ways of building compilers, and there's a bit of a tension between the multilib and the multiarch stuff. There are a lot of things you can do one way or the other: you can either put things in multilib locations, or you can put things in multiarch locations and use those. At the moment the GCC packaging supports both of these, and it's quite complicated now; on the other hand it's all been done, so we might as well leave it there. x86 people expect to be able to do -m32 on their amd64 compiler and get i386 stuff, and we can't change that, because it's probably in millions of build scripts everywhere. I would prefer a world where people didn't expect an armhf compiler to also do armel stuff, with all the bits installed, without them installing an armel cross compiler; but again, doko thinks that's useful, and I suspect other people do too. I have encountered some packages that appear to think that this is something you can do. So the question is: do we think we should support that, or do we think we should just tell people to install both compilers? Well, as long as upstream supports these options, so you can use the armel options and you can use -m32, and packages hard-code them, I think Debian should support them. That gets used on ARM as well, and certainly used on i386 a lot; I agree we can't really take that away. For Ubuntu we have the soft-float and hard-float multilib options, because when we did switch over from armel to armhf it did make sense to run those binaries on the very same system. Yeah, well, sometimes we may drop them, but OK. I mean, I'd prefer to keep it disabled and wait for people to bitch a lot. Well, it's a bit of complexity we could remove. Hopefully we will see this for arm64 again. Yeah, true; and we will see it for big-endian and little-endian 64-bit again. Yeah, you're probably right. OK, thanks.
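The two routes being weighed up can be summarised in a small sketch: multilib reuses one native compiler plus a flag, while multiarch installs a separately packaged, triplet-prefixed cross compiler. The mapping function below just prints the command line you would run on an amd64 build machine; it is illustrative, not any real Debian tool.

```shell
# Sketch of the multilib-versus-multiarch distinction discussed above.
# compiler_for prints which command gives you code for the requested
# architecture on an amd64 machine; the mapping is illustrative only.
compiler_for() {
    case $1 in
        i386)  echo "gcc -m32" ;;                # multilib: same binary, a flag
        armhf) echo "arm-linux-gnueabihf-gcc" ;; # multiarch: separate cross gcc
        amd64) echo "gcc" ;;                     # native
        *)     echo "unsupported" ;;
    esac
}

compiler_for i386   # gcc -m32
compiler_for armhf  # arm-linux-gnueabihf-gcc
```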
So one of the other things about all this: the way the multiarch stuff works is very good for doing distro cross-building, but because of the versioning you can only install the host-architecture version of a library at the same version as the build-architecture one. So if you wanted to build against the very latest version for your target, that's harder, because it'll tell you you can't install it alongside the standard system library; you'd need to stick it in a sysroot or something. So this works beautifully for distro building: you get great consistency, multiarch all just works, as long as everything is built against the same versions. It's less useful for doing random upstream development, and I don't know to what degree people find Debian with multiarch a bit of a pain in this regard. I never do that, so I don't care; but we may find that doing this everywhere actually makes upstream development using your cross tools more painful rather than less. I don't know if anyone's run into trouble trying to use this. I guess not; you're all Debian users, and we haven't enabled it yet. So there has been some grumbling from upstream, we're told by Ben, but we've done it, so we'll see how it goes. I mean, in principle you can still use sysroots and stuff, and that should work; something we should check out. There's a debian-cross mailing list just about to be created, whenever the mailing list masters get around to it; I think it's agreed and generally considered useful. It's been noticeable that there wasn't really one place to discuss all this cross stuff over the last couple of years, where the right people hung out. So if you're interested in this, when the debian-cross mailing list appears, and I guess we'll mail debian-devel, you should sign up. I feel obliged to mount a quick defence of multiarch here; I'll be brief. So yeah, multiarch doesn't solve all the problems, but it does solve the problem of targeting a distribution that you are running: it lets you do
things very effectively. And if you're targeting something that's not what you're running, you need to use a sysroot; but you previously needed to use a sysroot or something else anyway. So it doesn't make the problem invisible, but it also doesn't make it harder than it was. But you may need to use a sysroot and build things for both architectures, so it can be slightly harder: if you need to install the build-architecture version as well, because it's part of the base system, then you have to have matching versions of the build-architecture library and the host-architecture library, and otherwise it tells you it can't install it. Well, only if it's something you needed to install for the build architecture in the first place. Yes, so there are a few corner cases where it gets a little bit fiddly, but for the most part it's not making things harder than they were before. And if you're targeting something that's not Debian to begin with, you should just use a sysroot; you shouldn't be trying to use multiarch for it. Indeed. So you get all the system stuff exactly matching by default, and you can specifically, explicitly say "and this extra bit from here". So, we're about to run out of time. Other things I should mention: there is this problem of specific build-dependencies. A number of packages build-depend on a specific GCC version, and the problem is that when you cross-build, and it says "I build-depend on gcc-4.6", you actually want to depend on gcc-4.6-triplet, or triplet-gcc-4.6, whichever way round it is. And actually you might mean "I depend on both the build-architecture version at a specific version and the host-architecture version at a specific version". That's something we simply don't support right now, so we need a mechanism. Two things have been suggested. There aren't very many packages which do this; mostly GCC and binutils are the things that get depended on. So there's a few dependencies
which basically have to be translated to a different package name when you're cross-building. So Colin suggested a scheme, and I've probably got the names all wrong, but the idea is that there's a field in the package that says "this package has a translatable package name, and you should translate it when cross-building". So we just mark the set of packages which have this feature, and then something, apt or dpkg or whatever, would pick the right one. Or you could have a substvar, so you build-depend on "something-dash-substitutable-bit". The problem is you can't just change it in the dependencies statically, because it depends what you're building for; so it's special in this regard. Now, I have strong opinions about which of these two... we've got a whole minute. Substvars in Build-Depends would be a significant departure from how we use those currently: it implies some sort of post-processing of the source package, not done at build time, and that's not a good fit; we shouldn't do it that way. OK. Mike? I just noticed that the proposal to change the native compilers to use multiarch package names would solve the issue of having to specify a cross profile in the substvars case as well: you'd just say gcc-version-host-GNU-type and everything works in that case too. But you don't know which host type you're depending on until you build it, because it depends what you're building for. Yeah, you still need the substvars, but you no longer need a profile. Yeah, you still need the substvars, you still need the variables, but you don't need the profiles in that case. OK, yeah, he's right. And there's a way without substvars, if you use ridiculously long dependency strings and use the cross profile together with every architecture you can build for; right, but that would get around the problem in an ugly manner. True. So, well, people like to build-depend on binutils-dev; only, annoyingly, it turns out that
actually all the things I found really want libiberty-dev, so I'm supposed to split that out of thingy, and he hassles me on a regular basis because I haven't done it for months now. So we'll fix that. And there was this interesting proposal, referenced further up, to rearrange things so you can basically install any set of architectures side by side. I have to stop, apparently, so I wasn't going to go into all the details; but all the details are listed here, and we will put that on a wiki page. I think it's quite a good scheme. Thank you very much. I'm sorry there's no time for questions; please hassle me afterwards.
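The build-dependency translation idea from a few minutes back can be made concrete with a minimal sketch. The allowlist, package names, and triplet here are illustrative; the real mechanism, whether a package field plus apt/dpkg support or substvars, is exactly what was being debated above.

```shell
# Sketch of translating cross-build dependencies: a small allowlist of
# toolchain packages (roughly, gcc and binutils) gets the host triplet
# appended when cross-building, while ordinary build-deps resolve via
# multiarch as usual. Names and triplet are illustrative only.
translate_builddep() {
    dep=$1 triplet=$2
    case $dep in
        gcc-*|binutils*) echo "${dep}-${triplet}" ;; # toolchain: translate
        *)               echo "$dep" ;;              # ordinary build-dep
    esac
}

translate_builddep gcc-4.6 arm-linux-gnueabihf    # gcc-4.6-arm-linux-gnueabihf
translate_builddep libc6-dev arm-linux-gnueabihf  # libc6-dev
```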