Okay. Hello everybody. Currently we have with us Hector Oron, Guillem Jover and Steve Langasek. Hector is mostly involved in Emdebian stuff. Guillem is our beloved dpkg maintainer. And Steve Langasek was... Okay. So without further ado, they will just talk about multiarch and how to deal with it in a proper manner. And I want to point out, and they wanted to point out, that this is actually not something where they are presenting something and you just watch. You should participate, as this is a proposal, and your input is highly welcome. Please welcome them.

Hello. Hello everybody. We are here together to present a multiarch proposal. The objective we have is design: we are currently designing and implementing a multiarch-capable system for the squeeze release. These are the contents we are going to talk about: some definitions, some requirements we've been discussing on the specification, and then some unresolved issues, and we'll have some time to discuss. This is the multiarch talk. Okay. We need to define what multilib is so that everybody understands. Multilib is about installing runtime libraries for different ABI architectures. I mean, you have 32-bit and 64-bit libraries you want to have in your system. Currently we have an integration with ia32-libs, which is not a very good solution. You also have MIPS tri-arch, where you have three ABIs, and this doesn't work. And then we have another problem, which is cross compiling, which embedded people are more concerned about. It's about co-building, and having the headers and libraries in the right place to be able to bootstrap another root file system inside your system, to export to a target system like an OpenMoko or other devices. And that all gathers together into the multiarch proposal. So the proposal basically is to include triplet names in the library paths, so you can have many architectures — the libraries for many architectures — in your system.
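As an illustration of the triplet-qualified paths being proposed, a multiarch file system might contain entries like the following (hypothetical example paths; the triplet spellings shown are the standard GNU ones, and the exact names were still being settled at the time of this talk):

```
/lib/x86_64-linux-gnu/libc.so.6        # 64-bit C library
/lib/i386-linux-gnu/libc.so.6          # 32-bit C library, co-installed
/usr/lib/arm-linux-gnueabi/libz.so.1   # a foreign-architecture library
```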
And you can have — this is the 64-bit and 32-bit for the Intel architecture — but you can also add ARM or MIPS or whatever architecture you like. And then there are some requirements that need to be implemented. The package manager should allow installing packages from other architectures, and that implies modifying current policy with new fields, like the Multi-Arch field. The Multi-Arch field has some values: you can pick 'same' architecture or 'foreign' architecture, which are explained by the names. And then we also introduced an 'allowed' value for some cases — we will go through it later — for interpreters. Like Python, which can have two kinds of dependencies, but we'll go through this later. These are the control fields, some of the requirements. I don't know if you want to go one by one, or you can just read it, or if you have some questions you can ask. The thing is, yeah.

It doesn't say in the specification why we chose to do lib/triplet as opposed to triplet/lib. I expect people have good reasons for that, but it would be useful for someone to explain to everyone why it's that way around.

Is this one on? Actually, I was wondering if anybody had that question — why are we not just doing biarch or something like that? Closer? Is this close enough? Okay. The FHS solution and the LSB solution that exist for this, which are implemented by other Linux distributions, are biarch, which gives you a lib32 path.

So why use lib/triplet? Why not use the existing cross-compiler toolchain style? There's just a prefix at the beginning, and the file system of cross-installed stuff looks the same as it does everywhere else, as opposed to having the triplet after the lib, with /usr/include making our lives complicated.

Part of that was because, in principle, you could have cross installation of executables as well as cross installation of libraries.
And the architecture-qualified paths that are used for cross compilation mean something different in that case for executables. Because under cross-build paths you do have executables that are related to GCC and whatnot, and those paths don't imply that the binary you're installing there is of that ABI — that the build architecture is that ABI. They imply that the host architecture is that ABI. So in order to avoid that inconsistency and ambiguity — given that possibly, eventually, we would want co-installability of executables with magic paths — I think that was the original rationale. That's at least my reason for not trying to use that. Tollef Fog Heen is the original proponent. Well, I guess there's some dispute about that; he was the first one who spoke to me about it. He did the paper for his doctoral work, wasn't it? His master's work — that's a lot louder now, isn't it? Tollef Fog Heen had written up the paper that proposed this, and he was the one who introduced me to it, but unfortunately he's not here, so I can't really speak to why it was done that way originally, other than half-remembered rationales from third parties.

I think the main reasons — one was to not pollute the root directory. Because if we need to have multiarch libraries for /lib, then you would have to have the triplet in the root: /<triplet>/lib, and then... Yeah, but... So if you end up having four or five architectures in your system, then you are polluting the root directory. That was one reason. Because currently the triplet under /usr... So the cross or sysroot way of doing this is that you have the whole thing — it's a chroot, so you have everything in there. So you would have to pull the /lib out somewhere else. But I think the main reason was to not pollute the /usr and root directories.

Okay, we'll go on. Some packages would need to adapt to these tags. And then we are thinking of a smooth plan for transition, without having to break the system.
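For illustration, the Multi-Arch field described above might look something like this on a shared library package (a hypothetical debian/control fragment; the package name libfoo1 is made up):

```
Package: libfoo1
Architecture: any
Multi-Arch: same
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: example shared library, co-installable across architectures
```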
So you just manually configure your package manager and you have a multiarch system. This is the Python case I was talking about, where you have some packages that depend on the interpreter, and then there are others, libraries, that also depend on the same package. In this case you need to differentiate. So we decided a package like Python should be marked with the 'allowed' value, and the depending package annotates its dependency: we add a colon and 'any' to the package name. I don't know if you want to explain this.

When we were going through the cases where we could want to use multiarch, the obvious case is the shared libraries, which was the earlier slide. There we use 'same', because you have a binary and then you have the shared library, which should match the architecture. That's the trivial, the obvious case. Then we have the other one, which is the 'foreign' value, which is when you are calling, for example, a binary, or when you have a script to interpret. For those it doesn't matter if the interpreter is one architecture and the script is another, or if you are invoking a binary from another architecture. And then the third case was when you had a binary which could either interpret a script or load a binary into its own address space. In this case we cannot use the 'same' field value, because otherwise we would have to choose between either semantics. And we don't want to change the semantics of dependencies — they should not be looser than they currently are. So for the interpreter case, we mark the interpreter itself, and then we mark each dependency for which we want the looser semantics.

The way I was describing it at the time was that Multi-Arch is a field we set on the package nodes themselves, while the :any annotation lets you qualify the dependency arc, rather than just the package that's the target of the dependency. I don't know if that helps other people think about it, but it helps me.
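As a rough sketch of the semantics described above — a toy model for discussion, not dpkg's actual resolver code — the satisfaction rules for the three Multi-Arch values could be written as:

```python
def satisfies(dep_arch, dep_is_any, pkg_arch, multi_arch):
    """Toy model: can an installed package of architecture pkg_arch,
    with the given Multi-Arch value, satisfy a dependency declared by
    a package of architecture dep_arch?  dep_is_any is True when the
    dependency was written with the ":any" annotation."""
    if multi_arch == "foreign":
        # Interface is architecture-neutral: satisfies any dependency.
        return True
    if dep_arch == pkg_arch:
        # Same-architecture dependencies are always satisfiable.
        return True
    if multi_arch == "allowed":
        # Cross-architecture only when the depender opted in via :any.
        return dep_is_any
    # "same" (and the default) never satisfy foreign dependencies.
    return False
```

So, in this model, a Multi-Arch: allowed Python would satisfy a `python:any` dependency from an i386 package on an amd64 system, but a plain `python` dependency would still require the matching architecture — which is exactly the "don't make existing dependencies looser" property mentioned above.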
That works for the shared library, for the 'same' and 'foreign' values. But for the interpreter, one of the options was not to mark it as multiarch at all, and then just mark the packages that could be using the interpreter by calling it or by being interpreted. So one of the reasons we added the 'allowed' value is so that we can control when an interpreter is actually multiarch, and other people cannot add dependencies on any other architecture without the interpreter allowing it. I'm not sure if that makes sense.

We'll continue. We also have the architecture-independent files: the config files, documentation files and data files. For those the package manager will implement an internal database, do reference counting, and check if the file is already there. And we should maybe think about whether we have some differences. At the time we discussed this, we checked a bit what the current use is — mostly this is going to be used for /usr/share/doc, for example. What might make the files not be identical on different architectures could be timestamps, in some cases. So we should probably review whether there's any case where we might want to share the same file across architectures but the file might differ between architectures. At least in the cases we considered, they are compressed without carrying a timestamp, so they should just be the same. But that's something to check beforehand. Otherwise, the other option is just to split those files into a common package that can be shared across the other architectures. We already faced this problem with the GDB debugger for cross installations: you have to divert the manual page, because when you try to install it, it fails because the man page is already there from the native debugger. So this would help in those cases.

And the future of biarch packages: there are some cases where we still need some biarch packages. In cases like GCC multilib, these packages need to continue to be built as they are.
I guess I'd like to expand on that a little bit here. One of the requirements that we put in place for this spec — so, everything we're describing here is actually the outcome of a session we had at the Ubuntu Developer Summit this past May in Barcelona. It was a very convenient opportunity to get these guys together, since two of the three people up here are local to Spain, and we had a number of other Debian folks at UDS, and Ubuntu folks, who were interested in this topic. So we got together and hashed out this spec there at the time. And one of the requirements we put in place for the initial implementation is that it not require archive-side changes. So when we talk about biarch packages continuing to be supported in the archive, that's because the only way to get away from those is by allowing cross-architecture dependencies or build-dependencies in the archive. That means a whole lot of archive-side work that would have to be done to make this happen. To avoid that, at least in the first round, we've conceded that we have to keep some of these packages around, where we have an amd64 package that ships 32-bit code — which means we need a GCC that can build 32-bit code as part of its main GCC package, the gcc-multilib package, and things of that nature.

And then for APT, you just add a bracketed field in your sources.list with the architectures you want for your system, and there's also a command-line syntax, with the architecture after a colon, preceding the version. Then we have the unsolved issues, which are the -dev packages — mostly headers and some libraries. This is a concern for cross compiling, and we are still talking about how we should handle these kinds of packages. And there's another issue: we could have co-installable executable packages, so that you can run those executables through the virtualization layer provided by the binfmt_misc kernel module, like the QEMU people do.
But this is out of the scope of this implementation; that could come afterwards. It would also be nice to have auto-detection of the ABI you are running, so you can run MIPS n32 or o32 code, or 64-bit code. There's also a proposal to have partial architectures, to keep the other-ABI packages without those architectures having to be complete. And also, for embedded bootstrapping, to be able to generate things like uClibc root file systems, or to bootstrap an exotic architecture like SH4 — architectures that Debian doesn't really want to have in the archive, but at least you would be able to bootstrap them if you want. And also having cross-architecture dependencies: right now we are not handling that, but it would be nice, so it's an unsolved issue.

Now we can discuss all of this we are proposing. This is the current layout we have in our systems, where you have all this ia32 stuff — this is really a mess. And then under /usr/<triplet> we have our cross-building stuff: the binaries, the includes, all that. And then this proposal aims to use these kinds of paths: having libraries under lib/<triplet>, and the include files could be under /usr/include/<triplet>, and having the compilers and everything, like the linkers, looking there. But this is a concern for cross compiling; we are not sure if this is the right way of doing things. We think this is an important thing and we want to really have a good solution, but we don't know — we need to do a lot of testing and see how it works.

And this is another proposal for cross compiling. It's about having a root file system inside your root file system, which is called a sysroot. You can pass that to the autotools or any build system, and the build system knows where it needs to look to be able to cross build. So for cross compiling it would maybe be about having a sysroot — not under /opt, but that's where it is here — and having it keyed on the vendor and the triplet.
And inside there you populate it with all the stuff you need for your target machine, which doesn't have anything to do with the host machine's libraries. Because you might want to have some libraries without X support, while in your system you have X support, and things like that.

And we need to thank the Debian, Ubuntu and Emdebian people, and many individuals who have been involved in a lot of talking and deciding things. This is not something we just wrote up right now to show to you; there are more people involved in this. And for any comments, suggestions, patches, whatever — we have picked the debian-dpkg mailing list. If you haven't read the specification, we have it on the Ubuntu wiki. And you can get this presentation by cloning a Debian Git repository. And that's it.

So, you said earlier that one of the outstanding items was the -dev packages. In your list of paths you've got triplets under /usr/include. So what is there that still needs to be done?

We don't know, because we haven't actually looked at it. That's why it's just out of scope for the initial implementation — to avoid having to spend the effort to figure out if there are pieces missing. It may be that there's nothing missing other than just installing things in that path. We simply haven't evaluated it fully, to the point where we were willing to specify it and say: yes, this is how it's going to work. So it was really just an issue of limiting the scope of the initial implementation, so that we had something we could go ahead and implement. And we didn't see that the -dev packages would have any impact on the package management implementation, so we just said, yeah, we'll leave that for later. Everything that's listed on that list of unresolved issues is basically things where we know there are people in the community — either some of us want to do these kinds of things, or somebody else in the community has an interest in doing them.
And we're expecting them all to be areas of future work on this. If anybody has an interest in one of those, I certainly encourage you to work on it and build on this work.

So that leads on to my next question, which is: what bits are left to do before we can release this in squeeze, and how can we make it happen?

As Steve has said, what we have been trying to focus on is getting the basics for the runtime, so that we can get a system that can use this. And we have also tried to consider things that might imply changes in the future — that's why we have been talking about the cross-toolchain case and have been thinking about it. So it's not something we have not thought about at all, but it is something we have not focused our energy on. That's why we want to present this to a wider audience: to see if there's anything essential for the minimal implementation that we might really want to put in now, which might otherwise block extending it later in the ways it will be used. And yeah, to answer your question: for squeeze we will need dpkg support, APT support, and then we probably want most of the base system switched to multiarch. So that will be part of the toolchain — libgcc, and then glibc, and then a few other libraries that are required. Because one of the main points is that we want this to be an extension: we want to be able to incrementally add this support, and not require that we upload everything on a flag day. So once the basic stuff is there, you should be able to start adding support for multiarch in the upper layers.

Yeah, I really congratulate you for deciding to try and do this in an incremental way instead of a flip-a-bit-on-the-world kind of way. I think that's essential to having this actually happen in our lifetimes. Well, I will apologize once again, publicly, to everybody who's had to deal with it at all, for the existence of ia32-libs and all of the horror it has created.
And those of you who don't know the history really should study it, because it's an excellent example of how nothing lives so long as a temporary hack.

You know, I just want to voice my encouragement and enthusiasm for this. Like Steve, I have found this very frustrating. I participated in some discussions — actually in Malaga, many years ago now — with Tollef and others, about some of the initial thoughts that led down this chain of inquiry. And I'm really pleased to see you guys closing in on something. If there's anything that I, or others who care about this, can do to help bring this to closure in time for inclusion in the squeeze release, please let us know. Even at a minimal, you know, solve-the-interesting-executables-case-only level, I think it would be absolutely wonderful to get this in. And on my quick look through the material that you've got, I don't see any glaring holes. One of the challenges with the -dev packages is that they are historically combinations of architecture-specific and architecture-independent content. So one of the things that we might ponder doing as we go forward is refactoring some of those -dev packages, so that we don't have the situation of a single package that simultaneously wants to behave like an architecture-independent and an architecture-specific package. But I don't see anything in the structure you've got here that should get in the way of doing something clueful with that.

Two questions. First: is there any Linux distribution at all that currently implements multiarch, as opposed to biarch?

No.

Okay, this is Debian innovating once again. We're also breaking the FHS by doing this, so we'll have to amend the FHS once we're done. Before this goes into the archive, there'll be a policy patch.

My second question is: what is the overall goal? Do we plan to have the entire archive able to use multiarch in time?
Like, get the base in for squeeze, and then for squeeze plus one have basically every library in the archive able to use multiarch?

Well, as I said before, one of the main goals was to make it incremental. So if there's no need for rarely-used libraries to be multiarch-enabled, then there's no point in doing the work. One of the nice things is that we can just switch whatever we need; we don't have to switch the whole archive. So, I mean, if there are people doing the work and it makes sense, yeah, sure. But that was one of the main points — that we should be able not to switch the whole thing.

I think it's definitely an open-ended transition. Getting like 5% of the library packages converted over takes care of 95% of the pain that we live with today with biarch packages on Debian and Ubuntu. It's open-ended. I think eventually we'll get to the point where, if multiarch succeeds, if we get a base system that works, we'll start seeing this being done by default in the helper packages — CDBS and dh-make are hopefully going to start using this by default — and then we'll see, gradually over time, all libraries using it. But I don't really see any need to set a fixed schedule for that. Really, you know, if we get the base system for squeeze, we're all going to be a lot happier.

Wouldn't dropping the biarch packages mean that we lose the 64-bit support for architectures where we have a 32-bit userland but 64-bit kernels? Are there any partial architectures?

Sparc, s390. Yes, partial architectures — that's one of our unresolved issues. And partial architectures is a feature that depends on the archive changes that would be needed in order to drop biarch packages anyway. So once the archive changes are in place, there's no reason not to do partial architectures, in my opinion, because at that point it's really straightforward to implement them.

I want to comment on Matthias' question.
Sorry — we have been talking about the biarch case in the toolchain stuff. And there are some weird cases: I mean, if we keep the biarch libraries and the multiarch ones, in some cases we may end up with the same content in different packages. You may end up having the 32-bit libgcc from the 64-bit GCC, and then the libgcc from the other, foreign architecture. But I think we should ignore this case for now, because most of the packages in Debian using libgcc from the foreign architecture are not going to be using the biarch one. Biarch is mostly going to be used privately, by people building with the toolchain for the current architecture while targeting the other one. So I think for now we can just ignore that case and assume that it will work.

My question is: what would this break? You are taking care of the implementation, and you seem to be doing a very good job of it, but I can imagine there will be some cases where a package doesn't work when you install a multiarch version of it — something like loading plugins from a hard-coded directory, and things like that. Is there a way you can spot these issues, given that you're not necessarily running in a biarch environment?

Right. So as far as plugin paths and things of that sort — it's not been explicit in the spec; Ian has requested that we make it more explicit — changing these paths is something we expect to do in the source packages. We're going to be changing the source packages so that they are built to look for things under these paths, and built to install under these paths, natively, in the source. There have been various schemes proposed in the past where dpkg would remap paths at install time, which runs into the plugin path problem. So since this conversion is being done on a per-source-package basis, and it requires somebody to actually touch the package to make it happen, it should not break anything.
There will be cases where you'll be able to install libraries in a cross-architecture environment where they haven't been tested — because all the dependencies are there, you don't have the native version of the package installed, you just install the cross-arch version — and it may or may not work, because nobody has touched that package yet. But it shouldn't break anything that works today with this implementation. And there was a part of that where I was thinking, for some reason, that I should have you follow up on it, and I don't remember what it was — something about...

I think it comes back to our rpath discussion earlier, if you want to cover that. I just want to clarify that if something breaks, that should be a packaging issue. The toolchain should work transparently, and the package management tools should also work transparently. So if an application is not able to load its plugins from the correct path, then that's just as if, right now, you placed the plugins in a path the application is not looking in.

Right — so the comment is that library packages today could anticipate this multiarch change by going ahead and starting to use those directories, /usr/lib/<triplet> and /usr/include/<triplet>, for their installation, because the toolchain pieces are in place so that everything will look in the right place. The only thing that won't work today is that you won't actually be able to co-install them. So we could convert glibc today to use those paths, but then dpkg and apt wouldn't actually let you install the foreign version anyway, so it doesn't get you very far until we have the package manager support in place.

So, I just wanted to say that I've seen so many of these multiarch proposals over the years, and I'm always very critical of almost anything. And I just wanted to say that I think this is a really good idea. I've got a few minor quibbles, but by and large I'm just looking forward to it. I'm sorry to be out of character.
Do you want my quibbles now? So I had two comments. The first one is that you might get somewhere useful by having a conventional symlink that loops the native arch triplet — whatever you decide that is — back into /lib. That would mean that no files on a normal i386 system would actually move, which might — well, firstly, it's sort of politically easier, because you don't have to explain anything to anybody. And it might stop some things that currently work from breaking because a file moved and the code hadn't been changed to look in the new place. I don't know what you're thinking; I probably need to think about it some more.

So, while you were talking, I just now thought of a case that that breaks, which was actually one of our use cases in the spec: the idea of live-migrating a system from one architecture to another.

I don't see that there's necessarily a problem with that. You just have to shuffle files about.

Right. Without the aid of the package manager to do that, then.

Right, but after you were done it would all be in the place you expected it to be.

It's a possibility. I think that you — or somebody — mentioned that Fedora had a bug where they accidentally cross-graded a system to a different architecture. And I think we said at the time that if we could do that, we knew that we had won. But it might be prudent to build in some kind of protection against that, and require a little bit of manual effort for that kind of thing. So I don't know that I'm entirely opposed to it requiring something outside of the package manager to change the native architecture. I'd like it to be something a little smoother than having to identify all the files under /usr/lib that have to be moved aside so that you can cleanly cross-grade. Making it difficult — not wanting to do it by accident — is certainly understandable. It's not a frivolous use case.
We actually talked about this when the armel port was getting started: whether this was the right way to handle upgrades from arm. And we ended up having other things in place at the time that actually made more sense, armel being as embedded an architecture as it was at the time, where you didn't want to have two full file systems' worth of data.

So one possible way to do this is to only have this symlink on old systems that predate the whole multiarch proposal. That way people with old systems don't have anything to complain about, and people with new systems get shiny new cross-gradable ones.

I think this needs to be hashed out in a bit of a smaller setting than this, really. Or there's the really simplifying option: just create a tiny package that delivers that set of symlinks, configurable for a given architecture. If you want the backwards compatibility, install that package; if you want to live in the brave new world, don't install it.

Would we install that by default on upgrades, then? And should it just be base-files at that point? Now, that was one of your quibbles.

Yeah, I had one other quibble, which was about dpkg having checksums. Now, I know that lots of people want dpkg to have checksums for other reasons, and I'm not wholly opposed to that, but it does involve some extra disk space, makes the file database bigger, and stuff like that. So you could solve that same problem — detecting installation of erroneously different files — just by comparing the file that's already there with the one that you're installing.

Yeah, but then you have to compare all the files. I mean, say you have four architectures: then you have to compare each one against the other ones. Whereas if you have the checksums already, then you just compare the hashes, or you just compute the hashes once.
Sorry — you know that you've already installed the previous three successfully, and therefore you can trust that dpkg has already done the right thing and compared them against the previous ones. So it means you need to do the comparisons iteratively, but you don't need to remember the previous checksums.

Yes. So keeping the checksum around would be somewhat of an optimization in the case where you have a larger number of architectures installed, because you don't have to re-compute the checksum each time you install a new architecture. On the other hand, the number of packages where you're actually going to have to care about that is probably small enough that, you know, it's six of one, half a dozen of the other.

Right. And presumably the question is whether the overhead of doing a comparison per file is greater than or less than the overhead of storing a checksum for every single one of a couple of hundred thousand files on the whole system, which you would have to do.

Well, right now most of the packages in Debian already ship md5sums files, so I don't know. The problem is that dpkg doesn't have a guarantee that the package will provide that file, so dpkg would have to make sure it either produces the hash at installation time — and then we'd probably want dpkg to do that at build time as well, eventually.

Right, the real difference is not, I think, in terms of disk space, but in terms of how much memory and startup time dpkg uses reading it all.

Yeah, sure. You could use the existing md5sums file that's in all the packages — most of the packages, anyway — and just say that if you have a multiarch package, you have to have one of those md5sums files.

Except for the buildd bug we had recently in Ubuntu, where the md5sums for a number of desktop files were being changed after dh_md5sums was run.

But we need you to fix that anyway.

Oh yes. I'm saying it's not necessarily a good idea for dpkg to rely on it. At least not for now.
Yeah, one of the thoughts, at least for now, is that I don't think dpkg can trust the md5sums from the packages. So what we could have for now, as a transition period, is to make dpkg produce those at installation time, or at least verify that they match, while the build tools produce them. Then, after, say, a release cycle, or once we can mark those as trustable, we can stop doing the checks in dpkg itself. But I mean, those are probably optimizations.

Yeah, as I say, this is all quibbles really, isn't it. Might be best dealt with on a mailing list.

Yeah. Just a quick question: is the debian-multiarch mailing list then officially dead? Is there such a thing?

So... well, I don't know, it's not... Yeah. We... Yes, there's an Alioth mailing list for the multiarch project, but it seems like, at least for this part, for this spec, it makes more sense to have this discussion within the context of the dpkg development community rather than on a standalone mailing list which, in fact, I'm not sure any of us are subscribed to.

Yeah, on that point — can you bring up that URL where the spec is, again? You don't want to just Google for 'Debian multiarch', because you get half a dozen random old proposals, all of which are mutually contradictory and out of date, and you have to find this URL. And at some point some kind-hearted person is going to have to go around and chase up all these old things and make them say: actually, this is dead now.

Yeah, yeah, I thought somebody would say that. Does policy have a significant amount of Google juice, is what you're saying?

Because — I'm not sure if any of you, or most of you, have read the spec, and we went through it pretty fast. I guess we are going to post it on the mailing list and look for reviews and comments. But if there's anything that was not clear from a fast explanation, it would be nice if you just bring it up.
Is anyone here... do you have any doubts, or do you want something to be detailed or explained, or anything?

I just thought it might be useful to say something about the cross-compiling section of this, which, as you say, is explicitly out of scope for today, in the interest of getting something done, ever.

I wouldn't say it's out of spec for today; it's out of scope for the original spec, which is step one. I'm happy for us to use some of this time talking about where we go from here, what bits we should do next, and to brainstorm about those.

We care about these things right now because if this isn't going to work further down the line, we should fix it now; otherwise it's going to be very painful. Now, having thought about this for a while, my personal conclusion is that I think we can use this for cross-building so long as everything is in lockstep: you're only building the exact same thing as the Debian version of what you've got installed. So all you're doing is changing the architecture, not changing anything else. As soon as you want to change anything else, dependencies, versions, you know, you're actually building for an embedded thing and you want to pare things down a bit, we've got to use something else, and I think the better suggestion there is probably the sysroot scheme. But we still have a problem, because you need a whole lot of architecture-independent files which might be different. And I don't think we can do that with what we have; we could do it with an infinite array of triplets throughout our file system, but I'm not sure that's a good thing.

We have discussed this before, and if you are targeting a different system, then that's not Debian anymore. So if you try to mix them, that might work or not, but yeah.

Right, so if you're cross-compiling to something that's not Debian, what are your libraries doing in /usr/lib anyway? That's an FHS violation, and I'll write you a ticket for it later.

/usr/<triplet> is also not in the FHS.
So the fact that we're doing that is... we turn a blind eye to it.

Right, well, so this is an example of how not to do it the multi-arch way. If you're targeting something that's not a Debian system, it shouldn't be in /usr/lib anyway, because it's not going to be part of the Debian package management. It can be in /opt if that's appropriate; if they're third-party vendor packages, then /opt may be appropriate, /usr/local may be appropriate, any of those things may be appropriate, yeah.

I was just going to say that it is very important that Debian is good at cross-building and is a good cross-building platform. And, you know, in the real world you are often targeting something slightly different from what you're building on. That is a big deal for a lot of people; amazingly, even with the current infrastructure, people use this quite a lot, and there's all sorts of cruftiness. So, you know, that is a big deal, and I think everybody recognizes that. There are complicated questions about what the slickest way to achieve it all is.

Right, one of the things that really falls out of multi-arch is that cross-compiling is not really any different from native compiling. And if you're targeting something which is not your standard Debian packages, you put it somewhere else in the file system. You can still use the multi-arch directory hierarchy within that context, be it in /opt or /usr/local or wherever it might be. And it's symmetric all the way around, and your distinction there is not "I'm cross-compiling versus building natively"; it's "I'm building for something that's not managed by the package manager."

Carrying on slightly from that. Has anybody... this is a bit of an open-ended question, sorry... has anybody tried to make some kind of educated guess about the impact on binaries that don't come from Debian?
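The point about reusing the multi-arch hierarchy outside the package manager's control can be sketched like this. The prefix (a temp directory standing in for /opt or /usr/local) and the triplet names are purely illustrative:

```shell
# Sketch: the same lib/<triplet> layout proposed for /usr can be
# replicated under a private prefix for libraries dpkg doesn't manage.
prefix=$(mktemp -d)
for triplet in x86_64-linux-gnu i386-linux-gnu arm-linux-gnueabi; do
    mkdir -p "$prefix/lib/$triplet"
done

# One subdirectory per target architecture, side by side.
count=$(ls -1 "$prefix/lib" | wc -l)
echo "$count triplet directories under $prefix/lib"
```

The design point made above is symmetry: whether a library lands under /usr, /opt, or /usr/local, the per-architecture structure looks the same, so "cross" is just another triplet.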
Presumably the well-behaved binaries must continue to work, otherwise we'll be creating a flag day for ourselves. But there are plenty of non-well-behaved binaries... I heard rpath mentioned earlier on... there are plenty of non-well-behaved binaries that rpath themselves, that load plugins manually from weird places. Has anybody thought of either a way to deal with those, preferably transparently, or ways to document it, have a marketing campaign on why you shouldn't do that, et cetera?

Well, yes, Ian's suggestion of symlinking the native directory back to /usr/lib deals with that. I'm not content with that solution, but yes, people have thought about it.

I wanted to say something about cross-compiling, because it's true that you can't in general build, on a Debian system, a package targeted at something other than the Debian system you're building on, but that doesn't mean that cross-building isn't important. We all do a lot of cross-building. We cross-build packages for different releases of Debian, and I think we should also, you know, along with our "universal operating system", be thinking very hard about how we can make people able to cross-build their Debian derivatives, maybe for a different CPU architecture, maybe because they've got some other purpose in mind and so they've got some set of changes they want to make to Debian. After all, we are in the business of making free software, and the point of free software is that you can modify it, and if that involves rebuilding everything, you should be able to do that too.

Well, the standard mechanism today for building for a different release is using a chroot or a VM, and I don't think multi-arch changes anything in that regard. It just means you can have a chroot which also has another foreign architecture inside of it. Right.

I'm being flagged that we're short on time here; we've got three minutes left, is that what that means?
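The compatibility-symlink idea under debate can be sketched as follows. This is a hedged illustration of the concept only, done in a temp directory; the library name is made up, and whether such links should exist (let alone by default) is exactly what is being argued:

```shell
# Sketch: keep a legacy /usr/lib entry pointing into the native triplet
# directory, so binaries with hardcoded /usr/lib paths keep loading
# after libraries move to /usr/lib/<triplet>.
root=$(mktemp -d)
mkdir -p "$root/usr/lib/x86_64-linux-gnu"
touch "$root/usr/lib/x86_64-linux-gnu/libexample.so.1"

# Relative symlink from the old location to the new one.
ln -s "x86_64-linux-gnu/libexample.so.1" "$root/usr/lib/libexample.so.1"

target=$(readlink "$root/usr/lib/libexample.so.1")
echo "legacy path resolves to: $target"
```

The objection voiced below is that installing such links by default keeps crutches around that clash with the architecture-upgrade use case.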
I was going to offer... you said, Steve, that you weren't terribly satisfied with the sort of top-level symlink package thingy.

You should be at least as satisfied with that as you are with the notion that somebody is running foreign binaries on the system. I mean, if somebody is willing to subject themselves to the whims of somebody else's binary-only software releases, then a few symlinks to keep it working seem pretty trivial to me.

Sure, but having those symlinks installed by default in such a way that they break my architecture-upgrade case...

You keep using this word "default", and to me that's the horrible worst case. It's the thing you install if you insist on belt and braces when neither is necessary.

One of the problems with that is that it might be more difficult to catch up with packaging. Absolutely, absolutely true. And it's one of the reasons I don't like it as a default: I think if you're going to change this, the right thing to do is to not keep those crutches around if you don't need to.

Yeah, sure. So I mean, if someone wants to package such a thing and it doesn't get installed or pulled in by default by anything, then I guess it should be OK, but I would rather not see this on normal installations.

I also think, you know, we have to be careful about worrying about the effort involved, and the reason I say that is that there's always this issue, when we contemplate things like this that might have a ripple effect through lots of packages, that we get very hung up over how much time and effort it's going to take to get through a transition. The reality, in my experience, is that if we come up with good technical solutions, and we get a few key essential things working in the new environment, and people see it and recognize that, oh yeah, this really does work, it solves a problem, and it's not all that hard to do...
...that's when people find themselves enabled and motivated to dogpile on and actually do the work of pushing the rest of it through. So my suspicion is that if we get this working at all in a releasable form for squeeze, then squeeze plus one is likely to be a reasonable goal for completing whatever transition we think needs to be completed, and I'm not worried about that.

OK, well, we've had time called on us, so thank you all for coming; it's been a good discussion. I hope we have some more good hallway-track discussions about multi-arch over the course of the week we're all here. Please don't disturb Guillem with your questions, because he should be working on the implementation, but I'll be happy to be his secretary and field any questions you might have.