I'm going to talk a bit about Multi-Arch. I've been doing this as my master's degree this spring, so we actually have it quite a bit further along than when I gave a bit of the same talk a year ago in Porto Alegre. Okay. This is what I'm going to talk about: what Multi-Arch is; a new and scary and exciting dpkg feature, which Scott, who might be in the room, has been working on; why Multi-Arch, because it appears that not everybody thinks Multi-Arch is a good idea; why not Multi-Arch, which probably addresses some of those concerns; a bit about how to do it in the context of Debian; what the current state is, because unlike a year ago, we actually have a current state now; and some questions, and hopefully some answers, at the end.

Multi-Arch is having programs for multiple architectures installed on your system and working at the same time. It means that if you're on AMD64, which will be by far the most common use case, you can have a 32-bit web browser with Flash plugins, if you so desire, while still running /bin/ls as a 64-bit program, if you want to do that. In Debian, we're probably not going to make every package Multi-Arch, because doing that would mean making loads and loads and loads of changes, and it doesn't really make any sense to be able to have both a 32-bit and a 64-bit /bin/ls installed at the same time, nor does it make any sense to do that for sed, or for anything else which does not have a binary interface. So the scope is... oh, there, it actually worked. So the scope is to have Multi-Arch just for library packages. You can still install programs, but most programs won't have an interest in being co-installable. Some exceptions exist, such as GCC, or possibly Perl, and stuff which uses plugins.

Multi-Arch consists of two parts: a file hierarchy standard, or policy, or whatever, and a packaging system implementation. The first we've discussed a bit; you end up with something resembling this, where you'll have all your i386 Linux libraries in /lib/i386-linux, and you'll have your include files in /usr/include/i386-linux, sorry. The other part is getting dpkg to actually understand that those packages which you install are almost, but not exactly, the same. As we know, dpkg can't have multiple packages with the same name installed at the same time. RPM can do that, which I think is a very bad idea the way they've done it, but that's another discussion. So dpkg needs to actually know that those two packages can be installed at the same time, that they have different architectures, and that they are co-installable, which you tell it by using a Multi-Arch: yes field in the control file of the package.

Now, a little bit about package feature dependencies. This is a feature which Scott has been talking about a bit. It's a bit scary because it changes lots and lots of assumptions. The main reason for doing something like this is that once we have loads of derivatives, you suddenly don't have a single timeline where you can say "a version less than something". You suddenly have the possibility that a feature is fixed or introduced in, say, one version in Debian, another version in Ubuntu, and a third version in Progeny. So you can't really have "depends on something bigger than that version", because it doesn't make any sense. So this is a proposal.
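As a purely hypothetical sketch of what such a feature dependency might look like (the bracket syntax and the feature name here are illustration only, not a settled format):

    Build-Depends: debhelper [multiarch]

The point being that the same line would then work regardless of which distribution's version number happens to carry the feature.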
There's no code for this yet, but this might be the way it will be done, where you can actually say: okay, this package needs those features from debhelper. We saw the problem this solves in some of the backports: if you wanted to build OpenOffice on woody, you actually had a hacked Build-Depends in the OpenOffice package, to build-depend either on one particular backported package or on anything bigger than some version, which obviously doesn't scale at all.

It also includes system-provided packages, so libc6 will depend on a magic i386-linux package; it will depend on the system being i386 and Linux, on i386-linux, obviously. On AMD64 it will be amd64-linux, and so on. The last part here, where it talks about what dpkg-shlibdeps will be doing, that's probably the syntax which will end up in DEBIAN/control. Most people will never see that unless you actually tear apart a .deb file by hand, or you go into the build directory and look at what the heck is going on in there.

This has a certain amount of complexity, which probably isn't so good, but at the same time we've discussed this, and we ended up... first we had an implementation which only did Multi-Arch, and Scott, who is the current dpkg maintainer, didn't want to just add yet another feature; he wanted to generalize it a bit, and this makes Multi-Arch a lot easier. It means you can actually say that...

[Audience] I'm just concerned that the features mechanism also has scalability issues.

Yes, it probably has. You can't have features for everything, and you could foresee, for example, that you had a feature every time you fixed a bug, so a package could say "okay, I need a debhelper with those bugs fixed", where you would end up with a feature line from here to hell and back. No, we're outside hell. Just speak up and I'll repeat it.

[Audience] In some measure, dependency solvers for RPM — there are at least three out there — part of the reason dependency solvers for RPM are such hell to deal with is that Red Hat has at least three different kinds of dependency. They have package version dependencies like we do, they also have feature dependencies, and then they have a third kind, which I can't remember... file dependencies, that's right. This turns out to be really hellish for problem solvers like the one we have in apt, and I just wanted to build upon Bdale's concern by being a little more specific.

Yeah, that was a comment about having multiple ways to specify the depends being problematic for problem resolvers, for package relationship resolution software such as apt.

[Audience] And I'm saying it's not just theory. The real world actually sees this in RPM.

Yeah.

[Audience, partly inaudible] If you're doing, say, a package that's for AMD64 only, you'll just list that — but what if it's, let's say, i386-only? Can it list i486 if it's only applicable for i486 or Pentium, things like that?

It hasn't been decided whether this will actually support sub-architectures such as i486, i586 and so on. It's certainly a possibility, yes, but so far the main interest has been around architectures which are different enough that you can't mix them freely, which you more or less can within x86: if you have the right CPU, say an i586, you can run any i386 package. So the main interest hasn't been in optimizing everything, because it appears that even though some people claim you get loads of performance increase, in practice, in most cases, you don't.
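Going back to the DEBIAN/control point for a moment, a rough sketch of what the generated stanza might end up looking like for libc6 itself, combining the Multi-Arch field with the system dependency just described (the field values follow the talk's proposal, not any final syntax):

    Package: libc6
    Architecture: i386
    Multi-Arch: yes
    Depends: i386-linux
    Description: GNU C Library: shared libraries

For what it's worth, the field that eventually shipped in dpkg uses values like "same" and "foreign" rather than "yes".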
So optimized sub-architecture builds are only interesting for a very, very small number of packages, like libc, where you just have a libc6-i686. Okay, why Multi-Arch? This is adding a lot of complexity to the system. It's adding complexity to the dependency solver, it's adding complexity to the file system, it's adding complexity more or less everywhere. So why the hell are we doing this?

We have some non-portable programs. OpenOffice.org springs to mind, which more or less caused the creation of ia32-libs, which is now more or less the biggest source package in Debian, because it includes the source code of GCC 2.95, GCC 3.0, GCC 3.3 and glibc for i386, and some other small packages. That is obviously not a very good thing, and it means doing security support for those is less than ideal.

Please wait until you get the mic. There are a lot of people here who weren't there during the last talk, and I said this there as well: if you're going to ask a question, please wait until the moderator comes with the microphone, so it ends up on the recording as well.

The reason for OpenOffice being like this is, of course, that OpenOffice for some insane reason includes assembly code to do its own C++ object thingy, which is totally insane to have in a word processor, but it's there, so... This is more or less the reason why we don't have OpenOffice on all our architectures yet.

It's interesting for embedded development and cross-compilation, because then you can suddenly install... if you want to cross-compile for ARM, you can just tell the system "hey, I want those installed", and even though you can't actually run them, you can install the libraries and use them to compile against. It means you can do the same thing if you're on PowerPC but want to use an i386 program: you can just use QEMU, and it can actually hook into the packaging system, so your packaging system knows "okay, I have QEMU installed, I can actually run and install i386 binaries", which is kind of cool.

[Audience] I'm sorry, this microphone thing is completely unworkable. The question, if I can remember what I asked, was: has anybody actually done that, or is it just theoretical?

The framework for doing it is there. I haven't actually tested it, both because I don't have a PowerPC system which runs at more than a snail's pace, and because it's not my main interest, but yeah, the framework is certainly there.

Proprietary plugins and software: we can try to ignore them, we can try to make them go away, and hopefully they will. But at the same time, people are using Flash, people need Java — you now have Java on lots of platforms, but not everywhere. In some cases, people actually do have to use proprietary plugins and software which only exist for, say, i386 Linux.

Another point, which I forgot here, is that for some ports, like the mythical Debian i386 NetBSD port, you can have just the base system as i386-netbsd and run the rest through binary compatibility. So you just install your regular GNOME or whatever, and you don't have to have a ports.debian.org with 20 gigs of NetBSD software which nobody uses because the port has three users.

And also, of course, because we can. It's cool, it's elegant, a lot more elegant than the current solutions, which are a mix of using lib64, stuffing stuff into /emul/ia32-linux, and having some magic ld.so support for accessing that instead of the regular /lib if this is a 32-bit program. I can see Bdale chuckling here. It's a really gross hack.

[Audience] So I had a couple of comments. One was to address Bdale's concern.
I think there's some additional infrastructure that's needed, because one of the things you can think about is that you could potentially have software emulators for a whole bunch of different architectures, and you could have one system that has four or five of them installed, and you would need to determine what your priority was going to be when you wanted to run things. You have to decide — say you're using apt or aptitude or something to install packages — which one you want to install by default, and set up some preferences: if it's available on this architecture I want it, otherwise use this architecture, otherwise use that architecture, that kind of thing. He said: do we really want to solve this problem, do we want to live in that world, because it gets horribly complex. And then one other comment about why Multi-Arch: most of the things you pointed out are very useful for people who are on non-i386, because that's where the world is, and it's very useful if you're on a non-i386 platform to be able to run some of these other things through emulation. But the other thing this will eventually buy us is the ability to migrate architectures. So maybe someday we can eventually get away from i386, if we have the ability to have a long-term transition from one to the other. You can have these binary packages installed, you can still rely on all this old stuff for a long time, and maybe eventually move off of it.

So, why not Multi-Arch? It's a lot of complexity. Yeah, it absolutely is. But so is maintaining ia32-libs, which I have been doing together with Bdale for a while. It includes evil stuff like diversions, it includes stuffing symlinks where they shouldn't go, breaking the FHS once in a while, or all the time. And I don't even want to think about the security implications, because ia32-libs most likely should have a couple of grave security bugs filed against it, since its source code and binaries aren't updated once there's a new security release. Fixing that would mean somebody doing a new ia32-libs upload every time somebody did an X or libc or some other small package security advisory, which happens a couple of times per release cycle.

We need to change some core stuff. We need to change dpkg, and together with dpkg: apt, aptitude, everything which knows what a package dependency looks like, basically, which is a fair amount. It also includes a fair amount of user interface problems, like Matt Taggart was talking about: how on Earth is the user supposed to tell the system "I want this", and navigate this in some useful way? Because if you have support for five different architectures, you no longer have the ability to install 15 or 20 thousand packages, you have the ability to install five times as many, with fun stuff like dependencies across architectures. In some cases you have an architecture-independent application, like sed: it doesn't matter whether you're running an AMD64 or an i386 sed, because it has only a text interface, but the packaging system has to actually know this and try to decide, or give the user the option to decide, which one he wants.

The tool chain needs to change. GCC 4 was fixed last night to actually do this by arch: it now selects /usr/include/i386-linux or /usr/include/amd64-linux based on whether you pass -m64 or -m32 to it. For glibc, the patch is there. It was supposed to be in sarge.
I don't think that upload ever happened, or at least I haven't seen the bug closed, so I think it's still missing there. Binutils needs some changes. What's amazing about those changes, though — just a second, Branden — is that, as I said, I did this as part of my master's thesis, and all this software has been patched so many times that there's actually no original source code left. It's all patches, which makes it even easier to patch again. The patch for binutils, for instance, is less than 10 lines. The patch for GCC is on the same scale. They're actually very small changes; it's just that they're there. Branden?

[Branden] To get back to something you were covering earlier, which you just mentioned again, which reminded me: you said we have architecture-and-OS-specific subdirectories of /usr/lib. That I understand, because there are ELF objects in there. Could you explain why we also need them for /usr/include? Because header files I've seen generally use the C preprocessor to determine what architecture they're dealing with. So in principle, you shouldn't need different header files for different architectures, should you?

In theory you shouldn't, but in practice you often do. Because what you have is stuff like... in some cases, a config.h generated by Autoconf gets shipped in the include tree, stuff like that. So basically, all you get from not having /usr/include/arch-os is that you save a bit of hard drive space, and potentially you get a lot of headaches instead.

[Branden] I've been spoiled by imake. I forgot about Autoconf.

The last thing, which is kind of covered by the first one as well: dak, katie, ftp.debian.org — all the scripts which touch that need to change to understand any changes in the depends syntax. I'll be showing... I'll actually show that up here.

[Audience] To address Branden's concern: if you do have header files that are arch-independent, they can still go in /usr/include, which will be fine, and that's kind of nice, because that way... ideally, you would push everybody to try and make everything arch-independent so that it would work properly, and then in the rare cases where it isn't, you put it in the other place.

Yeah, we're not ripping out any support. It's very important to preserve backwards compatibility, which is one of the reasons why we're just adding stuff onto the dependencies and so on, instead of removing anything: we need to preserve the meaning of the fields as they are today. If you have a .deb where the semantics of the dependency fields mean something today, they need to mean the same in a Multi-Arch world. If not, you could suddenly install a libc6 on your AMD64 system where that libc is for i386, and naturally everything breaks down, which is very bad.

[Audience] A lot of this complexity seems to come from the modification of the developer packages, and from supporting cross-compilation to different architectures. Wouldn't that... I mean, I can see possible, slightly kludgy but much more localized changes that you could do to make it work at runtime only, and that might be a lot simpler.

The main, or the biggest, complexity is actually in dpkg, handling the dependencies and getting those things right. That's much harder than the file system. The file system layout is easy.
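For reference, a sketch of the layout being discussed, using the arch-os directory names from the talk (the names that eventually shipped in Debian are full GNU triplets such as i386-linux-gnu):

    /lib/i386-linux/libc.so.6       32-bit libraries
    /lib/amd64-linux/libc.so.6      64-bit libraries
    /usr/include/i386-linux/        arch-specific headers, e.g. a generated config.h
    /usr/include/                   arch-independent headers stay here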
And the layout is easy because more or less all interesting software, except for X and some other stuff, uses Autoconf, which means that, more or less, you pass the right --includedir and --libdir to configure and it just works, barring bugs, of course (there's a sketch of such an invocation below).

[Audience] It actually has a built-in search path for stuff like the crt*.o files.

Exactly.

[Audience] That's the compile-time stuff. If you decided that you didn't need it to work for cross-compiling, and you just wanted it to work for executing things, then you wouldn't need to patch all of that other stuff.

Well, you would actually have to, because you need to... you can't reliably move a shared object after it has been compiled, after it has been told where it's installed.

[Audience] You could make the dynamic linker sort it out.

Yeah, that's a nice theory, except that you break it with dlopen, you break it with libtool. There's lots of stuff which breaks on that assumption. Try moving a .la file once, and you'll see it break in spectacular ways, trust me. Or ask Keybuk.

[Audience] Well, I've not been changing the source code to ld.so.

I have.

[Audience] Right now, I mean, if you want it to... if you change ld.so, which is where all the dynamic linking happens, to look in a different place, then it'll look in a different place, and everything will look in that different place.

Yeah, but you still have to have a way to decide whether... Say the package puts its symlink in /usr/lib, pointing into /usr/lib/i386-linux or whatever; you would still have the problem of what happens if you try to install for both — if you install libfoo for i386 and libbar for AMD64, and you compile something against them...

[Audience] You'd have to have some kind of thing to convert the package. That wouldn't be very hard.

Sorry?

[Audience] You'd have to convert the package: either convert the .deb to move the files about, or have it done at install time.

Anyway, this is all getting a bit out of hand. Yeah, getting the file system right is not the hard part; it's dpkg. Until somebody fixes Autoconf to pick those default paths itself, you more or less pass them to configure, and it does the right thing.

The other thing we need to do is split packages, which is not so fun, because we'll end up with lots of -common and -dev-common packages, since at the moment there's no way for multiple packages to provide the same file. So we're adding a small hack to dpkg where, if two packages provide the same file and it's a symlink to the same place, it's just refcounted, like directories are today. So any lib package will have /usr/share/doc/libfoo0 symlinked to /usr/share/doc/libfoo0-common, which includes the copyright and all that stuff which actually needs to go into the package. That's a hack, but it's the smallest hack we could find, and we have to provide copyright information and documentation for obvious reasons.

The current status: in Debian, the tool chain is more or less ready. The maintainers are already fixing their packages, or they have at least accepted that they'll have to fix their packages. Patches are available, and it works quite well. I've been running a system with both some multi-arch and some non-multi-arch binaries for more or less the whole spring, and so far I haven't really run into any big problems.
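As mentioned above, here is the kind of configure invocation a multi-archified i386 library build might use, with paths following the talk's layout (the exact flags are illustrative):

    ./configure --prefix=/usr \
                --libdir=/usr/lib/i386-linux \
                --includedir=/usr/include/i386-linux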
Upstream — the LSB, the FHS — they have a bit of interest in it, but it's mostly along the lines of: yeah, we'll see what Debian does, and whether it just blows up in spectacular ways or whether it actually ends up being very, very nice and useful. I think and hope we'll end up with something nice and useful, and everybody will follow this, because at the moment, using stuff like just lib64 breaks down once you go past two different architectures, which in some cases you do, especially once you do stuff like Debian NetBSD — NetBSD on AMD64 will be able to run something like five or six different ABIs. So... oh, sorry. Questions? You're already asking.

[Audience] SPARC is currently 32-bit for most packages, with a few packages having both 32- and 64-bit binaries. Have you considered, or is anybody looking into, what SPARC and Multi-Arch should do?

I don't know SPARC very well, but as far as I've understood, it doesn't really make any sense to run everything 64-bit, because 64-bit is by far slower in most cases. So basically what you would end up with is: for the packages where it makes sense, you have a sparc64 architecture and you provide those, but it doesn't have to be a full port or anything like that. With Multi-Arch, we can use 100 years, or 10 releases, or whatever comes first, on doing the transition. As long as you have the core support in ld.so and libc and so on, you can change your libraries at the pace, and in the order, you want to. So I would just see SPARC as having a sparc64 port which includes some small stuff, such as libc and PostgreSQL and some other things that it makes sense to have 64-bit versions of.

[Audience] Okay, so my question is: as far as I understood, Multi-Arch is still... at the moment when I started using Debian, Multi-Arch was already supposed to be implemented within days. So that, to me, is the status of Multi-Arch as far as I can see. And my question is now: when will Multi-Arch become real, in your opinion? When will Multi-Arch be just something normal that we use every day? Do you suppose we will have that for etch? For etch plus one, etch plus two? When will that happen, and how probable do you think it is that your expectations will be met?

Well, I need to check whether the base support went into sarge, because it would be very, very bad if you upgraded your system and stuff started to break. So this will probably have to go across two release cycles, where we first get the support into the base stuff before we actually start moving files into the correct directories. Or packages will at least have to depend on the proper version of libc, because it's an shlibs change. So I'm not sure you can actually do it in one cycle, because of dpkg: you're changing the syntax of the depends field, and with older dpkgs, I need to test whether they just go "blah", or whether they do something sensible, like saying "I don't really understand this, but I'll try to go on as best I can". So depending on what happens there, I think we need support in dpkg and apt and everything before we can start using the syntax in control files.
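For illustration of what that end state looks like from the user's side, this is roughly the interface that eventually shipped in dpkg and apt, years later (dpkg 1.16.x and newer; libfoo0 is a placeholder):

    dpkg --add-architecture i386    # teach dpkg about a foreign architecture
    apt-get update
    apt-get install libfoo0:i386    # install the i386 build alongside the native one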
At the same time, that's kind of independent of moving the libraries in the file system, because if you just move the shared objects, they'll all be in the right places, and stuff will look in the correct place and so on, but you won't be able to install the .debs yourself. You can't install .debs for more than one architecture until you have a dpkg which understands it. The question from Bdale was whether we could kind of skip ahead through some hacks and just get this done in one release cycle.

More questions? So, if there are no further questions: thank you for your nice and interesting talk, Tollef.

Thanks.