Welcome. Now we'll have the talk on multi-arch in Debian, six months or six years on, with Steve Langasek. So who in this room has heard of multi-arch before? And who's heard about it sometime in the past three or four years or so? Okay, okay, so who remembers how long ago we started multi-arch? Those are the old, old hands, yeah. So yeah, when I say six months or six years on, I refer to the fact that for about the past six months we've been actively making progress on multi-arch. But here's a quote from our good friend Tollef Fog Heen, who I don't know if he's in the audience with us today, but he made a comment that ia32-libs is now the biggest source package in Debian. This quote from Tollef, I got from the videos from DebConf 5 in Helsinki, July 2005, which was when we started, actually, well, in fact it was earlier than that that we started exploring solutions to this problem: the fact that we did not have a good way to retain compatibility with 32-bit binaries and 32-bit code on 64-bit systems. Well, now we've finally gotten to the point where it's worth talking about some actual solutions that we've come up with. So yeah, this is a problem that's been known for a long time. The solution is just six years late. Or 10 years late, I guess, by some people's reckoning. Honestly, I don't remember multi-arch being discussed back then, although ia32-libs was a going concern. That's right. So, ia32-libs, it's Bdale's fault. So, looking a little bit at the recent history, multi-arch was basically stalled for a long time. And in 2009, I took a serious look at what we needed to do to try to get things actually moving along. So, in May 2009, we had the apt and dpkg maintainers attend UDS, the Ubuntu Developer Summit, in Barcelona, Spain. And we knocked out a specification for how we thought this was going to work at the package manager level, and got a good spec out of that. August 2010, we had another follow-up BoF at DebConf in New York.
We identified some more issues there regarding how things actually needed to work: the library paths for certain architectures, and how the toolchain was going to handle this. Because we realized that, oh, the architecture name that the toolchain uses is not necessarily the right thing for a perfect multi-arch world in all cases. So then, come February 2011, over the period of about two months, and thanks to the sponsorship of Linaro, dpkg multi-arch landed in Ubuntu. So we had for the first time a distribution using an actual implementation of dpkg that could install side-by-side libraries for multiple architectures, provided that the library packaging was done according to the package manager spec. And everything would actually play together. March 2011, the directory names that we'd come up with the previous summer we decided to get rid of, and we came up with a different solution. April, Ubuntu 11.04 released with multi-arch support in the distribution. It was not enabled by default; nobody was actually exposed to this as an ordinary user. But the support was there in the package manager, and about 100 libraries, give or take, were available that had been converted for multi-arch. And it was enough to actually install the 32-bit Flash plugin package on an Ubuntu 11.04 system if you enabled a PPA and twiddled a few knobs. This just worked, and you didn't have any 32-bit libraries being shipped inside an amd64 package and all that goodness that is why we love ia32-libs so much. And today, in July of 2011, most of those patches from Ubuntu 11.04, well, in fact, all of the patches have been sent back to Debian at this point. Not all of those have actually been applied yet; some of them are in the BTS at the moment waiting for the maintainers to upload, because I think I was still pushing some of those patches when I was on the train here. Anyway. But in fact, there are now 135 converted libraries in Debian unstable today that are all set up for this.
And the numbers are growing. So who knows what the people in this room will do between now and tomorrow; maybe that number will go up a little bit. We'll see. So before going into the technical details of the actual implementation and talking to you about what we're doing with multi-arch right now, I want to highlight multi-arch as an example of how to, and how not to, make large changes to Debian that require coordination between many different maintainers, because over its history it has served as both. Part of why we stalled out, well, there are many reasons why we stalled out. But let me look at a few key lessons that I think are the takeaway from multi-arch, and hopefully they can be instructive to people here if they have changes they want to make to the distribution, so that they don't run afoul of some of the same problems we had early on. And one of the things that was crucial to actually getting multi-arch finally on its feet was having a written spec that recorded the shared understanding we had come up with in Barcelona, to say how this was actually supposed to work. And that was important for a number of reasons. It meant that when we were talking to other people in the community, we had something we could refer back to. It meant that three months after we'd actually agreed on something, we could refer back to it ourselves and actually remember the conversation we had. In fact, the apt implementation was a Summer of Code project in 2009. And David Kalnischkies did just an outstanding job of getting multi-arch support into apt, the higher-level package manager. And the work he's done there, stellar, but he did happen to overlook a particular bit that we documented regarding how we handle Architecture: all packages, which we then had to go back and fiddle with after the fact. And it was important to have captured that, because otherwise none of us would have remembered what we were talking about when we wrote it.
So, written specs. Splitting your work into bite-size deliverables is another important thing here. You know, going into Barcelona one of my goals was to pare down the scope of multi-arch to something that we could actually achieve in a reasonable amount of time, where we could go out and say, okay, what's the minimal thing we can do that gets us over the hump as far as delivering this? And so we said, okay, there's all this great stuff about cross compilation and partial architectures and all these other things that have come up in the long discussions around multi-arch, about wonderful things this would open the door for. But if we tried to tackle all of that at once, we would spend all of our time talking about it and never actually get around to implementing anything. So the spec that came out of the Ubuntu Developer Summit was: here's how you install two shared libraries side by side and make it work, and that's it. And by really trimming that down and making sure we had a good solid spec for that, it was something that we could go out and implement. And that's been done now. And also, make it clear how other people can help. So there have been people who have been early adopters of multi-arch for their library packages. We had, in fact, I'm not sure if in the lenny release, but certainly in the squeeze release, a couple of libraries that had already started using the multi-arch paths. Despite the fact that those paths wound up not actually being finalized and we had some changes after the fact, which was fine, because it was out there, and the support for it had already been put into eglibc and GCC in an earlier iteration. So we were pretty much committed to providing compatibility for that in any event.
But the fact that there was documentation out there, and people could go out and do it without having to wait for some sort of central committee to do it for them or tell them one by one, it's your turn to do this, the fact that people can actually just jump in and work on this stuff, is a huge benefit. And so this wiki page up here, I'm not a big fan of wikis in general, because I find it's too easy for them to drift over time. But this is exactly the sort of thing I do think wikis are useful for in the Debian context: you have the mailing list discussion, you work through exactly what all the issues are, and you use the wiki as a place to capture specifications or documentation, which exists as a permanent record. Instead of having people dig through the mailing list archives and having people tell them, oh, okay, well, go back six or seven months, I don't remember exactly what month it was, into the archive and look up this thread where these two people were talking on debian-devel; it doesn't work very well when you do it that way. So the more we summarize the things that we actually do understand, and this is kind of coming back to using written specs, because wiki documentation is, in a sense, a form of a spec in many cases, the further that goes toward lowering the barrier to entry for this kind of thing. Another key lesson, and none of these things are particularly earth-shatteringly insightful, these are all kind of obvious if you think about them, but I'm laying them out there anyway: there's nothing so permanent as a temporary solution. We were having a conversation in the hacklab just the other day, and I made a comment about how ia32-libs was always intended to be a stopgap, and Colin's reply was, well, yes, it was a stopgap. It was just a very, very large gap.
So now that we've gotten here, after all this time waiting and the work that's been done on the toolchain, on eglibc, on the package manager, what does it actually get us? Why is multi-arch actually relevant? Why do we care about it? Well, there are a number of things it does for us. It gives us cheap emulated environments, which allows you to only emulate the parts that need to be emulated. So basically, say you're doing QEMU emulation to test out Mono, to see what it does when you run it on ARM. Well, rather than having to have a full ARM system image or whatever, your emulation is going to be slow enough as it is. Why slow it down further and make it more bloated by having a full chroot or a full system image that emulates the entire thing, if you can just install the ARM Mono package, have everything else running x86, and only emulate the thing you're actually trying to debug? So, cheap emulated environments: emulate only the parts you need to. Another thing this does, which is important to a lot of people, is that cross compilation is no longer special. You get it for free, because cross compilation and native compilation are no longer different. We no longer have this special hierarchy under /usr where you install your cross build environment. And as a result, all the specialness of cross compilation kind of just falls out of the equation for the most part. There are a few things where some build systems have something special, but in 90% of the cases it just becomes a non-issue. So when we deployed this in Ubuntu, we had to deal with CMake breaking because of the path changes, in the process of fixing CMake so that it could deal with the fact that your libraries are not necessarily in /usr/lib anymore, they're in a subdirectory, and that headers might not be under /usr/include, but in a subdirectory.
Well, the logic that tells it what subdirectory those are in happens to work whether it's a cross compiler or a native compiler. So as a result of this, cross compilation of anything using CMake pretty much just works. Now, the actual details of the patches, we've gone through a few revisions of those to get that upstreamed, but you get automatic cross compilation out of it. Now, some of you may not be in the practice of doing cross compilation today, so you may be wondering why this is actually relevant to you. So, who in here maintains a package? Okay. Who in here has had one of those packages fail to build on ARM or MIPS or m68k? And who has logged into the Debian porter machines for ARM or m68k or MIPS in order to debug that? Who enjoyed that experience? Who enjoyed that experience and didn't have root on the box to be able to install their own build-deps? So the process of debugging build failures for packages, in order to have the portability that we value in Debian, historically takes a lot of these centralized resources that have to be shared and managed, with someone doing work to make sure the build environment is available, configured, and has the build dependencies you need, before you can get on with debugging. If you have a cross compiler available and you can easily do this on your own system, well, you don't need the hassle of SSHing to another machine, asking for build dependencies to be installed, and waiting for it to build in a very slow native environment. You can reproduce a lot of these issues on your own systems. So if you ever run into the case where your package is being held out of testing because upstream changed something that regressed on an architecture that you're not actually all that familiar with, and that is very slow to compile and reproduce things on, you can save yourself a lot of time by using a cross compiler and dealing with it all locally.
And in some cases, maybe you still need to have a real system of that type to reproduce the last bit of the build failure on, because you're trying to run some code at the end and it only fails on that machine. But then you've got your binary built, instead of the buildd having thrown it away as part of its chroot cleanup at the end of the failed build. You've got your binary and you can just dump it over there. And you can also iterate faster, because with cross compilers for some of these systems, it's a lot faster to do the compilation work on your x86-64 laptop with a super fast disk and lots of memory, where you can do it all in tmpfs, than it is to go to some ARM system that has a USB disk and 512 megs of memory. Obviously there are some differences here where cross compilation is a win, and not just for people who are actually developing for those architectures as porters. It also gets us crossgrading. Yeah, the comment there is that if you look at the history of multi-arch in Red Hat, after they deployed this they had some bugs reported by users who had inadvertently switched their systems with an RPM command or whatever from one architecture to another without meaning to, and things broke. My answer to that is, if we manage to get those bug reports, we've won, because being able to crossgrade is actually very useful in some cases. Over the lifetime of the discussion of multi-arch, we've had architecture transitions where we've moved from one ABI to another where it might have helped; I mean, arm to armel was a transition. Now, the machines you were doing that transition on, most of them didn't have a whole lot of disk space, and it may not necessarily have been all that useful for many of those machines to try to crossgrade from one architecture to another in place. For many of those machines, it was still going to be faster to just do a reinstall.
But having the capability would have been nice. It would have allowed testing some of this out in more of a smooth, gradual transition, test one bit at a time, and that's a good tool. And obviously for x86, there's no such reason why we would prefer to reinstall instead of doing a crossgrade. In fact, lots of people have said, you know, I'm running 32-bit because I installed the machine on a disk 20 years ago and I don't want to go about creating a new file system and running the installer again, so I'm on 32-bit. Even though they've moved that disk from one machine to another, or they've moved that file system from one disk to another, and the machine it's now running on is actually 64-bit, they're not taking advantage of it. We also have, in the near future, armel versus armhf as a pair of architectures which are compatible at the kernel level, but not at the userspace ABI, where having this capability would again be nice. It also gives us better support for binary software. Now, there are mixed opinions in the Debian community about whether this is something we actually want to support, and a lot of people think that binary software is only for those people who haven't seen the light and don't run 100% free software on their systems. But here's the thing: this is work that we are already doing as a project. It's in the Social Contract that we commit to making our system suitable for running non-free software on top of, even though we're not necessarily going to spend a whole lot of effort on it. As a community, we have decided that there are cases where it's important. That's why we have this ia32-libs package that's been in the archive forever, and year by year it grows a little, in order to support some of these use cases which we don't have any other way to handle on amd64.
As a community, we've never said, okay, we're going to throw ia32-libs out because it's so ugly that we don't want to support it. So instead we have this wart on the archive which, I don't know if anybody knows how big the package is in Debian; I know the source package in Ubuntu is over 650 megs, so the source package does not fit on a CD. Imagine trying to maintain that, and think about what kind of maintainability that actually gives you for that package, in support of something that is an important use case for a lot of users. Even if those of us in this room don't want to use the non-free Flash or Skype or Wine, or care about these things, those packages are in the archive for a reason, and it's because users do use them. So having a better way to support those, without making a mess of the supportability of the archive, is very important. So what can you do to move multi-arch forward? Same link to the wiki that I gave before. It gives all the details about how to convert a shared library, which is the common case for a package that is going to need some work to adapt to this new multi-arch world. And this is not something where, if you're a library maintainer, you have to run out and immediately convert your library. If you do, that's great; I mean, people have done that. I kept being amazed by some of the libraries when I was collecting the numbers for this talk, about how many packages have been converted in Debian. I'm like, wow, that's a strange library to be converted, and, you know, it's there, it's done. But this wiki page gives basically step-by-step instructions on how to do the conversion for most of the common build systems in use today in the Debian archive, including, if you're not using a build helper, how you would want to approach that. So we have this new interface in dpkg-architecture, which you can use to query the correct subdirectory name that you're going to use.
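As a rough illustration of that query: on a real system you would run `dpkg-architecture -qDEB_HOST_MULTIARCH` and use the answer as the subdirectory under /usr/lib. The `arch_to_triplet` helper below is a hypothetical, hardcoded subset of the real mapping, just to show the shape of the lookup:

```shell
# Hypothetical helper mirroring a few entries of the mapping that
# `dpkg-architecture -qDEB_HOST_MULTIARCH` reports on a real system.
arch_to_triplet() {
    case "$1" in
        amd64) echo x86_64-linux-gnu ;;
        i386)  echo i386-linux-gnu ;;
        armel) echo arm-linux-gnueabi ;;
        armhf) echo arm-linux-gnueabihf ;;
        *)     echo "unknown architecture: $1" >&2; return 1 ;;
    esac
}

# Libraries for each architecture live side by side under these paths:
arch_to_triplet amd64   # x86_64-linux-gnu  -> /usr/lib/x86_64-linux-gnu
arch_to_triplet armel   # arm-linux-gnueabi -> /usr/lib/arm-linux-gnueabi
```

Because the triplet differs per architecture, the same library package can be installed once per architecture without any file name collisions.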
And policy has already been updated for this. There's an exception to the FHS which allows us to say, yes, we want you to use these directories for shipping your libraries. That includes the shared libraries; static libraries can be moved as well, along with your .so symlinks, your pkg-config files, your .la files. And if you have a shared library that loads DSOs of any kind, where the library has plugins, well, you're going to need to move those as well. Because otherwise, if your plugins are under /usr/lib/foo and you try to install plugins to make both versions of the library work usefully, you again have a file name collision. So those also have to be moved over. And handling that is going to require some kind of coordination between the related packages. You can do that using Breaks, if you've got a small number of packages and you know it's a small, self-contained set and you just want to get it over and done with: upload them all at once, with Breaks against the old versions, and you're done. Alternatively, in some cases, particularly if it's a library that might have third-party plugins provided by somebody not shipping their packages in Debian per se, it may be useful to do a patch to the software so it looks in both the old and the new paths on a transitional basis. Now obviously, if it finds something in /usr/lib, whatever it finds there is not going to be available for both architectures, but at least it means you have, again, a smooth, gradual transition. Now, you also have the case where you might have helper binaries that are used by the library, where for whatever reason the library, or its maintainer scripts, call out to them; examples escape me at the moment, other than libc-bin, in fact.
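For a debhelper-style package, the conversion just described can be sketched roughly as follows. The package name libfoo1, the plugin package, and the version numbers are hypothetical placeholders; the wiki page has the authoritative per-build-system steps:

```
# debian/rules (fragment) -- pass the multiarch libdir to configure
DEB_HOST_MULTIARCH ?= $(shell dpkg-architecture -qDEB_HOST_MULTIARCH)
override_dh_auto_configure:
	dh_auto_configure -- --libdir=/usr/lib/$(DEB_HOST_MULTIARCH)

# debian/libfoo1.install -- the wildcard matches the triplet directory,
# e.g. x86_64-linux-gnu or i386-linux-gnu
usr/lib/*/libfoo.so.1*

# debian/control (fragment) -- mark the library co-installable, and use
# Breaks to force plugin packages off the old /usr/lib/foo path
Package: libfoo1
Architecture: any
Multi-Arch: same
Pre-Depends: ${misc:Pre-Depends}
Breaks: libfoo1-plugin-bar (<< 1.2-3~)
```

The Breaks line is the small, self-contained coordination case from above; the transitional dual-path patch is the alternative when third parties ship plugins you can't upload in lockstep.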
But if you've got helper executables that are built as part of the library, you probably don't want to put a separate copy of those in the multi-arch library directory for each architecture, because you really only need one. It's an executable: if you can execute it, it does its job, and hopefully it's architecture-independent in what it does for you, so you don't need multiple copies. And in fact, policy already says helper binaries don't belong in the shared library package, because that already breaks across SONAME changes. This is just one more reason why we should split those out, and they can be left in /usr/lib in a separate package. So, packages that are dependencies of shared libraries because they ship data or executables: once you've split those out, it's important, if you're going to have them work in a multi-arch environment, that they be marked as Multi-Arch: foreign. So, in the process of exploring how to make this all work in the package manager, we identified that there are in fact several different kinds of dependencies we need to be able to handle. There's the case where one package depends on another because it links to it, and you have to load it into memory, and obviously the code has to be of the same type, because you're not going to mix and match object code of different kinds in the same process memory space and have that work. So that's one kind of dependency. Another kind of dependency is: I call this thing, it's an executable, I run it, and it does something for me, and the interface there is basically an exec boundary. That's a different kind of dependency. Those may be the only two cases.
I guess the third kind is: I depend on this thing, but I don't actually care about multi-arch for such and such a reason, because I am the only thing in the world that does this, and it only exists for one architecture. More or less the binary-only case, or the I've-not-been-ported-to-anything-other-than-i386 case. So in order to distinguish between those dependencies in the package manager, and have the package manager always do the right thing with them, we do have to annotate them and say what kind of dependency each one is. And what we found was that the most efficient way to annotate this for the common case was to mark the package being depended on, to say what kind of package it is. So a package which provides an interface that is not an ELF library interface should be marked as Multi-Arch: foreign, if anything that is a library you want two of installed at the same time depends on it. So this is Multi-Arch: foreign, and if you maintain any binaries in the base system, you've probably already received some bug reports from me asking you to add that field to your package. Now, one of the interesting things about this Multi-Arch: foreign field is that intuitively we would not think we would need it for a package which is Architecture: all. You would say, it's already Architecture: all, that means it's architecture-independent, so we don't need to add any additional information; we can just use that fact. Well, there are a few cases where it matters. Thankfully we wrote this spec to capture that, so we could remember what those cases are. So if you're interested in that issue, it's documented in the spec; see previous comments about the importance of written specs. So that's basically it as far as what we need to be doing right now to implement multi-arch and make things work.
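A minimal sketch of that annotation, with hypothetical package names: the library is Multi-Arch: same so two architecture instances can coexist, while the helper it depends on sits behind an exec boundary and is marked Multi-Arch: foreign, meaning one installed copy of any architecture satisfies the dependency of both libfoo1:amd64 and libfoo1:i386:

```
Package: libfoo1
Architecture: any
Multi-Arch: same
Depends: foo-helper, ${shlibs:Depends}, ${misc:Depends}

Package: foo-helper
Architecture: any
Multi-Arch: foreign
Description: helper executables for libfoo1
 Invoked across an exec boundary, so any architecture's copy will do.
```

Without the foreign marking, the package manager would insist on foo-helper matching the architecture of each library instance, which is exactly the over-constraint this field exists to remove.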
I'm happy to go into more detail with people individually on that, but I do not intend this talk to be a tutorial per se, because I intend the wiki documentation to be complete. So if you find that the wiki documentation doesn't answer your questions, ask them, and then I am happy to add the answers to the wiki. But the next question is, what lies ahead? Where are we going from here? So the shared library question is pretty much solved, as far as what we're doing and how we're getting there. But there are other things on the roadmap, more or less, which are important to people. So I mentioned earlier that the dpkg multi-arch implementation was sponsored by Linaro, which is an industry consortium that focuses on improving the state of the art of Linux on ARM. And of course the reason they're interested in this is because of -dev packages. So we want to get to the point where -dev packages can also be installed and used in a multi-arch environment, and do some cross compilation support there. Now, I do actually have a demo, which I'm going to talk over. That's not the right thing. Where is my scrollback? There we go. So I figured I'd put together a little bit of a demo of a slightly hacked-together chroot doing cross compilation of a package. And unfortunately, the package I picked as my target, once I started timing it, is quite quick. So let's do this. It takes longer to install the build dependencies for it, even from the local cache, than it does to cross compile the package itself. Now, on the autobuilder, the last time this package built, it took about seven minutes. If you see the output here, it has a list of :armel packages that it's going out and downloading, according to what I've specified that I need. The chroot I'm installing this into right now has a few bits that have been fiddled with by hand.
libc6-dev currently is not quite at the point where we can co-install two of them. Aurelien and I have done some patches, just this week, to make that happen. Because not all of the glibc headers are the same on each architecture, and if they each want /usr/include, that doesn't work. So there's a little bit of moving things around that has to be done for libc. But you see, we're unpacking all this stuff. Once you've got that base level sorted out by hand, most of the rest of it just kind of works, at least for one architecture at a time, even though many of these packages have not yet been fully multi-arch enabled; the -dev packages have not been multi-arch enabled, but the runtime packages have been. So the fact that I'm only installing one -dev package at a time, because I'm only building for one architecture at a time, means it happens to just work. So this is kind of a rigged demo in that sense, because it's not fully multi-arch for -dev, but it's close enough. And we will eventually have, in fact, David Kalnischkies has also committed this week a patch to implement apt-get build-dep -a <architecture>, which is not the same thing as apt-get -o APT::Architecture=such-and-such, because when you're cross building things, you actually have to distinguish between the build dependencies that you need to run and the build dependencies you need to link against. So again, you have to know the difference between your host and your build architecture and do the right thing. So that's actually already implemented in apt-get, but I'm not using it today, because we don't have enough information in the -dev packages to correctly mark them as multi-arch to make that work. But, you know, we're talking a minute and six seconds to install the build dependencies. Obviously I am talking much longer than it takes to run the demo.
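The overall flow of a cross build like this demo can be sketched as follows. The package `hello` and the cross compiler package name are stand-ins, the exact flag spellings varied across the apt and dpkg versions of this period, and the `run` wrapper just prints each command so the sequence can be inspected without root, a network, or a cross toolchain installed:

```shell
# Dry-run wrapper: print each command instead of executing it.
run() { printf '%s\n' "$*"; }

# First, the chroot is told about the foreign architecture by hand
# (with the dpkg of this era, a "foreign-architecture armel" line under
# /etc/dpkg/dpkg.cfg.d/; later dpkg grew a command-line switch for it).

run apt-get update                          # fetch the :armel package lists
run apt-get build-dep -a armel hello        # cross build-deps (the new apt feature)
run apt-get install gcc-arm-linux-gnueabi   # a cross compiler from the archive
run dpkg-buildpackage -aarmel -b -uc -us    # build the binary packages for armel
```

Dropping the `run` wrapper executes the commands for real, as root, inside the chroot.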
The cross compilation requires just one more little bit, which is that you have to have a cross compiler installed, and I'm using the ARM cross compiler which is available in Ubuntu; this is an Ubuntu oneiric chroot. So there is an x86-to-ARM cross compiler available in the Ubuntu archive. And you see that I have not finished this sentence and we've already finished the compilation. So that's pretty much it for the package build. And, you know, I should have picked Mesa or something, so I could explain it all in the time it takes to cross compile. But yeah, cross compilation really does make a difference to building some of this stuff and being able to iterate rapidly. There are lots of use cases for cross compilation, so it's really very exciting. So that's one of the things: how do we get the -dev packages there, so we can automate a lot of this? And that means having all the -dev packages installing where they need to, so that we can co-install them, and also so that apt can figure out what kind of package each is and do the right magic when installing build dependencies. We already can move the .so, .la, .a, and .pc files when we move the shared libraries, provided you even have any .la files anymore, since this breaks the libtool .la references, and so it seems that we're removing a lot of those at the same time as we're moving things for multi-arch. But this is not enough, in the general case, to be able to say, yes, this -dev package is now Multi-Arch: same and can be used in that manner, because sometimes we do have architecture-dependent headers. And there are cases where you have an auto-generated header that pulls in bits at build time based on architecture. That's the common case for that. And this is not uncommon by any means.
Some particular libraries have already dealt with this on their own, such as GTK and GLib; those kinds of things have their own architecture-dependent headers, which they already install under /usr/lib, and they use pkg-config to connect the dots and make those headers available when you're building. And, you know, multi-arch does not obsolete that, simply because of the way GTK and GLib are installed: their header paths deliberately added an additional subdirectory, so that you could co-install multiple versions side by side, multiple versions of their headers and their development packages side by side. So as a result, pkg-config still has its uses in those cases, and it's not as if multi-arch means we'll move it all into /usr/include now. But there are a lot of libraries that do not do this today. And as a result, there's some work we have to do to figure out what the best practices are, as far as what headers we're going to move and what headers we're going to leave in place. There's a very clear trade-off here between the work the maintainer has to do to maintain the package, keeping track of the architecture dependence of headers, versus disk space used in the end. Now, for most -dev packages, header files are a very small percentage of the package size, because if you've got a .a file in the package, the rest is just nothing by comparison. So it may be that what we'll decide to do here is just say: if it's a multi-arch package, put all the headers under the subdirectory and call it good. This does break compatibility with some things that are not multi-arch ready yet, including some upstream software. So, as we do this, we will have to contend with that as well. But, you know, this is one of the next steps that we're going to be tackling here.
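As an illustration of how pkg-config bridges the relocated paths, here is roughly what a .pc file might look like once a library and its architecture-dependent headers both live under the multiarch triplet directory. The library name, version, and paths are hypothetical placeholders, not any real package's layout:

```
# /usr/lib/x86_64-linux-gnu/pkgconfig/foo.pc (hypothetical)
prefix=/usr
libdir=${prefix}/lib/x86_64-linux-gnu
includedir=${prefix}/include/x86_64-linux-gnu

Name: foo
Description: example library with architecture-dependent headers
Version: 1.2.3
Libs: -L${libdir} -lfoo
Cflags: -I${includedir}
```

A build then asks `pkg-config --cflags --libs foo` and gets the right per-architecture -I and -L paths, whichever architecture's .pc file is on the search path.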
We have a long list of libraries to convert in order to be able to drop ia32-libs, and we are working through that gradually. When I say "we", I mean collectively — it's not some sort of coordinated effort. I mentioned that we had 84 libraries in Ubuntu; those patches have all been pushed into Debian, except for about 20 of them which spontaneously appeared multi-arched, done by the maintainer before I had any opportunity to send the patches up. So as soon as the floodgates opened — it's really great to see this moving forward. But in addition to that, there's a long list of libraries we have to deal with before we can actually get rid of all of ia32-libs. The other thing we can do is start picking off the reasons that ia32-libs is on users' systems: take the reverse dependencies, look at the libraries they actually need, take a small set of those, and clear them out one small wedge at a time, so that ia32-libs doesn't have to be installed unless you're using, basically, wine — I think that's the big offender there. Sorry? "Wine is an offender in many cases." Wine is an offender in many cases — maybe I shouldn't have repeated that on the mic, I don't know. I'm happy to help identify some of those lists of packages that are candidates for conversion — the low-hanging fruit — if anybody wants to help with that and is interested in doing some work on patching those. The other thing that's going on is that we still have to get the multi-arch support merged into dpkg mainline. It is in the Ubuntu dpkg; it has not yet been merged into the Debian dpkg. It's still under review by the dpkg lead maintainer, and that's going on in parallel — there's no reason why we need to stop the work we're doing on library conversion while waiting for that.
This is something where we can be making good progress on all these libraries, so that the day dpkg is ready to go, ia32-libs disappears from the archive. Now, the other thing we're going to want to do once dpkg is ready, in order to make use of it and get rid of ia32-libs from the archive, is look at how we deal with the fact that amd64 is missing some packages from the amd64 Packages file. You want the i386 Packages file to be available in the common case, and we're going to have to decide whether we enable that by default, or what we want to do there. There are also a few bugs to sort out with eglibc — nothing too major, just a small issue where, if you happen to install a biarch eglibc package (which exists today) and combine it with certain multi-arch versions of eglibc, you can clobber your libraries in /lib with the wrong version, because it follows a symlink called /lib64. We have some ideas about how to clean that up, all of which are terrible, terrifying — but I don't think anybody in this room minds if the path to their ELF interpreter disappears temporarily while we're in the middle of this, haha. There's also some work we have to do here to make multi-arch usable for upstreams. This is something that perhaps other people who have been involved thought through better than I did, and understood better than I did what I was getting into — but they did not communicate that to me in a way that I was understanding at the time. You run into things like: oh well, you have a multi-arch libc6-dev, so you want to have two of them installed, which means the file at the path /usr/lib/crti.o is no longer available, because you can only have one of those at a time.
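For reference, the knobs mentioned earlier for enabling a foreign architecture looked roughly like this in the Ubuntu 11.04-era implementation. This is reconstructed from memory of that early dpkg, so treat it as an assumption to check against your dpkg version — later versions replaced the config file with a `dpkg --add-architecture` command:

```
# /etc/dpkg/dpkg.cfg.d/multiarch — early multi-arch dpkg configuration,
# telling dpkg that i386 packages may be installed on this amd64 system.
foreign-architecture i386
```

With that set (and apt configured to fetch the i386 Packages file as well), installing a foreign-architecture package was a matter of suffixing the architecture, e.g. requesting `libfoo1:i386` — which is exactly the "i386 Packages file available by default" question raised above.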
Which means that if you're trying to do upstream compiler development, those compilers have to be multi-arch aware, because they need to be able to find that kind of thing. Compilers and upstream build systems — we've worked through this with a couple of upstreams already: the CMake maintainer in Debian has been very proactive in addressing this for that package, and with Python we're trying to address it. One of the things that's missing right now is a generic, non-Debian-specific interface to figure out what the right multi-arch path is. So this is one of the things we're going to be working through, trying to talk with the right people in other distributions and in the appropriate standardization bodies to develop cross-distro interfaces for that. So that's what's on the radar. The last slide was basically my to-do list, which I will happily share and chunk out to other people as they desire. But even beyond that, there's a lot more that we can do with multi-arch. It basically comes down to breaking it down into bite-sized pieces: all of that was the bite-sized piece that we could do for the initial implementation. Now that that's out of the way, more or less, there's this whole range of things people have been talking about for years — "wouldn't it be nice? wouldn't multi-arch be great, because it would solve X for us?" Well, now we have to have those conversations, because now it's actually feasible, and it's really about the policy we want for how this stuff is supposed to work, not about the implementation. We have to decide: what do we do with the amd64 architecture, which happens to use a 32-bit legacy bootloader on systems that use a BIOS? Should we actually have an amd64 grub-pc package, or should the amd64 image reuse the one that was built natively for i386? Just to give one example.
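The generic interface asked for here later surfaced as things like `gcc -print-multiarch` and `dpkg-architecture -qDEB_HOST_MULTIARCH`; a portable build system that can't rely on either has to carry its own mapping from architecture names to triplets. A minimal sketch in shell — the table is a small illustrative subset, not an exhaustive or authoritative list:

```shell
# Sketch: map a Debian architecture name to its multi-arch triplet.
# On a real Debian/Ubuntu system you would ask the toolchain instead
# (e.g. "dpkg-architecture -qDEB_HOST_MULTIARCH"); this fallback table
# exists only to illustrate the shape of the interface.
multiarch_triplet() {
    case "$1" in
        amd64) echo "x86_64-linux-gnu" ;;
        i386)  echo "i386-linux-gnu" ;;
        armel) echo "arm-linux-gnueabi" ;;
        armhf) echo "arm-linux-gnueabihf" ;;
        *)     echo "unknown architecture: $1" >&2; return 1 ;;
    esac
}

# A multi-arch-aware compiler looks for startup files like crti.o
# under the per-architecture directory instead of /usr/lib directly:
echo "/usr/lib/$(multiarch_triplet amd64)/crti.o"
# prints /usr/lib/x86_64-linux-gnu/crti.o
```

This is exactly why two co-installed libc6-dev packages can coexist: each architecture's crti.o lives under its own triplet directory, and the toolchain resolves the path per target.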
I mean, this is only one example, and one I'm familiar with, but there are other packages in the archive where we build-depend on gcc-multilib in order to build, on amd64, a package with gcc -m32 — for no other reason than that it's the only way to get a 32-bit, not-portable-to-64-bit executable onto your system. Things like that. I'm very much looking forward to having those discussions with the community. I think we're going to try to have a BoF after this today, with the buildd maintainers, the FTP masters, the release team, and whoever else is available, to try to sort out what kind of policy we need in order to make this kind of thing achievable in the archive. Yeah, and partial architectures — that's going to be a lot of fun once we can actually have some of those, although people keep trying to promote partial architectures to full architectures. The old wisdom is that you don't need a full 64-bit port on these architectures, because all it does is take more memory and not run any faster, so you only care about a few libraries — but people keep finding new things. So, I'm out of time, unfortunately. The future of multi-arch is basically up to you: do with it what you want, make it great. I will take, I guess, only a couple of questions — perhaps one question.

[Audience] Hello, my name is Marcelo and I'm really new at this. So do you have something like a tip, beside all of it? For a really, really new person?

A tip for a new person? A key point?

[Audience] I'm sorry, I'm really new, but I like it. I will read it, so I will try to have fun with it — and I'm sorry if I'm taking the time.

So I guess my tip for a new person is: avoid maintaining libraries, because it's a bottomless hole that you will never get back out of.

[Audience] So Steve, you mentioned the word wine before — actually, about the idea of a multi-arch wine or something like that.
[Audience] Actually, wine is just another ABI interface, so it really should not be handled as "wine on easy access to amd64" — there should be a proper solution.

So you are proposing that we start up a new partial architecture in Debian, which...

[Audience] That doesn't sound like the correct technical answer. I mean, it's nothing for today or tomorrow, but...

That sounds great. Let's talk about that.

[Audience] A lot of packages, for example — if you follow wine using winetricks, you just download some Windows builds from some unknown websites, which we could actually compile natively in Debian for the other ABI, which would of course be the more correct way to do it. Like 7-Zip or anything else like that. Steve, thank you, seriously — as one of the few people who has uploaded ia32-libs, thank you.

And I do need to call out the fact that this has not been a one-man effort, although I'm the one standing up here giving the talk, because apparently I am the glory hound, or whatever I am up here. There are a lot of people who have contributed to making this happen: Aurélien and Matthias on the toolchain and eglibc; David Kalnischkies and Michael Vogt working on the apt implementation; Raphaël Hertzog and Guillem Jover working on the dpkg implementation — just to name a few of the people who have been so instrumental in making this happen. This is a group effort, and you should all be very proud of what has been accomplished here. It's been a long time coming, but finally our implementation knocks the socks off of the RPM one.

Thank you, Steve.