Someone on IRC is asking if there is a kind of guide for testing your package cross-builds. A guide — so there is a link on the last page of the PDF which is basically the place. Yes, if you want to find out about the cross-building stuff, that URL is what I think of as the top of the document tree. So there's a link there to doing multiarch cross-building, which will probably be the best place to start. I'll add a link to this. Right, okay, so I'll just give it a name. So this will be... Multiarch cross-building still says new document. Okay, I didn't see that. Ah, there, right, okay. There you are. Ah, that will be useful, actually, yeah. Right, so there's a Gobby document there. Let's see, let's put a new document in. Oh well, we'll do it that way then. So it's a bit difficult running a BoF in a room like this. It doesn't really work. Come closer, come closer. Right, okay, so this document contains a list of seven issues, which I think were all mentioned in the previous talk. If I went away with any sort of answers or opinions on them, that would be helpful. So: running foreign-arch binaries during install of library packages — does everyone agree that just saying "|| true" everywhere is okay? I didn't do that. 
Somebody else is highlighting. Oh no, it's just somebody's colour, isn't it? Yeah, it says running foreign-arch binaries during install. So this is libglib that runs whatever the hell it is it runs. Did I put the examples in here? Yes: glib-compile-schemas and gtk-query-immodules-2.0 and all this, libgvc5. I don't know what any of these things do. But when we're crossing, I'm pretty sure we don't care, and running those binaries won't help. If you have QEMU installed, it'll just do something which is probably the right thing. It might use the wrong files. I don't suppose the maintainers of these packages are here? No, of course not. So, does anyone have anything to say? I think we should just say, effectively, if you just say "|| true", then it becomes a warning. You still get it printed out as a "couldn't find file blah". The only problems we had when we were trying to do this with Emdebian Crush was that sometimes some of these packages will put a dummy file or a placeholder in the way, and then when you're trying to install the package you just cross-built, it won't overwrite it. So you won't actually get the cache, and you have to rerun the thing on the device. You have to invent a way of regenerating this data which should have come from build time. It was moved out of the postinst into the build system some years ago, and there was some confusion with a lot of the... I think it's mainly the GNOME maintainers. There was some confusion about exactly why that was done, with the idea that it was because of some kind of cross-building issue. So we need to go around that loop again and try to work out — go back to the history — why was that change made in the first place? Was it for their reasons or for ours? Was it a mistake on our part? Can we undo it, please? We did have these things taken out and done at build time, and now they've been put back into the runtime installer. 
I seem to remember that when we were doing this there was a mix between packages that did this work in the postinst and packages that were starting to do it in the build system. Obviously for cross-building purposes the postinst is the right place to do it. Whether there was some kind of sequencing problem there, and whether dpkg triggers could have fixed that but weren't available at the time and therefore they did it another way during the build — we've got to go through that with the relevant maintainers. Those were the issues that we found at the time. Are there any GNOME people here? So I guess we'll send patches in that say "|| true" for now and see if anyone complains. Yes, you could be a bit cleverer than "|| true", so you could check whether you are cross-installing. I don't think the postinst is going to have the build arch available unless it runs dpkg --print-architecture to get it. I'm not sure the postinst can rely on dpkg --print-architecture being present. Can it, Steve? No, because it'll still run the wrong version. In the postinst you're not going to have any clue as to what architecture you were built on, which is what matters here. Does an installing package not know what architecture it is? No — when you're installing it, you know which architecture you're installing onto. Yes. Does a package not know — dpkg knows what architecture it's installing for, and it knows whether it's foreign or not. Sorry, yep, I'm looking at it the other way. So we can always run dpkg --print-architecture and we'll get to know what native is. So I don't know. We could try a fancy test, but I'm not sure if that would just go wrong as often as the existing thing. What do you think, Steve? Microphone. Come and sit closer. I know you'll have things to say. 
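The limitation being described can be made concrete: a maintainer script can ask dpkg what architecture it is installing onto, but (per the discussion) nothing tells it what architecture the package was built for. A minimal sketch, falling back to "unknown" where dpkg is absent:

```shell
#!/bin/sh
# What a maintainer script can discover at install time (sketch).
# The *native* architecture is queryable; the architecture the package
# was built for is not, per the discussion above, which is why a clever
# "am I being cross-installed?" test is hard to write.
native_arch() {
    dpkg --print-architecture 2>/dev/null || echo unknown
}

echo "installing onto: $(native_arch)"
```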
So one of the considerations there is, when you're talking about the postinst doing things that you don't care about when you're cross-building, the distinction there is not whether it's a native-architecture package or a foreign-architecture package. The actual distinction is whether you're installing it because you want to use it as a runtime library or as a build dependency, and that's where it gets tricky, because if I'm cross-installing i386 versions of these libraries, I expect the postinst to run and succeed, and if it fails it makes a difference. Good point. I'm not sure that I see a perfect solution to all of this anywhere, because we certainly don't have any way to express the idea of "I'm installing this because it's a cross dependency" — I guess we could encode some conventions and have those populate the environment or whatever. If you did this test right you could say: if I'm cross-installing and it failed, then that's not a failure. Except that's not right: if you're installing i386 on amd64, it's a cross-install; if it fails, that's actually a bug, not something that should be ignored. Indeed. It would be a compromise, because it would usually work and you'd still see the warning, but it wouldn't actually cause the package install to fail. Right, so I guess as far as compromises go, the current one you're going with is as good as anything else as far as I'm concerned. If we're going to do something different we should figure out exactly what the semantics are to do it right, so we actually have some capability of saying "ignore this failure when cross-installing", and otherwise not. 
Now, actually, you could encode this logic in your cross-build environment and have a dpkg diversion lying in wait on the file system: if you know you're going to be doing this, you could just pre-divert the script, have it replaced by a symlink to /bin/true, and then the postinst succeeds. But we want to get away from encoding all that logic in our cross-build environments. Actually, Steve, this is a question for you as much as anything else: when we're doing this in the postinst, is there a way of working out that the compiled binary — the ELF binary — we're trying to run is something we can safely run with the setup we've currently got? If you're on amd64 and the script wants to call a binary that we know is i386, we know that will probably run one way or the other, and that would be okay; but if we could determine, even from something like libmagic, that the file is not going to be able to run... We want a sort of "is-runnable" tool which would say: yeah, this is i386 on amd64, that's runnable, we expect that to work. No. However, I just had a great idea for an evil hack, which is: QEMU works by installing a binfmt handler — in /proc you tell the kernel how to run an ELF binary of a given architecture. You could tell the kernel that the way you run all ELF binaries of this architecture is by passing them to /bin/true. Can we temporarily install a binfmt handler and then take it away again, I guess? That's quite scary, isn't it? That's a little bit Scratchbox. That's not completely crazy. That's an interesting idea. Somebody write that down. Okay, I guess that's enough of that for now. chrpath — should we ever need it? Is it important? Are there things... It seems to me that anything where in the build we have to take the rpath out again is just because we built it wrong in the first place, isn't it? And it will be hard to fix. Yeah, repeating for the benefit of the video. 
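The diversion trick just described, demonstrated in a scratch directory rather than against the real filesystem. The helper name is illustrative; in a real cross chroot the diversion would be done with dpkg-divert before unpacking, along the lines of the comment.

```shell
#!/bin/sh
# Scratch-directory demo of "divert the helper to /bin/true". In a real
# cross chroot you would run something like:
#   dpkg-divert --add --rename /usr/bin/glib-compile-schemas
#   ln -s /bin/true /usr/bin/glib-compile-schemas
# so the package's postinst "succeeds" without doing anything.
set -e
dir=$(mktemp -d)
ln -s /bin/true "$dir/fake-helper"

"$dir/fake-helper" --any --arguments   # /bin/true ignores arguments, exits 0
echo "diverted helper exited with $?"

rm -rf "$dir"
```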
Sometimes that is a pretty fundamental bug in the upstream build system, and maintainers are using chrpath -d because they can't figure out a saner way in polynomial time. Okay, so it seems to me that binutils ought to be able to help with this, because it knows about all the object-format foo. I don't know how hard it is to make a chrpath that would deal with foreign binaries properly using binutils' multiarch-style stuff. Or whether in fact just not bothering is ever really a problem when cross-building. I guess you end up with binaries that you didn't want, and that's probably bad. So if anyone wants to have a look at how difficult that is — it's just a little thing in the header, isn't it? There's a few bytes in the header. It's just that you need to know whether it's big-endian or little-endian and how many bytes things are encoded in, in order to fiddle with it. So maybe we could just make chrpath a bit smarter, and it would just need to use libbfd or something to do the right thing. Is it our tool, in fact? Sorry, minor thing, Wookey: can you increase the font size and basically make more of the pad visible? No. Left. That one. That one, right. Edit. Is that enough? Is that adequate? Thank you. Good point. Can we get rid of that as well? Whoa. It's enormous. No, that's much better. You're right. Sorry, I can see it from here. Okay, so I think that's enough of chrpath. Multiarching Perl. So one of the things I forgot to mention is that the reason loads and loads of build dependencies don't install is that something depends on Perl or Python; currently neither of those is properly multiarched, so you don't get your dependencies, and there's an awful lot of things that use something Perl-y or something Python-y. Perl actually seems to be quite simple. There is one Perl library. Some packages do link against it, so if you have a C API to Perl, it links directly against that library. 
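The "few bytes in the header" really are few: in the ELF identification bytes, e_ident[4] gives the word size and e_ident[5] the endianness, which is most of what a cross-aware chrpath would need to know before reaching for libbfd. A quick look with od, assuming a Linux system where /bin/true is an ELF binary:

```shell
#!/bin/sh
# Peek at the ELF identification bytes referred to in the discussion.
# e_ident[4] (EI_CLASS): 1 = 32-bit, 2 = 64-bit
# e_ident[5] (EI_DATA):  1 = little-endian, 2 = big-endian
f=/bin/true

magic=$(od -An -tx1 -N4 "$f" | tr -d ' ')      # should be 7f454c46 ("\x7fELF")
class=$(od -An -tu1 -j4 -N1 "$f" | tr -d ' ')
endian=$(od -An -tu1 -j5 -N1 "$f" | tr -d ' ')

echo "magic=$magic class=$class endian=$endian"
```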
So we need to be able to have that multiarched. But I think — at the moment there's a perl package which doesn't contain the library, it just contains some docs, and there's perl-base, which contains the library and everything else. I don't know why that is. Does anyone here know why that is? I mean, it's possibly because if you had a separate library and perl-base, you'd have to be really careful when upgrading them to make sure you did them both together, because you're using Perl during the upgrade and everything would blow up. I'm assuming we have mechanisms for that stuff, but in fact I think all we have to do is put the library in a multiarch path inside perl-base exactly as it is now, declare it Multi-Arch: allowed, and we're done. Does anyone disagree? Does that make sense to you, Steve? Sorry, I missed the first bit of what you were saying, so I'm trying to remember exactly what the details are of how Perl is put together, because I think at one point the libperl library package was a virtual package on some architectures and a real package on — well, not a virtual package but a dummy package on some architectures and a real package on others — and I was just looking now, and I see that on amd64 it's a dummy package, and I don't know why that is. Because there's a perlapi package as well, which I guess is the one you depend on to get the right version. So Steve McIntyre is saying that it's a real package on i386 and a dummy package on amd64 — which I remember being the opposite, because what I remember is... someone repeat what Noodles said. Please can you repeat that for the stream? If you actually look at the description of libperl on an amd64 box, it says that it's the shared Perl library for architectures where the Perl binary is statically linked against libperl — which is only i386. So it's a dummy package everywhere except i386. Okay. Yeah, the point of i386 being static I think was to do with -fPIC. 
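The proposal amounts to a couple of control-file fields. A sketch only — the package names and field values are the ones guessed at in the discussion, not checked against the real perl source package:

```
Package: perl
Multi-Arch: allowed
# lets dependencies be satisfied as "perl:any" where callers only need
# the interpreter, regardless of architecture

Package: perl-base
Multi-Arch: same
# the shared library lives in a multiarch path, e.g.
# /usr/lib/<triplet>/libperl.so.5.14, so per-arch copies can co-install
```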
Does that make our lives harder in terms of just saying perl is Multi-Arch: allowed, and we can make libperl Multi-Arch: same, I guess? Does that mean we have a problem if you have perl-base with the foreign library in it, and then on i386 the library is in a different package? I guess we'll have to sit down and have a think. Unfortunately I don't think there are any Perl core people here. Or any Perl module people. But it looks tractable to me, and I think we should just try it. Python is harder. doko's been looking at that. doko? I was just saying Python multiarching — you've been having a look. Where are we at? What do we need to do? What's left? How hard is it? OK, but do you know what needs doing? Or you're not sure yet? Well, the thing is that upstream is not cross-buildable at all, so you have to get cross-build support upstream first and then you can think about... Well, making it multiarch, so that we can install the parts separately, is technically a separate problem from whether we can cross-build the package itself. We'd like to do both of these things, but for everything else to be installable we just need the multiarch stuff in the packaging. Yeah, but it needs to be done. It's being done, but it's not yet ready. OK, because I thought Python already cross-built. OK. Um... So do I just leave that to you and it will happen one day? Do you have any idea when one day is? After DebConf, he said. So Python and Perl are both quite big blockers in terms of being able to install cross dependencies, so I'm quite keen to fix those kind of next, really. Yeah, and of course, going back to Perl, the issue is that if you want to cross-build lots of your Perl modules, then you don't just need a multiarched Perl — you need Perl with the architecture definitions, essentially the site config, already set up for the architectures you're cross-building for. OK, so the kind of Perl Config which the build presumably generates and we don't currently put in a -dev package? 
It basically defines it as you build it natively. So what we had for an initial Perl cross-build was that we just ended up with a config for each architecture, which would get installed. So there's a separate Perl cross-building question. So there's basically creating a config for each architecture, kind of manually, and then saying: right, that's how you cross-build Perl. So don't run all the gubbins it normally runs — which it tries to do on a native machine, or over an SSH connection — here's the answers, just use that. And that works. You just have to maintain it for each new architecture. But someone upstream kind of autoconfiscated all the innards of Perl to make it cross-build properly, sent it to the Perl list about a year, two years ago, and was roundly ignored. They didn't say no. They didn't say yes. They didn't say anything. So there's a pending question of whether we should just say: do you guys want to do this so that it just works for ever more, or do you not care? It's been on my to-do list for, well, as you know, months and months, to go and talk to the Perl folks about the cross-build stuff that we did have working — but that was against Perl 5.12, and the world changes totally with every Perl release. So the patch Steve did — Patrick McDermott has been updating it for 5.14 — so I think we have, without too much work, a build that will work for now. It would be nice if upstream wanted to fix this properly. But I think being able to install everything else that depends on the Perl part is a lot more important. Well, I guess we need both, ultimately. That's what's really holding up everything else: finding out whether they even build or not. So, you're saying we need the modules part — does that mean we need a perl-dev that contains arch config for each architecture? Yes, essentially. If you want to be able to cross-build any of your Perl modules, then you need that. 
So this comes back to a more general question: we've got lots of things which have some kind of arch-specific "how I built myself" config — all those config scripts that aren't currently pkg-config, and Perl, and Apache has its own weird crazy shit which is even crazier. Should we stick all of that in some kind of cross-support package, so you just install all of it for all the architectures that we support in a big bucket, so we know where to look? Or do we have lots of tiny packages containing cross configs? I'm not quite sure what to do with that. At the moment we're collecting all the autoconf cache information inside dpkg-cross, and maybe we should rename it to be called cross-support or something. That sounds reasonable. A cross-support-architecture or something. And then the question is: how would that collect build config from a whole pile of other packages? We can just maintain it manually, but that doesn't seem very likely to stay working for very long, does it? I guess that's what we're doing for the autoconf stuff. Yeah, exactly. Somebody has to do it, so pulling it together into one central place, maybe? Okay, maybe that's not crazy talk. What else is on this list? Python — doko is going to fix it all. Yeah, so all these config files: most of these are nearly architecture-independent, until you multiarch the package, at which point the lib path gains an architecture-dependent bit, annoyingly, and the --cflags output is often architecture-dependent, because it says -msse or something, which I assume depends on your arch. So, yeah, I guess we either stick something in our cross-support package or try to make people use pkg-config. I guess you haven't actually tried this with any upstream, saying "can we just use pkg-config please"? I have no idea whether that's likely to work. There's reasons they're avoiding it: they're kind of core packages and they don't want to depend on pkg-config. 
No — we had some success with, I think it was ORBit, when we were doing this: we went upstream and they just said "oh, we forgot to take that out, we don't want to use it any more, how do we use pkg-config instead, please?" Okay, so a lot of this is probably just old, and nobody's seen a need to change it. Okay, so Tcl is a bit different from the others, because the script lives in /usr/lib/tcl<version> rather than /usr/bin. So I don't know whether we can just multiarch the script and have the version for that architecture. That's the other thing you could do with these: you could install each one under a /usr/lib/<arch>/ path instead, and then just keep them exactly as they are and make sure the build can find them. Right — I mean, a lot of the problem with trying to get rid of any of these per-library config scripts that are running around right now is not so much persuading upstream that they should use pkg-config instead. A lot of it is to do with the fact that these are now interfaces that are exposed to the software that builds on them. Yes. And there's going to be a long transition period if we're saying we have to fix these upstream. Now, the idea of moving the tools into an architecture-qualified path and just using a path setting in the build is one possibility. That's a transition we could do relatively quickly just within Debian, because you just have to poke the related set of packages to export the right path in the environment. So yes — you mean anything that builds against one of these libraries generally uses foo-config --cflags to make sure it's built with the same cflags? Yes. I haven't got statistics on just how many uses there are of these things. I haven't seen huge numbers, but as you say, it is an interface. Okay. It is possible to make some of the old config scripts just be an empty wrapper around the pkg-config call, so that the upstream package could provide the .pc data. Something over there. Mic over there. You can work around some of those. 
We need more mics; it will be a lot quicker. You will get more problems than you solve. It's better to fix it properly with pkg-config files to begin with, without wrapper scripts, because it's going to introduce more bugs. The main problem with trying to turn the script into a wrapper script around pkg-config — and I haven't actually gone out and done this, for instance for FreeType, which I maintain — is that you don't get the triplet logic. You actually have to get the .pc file, which is architecture-specific, and you're not using the pkg-config wrapper that has the triplet qualifier, which autoconf needs to work out which version to call; we don't have interfaces to do that, so that's why I've not done this. The right answer is going to require fixing in the build system of the reverse dependency, so you might as well just have them all use pkg-config directly anyway. What else have we got here? pkg-config triplet foo. I think this general concept is useful: call anything architecture-specific "triplet-thing", which we can probably do in quite a lot of places. autoconf already understands that, so it's convenient to use that mechanism. We've done it for pkg-config and it works: the pkg-config cross wrapper is one file, and all it does is set the path according to the triplet name you called it with and then do its normal thing, and that works fine. So you need a package for each architecture which just contains the link, which is kind of sucky, and we could just put them all in cross-support and have loads of links; but the problem is that then we'd have the link with, maybe, no pkg-config data behind it, and I don't know whether autoconf checks that it's working — I guess that won't matter — and it's kind of nice if we could just ship the whole caboodle rather than have hundreds of tiny packages containing links, for not just pkg-config but a whole lot of other things as well, potentially. 
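The one-file wrapper described — installed or symlinked under many triplet-prefixed names — boils down to deriving the triplet from the name it was invoked as and pointing pkg-config at that architecture's .pc directory. A sketch with the name-parsing separated out so it can be exercised directly; the paths are illustrative:

```shell
#!/bin/sh
# Sketch of a pkg-config cross wrapper. Installed (or symlinked) as e.g.
# arm-linux-gnueabi-pkg-config; every such name shares this one body.
triplet_from_name() {
    n=${1##*/}                 # strip any directory part
    echo "${n%-pkg-config}"    # strip the -pkg-config suffix
}

triplet=$(triplet_from_name "$0")
echo "would run: PKG_CONFIG_LIBDIR=/usr/lib/$triplet/pkgconfig pkg-config \$@"
# A real wrapper would export PKG_CONFIG_LIBDIR and then exec pkg-config "$@".
```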
At the moment, the dpkg-cross config files — it does just lump everything onto your system: you install dpkg-cross and suddenly your system knows what the endianness of sparc32 is and all this kind of crazy stuff, because it just lands in one big place. That's not a problem for autoconf, because of the way those files get used. But it does mean that we've already got a mechanism to split those out into individual binary packages, and then you've got packages that aren't just symlinks — they've got useful data in them. I see what you mean — if we split dpkg-cross. If you split them there, then you've got somewhere to put everything else that is too small to ship on its own. Yeah, maybe that would make more sense. So at the moment there is a pkg-config cross package which generates 13 tiny packages. That's what Ubuntu's done. This actually comes out of the toolchain base foo package, and I'm increasingly coming around to the idea that I want a multiarch support package — sorry, a cross-support package — to dump a load of stuff into. Yeah, and this GObject introspection is something which is awkward. There's a wiki page that I should have put the link in — it's here somewhere. Here we are. Yes, magic. So I still haven't found anyone to explain to me exactly how this works and what it's used for, but this is a reasonable description: any binary you build that uses GObject gets scanned to find which GObjects it used, and because GObjects know about — I don't know what it is they do — you can generate an API description from that, as is my understanding. Quite a lot of the GNOME stuff uses that, and it all makes sense, because you build things and then you build the API from it and the docs from it and everything, and it all matches up so you can't screw it up. It's just not cross-friendly. So someone should get really enthused and make it work. I don't have anything useful to say about it at the moment. 
At the moment we've been able to just not run it in the core packages, and that seems to work okay, but I bet you can't build GNOME without at least some GObject introspection working. Anyone have any opinions, suggestions, people we should hassle? I don't know... No? Okay, we don't know. It's not critical right now — we're trying to bootstrap core systems, so you can kind of ignore this; if you want to cross-build stuff with desktop packages in, you start to care about this. That was my list of issues. There's the QEMU problem, I suppose: quite a lot of this you can gloss over by just installing QEMU. I don't know whether all the binaries will run, and they might still run on the wrong files, but it lets you get further. It's not much use for new-arch bootstraps, where QEMU doesn't exist yet, which is currently what I actually care about, so I'm mostly avoiding it. But I guess one of the things I'd like to do on the rebuildd was have it building everything with QEMU and without, just to see how much difference it made; unfortunately that was awkward to configure, because rebuildd is a bit thick, so I haven't yet. If anyone is enthused to try that, it would also be very interesting. I'm pretty sure that the config scripts and the caching scripts look for things in non-triplet-based files, and they will just use whatever they find, so they could quite easily pick up the wrong architecture and scan the wrong stuff. But that's quite an easy check, because you can just do an MD5 checksum comparison against the original architecture's copy of the file, and you should find whether those actually are i386 copies on armel, etc. So that's the other thing which is definitely of interest and needs doing: tools to check. It could be Lintian checks or some other tool to actually look at the output packages. So I've got a pile of cross-built packages, which currently aren't exposed on the net, but they could be very easily, and then just compare the differences between the cross-built package and the normal package: how many files are missing, how many files are the wrong architecture; and then look at the binaries themselves and say, do I have the same ELF header parts, does it look like it might be the same thing? I think if we had those QA tests we'd be a lot more convinced that what we were building was actually useful. I haven't done any work at all on that; I'd love somebody to — if we could just knock up a few tests it would be really useful, and then we could start running them. So that will be fun to play with this week, maybe. Does anyone have anything else they've thought of or noticed? I guess I could show you the web page bit, just because it's quite pretty. There's a lot of these... do it again... This is the output you get: that's unstable, lots of packages, what architecture it built for and when, and what went wrong. As you can see, we have a lot of build-dep failures, especially here, and you can just click on the log to see what barfed. If your window is the right size you'll be able to actually see it — let's try that — here we are. So generally, right down the bottom, it says it depends on binutils but it's not going to be installed. Sounds like something that ought to be working, doesn't it? I don't know what went wrong there. So it's now very easy to see whether your package cross-builds or not — go and check, and if it doesn't, see if you can help. In the case where it's a build-dependency problem there's not much you as a package maintainer can do, but if it started to build and failed, then you can fix that, and we'd love more people to be doing this, because whilst it trundles along, it's quite slow. Is there any way you can parse those logs and actually get a listing of the ones that are being blamed for being uninstallable? Yes — there's all sorts of things you can do with that log pile to generate a much more exciting web page. I mean, my error messages at the end are a bit low-tech at the moment. I've got some nice awk scripts which suck out the things it recognises: "unable to determine build status", "probably failed" — that usually means something like it didn't actually download any sources, it hardly started — but a lot of the time you've got failed build-deps and the like, and it doesn't list here which packages those were; a bit more grepage could work that out. So it started off as some awk, and this page is actually now generated by a Perl script, which isn't quite as disgusting, and in fact the grepage should just go into the Perl script and be one thing. So yeah, that's in the xbuilder package, so if we can make it less crap, that would be great — especially because you can write Perl. What would actually be nice is historical statistics, so you can find out the state of the last time things were built. At the moment, if there's a successful build, it goes into reprepro and it never tries again, so we don't get to find out if in fact we broke it later. But that's like Debian, you know: as long as we built it once, that counts, until there's a new version of the source, and then we check we can still build that. So yeah, it would be nice if we had "today this many packages built, and yesterday this many, and last week...", just so we could see whether we were going up or down. So yeah, there's a lot of statistics we could do. Originally I thought that if I used the buildd infrastructure then we could use the pkg-status stuff that the existing buildds use, and that would be great; but that is dependent on the buildd database format — it's quite closely tied to it — so unless you're actually using the crazy buildd stuff you can't use pkg-status, which is annoying. Because I picked a much simpler builder, I had to write my own crappy interface. So yeah, there's room for improvement there. Well, I think you're probably all bored now, so we should stop, unless anyone has anything else to ask about. Okay, we're done — thank you very much. You still have 10 minutes if you want them. Yeah, I know, but we're finished; there's no point sitting here unless anyone actually has pressing issues to deal with. We can all go and drink coffee.
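The log-mining suggested above could start as small as this. The message format is assumed from the binutils example quoted in the session ("... but it is not going to be installed"); real build-log wording may vary.

```shell
#!/bin/sh
# Tally which packages get blamed as "not going to be installed" across
# a pile of cross-build logs fed on stdin (sketch; log format assumed).
blamed_packages() {
    grep 'is not going to be installed' \
        | sed 's/.*Depends: \([A-Za-z0-9.+-]*\).*/\1/' \
        | sort | uniq -c | sort -rn
}

# Demo with a fabricated log line:
printf '%s\n' 'E: foo : Depends: binutils but it is not going to be installed' \
    | blamed_packages
```

In practice you would feed it the whole log directory, e.g. `cat logs/*.log | blamed_packages`, to get a ranked list of uninstallable build-dependencies.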