Sorry about that. Oh, apparently I was muted. Yes, so there have been various of these cloud operating systems. For example, the OCaml people have this thing called Mirage, which is absolutely great because it means you can run a complete OCaml system as a VM all by itself. It's absolutely marvellous if you like OCaml; if you don't want to write your whole programme in OCaml, then it's not really for you.

Some people involved in NetBSD have done something much more interesting. They call it rump kernels: they've sawn the VM subsystem off of NetBSD, at the system call layer, and turned the NetBSD kernel into a library, which you can link against NetBSD's libc. Now you can compile your programme, link it all into a single address space image, and run it on various things. They've got a Xen target that you can run the programmes on. They've got one that they call POSIX, which lets you run it as a process, a bit like UML, only it really is a single process: it's got no scheduler, nothing like that. This is quite good, because it means you can take allegedly normal programmes and compile them for this environment. But their build system is a bit bad, and even if their build system wasn't a bit bad, it poses certain problems.

Now we get on to our nice Debian world. Here's a nice diagram of some packages. You have some source packages; I've not shown all the build dependencies because there would be no room on this slide. We've got some packages. You tend to get the source from somewhere, maybe from git, maybe an upstream tarball; you add some Debian files to it, you build it, and it produces a .deb. That's all very nice.

This is what I now need to do. You will notice that I have got Xen in this three times: the Xen upstream source code has to be used three times in this build process. We need to build it once in the normal way, to generate the actual hypervisor and all the things that people are expecting. That build, at the moment it doesn't... yes?

Did you already tell us what's the problem you're trying to solve?

Yes. What I'm trying to produce is: I'm trying to build these NetBSD rump kernel programmes on Xen, because we want to use the rump kernel thing that the NetBSD people have done to run parts of the Xen tool stack and its support software in VMs.

Do you want to do it in a Debian way?

Ideally, before I bake too much of this in stone, I'd like some advice on what it would be nice for this all to look like. What I've got here is boxes: the round boxes are packages that mostly already exist, and the square boxes are things that don't have a place to live at the moment. I have a working directory on my computer that contains stuff, but exactly how this should be put into something like a Debian archive is less clear.

The ultimate goal is, for example: we've got a thing called pygrub. Let's take pygrub, because that's a nice easy one; top right corner. At the moment, most people using Xen use a thing called pygrub, which is a sort of half-assed emulation of GRUB 1, GRUB 2, LILO, syslinux, and anything else that anybody decided to put parsing code in it for. It's written in Python. It links to libfsimage, which you may or may not have heard of; it's a file system driver in user space. This runs in your master control domain, that is, your host VM. It looks into the guest's file system,
finds the kernel (it can even display menus), and passes it to the rest of the system for actually booting the guest. This is not ideal, because if your file system driver has some kind of bug, then that's a security vulnerability. There are also other reasons why it's not ideal: the console handling of this weird extra process that's not running in a guest is a bit messed up, or at least complicated. Now, pvgrub2 is going to solve all these problems sometime in the next decade, but in the meantime I'd like to do a stopgap solution, which is: we'll build Python for NetBSD rump kernels, and then we can run pygrub as a VM. We'll grant that VM access to the guest's file system, but it doesn't need any other privilege. It provides the kernel to the tool stack and... yeah.

So you see I've got these rump kernel libs and tools here. This is a combination of an enormous pile of NetBSD source code, some of which is rump kernel specific, but most of which isn't; it's basically just NetBSD. So you see I've got 64 megabytes of .git directory; I did this du the other day. If you unpack that, it's a third of a gigabyte source tree. You build that against some of the outputs that you get from the Xen build system, or at least some of the outputs that you would get from the Xen build system if it bothered to ship them. And then you get, well, a bunch of .a files. It's all asynchronous, very confusing. A bunch of .a files, a bunch of headers, a weird compiler wrapper script, a spec file for GCC, that kind of thing. Not a proper cross build environment: we don't want to rebuild GCC, because we're just targeting the native CPU here, but we need a completely different symbol namespace, maybe some different compiler options, things like that.

So then you use that wrapper script to run Python's configure. But of course you can't do this in a Python tree that you're also building the normal way, because it's a different environment. It's a bit like a cross build. In fact, as far as configure is concerned, this is a cross build, because the executables that come out of this process are Xen guest images and can't be run as host executables, so configure can't run them. So technically, yes, it's a cross build. And then of course we need to mix that up with the pieces of pygrub itself that came out of Xen.
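As a rough illustration, that configure run might look something like the following; this is a minimal sketch, assuming the rump build has put a compiler wrapper on the PATH. The wrapper name, triplet, and install path here are illustrative, not the real ones.

```
# Hypothetical sketch: cross-configuring Python with the rump compiler
# wrapper.  Wrapper name, triplet and paths are illustrative.
export PATH=/opt/rumptools/bin:$PATH

# configure must treat this as a cross build: the resulting executables
# are Xen guest images, so configure cannot run its own test programs.
./configure \
    --build=x86_64-linux-gnu \
    --host=x86_64-rumprun-netbsd \
    CC=x86_64-rumprun-netbsd-gcc
make
```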
Now things get even worse with one of our other principal targets, which is QEMU. QEMU is used in Xen systems to emulate a PC: if you have a fully virtualised guest, it needs an emulated PC, and we use QEMU for that like everybody does. If your QEMU has a security vulnerability in it, then that's a vulnerability in the whole system, at least if you run any HVM guests. We don't like this. So for a long time we've had a thing we call stub domains: basically, you run your QEMU in a domain of its own. It has privilege to access the memory of the guest; it doesn't have privilege with respect to the whole system. So if the guest pulls some kind of trickery and takes over its own QEMU, then, well, fine: that QEMU can't do anything that the guest couldn't have done itself anyway. So yeah, good luck with that. Great. Unfortunately, the version of QEMU that we have managed to do this to is a decade-old fork; I'm not sure it's even in Debian still. Since then, the upstream QEMU community have rushed off and become a hive of activity; they do all sorts of much better, cooler things now, and it's much less crazy. And now I want to cross build that.

In order to cross build a QEMU that can do all the hardware servicing for a Xen domain, that QEMU needs to be linked against a pile of libraries that come out of the Xen tool stack: libraries with special functions for accessing the memory of other domains and manipulating them in various ways, IPC and inter-domain communication mechanisms. Those are all in xen.git. But of course we don't want a standard build of xen.git: those libraries, if you just build them in the normal way, are built for the local operating system, whereas what we actually want is them built against this NetBSD thing. So now we're going to have to build xen.git again, using this weird wrapper script. So you run Xen's configure with the weird wrapper script, it notices it's cross compiling, you get a subset of the build deliverables out of your build, and then you have some .a files, which go together with these .a files and some include files, and then you can run QEMU's configure against the two things together. It's almost like you're setting up a little parallel cross world.
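Concretely, that second xen.git build and the QEMU configure run might look roughly like this. A minimal sketch: the wrapper triplet, paths, and make target are illustrative, though --cross-prefix, --enable-xen, --extra-cflags and --extra-ldflags are real QEMU configure options.

```
# Hypothetical sketch: rebuild the Xen toolstack libraries with the rump
# wrapper, then point QEMU at both worlds.  Triplet and paths illustrative.
cd xen.git
PATH=/opt/rumptools/bin:$PATH ./configure --host=x86_64-rumprun-netbsd
make -C tools/libs        # only the guest-facing library subset

cd ../qemu
./configure \
    --cross-prefix=x86_64-rumprun-netbsd- \
    --enable-xen \
    --extra-cflags="-I../xen.git/dist/include -I/opt/rumptools/include" \
    --extra-ldflags="-L../xen.git/dist/lib -L/opt/rumptools/lib"
make
```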
Okay, so I'm going to stop hand-waving about this diagram and start hand-waving about what might be done. The most official, or most formal, way of doing this would be to say: well, these NetBSD rump kernels are very like an architecture. You run a configure, you run a make, they produce deliverables; so we'll call them an architecture. That means you'd have a rump-Xen amd64, a rump-Xen i386, a rump-Xen ARM, and so on: four architectures. They'd be partial architectures, because only a tiny subset of the archive would be built (here we go again), and all of this infrastructure, with four additional architectures in the archive, would exist really only for the benefit of Xen, which is itself just one tiny thing in the whole Debian ecosystem. So I don't think that's a sensible thing to do; that's obviously not on. If I were to try to do that, people would hate me, and I'd probably hate myself as well, so I'm not going to do that.

On the other end of the scale, it occurred to me that I could cause the xen.git package to produce a copy of its source code as a source deb, and I could probably even persuade the QEMU people, and maybe the Python people, to do the same thing. Then I could have some package that had all of this and a giant nightmare build script, which just, you know, by hand, did all of this stuff. It would be a bit like ia32-libs: any time you wanted to add a new one of these, you'd have to go and find the original packages, and there might be a whole stack. I haven't got as far as compiling Python yet (I've been working on QEMU), but I imagine that in order to get Python to work there'll be a bunch of libraries I have to do first, so I'd have to go to each of those libraries, persuade them to use one of these source packages, and then add them to my giant build-world thing.

And do you really want it to be in the Debian archive, all of it?

Well, what I want to be in the Debian archive is this file. Because this file, as far as the user is concerned, is a piece of the infrastructure for the Xen system, and it should actually be used by default: you shouldn't be using your host-compiled QEMU by default, you should be using this, because it's more secure. So all of this infrastructure is just there to produce this deliverable, which lands on the user's system in some /usr/lib/xen, never-heard-of-it type place, where the tools automatically pick it up and the user is completely oblivious to all of this. So that's one reason I want it in the archive.

But once you start doing this kind of thing, the next thing that happens is the user says: oh wow, cool, does that mean I can write Python scripts and have them run in Xen directly? So now they want this Python run image, and they want to be able to meld it with their own Python scripts, which means that the tools that probably live in here, the ones that manage the file system image that this domain runs on, also want to be delivered to the user. And of course those are all BSD FFS tools. So part of this is that I'm taking a whole operating system and saying: we're going to package this whole operating system up neatly in a little package, with not too many tentacles.

In Debian, you weren't first: I've got pretty much the same kind of problem with the Windows cross compilers. The aim of the Windows cross toolchain is to end up, same as you with your QEMU image, with things in the Debian archive: we use the Windows cross compilers to build Windows executables that end up in debian-installer, and a bunch of DLLs that end up in wine-gecko, which is used for Wine. And like you say, once you start doing that, there are other people who say: oh, it would be really nice to have all these libraries that we could use with the cross compiler for Windows. And then you end up with the same kind of problem. Asking people to produce source debs gets push-back from the other Debian maintainers, which is perfectly sensible; they say this only makes sense if we add it as a proper Debian architecture, and then everything just falls out automatically with configure and so on, because you just get the triplet and it all works. But if you don't do that, it's a nightmare amount of work with all the source packages that need to be used, and you either do it on the side somewhere, and then you can't get it into the Debian archive, or you do it properly.

It's tempting to suggest that, rather than everybody shipping source debs, you could, from the bottom up, for each Linux-architecture pair for the NetBSD thing that you're trying to produce, build a binary package which is labelled architecture amd64 but in fact contains NetBSD amd64 stuff, off in some path that nobody cares about, and you could deal with fixing up the paths in your build system. That would at least let you get the binary objects into the archive, and your top-level build could just produce qemu-dm; but at least then you'd be able to have build dependencies on sensible-ish things.

Right, so what you're suggesting is that the real main Python package should be changed to produce these rump-Xen amd64 libraries and things, off in a separate path?

Something like that, in separate binary packages.

That makes Python build-depend on this.

Yes, it does. Alternatively, you don't necessarily have to have everything work the same way: you could have a separate python-xen source package, which you occasionally re-sync, but have other packages which are less complex and intertwined than Python produce those binary packages directly.

Right. So that means convincing all the maintainers to take all that on. Another approach would be the other libraries, the library side that you want rump-Xen binaries of; that would mean convincing their maintainers to take that on.
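For illustration, the "extra binary package off in a separate path" idea might look something like this in the Python source package's debian/control; every name and path here is hypothetical.

```
Package: python2.7-rumpxen
Architecture: amd64
Section: libs
Depends: ${misc:Depends}
Description: Python built against NetBSD rump kernels for Xen stub domains
 Static libraries, headers and a PV kernel image, installed under
 /usr/lib/rumpxen-amd64/. Not usable as a host Python and not suitable
 for satisfying multiarch cross-dependencies.
```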
The other approach, which you've mentioned on the mailing list already, would be source build dependencies.

I commented on source build dependencies at the last DebConf, and I'm not sure I managed to convince Walter that this was legitimate. But given that building any package already requires being able to fetch its own source as part of the build process, I don't see why you can't legitimately do apt-get source in a package's build, stick it somewhere in your build tree, and go from there. You don't need to invent infrastructure for source build dependencies to make this work; in the worst case you need to fix a couple of firewalls on buildds or something, but there's no reason why you can't just have debian/rules call out to apt-get source.

Yes. Some people appear to have a visceral reaction to this, but I don't see why it is illegitimate.

Right. And apt-get source is obviously quite different from calling apt-get install: it just fiddles with your current directory state. That would certainly make all of this a lot easier. You could make a little python-xen package; you could generate these little stub packages that would go: ah well, I'll just get the Python source, mess with its build system slightly, put my weird compiler on the path, and then I'll just build it; it'll all be fine. You would want to make sure that you're getting the versions you expect, and you might need sanity checks on all sorts of things, but it doesn't seem fundamentally very difficult. Right, and I'm going to put in, like, a Built-Using then, am I?

Yes. That matches Built-Using's semantics precisely.
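A minimal sketch of what such a stub package's debian/rules might do, under the apt-get-source approach; the package name, wrapper path, tree name, and substvar are all illustrative (recipe lines would be tab-indented as usual).

```
# Hypothetical debian/rules fragment; names and paths are illustrative.
PYSRC := python2.7

override_dh_auto_build:
	# fetch and unpack the source package from the archive
	# (one could also pin a version: apt-get source $(PYSRC)=2.7.8-1, say)
	apt-get source --only-source --download-only $(PYSRC)
	dpkg-source -x $(PYSRC)_*.dsc pysrc-tree
	# build it with the weird compiler wrapper on the PATH
	cd pysrc-tree && PATH=/opt/rumptools/bin:$$PATH \
		./configure --host=x86_64-rumprun-netbsd && $(MAKE)

# record what was fetched; debian/control would carry
# "Built-Using: ${rump:Built-Using}"
override_dh_gencontrol:
	dh_gencontrol -- -Vrump:Built-Using="$(PYSRC) (= $$(dpkg-parsechangelog \
		-lpysrc-tree/debian/changelog -SVersion))"
```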
This is a novel and radical suggestion. Thank you, Minister. I'm very surprised to find you proposing it, because the idea that the package build should go off and fetch stuff from somewhere... traditionally we've done that in places like sbuild, and not in the rules file. And it's not declarative, either.

It is indeed not. I'm certainly more comfortable with other things; I think things like partial architectures are a much more elegant approach to this, but that's a ton more infrastructure to stand up.

Right. So I'm suggesting this not as a particularly elegant thing, but as something that, as far as I can see, should work today without having to stand up a load of stuff. I guess if I do do that, then, having deployed it, it will be very easy to persuade people that there should be some better infrastructure for source package dependencies.

This was true of ia32-libs too, and look how long that took. But yes. It might be hard to get it through NEW, though. Do we have anybody from the FTP team here? Are they listening to the stream?

Right, well, maybe we should do that. So yes, we've talked about partial architectures a lot over the years, but nobody's ever really got very far with them, at least not in the Debian archive. Is the main blocker simply working out how we do a buildd that doesn't actually try to build everything, as is usual?

How do you mean? dpkg has a Windows architecture element somewhere, doesn't it? I've seen it in the triplet table, I'm sure. Everything would be a cross build. The main blocker, for me anyway, is getting the patch that provides the Windows triplet as an architecture into dpkg; that means convincing Guillem, for now.

So I was going to say that, in terms of actually getting partial architectures off the ground, one of the things that I think has always been a de facto blocker is that we talk in general terms about how we could do this, but nobody has come up with a concrete proposal and shown: yes, if we did this, this is the set of packages that it would make sense to build, and why; and we have a buildable, closed set that is useful to do this for, that is less than the whole architecture, and has some sort of policy around it which isn't altogether arbitrary. If we had something defined like that, then you're right, we would also need the implementation on the buildd side, but it's not going to move until somebody puts together a more fleshed-out proposal.

The closedness of the set is of course much easier when cross building, so perhaps this winds up effectively blocked on build profiles, so that we have that more official. But, you know, most of the binaries we need presumably come from some other architecture anyway, because you're using them via multiarch and cross building.

Right. If you did this as a partial architecture, I imagine it would be like a little multiarch stub thing, but you'd have to make sure that nobody thought it was a good idea to install it: the python rump-Xen amd64 .deb is not really usable for satisfying multiarch cross-dependencies.

Well, this is python-netbsd-amd64, isn't it, or netbsd-something? Is that right?

Well, the thing is: the actual Python file that's got the Python code in it, okay, it's an ELF, but it's a compressed ELF kernel, and to execute it you feed it to the Xen tool stack as a kernel image. You can't run it on the host to run Python scripts.
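To make that concrete: the "Python executable" would be booted via the Xen toolstack rather than executed, along these lines. A hypothetical xl guest configuration; the image path and file name are made up, though name, kernel, extra and memory are real xl.cfg keys.

```
# Hypothetical guest config: boot the rump-kernel Python image as a PV
# kernel.  The path and file name are illustrative.
name   = "pyscript"
kernel = "/usr/lib/rumpxen-amd64/python-stubdom.gz"
extra  = "/scripts/hello.py"    # arguments handed to the guest image
memory = 32
```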
Right, so that is then not a python2.7.deb; that is a python2.7-stubdom.deb or something. It has different semantics; it deserves a different package name regardless of what you do. And presumably this would be not too terrible for the Python maintainers, since they wouldn't actually build this normally: it would be a thing in their control file with an architecture field that caused it not to actually be built for anything useful.

Right. I worry, though, about wrinkles. We've got here the other examples; we might want to do some cross builds for Windows, maybe Wine or whatever. It's likely that this also would require some violence done to the upstream source code. I'm not expecting to be able to compile Python unmodified: in principle it should just work, but there are going to be some bugs. Some of them are going to be bugs in Python that, you know, I will try to fix upstream, but there's always a lag with that; and some of them will be bugs in this whole rump setup. Well, when I say bugs: infelicities, ways in which it's not like a real system. And there will have to be some bodges.

This is where I think debian-ports can come in handy, because in debian-ports you're allowed to carry packages from the main archive with your extra patches, until they get merged upstream or merged into the relevant main package.

That's an excellent idea. All I need to do then is to have my main "Xen hypervisor, get me all my stuff" metapackage depend on the relevant package from ports, and nothing can possibly go wrong... and then you try to get this into testing. Yeah, right, exactly.

Are these binaries that you're trying to build only useful on an amd64 arch install, or are you aiming to have these packages, these new ones that you're building, be usable without the rest of the Linux host architecture?

Well, okay: in principle, supposing the Hurd were ported to Xen and could be a Xen dom0, the same Python rump-Xen amd64 libraries, executables, and so on would do for the Hurd as would do for Linux. And actually, the build system for this stuff doesn't involve the host libc, and it doesn't involve the host kernel headers; it involves NetBSD kernel headers, which are supplied here at the bottom left. It gets more confusing than that, because if for some reason you're running an i386 dom0 rather than an amd64 dom0 (there are a couple of reasons why you might be doing that), you probably still want to be running amd64 rump-Xen PV stub images. So probably we only want rump-Xen amd64 and not rump-Xen i386, and the i386 users will be using amd64 packages. That can work, at the moment, because as I understand it the GCC that's in Debian i386 can generate amd64 code; it just doesn't necessarily come with a libc, but I've got a libc, so that's fine.
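That claim about the i386 compiler can be sketched like so; whether plain gcc or the gcc-multilib package is needed is a packaging detail, and the include path is illustrative.

```
# On Debian i386, gcc can emit amd64 code with -m64 (possibly requiring
# gcc-multilib); there is no amd64 host libc, but the rump build supplies
# NetBSD's headers and libraries instead.  Paths illustrative.
gcc -m64 -nostdinc -I/usr/lib/rumpxen-amd64/include -c hello.c -o hello.o
```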
So far, apt-get source in the rules file is the most promising suggestion here, or else going around all the packages persuading them to produce a source-package deb.

One other option that you could consider, which would probably be a lot of work, would be to have your rump-Xen version of every one of the packages that you actually want to be available, under each one of the architectures, which is kind of like what Fedora does for their MinGW toolchain.

Sorry, I'm not sure I follow.

On Fedora, the MinGW toolchain is mingw32-gcc, and then there's a huge slew of libraries which are the Windows binaries that you can link against: if you want, say, SQLite for Windows, then there's a collection of mingw32-sqlite packages. It sounds like that's kind of analogous to what you're trying to do here: you have a bunch of rump-Xen binaries that you want to put together, and possibly libraries that they share, except they're only shared at build time. In Fedora you're duplicating all the source packages, so it's kind of a mess: source packages which are the same thing with a prefix.

Right, right. That's another thing that I could try to get through the FTP team. Anyway, ours are arch: all as well, because they have a foreign arch inside them, right?

It's arch: all in the control file; arch: all, not arch: any. Yes. Jolly good; it moves the problems elsewhere. Like they put these xdebs, I presume... I'm not even sure it works any more. Yeah, I mean, I replaced it two years ago. Yeah, that was before.

So it seems to me we're not very keen on the ia32-libs-ish kind of "we'll have one thing that knows how to do it all and does it all in the right order". The remaining options probably all involve debs that contain these rump-Xen amd64 libraries and, in the end, PV kernel images, and the question we're left with is how those debs get built, and in particular from whose source code. The options that have been considered so far are: we should have source debs; we should do apt-get source in debian/rules; or we should duplicate the source code. Does anybody have a fourth option that's less bad?

Has anybody been writing this down, by the way? No? I see; I'm going to have to write it up later. Okay, well, I guess that's some kind of conclusion; it at least allows me to go to the FTP team and present them with something that looks like a choice.

Using debian-ports was mentioned. Yes, but debian-ports is no good, because these deliverables need to be in main: this deliverable here needs to be in, well, probably main amd64, multiarch. So unless we're allowed to depend from main into ports, which obviously... right, that's obviously not on; I can't see how you could ever get that to work properly.

Well, you could just ship an apt sources entry and get the same effect with your debian-ports tree.

I would point out that there is at least one very important precedent for packages using, well, not apt-get source, but apt in other ways in their build system, and that is debian-installer. Rather than using build dependencies to fetch bits of the installer, which would then have to go into debs in some strange way or something, it uses a specialised apt configuration in its own build system in order to fetch the installer components that need to go in the initrds that it is about to shove into the archive via byhand.

Right, so effectively it uses apt to download debs?

udebs. udebs, but from the archive, yes.

So it seems, at least in principle... obviously source packages are supposed to be allowed too, then. Great. Brilliant.

I haven't done it in the Debian archive, but I've done it in the Ubuntu archive: I was fetching source packages, then I would cross compile them, and then I would use the results again to bootstrap the Android toolchain. I was explicitly doing apt-get source and then declaring my Built-Using, such that I didn't have to duplicate GCC and libc and blah blah blah. This way I was essentially build-depending on a source package.

Okay. Which source packages contain the apt-get runes, so that I can...

I can send you an email. I'll send you an email.

Great. Well, maybe that's actually the answer to my problem. Right. Well, thank you very much. Thank you.
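A footnote on the versioning worry raised earlier: apt-get can pin a source fetch to an exact version, which is the obvious sanity check for any of these fetch-at-build-time schemes. The version string below is illustrative.

```
# Fetch an exact source version and fail loudly if the archive has moved on.
apt-get source --only-source qemu=2.1+dfsg-1

# Or fetch whatever is current and inspect what we got afterwards:
apt-get source --only-source qemu
dpkg-parsechangelog -lqemu-*/debian/changelog -SVersion
```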