I did the same thing again, but doing it in a chroot, which is, in practice, the only way to build anything where any of the versions of anything differ from the versions on your main system. And there's the Scratchbox scheme, where you don't tell the applications they're cross building, or any of the tools at all: you just do a build, magically shuffle paths around using LD_PRELOAD behind its back, and use QEMU to run non-native binaries during the build. This is very cool and avoids most of the problems, though obviously it replaces them with another set.

Compilers are easy: Emdebian has been providing cross toolchains for all the Debian architectures for eight years now, and they just work, pretty well. The days when you had to go and get a random tarball off the internet, which somebody made once, and then keep using for five years because it was the only one that worked, are largely past. Those compilers will soon appear in normal Debian; you won't have to get them from Emdebian's archive anymore. It hasn't happened before because Debian has no way of building things that have cross dependencies: to build a toolchain you have to depend on the foreign C library, and there was no way to tell an autobuilder how to do that until this week. So sometime in the not too distant future, first the ARM compilers will appear, and ultimately a big pile of them. We maintain compilers that run on all the fast architectures, so amd64, i386 and PowerPC, the ones you are actually likely to have (PowerPC is not really very fast anymore, but it was something people wanted to build on), and they target all the targets people have asked for, basically, which is... [to the audience] what's the current list of target architectures in the compiler pool? Didn't we have an i386 as well? No; ia64, that was it, because people tend not to have one of those, but may want to target it. So that just works, really. The only catch is that, because we're not building in the Debian archive, our version of the compiler is sometimes out of date. Because of the way they depend on gcc-base and the latest libgcc, even a GCC 4.1 will have dependencies on libgcc from 4.4 or something. So quite a lot of the time our versions aren't installable without a bit of force, which annoys me, because it's all beautiful and you should just use our compilers, and then people go: it didn't install, it says it can't install because of this, that and the other. Once they're in the main archive, I hope that will go away. (There's a sketch below of what using those toolchains looks like.)

So, what we do: actually I should have brought one with me, but we make this machine that speaks for people who can't. Basically it runs Linux, a pretty boring base system, on top of which we run our user interface application, which invokes a proprietary text-to-speech synthesizer. So, like many embedded things, it's pretty much a one-application device. We have a pile of code for that, and in practice it works out that we've got about 15 packages we need to build out of our own code base instead of Debian's. Quite a lot of those are just little config packages and -dev packages; there's also a libgsm library, because the device has a mobile phone in it, which actually came from OpenMoko.
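For anyone who hasn't used them, this is roughly what picking up an Emdebian cross toolchain looked like. The repository line and the versioned package name are from memory and purely illustrative, so check the Emdebian pages for the real ones:

```sh
# Add the Emdebian toolchain repository (suite name illustrative):
echo 'deb http://www.emdebian.org/debian/ lenny main' \
    >> /etc/apt/sources.list
apt-get update

# Install a cross compiler targeting armel:
apt-get install gcc-4.3-arm-linux-gnueabi

# Cross-compile something, then run the non-native binary under QEMU's
# user-mode emulation, the same trick the Scratchbox-style schemes
# rely on during builds:
arm-linux-gnueabi-gcc-4.3 -o hello hello.c
qemu-arm ./hello
```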
So, we have this arrangement; it just kind of grew, really. We have our own repository archive, and all the code lives in Subversion. For no particularly good reason we chose CMake for the build system, mostly because I hate autoconf with a passion and thought anything else had to be better than that. CMake's probably better than autoconf, but only slightly. There's one machine responsible for doing the builds, which sends out commands to native-architecture buildds and chroots on i386 and amd64 machines.

The main reason we do cross building is that it is much, much faster. We can build on a native ARM box, and for a while we had to do that, but it takes about an hour to build our application. So if you're sitting there developing, going 'what happens if I change this?', it's really boring waiting a whole hour to find out whether it worked. You get maybe eight changes in a day if you're trying to fix things, and that's really, really tedious. Cross building, it's done in about five or six minutes. So that's the major attraction that makes it worth the aggravation and pain of keeping it all working.

One thing I should say: reprepro, if you haven't tried to use it, is great, and the reprepro maintainer is amazing. I complained about something not working at 5pm one day, having spent all day working out that it definitely didn't work, it was broken, and it was fixed in CVS by 7pm that evening. It took him three hours to go: oh yeah, it's a bug, I found what it was and I fixed it. If only everyone maintained their packages like that, it really would be marvellous. We use it for quite a lot of different repositories and flavours, and it works pretty well.

For many years, the base distro on this device was Familiar; I don't know how many of you remember that. handhelds.org did it for the iPAQ PDAs, so it's a very slim distribution: the whole thing with X is 16MB or something. That made it particularly suited to our previous generation of hardware, which didn't have that much flash. The thing is, Familiar uses ipkgs, not debs. ipkgs are in fact the same as debs in later revisions; they used to be tarballs instead of ar archives, but after a while they got bored with that and found it was more convenient to use the same format as debs. Except that they're still slightly different, because the control directory is called CONTROL instead of DEBIAN when you actually unpack one. Minor details. So in general you can usually rename an ipkg to put .deb on the end instead and it'll just work, but it's a bit more reliable to unpack it and repack it again (the sketch below shows why the rename works at all).

So we had this problem that, whilst the system was Familiar, we had to have ipkgs to install, and that has a flat repository structure: everything just goes in one directory and there's an index at the top, all dead simple. Whereas Debian has our nice pool repository system with lots of directories and paths. But I wanted to keep a deb repository, because that's what the tools do: reprepro produces a proper Debian archive, and if we'd used apt-ftparchive it would do the same thing. It turns out that reprepro has a function to do this, so it's actually possible to keep a deb archive and an ipkg archive in sync: every time you do an upload, both get regenerated automatically, which is quite neat.

We name our releases after lakes in the Lake District, so the last one was Goats. They go in alphabetical order, so that's about the sixth one.
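To make the 'same format' point concrete: a latter-day ipkg and a deb are both ar archives with the same three members, which is why the rename trick works at all. The package name here is made up for illustration:

```sh
# Both a .deb and a modern .ipk are ar archives with the same members:
ar t libgsm1_1.0.12_arm.ipk
# debian-binary
# control.tar.gz
# data.tar.gz

# Hence the quick-and-dirty conversion is often just a rename:
cp libgsm1_1.0.12_arm.ipk libgsm1_1.0.12_arm.deb
```

The reliable route, unpacking and repacking to fix the CONTROL versus DEBIAN difference, is what the repository hook script shown later does.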
People want me to do unhelpful things like... well. Familiar is really old: the distribution we're building on dates from about 2003, so it's got an ancient version of the C library in it, and generally ancient versions of everything. You can just about build for that on etch, because etch is fairly old too; its C library is a bit newer, but it works well enough. But we need to make the transition from ARM to armel, and of course the armel stuff is quite new: it didn't appear until GCC 4.1 or a bit after, when it actually started working properly. So in practice you pretty much can't build armel on etch, the tools don't work; you have to have lenny-vintage tools.

But the powers that be complicate things. We have these different releases, and basically head is called 'development', the equivalent of unstable in Debian-speak. But we can't just build development on a lenny platform and actually release it. People go: no, no, no, we've tested the Goats version, that's the release software. You can't just build that in the new world; you've got to carry on building it in the old world for ARM. And we wanted to be able to do both of these things at the same time for a while, which is of course incredibly tiresome. So now my build system, instead of just going 'it's new, I'll build it in the new thing', has to ask which target architecture you want to build for, and hence which chroot you want to build it in, which is a rather strange way of looking at things. The other thing, it's not exactly a problem, is that if you're sometimes cross building and sometimes native building, you want to get pretty much the same results out; it all needs to be compatible. So those were the things I needed to make work.

As far as I could tell about a year ago, nobody had ever tried using CMake for cross building in Debian. The documentation on the CMake site said 'this is how you cross build', and that didn't work; it wasn't right. So maybe some other people did it and didn't write the answers down, but I may well have been the first person to do this. CMake actually has a fairly sensible system, a lot easier to understand than the way autoconf does this. Basically, if you're cross building, you just specify a CMake toolchain file, which contains all the stuff you want to change whilst cross building. So you specify a system type, a processor type and a compiler name, and the find-root path, which is basically the path that things will be installed under. The stuff they don't tell you about is a whole lot of magic CMake runes that say: and only look in there, don't go and look in the normal directories as well, because you'll get the wrong ones. And if you use pkg-config, which in practice you need to, then in order to make a CMake file which will both cross build and native build, you also need magic runes for where the pkg-config libdir should be. But as far as I can tell, a toolchain file like the sketch below will work for pretty much any sensibly written CMake build of anything; as long as you don't do anything too exotic, that is the answer. So one of the things I'm in the process of doing is putting essentially that file for each architecture into dpkg-cross, so that anybody who ever builds anything with CMake will just find it works auto-magically if they do dpkg-buildpackage -a<target architecture>.
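Here is roughly what such a toolchain file ends up looking like for armel. The CMAKE_FIND_ROOT_PATH_MODE_* lines are the undocumented 'only look in there' runes, and PKG_CONFIG_LIBDIR is the pkg-config rune; the paths and compiler names are illustrative:

```cmake
# cross-armel.cmake -- illustrative toolchain file for cross building
set(CMAKE_SYSTEM_NAME Linux)            # target system type
set(CMAKE_SYSTEM_PROCESSOR arm)         # target processor type
set(CMAKE_C_COMPILER arm-linux-gnueabi-gcc)
set(CMAKE_CXX_COMPILER arm-linux-gnueabi-g++)

# Where the cross-built libraries and headers live:
set(CMAKE_FIND_ROOT_PATH /usr/arm-linux-gnueabi)

# The magic runes: find build-time programs on the host as normal, but
# *only* look under the root path for libraries and headers, never in
# the host's native directories.
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)

# And the pkg-config rune: look only at the target's .pc files.
set(ENV{PKG_CONFIG_LIBDIR} "/usr/arm-linux-gnueabi/lib/pkgconfig")
```

You invoke it with `cmake -DCMAKE_TOOLCHAIN_FILE=cross-armel.cmake .`; the same CMakeLists.txt builds natively when the flag is simply omitted, which is what makes one build description serve both cases.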
So, to make all this work, we have the main server machine, which dishes out build requests to machines of the appropriate architecture, and a script called 'build'. I couldn't find an existing system that did this: there are various build systems, but none of them do cross building, apart from OpenSUSE's build thing, which I've only recently come across, and maybe that's something we should try using. So I wrote my own script, which lets you specify a cross build if you want one; otherwise you get a native build. Basically you just say: build the package called sl40ui. It cleans out the old SVN checkout, just to make sure you're building from fresh, and does the defaults, which is the development head in SVN on the current development release. So if you just say 'build sl40ui', you get a native ARM build. If you want to build an old version or a particular version, you specify an SVN branch. You can specify a target architecture, because the default is still ARM; at some point it'll change to armel. And so on. It defaults as many things as it can, so it knows that if you're building for ARM from development you should be doing it in a lenny chroot. Actually it doesn't: it does it in an etch chroot at the moment, so if you want to build in a lenny chroot you can just say release=lenny and it goes: OK, I'll build it there then. This is actually quite useful.

I don't know how many of you have thought about this, but the whole Debian system is predicated on the sources being in the repository, and you build those sources in order to upload the binaries for each architecture. Whereas in most operations the code is normally in SVN or Git or something, some version control system, and you actually want to build versions of that and then upload them to a repository. That gives you a little problem with version numbers, because reprepro will refuse to accept a package that you've already uploaded once with a given version number. You can't just upload it again: it says no, your md5sums have changed, what are you doing to my database? So you have to have a mechanism to make sure that every time you actually upload a package, you really did change the version number. So the build script now checks, before doing a build, whether someone remembered to do dch -i to bump the version number, because there's no point in me doing this build for a quarter of an hour and then discovering that it's exactly the same version that's already there and refusing to upload it. We now put that check right at the beginning and say: you forgot. And there's now a test-build option which allows me to go: well, build it anyway, but don't upload it at the end, because that's pointless, it won't work. I don't know if other people have come up with other schemes for doing this; it took us quite a long time to work out how it should work.
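A minimal sketch of that up-front check, assuming the stock dpkg-dev tools. The package name, repository path and suite are invented, the --test-build flag mirrors the talk's test-build option, and asking reprepro directly is just one way of finding out what's already uploaded:

```sh
#!/bin/sh
# Abort early if nobody ran dch -i, rather than discovering the
# duplicate version number after a fifteen-minute build and upload.
package=sl40ui
suite=development

# The version in the freshly checked-out debian/changelog:
new=$(dpkg-parsechangelog | sed -n 's/^Version: *//p')

# The version already in the reprepro repository (empty if none):
old=$(reprepro -b /srv/repo list "$suite" "$package" \
      | awk '{print $NF}' | head -n1)

if [ "$new" = "$old" ] && [ "$1" != "--test-build" ]; then
    echo "You forgot dch -i: $new is already in $suite" >&2
    exit 1
fi
```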
Well, putting everything in a Debian repository really does make your life quite a lot easier; I guess everybody here already appreciates that. We have a tools repository, which is for stuff that's installed on build systems: the actual build scripts on the host, the build-tools client package which goes on each of the build chroots, an msp-gcc package for building MSP430 code, and a whole load of random scripts, basically. If you put anything you regularly install on developers' machines into a repository, suddenly everything's available, and with a bit of luck people are all running the same versions. All that stuff actually comes out of the version control system like everything else, but once it's in the repository it's easy to distribute. And the same goes for people doing testing and actually installing machines in production: you have a particular released suite which contains all the stuff that should be being installed on machines at the moment, and all the stuff that people should be testing at the moment. Those are the packages which go on the actual device, as opposed to on your build system.

We keep all the old releases just hanging around in reprepro, so every release name we've ever released is still there, because people go: oh no, we need to be able to rebuild ancient versions of things. I'm not sure they ever will, but they like it not to disappear. And we have sid for messing about in, and development for doing today's builds in.

We actually found that you really need a pair of suites per release. Each of these is a suite name, in Debian-speak, and it's all one reprepro repository, but it contains all those suites. You need two for each release because you really need one which is the actually-released packages, which somebody should be testing on a device, and another which is effectively proposed-updates for that release: the stuff we just changed today because somebody filed a bug. In practice we found it very useful to be able to stage things for a bit: you might collect two or three packages, say you changed the user interface and the speech library and a couple of other things and the config stuff, before going: right, let's migrate. Then you do a reprepro pull to migrate those down one suite (there's a configuration sketch below). For a long time we managed with just the release suite, but it annoyed the testers when suddenly somebody fixed something and the version changed, and they went: hang about, I was testing that. Having them paired like that works pretty well.

We also have multiple-architecture builds, so there's the ARM version of something and the i386 version of something, so that you can run the software on your desktop machine; you don't have to run it on the device. We do a GTK build so that you can test it on your machine. But it turns out that for the transition from ARM to armel you actually need two reprepro repositories, one for the old stuff and one for the new stuff, because essentially the old stuff is Familiar-compatible, even though we build it on Debian, and the new stuff isn't, it's all proper Debian. The differences from Familiar are little things: Familiar calls the zlib package libz instead of zlib1g, the version of libgcc is slightly different, and these days the name of the GIF library has changed; it used to be libungif4g because of the GIF patent restrictions, and now that's been fixed because the patent has gone away.
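The paired suites and the migration are plain reprepro configuration. A minimal sketch, with codenames invented to match the talk; field details vary by reprepro version, so treat this as the shape rather than the letter:

```
# conf/distributions (excerpt): the released suite and its staging twin
Codename: goats
Architectures: arm armel i386 source
Components: main
Pull: goats-updates

Codename: goats-proposed
Architectures: arm armel i386 source
Components: main

# conf/pulls: the rule that migrates staged packages down one suite
Name: goats-updates
From: goats-proposed
```

Day-to-day uploads land in goats-proposed; when the staged set is ready, a single `reprepro pull goats` moves it into the suite the testers are watching.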
As I said, reprepro provides this magic system for synchronising an ipkg repository with a deb repository. Slightly confusingly, it's the Log option in your reprepro config: as well as logging to a file, it will actually run an arbitrary script for each type of upload. So ours basically says: log whatever was uploaded to stable.log, but every time a deb is imported, run the update-ipkg script. This is actually quite powerful functionality; you can do pretty much anything with it. reprepro just runs whatever script you told it to, with a dirty great list of parameters telling you whether the package was added, which repository it's being uploaded to, what sort of file it is, which section it goes in, what architecture it's for, the package name, the package version number, and then the actual file. And if you're replacing, so it's not a new upload but a replacement upload, you also get the old package and the old package version number. Given all that information, you can write a script which, in our case, essentially goes: if you're adding or replacing, convert the deb. You do have to make sure you get the right parameters: nowhere in the reprepro documentation does it explain that a different number of parameters is passed when you're replacing than when you're adding. That's a secret, and a very important piece of information, because the file path you should operate on moves about depending which operation you're undertaking. But once you've worked that out, basically you take the package and dpkg-deb will unpack it for you... actually, there doesn't seem to be a single command for 'unpack a deb archive'. Am I just stupid, or is there really no way to do that? Nobody knows. Seems to me there ought to be: basically you can unpack the control part, or you can unpack the data part, but you don't appear to be able to unpack both of them at once, which seems a bit weird. Anyway, you unpack the two halves separately, in the right way, and then you run ipkg-build on what you just unpacked, and that basically turns a deb into an ipkg (there's a sketch of the whole hook below).

Oh yes, and I discovered that debootstrap contains some very nice shell code: if you've always wondered how to process a set of options that might or might not contain equals signs, you know, sometimes it's 'option=parameter' and sometimes it's just 'option parameter', debootstrap has rather cool code for doing that, so that's where I nicked it all from.

On the server, as distinct from the build chroots... I wonder what I meant to tell you there. Ah yes: we have two scripts called 'build', the one that distributes builds and the one that actually builds in the chroot; one of them probably ought to be called build-client or something. The build that runs in the chroots is effectively the equivalent of sbuild in the Debian world, except that it understands cross building, and sbuild doesn't do that yet; I was planning to look and see whether we could basically add this functionality to sbuild so that everybody can do this. The server script decides, from what it was you asked to build and where the request came from, what architecture it should be built for, and works out what chroot name that implies. So we have chroots called amd64-etch and amd64-lenny and i386-etch and i386-lenny, and then ones for all the old releases, because if you still need to be able to build for an old release you still need all the tools you used at the time, so there's an amd64-goats-etch and so on.
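Here's roughly what that pair of pieces looks like. The Log stanza follows reprepro's documented syntax; the hook script is a sketch whose argument positions follow the talk's description, so check reprepro(1) for your version, since 'replace' really does pass a different number of arguments. For what it's worth, newer dpkg-deb versions do have a single-command unpack, -R (raw-extract), though it names the control directory DEBIAN rather than CONTROL.

```
# conf/distributions (excerpt): log uploads, and run a hook per deb
Log: stable.log
 --type=deb update-ipkg.sh
```

```sh
#!/bin/sh
# update-ipkg.sh -- sketch of a reprepro log hook that mirrors each
# uploaded .deb into a flat ipkg repository.  Argument positions are
# illustrative: 'replace' passes extra old-version/old-file fields,
# so the filename genuinely moves about between operations.
action=$1   # add | replace | ...
case "$action" in
    add)     deb=$8 ;;
    replace) deb=$9 ;;   # shifted by the extra old-package fields
    *)       exit 0 ;;
esac

tmp=$(mktemp -d)
dpkg-deb -x "$deb" "$tmp/pkg"            # unpack the data half...
dpkg-deb -e "$deb" "$tmp/pkg/CONTROL"    # ...and the control half, renamed
ipkg-build -o root -g root "$tmp/pkg" /srv/repo/ipkg
rm -rf "$tmp"
# Regenerating the flat ipkg index here is the step that originally ran
# six times per upload; batching it once per reprepro run saves a lot.
```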
So it works out which chroot it should be building in, and which reprepro repository the result should be uploaded to when it's finished, and then basically does an ssh to the machine in question, into the chroot in question, and runs the build script with the same set of options we just used (sketched below). This is actually very cool: you can put a chroot command inside an ssh command, and the whole thing magically works, apart from the fact that you need ssh -t. If you don't do that, it all works beautifully apart from the bit where you do the upload at the end. You know how scp shows you the files that are being copied, and then kind of prints them out again? That bit never happens if you don't put ssh -t, which is something to do with connecting terminals to terminals over ssh. For about six months we just knew that it all worked, except that the upload at the end would stall and then eventually finish off, you know, which is kind of crufty, before we worked out why.

The only problem with all this game is that you need to maintain a lot of chroots containing the right stuff for all your target builds. We also discovered that the ipkg script I mentioned re-indexes the whole flat archive, and reprepro runs it for every single deb uploaded; every package builds several debs, so it was running the re-indexing about six times per package, and that actually takes ages. So we rearranged things so it only re-indexes once for each upload, which saves quite a lot of time.

So: the client build script checks out the appropriate version from SVN, works out whether it's newer or not than what's already present, and aborts if it's not newer. Then it uses apt-cross and emsource to install the cross dependencies. Now, as I've said elsewhere, this is the bit that's actually slightly broken: you can't take a bare chroot, just ask for the build-dependencies, and expect to get all the cross dependencies installed; on lenny it falls over because of alternative dependencies. So in practice you have to pre-install a reasonable portion of what you want to build with. Then it builds with dpkg-buildpackage and uploads the results, and we can automatically have them go to the correct reprepro repository, even when there's more than one, because it's based on where they were built: each chroot has the correct destination encoded.

So, to make a new release: you make a new SVN branch; you change all the changelog distribution names on that branch, because where an upload goes is controlled by the changelog (you know, normally 'unstable' in Debian, sometimes 'proposed-updates' or something), so that rebuilding old stuff from a released branch automatically gets uploaded to the correct suite; you make a new rootfs that points at that suite, so that when you install it on a system it automatically gets its packages from the right repository; and then you install it on the install machines. Plus that thing I mentioned about having a released and a proposed-updates suite for each actual release.
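The dispatch step is worth seeing because it's so simple. The hostname, paths and options here are invented, but the shape, including the crucial -t, is as described:

```sh
# Run the client 'build' script inside the right chroot on the right
# box.  Without -t there is no pseudo-terminal, and the scp-style
# upload at the end of the build silently stalls.
chroot_name=amd64-lenny        # or i386-etch, amd64-goats-etch, ...
ssh -t build@amd64-box \
    sudo chroot "/srv/chroots/$chroot_name" \
    build arch=armel release=lenny sl40ui
```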
As I said, apt-cross doesn't quite manage this. If you were doing it natively, you could use pbuilder to satisfy your dependencies. The problem with using apt is that apt only looks in the repository to work out what the dependencies are; it doesn't look at the code you actually wrote, so if it's not in a repository yet, the first time you build something it has no way of knowing. Whereas pbuilder looks at the actual debian directory in front of it, and so does emsource, which is why we use that in the cross version. So what we really want is a script as good as pbuilder's satisfy-depends that understands crossness. And if you don't keep clearing out your chroots every time, pbuilder-style, then if you install the wrong version of something, or accidentally upgrade it to something newer from some other release, that's it: it just stays there indefinitely until you notice that you're actually building against the wrong versions of things.

We also discovered that cross building isn't exactly the same as native building in terms of libgcc linkage: you get libgcc linked in on a cross build where you didn't on a native build, which matters because the etch version is not the same as the Familiar version. So one of the things we found is that we have to do a native build right at the end, before we release. It works regardless, but you get a little whinge about unsatisfied dependencies.

The one thing the system doesn't do is ensure that if you do a new arm upload, then an armel upload happens at the same time, or an i386 or amd64 one. It doesn't keep the architectures in sync; somebody has to just ask for a build, it's not automatic. That's the next thing I was planning to add, because it's quite annoying. It could either be done using the wanna-build database, or reprepro has recently gained a 'needs build' feature which in theory should do the same thing. So, like I said, I'd like to make it smarter about keeping the architectures in sync. It might also be nice to use Scratchbox, because that avoids all this pain of having to get cross dependencies installed and working in the same way. And of course multiarch is going to make the cross-dependencies part entirely different. I'm not quite sure how that's going to work; I guess you'll need to tell apt-get what architecture you want to install the dependencies for, because there's no way it can tell. Somehow you've got to say 'I want the armel libraries', or whatever the target device is, and presumably pbuilder could gain the same functionality.

So: it works. I don't know if anyone else has come up with a similar scheme. If they have, apparently they're in this room?

[Audience member] In the Maemo project we are also using the OpenSUSE Build Service, which is actually very similar to what you're trying to accomplish.

Sorry, using what?

[Audience member] The OpenSUSE Build Service. We are using it in much the same way, and it does roughly what you're trying to accomplish, so I would actually recommend you look a little closer at it and see if it can be adapted to do this kind of stuff. It's moving in the direction you want: for example, the Scratchbox idea you mentioned is close to what it does currently, where it builds on top of Xen virtual machines and QEMU chroots and so on. So I think there might definitely be some potential in that, because it already handles many of these things, for example the build parallelisation and stuff like that.

OK, yeah. I started looking around the web pages, trying to work out exactly what it did and didn't do, and whether there was anything we needed that was going to be difficult, or that we could add easily, or whatever. So if you've already had a go, you can explain to me how it works; that's probably a good start.

[Audience member] The people who are working on the OpenSUSE Build Service are also very friendly and very talkative, so if you want to work with them, they are very interested.

OK, yeah. I went to half a talk at
FOSDEM, and they were saying: please, we'd like to be able to build everybody's everything in the whole world, just tell us what we need to do. I wasn't entirely clear whether you use it remotely, or whether you install it locally, or whether you can do both?

[Audience member] The server part is basically open source, so you can run your own instance, but you can also talk to a command-line client or a web client.

OK, yeah, thank you. So do they support only Scratchbox building, or standard cross building as well?

[Audience member] What they are basically concentrating on is native building as such, and then they combine that with QEMU if they have to do cross compiling, and on top of that they might put in a Scratchbox-like approach: replacing some parts, like adding a cross compiler, or replacing the shell with a native one, and so on.

OK. Any more for any more? We're all bored, we want some more coffee? Jolly good. Right.