[Welsh-language introduction, not transcribed.]

As well as the challenges that we face in trying to make Debian much, much smaller, we also have to make it flexible enough to go on to these small devices. The history of the project has meant that we have tried a number of different ways of doing things, and over time we have learned lessons from one method and found the benefits of a different way of doing it. The current method is a composite of previous ways of doing things, trying to get the best out of each of the previous ways of trying to do embedded work with the Debian packages, and trying to get around some of the limitations of the ways that other embedded projects, such as GPE, do it at the moment. I will also say, from the bottom of my heart, that we are trying to get the best balance between the various compromises that are inevitable in this kind of work. There is a set of tools that are now in Debian, in unstable and testing. I will go through those: how they work, how you use them for preparing your own packages, and preparing packages for Emdebian itself or for the Emdebian users. They can simply bootstrap, install and keep their own systems up to date with apt-get and dpkg and aptitude, where they want to use simple, normal Debian tools: the intuitive way of handling a Debian system. Then I'll cover the developments that are needed for the future and how you can get involved. That is what we are really about. We are a sub-project of Debian. We are part of the Debian family. Not all Emdebian developers are Debian developers, but many are.
There is a big overlap there, because a lot of people with Debian development experience are helping with the embedded work to get the best compromise between the systems. The existing multi-architecture support in Debian is one of the things that Debian is famous for: all the different architectures we support. Now, that is very useful when you work in an embedded situation, because often you are trying to work out how a particular package can be cross-compiled. The configure script or something like that goes wrong and it tells you, "I can't work out, or I get the wrong value for, a particular check." Because we use the Debian source package as our source, rather than going back to the upstream source, we're using Debian almost as our own upstream. So we've got the build logs for the native package, and we can use those. We'll see later how to use that in a cache file we can pass to configure to say, "don't bother trying to run this test." If it's, for example, a test that tries to compile some code during the build and then run that code to get a value back from a library or some kind of environment variable, you can't do that when you cross-compile; you'd get the wrong answer. So you need to pass that value to the tools, and we do that by using the multi-architecture support in Debian to provide the data we need. That's where we're filling the gap between the iPAQ and the desktop. At the moment, a lot of embedded work stops at the level of the iPAQ, or possibly a bit more powerful than that. You need both sides; you need the mobile phones and beyond on the same system, with the scale.

Hello. I'm only here for the start, to give some of the context. It's very exciting. Neil's already set out what I was going to say anyway. We started this a long time ago.
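To illustrate the cache-file approach described above, a fragment like the following could be passed to configure. The variable names and values here are ordinary autoconf cache entries chosen for illustration; they are assumptions, not taken from the talk, and the real values would be read from the native Debian build log for the target architecture.

```
# Hypothetical config.cache fragment, passed to ./configure --cache-file=...
# Values come from the native Debian build log for the target architecture.
ac_cv_func_malloc_0_nonnull=yes   # an AC_TRY_RUN result that cannot be obtained when cross-compiling
ac_cv_sizeof_long=4               # sizeof(long) on the ARM target, read from the native build log
```

Seeding the cache this way means configure skips the compile-and-run test entirely and trusts the supplied value.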
We've been digging away at this complicated problem of what is the best way to make Debian smaller and generally applicable to small devices. In summary, we've tried this about six different ways so far, and for various reasons we've gone, "that's not really going to work." However, we do now have a fair amount of experience with what does and doesn't work. There have been more or less two complete restarts. There was the original scheme, which kind of petered out in 2003-ish, partly because the developers involved went in a completely different direction. In 2005, Philip started off with the Stag work, which is a fine example of how you can use apt and dpkg for cross-building. I took that on to the Stage concept by making it more flexible. The Slind people at Siemens have done very fine work on making uClibc work with Debian. Basically, after talking to some developers, we came up with yet another scheme, which is what Neil's been doing an excellent job implementing over the last few months. That's pretty much working now; you can actually use it. We have one package, uClibc, which has a major regression, and that will be fixed this week. That's where we're at. These are the important points as to why we use Debian for Emdebian. We want a non-commercial distribution. We want its package management, because it's well respected, it's robust, it's flexible. We want the multi-architecture support, because we know the packages build natively on each architecture; therefore, if we get build problems, we know they're cross-build problems, and we know where to pitch the fix and the patch. And then there's the huge number of packages: there isn't any point in going for this kind of scaled-down Emdebian setup if we haven't got the number of packages behind us that are quality-assured and that we know we can actually use. The problem with using Debian is that Debian is all about native compilation. Cross-building has a bad reputation.
A lot of the release managers and FTP masters don't want cross-built packages in the Debian archives; there are various problems with doing that. So there is a challenge in making cross-compilation for Emdebian more palatable for Debian developers, making it easy for them to migrate from native development to cross-development. That's why we make it so easy to use the existing tools. We use CDBS, we use debhelper, we use dpkg, apt and all the other tools that people normally use. They're all there, they all work; they just sometimes need little wrappers to help them work for the cross-compilation, and we'll be providing patches and whatever help Debian developers need to make their packages more suitable for cross-building. The trend in Debian is that disk space is cheap. In Emdebian, space is a huge issue. We have to make sure that everything is as small as possible: small is good, tiny is better. We have to counter the general trend within Debian to build things with everything enabled and all the documentation available in the package. We have to work against that, and allow both the full packages to be built and a much smaller version that is suitable for an embedded device. There is some documentation you may need on an embedded device, and that can be done through context-sensitive help and things like that. You don't generally need the whole documentation that is in a desktop package, because you can't view it easily on an embedded device. There are issues with optimisations. We've seen already that sometimes optimisations can cause very obscure bugs. It's very difficult to pin them down, so we're not changing optimisations as a matter of course at this stage. We might have to in due course, and then work out whether a bug is due to the cross-compilation or some other optimisation issue. Those kinds of issues need to be chased down. The software management is more or less covered by the tools.
You shouldn't have too many problems with that. You also have different dependencies in the Emdebian package versus the original Debian package. The reason for that is to trim down the dependency tree, to strip out things that we don't need; Emdebian isn't going to use the same set of essential packages. We're going to use a much smaller root filesystem, so the whole dependency question has to be revisited. Part of that is using other C libraries, like uClibc. The plan is we'll do that with separate repositories, so that you can have the uClibc-linked version or you can have the glibc-linked version, according to the power of the device and what you need. So, by combining Slind, Stag and Stage, and working with existing Debian developers, we've come up with this composite method. To make it familiar to Debian developers, we're re-using the existing package management system; all the tools are there, and we're just making them more flexible with some new packages. So we've got dpkg-cross, which has been in the Debian archive for some time. There's apt-cross, which is new, which is just a version of apt that goes and gets what you actually need and passes it through dpkg-cross. And emdebian-tools is a tool set that wraps your normal build tools. So if you would normally use debuild, there's emdebuild, which just does the extra things you need for cross-compilation and has the extra support. If you would use dh_make, we have em_make. You still need a Debian package to start with, but then you run em_make to emdebianise it, to start the process of stripping things out. We definitely need to reduce all these package sizes. At the moment, the test packages we've built so far are about 68% smaller than they would be in Debian. Now that's good; we can be working with that. We then need to strip out some of the extra dependencies, and all the documentation goes. Now, documentation isn't just your -doc package.
It's your copyright, your changelog, your NEWS, your README: all of that goes. /usr/share/doc, /usr/share/man and /usr/share/info will all be empty, because you're not going to have that sort of thing on your embedded device, consuming a huge amount of space. For the translation files, with the small number of packages we've got at the moment, we've got a way of splitting out the translation files into dedicated packages. The downside with that is that it doesn't scale particularly well, because you go from 80 source packages to something close to a thousand binary packages. So there's almost a tenfold increase in the number of packages that we end up with in the repository. That doesn't scale to Debian. There are other ways of working around that, but at the moment, with the small number of packages we've got, we're happy to split out the translation files as automatically generated packages of their own. You'll see later how that works. The changes we make to your Debian source package, we keep in our own SVN as patches. You can see all the changes we make and you can see what we've done. Changes will be to things like debian/control, debian/rules, the other files in the debian subdirectory, and debian/changelog, so we've got our own versions of those. There's a new way of dealing with cross-dependencies, which is debian/xcontrol. Some of these ideas came out of earlier discussions, and we're building them into emdebian-tools. We're trying to keep things compatible, so that you can have both installed on one system, with something to handle all the various dependencies. Those are the main three packages you end up using. We will be building, or rather we have built and will continue building and updating, toolchains for the various cross-compilations you need to do. We've got i386 and AMD64 building regularly. There's a little bit of an issue with PowerPC: if anybody's got a PowerPC box that they can contribute as a buildd for building toolchains, it would take the load off the laptop that they're struggling with.
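The talk mentions debian/xcontrol without giving its syntax. A hypothetical sketch of what such a file might look like follows; the field names and package names here are assumptions for illustration, showing only the idea of separating build-tool dependencies from cross-compiled library dependencies.

```
# Hypothetical debian/xcontrol sketch (field names are assumptions):
Source: gpe-calendar
Build-Depends: debhelper (>= 5), cdbs
Build-Depends-Cross: libgtk2.0-dev, libglib2.0-dev, libx11-dev
```

The first Build-Depends line lists tools that run on the build machine; the cross line lists the -dev packages that must be available to the cross-compiler and will be linked against the package on the target.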
The packages remove all the man pages, the copyright, the changelog. They also remove all the udeb packages from debian/control. We're not dealing with the Debian installer here; we believe we need a different kind of installer, so we don't need those at all. AC_TRY_RUN compiles tests and then runs them during configure. That's where the problems come in, and where the build logs come in, actually providing a cached variable to cover the result. When we split out the translation files, the user will need some way of picking up new translations. We have got a tool, which will be in Emdebian only, that will cover that, so that users will get any new translation packages for packages they've already got installed. Everything is controlled in a normal debian directory; there's nothing extra going on. If we need for any reason to patch the actual upstream source, then we'll put that in debian/patches, and that patch will probably come back to you with a usertag for cross-building, saying, "can you help us out here? This particular part of the upstream source doesn't work well in a cross-build." The build script from emdebian-tools is all about applying patches and then building the patched package, managing patches as you change the files yourself, and keeping build logs as you go along. Then, once the emdebian-tools are finished with, it's back to the normal Debian tools: dpkg, apt, aptitude, whatever you want to use. The translation tool for the user is a small package that looks at the data. The simple thing is that it just checks the list of supported locales set up on the device, looks for matching installed locales, and tells you that you've got some new translations available for packages that you've already got installed. That saves you all the hassle of that. A lot of people moan at first about /usr/share/doc, and all of this huge disk space taken up by /usr/share/doc. This is from my own laptop here.
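Going back to the translation tool described above, its matching logic could be sketched like this. Everything here is illustrative: the package names, and the `<package>-locale-<lang>` naming scheme, are assumptions, not the actual Emdebian tool's behaviour.

```shell
#!/bin/sh
# Sketch: report translation packages matching the device's configured locales.
# All names below are illustrative, not from the actual Emdebian tool.
locales="cy fr"                       # locales set up on the device
installed="gpe-calendar busybox"      # packages already installed
available="gpe-calendar-locale-cy gpe-calendar-locale-de busybox-locale-fr"

found=""
for pkg in $installed; do
  for loc in $locales; do
    for cand in $available; do
      # A candidate matches when it is the translation of an installed
      # package for one of the configured locales.
      if [ "$cand" = "${pkg}-locale-${loc}" ]; then
        found="$found $cand"
      fi
    done
  done
done
echo "new translations available:$found"
```

Run on the values above, this reports the Welsh translation for gpe-calendar and the French one for busybox, and skips the German translation because that locale is not configured.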
I've got a fair number of development and doc packages installed, but you can see there that /usr/share/doc is only 250 megabytes, and that is optional: stuff I've chosen to install. /usr/share/locale, I never had any choice about. Every time you install a package, it will install whatever translations come with that package, whether you use them or not, whether they're supported or not. A lot of the translations installed on your system you can't even use without reconfiguring your locales. Each time you reconfigure the locales or add a new locale, each apt-get upgrade that involves any kind of locale data takes longer and longer, so there's no need to have more locales than you need set up. On the first graph, on the left as you see it, the red slice is /usr/share/locale, the blue slice is /usr/share/doc. Even if you took out /usr/share/doc, you'd have a huge amount of space taken up by /usr/share/locale. The image on the right is a view of the disk space taken up by /usr/share/locale itself. You can see that it's very evenly split up. Yes, some particular translations are more popular than others, so you have a slight increase, but even the biggest individual translation directory there, for French, is only 9 megabytes, yet the entire directory is 291. So you can see, if you strip that out and you just have two or three locales configured, you've saved a huge amount of disk space. As a summary of how you would normally deal with a Debian package: you've got a source repository, you build a package, you upload it to a binary repository, and you've got a local package cache. Now, the lines in red there are where you have problems with an embedded device. You can't build from the local package cache on the device itself; you can't build to install on that device. So the cross-building scheme in many ways is very, very similar; it's just that you don't have the option of building on the target device.
Now, obviously, each Debian developer will often choose how to write their debian/rules and which methods to use. There's a lot of negative talk about CDBS; a lot of developers don't like it because it hides too much from you. But in terms of cross-building, CDBS is the best thing around. It's very easy to cross-build, because for everything you'd expect to have to do, all we need to do is supply a replacement makefile: where CDBS would use a debhelper makefile, we use an Emdebian makefile. It just omits certain parts of the build process. It stops the man pages being installed, stops the changelogs being installed, all that kind of stuff. Now, obviously, you can't implement those kinds of changes in your own packages, because they would break policy and people would complain; but in terms of Emdebian, that's what we need to do. With debhelper, when you start to get into issues, it is quite easy to patch the debhelper calls and stop these things being built. But a lot of packages don't confine themselves to only debhelper rules. There are a lot of packages that will combine debhelper with other methods, like calling dpkg directly, or install, cp, mv. All of those things make cross-building a package difficult. So there are issues there where, if you use, some would call them hacks, some would call them ways of working around limitations in debhelper, but if you use those in your package, then you may need to wrap them in either DEB_BUILD_OPTIONS or something else conditional, so that there are ways they can be omitted or skipped, particularly if they relate to any kind of documentation, examples, or other sorts of extras that you might not actually need in the main package. So we process debhelper scripts as much as we can automatically. We'll take out the rules that are obvious.
We'll take out the stages that we neatly can, but there's a fair bit of manual editing involved to actually emdebianise a debhelper package.

[Audience] Have you come up with a standardised way of conditionalising a rules file for Emdebian builds? Because if so, the notion of starting to submit some of those patches to maintainers becomes interesting.

That's on the cards. That is on the cards. The thing with DEB_BUILD_OPTIONS is that it's not designed to break Debian policy. So if you put something in like nodocs, well, it's not going to omit the changelog and the copyright notice, because that would break policy. It's not going to omit the man page or the info page, generally. So there have to be other ways of handling those. What we're looking at at the moment, just in the last couple of days, is a system of filtering things like that within dpkg itself, so that dpkg doesn't install those things. If we pass the right options to dpkg, it simply won't install changelogs, info pages, anything in /usr/share/doc, anything in /usr/share/man, and, as an optional process, /usr/share/locale as well. But that will be an optional thing within how we handle dpkg, both in installing and building packages. That's very early days yet.

[Audience] What do you think of this suggestion? There's certainly no problem with defining something like an "embedded" tag, if you will, and then asking package maintainers to include a conditional to support that. Because while I think you're right that, generally, with end-user-visible build options like nodocs, you might not want to twist those into delivering policy non-compliant things, there's also no policy requirement that says the Debian source package can only be used for building Debian binary packages.

That's true.
I personally would think it was a great idea if we had some sort of standard conditional, whether it's an "embedded" tag or something else. I don't really care, but if we came up with a well-defined way of conditionalising the source packages, then you wouldn't have to maintain a lot of this stuff in a separate archive; in fact, you could submit things as patches back to the package maintainers and then just build with that option all the time.

[Audience] I think such a patch on a package would be a very reasonable approach, and I would certainly encourage package maintainers to accept such builds.

I agree. There are a lot of packages where that would work. There are also a lot of packages, particularly the ones that you would tend to use for a root filesystem and the actual installation process, that are already incredibly complicated, something like GCC 4.2, where if you add that kind of wrapper in, it becomes even more complicated. But there is a role for this with the simple packages. The ones that use CDBS will usually cross-build the first time, and if you've got a simple package with just some basic, almost template-like debhelper rules, that would be a good way to go about it. We have had that in the past. In fact, dpkg-cross provides a generalised mechanism for setting environment variables, but we kind of went away from that for the time being, to prove that this will all actually work. The point about the current system, with just a set of patches to the rules files, is that we can just do that, and I'm sure there will come a point at which we'll be going, "Look, isn't this great? Now we want you all to make these changes in your packages, and here's how to do it." We've kind of put that to one side for the time being, but I agree, you're absolutely right: there has to be a standard mechanism.
Yeah, it's part of the way of persuading Debian developers that cross-building is something they actually want to support. If we go too early and say to maintainers, "look, use these build options, try a cross-build, and please can you put all these patches in," then a lot of them are just going to get left. So if we can persuade them, and show everybody that this is really working and what it can do, then we work on from there. This shows you how the version string changes. The version string is critical in keeping all the various Emdebian build files separate from your normal Debian builds. It's just a simple suffix that is incremented as we make new Emdebian releases, and it reverts to one when you make a new Debian release yourself. So that's where the patch for debian/changelog comes from. Even though we don't actually use the changelog, and the changelog doesn't get installed, we still need to patch it, just with a simple automated entry, to enable the version number to keep your packages distinct as you build. And then at the end, what the wrapper tools do is simply call dpkg-buildpackage with the -a arch option. So the package is built entirely as normal; it's all down to the changes in the rules file, and they control it.

[Audience question] Yes: you've got dpkg-cross installed. It's a dependency of emdebian-tools, and it provides a diversion to a version of dpkg-buildpackage that understands the -a switch and knows what to do. As far as actually using the tools, there's a small amount of setup involved in getting hold of the primary toolchains, making sure they're installed correctly, and making sure that you've got all the various dependencies available. So emsetup is a simple thing. You only need to run it once on each development system. It tells you what it's going to do, then just installs the Emdebian repository in your apt sources and gets the toolchain.
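The version-suffix scheme described above can be sketched like this. The literal suffix text ("em") is an assumption for illustration; the principle is a suffix that increments per Emdebian release and resets when Debian makes a new upload.

```shell
#!/bin/sh
# Sketch of the Emdebian version-suffix scheme (suffix text "em" is assumed).
debian_version="1.2.3-4"              # the normal Debian version
em_first="${debian_version}em1"       # first Emdebian build of that Debian upload
em_second="${debian_version}em2"      # a later Emdebian rebuild of the same upload
echo "$em_first -> $em_second"
```

Because the suffix sorts after the plain Debian version, the Emdebian build stays distinct from, and newer than, the Debian package it was derived from, until the next Debian upload resets the suffix to 1.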
It isn't really practical to upload the toolchain packages to Debian itself, so you have to have our own repository added to your sources, and that's the way we do that. It happens automatically as you install emdebian-tools: emsetup adds the source to your list. The main way of running emdebian-tools for building Emdebian packages, for our own users, will be to use emsource. It's a way of using apt-get source and applying the patches before going much further on from there. You can use apt-get source yourself and do all the work manually; this is an automated way, an optional component within emdebian-tools, that does a lot of the work for you. It also creates records in our SVN if you've got access, or it uses our existing patches if you're just using anonymous SVN. And then the actual build script is emdebuild. It's the version of debuild that understands the patches and understands how to create the build log and deal with the SVN. Again, if you've got SVN access, you can commit those modified patches and the build log back to our Emdebian SVN to keep the repository data representative. And you will then upload the .changes file as normal. You can sign it; at the moment we don't really enforce signing, but you can sign it as well. And that's the summary of that workflow. So you can see that emsetup is a one-off. If you want to use SVN, you can go to emsource; if you don't, you need to do apt-get source. And em_make is the script that actually does the initial patching: the initial preparation of the patches creates a safe directory of unchanged files against which the diffs are generated. If you've got a username, you can actually use the SVN from emdebuild, the build script, and sign the package as well, and that's where you get the changes committed. The patches themselves are stored in a series of files in the top-level source directory; that's where you would normally find the built packages themselves.
So above your debian directory, above the actual package-and-version directory. And that's just listing what the various files do. You've got the four main files. The control.in patch is only made if you're actually using a control.in yourself. And at the bottom there, the install-file patch: if you've got other changes that need to be made, to your .install files or perhaps to other files in debian/, then emdebuild will notice that, pick up that change automatically and generate a patch, so that we know what's been removed from the other files in the debian directory. You can also create a new patch in debian/patches. If the package doesn't already support patching, say it uses CDBS but doesn't use the simple patch system that CDBS allows, then we'd encourage that to be added in, and then you just add a new patch if you need to. Not many packages need that so far: we've built 85 source packages and have only had to generate two or three source patches for debian/patches. And those can go upstream, as it were, with the cross-build tag on them. Again, it's not something that we would ever make release-critical or anything. It's just a minor bug with the cross-build tag: "can you help us out? Can we get this fixed?" That's how it's actually been done so far. Now, this is where the Debian developers themselves come in. These are the problems we've come across so far with the packages we've built. If you use configure, please make sure that at least you've got the build and host options set, using dpkg-architecture to get the right values via debian/rules and pass them to configure. That way, configure will understand that it's cross-compiling. If you don't have build and host in your configure call, then we will have to add them for you to get it to cross-build, and you end up, eventually, with a bug report from us.
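Both of the requests above, passing build and host to configure and (as discussed later) guarding the test suite, can be sketched in a debian/rules fragment. The variable names come from dpkg-architecture; the overall layout is illustrative, not taken from the talk.

```make
# debian/rules fragment (sketch). DEB_BUILD_GNU_TYPE / DEB_HOST_GNU_TYPE
# are queried from dpkg-architecture.
DEB_BUILD_GNU_TYPE ?= $(shell dpkg-architecture -qDEB_BUILD_GNU_TYPE)
DEB_HOST_GNU_TYPE  ?= $(shell dpkg-architecture -qDEB_HOST_GNU_TYPE)

# Tell configure it is cross-compiling by passing --build and --host
# whenever the two types differ.
CONFIGURE_FLAGS = --prefix=/usr
ifneq ($(DEB_BUILD_GNU_TYPE),$(DEB_HOST_GNU_TYPE))
CONFIGURE_FLAGS += --build=$(DEB_BUILD_GNU_TYPE) --host=$(DEB_HOST_GNU_TYPE)
endif

build-stamp:
	./configure $(CONFIGURE_FLAGS)
	$(MAKE)
# Skip the test suite when "nocheck" appears in DEB_BUILD_OPTIONS, since
# cross-built binaries cannot run on the build machine.
ifeq (,$(findstring nocheck,$(DEB_BUILD_OPTIONS)))
	$(MAKE) check
endif
	touch $@
```

With this in place, a native build behaves exactly as before, while a cross-build gets the correct triplets and can skip the unrunnable checks.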
But if you get a chance, if you're using configure in any of your packages, just make sure that the build and host options are there; the actual lines you need to add call dpkg-architecture to get the values you need to pass into those options. Now, the DEB_BUILD_OPTIONS mentioned earlier: nodocs, nocheck. nocheck is particularly useful when you're cross-compiling and you cannot run what you've just compiled on the build system. If make check is entirely architecture-independent, that's fine, but we don't really need the check run anyway, because that will have been done on the Debian source. You could do it, but in general most make check runs will use some kind of compiled, architecture-dependent routine somewhere along the line, and that will break. So if you use a make check, then please wrap it in the nocheck DEB_BUILD_OPTIONS conditional, so that we can skip it easily. The other problem, and we've seen this with one package already: it builds fine, and you'll get messages as it goes, it builds once, builds twice as a Debian package, but when you try to cross-build it, some packages are not idempotent in terms of their built files or built sources. There's a file that's generated from the source during the build, but then it gets cleaned away and it can't be regenerated in a cross-build. It's obviously a bug in the package, but it is something we will come across and file. And again, any kind of compiled-and-run code anywhere in the build, for any reason whatsoever: we need either a way of skipping it entirely, or a way of finding the value it is trying to obtain and passing that in, so that we don't need to run it at all.

[Audience] So there are going to be some programs where that's not going to be possible. Those of you who wanted an embedded version of NetHack: you really have to build a special level compiler with the host compiler. Is there any ability to use the host compiler at all?
You would have to have some kind of emulation. If you can't build that package, you'd have to have some kind of native run, wouldn't you? Unless you can run that script using the build architecture and get the values you need that way. This is the problem we are currently having with GCC: it gets confused about what is the build, what's the host and what's the target. Generally we're talking about the build machine being the PowerPC or the AMD64 or the i386; we want the host to be, generally, ARM or one of the others, and we want the target to also be ARM. Now, if your package can support that kind of thing, and it can actually build on one architecture for another, then that's fine, but in particular, is that going to be useful on an embedded device at run time anyway? It's just an example, but there may well be packages like that, and we'd have to find some way of working around it.

[Audience] You can fix that by making a package for the build machine which does the job, and a native package which is used at build time. That's the way this is dealt with in OpenEmbedded, for example.

And we'll probably end up having to do some of that for fancier things. Those are the general changes we end up making to your Debian source package. It's essentially the same, but with more on the version string. There are issues around your priority settings, because we are probably not going to stick with your existing priority settings. What Debian defines as essential is by no means essential to Emdebian; Perl is a classic example in that respect. So, in order not to generate false warnings with apt and dpkg if they're wrong, we may well have to modify your priority settings, downgrading them from required or essential or important down to optional or extra. The reason that we have to run these wrapper scripts is because of the problems with debian/rules and debian/control: we have to patch those before we can start running dpkg.
We can't put a lot of these changes into dpkg or into debhelper directly. We have to wrap the process in other scripts: patch the packages first, then run dpkg-buildpackage on the modified files. All we've got left to do, in many ways, is make sure that the toolchain updates happen as smoothly as possible. It's very difficult, because what we found with GCC 4.2 was that it spent quite a significant amount of time after Etch was released when it couldn't be cross-built. It wouldn't build on ARM at all, even natively. So there are periods when we have these problems, where the toolchain is very difficult to keep up to date with Debian while those transitions are going on. We need to try and work out whether there is a way of maintaining an installable toolchain during a transition in GCC itself. The main test environment for emdebian-tools and the Emdebian packages is GPE; that's an embedded environment for iPAQs. That gives us a good starting set of packages. We need to get more packages built with emdebuild, and get more of these cache files and build-log variables built into the packages. We've got a chroot builder as well now, along the lines of pbuilder, which will allow you to build a package within a chroot as a cross-build. That's very useful, because it means you don't have to install not only all the dependencies of your nice GUI package, but cross-built versions of all those dependencies as well, on your build system; the chroot will take care of that, or avoid it. We're now starting to use edos-debcheck to make sure that the packages we upload are actually installable from the Packages file. The chroot builder uses pbuilder code and uses debootstrap with some modified distribution files, so that we only pull in the packages that we absolutely have to. It builds an environment that is suitable for cross-building, so it builds all of the Emdebian tools into it.
Initially it looks a bit odd, because you've got to build inside this particular chroot — it's not a chroot you've installed things into yourself, it's a chroot built for building. Then we use debian/xcontrol, a new control file in the debian subdirectory, to specify which dependencies need to be available to the cross-compiler during the build. These are the libraries that are going to be linked against your package during the build, and that will actually be installed alongside the package when it runs. So you've got to separate out the Build-Depends: you want Build-Depends for actually running the build itself — debhelper, CDBS, autoconf, make — and then you want the cross-dependencies which need to be available to the cross-compiler. Those will mainly be the libraries, the -dev packages you might bring in — X11, GDK, GLib, whatever else you need — and those get installed as cross-dependencies inside the chroot. That separates all the -dev packages away from your normal build machine: you don't need hundreds and hundreds of cross-built -dev packages installed on your main machine. One of the things we have also been able to do is build a testing chroot with debootstrap. That was quite easy, but it gives us the advantage that we have a usable toolchain to at least carry on testing builds — still building against testing — until we can get a new toolchain built. There's a lot of work still to do, so if you want to help out — there are a lot of us here in the room already — join us on IRC and on the website. Part of the work is convincing mainstream Debian developers to support cross-compilation. That's where we've got to at the start: we need to show that something really is working, then we'll get more Debian developers on side, then we'll
get more momentum behind us, carry on testing, and improve the documentation.

Has there been any interest in the Debian community in building for DSP platforms? I guess there was SOCDA in the next generation.

No. At the moment we're confining ourselves to supporting architectures that Debian itself supports, because of the close ties in everything we're doing, so right now anything even remotely exotic isn't supported. However, Slind for example does support a whole pile of architectures that Debian doesn't, by making dpkg changes, so I'm sure we can move to supporting more architectures in due course — it's mostly a matter of dpkg support — but right now we're confining ourselves to a simple set of things. And there's a whole lot of debate about exactly how you manage similar things which would count as one Debian architecture but which you might want to build in several different ways; you'd like to make that at least possible, if not easy, but that's not really something we have to worry about today.

Do you have any glibc platforms, or is it all uClibc?

At the moment it's all glibc. uClibc has only really become usable in the last couple of weeks, and we'll be setting up separate repositories for the uClibc-linked packages.

Will that be a new architecture, or how will you handle having both glibc and uClibc packages?
We've had this discussion on the mailing list. There are two ways of doing it. It is possible to have glibc and uClibc installed at the same time and just build against one or the other with the same toolchain. Or you can say: that's very cool, but it's a completely stupid thing to do, because you only ever really want to use uClibc on its own — so would you make them separate architectures? There's a whole lot of questions about overloading "architecture" to mean different variants which are actually compatible. That's how Slind did it two years ago, and it works fine, so the case for a big debate is fairly weak. Ron Farrow is trying to make it work and wants an answer on how we're going to do it. I think the mailing list is probably the best place to talk about that, so anyone who cares, please come along — especially if you actually know anything about it.

You talk a lot about building stuff, but are you actually using these packages that you've built? Can you say something about what is actually usable already?

Almost — this is what we were talking about earlier. There is a particular problem with GCC 4.2 that goes back to the build/host/target confusion, which happens within the Debian build but doesn't seem to happen upstream. So there is a particular package we're missing at the moment that we wouldn't normally need for a root filesystem, and we can work around that; we don't want to work long-term with patched GCCs. GCC will cross-build in the sense of making a cross-compiler, but we are trying to prepare packages of GCC that will actually be installed on the target device, so it's more like a Canadian cross. But yes, just in the last two or three weeks the chroot work has been extended to include a sandbox which uses debootstrap and its second-stage switch to create a chroot that is ARM-compatible. You then install the Emdebian packages into that on the target device, and as of last week that was almost working. I've got a couple of changes to make in BusyBox, but
yes, it's almost there. So we do fully expect to have a working root filesystem that all came out of the tools — it could have happened this week, but we don't know yet; it's not quite working.

In the same context as the question about glibc and uClibc: I'm mostly interested in really small designs, where probably even uClibc would be quite a burden. Do you guys spend any time thinking about using other C libraries?

We're trying to support as much as we can. If we can support the smallest possible device, then we will go that way. It all depends on whether you can get a small enough system. We've got the base system down to under 15 MB already, and we'll hopefully get it much lower than that. So yes, you're right, a lot of people want to do that, and there is a trade-off: we don't want to reproduce buildroot — it works, and it's easier to drive. There is a minimum size of machine for which a shrunk Debian package system, still including all that infrastructure of menu systems and so on, actually makes sense. Below that you're really just talking about a toolchain — and yes, the toolchain is there; such toolchains have been around for years, and quite a lot of people rely on them for exactly that.

You've been mentioning convincing package maintainers to make their packages cross-buildable. It seems to me there are two separate things: one is making them cross-compilable, and one is making them Emdebian-friendly, in the sense of removing and shrinking things. Making a package cross-compilable without necessarily doing the shrinking seems a lot easier, and probably easier to convince people to do.

Yes. Part of what we've done with the tools is keep the Emdebian-specific work within the tools themselves, so that things like the shrinking don't have to be
the concern of the maintainer of the package — we'll take care of all that. But the simple things — making sure that configure supports --build and --host, making sure that cross-builds are possible, making sure that you don't try to run things at build time that were compiled for the host architecture — all those kinds of things are what we would like maintainers to do across all packages; we'll take it on from there and do the Emdebian side. As long as the underlying package has things like that built in, with some kind of understanding in the background of what cross-compiling really needs and the kind of checks you need to make — does the maintainer go through debian/rules thinking, hang on, where's that file coming from, and should we do that some other way to support cross-compiling? So yes: cross-compiling and Emdebianising are in fact entirely separate. To add to the confusion here, it is possible to use the existing tools natively and avoid the whole cross-compiling thing: you can just Emdebianise things, build natively, and see if it works. We haven't really emphasised that because that's mostly not what people are doing — I did do it on my machine. What we now really need is some web infrastructure so you can easily get a list of how much stuff is still broken natively and how much of it is broken only because of the crossness. But certainly I think it's important to try to get everything in Debian to cross-build that can be made to, and that is not fundamentally our problem — we've just kind of taken it on.

My question kind of builds on that, because what would really be neat is some way of doing automated testing in QA. So my question is really: what are the best practices for target emulation? Can you use things like Xen or QEMU, and is there a target emulator for ARM or the other architectures?

As far as the cross-compiling support goes, as Wookey says, you can test it natively; you don't necessarily need to run on an
embedded device. As long as the support is there, that's half the battle. But Wookey, do you want to take that one? — I think we'll cover some of that at the end, actually; there's a whole session for that question. And yes, I'm sure you can use QEMU and the like.

I assume you're using BusyBox inside Emdebian, and I'm wondering what you've done about the problem that the Debian packages expect utilities with different behaviour from the BusyBox versions — that kind of thing.

It's early days on that. The one problem we had with the root filesystem at the moment is that although I built a BusyBox, it wouldn't complete the debootstrap installation, because I hadn't migrated the BusyBox udev configuration into our Emdebian configuration. It was a default Debian BusyBox, which obviously doesn't do a lot of the things you'd need if nothing else is installed. So that's part of the next build of BusyBox: trying to solve all those kinds of problems. One side of it needs testing on the actual devices, to see what else we actually need to add to the BusyBox support. There will be issues later on with packages that don't build or don't run with it, but we know that GPE and its libraries and applications do run with BusyBox, because that's what GPE systems elsewhere are actually doing. So within what we're doing now, BusyBox is a good solution, and we can always replace it as we go on from there. It would be great to be able to define exactly which behaviours a Debian package may rely on, so BusyBox could be used consistently, but we don't have a general solution for specifying all those behaviours. BusyBox is only really going to be essential for the root filesystem in the initial installation; you can replace it after that, before you add more complicated packages that expect the GNU extensions. We could even make many different install images, one of
which is a BusyBox-based one, and one of which is almost a full Debian install that uses the full dpkg and everything else that goes with it. It all depends on the device you're trying to support and how much space it's got. In fact the concept really is to have a BusyBox-based Emdebian and a more standard, nearly complete Debian — those are just two of the possibilities, whichever is the easiest way to do it. But I don't know whether there's any appetite in Debian generally for making BusyBox a sufficient base for the root filesystem; I suspect it's not that big a deal, actually. I don't know how much experience anyone has of how many things break. Could we just let everyone do it — say you replace everything with BusyBox, use BusyBox for everything it's capable of doing? There are already things like sysvinit scripts that depend on certain behaviours. In our experience it's not bad; there isn't really a huge amount of breakage that comes up. As with the user interaction questions, we're going to have to say there's no standard here for a lot of it. We've got the OpenEmbedded patches and files that work with all that, so where there are changes we can use their work to help us — we've done that a lot already; we've taken a lot of their patches to help get some of our packages to cross-compile successfully. In the target root filesystem, we said, there are still a couple of megabytes taken up by Debian package management, and you can have that removed by a script — the original rootfs script was actually just using BusyBox to replace the Debian package tools. The biggest part of the remaining size was glibc. I'm looking at splitting the locale support out of glibc, which is a very big component of that binary mass, and getting another 2.5 MB or so off glibc. So the options are there.
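The BusyBox arrangement discussed above rests on the multi-call convention: one binary, many names, with each symlinked name selecting an applet. A toy illustration — the stand-in script below fakes the real binary so the sketch runs anywhere; on a real system `busybox --install -s` creates the links:

```shell
# Toy multi-call "binary": each symlinked name selects an applet.
mkdir -p demo/bin
printf '#!/bin/sh\necho "applet: $(basename "$0")"\n' > demo/bin/busybox
chmod +x demo/bin/busybox
for applet in ls cp mount; do
  ln -sf busybox "demo/bin/$applet"
done
demo/bin/ls    # prints: applet: ls
```

This is why replacing BusyBox later is cheap: installing the full GNU packages simply replaces each symlink with the real utility, one name at a time.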