A few slides, and they're just text, so they're really boring, so we shouldn't spend any time on them, of course. You know that I have been building a bunch of embedded little computers. I have some examples up here. You can look at them if you like. Lots of them. You want to look at some embedded computers? Yeah, they're really cheap. Here's a computer with the power of the computer that landed the Apollo lunar module on the moon, and it cost me 80 cents. Yeah, so what are we building? We're building a bunch of tiny embedded devices. We're using a bunch of different processors. That one uses an ARM processor, a Cortex-M3 by STM. I have some more here. I think I've flung them all out. They use the Cortex-M0 processor. They're embedded processors. They are not what most people think of as an ARM processor. They can't run a telephone. They can barely run a remote control. These devices have very little memory. The biggest one that we've used in a product has 128 kB of flash. They didn't have a file system until a couple of months ago, when I decided I would build a product with a microSD card, so I put a FAT file system in them. They do not run Linux. We're not ever planning on running anything like Linux. We're barely even planning on running an operating system. What I want for an embedded ARM development environment is something that looks very much like the GCC AVR system or SDCC. Or even, as I learned this week, a good analogy is the MinGW environment, where you're building an application on Debian that runs in a completely separate environment. It has nothing to do with Debian as its target; Debian is just the build environment. The current toolchain that we're using came from the Summon ARM toolchain. It's basically a script on top of a bunch of tarballs of upstream snapshots.
So they snapshotted binutils, they snapshotted GCC, they snapshotted the GNU linker and GDB, and they built this shell script that compiles all of them and does a pretty good job of putting together a usable toolchain. This works great on the Cortex-M3. It doesn't build correct binaries for the Cortex-M0, but it was a really good start. It gave us an idea of what the system was going to be like. We built a script that takes this toolchain and installs it in /opt. It's not a Debian package, so we put it in /opt, and that's what we've been building all of our stuff on. It works fine. More recently, in the last year or so, Linaro started putting together snapshots of a Cortex-M toolchain at this URL here. So you can actually get a Cortex-M toolchain there. This one works great on the M3 and the M0, so it's definitely an improvement. Linaro has been pushing their patches upstream. I don't know, because I haven't had time to look, how many of these patches are still not in upstream GCC. They have a commitment to getting everything upstream; I don't know how complete it is. The other thing is that we're trying to use a tiny C library, and the C library that we're using now is not newlib, it's PDCLib, the public domain C library, which is the smallest C library I've seen. It definitely strives for reuse of code and easy-to-understand functions, and from a porting perspective, it's awesome. It took me about a couple of hours to get it running in our environment. From a C library implementation perspective, it really kind of sucks. For instance, the printf code, this is awesome, right? You want to print an integer. Well, in the PDCLib code, the only integers it can print are 64 bits.
So it casts everything to a 64-bit integer and does everything as 64-bit ints, and of course, the actual printing function, well, the simplest-to-understand integer printing function is recursive, so it's sitting there recursing down the stack with about six 64-bit values being pushed and popped off the stack for every digit that it's going to print, and these are 64-bit numbers. So I got like a couple of kilobytes of stack usage out of that on my four-kilobytes-of-RAM processor, which is not so good. So libc is really the problem; I don't know what libc to use. SDCC and GCC AVR both have a native libc. When you get SDCC or GCC AVR, they come with a libc that's kind of suitable for their environments. SDCC in particular has this heavily optimized, lots-of-assembly-code libc that's kind of a marvel of 8051 assembly code. Horror show? Marvel? For ARM, I don't know what to do. I'm using PDCLib right now because it was really easy to get it working. It's totally unoptimized and it uses a lot of stack space, but it's very small, it's very portable, and I didn't have any trouble getting it working. The Summon ARM toolchain and the Linaro stuff both use newlib as their C library. And the trouble that I had with newlib was that it was enormous. It took, like, kilobytes of space, but yeah. So if you want to save space, are you doing link-time optimization so that the functions in the C library that you don't call get stripped away? I am not doing that, no. I assumed that the library was built so that file-level linking would be sufficient to get just the code I used; maybe that's not true. Okay, even for static linking? Even for static linking? I would have thought it would just pick the .o's, or whatever is in the .a's that I used. No, you need some specific options to strip the things that you don't use. Okay, that's something definitely to look at.
I really don't want to use link-time optimization as it currently exists in GCC, because it gets rid of all of your ability to debug applications. Have you tried? It removes all of your debugging ability. You can't get stack traces anymore. It's like, seriously? You people think this is a great way to compile stuff? Eric just looked into doing link-time optimization in Mesa. It's like, I can't support that; my users get no stack traces. To be fair, you can't necessarily have both, right? You either optimize to make it absolutely tiny or you make it debuggable. You can't have both. Or at least they're in direct contradiction. Yes, I understand. I would like to at least get some semblance of a stack trace when a crash occurs. Throw away all the functions I'm not using, but leave the stack trace bits in. Yeah, exactly, in any case. The other option, of course, is uClibc, which is very Linux-based. It's not exactly micro anymore; it's like a third the size of glibc at this point. I don't know, it's huge. The other question is how we want to package libc for this environment. Right now, I'm just taking the Linaro bits, compiling them, and sticking them in /opt. I could clearly just turn that into a Debian package pretty easily. From a packager's perspective, that seems kind of nice. I take an existing upstream that's maintained, I put Debian packaging on it to make it taste like Debian, and I get it ready to install on Debian. What could be better than that? Well, the problem with that, of course, is that 99.9% of the code in that package is already in our source tree, in the form of the Debian GCC source package. So the question is, can I do something like MinGW does, which takes the Debian GCC source as a build dependency, installs that, and then builds from that source? And if the Linaro patches were all there, maybe that would work. Hello, am I on? Yeah, they are doing a pretty good job of upstreaming things, so what's in Debian is usually a bit behind.
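For reference, the "specific options" mentioned just before, the usual way to drop unused library code without full LTO, are per-function sections plus linker garbage collection. This is an illustrative invocation only (the library name is hypothetical); the same flags work on host GCC and the bare-metal cross compiler:

```
# compile with each function and data item in its own section
arm-none-eabi-gcc -ffunction-sections -fdata-sections -g -c app.c

# let the linker discard any section nothing references
arm-none-eabi-gcc -Wl,--gc-sections -o app.elf app.o -lpdclib
```

Unlike LTO, this keeps ordinary debug info and stack traces intact, since the surviving functions are compiled normally.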
Right. A few months, which for your purposes probably doesn't matter very much. No, it doesn't matter. The question is, are these Cortex-M patches that Linaro is doing in a completely separate area also integrated? I think they probably are. So yeah. Christian, behind you, has a question. More of a comment than a question. I've tried coming from the other side: taking what is currently there in the Debian archive and working on Cortex-M3 devices as well. And with just some tuning of compilation options, I managed to use the cross compiler from Debian, which is basically gcc-arm-linux-gnueabi, to build code that runs on Cortex-M3 devices. Oh, very cool. The only thing that I didn't get to run was basically all the GCC internal library functions. So if you wanted to do anything like... 64-bit arithmetic, floating point. Anything like that. You can get the basic examples running, though. Yeah. Okay, so that sounds like it might be pretty... Yeah, it's not that far apart. Okay, and the last comment I had is that I was talking with doko yesterday, or two days ago. I don't know. It's DebConf. It's all just a haze of beer-infested cheese eating. I think I had about three pounds of cheese for lunch. That's awesome. So doko's question was, is this just another architecture for multiarch? And it looks a lot like multiarch: you're doing cross-compilation. But it's really not multiarch at all. And I think doko and I both agreed that trying to do this in the multiarch environment did not make any sense, because if you ever accidentally include something from /usr/include or /usr/lib, you have failed. So we're going to install this, clearly, the way we install GCC AVR, MinGW, and SDCC: it's a cross-build environment solely. There's no multiarch stuff here. Now, if multiarch gets even more complicated and gets to the point where cross-compilation is just multiarch, then maybe we can consider changing it. Well, cross-compilation is just multiarch already.
But the problem is C libraries. So the MinGW people have exactly the same problem: they have a non-POSIX C library, and we go, we can fix that, so we can change things so that you don't assume a POSIX C library and move things around. The question is the degree to which you actually use system libraries at all, right? You don't want libc, and you probably don't want many of the others either. No, none of them. Yeah. On the other hand, the SLIND people had this working in 2005. They did a dpkg architecture which was uclibc-blah, and that worked fine. I was amazed to find someone still using it this week, and you go, that release is seven years old. So you could do that. And the advantage is that you get to use exactly the same toolchain and all the build foo and all the other stuff. It becomes a standard cross-compile. Right. And the question is, is that more aggravation than having to build your own toolchain with some different options? I think so, because the last thing I want to do is create .debs. I'd have no need for .debs. What do you want with the packaged toolchain? Oh, I want a packaged toolchain, but when I have the toolchain, I don't want to build .debs with that toolchain. Right, but that has very little to do with your toolchain. I mean, toolchains build code, right? Packaging is totally separate. But what I'm saying is I don't need all the packaging tools on top of the toolchain that help us create .debs. Yeah, but you don't get that. I don't need autoconf, I don't need... Yeah, but packaging and how you build stuff is totally separate from how your toolchain is packaged. Right. Oh, so you're saying that taking advantage of the multiarch stuff would make packaging the toolchain a lot easier? Yeah, and all your tools work the same as they did before. And if you wanted to get a little bit bigger, because all the M3 stuff is tiny right now, right? Yeah, we know what happens.
It used to be the 8-bit PIC world, and now you've got an enormous 32-bit processor, and it won't be long before you've got a 64-bit one. Just watch. And people go, oh, actually, it would be handy if I could use this little library. Right. And so that's why I wanted to hold this BoF, largely: to say, you know, we're moving from this GCC AVR and SDCC world into a larger ARM world. Does it make sense to continue using their model? And what do you think? Did you actually look at how GCC AVR works and is packaged? No, I just used it. Yeah, I think it puts files everywhere. I mean, there's no consistency whatsoever. And so I think that putting this ARM cross compiler in the standard framework of multiarch may be a bit more consistent with the rest of the system. Yeah, it certainly would be more consistent. The question is, do I want to do that, or do I want to just hide it in some directory? I mean, we have a file system specification, and just dumping things into a package-specific directory would be pretty easy. The AVR compiler doesn't respect that specification. Okay, then maybe not a good example. He's lost his train of thought. I have. I don't know where it went. Talk into the microphone and try to find it again. Yeah, I would say that if you look at it from just a toolchain point of view, cross-compiling versus multiarch is, in the end, just whether you stick your stuff in /usr/<target> or /usr/lib/<target>. And I think it would be nice to have that, and then when you do the Debian multiarch stuff, you also add /usr/include to your path. Yep. But from a sort of consistency point of view, it would be nice to just have everything in /usr/lib/<target>. So the multiarch style, even if your own toolchain doesn't add the /usr/include part. But the /usr/<target> stuff isn't actually FHS-compliant. No, it would clearly go in /usr/share/<target> or /usr/lib/<target>, yeah.
Yeah, so then that's basically just multiarch minus the /usr/include stuff. Yep. Someone said, Michael, you can use sysroot as well as multiarch, right? So you build a compiler just like all the others, and you can still use --sysroot to put all my stuff in a different bucket. Yep. Which may make a lot of sense in this context, just because you're not doing package building; you're just putting in random crap which you want to build against. You know, you're not necessarily packaging your PDCLib in a conventional way. That is the question: do we want to do that? So yeah, just because you use all the standard toolchain-building foo doesn't mean you can't also do sysroot building. We have both, and I believe it all works, but it's not very well tested at this stage, because we're all doing distro building, so we don't use the sysroot stuff. Right. And like him, I've always just used the standard compilers to build non-libc stuff like bootloaders and things, and actually that works fine so long as you never put a printf in, at which point your binary gets 250k bigger. Well, and it starts calling functions that I don't have any ability to implement, like read(). That's right. But so, you know, it's not difficult. It is nice if you get a bare-metal toolchain. I'm not sure how much difference it actually makes using a bare-metal toolchain versus a nominally built-against-libc toolchain with just libc turned off. Quite a bit, actually. The arm-none-eabi target in GCC is dramatically different from the Linux EABI toolchain, and the interface to the internal GCC library itself is completely different. I haven't investigated enough to know why it's different or how it's different, but I do know that I was completely unable to get anything working with a Linux-targeting ARM compiler that putatively output M3 code. You said you got something working.
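For concreteness, the --sysroot mechanism being discussed redirects the compiler's default header and library search away from the host's /usr into a private tree. A rough sketch (the paths are hypothetical):

```
# headers and libraries are resolved under /opt/bare-metal,
# never under the host's /usr/include or /usr/lib
arm-none-eabi-gcc --sysroot=/opt/bare-metal -o app.elf app.c
```

This is what makes it attractive for unpackaged things like a locally built PDCLib: you stage them into the sysroot instead of turning them into .debs.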
Yeah, basically the only thing where I got stuck was that all the functions in the standard C library were ARM32 code instead of Thumb code. So you had a BX instruction, and then the Cortex said no way. So if we could, if I think the... So one of the key issues there is that any time you're building things with GCC you have libgcc under the hood, and if you have a compiler which is targeting a Linux glibc architecture, it assumes it knows something about the actual hardware you're targeting, as well as that it can rely on glibc being present for the final linking. So you end up with a libgcc which does not let you do what you think you asked it for if you're using a Linux-targeted compiler. Yeah, the point is that libgcc is built using some particular code, so as soon as you include it, if you actually link against it, then you get code for possibly the wrong thing. So one of the things that Simon Richter did years ago was just replace libgcc with a uClibc one, and that was the use-uclibc package: it nobbled your libgcc into what you wanted, and that works, which is quite... Yeah, so I mean, there's potential here for figuring out how to be more efficient about taking your compiler, which supports everything, and figuring out how to drop in the libgcc that you need, the libgcc bits for whatever you're targeting, and supporting that a bit more dynamically, having that packaged separately. I think there's room for improvement here. We haven't really gone down that road at all. Yeah, I mean, it would actually improve our lives greatly if libgcc wasn't so closely tied to the GCC build, because it's actually also, in lots of ways...
This also ties into some of the biarch problems, and why biarch remains a thorn in our side even in the multiarch world: it's not sufficient to just take your AMD64 compiler and say, oh well, I want to build for 32-bit, unless you also have a 32-bit glibc somewhere, and the natively built 32-bit glibc doesn't quite line up in all the right ways (both path-wise and, I think, in some of the contents) with the one you would get from a biarch build. So there's a lot of complexity there. With regards to multiarch, I wanted to say, though, that the thing that makes multiarch compelling, aside from having a very pretty, elegant file system layout (which of course we all know and love, but that's not really a good reason in and of itself to change how you're building your compiler), is the fact that you do your native build for the library and then it gives you everything you need to do cross builds. It's the same package for both native and cross builds; you're just installing it in different ways. You're installing it on a foreign architecture and making use of that as part of the build environment, but you don't have to rebuild any of your libraries. If you're targeting bare metal, that's less compelling. Right. Yeah, there is no native environment here; it's just a cross environment. Right, and so from that point of view, I'm not sure it's worth the added effort to get your compiler to do multiarch correctly, because I don't know that we actually have all of that entirely upstreamed. Our GCC package putatively does multiarch correctly. The GCC package does, but I don't know how much of that is upstream; but if you're basing off the GCC package and relying on that as a base, then that's fine. Right, and the question is, does our current GCC package have the code necessary to compile for Cortex-M3 and M0 targets? To actually compile for those targets.
We just need some way to specify another compiled form of libgcc. So I think the Summon ARM toolchain thing does something like multilib. No, the Summon ARM toolchain doesn't actually have multiple-target support. That was one of the bugs in the Summon ARM toolchain: you hard-coded whether it was an M0 or M3 compiler, and it built the whole toolchain for that particular set. The Linaro toolchain does support both; it does have multilib support in it. So with the Linaro toolchain I have separate PDCLib libraries for M0 and M3, because they are different architectures. I just have a question about this libc problem. I mean, you have PDCLib, you have newlib. I'm not sure you can switch between the two with the same compiler, because some bits of the compiler are really intertwined with the libc. Yeah, I noticed that in GCC. So what I'm doing now is I'm building the toolchain, and the toolchain build uses newlib, and then when I build my application I just throw away the newlib part and use PDCLib, and it seems to work fine. Isn't that much less of a problem in an arm-none environment than it is in a... because I think it's just the code that ends up in libgcc that is effectively related to which C library you're using for the target environment. And I don't know what that relationship is. Does anybody know? Other than being incestuous, yeah. Those functions have to be in a particular form. They've got interfaces to things, and they've got code that they're built with, and so the problem with the GCC design, for all these nice things we want to do, is that they've always gone, well, you build it for the stuff you want to build, right? It was never designed to build a compiler that could do lots of things and then let you choose at runtime which ones you want to do.
It doesn't really want to support that, which is why all this is a bit painful; multilib kind of gives you that, but there are still some assumptions. For bare metal, actually, couldn't you just get away with a stage-one GCC if you're just doing C? Well, no, you still need libgcc, because libgcc contains all of the support for extended-precision arithmetic and... Right, yeah, if you need that, then yeah. You always need that. But you get a static libgcc when you build a stage-one compiler, and at the moment it doesn't include /usr/include, because I can't get it to. So if you build the one I've got at the moment, it's perfect for your purposes. Awesome. So who else is using M3 or M0 parts, or trying to use Debian to do embedded ARM development? And Zumbi as well. So there's like a handful of us. What are you guys using? I know what you're using: you're using the M3 and the regular ARM compiler. What are you using? I use a handmade script that basically does what the Summon ARM toolchain does. Okay. Actually, I do use the Summon ARM toolchain, because I can't strip the code down to a level where it doesn't call anything GCC has previous knowledge of; that's usually not feasible. Have either of you tried the Linaro bits? No, I've so far only worked from... no, well, let me think. I'm not completely sure. And what are you using? I'm using just upstream GCC and binutils with newlib, vanilla. So you're effectively doing what the Summon ARM toolchain is doing, by hand as well. Similar. The Summon ARM toolchain is just a script around upstream. Same here. Presumably built... crosstool-NG supports bare metal as well, which is a more widely recognized script for doing these things than this Summon thing that I've never heard of. Right. I needed the M3 target, and the NG stuff didn't support it when I was looking at it. And now that I'm using M0, the Summon stuff doesn't support M0, so I'm using the Linaro stuff.
So I haven't gone back to look at the NG stuff to see if it's moved forward at all. That brings up a new topic: shipping compiled libraries for the M3 and such in Debian. I think there's a relatively wide use case now that ARM has opened up its CMSIS library. For ARM there's a base library which abstracts away all the gory details of the interrupt handling and such, and now this is free software, as well as some vendors' peripheral libraries. So I think there are use cases for having pre-compiled stuff, whether it's in /usr/share/arm, or whether it's in /usr/lib/arm, /usr/lib/arm-none, whatever. But I think there are use cases for that, because what I envision, what I'd like to happen for Cortex-M3, is that I can aptitude install cortex-tools just as I do aptitude install arduino, and then just hack away, compile what I actually need, and upload it. Also, do you want to be able to ship things like libraries for the Cortex tools? So we'd have a newlib package for Cortex and a PDCLib package. Definitely, it'd be nice. I think for many libraries this wouldn't work, because they just have too many compile-time options. Like, you can't have a generic build of lwIP, but for some of the base stuff, like CMSIS and newlib, I think it would work. Someone needs to try some experiments. I mean, the GCC packaging is a joy to behold, if you've ever looked. It's quite scary, because it's very clever. It's not really a package; it's a thing for making a set of packages out of one pile of source. So it's got this amazing mechanism for applying a different set of patches depending on what it was you were trying to build, right? It's all scary quilt foo. You're waving your hands a lot here. Yeah, it's very good, actually. I mean, it does mean that you can build almost any C compiler from the Debian source.
And if there's a bit of the Linaro patches missing, you just... there's already a linaro.diff, which previously provided the whole ARM64 thing and now just provides the tiny bit that we're slightly out of date on. So if you need M3 patches, you just put in the M3 patches. And then somewhere there's an awful lot of mechanism saying what sort of compiler am I building. So let's put in a rune for a very stupid one. So the question is, do we push all this off and get the GCC maintainers to provide us this compiler, which would be awesome, of course. No. The GCC maintainer will complain the moment you suggest this. Well, of course. But it's very easy. He also won't do it. So if you want to succeed in this, don't expect him to maintain it for you. It's very easy to just build-depend on the GCC source package, just like MinGW does, and build it with the right options. So the straw man I would like to propose is this. Currently we understand that the bits that are actually different for building for different targets are small: we have a compiler that's an ARM compiler, and we have cross compilers and native compilers, both of which understand all the various combinations of instructions for the various ARM chips you might be targeting. And the main thing that gets in our way is that libgcc depends on a libc of some kind in some cases. So the straw man I would propose is for somebody who's keen to look at this to figure out how we can improve the packaging (without making the packaging even more complex than it already is), and figure out how we can let people use the existing compilers and not have to rebuild their own compiler, because you don't really need a new compiler, but solve the problem of being able to drop in the libgcc that you need.
And multilib is meant to deal with this, but the problem is that multilib requires you to centralize it and declare all your options, so you end up with this lovely M-by-N matrix of options in multilib that you don't really want to deal with, and the GCC maintainer doesn't want to deal with. But if we could figure out a way to split the libgcc build out, we could enable an awful lot of things, and then you only have to worry about maintaining that little bit that's actually different in the compiler. I mean, would this just be a different set of GCC compile-time options in one of those giant GCC config files, then? Yeah, it's... what's the file format called? I don't remember. Spec file, is it? Yeah, you just need a GCC spec file and you need your libc. Those terrify me. Oh, they're quite lovely. They make great before-bed reading. So Simon Richter has already done this, like seven years ago. So we need to go and find the post that says: this is what I did to replace the spec file and nobble libgcc to make uClibc just dynamically replace the libc for your compiler, right? It has been shown to work, quite a long time ago now, so everything might have changed, but in principle I think you can do that. So I could create a new ABI target, arm-none-eabi, using the existing ARM compiler, which is apparently not ABI-specific now. The compiler that we have in the archive knows about all the ARM ABIs, so it can build for any of them. Yeah, but we have to recompile it, though. No, no, you just do -m whichever one. Don't you? Wookey? I think so. If you nobble your specs file and your libgcc correctly, yes.
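For readers who have never seen one, a spec-file override of the kind being discussed might look roughly like this. This is an untested sketch under assumptions: the %rename pattern is the one used by custom specs files passed with -specs=, and the library name -lpdclib is hypothetical:

```
%rename lib original_lib

*lib:
--start-group -lpdclib -lgcc --end-group

*startfile:
crt0%O%s
```

You would then invoke the existing compiler with something like arm-none-eabi-gcc -specs=pdclib.specs, replacing the default libc and startup files without rebuilding the compiler itself.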
Right, so if you have the spec file that tells it what you actually want... the details of the spec file cover memory maps and everything else, how you assemble things, lots of linker-specific stuff. But in terms of knowing the ABI, any ARM compiler that we have in the archive knows about all subsets of that, as far as I know, and there may have been bugs with the M3 stuff or the M0 stuff that you're talking about. Yeah, the M0 was quite the surprising thing: GCC didn't gain credible M0 support until more recently. Right, but when we build our ARM compilers, they have support for an awful lot of targets, and they're just -m targets, just like our x86 compiler knows how to target 386, 486, et cetera, et cetera. Same thing on ARM; it's just that ARM is a more bountiful environment. That's a nice term. Fragmented, fractured, chaotic? Embedded. Bountiful. I think we're about done here today. We have about five minutes, a few more comments, and then we'll... So I guess the takeaway is that it sounds like one thing we need to try is just building a new spec file that magically uses the existing compiler, and see if it works. So I was going to ask about debugging, because in the talk outline there's OpenOCD. Oh. Is that still the best thing we have for bare-metal debugging? I use two tools with ARM right now. STM sells a little device called the ST-Link V2. It's a little dongle. And then there's an stlink application that talks to that and has a GDB target. So you don't have to use OpenOCD; you can use this little stlink application, which is significantly smaller. Now OpenOCD also talks to this device. So at this point I'm using the stlink program to do M3 debugging and OpenOCD to do M0 debugging, because I haven't gotten the other direction working for either.
But the stlink program is significantly simpler, and it's not configured through these massive OpenOCD configuration files; instead it's source code, so you can actually debug it when it goes wrong, which is really nice. And we should probably just package that for the archive. That would be really easy. That's just the stlink tool. There's a GitHub repository, a source for it; I don't remember which. Okay, that sounds great. What about the bigger ARMs? Is there anything for that? Because with OpenOCD I was really lost. OpenOCD loses everybody, because it's got this massive configuration file, and you have to get every bit in it right. Yeah, I have an example: I couldn't read memory when the MMU was active. I couldn't find a way to easily read from a physical address in memory, so I had to somehow call the function that disables the MMU with single-stepping, and then I was able to read physical memory. And then you re-enabled the MMU? Yeah, to put it back. So is there really nothing better than that in Debian for small ARMs that still run Linux, for example? I don't have any idea; those are huge ARMs. It's not OpenOCD's fault that it's hard to read the memory, is it? It's just doing what it's told. Well, the problem with OpenOCD, well, it's not a problem, it's a feature of OpenOCD: it's a very general-purpose tool that targets everything via scripting. With marvelous config files in Tcl. Exactly, yeah, exactly. So you could do it; the fact that you could do that with OpenOCD shows clearly it's a flexible tool. But your other option is to do what the stlink thing did and have a custom application that targets your specific device. And I used stlink because OpenOCD didn't have support for the ST-Link V2 dongle; that's why I started there. I mean, if support for what you're doing is already in OpenOCD, it's brilliant. Yeah, because somebody's already worked out all the tedious runes.
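For anyone who has not used the stlink tool being described, a typical session looks roughly like this. This is a sketch under assumptions: st-util is the stlink project's GDB server, and port 4242 is its default as I understand it; check the project's README for the exact invocation:

```
# terminal 1: start the ST-Link GDB server (talks to the ST-Link V2 dongle)
st-util

# terminal 2: connect GDB from the cross toolchain, flash, and run
arm-none-eabi-gdb firmware.elf
(gdb) target extended-remote :4242
(gdb) load
(gdb) continue
```

The OpenOCD flow is similar in shape but adds interface and target config files before you get a GDB port.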
And yeah, I've had to work out tedious runes a few times, and it is quite hard work. But that's the cost of flexible tools, really. So to first order, it's probably easier to write a script for OpenOCD to talk to your chip than to write your own program to talk to your chip. So I think you need to look at OpenOCD as a debugging development system instead of a debugging system. We have one minute left. So we have at least a simple action plan: go try just building a new spec file. If that, when that fails, the other option is to try to use multi... use the existing packaging to just make an arm-none compiler and use that. Where would we install that? Multiarch? Or do we create a /usr/lib/arm-none-eabi directory and put everything under there? So the packaging will probably already put it in... Yeah, you probably want to add an architecture called none, I suppose. arm-none, exactly. Yeah. And then just put everything in arm-none sorts of places. Okay. You don't actually need to get the architecture added to dpkg before you can install it, even if it's multiarch. Although I don't know what the multiarch spec says about that. Is it supposed to just be Debian architectures? Well, the guiding principle is that those multiarch directories, the /usr/lib/<triplet> directories, are owned by the Debian architecture. So while we would never have an arm-none Debian architecture that I could ever foresee, it does seem a little bit strange to be using that directory, because for cross packages in all other cases, like /usr/lib/arm-linux-gnueabihf, everything you install there is an armhf package. So where do you want me to put this stuff? I would use the traditional cross-compiler path. I would just use /usr/arm-none-eabi/lib, if there is one. So actually create a directory in /usr, or /usr/share? Well, if that makes you feel better about it, but I mean, it's really all the same to me. None of the cross compilers have ever historically complied with the FHS.
And we have ample coverage for you in terms of existing packages that violate that. Yeah, so where would you like me to put it? We already put stuff in gcc-cross. So we should probably just put it there. I mean, I don't think that... again, those /usr/<triplet> things are for host-architecture stuff, right, not for stuff that runs on the build machine, the machine you're actually running your compiler on. I think we should put it in the normal kinds of places, and we just pick a name. Yeah, time's over. Well, so we need to figure out where to stick this stuff, but I think that's something we can probably discuss with Steve over beer. Absolutely. Thank you very much for coming today, and have fun with your... oh, thanks for bringing my hardware back. Anybody else got any more little pink bags? We'll look around. Oh, very good. And we'll see you at the bar.