[Transcription garbled for the opening few minutes of the talk.] Now, I don't pretend to know everything, and as per normal I will write up a summary of what's discussed here. It will help me immensely if people can take notes on what goes on. I've already got a Gobby document started. So, a quick update, going down the ports. arm64 is our most recent port, first released with Jessie. It's working really well, I think. [Transcription garbled.] There is one kernel. It will work with either DTB or ACPI to tell the kernel about the config of your device.
You tend to see that the split is quite simple. Small devices will still be using device tree. Bigger server boxes, the standards all say they should be doing ACPI, and most of them do. Sometimes it even works.

So, armhf is a bit older; we first released it with Wheezy. The details of the ABI are well understood by now, I hope, but I will mention them again. It's a hard-float ABI, so that means that floating-point arguments are passed in hardware floating-point registers. The minimum spec is ARMv7 using the VFPv3-D16 set of registers, which is a lovely bit of alphabet soup which probably doesn't mean much to most people. It means that it is guaranteed to work on any v7 CPU that has hardware floating-point. The vast majority of them do; some of them don't, and we try not to talk about those. We don't depend on NEON, because NEON is optional in a number of v7 CPUs; in fact it's optional in v7 full stop. There are not many, but sufficient v7 CPUs don't have NEON that we haven't pushed it. Now, one of the nice things about this ABI is that it was agreed as a standard across all of the distros doing v7. You can take a binary built on Debian, and you should be able to run it on Fedora, SUSE, Gentoo, wherever. And vice versa. We have a couple of kernels available: we have the armmp one, and we have an LPAE version. If you have a 32-bit machine with oodles of memory, you can actually use it effectively. Again, this all tends to run with device tree. So, in theory, assuming that your SoC vendor has pushed support for their devices upstream, our kernel could well support your device fully out of the box before we even know about it. Potentially. It depends, obviously. UEFI is a growing thing. If we want to run VMs sensibly on arm64, for example, then UEFI will make it reasonably easy to have a working bootloader inside an image.
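For the curious, the armhf baseline described above corresponds roughly to the following GCC target flags. This is a sketch of an explicit cross-compile invocation, not the exact configure options Debian's toolchain uses:

```
# ARMv7-A baseline, hard-float calling convention, VFPv3 restricted to
# 16 double registers, and no NEON assumed:
gcc -march=armv7-a -mfpu=vfpv3-d16 -mfloat-abi=hard -o hello hello.c
```

Debian's armhf GCC has these defaults built in, which is why ordinary packages don't need to pass them explicitly.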
It's much harder to have any of the other firmware implementations or any of the other bootloaders available. So, this is something I'm going to be playing with shortly.

armel is our oldest existing port, first released with Lenny many, many years ago. It's the soft-float EABI. It used to be ARMv4T; in the last year, the baseline has been moved forward to ARMv5TE. That is still very much supported for most things. We had been talking last year and the year before about maybe dropping it before Buster. I certainly don't have a lot of time for working on armel myself, but other people have stepped up. That's great. So, I know we have at least Adrian and Roger. There are probably plenty of other people too. Apologies if I haven't mentioned your name; it's just that I have a crap memory, I'm not trying to miss you out. I have concerns occasionally that, I mean, we've seen in the past, and I'm sure we'll see again, that new people, new language runtimes or whatever, may not care about supporting anything older than v7. This can be worked around, it can be fixed, but it can also be quite a bit of work to do. Going forwards, people are going to have to keep on working at this to keep it running. It is well supported for things like the toolchain, no, sorry, for our core toolchain, C and C++, and for the kernel. For other stuff, we're going to have to manage expectations as to what goes in on armel, I think.

So, to build these, in hardware we still have, and again, this slide is almost copied word-for-word from last year's, we still have those nice little orange boxes as our core set of buildds for armhf. Sponsored by Marvell, the Armada XP. They're lovely little machines, except they are still dev boards. So, they don't power back on without you pushing a button on them if you've removed power, which is a pain. They're fast, they support 4 gigs of RAM, they could take more memory, but they only support a single disk. So, we've had a spate of disk failures.
As happens, of course; disks are consumables, we know that. But when a disk on one of these things dies, it takes a day or so of messing around, swapping a disk and then reinstalling. They also don't support NEON, and that is something that people do need to care about on armhf. We still have one of our older buildds available, an i.MX53. I remember at the time when I installed a mini-rack of those, it felt like the future. They were fast and awesome, and each had a whole gigabyte of memory. Now, they don't feel so fast. So, you can debug your problems on that machine; please don't go building a whole toolchain or something on it, it will take a while.

For arm64, we have many wider options. Not all of these are yet in use, but I'm hoping to pick some up. We have the old AMD Seattle, that was the Opteron A1100 family. X-Gene was the Applied Micro CPU; that's now been moved on, so Ampere have taken on what is now X-Gene 3. Cavium are there with ThunderX and ThunderX2. Marvell have another 64-bit CPU, which is going to be used for next-generation NAS devices and network things, and which is in the MACCHIATObin. Qualcomm are in this market trying to sell a really big, high-end, powerful server, the Centriq, using their Falkor core. There's the SynQuacer, which is a 24-core Cortex-A53 machine with a CPU from Socionext in Japan. I've got one of those at home I'm playing with. We have quite a wide range and, fingers crossed, some of them will actually make it commercially and we'll be able to just buy them off the shelf. I keep on having my fingers crossed that that will happen soon.

So, I'm assuming most of the people here will have seen the discussion triggered about architecture release qualification, in particular to do with supporting the buildds. DSA don't want to support dev boards any more as buildds.
Speaking as the person who ends up having to do a lot of the trained-monkey things of copying disks and pushing buttons and whatever, because we host some of these in the data centre at Arm, I 100% support those guys. It's getting really tedious. So, we already have, as I mentioned, a range of 64-bit buildds, all arm64 machines, and I don't see that being a problem any time soon. We do, however, need new 32-bit, so armel and armhf, buildds. We've had a look for proper 32-bit server machines; I don't think there are any that are worth looking at. I know there have been suggestions of a few odds and sods like NAS boxes and whatever, which are 32-bit. They're rack-mount, but they come with one gigabyte of RAM, maybe two gigabytes of RAM. I don't think it's even worth looking at those machines, as it stands. We did have, do we have a mic down the front?, we did have a company, Calxeda, who were doing 32-bit ARM server machines, but unfortunately they've gone away.

This might be a stupid question, but can't we use 64-bit machines in 32-bit mode for this, or not?

Be patient for 10 more seconds. So, let's build the 32-bit ARM instruction set on arm64. Yay, what could possibly go wrong? Now, well, actually there's a few things. Some arm64 machines won't run a 32-bit binary natively. This might come as a surprise to people who are used to the Intel world, where of course all the AMD64 machines and the Intel equivalents will of course still run 32-bit software. Well, the reason for that is that in the Intel world there was already this huge corpus of old 32-bit proprietary binaries that, if your new 64-bit chip can't run them, nobody will buy it. Nobody's interested. In the ARM world it's slightly different. There's not that much 32-bit binary software out there that the server vendors actually care about in the slightest.
So, many of the server vendors have taken a look at the offerings from Arm and have decided: do I add all of the silicon to do the 32-bit instruction decode and whatever, as well as all the 64-bit instructions, onto the CPU? Well, that takes up more space. So, rather than doing that, they say, let's just not bother. So it means maybe instead of fitting 20 cores of both into a single CPU, you might get 30 cores of 64-bit only, or 48, or something. If you're trying to sell a dedicated server box, the 32-bit story is actually not all that compelling. So, hence, quite a number of those machines that I mentioned, the Centriq and the ThunderX in particular, don't run 32-bit ARM binaries natively. They can in emulation, but why would you do that? You might as well do that on AMD64. So, some of the machines we have will support it, and I'm already playing with those.

So, what I've already done is, and some people will have noticed, arm-arm-01, which is an ARM machine hosted by Arm in our data centre, hence the naming, is actually now a 64-bit machine. It is a Seattle box, so 8 Cortex-A57 cores, 16 gigs of RAM. Lovely 2U rack-mount box, will support multiple disks, it's got 10-gig Ethernet on board; it's a nice machine, and it's been building armhf. Almost immediately, we did find the first problem here. armhf traditionally has had alignment fixup enabled in the kernel on all of our buildds. This is how most people run things. ARM CPUs actually do care about alignment, just like the older SPARC CPUs, m68k, and a whole bunch of the bigger, older architectures. So, what does that mean? If you don't have alignment fixup turned on, and you have badly written code that assumes alignment doesn't matter, you will get a SIGBUS on an alignment fault. The kernel on armhf is configured to catch that exception, fix it all up in software, and then hand back to userland, so userland doesn't actually know it's happened.
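As a quick illustration of the kind of assumption that bites here, a sketch in Python: the `struct` module makes visible the padding a compiler inserts to keep fields naturally aligned, and shows the portable way to read a field that a packed layout has left misaligned. The C-side behaviour described in the comments follows the talk's description.

```python
import struct

# A record holding one byte and one 32-bit int.
# '@'  native layout: padding keeps the int naturally aligned
#      (typically 1 byte + 3 bytes padding + 4 bytes = 8 on 64-bit hosts).
# '<'  packed layout: no padding, so the int sits at offset 1 -- misaligned.
native = struct.calcsize('@BI')
packed = struct.calcsize('<BI')   # 1 + 4 = 5
print(native, packed)

# C code that reads the packed int by casting a char* at offset 1 runs
# (slowly) on x86, can trap into the kernel's fixup code on 32-bit ARM,
# and can die with SIGBUS on arm64, depending on the instruction emitted.
# The portable fix is an explicit unaligned read -- memcpy in C, or
# struct.unpack_from here:
buf = struct.pack('<BI', 7, 0xDEADBEEF)
(value,) = struct.unpack_from('<I', buf, 1)   # safe regardless of alignment
print(hex(value))
```

The point being made in the talk is exactly this: code that does the cast-and-dereference version only appears to work because the developer's machine forgives it.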
Apart from one problem: instead of it just being dealt with on the CPU, where it didn't need to do anything extra, if the kernel gets involved, this can take factors of hundreds or thousands of times longer. There are huge delays going on while the kernel picks things apart in software to deal with the bugs in the software you're running. This is not great. If we actually have a look on the buildds, and I have, this is triggered all over the place by lots and lots of our code. Now, if we run this on an arm64 machine, the arm64 kernel does not include support for that alignment fixup, and things fail. So, we've had a few build failures that have already come out, and we've started reporting bugs.

What I've also seen is that glibc in particular fails to build in armhf running on arm64 because of a mismatch with the size of the alternate signal stack. The signal stack size does not match between armhf and arm64, and the kernel gets it wrong. This is a real kernel bug; I've reported it to the ARM kernel guys, a fix is going upstream already, and then we can backport it, but at the moment you can't build your libc in armhf on arm64. I've found as well that our Haskell builds for v7 are really badly mis-targeted. Again, they're causing alignment faults out the wazoo. I believe they're targeting ARMv6 rather than ARMv7, so that means they're using the wrong CPU instructions for doing barriers. Now, barriers in the ARM architecture have gone through quite a range of changes over the last few versions of the architecture. In v4 and v5, there was no support on the CPU for doing this directly; if you want a barrier, you need kernel help. You can do it on the CPU from v6.

I don't mean to interrupt you in the middle of the slide, but could we not take advantage of reproducible builds in this case? In theory, a build on arm64 should produce the same result as the same build on a 32-bit native machine.
Absolutely it should, and that's something I'd love to do. At the moment, we're actually getting problems building in the first place, and this is one of the things I'm working out. Haskell is mis-targeted, as I said: it's using the wrong barriers. It will run OK on v7, it won't run on v8; you end up with failures with illegal-instruction exceptions. This is one of the few times when I won't complain about Haskell needing to be rebuilt every other day, because it means that when we do this fix-up, it would need to be rebuilt anyway.

I'm actually doing a complete build of the archive right now on three machines at home, three of the machines I've already mentioned: the MACCHIATObin, the SynQuacer, and the Seattle. I've managed to get hold of one of each of those machines, and they're sat in my home office doing a rebuild at the moment. I actually checked just before I came up here. Yes, I don't know if you can see that at the back: of the 28,000-odd packages that are in the archive, I'm currently up to 21,800. That's taken about three weeks, so in about a week's time I expect I will be able to give the full results. What I want to see is basically what other problems there are that might come out from building armhf on top of arm64. I've already mentioned the first three I've found. If we find that there's maybe a few hundred packages with alignment problems, I'm just going to start filing bugs and patches. If it's a few thousand, we're going to have to rethink this approach. At the moment this is just building in an armhf chroot. The other solution could be, and this links back to what I was saying earlier, we could run armhf virtual machines on top of arm64. That does absolutely work. At that point, the guest kernel would do, for example, the alignment fix-ups. It would hide some of these problems.
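The reproducible-builds check mentioned by the questioner boils down to comparing digests of the artifacts from the two builds. A minimal sketch, with hypothetical artifact names standing in for a package built once natively on a 32-bit box and once on an arm64 host:

```python
import hashlib
import os
import tempfile

def sha256_of(path):
    """Hash a build artifact in chunks, so large .debs need not fit in RAM."""
    h = hashlib.sha256()
    with open(path, 'rb') as f:
        for chunk in iter(lambda: f.read(1 << 20), b''):
            h.update(chunk)
    return h.hexdigest()

# Stand-in artifacts; in real use these would come from the two buildds.
with tempfile.TemporaryDirectory() as d:
    native = os.path.join(d, 'hello_1.0_armhf.native.deb')
    on64 = os.path.join(d, 'hello_1.0_armhf.on-arm64.deb')
    payload = b'\x7fELF fake package contents'
    for p in (native, on64):
        with open(p, 'wb') as f:
            f.write(payload)
    # If the build is reproducible, the digests must be byte-for-byte equal.
    identical = sha256_of(native) == sha256_of(on64)
    print('reproducible' if identical else 'MISMATCH')
```

In practice the reproducible-builds tooling does much more (normalised environments, diffoscope for explaining differences), but the pass/fail criterion is this equality.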
The only issue with doing that, of course, is that at that point we then end up with binaries that we know will not run properly on arm64. I think I would actually rather see the effort going in to fix these binaries properly and fix things up.

Fixing alignment bugs: does that also help m68k and SPARC, for example?

Yes, absolutely. These are real bugs where people have just made invalid assumptions about structure alignment. We've all seen these before. The only reason that people get a free pass for this is because x86 doesn't care. Or you might think so: x86 will run these binaries, but if you have badly aligned code, it will run slower. You should always think about alignment of your structures when you're programming. But lots and lots of people, particularly, and I've seen some frameworks, I think it was PyPy, just throw everything in, and of course, because the machines they're working on don't show any problems: oh, it must be your problem. No, it's not. So, discussion as well. Good, I haven't overrun more than I meant to. What else should we be talking about? What else should we be doing?

Hi, I'm from Ubuntu, and we are doing armhf builds with no fixup all the time. And we experience quite a lot of bugs, maybe as many as those. We would be happy to see something done in Debian as well; that would encourage Debian people to pick up our patches. So, thank you for doing that. I'm wondering if we could just disable fixups on the builds.

That is exactly the next thing I was about to mention, yes, thank you for bringing it up. As has already been pointed out, Julien Cristau for example said, we have at the moment a bad situation where you get inconsistent behaviour depending on which machine you happen to be queued to build on. We should be disabling the alignment fixup on armhf as well. There is support in the kernel: you can actually tell it to fix things up, not fix things up, or specifically report exactly where the problem came from. And that's a really nice thing.
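The kernel knob being referred to here is `/proc/cpu/alignment` on 32-bit ARM kernels (described in the kernel tree's `Documentation/arm/mem_alignment`). A sketch of how a buildd admin might use it; this needs root and a 32-bit ARM kernel:

```
# /proc/cpu/alignment takes a bitmask:
#   1 = warn   (log each misaligned access)
#   2 = fixup  (silently fix it up in the kernel)
#   4 = signal (deliver SIGBUS to the offending process)
echo 3 > /proc/cpu/alignment   # fix up, but log offenders so we can file bugs
echo 4 > /proc/cpu/alignment   # no fixup: fail hard, matching arm64 behaviour
cat /proc/cpu/alignment        # also shows counts of faults handled so far
```

Setting the warn bit on the buildds is what makes the log-grepping approach in the next exchange possible.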
I think that will be the right answer. So then we can explicitly start grepping through logs and filing bugs. Again, it comes down to just how many packages are broken. I would love to go through them. We still have a few months before Buster freezes; if we have a few hundred packages that are broken, it's not too late to file bugs and get them fixed.

I'm also wondering if you plan on creating autopkgtests for arm64, because we are doing that too, and some alignment issues are coming up in autopkgtest but not in the builds.

Oh, awesome. Yes. That would be a lovely thing. If you can share what you've got already, that would be good.

We have autopkgtest running in the Ubuntu infrastructure, but it can't be copied directly to Debian, so there would be a need for a machine, I guess.

Oh, yeah, absolutely. One of the things, so, I was talking about the build machines: we will need, of course, to find a few more arm64 machines to make this happen. One of the machines I mentioned, where are we, this one, the SynQuacer, is something I do have a fair amount of experience with. It's a Socionext system-on-chip design that is meant to be a server, so it will support up to 64 gigabytes of RAM, it's got multiple disk interfaces on board, it's got gigabit Ethernet and stuff, and all of that's boring. However, the useful thing is, at Linaro, which is where I do a lot of my work, we're actually working with Socionext to get these boards out there on sale. You can buy them right now, and I'm hoping that we're going to start ramping up soon. I have an outstanding offer of a number of these machines as a donation to Debian, so what I'd like to do is get, say, four or five of these machines up as buildds. A Cortex-A53 is not very fast, but when you've got 24 of them in parallel, oh, they really, really win at building a kernel in parallel; they're a really nice machine.
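Grepping the logs for offenders can be sketched in a few lines of Python. The exact format of the kernel's warning line varies between kernel versions, so the pattern below is an assumption based on the typical `Alignment trap:` prefix, and the sample lines are synthetic:

```python
import re
from collections import Counter

# Typical shape of the 32-bit ARM kernel's warning (format is a sketch):
#   Alignment trap: gzip (1234) PC=0x0001f3a4 Instr=0xe5912000 Address=0x00123457 FSR 0x001
TRAP_RE = re.compile(r'Alignment trap: (?P<comm>\S+) \((?P<pid>\d+)\)')

def offenders(log_lines):
    """Count alignment traps per program name, worst offender first."""
    counts = Counter()
    for line in log_lines:
        m = TRAP_RE.search(line)
        if m:
            counts[m.group('comm')] += 1
    return counts.most_common()

# Synthetic dmesg excerpt standing in for a real buildd log:
sample = [
    'Alignment trap: gzip (1234) PC=0x0001f3a4 Instr=0xe5912000 Address=0x00123457 FSR 0x001',
    'Alignment trap: gzip (1234) PC=0x0001f3a4 Instr=0xe5912000 Address=0x0012345b FSR 0x001',
    'usb 1-1: new high-speed USB device',
    'Alignment trap: pypy (4321) PC=0x000a0b0c Instr=0xe1d130b0 Address=0x00200001 FSR 0x001',
]
print(offenders(sample))   # [('gzip', 2), ('pypy', 1)]
```

A per-program tally like this is enough to decide whether the problem is a few hundred packages (file bugs) or a few thousand (rethink the approach), as discussed above.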
What I also want to do is get one or two of these to help us with autopkgtest, to help with Debian CI and whatever as well, so we can help improve our coverage across the project.

I was just curious if you think it would be feasible to get one of these boards for the reproducible builds project, or ideally two of these boards actually, because right now we have mostly 32-bit ARM hardware, and the arm64 hardware we have is not enough to systematically build one way or the other.

I've asked for ten of these. It's wonderful: the guy in charge of Linaro's Enterprise Group, sorry, I think it's just changed name, Martin, is very much a fan of Debian. The Linaro enterprise reference platform that they ship is basically Debian with a new kernel and a few updates. He's a fan of Debian, he runs it at home, he really wants to help contribute back. So when I asked for a number of machines, he said, how many do you want? I said ten, assuming he would laugh at me and say, no, I've got two for you. And he said, we can probably do that. I said, OK, we'll pay for them. And he said, no, no, no, don't offend me, I want to give these to Debian. I haven't got them in my hands yet; as I said, I have one at home. I am hoping when I get back to get these organised, and I want to get them spread out, so that some go hopefully to Vancouver, into the UBC racks where we have Debian machines. We'll get some of these maybe at Bytemark or somewhere in the UK. And we will definitely have others that we can just send out. So wherever you need them, for reproducible builds or for autopkgtest or whatever, I want to get these all working and help improve things. Of course, I've now gone on camera talking about Martin; he's probably embarrassed and I'm going to get shouted at on Tuesday when I'm back in the office. Yes.

We've been teased with rumours of these fast, powerful, 24-core ARM servers for several years now, but I still never actually got my hands on one.
Why have they never made it mainstream and then become cheaper or widely used by hosting companies and so on?

It's depressing. There are lots and lots of companies trying to make it in the arm64 server world. My take on it, and other people would disagree, I know, is that lots of these companies are each trying to do the next big deal that would get them selling 100,000 of their current systems. The first thing that each of them needs to do, at least a couple of them, is to help prove the market: go and sell the first 1,000 of them, or even better, maybe give away the first 1,000 to people like us. Because obviously that would be good. And then actually seed the market and demonstrate that they're good and useful and make things work. Instead, what people have been focusing on, and I understand it might look more attractive commercially, is that each of them has been talking to the big cloud vendors, for example, where they know that if they can convince them, they will be able to say: here are 50,000 machines delivered on the back of a few trucks to the same data centre. That's wonderful for your bottom line, but unfortunately it does nothing to advance the architecture, and it does nothing to advance your own sales beyond that. The ARM ecosystem is special in many ways. We've had a habit over the years, as most of the SoCs of course have been in mobile phones recently, where people do expect to sell half a million of the same thing, and then next year that will be thrown away and you do half a million of the next thing, which has slightly more features and is slightly more powerful. Servers are different. What we actually need is someone who's got the staying power, someone who's got the consistency, to stay in the market. A number of people have tried and have abandoned it, or have even failed and gone bust. So we want people to stay in the market; we want people to give us things where we can continue to buy essentially the same machine three years from now.
And I know that's not just us; lots and lots of enterprise customers of course will want exactly the same thing. It's a difficult sell at the moment, it's a difficult conversation to have, and I've had it with a number of execs from various of these vendors. I hope one of them understands and actually makes it work; we'll see.

So what else should we be doing? What else can we be doing? Is everybody happy with where we are? Is anybody awake? So okay, I'll carry on waffling as nobody else is. To go with arm64, we now have OpenStack images. In fact we've had them for a while, ever since we did the first Stretch release. There are more server-based things that can be supported on ARM, and they are being supported. There have been recent announcements by various of the big CI/CD people, for example Travis, and my memory is failing; there was another announcement earlier this week, where people are now starting to support their central cloud-based services on arm64, just the same as on AMD64. Those are really great things that hopefully will help people to find some of their bugs, like alignment problems and all of the other things that we're trying to track down. It will be nice if we're not the only ones. I can only hope it continues, and of course there are lots of efforts going on to convince more people to make their services portable to more architectures. We seem to have a habit in the computer industry, every five to ten years, of the phrase: all the world's a VAX, all the world's a Sun, all the world's 68k, now it's all the world's Intel. It would be nice if we could learn from our mistakes in the past and actually keep things portable and keep spreading, you know, not putting all the eggs in one basket every time. But it works out to be cheaper, and cheaper always wins.
As the U-Boot maintainer, I've been experimenting a lot with the EFI implementation in U-Boot, which will open a lot of opportunities, mostly for arm64, although I've seen some recent commits where maybe even Arm32 in general might start working. The thing I have been noticing, however, is that depending on how the device trees in U-Boot then get passed on through EFI, you can end up not using the device tree from the kernel, and if they're sufficiently out of sync you tend to get some weird behaviours. Sometimes they are good behaviours, like devices just working that weren't working. Or U-Boot just pulls in the device tree from the kernel, so you get these weird issues where the device tree is out of sync with the kernel, and I know you probably have a few rants about this.

I've got to say: can I interrupt you and rant about device tree? So, device tree is an awesome idea. Hands up anyone who doesn't know what device tree is. Okay, I'll explain. So, back in the awful dark ages, it used to be that you would need to build your kernel almost for each different ARM device individually. You couldn't have a common kernel, because you had to configure things specifically with the kernel config. Device tree is very similar to Open Firmware and the like from the PowerPC world. It's a way of describing your hardware in a reasonably portable way, so that you can have the firmware pass on information about the hardware. So it's where your memory is connected, it's where your serial port is, it's how to talk to your network devices, it's where PCI is; it can be a whole range of things. Basically, it's how to describe your machine such that a generic kernel can then work out how to talk to things. It's lovely as a concept. Unfortunately, we've ended up with a chicken-and-egg situation and we've never really gone beyond it.
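For anyone who hasn't seen one, a device tree source file describing the kinds of things just listed looks something like this. This is a hand-written sketch, not taken from any real board; the node names and addresses are invented:

```
/dts-v1/;

/ {
        model = "Example Board";             /* human-readable machine name */
        compatible = "vendor,example-board"; /* what the kernel matches on */

        memory@80000000 {
                device_type = "memory";
                reg = <0x80000000 0x40000000>; /* 1 GiB of RAM at this address */
        };

        serial@101f0000 {
                compatible = "arm,pl011";      /* which UART driver to bind */
                reg = <0x101f0000 0x1000>;
        };
};
```

The source is compiled with `dtc` into the flattened blob (DTB) that the firmware or bootloader hands to the kernel; the argument in this session is about who should own that blob.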
The whole point of device tree was meant to be that your firmware would know how to describe things, and it would then pass this information on in a fixed, clean, portable way. Unfortunately, the way device tree has been interpreted by the kernel over the years has been a moving target. Because none of the devices we started with had a working device tree built into the firmware, we had to supply a device tree blob with the kernel. That then means that the kernel has to actually know what your device is, even though the whole point is it's not meant to. So at installation time, hopefully the kernel can work it out, or in some cases you have to tell it which device you're running on, and then install that device tree. That's okay; the plan was always that that wouldn't be a permanent thing. But we're now multiple years on, and guess what? No-one trusts the device tree that ships with the firmware to be up to date, because it might not be, as our questioner says. The kernel might have differing ideas about how to drive things, so you then end up forever having to keep chasing against the kernel. There might be a difference from, say, 4.16 to 4.17 that means you have to use a different device tree specific to that kernel version. This is broken. Unfortunately, because of the vagaries of the market and the time taken to get things fixed, it may never get fixed, and that's a problem. The move to ACPI, did I mention ACPI? Yes. For arm64, it is indirectly a consequence of this. ACPI and device tree are actually mostly equivalent: they both do the same job, they both describe the hardware. At least part of ACPI does; there's more to it, but for this discussion those parts are irrelevant. The point of ACPI is you can't supply it later, so you have to trust what comes in firmware. Because of that, the vendors have to supply a working ACPI tree, or the device just won't work.
That, unfortunately, is possibly where we should have started with device tree all those years ago: to say we will not support your devices until you give us firmware that works. Once we have critical mass, the next vendor coming in knows what they have to do and will probably do it right. They'll need help and encouragement to do it right, rather than a beating. But they will do it right, because they know that if they don't, and for example the Red Hat kernel or the Debian kernel boots on their hardware and says I can't drive anything, the customers will go away. The mobile-targeted SoCs, unfortunately, are all the other way round. They're quite happy to continue doing whatever broken things they've been doing for many years, and it's our problem to chase them. This is why many people are pushing for ACPI. There are a bunch of documents from Arm which are being used to try and convince vendors to do the right thing with their firmware and with their SoCs. So there's the EBBR, which is the Embedded Base Boot Requirements if I remember correctly, and there's a whole bunch of standard-ish documents that are coming out to describe how things should work. But we're still chasing after a bunch of SoCs that are already out there, and nobody's interested in reverse-engineering things; nobody's interested in going back over them to work out how they work. That's where we are. The right answer should be that the kernel stops changing how it reads the device trees, and we should just be using what's available. My friend Leif, who's back in the UK, could rant even more about this and get really, really frustrated. He's a UEFI maintainer, and his job has been made so much harder by people not doing it right. So, we are down to about two minutes left. Sorry, I've been speaking far too much.

It seems I'm the only one that cares about armel in this room. Possibly. I want to ask: the main concern in letting armel go into Buster is the DSA concern, right?
So if we can find some device to qualify as the buildd, then armel is okay for Buster?

I think so. That's where we are. The thing I wasn't clear about: I'm currently doing a rebuild with armhf. The moment I've finished analysing those results, I'm going to reuse the same machines and do a rebuild with armel. We don't want to just switch over to new armel or armhf devices as buildds unless they are proper server machines with sufficient memory, storage and CPU. What is much more likely to convince DSA would be to have essentially more of the arm64 server machines also building armel. That's where I want to get to. So while I'm not spending much of my effort on armel, I'm more than happy to do another rebuild and provide the logs.

Okay, so if I understand correctly, armel shares the same status as armhf, right?

Almost. Very close, yes. Exactly; the old 32-bit boxes that we're using for both architectures at the moment are not considered acceptable for much longer. Again, I understand DSA, and I agree with them totally on that. We do want to move on to better supportable machines for building.

I agree with DSA too. I got a few emails privately from people saying that they want to donate hardware, or donate some money, to support armel and armhf, but they want to know what kind of hardware would qualify as a buildd.

Exactly, and that's why I'm doing this rebuild. So the machines I mentioned at the bottom of this slide, the Seattle, the X-Gene, the MACCHIATObin, the SynQuacer, will all build 32-bit and 64-bit software. The ThunderX, the Centriq and some of the other machines coming will only do 64-bit. So we're looking at having quite a large set of arm64 machines to cover both needs. And I think we're probably just about out of time. The very last thing I'm going to say is: obviously, you know where to find us.
If you do have more questions or comments, or you want to join in: #debian-arm, the debian-arm list. And thanks to everyone. We're out of time, obviously, but I'm still going to be around the rest of the day; please accost me if you want to talk about this, and we'll discuss on the list as well. Thanks very much, folks.