Hi, welcome everybody. I'm talking about the state of ARM and PowerPC, or as it's generally known by marketing these days, Power, on Fedora. They're the two main architectures, or groups of architectures really, that in my role in release engineering I'm responsible for. I also get to do bits on S390, and the MIPS guys, I'm not sure whether they're here. Excellent. I'll speak to you later. But yeah, the vast majority of my day-to-day is spent on ARM and Power, so that's primarily what I'm going to cover today, because that's what my head is primarily filled with. So I'm going to start with the core toolchain and toolchain features and functionality. In the case of both ARM and Power: ARM covers ARMv7, the 32-bit ARM that we support, plus aarch64, the ARMv8 architecture; and the Power side of things covers PowerPC 64-bit in both little and big endian modes. That is primarily what I'll be covering. In terms of the core toolchain (GCC, glibc, binutils and friends) we're primarily feature complete and identical in terms of core functionality with x86_64. There's a few bits and pieces here and there that are a little bit behind; PowerPC, for example, doesn't have the full feature set as yet, but that's being worked on and should hopefully land shortly. But generally speaking, on a day-to-day functional level, there's very little difference between the architectures, and, as a core toolchain used to build the distro should be, it's relatively boring. Then you start to get into the more fun, interesting toolchains that, while they aren't used to build the core distribution, are starting to become dependencies for core functionality such as Docker, containers and various other bits and pieces that, while we don't strictly need them, there's a lot of interest in and a lot of use cases for. Golang runs just fine on ARMv7, aarch64 and PowerPC64.
The support has mostly landed upstream with the Golang 1.5 release, so we're now dealing with bootstraps and various other issues; hopefully by the time F23 goes GA we should have most of that in place. Of course there's also GCCGo, which actually works fairly well and has mostly reached feature parity with Golang, although we're still working out how to deal with different toolchains achieving the same goal in Fedora in general, so the support there is still a bit more up in the air. Feel free to interrupt and ask questions if anyone has any specifics on the way through. Getting to the various editions that we now have in primary, I'll cover Server, Cloud and Workstation. The Server edition is basically the same. There's a few minor functional differences in the way things are being produced at the moment, and we're working to ensure parity across all of that. On the Power side of things it's 100% there, as you would expect. aarch64 is pretty much 100% there: relatively boring, very stable, and it generally just works. We're enabling more and more features and functionality along the lines of Docker, containers and rkt, and testing and integrating them. But for all intents and purposes, the vast majority of the Server edition across all the ARM and Power architectures is complete. Cloud edition: on the Power side of things we released the first cloud images as part of Fedora 22. We didn't quite make it there on aarch64, with some functional issues in some of the tools we use to build it, some issues surrounding console support and EFI bootloader support (I'll get to more of that later), and a few other bits and pieces. But we're getting there. Hopefully in the F23 cycle we should start to see cloud images, qcow2-based images, for aarch64.
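Going back to the Go toolchain parity mentioned a moment ago: since Go 1.5, the target is selected with the standard GOARCH/GOOS environment variables. A minimal sketch of cross-building for the architectures discussed; the source file name and target list are illustrative, not from the talk, and the commands are printed rather than run since a Go toolchain may not be present:

```shell
# Illustrative only: cross-compile one Go program for the arches discussed.
# Requires Go >= 1.5; hello.go is a hypothetical source file.
for arch in arm arm64 ppc64 ppc64le; do
  # Print the command rather than executing it.
  echo "GOARCH=$arch GOOS=linux go build -o hello.$arch hello.go"
done
```

The same targets are what gccgo covers via its own backends, which is why the two toolchains end up with rough feature parity here.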
There'll be Docker images for both Power and aarch64 for F23, and then we'll start to build up functionality similar to what primary is getting in terms of layered images and things like that. "For the ARM stuff, where would you deploy that?" Deploy what exactly? "The cloud images." So, ARM supports hardware virtualization, so there are people running OpenStack nodes on aarch64 in private clouds. There's not a lot of public cloud yet that supports the aarch64 architecture. No, that will change in the future, but we'll be ready for it. I mean, a lot of images are sold as cloud images, but the base Fedora cloud images are qcow2 images. You can run them anywhere you have KVM-capable hardware. The Fedora cloud qcow2-based images on x86, for example, you can run on your laptop, or in OpenStack, or in oVirt or RHEV, or anywhere else that supports KVM. So just because they're labelled cloud, they're not just for Amazon or just for OpenStack; ultimately, any OpenStack instance that's running KVM hypervisors can run the x86 Fedora cloud images, but you can also run them on your laptop, so you can have your own private little cloud. Anyone that has aarch64 hardware has the hardware capabilities there, and anyone that has in-house or local Power hardware, like POWER8 hardware, which supports libvirt, KVM and everything else, can run the Power cloud images. So yes, just because it's cloud... for example, we're not producing EC2 cloud images on either Power or aarch64, because EC2 doesn't support them, so it makes no sense to do that. But in the case of, say, the ARMv7 Docker images which Dennis produced for F22, there are hosting providers out there now doing cheap ARMv7 hardware, and you can run the Docker images on those. So there is some public ARMv7.
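The "run it anywhere you have KVM" point above can be sketched concretely. This is an assumption-laden illustration, not a recipe from the talk: the image name, libvirt options and OS variant are all hypothetical, and the command is printed rather than executed because it needs libvirt, KVM and a cloud-init seed (e.g. a NoCloud ISO) to be useful:

```shell
# Sketch: booting a Fedora cloud qcow2 locally under KVM/libvirt.
# Image name and all options are assumptions for illustration.
IMG=Fedora-Cloud-Base-22.x86_64.qcow2
cmd="virt-install --name f22-cloud --memory 1024 --vcpus 1 \
--import --disk $IMG --os-variant fedora22 --graphics none"
# Printed, not executed: running it requires libvirt/KVM on the host,
# plus a cloud-init data source to set login credentials.
echo "$cmd"
```

The identical invocation works for an aarch64 or Power qcow2 on matching hardware, which is the point being made: "cloud" describes the image format and init path, not a particular provider.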
I'm not sure there are any publicly announced ARMv8 or Power public cloud instances, but there's a lot of companies using them for other things internally, on private clouds or just standard virtualization. Workstation edition: on Power, I'm told it actually works. I'm not aware of anyone actually using it in production, because there's not really any Power-based workstation-style hardware where you would use it day-to-day. It does work, it's all built, it's just not widely tested. Similarly with aarch64: it works. There's a few people around that have managed to find ancient graphics cards that will run on the PCIe buses, and it works fine. In the coming year or so (and Mr. Masters wandered in and then wandered straight back out again) there's going to be a lot more lower-end dev boards and things like that coming out, where Workstation will make more sense. So it all works, it's there. We're not actually producing Workstation images at the moment simply because there's no real demand for it, because there isn't really publicly accessible hardware to run it on, and the few people on the Power side that have shown interest in running the desktop edition on Power have done it with a custom install anyway. So it's there, it works, it's improving. There may be, I hope, in the case of aarch64 in the next year, some decent, cheap, readily available desktop or laptop style hardware which will make it worthwhile to start producing said images. "That's what you said last year." Of course! It's probably what I'll say next year as well. "And it's being recorded, so you can quote it back at me." Which is why I said hopefully. "It seems like we had some kind of a bet on when that first hardware was going to be available for developers. We all lost that bet." I never was involved in that, because I was always skeptical.
I mean, there are boards: the HiKey board, or the 96Boards Consumer Edition boards, in particular the Qualcomm board. The upstream developer that's been developing that open source has aarch64 Fedora running on it, running full GNOME, fully accelerated and everything. So it can be done, and the Qualcomm stuff is available. The bootloader is truly terrible, but that's no different to any of the other Qualcomm dev boards that have been out. And hopefully the 96Boards Enterprise Edition with the AMD Seattle, which has been announced, should be out before the end of the year. (Inaudible question.) 64-bit. Yes, 64-bit. And from the ARMv7 point of view, the NVIDIA Tegra stuff is now almost completely upstream. The user space is all there, the firmware, surprisingly, is actually upstream, and the final bits of the kernel are scheduled for 4.3. So that will finally give us (and yes, I did say this last year and the year before about the ARMv7 stuff) a nice, fully accelerated 32-bit solution. I was thinking about bringing the board with me and trying to do the presentation on it, but given that I actually did most of this presentation after I arrived, I didn't think it was necessarily the best thing to do. The rest of the general user space: well, there's 18,000-odd source packages in Fedora in general, and the vast majority of the user space is there. There's a number of corner cases on aarch64 and Power. aarch64 doesn't have mono; with the recent mono rebase, Power has mono now.
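Gaps like mono on aarch64 are handled at the packaging level rather than in the buildsystem: a package that genuinely can't build on an architecture carries an ExcludeArch tag in its spec file, and an optional feature can be switched off per-arch instead. A hypothetical spec fragment (the arch and dependency names here are illustrative, not taken from any real spec):

```
# Hypothetical spec-file fragment: skip arches this package can't build on.
ExcludeArch:    aarch64

# Or, rather than excluding the whole package, disable one optional
# feature on the arches that lack a dependency:
%ifnarch aarch64
BuildRequires:  mongodb-devel
%endif
```

This matters for the unified-Koji discussion later in the talk: the metadata for "this doesn't build here" already exists and already works today.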
But we're in good shape. As people start to use Fedora on both ARM and Power, we're seeing more and more optimisation bugs, a few runtime bugs, a few assumptions made by different projects about bits and pieces, and I think that's really good, because it shows people are actually using things, testing use cases, and generally trying to do day-to-day work with it, and finding these bugs. Which is good, because it's relatively straightforward to compile those 18,000 source packages on the different architectures and go "yay, we have all the packages built", but it's not until people start to use them that you start to see bugs about their usability. And if you're getting bugs, it means people are using it, which is awesome. Kernel: on the Power side of things, very boring. There's some interesting features coming along there with regards to some of the advanced Power hardware; the best way to find out about those features and functionality is the OpenPOWER.org website. There's a few issues we've seen with feature enablement in 4.2, which meant we had to disable a few things like transactional memory and bits and pieces like that; by the time F23 comes out they should be enabled again. On the aarch64 side of things, with the 4.0 kernel that F22 ships with, we had enough upstream that we managed to drop the big aarch64 kernel patch, which at one point was around 100,000 lines of code that we were patching into the Fedora kernel. Now the only patch for ARM that we carry is for out-of-tree, pre-production hardware that will never go upstream, because that hardware revision will never be generally available; we carry it at the moment to make it easier for people with that hardware to test and consume. But that is literally, at the moment, the only patch we carry. With the 4.0 kernel, upstream also landed the massive ACPI patch set. It's not enabled by default, because on the Mustang board in particular it will take you back to a feature set about
the equivalent of 3.17 with device tree, where you get a single NIC, a SATA port and a serial console, and that's about it. There's a huge raft of ACPI-based hardware enablement patches on their way into 4.3, so I think by 4.3, maybe 4.4, ACPI should be functionally equivalent to device tree. But that's been relatively amazing: the ability to use a pure upstream kernel made my life, and a bunch of other kernel people's lives, much easier and much less stressful. When patches don't rebase, we have to run around and get that fixed, because it will generally interrupt x86 and the things other people need to do to develop the distro. So to me that was a big step in the evolution of aarch64 in Fedora, and in my opinion in general as a platform, showing that it is coming of age, so to speak, in that we don't need to carry a whole bunch of non-upstream patches. Bootloaders: I mentioned before that I would touch on this again. Obviously, in the aarch64 space we're, at least at the moment, purely UEFI-based. One of the issues with regards to this and cloud images is that we don't actually have the ability to redistribute a completely open UEFI firmware build. Hopefully soon. John, I see you're lurking up the back, would you have a comment to make on the state of that? "The issue, as I understand it, is that you can build the UEFI firmware just now; the problem is the licensing, the FAT driver licensing in particular, which makes it currently unpalatable to Fedora. It's being worked on." "I've heard that for two years." "I recently heard it again through somebody else." From an aarch64 point of view, and virtualisation, when that happens it will make the bootloader and cloud side of things much easier for us to deal with. So that's aarch64. On the Power side it's OPAL, which (I don't know if you're familiar with it) is a completely open, standardised bootloader,
and that's used both in the virtualisation use case and, as part of the OpenPOWER consortium, on the OpenPOWER hardware, which uses that same bootloader. That's all packaged up, it ships, and it just works with Fedora's virt stack, which is great. On ARMv7, obviously, we have U-Boot. That's actually evolving quite nicely, and last time I looked we should be able to boot using the standard distro boot procedure, using extlinux.conf, to give us standard upgrade paths and a single kernel. It's still evolving and still being polished. We have a few minor issues there around setting up consoles and things like that, but generally, on the vast majority of the platforms, with the actual devices that we currently support, all of that is there and most of it's upstream. I think we carry four or five patches in total for that, and we're working with the upstream maintainers to polish them up and get them upstream. So again, it's obviously not quite as neat as we would like, but a number of the vendors have picked that up now and are using the upstream distro support; on any of the NVIDIA stuff you can literally just plug in a Fedora SD card and it will just boot, which is kind of nifty. And it's taken us a long time to get to that from the F13 and F14 era, where we had multiple kernels and U-Boots cobbled together from God knows where. Similarly with the Anaconda installer: a number of people have asked me when ARMv7 will support the Anaconda installer. Well, it does, and it has done for some time. I mean, ARM doesn't tend to have the traditional x86 put-in-a-DVD style install, but you can PXE boot from U-Boot and do a complete interactive Anaconda install exactly as you would on x86, or you can use a kickstart install exactly as you would on x86. The same is the case on 64-bit ARM, and the same on all of the Power platforms. So while the lack of optical media and bits and pieces like that means somewhat different ways of consuming Anaconda than people are traditionally used to on x86, the
mechanism is generally the same, and it works. "Correct me if I'm wrong, but on 32-bit ARM an interactive install, generally speaking, won't boot when it's done?" That depends on the system that you have. "Does Anaconda have code to detect which SD card you're on?" I don't want to derail you: it will detect it, yes. If you install to a system where the partition that you're installing to is not the one you boot from (so if your U-Boot is on flash, or if your U-Boot is on an SD card and you're installing onto a SATA disk) you do an install, and at the end it will reboot. But if you boot U-Boot from the start of an SD card and you're installing onto that same SD card, today Anaconda wipes it out, and we don't have something that puts U-Boot back in place, so in that case it doesn't reboot. "So if you've already set up U-Boot, say in flash, and that's how you bootstrapped Anaconda, then it will work." Yes, and there's a number of people that actually put a scriptlet in the %post of a kickstart to do that for them, in which case it's automated and you don't need to faff about with other machines. If the U-Boot is on a NAND or ROM or something like that, the detection is there and it will just boot. Anything worth noting on aarch64? Yes: on aarch64, with the UEFI on the platforms we've currently tested and support, which is Mustang and Seattle (there are others; I know people have been testing it on Cavium hardware, but I'm not lucky enough to have a 48-core piece of Cavium hardware to play with and test on), where the UEFI on those more expensive boards is on a NAND or ROM, it will work with traditional, legacy, whatever you want to call it, spinning media. Questions? "Can you talk about uniting the various Kojis into one Koji, so that the secondary architectures have all the fun?"
That's sort of not entirely related to my support of ARM and Power hardware, but it's something that I would like to do. To give a brief overview: the idea is to decouple the primary and secondary architectures from where they reside, in which Koji instance. In the case of i686, 32-bit Intel, 32-bit x86, there's a desire to demote the cloud image and server image to secondary architecture status. But how do you then take that 32-bit architecture and say "leave all the spins in place as primary on 32-bit, but put Server and Cloud into a different Koji instance"? Well, you don't. So the idea is basically that we decouple Koji from the secondary/primary split and potentially import all the architectures into a single primary instance of Koji, so that when you build something, it spits out all the architectures in a single build. At that point, whether something is composed as primary or secondary can be done as a separate step, or it can all be done as a single process of composing, say, the distro server image, and then where you put it on the mirrors determines whether it's primary or secondary. Or the QA process decides: "this is a primary architecture for the Server edition, so we QA it this way; for all these secondary architectures of the Server edition, we assume it basically works the same way as it does on primary". There's a bunch of ideas floating around; there's no roadmap as to how that will happen, if it will happen, or what the final outcome will be, but those are some of the ideas starting to be discussed. "I wanted to know: if you do one unified Koji and a build fails on one architecture, does it fail on all?" The fact of the matter is, dealing closely with secondary arches every day, in most cases it's rarely an architecture problem that fails a build.
There are a few; the big one that comes up on a slightly more regular basis than most is big endian versus little endian issues. In the case of the core Fedora that is used by everyone every day: if a GCC build fails due to an issue on one of those architectures, the GCC maintainer is going to have to fix it anyway, and when the GCC maintainer pushes a GCC build, he simultaneously pushes GCC to all the Koji instances at the same time. If a GCC build fails on Power, he re-spins it and pushes it to all the architectures anyway. So a lot of people are dealing with that sort of stuff already. And in the case of, say, some graphical package, say Blender for example, which happens not to build on all the architectures: say Blender's upstream says "we don't support Power" or "we don't support S390". Well, that's already the case, and the architecture maintainers have already put in "ExcludeArch: s390" and so on. For all intents and purposes, a lot of the process we have already deals with that. And there's probably four or five people in Red Hat dealing with koji-shadow, dealing with failures from koji-shadow and stuff like that, and this would suddenly free them up from a very tedious, very repetitive task. The most problematic part of building the secondary architectures is actually not the building of the packages, but issues with the tooling and various other bits around the packages. If a single package fails and causes other things to block, it ends up costing the maintainer of that package, in most cases, more time than it would if it failed across all six architectures at the same time. It would probably take the maintainer less time to fix it then and there, building on all architectures, than the loops often required to go back and forth between the architecture maintainers and the package maintainers to get it fixed. It gives the package maintainers instant failure, so they know it's an issue then and there, and if it's because some features
have been enabled, like a new major version with a feature that uses, say, MongoDB, which we don't currently support on aarch64, they can disable that feature for those arches then and there and deal with it, and maybe file a bug with the architecture guys so that they can look at it. And it frees up the guys doing this repetitive task day to day; it frees up probably the vast majority of their time. I can say, from working on secondary arches for five years now, that of the time freed up, only a small part would then need to be spent assisting other people with those sorts of failures, getting them fixed quickly so that they don't block functionality. Does that answer your question, John? "I was thinking about this with a bunch of sub-cases: you have to build this for x86, but now you have a case where some spin is not actually critically blocking. Then you think about it some more, and actually, going back to the old Razor proposal and the minimal OS base we keep talking about, these fit in quite nicely: if we build a very minimal core base (base OS, core OS, whatever you want to call the lowest piece) this fits in very nicely, but that's basically going to be what Red Hat needs from Fedora." Right, because Red Hat already supports Power big endian, Power little endian, S390 and aarch64, and that core group of packages in most cases either has a Red Hat maintainer or a co-maintainer, or Red Hat actively participating in their maintenance anyway. So part of their day job is to ensure it builds across all the architectures that Red Hat cares about; it's something they do anyway. And I know there's a number of packages, not going to mention anything like Java, but I have a constant backwards and forwards with the Java maintainers about the architectures,
and it's not their fault at all, but you know, they'll push a build, it will break something on aarch64, I go and say "hey guys, this is broken", and then we go backwards and forwards, backwards and forwards. If they knew they needed to deal with all of that at the same time, it would probably have saved, the last time we did this, a whole bunch of my time. One of the interesting side effects from that: with the OpenJDK upstream builds it's "what kernel are you using, 3.19? Well, this one's broken on 4.0", even though the actual ABI didn't change; because of the big version-number jump, a lot of things broke. Yes, Adam? "So I guess, since it's not a primary architecture, that happens already now?" No, it doesn't happen right now. "You talk in the proposal about how, when a build fails, you file a bug on somebody and block, so that someone upstream knows about it. I'm getting to the point where somebody doing this in their free time for aarch64 says 'is this really a zero-day blocker?' and just moves on. At what point does that become a point of contention?" The thing is, that already happens now, and it is already at times a bone of contention as it is, so in terms of increasing or decreasing that bone of contention, I don't think it will change the status at all; it just increases the visibility of the problem. And it enables a lot. I have people, multiple times a week, saying to me "why isn't the aarch64 Server edition primary?", and it's like: well, because in the current state of promoting an architecture it's an all-or-nothing thing, and at the moment it makes no sense to promote, say, Cloud on aarch64 to a primary architecture, or Cloud on Power, and make it blocking, because there's not a lot of people using it. So that's the sort of thing that would
remain, for the moment, secondary. But Server on aarch64 is something I'm asked about multiple times a week, so the ability to say "Server on aarch64 is ready for primary, let's promote it", and have that be the flipping of some bits rather than a huge massive amount of work, would be great. It's similar to the demotion of i686 on Server, which has been discussed on the mailing list; at the moment it's just "we're not producing the bits anymore". Whereas if we have the ability to demote that, and there's then a massive outcry from the community that uses it on a day-to-day basis, who don't follow the list closely and probably won't actually test anything until it goes GA, we can go "oops, sorry, we fucked that up" and quickly re-promote it in the next cycle or something like that, without the major amounts of work currently required from a small number of people to demote or promote an architecture. At the moment, in the case of i686, we're sort of already implementing this policy, because with the demotion of some subsets of i686 to secondary, they're not getting wholesale evicted from Koji and expected to go and stand up another whole set of Koji instances and build a whole community around that. They're not getting demoted like that; it's just that the image goes away. We don't have a way at the moment to promote like this, and we need tooling work to be able to do it, but in theory we could do it much more easily. And by having everything built at once, there are times when it's going to cost a lot of people, but most of the time they're never going to notice: stuff's just going to build, and we carve up the output and ship it to different locations. And I mean, we've had a few cases. In the lead-up to 21, in the dev cycle, I think it was just before beta, there was a toolchain issue discovered on S390, and that still necessitated a partial mass rebuild across all of primary anyway. So there are
issues there. People say, "oh, if something happens on S390 it doesn't affect primary". Well, actually, yes it does: you, as the random person who said that, who doesn't care anything about S390, didn't see me doing an entire weekend of work on a partial mass rebuild on primary to get that fixed. Similarly, in the lead-up to F22 (I forget exactly the component, but it was one of those low-level toolchain bits) there was a regression that actually affected all architectures, but it manifested itself slightly differently on aarch64 and actually stopped it from building altogether, for whatever reason. When the maintainer went and actively looked at it, he went "oh shit, this is a toolchain bug across all architectures and we have no idea why we didn't see it on x86". So there is value there. Simon: "It's come up a couple of times with GCC as well, where there's been a bug that has manifested itself on ppc or S390, but the bug is in the generic parts and exists on x86 and all the other arches; it just isn't visible, it's getting hidden. It is a bug, maybe not causing issues today, but it will down the road, so having that wider build coverage does help." I think having a wider ecosystem of architectures is healthy for the project as a whole, and the architectures benefit each other. "Not only is it healthy, but it's a pretty good tracker for those issues when they pop up." After F22 came out, there was a person in the community that has been massively critical of ARM being promoted to primary, massively critical, and when I sent out the release announcements for aarch64 and Power, he actually emailed me personally and said, "well, I want to say thank you; the uplift I've seen in the secondary architectures is actually having a really positive effect". And it was just as
well that I was sitting down when I got that email, because I had to pick my jaw up off the floor, going "Jesus Christ". And generally the community in general is actually coming around. Yes, the ARMv7 builders are particularly slow; yes, we have a plan, and much, much faster hardware in process, so that we can start doing ARMv7 builds on ARMv8 hardware to solve that problem; that's a little bit down the line. Much of that is people, primarily me, engaging with others on a whole bunch of other stuff, doing the work to polish that up and make it beautiful and usable. And in the case of the Power builders, the POWER8 hardware actually makes our x86_64 hardware look slow and ancient. So things like hardware concerns with regards to speed and quality and various other bits and pieces are very much in the process of being addressed as part of that proposal. "A very specific feature question related to what you just said, and I want to ask it because I know the answer is somewhere in this room and I've had a hard time finding it. When I get my affordable 64-bit ARM stuff in December..." Six months. "Six months? But he also told me..." Whenever he says something, I always just add a month: when he says next month, I assume the month after. "So John's lies are always even more optimistic than yours?" John, I do not question your timelines; you tell more realistic lies. "Let me get my question out: will I be able to do hardware-accelerated ARMv7 virtual machines?" Yes, yes, yes. We have that working now. "I wouldn't know, because I can't get the hardware." Sorry. "There is one caveat: your machine needs a CPU with 32-bit ARM support, because that's optional. If you take the Cavium example, it's public knowledge at this point that they did not implement 32-bit, so some of the people who buy hardware won't have ARMv7 support. So there's a trap in that. But the CPU that's coming out in the future, that one definitely has the support." But what I'm
thinking of is the 500-buck one coming very soon. That's the one: the publicly announced Enterprise Edition has the ability to do that, I believe, and there may be a demo coming up at a major ARM event in the next few months. So one of the internal guys spent a bunch of time going through all of this to make sure it happened, and he sent me this amazing email that, I kid you not, if I printed it out would be about this long. But to deploy it nicely into Fedora infrastructure and various other bits and pieces... there is no way in hell I am going to be running a QEMU command line that long. So basically I'm taking that recipe and polishing it, and engaging with the libvirt guys and the QEMU guys, and we're going to get a QEMU in place where we can boot U-Boot, so we can do standard Anaconda installs with standard yum upgrades, which with the process we currently have we can't do. Ultimately that was a proof of concept, to ensure we could do it and to engage upstream. For example, there was a bug in binutils with regards to 64K page sizes; the fix (to give you an idea how long they've been working on this) landed fairly early in the F22 cycle. It needed a mass rebuild of everything to ensure it worked; after it landed in the F22 cycle, a minimal install build root got everything fixed just through standard upgrades and package churn in the distro dev cycle, so we could run a minimal F22 install as such. And because I knew we weren't going to have the hardware and the other bits in place beforehand, I wasn't going to yell and scream about a mass rebuild just for this in F22. We've now had the mass rebuild in F23, so everything's there. This is a project that I've been inching towards over the last twelve-plus months; all the bits are slowly coming together to enable it, and yes, you will be able to do it.
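The real command line from that email can't be reproduced here, but a drastically abbreviated sketch of the shape of such an invocation might look like the following. Every path and value is an assumption for illustration, and the command is printed rather than run because it needs an ARMv8 host whose CPU implements 32-bit EL1 support:

```shell
# Heavily abbreviated sketch of a KVM-accelerated ARMv7 guest on an
# ARMv8 host; all file names and sizes here are assumptions.
cmd="qemu-system-arm -enable-kvm -machine virt -cpu host,aarch64=off \
-m 1024 -nographic \
-kernel zImage -append 'console=ttyAMA0 root=/dev/vda' \
-drive if=virtio,file=f22-armv7.img,format=raw"
# Printed, not executed: requires QEMU/KVM and 32-bit-capable silicon.
echo "$cmd"
```

The U-Boot-based flow described above would replace the direct -kernel load with firmware boot, which is what makes standard Anaconda installs and in-place kernel upgrades possible.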