All right, good morning everybody. My name is Andy McCrae and this is my colleague Jeff Young. We work at Red Hat as software engineers on the multi-arch team. Our primary focus is delivering OpenShift Container Platform — otherwise known as OCP — on POWER, but we also do manual builds of s390x and aarch64 for OCP, specifically for the Red Hat build system, which uses OSBS, the OpenShift build service.

We've been very strongly focused downstream, due to time constraints and the necessity of getting things delivered for POWER in the OpenShift 3.x series. But that caused us a bunch of challenges, and we thought there had to be a better solution.

By mid last year our team had been working on OCP 3.10, which GA'd on POWER last October. We were building OCP 3.10 on POWER and delivering IBM beta drops during this time. We had also built 3.9 for ARM and s390x for the OpenShift build service to use, and we validated all of that work — we have a designated multi-arch QE team that validates all of our downstream work. At that time there weren't really any issues compiling from one arch to another: everything was RHEL 7 based, and RHEL 7 worked on these architectures, so there wasn't an immediate need to say "hey, I need some CI upstream to check everything first", because for 3.9 and 3.10 at least, it mostly just worked.

That was until OpenShift 3.11 came along. With 3.11 there were obviously a lot more containers that exist outside of the OpenShift space — things like Prometheus, Grafana, the admin console — and it also introduced multi-stage builds. Up to that point we had been using docker pull and docker build for all of our things, and that worked pretty well. Then when multi-stage builds showed up, we started noticing that as part of that process you build, you have an output file, and that output path was often a hard-coded amd64 path in the Dockerfile (there's a short sketch of this pattern just below). So we had to find a way to vet these issues as they started showing up. And because we were doing everything downstream, it was really cumbersome to fix them downstream, test them downstream, and then put the fixes back upstream so they could be consumed properly. When we did 3.9 and 3.10 we had one or two of those and it really wasn't a big deal, but when 3.11 hit it started happening more and more, and we were breaking downstream builds as a result. That's a problem, because we were holding up x86.

It was also during this time that Fedora CI reached out to us and said: hey, we'd like to use upstream OpenShift for our OSBS container builds — can you help us take upstream OpenShift and put it on s390x, POWER, ARM, and x86?
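To make that hard-coded path problem concrete, here is a hedged, minimal sketch of the pattern we kept hitting in multi-stage builds. The image names, paths, and component name are illustrative assumptions, not copied from any actual OpenShift Dockerfile:

    # Illustrative multi-stage Dockerfile -- not a real OpenShift one.
    # The builder stage drops its binary under an arch-specific directory,
    # but the final stage copies from a hard-coded amd64 path, so the same
    # Dockerfile breaks on ppc64le, s390x, and arm64 build hosts.
    FROM golang:1.11 AS builder
    WORKDIR /go/src/example.com/component
    COPY . .
    RUN go build -o _output/local/bin/linux/$(go env GOARCH)/component .

    FROM centos:7
    COPY --from=builder /go/src/example.com/component/_output/local/bin/linux/amd64/component /usr/bin/component
    ENTRYPOINT ["/usr/bin/component"]

    # A common fix is just as small: have the builder stage copy its output
    # to an arch-neutral location, so nothing names a specific architecture:
    #   RUN install _output/local/bin/linux/$(go env GOARCH)/component /component
    #   ...
    #   COPY --from=builder /component /usr/bin/component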
So what was really stopping us from just going upstream and turning on other architectures? You can't just cross-compile a container the way you can an RPM — you really need native hardware to build these Dockerfiles and containers on. Internally at Red Hat we use a system called Beaker that has all the architectures we need to build on, but OpenShift upstream CI lives in Amazon and Google — in clouds. We really tried to investigate a way to just connect the two: we wanted upstream CI to call our Beaker machines and say "hey, build, compile, make this part of your CI tests". But there really wasn't a good way to do it. I'm not saying it was impossible, but it would have taken quite a bit of effort on the OpenShift side, and Red Hat had just acquired CoreOS — OpenShift 4.0 was really the priority — so the multi-arch work upstream, at least, got deprioritized so they could get 4.0 out the door. So we were kind of stuck. The other problem was that OpenShift upstream is built from CentOS 7 containers, and as some of you may know, CentOS 7 doesn't support s390x — it's just not built for it.

So we had a couple of problems to work around, and the first decision we made was: we have to support s390x, Fedora supports s390x, so why don't we rewrite all the Dockerfiles to support Fedora builds? That turns out to be a lot of Dockerfiles. But we didn't want to maintain forks of every single OpenShift repo with just one Dockerfile's difference, and we knew we didn't quite have the buy-in to put all those Fedora-based Dockerfiles back upstream, because honestly Andy and I were the only ones using them at the time.

So instead, we basically cataloged all the upstream repos needed to do a full build of OpenShift — and that's not just the openshift/origin repo, that's things like the etcd images, the image registry, the web console, Cockpit, things like that. And we wrote a script you can run on your local machine with Fedora 29 on it: it puts all those repos in /tmp, injects all of our Dockerfiles — essentially replacing the CentOS 7 versions — and runs local builds in the right order to produce all the containers needed to do a full install of OpenShift on your local machine. Obviously we have to rename the containers so we don't collide with the upstream container names. We also include some sample inventory files that OpenShift Ansible can use to consume the containers we make and do an all-in-one install. That saves us a lot of time, because we know we can just clone this repo on a Fedora 29 machine that has Docker installed, say "make OKD 3.11", and it will build for whatever architecture you're on. It's the same experience across all four arches, which is very good for us — it helps us a lot in troubleshooting all the little issues before they get downstream.

Additionally, in this repo we've wrapped some shell scripts around the conformance tests, so we can at least validate the containers we build, and there are instructions for the case where, say, you've built x86, POWER, and ARM and you want to push those images to your private Docker Hub or container registry and manifest-list them, so you can consume them again in a predictable way.
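As a rough illustration of that last step, here is a hedged sketch of pushing per-arch images and creating a manifest list with the (at the time experimental) docker manifest subcommands. The registry, repository, tag, and arch-suffix naming are placeholder assumptions, not the names our scripts actually use:

    # Placeholder names -- adjust to your own registry and tagging scheme.
    REGISTRY_REPO=docker.io/yourname/okd-component
    TAG=v3.11

    # Push the per-architecture images produced by the local builds
    # (this assumes they were tagged with an arch suffix when renamed).
    for arch in amd64 ppc64le s390x arm64; do
        docker push "${REGISTRY_REPO}:${TAG}-${arch}"
    done

    # Create a manifest list pointing at all of them, annotate each entry
    # with its architecture, and push the list itself (requires the
    # experimental docker CLI manifest commands, or a similar tool).
    docker manifest create "${REGISTRY_REPO}:${TAG}" \
        "${REGISTRY_REPO}:${TAG}-amd64" \
        "${REGISTRY_REPO}:${TAG}-ppc64le" \
        "${REGISTRY_REPO}:${TAG}-s390x" \
        "${REGISTRY_REPO}:${TAG}-arm64"
    for arch in amd64 ppc64le s390x arm64; do
        docker manifest annotate "${REGISTRY_REPO}:${TAG}" \
            "${REGISTRY_REPO}:${TAG}-${arch}" --os linux --arch "${arch}"
    done
    docker manifest push "${REGISTRY_REPO}:${TAG}"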
So this is really a temporary workaround that we wanted to share with you. It's a really easy way for a developer who's interested in getting started with multi-arch on OpenShift to have one place to go to know everything that needs to be pulled down and built, and to do a little bit of testing. It also gave us the ability to bootstrap Fedora CI's OSBS process. Again, this is all just for 3.11 — 4.0 changes some things for us, and Andy's going to talk about that next.

Cool. Yeah, so as Jeff said, what we've done so far — and we really want to stress this — is just a workaround. You wouldn't deploy it as a product; it's not supposed to be a distribution of OpenShift or anything like that. It really is just a collection of scripts that we already had and were using regularly.

With 4.0 coming, a lot of things are changing, and the timelines have been tight for the OpenShift team — understandably, they've been working flat out and have a lot to get done, so multi-arch isn't really a priority for them. In an ideal world, what we'd love to see is a situation where an event that triggers a build — for example a pull request or a release — builds multi-arch containers in the same way the current x86_64 containers are built for CentOS. That would be the ideal way to do it, but it's not possible right now. One of the problems, which Jeff mentioned, is the CI being just on AWS: the builds all happen on AWS, and there's not much we can do right now to change that. It's not impossible to change, but it's a much longer-term goal.

A more realistic goal for us in the shorter term would be to put the build files we have now for the Fedora containers into upstream. That way we could get rid of the repository we created and have everything done in a single location — the same location you would use for x86_64 — which would be great, because it would mean the builds get treated exactly the same way as the x86_64 builds are treated now. There would be downsides — they're not tested as thoroughly, and the amount of work that's gone into them isn't as high — but at least it's a good starting point.

On that note, another thing we'd like to see in the short term is some lint tests for multi-arch. Jeff mentioned there are some pretty common issues we run into where people have essentially hard-coded paths with the architecture in them. That gives no benefit even to the x86_64 containers — it's just a thing that happens when you don't consider multi-arch — and there are really easy ways to avoid it. So by having a simple lint test that checks for the common issues we've found, we can hopefully reduce the number of failures we see for multi-arch builds and ensure that pull requests going into these repositories aren't going to enforce an x86_64-only assumption.
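To give a feel for what we mean — and this is only a rough sketch of the idea, not an existing OpenShift CI check — such a lint step could be as simple as flagging Dockerfiles that mention a specific architecture; the pattern list and file matching here are assumptions and would need tuning:

    #!/bin/bash
    # Rough sketch of a multi-arch lint check: flag Dockerfiles that
    # hard-code an architecture in a path or image reference.
    rc=0
    while IFS= read -r -d '' dockerfile; do
        if grep -nE 'amd64|x86_64' "$dockerfile"; then
            echo "WARNING: $dockerfile hard-codes an architecture (see matches above)" >&2
            rc=1
        fi
    done < <(find . -name 'Dockerfile*' -print0)
    exit $rc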
Hopefully that will result in more successful builds for us, and as time goes on, as we find more cases, we can add them to the lint tests.

We'd also like to see a full run of the tests that currently run against the x86_64 containers run against the multi-arch containers. We do run tests against them at the moment — all the Kubernetes tests and a subset of the OpenShift tests. The reason we don't run all of them is that a couple of the containers used for testing OpenShift aren't built for multi-arch, so we just can't use them right now. But it would be great to see all the tests running against multi-arch the same way they do against x86_64, and to see them pass.

Now, as we've said, we put our focus on downstream in the 3.x series, and we'd like that to change moving forward. We think there's a great amount of benefit to be had from pushing changes upstream first: seeing issues sooner, fixing them upstream, and then inheriting the fixes downstream — which is kind of the whole principle of doing things upstream in the first place. We think we could really gain a lot from that. In the last cycles we were very much trailing behind: whatever happened in the OpenShift team, we then had to scramble to get it working on POWER, and it would sometimes be a couple of weeks, a month, maybe more after the actual OpenShift release. You could get OpenShift 3.10, for example, and maybe a month or two later you could then get it on POWER. That's really not an ideal situation, and we'd love to be able to see the issues coming, fix them, and release much closer to the actual release points.

So — Jeff had the link to the repository before, but there it is again. Again, it's not a distribution, it's just a collection of scripts, but it will work straight out of the box. You can essentially just clone the repo — there are some fairly simple README files in there — and run some scripts to get it going. If you're interested, we're definitely keen for people to try it out. I would recommend starting with x86_64 builds on Fedora: it's a really nice way to compare what we know works on CentOS 7, which everyone's using, to something built on Fedora, and from there to move on to building another architecture, or whatever takes your fancy, and see how that goes. If you find any issues or you'd like to make pull requests, please go ahead — we will be monitoring that. And we plan to add OpenShift 4 capability at some point, when it's available.

Yeah — so the big change for OpenShift 4, obviously, is the CoreOS install. Sorry — the big change is the CoreOS operating system, so that's the first thing we'll tackle: Fedora CoreOS, getting that multi-arch. Then we'll circle back and do the containers after that.

Yeah, I think that's it. So, are there any questions?

Question from the audience: so, using Fedora and the Fedora infrastructure, and wiring the ppc64le and s390x hardware into this — that seems possible? Okay.
Yeah — so, as far as that goes: every pull request to an OpenShift 3 build will spin up a cluster, and actually, until recently, in one of our repos you could change the README and it would spin up a multi-master cluster. There's a lot there, so that's just not going to work for us — that CI depends on elastic scaling. But there's actually another step: first you have the upstream CI where the containers are generated, and then there's a final integration test of a release payload. That's the one where it would make sense to do multi-arch, because there are only a couple of those an hour, so clearly we could scale out tests there, and that's probably where it would make sense to have the CI operator, I guess.

Yeah — so this is a conversation that was hard to have seven months ago. There was a lot going on then, and there's still a lot going on with 4, so this is just to get us through this dark time, if you will, for multi-arch. When OpenShift 4 is more or less ready upstream for us, we can just hand something in — we have some ideas we know will work — and the only work that has to be done is the automation. Yeah, okay, cool. Let's talk after. Any more questions? Okay — thank you everyone, and thanks for your interest in our presentation.