Welcome. My name is Adam Miller, for those who don't know. I work on the Fedora engineering team at Red Hat. I do a lot of things, as much as I can; most recently it's been around the container build system and automation. I'm going to talk today about the multi-architecture version of the layered image build system. I want to preface that with the fact that I really, really tried to have a live demo for y'all today. It's just not going to happen.

So, today's topics. We're going to define containers; I like to do that briefly just for anybody who is new to the topic space, who doesn't have experience in it, so everyone knows what on earth this stuff is and what we're talking about. Then we'll spell out the difference between base images and layered images, and then why Fedora containers: what's the motivation, why are we doing this as a project, why are we spinning our wheels on this? And then why multi-architecture containers, why does that matter? A quick, fun history lesson about the container build system within Fedora, to lay out where we've come from, where we are today, and then where we're going. Then we'll get into a little bit of how it's built; one of the main components it's built with is OpenShift, so we'll very briefly describe what that is. I want to define release engineering just because it helps frame and identify why the system is designed the way it is in certain aspects, why certain decisions were made, and what the motivation behind some of that was. And then, how does it all work today, and how is it going to work for multi-arch? Right now the multi-arch stuff is in active development in the Fedora staging environment, so we're still pushing towards the goal of having it in production.

So, very quickly, what are containers? Containers are generally referred to as operating-system-level virtualization, and what that means is that we can have multi-tenant isolation of processes. We can more or less forklift software, confine it in a space, and make it think that it has a view of the world that it doesn't actually have. We can lie to it and say that when it queries the proc filesystem it has its own specific view, because it's been namespaced off. We're providing constraints around it in order to control it and have finer-grained restrictions on it. It also gives us the ability to wrap that in whatever kind of abstraction we want (I'll get into the implementations we have a little more specifically), but we get to wrap it in ways that let us execute various different platforms on one host.

So containers are not new, and I will make the argument that the original container was the chroot in 1982, and you may or may not disagree with me on that. I will completely admit that it was a very unsophisticated container, but it allowed us to basically take a piece of software and lie to it about the environment in which it ran, the context in which it existed and executed. It thought it had a world view of the system that wasn't true, and that's more or less what we're doing with containers. They're exponentially more sophisticated these days, so we have copy-on-write, we can do quotas, we can do I/O rate limiting and all sorts of things. A brief, non-exhaustive history of sophisticated Unix-like container technology: I would wager, and tip the hat, that this started in 2000 with FreeBSD jails.
Then Linux-VServer in '01, Solaris Zones in '04, OpenVZ in '05, and in '08 LXC happened. IBM created a user-space toolset for LXC that allowed us to talk to, and get a really interesting grasp on, these kernel features in Linux. And that's where I like to think things got interesting, because that was the catalyst for a lot of our current tools and a lot of the change in the way we are trying to run systems and implement a lot of this stuff. Then in 2011, systemd-nspawn; interestingly enough it did nothing with LXC but used a lot of the same back-end technologies. A fun fact I like to tell: nspawn was apparently originally created because they wanted a way to test systemd without having to repeatedly boot systems, so nspawn is actually kind of a container technology. And then in 2013 dotCloud released Docker, and later renamed themselves Docker Inc. I think that was probably the big one that got everybody really focused, because it provided a very low barrier of entry, a standardized image format, and then a way to actually move these things around: pull down these images, run them, destroy them, very ad hoc, very easily. Then in 2015 runc came out under the purview of the Open Container Initiative; there is now documentation out there that defines what a container is and how it should be run, and runc is a reference implementation of that. Then in 2016, containerd, a runc orchestration daemon. All of these things are now how Docker is actually built, or Moby if you're following the upstream component.

So, layered images versus base images. When we get into the container world, you have layered images and base images. Base images are built from effectively nothing; they start with basically a blank slate. You define a root filesystem within them with everything you need. If you want to break it down, you can actually have a single process in there; languages like Golang that are statically compiled are kind of making that popular in some use cases, where you can have a base image that is built from nothing and just runs this one process. That's valid, but for most people who are transitioning to a container world, what they start with is a base image based on an operating system distribution that they're used to using. So what we have in Fedora space is a Fedora base image that you can then build upon. CentOS or RHEL or Debian or openSUSE are going to have those as well, for people who want to use distributions other than what Fedora is doing. But there's a distinction, because the base image is effectively your starting point for building what I would call more sophisticated applications. And that's not a kick in the shins saying that your shell running coreutils isn't sophisticated, but if you're trying to run a service, you're generally going to write that and build it on top of what the base operating system provides. So your base image is the thing that your distribution's release engineering group is generally going to produce, and layered images are what the rest of us build on top of it to create things. And I like the example here that shows we can basically take a Fedora 25 app, put it on a Fedora 25 base image, and move that entity from a Fedora 25 host to a Fedora 26 host base OS and just run it unmodified, as it is.
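To make the layered-versus-base distinction concrete, here is a minimal sketch of what a layered image looks like in practice. The image name, tag, and package are illustrative, and this assumes a Docker-style toolchain on a local machine, not anything specific to the Fedora build system:

```
# Sketch only: a layered image is a Dockerfile that starts FROM a base
# image the distribution ships and adds the application layer on top.
cat > Dockerfile <<'EOF'
# The base image, produced by distribution release engineering.
FROM fedora:25
# Everything below this line is the layer we add on top of the base.
RUN dnf -y install httpd && dnf clean all
CMD ["httpd", "-DFOREGROUND"]
EOF

# Build the layered image locally.
docker build -t my-fedora-25-app .

# The resulting image is a self-contained artifact; it can be copied to
# and run unmodified on, say, a Fedora 26 host.
docker run --rm -p 8080:80 my-fedora-25-app
```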
So the reason I say that's a distinction is that what we're talking about with the Fedora layered image build system is building images that are layered. The Fedora base images today are still built in Koji using Image Factory, and that's outside the scope of this talk; just know that there is that distinction and we keep them separate.

So why? Why on earth are we doing this? Why Fedora containers? We want to enable the ability to deliver Fedora content faster to users. The next line item is, admittedly, a very vague statement: we want to automatically generate release artifacts with security updates, and that's far- and wide-reaching in the sense that we want to do that across the board. But for the application stacks and service stacks that we want to deliver, that falls under the purview of this build system, because it enables all of these new deployment types; you've got your blue-green, your red-black, all these different strategies for deploying things. Instead of making users pull all of our content and build all of these things themselves, replicating a build environment and tracking dependencies so they can kick off builds and make sure they always have the latest version of everything, we want to do that on their behalf, so they can just use it the way they would use a DNF update on a traditional system.

And then also lowering the barrier of entry for contributors. We currently do a lot of legwork to repackage things, over and over again, that are already packaged in other ways. That makes a lot of sense for a traditional operating system, for a traditional Linux distro, because you really don't want applications to clobber the system. You don't want, say, something that you pip installed or npm installed or gem installed, or insert thing here, to clobber your system; that makes it very difficult to manage updates and to maintain a good manifest of what your systems are doing, which is relatively standard practice for maintaining a large environment. However, if we take that away from the end system and put it into a container, that container is an entity that is then rolled out into what you would consider an immutable infrastructure: if you want to make a change or do an update, you don't actually change something on the system, you rebuild that artifact, redeploy it, and then simply restart whatever that process is, running a new context of whatever you're trying to deploy. So there are some obstacles to actually allowing things to be distributed under the purview, or under the flag, under the brand of Fedora without being packaged first, and we have some plans on how to do that; a lot of the reasons we need to do that fall under the release engineering component, and I'll come back to that.

We don't just support one architecture today. That's the first and foremost thing: if we're going to do something as an official Fedora initiative, I like to believe that we will focus on and pay attention to all the things we currently say we will. But also, I think one of the big ones right now, a very large trend in what's going on in technology, is the Internet of Things.
And not many Internet of Things dev kits or packages, or the pieces of hardware that IoT folks are working on, are actually x86_64. There's just not a lot. I mean, most of them are something else entirely, which is a bit of a problem: how am I supposed to pull down a bunch of that stuff? Well, you know, throwing stones and whatnot, there's what I'll call the magical ARM revolution. I don't know if you spend as much time on various RSS feeds as I do, but it just seems like, at least in headlines, they're taking over the world. You know: wait, we don't have hardware yet, but whatever, they'll ship it next month, every month. So yeah, other architectures matter. There's a RISC-V bring-up that's being grassroots bootstrapped, there's been a rekindling of a handful of people's interest in MIPS, there are other architectures out there. There are a lot of things that people are interested in, and we don't know what the future holds. If at some point in the future somebody decides that architecture XYZ is going to take over the world, it just seems silly that we would build a system today that would lock us out of being able to adapt to that quickly.

So, quick history lesson. If anybody's been to any of my talks before about the layered image build system, some of the content I've gone over probably already looked familiar, and this definitely does, because I love to poke fun at and/or throw Matt under the bus. The way the layered image build system happened was that Matt Miller said there's an open source layered image build system and we should deploy one. And he's right: there was one, and we should deploy one. However, there was a misunderstanding by both of us that it was done and just needed to be deployed, that from the infrastructure standpoint we just needed to make some Ansible playbooks and roll this thing out. Neither of us realized that it was under active development and that we needed to join in its continued development and help get it across the finish line. So we did that.

Phase one, I would say, was probably about 18 months ago: we had a single-node builder, and we finished that in a few months. Then image format v2, registry v2, manifest v2. That broke the original implementation. So when v2 rolled out, if anybody is familiar with what I talked about previously, the standardized ways to create an image, transport that image, and carry metadata about it, that's these things. Well, the specification went from v1 to v2, and this was incompatible: universal truths about v1 were not inherently true about v2. Very large portions of the build system had to be rewritten from scratch, or heavily refactored for the components that were still legitimately useful but just broken. There was a lot of refactoring.

So phase two: bring it up into the future. Grab v2, the manifests, all the different v2 formatting. Do a scale-out deployment, because now that we have a better understanding of what we're doing, we can scale this out in a way that makes sense for the future. Automated tests that can be tied to the output of OSBS; this is actually done. Nobody uses it, but we have it. There is a testing mechanism, and I would very much like to thank Tim Flink, because he worked really hard on this for a very long time, and then nobody used it. So I'm sorry, Tim, but thank you very kindly.
And releng is able to promote images from the candidate registries to production. We have a formal process for that; there are Koji tags, and things are tracked in a way that is relatively similar to the way other artifacts in Fedora are. For a while there it was kind of a special, off-to-the-side thing, and we're getting more in line with the way that release engineering does things.

Phase three is what I would say is happening right now. The image registry scale-out is done; we have a multi-master image registry running on top of Gluster storage. Search and advertising for the image registry: I know if you go to registry.fedoraproject.org right now, you'll get a web UI... I'm sorry, I'm actually trying to track down an individual, whom I will not name and throw under the bus on tape, because we just need one patch; they had said that they would do it, and hopefully we can get that done and merged soon. CVE and security metadata for updates: that's something we still need to plan. What's very interesting about containers is that an image is basically a tarball with some metadata, and there's certain metadata that the specification says you need to have, but there's a lot that's not specified. For example, how do you version these things? That's an unanswered question. There's no specification for that. You define a tagging strategy that makes sense for you and your team and you go with it, but it's not universal, it's not ubiquitous, and everybody's not going to agree on it. You can't just go to somebody else's registry, whether that's the Docker Hub or Quay.io or somewhere else, and search for a thing, say Postgres: you will probably find a couple of dozen Postgres images, and they might not all be versioned in the same format in the same way. That's an interesting challenge, and we're working on that stuff.

Phase four is going to be the orchestrator and worker architecture, and I'll talk about what that means and why it's important, and then multi-arch. Phase three isn't inherently done yet, but phase four has already begun because of the necessity of getting moving towards multi-arch, and it will also allow us to take out certain components of the build environment that we don't require. And because it doesn't inherently affect phase three, we're not going to put a stop to some of that work.

Very quickly, OpenShift is a container platform built on top of Kubernetes. What is Kubernetes? I'll talk about that in a second. It has a bunch of advanced features; the only one that we simply care about is the build pipeline... I'm sorry, the only three that we care about: build pipelines, image streams, and triggers. A build pipeline is basically a primitive that allows you to pass in some information; OpenShift turns it into a thing they call a build, builds it for you, and spits out some stuff. One of the inputs into that is an image stream. We talked about how the container registry has a standardized format for how images can be transferred and moved around; an image stream is basically an alias for that which can monitor it for changes and then log an event. And what's interesting about events is that we can trigger actions off of them. So that's where the triggers come in. One of the things we're going to have is triggers used for various aspects of ours, and we could potentially use that for things like generating CVE data and that kind of stuff. So we're interested in those things.
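As a rough sketch of what those three primitives look like on a stock OpenShift cluster (illustrative names only; this is not the actual OSBS configuration), an image stream plus an image-change trigger on a build config is what turns "the base image changed" into "rebuild the layered image":

```
# Illustrative sketch: an ImageStream tracking a base image, and a
# BuildConfig whose ImageChange trigger fires whenever the stream
# records a change to that image.
oc create imagestream fedora-base
oc create imagestream my-app

# Alias an external image into the stream; with --scheduled, OpenShift
# periodically re-imports it and logs an event when it changes.
oc tag --source=docker registry.example.com/fedora:latest fedora-base:latest --scheduled

oc apply -f - <<'EOF'
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: my-layered-image
spec:
  source:
    git:
      uri: https://example.com/container/my-app.git   # Dockerfile lives here
  strategy:
    dockerStrategy:
      from:
        kind: ImageStreamTag
        name: fedora-base:latest
  output:
    to:
      kind: ImageStreamTag
      name: my-app:latest
  triggers:
  - type: ImageChange
    imageChange: {}
EOF
```

OSBS drives equivalents of these pieces programmatically through its own client rather than hand-written YAML, but the building blocks are the same ones named above.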
Fun diagram. If you look at the green section, build automation, that's most of what we're interested in. If anybody has questions about all of this stuff, we can talk about that later. The main reason I put this up is to point out that there's a lot going on here. OpenShift is a very powerful system, Kubernetes is a very powerful system, and there's a lot of capability there. What's interesting about the layered image build system is that we really only use a very small fraction of that capability. But a big reason we use it is the image streams, the triggers, and the fact that it is a cluster that can be scaled out. So as the build system that Fedora is using becomes more popular and we need to put more hardware behind it, it's pretty easy: we can just keep adding more nodes behind the orchestrator and it'll span to our heart's content. And I say that, and I think there are, what, thousand-node systems now? And we have a three-node system. So we've got room to grow. We'll be fine.

So, really quick, a Kubernetes overview. You have masters and nodes, and this is what I was talking about: when you run a container inside of Kubernetes, you have this idea of a pod. A container runs inside of a pod, and that's where our builds will run. We schedule this unit of work inside a pod and our builds run there. The master is where the orchestration happens. So when we need to scale out and need more build capacity, to run more of our build pods, we can just add more node hosts behind it and scale out there. And the reason that this... nope. Ah, I put them out of order, sorry. We're going to skip that slide; I'll come back to it.

So OSBS is built on top of OpenShift Origin. If you look inside of the OSBS thing, OSBS is the OpenShift Build Service. It puts together various components to create a custom build type that we feed into OpenShift. So OpenShift is one component; we add some things to it and we create what we call a build root, and that is where the build actually happens. We take advantage of the build primitive, we rely on OpenShift for the scheduling and orchestration, and we enforce that the inputs come from known, valid places. So if you try to create a build that just does a curl pipe to bash, we will fail that build, because we have a set of resources that we green-light. Isn't that it? Yeah. Well, thank you. Okay. And then we have a build root. Our build root is a minimal Docker runtime. For everybody familiar with doing RPM builds in Koji, this is going to be a similar idea to the build root there: the minimal set of tools required to run the other set of tools that are needed to perform the build that you've asked of the system. We create that, and it is the baseline of every build and every pod that runs inside the cluster. We firewall-constrain at the Docker bridge interface, and that's how we actually do our isolation for the various inputs: there are no untrusted inputs. So, for example, dist-git in Fedora space is an accepted source, official Fedora repositories, that kind of stuff. Inputs are sanitized. If for any reason you pass something that is malformed or mis-formatted... I'm not going to say it's perfect, you could potentially get around what we've set up, but so far, for the most part, it has been good.
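If you want to poke at the same concepts on any OpenShift cluster, the pieces just described (nodes, build pods, and the builds that run in them) are all ordinary API objects. A small illustrative sketch, with a made-up namespace and build name:

```
# Nodes are the hosts the master schedules work onto; adding build
# capacity is essentially registering more of these with the cluster.
oc get nodes

# Each build runs as a pod; the build root container executes inside it.
oc get pods -n image-build            # namespace name is illustrative

# Builds are first-class objects, so you can list and follow one.
oc get builds -n image-build
oc logs build/my-layered-image-1 -n image-build   # build name is illustrative
```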
Atomic Reactor is another component. It's a single-pass Docker build tool that has a lot of really fancy features we like; the main ones are that we can change the base image and that we can inject repositories. So right now, if for any reason we need to do a build override or something, we can inject another source of information that is considered valid.

So, the build system. Here's Fedora's implementation as it stands today. We have Fedora layered image container maintainers. They interact with dist-git just like RPM maintainers do. They send a container build to Koji, Koji then kicks it off in OSBS, and the magic happens and the image lands in the registry. Let me see if people want to take pictures... cool. Okay.

This is what it's going to look like; this is what it looks like in stage right now, it just doesn't work yet. So this should all look familiar: this piece down here is the same, this piece over here is the same. But what has happened is we now have an OSBS orchestrator cluster and then worker clusters, and we will have a separate worker cluster for every architecture that we care about. What's going to happen is Koji kicks off a build in the OSBS orchestrator, that orchestrator in turn kicks off a build for each of the architectures we care about, that information flows back into the orchestrator OSBS, and then it flows back into Koji. So Koji will remain our source of truth, and we will get to a place where, just like today when you kick off an RPM build it builds for all the architectures, if you kick off a container build it kicks off the builds for all the architectures. I probably should have found some space to say "other architectures" down here, but I just picked a handful of them for demonstration purposes; I'm not saying these are the three we will support, but yes.

Is it going to be necessary for everything to be RPMs before you can build a container, or will you be able to pass something in and build a container for something that isn't necessarily packaged as RPMs? Today, yes, it's required. In the future... so that was actually a few slides back: the "lower the barrier of entry for contributors" part, that's what I was talking about. I probably should have spelled it out a little better, but right now, yes, it has to be all RPMs. In the future, one of the goals is for that to change. I mean, the limit is not in OSBS; you can upload a tarball into the lookaside cache and unpack it in the Dockerfile and use it in there. Right, yes: it's not a technical limit in OSBS or in the layered image build system, we can technically do that today. We currently lock it down to not allow that.

And that's actually a good moment for the really quick: what is release engineering? It's the main software production pipeline, one that is reproducible, auditable, definable, and deliverable. Those attributes we want to keep true into the future; those are things we want to keep and not throw away. And one of the big problems, one of the big challenges, with allowing things that don't come from RPMs is the ability to recreate them in a way that can be audited later in the future.
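To make the maintainer-facing side of that pipeline concrete, the day-to-day flow looks roughly like the sketch below. The package name is illustrative and the exact flags may differ, but the shape (a dist-git repo in the container namespace with a Dockerfile, then a Koji container build that lands in the candidate registry) is the one described above:

```
# Sketch of the maintainer workflow (package name illustrative).
# Container content lives in the "container" namespace of dist-git,
# with a Dockerfile where an RPM would have a spec file.
fedpkg clone container/httpd
cd httpd

# Edit the Dockerfile, commit, and push, just like an RPM maintainer
# would edit a spec file.
git commit -am "Update httpd container"
git push

# Ask Koji for a container build; Koji hands the work to OSBS, and the
# resulting image ends up in the candidate registry for releng to promote.
fedpkg container-build
```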
So for OSBS, the OpenShift Build Service, what happens there is: when Koji kicks off the build and OSBS is done, with a successful build reported from the worker node, the orchestrator then does what's called a content generator metadata import into Koji. All of the metadata that would potentially be needed to recreate that build environment, that build root with that specific set of inputs, is stored in Koji. So Koji is still the source of truth. Realistically, at any point in time, once the Ansible playbooks are set up and the configs are good, we can turn-key destroy OSBS on the back end, relaunch it, and just keep building. That's the goal. However, for the rebuildable part and the metadata imported back into Koji: if we allow things like pip install or go get, those pull things from random places, and those random places on the internet can disappear. So we've lost the ability to truly audit, and we've also lost the ability to recreate. What we need is a way to have content streams that are curated. And what's interesting about that term is that we have a wiki page somewhere in Fedora that talks about that exact topic and a way to solve it; I just don't know whatever happened to that project. So at some point in the future we will find a way to bring that project back to life, or we'll start a new one, because this is something that we need.

Can I just add real quick that this is also partially... there's been work on the OSBS side to support pulling content other than RPMs directly from Koji. So if we have a system that's either building or importing Go artifacts into Koji, they could then be pulled directly into a container build in such a way that it would be auditable, with a reference to those artifacts included in the metadata. So, for the video: this line of questioning started because he asked whether we are required to have RPMs first, and the comment was that there is currently work ongoing from the OSBS upstream to allow content sources other than just RPM repos to be injected into the build environment for a container. So we could theoretically get to a point where we can have that input.

There's another question. This is a bit more of a question about the actual architecture of OSBS, and maybe we'll see the answer later, but is there a reason you decided to go with a multi-cluster approach for the different architectures as opposed to, say, having multiple nodes with different architectures? So the question was: is there a reason for having multiple clusters instead of having various nodes with specific architectures, labeling them appropriately, and using node selectors; for those who aren't familiar, node selectors are based on labels. Yes: the implementation overhead of patching all of Kubernetes to understand architecture awareness was difficult, because otherwise Kubernetes would just randomly schedule things to run on nodes of the wrong architecture, and that wouldn't go well. So yeah, basically the OSBS upstream of, I think, five people, who don't actually have a long tenure of hacking on Kubernetes itself, evaluated whether to try to implement multi-arch awareness in Kubernetes itself or handle it higher up the stack, and the choice was to handle it higher up the stack. That's interesting; I think that problem actually got a little bit better with a couple of recent releases. Okay.
Okay, so the comment was that they think the problem actually got better within a couple of recent releases. This work, the architecture design for this work, started back in January, if that helps frame the time frame of the information that was available. I think it could be switched over; it could be switched over if there's a better way of doing it. Oh, I don't know... because of some of the bookkeeping that happens between these, it would be a decent amount of work to pivot. It seems to me, at least, that a lot of the metadata transport is happening anyway; it was just my brief thought that it could be switched over. It probably could be; I just don't know, to the best of my knowledge it would be non-trivial. From what I've looked at in terms of the implementation of the orchestrator-to-worker communication, basically the orchestrator just gives instructions and then keeps an earmark of "I'm waiting for the metadata upload from each of those," and then it reports back. I mean, I think the work is all built with standard... okay.

There was a... yes: is there an example dist-git repo or package name, or whatever, that has a container that works, just to poke at it and see? Yeah, actually we have an entire container namespace in dist-git. And that's not like a standard repo that has a spec file? No, it doesn't have a spec file; it's a Dockerfile instead. What's a good example? Um... we'll just save that for the next session. I feel like we're in the question-and-answer space. Yeah, yeah, it's fine, we have wholeheartedly transitioned to Q and A and that's totally fine; there were only like two slides left and they're not important. Container, yeah. So here, container engine, why not? Files, sources... oh. What? Yeah, what? That might be a bad example. Well, no, because I know this is a thing. Like, now here we go. We have the developer of the container engine here saying that it's not in Pagure. Yeah, it is; all of dist-git is in Pagure now. There's nothing that... but you really want to know more about this... yeah, that's not a good one, it's running an instance of... Yes, question here? We're still on your question, was that good? Yes. Okay, cool.

It's also... cool. Yes, I maintain that. It's a little outdated, I need to update a couple of things, it's not perfect, but we do have container guidelines. So if you go to the container guidelines: we have a review process and guidelines for all of that, just like we have for RPMs. The verbosity of our guidelines is considerably less than that of the RPM ones. We're still figuring out what we need because, like I said, there are no definitions for what this stuff has to be, so we just try to work amongst various communities and gather as much consensus as possible and say that that's what it is. Yes, and that is actually a really good transition and a good plug: if you are interested in that, directly after this we have a workshop on how to become a container maintainer in Fedora. Josh Berkus and myself will be doing that right here in about 20 minutes.

Yes, Troy. You've got your container and you have three arches; how does somebody say "I want aarch64"? You just pull it. You can use something like Skopeo or Docker or whatever and you just pull it, because we'll have manifest list implementations. So if you are on an x86_64 machine it'll pull x86_64; if you're on aarch64 it'll pull aarch64. So if I do a docker pull...
Dan, where are you at, man? I'm sorry, what, Troy? So if I do a docker pull... let's say I just have Docker. So the original question was: how would somebody know how to get the appropriate architecture once we have multiple architectures building? The answer is that the manifest list in the registry should make it so that you just docker pull, let's say for F27, you docker pull f27/httpd, and it will negotiate with the registry what architecture you're on and pull the appropriate one. I think it actually provides the metadata in the POST now; when it does the GET it would be a parameter. Cool, question?

I have a question: do I have to maintain one Dockerfile per architecture? The question is, do I have to maintain one Dockerfile per architecture? The answer is no; right now it should be just like with a spec file, you have your one and we feed it through. The kicker there is, if we find somewhere in the future that there are oddities that don't work on one architecture and we have to find workarounds, that could change, but no, the plan right now is just the one. Question?

So the question was how we get to using Buildah instead of Docker, and the answer is that we already have a plugin for it, we just need to switch to it, and that is on my roadmap for Fedora's OSBS instance. I will speak for nobody else's OSBS instance, but yeah, we will be doing that, and Dan, if that is not done by DevConf I will owe you an explanation.

I have kind of a loosely related question: it seems like for a lot of containers built from packages, the Dockerfile is literally just "install package" and possibly set an entry point, depending on the entry point. Is there a reason we need to have a Dockerfile at all in those cases? Is there a reason we can't just use some of the APIs that, say, Buildah uses, to kind of automatically produce them? So the statement was that some of the Dockerfiles are basically just a DNF install plus an entry point for whatever that thing is, and the question was: is there any reason we need to do it that way, as opposed to letting people use things like Buildah and just define what they need and build it? I mean, it seems like in those cases you can almost assume an implicit Dockerfile and not actually need to maintain a separate repository that has virtually the same Dockerfile across the board. Well, yeah. Basically, okay, what we would need to do for that scenario is define the set of things that we want to just be a DNF install of that package plus an entry point, and then we would have to have a uniformly defined entry point that we would have to supply for each one of those things.
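For what it's worth, the kind of Dockerfile being described in that question really is tiny, and the Buildah side of the comparison is similarly short. A hedged sketch (package, image names, and entry point are all illustrative, not the official Fedora bits):

```
# Sketch of the "just install a package and set an entry point" case.
cat > Dockerfile <<'EOF'
FROM registry.fedoraproject.org/fedora:27
RUN dnf -y install httpd && dnf clean all
EXPOSE 80
CMD ["httpd", "-DFOREGROUND"]
EOF

# The Buildah version of the same thing, driven straight from the CLI
# (scriptable, no Dockerfile needed), which is what the question is
# getting at.
ctr=$(buildah from registry.fedoraproject.org/fedora:27)
buildah run "$ctr" -- dnf -y install httpd
buildah config --cmd "httpd -DFOREGROUND" --port 80 "$ctr"
buildah commit "$ctr" my-httpd
```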
We could do that; it's just a matter of somebody defining what that set of RPMs is that would be containerized, somebody writing those entry point scripts (because it could just be a command supplied by the RPM, or it could be a script that we have to provide), and then somebody writing the system that would tie it all together. But the way it's set up now, and the idea behind it, is that we have this one uniform process that covers all the bases we need. I'm not against changing it; that's up to the Atomic Working Group to do as they please. Me personally, yeah, why not, let's change the world. A lot of this stuff, I mean all of this stuff, is made up as we go, so if you have proposals for how to change it, I'm all in; this was just the proposed way to get moving, and that's how we're doing it. And the other thing, too: let's say you do have httpd as your entry point, and then you want to use that as the base image to build another layered image that inherits from it. You can actually create a layered image from a layered image and daisy-chain up from that. You may know that just because of your container background, but that's a new point for a lot of people, and it affords us new possibilities and those kinds of things. But yeah, I think that would make a lot of sense, though ultimately, in a lot of ways, once we have the Freshmaker integration into this stuff, it won't matter, because Freshmaker is going to be doing all that stuff for us anyway. Oh, I'm sorry, Freshmaker was from the previous talk; it's a magical system, part of Factory 2.0, that automatically triggers builds within the environment based on changing content. Yes? Great, we have six minutes. Can we make it generate that? It doesn't today. So, multiple questions, yes.

Right now, for our container definition, our arch is a tag... yes, it's a label, and right now we've been putting x86_64 in there. Yes. Is there going to be a separate definition file for each architecture, or, for packages that are available on all architectures, can we define a generic one? He answered that earlier... well, I didn't call it out explicitly. So the statement was: currently in our guidelines we have a definition of the architecture, and currently we define it as x86_64. The question is whether in the future we're going to have to define that explicitly for each one. The answer is no. I think we will go the route of RPM, in the sense that we will remove the requirement to define it (we have to patch koji-containerbuild to remove that requirement) and it will default to some value. However, if you need to specify only certain architectures, we will in that case introduce that, because just like in RPM, where you can do an ExcludeArch or explicitly define arches, we'll want to do a similar thing for container builds. In reality there will be cases like this: for example, there is no Docker on ppc64; there is on ppc64le, on little endian. So on ppc64 you can't build Kubernetes or OpenShift today, because they rely on Docker. There are just going to be certain things like that.

The other question: you almost answered my question. What's the limit on those things? Do you have to have OpenShift able to build before you can build on each of those architectures? The question is, what is the limit on those things, do I have to have OpenShift able to build... OpenShift right now doesn't build on ppc64le.
It does, I have it. Okay, it technically does; it's not supported. Yeah, I know, a lot of what we're going to do is not supported; that's kind of what we do in Fedora. Oh, okay, yeah. 32-bit... I mean, nothing but x86_64 is technically supported by OpenShift Origin. Well, in CentOS we got aarch64 merged, right? I mean, in Fedora... okay, so you're good. I've got s390x, I've got... how commercially supported is it... does it work as long as you've got it to build? Yeah. So in the first phase we've got aarch64, armv7hl, i686, ppc64le, x86_64, and then, what is it, s390x? Yeah, so if you go to the F27 build you'll have s390x, because I think you just went to an F26 build. I did, you're right. Yeah, so F27, and you'll see s390x in there as well. So we're building for everything right now.

That's also... we might talk to you about making that happen on CentOS. Me? Yeah, I've got a handful of patches to the build scripts to make it work. I didn't end up having to patch OpenShift itself, but there's all the, like, 10,000 lines of bash, because apparently everybody in Kubernetes and Golang just hates Makefiles. It's like, here's the lineage that's been working for you for 20 years, but no, we hate everything, what's up with that, right? Oh god. Yeah, actually, since we're talking about it: it's an impressive thing, but there's a whole namespacing scheme, so if you look at the functions there are function definitions that are namespaced, like os::build::binaries_from_targets and stuff, and there's a lot of this. And it works well, I'm not going to knock it, except that it's just difficult to debug when it fails, because anyone in the room who's ever attempted to debug an overly sophisticated bash script will attest to the fact that it's just hard sometimes. And OpenShift does this because Kubernetes does this, and why reinvent the build system when the project you're based on already has one? So I don't knock them; it's just that I've fought this thing enough that I have a little personal pain around it. So the answer to the question is: we will build on everything that we can. We have one minute. Any last questions? Cool, thank you.