Alright, so I'd like this talk to be somewhat of a discussion if we can make it so, because there are some things we need to discuss as Fedora surrounding Kubernetes and OpenShift: specifically, how do we get this stuff onto Fedora, how do we keep it updated, and then, since getting the software onto Fedora isn't enough, how do we configure it, deploy it, and maintain it. So first, a bit on why: Kubernetes and containers are ultra hot right now; it's the new generation, the new future. For all the people who are psyched about containers and want to run their applications in containers: if you're really going to do it, your apps need to span multiple hosts. You're not going to run your production app on your individual MacBook or whatever, and Kubernetes is great for this. You take your container, or multiple containers if you want them to share resources together in a pod, you toss it to the cluster, and it gets scheduled on one of the nodes that can handle it. You've got services to make sure the network traffic goes the right way. It works well, and a lot of communities have adopted it, specifically a lot of communities that are important to Fedora.
Project Atomic, the umbrella project that puts out Atomic Hosts along with a lot of other container-ish technologies that fit under that umbrella, has been shipping Kubernetes as part of the host from the beginning of those projects, in both Fedora Atomic and CentOS Atomic. In Fedora we have a Fedora Atomic working group, and Kubernetes is part of what we do there. Red Hat has huge investments in Kubernetes through OpenShift, which is based on Kubernetes. There are also a ton of really cool projects happening all the time. One that I'm really interested in right now is called KubeVirt, which is about running virtual machines, with live migration and all the stuff we expect out of a virtualization system, but using Kubernetes to do the scheduling. That's one I'm particularly interested in, but there's just a ton of activity around Kubernetes, and in the container community in general. That's why we're talking about it; it matters for Fedora for all those reasons.
So, this is an important point: OpenShift Origin is based on Kubernetes, but they are different, and they do some different things. I've found personally that it's difficult to pay deep attention to Kubernetes and OpenShift Origin at the same time; as an individual contributor you kind of have to make some decisions about where to place your attention and effort. Since I've been working with Project Atomic, Fedora Atomic, and CentOS Atomic, and since Kubernetes comes baked into those images from the start (hey, we're shipping this on our image, it had better work), I've put more of my effort into Kubernetes than into OpenShift. As for reasons you might run one or the other, I've linked them to some of Fedora's Foundations. First: if you want the very latest stuff, Kubernetes leads OpenShift Origin. On average it's about a release ahead of where Origin is, and even though that gap has gotten smaller over time, the leading-edge stuff lands in Kubernetes first. Also, on the Friends point, Kubernetes is just super popular. Even if we decided that in Fedora we wanted to mainly focus on Origin, which would be fine, we would still have to have a story for Kubernetes. Kubernetes is just massive, a lot of people want to run it, and we can't turn them away and tell them to go use Ubuntu, or CentOS, or something else. Fedora has to have some kind of game plan, even if it's just docs showing how to use the upstream well. Why might you want to use Origin? Again, going back to Fedora's Foundations, on the Features side it adds some really cool features on top of base Kubernetes: a lot of stuff around build pipelines, a really slick UI.
It's got a real good self-service story. If I were running a cluster and exposing it to multiple people, I would want to use OpenShift for that, just to help me manage it. On the Friends tip: I think it's fair to say Origin is not as popular as plain Kubernetes on its own, Kubernetes just has the bigger mindshare footprint, but we have a lot of friends working on OpenShift. A ton of Red Hatters are working on it and they're doing awesome work. There's great documentation, the Ansible scripts are really great, there's a lot of goodness to be had in OpenShift. And because it comes from a Red Hat and Fedora kind of place, a lot of the ways OpenShift works fit well, not surprisingly, with the way Fedora works. So there's a lot of good stuff to be gained there, and for an individual contributor there are good reasons to focus on it. Fedora can and should and does do both, but for both Kubernetes and OpenShift there are things we're not doing and things we could be doing better, and we've got to have contributors to do the work; that's the key to everything, obviously. So I've broken the Kubernetes and Origin story on Fedora down into three parts, and it starts with the software itself. Every time Kubernetes does a release, they release binaries for all the components, and there are a lot of releases. Right now there are what are considered supported, active releases: 1.5, 1.6, 1.7, plus a 1.8 alpha right now. That's four releases' worth of stuff that people interested in Kubernetes are going to be looking for on some level.
Each of those has Z-stream point releases, and those come out pretty frequently. For all of those releases you can go to Kubernetes and grab the binary, for multiple architectures too. A lot of installation methods expect to be able to say: give me 1.6.7, give me 1.5.3, or whatever. So there's an expectation that all these binaries are available somewhere, and upstream, they are. It's similar with OpenShift Origin: they provide binaries, but there are fewer individual releases, so it's a little easier to keep track of. The thing about using the upstream binaries: as our Fedora story, we could just say, go grab the binaries from the project, and if you want to build an RPM, grab the pre-built binary, dump it into an RPM, and use that. The problem is that those are outside of our control. We ship things that come from our build systems, and sometimes we want to patch things, so there are reasons not to just use plain upstream binaries. So we do have Fedora RPMs, and we've had them since the beginning. We get to distribute what we built, and we get to patch things. But we haven't been tracking everything: like I said, there are four releases worth paying attention to at a time, three of them stable, and we're not tracking all of those super closely. And we bump against issues with the way we do packages. We have one kubernetes package, so F25, F26, and Rawhide are the places where there can be separate Kubernetes versions. We could also do things like separate kube-1.5 and kube-1.6 packages; there are different things we could do, but we're not doing that right now. In Fedora 25, in Atomic, we roughly pay attention to the latest stream.
So basically Fedora 25's Kubernetes is 1.5-something, I'm not sure exactly, but it's not the current 1.5-stream point release. In Fedora 26 we have 1.6 right now, and 1.7 is in updates-testing, so when that goes stable we'll have a shift in point release. So we're bumping up against issues like that. Now, modularity might help, because it gives us a scheme where instead of an F25, F26, F27 package, there could be modules for kube-1.5, kube-1.6, kube-1.7, and so on. Especially combined with containers, that gives us a way to satisfy the expectation that I should be able to go out and grab the particular version of Kubernetes that I want. For Origin, again, we have Fedora RPMs. They move at a slower pace, and I think we track the latest Origin release pretty well in Fedora. Okay, so that's where the binaries themselves come from. And I should say, too, about the Fedora RPMs: one thing that would help us track better, and that individual contributors can do, is keep an eye on Bodhi. We often don't have fast, timely karma to move releases from one state to the next. Even if we did update the Kubernetes in F25, it would probably sit there indefinitely without enough karma to get pushed to stable, just based on the current level of testing we're getting from individual contributors. That's an area to help with if we want this to move faster and more efficiently. And even if modularity solves some of our versioning issues, we still need the testing; we still need contributors around that. So, the packaging part. You can do no packaging: just copy the binaries to /usr/local/bin and run them.
You can write a systemd unit file. Dan in his talk mentioned Kubernetes the Hard Way, Kelsey Hightower's really popular how-to, and that's what it does: go to Kubernetes, download the binaries, drop them in /usr/local/bin, write these unit files, and go from there. Then there's the hyperkube image. This is a super popular way to deploy Kubernetes, and a lot of different installers use it. It's one binary that does all the jobs: scheduler, API server, client. You can use that one binary to run all the different roles. One of the most common ways hyperkube gets used is that upstream makes a hyperkube image: a Debian base plus the hyperkube binary pulled in from where their binaries live, and that's the image. They have one of those for every release, 1.5.3, 1.5.4, 1.5.5; every single time they do a release there's a corresponding image, and they're based on Debian. Now, we don't have a Fedora-based hyperkube image yet. Our Fedora package actually uses hyperkube for most of the components, but it doesn't use it for the API server, because the API server needs an additional capability to bind to port 443, so that was broken out. And then, to be less confusing I guess, the ability to use the hyperkube binary as the API server was removed upstream, which stopped me from being able to make a hyperkube image based on Fedora. But I talked to the maintainer and got that changed. So I'm going to make a Fedora hyperkube image, so that if you wanted to use some of these installers that require hyperkube, but with a Fedora base image and Fedora RPMs, you could do that. In some places, though, things are actually hard-coded to use the upstream, Debian-based Kubernetes image.
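To make the from-scratch, unit-file approach concrete, here's a minimal sketch of the kind of unit file you'd write by hand after copying a binary into /usr/local/bin. The flags and paths here are illustrative assumptions on my part, not the exact ones from Kubernetes the Hard Way:

```shell
# Hedged sketch: a hand-written systemd unit for the API server, in the
# "download binaries, write unit files" style. Flags and paths are
# illustrative only; consult upstream docs for a real deployment.
mkdir -p unit-demo
cat > unit-demo/kube-apiserver.service <<'EOF'
[Unit]
Description=Kubernetes API Server
After=network-online.target

[Service]
# Binary copied by hand from the upstream release tarball
ExecStart=/usr/local/bin/kube-apiserver \
  --etcd-servers=http://127.0.0.1:2379 \
  --service-cluster-ip-range=10.254.0.0/16 \
  --secure-port=6443
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
# On a real host: copy to /etc/systemd/system/, then
#   systemctl daemon-reload && systemctl enable --now kube-apiserver
```

You'd repeat that for each component, which is exactly why people reach for installers instead.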
Those are things we would need to patch. OpenShift Origin has an origin image that's like hyperkube, and it's CentOS-based. Unless I'm wrong, and someone correct me if I am, I don't think OpenShift actually makes any Fedora-based containers for when its components run in containers; they use CentOS. And this raises questions I want people to think about. How do we feel about that? Does it bother us? Right now, well, I'll talk more about kubeadm in a second, but we have a kubeadm package in Fedora right now, and that package is hard-coded to use the upstream, Debian-based image with upstream-built binaries for some of its components. Or, if you're going to use OpenShift on Fedora, you're going to be using CentOS-based images. I don't know, do we care? It's not the end of the world, I don't think, but if individual contributors think, no, I want it to be all Fedora, this is the sort of thing where people have to go do it. Something to think about. All right: Fedora RPMs are both a way of building the binary and a way of packaging it, and this has been the default way we've installed Kubernetes on Fedora so far. The kubernetes package produces a kubernetes-master package, a kubernetes-node package, and a client package. The master package runs on the master and includes the three master services; the node package includes the two node services. Those RPMs have so far been built into the Fedora Atomic image, so they're baked in there. We're planning to remove them, partly because if you don't need them, they're extra weight we can get rid of.
And you might want to run a different version; there are different reasons we want to remove them from the image, and we have other ways of getting them on there. There's package layering, which Jerry mentioned briefly, and it works pretty well: rpm-ostree install kubernetes, if your system didn't already have Kubernetes on it. I believe there's also a way to do overrides now, either here already or coming. So that's a way you can use the plain RPMs, and the RPMs are cool: they're managed by systemd, and we have Ansible scripts out there that expect to use them that way. Similarly, as I mentioned before, we have OpenShift RPMs for Fedora, and we've been maintaining those pretty well. You can also take the RPMs to a further level of packaging, in containers. I maintain per-component images in the Fedora registry: one for the API server, one for the controller manager, one for each of the components, based on the Fedora RPMs. I've made them so they can run either as a regular Docker container or as a system container. If you're running one as a regular Docker container, you can write a unit file that runs the container. You can also write a Kubernetes manifest and drop it in the kubelet's manifest directory; when the kubelet starts up, it looks in there to see what it has to start, and it can start up the master components. I've made these with an eye toward being drop-in replacements for the RPMs, particularly with the system containers. The package might be named kubernetes-apiserver, but when I install it as a system container I pass --name kube-apiserver, because that's the name of the systemd service the RPM would install.
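As a sketch of the static-pod route just described: you drop a manifest into the kubelet's manifest directory (typically /etc/kubernetes/manifests) and the kubelet brings the component up itself. The image path below is a placeholder I made up, not the actual name of a per-component Fedora image:

```shell
# Hedged sketch: a static pod manifest for a master component, of the
# kind you'd drop in the kubelet's manifest directory. The image path
# is a placeholder, not a real registry image name.
mkdir -p manifest-demo
cat > manifest-demo/kube-controller-manager.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: kube-controller-manager
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: kube-controller-manager
    image: registry.example/kubernetes-controller-manager:placeholder
    command:
    - kube-controller-manager
    - --master=http://127.0.0.1:8080
EOF
# On a real host this would live in /etc/kubernetes/manifests/ and the
# kubelet would start it automatically at boot.
```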
So then, if you have a script or something expecting that name, it just replaces it. One thing with these, and this is similar to the issues I mentioned around building the RPMs: we need better versioning and tagging in our Fedora build system right now. We want to get to where the build system will automatically put the correct version of the package in a tag. We're not doing that yet, but it's what people expect coming from the upstream side: upstream, every single release has an artifact with the component, the architecture, and the release number, and people expect to be able to access Kubernetes that way. That's something we need to work on, tagging that matches those expectations. And again, I mentioned modularity before; I think it's going to help. I've spent about the last week and a half looking at building a module for Kubernetes. It's still early days for modularity, I think, but I think it's going to be a big help. And as I said before, unless I'm wrong, there is no Fedora-based OpenShift Origin container, and that's something I think should exist; we as Fedorans should want it, but it's something we'll have to do ourselves. Okay, so the actual deployment part. We've been including the packages in the Atomic Host since the beginning, so you just download Atomic Host and Kubernetes is on there, but actually getting it up and running is kind of involved. You can start from scratch, and a lot of people really want to do this, especially when they're getting started. I think Ansible is crazy easy, and once you get a bit of a hang of it, you can pretty well see what's going on. But a lot of people I talk to on IRC just don't want to learn a new configuration management system.
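Back to that tagging expectation for a second, to make it concrete: upstream publishes an artifact per component, architecture, and release, so tooling can ask for exactly the build it wants. This follows the image-naming scheme upstream used around this time; treat it as illustrative:

```shell
# Sketch: the component + architecture + version naming scheme that
# installers expect to be able to request from a registry.
component=hyperkube
arch=amd64
version=v1.7.3
image="gcr.io/google_containers/${component}-${arch}:${version}"
echo "$image"   # prints gcr.io/google_containers/hyperkube-amd64:v1.7.3
```

Our build system tags would need to carry the same information for people's existing tooling to find our images.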
It's like when I was doing stuff with RDO and had to understand Puppet. Even though Ansible is much easier than that, I think, until you invest a little time in not being afraid of it, it can turn people off. And people also want to understand. They don't want a scripted install; they want to see exactly what's happening, and that's totally understandable. So, I mentioned Kubernetes the Hard Way. Kelsey Hightower is a total luminary in the Kubernetes world, and a ton of people reference and read that how-to. The thing about it, though, is that it's all about Debian, the upstream binaries, and Google Cloud. It sort of punts on some of the hard topics because Google Cloud just magically does that part, which, you know, he works for Google; it's a totally legit way to go about it, and Google Cloud is a great way to get your feet wet. But I thought it would be cool to do a version of it, drafting off his efforts but changing the bits that need to change to make it work with Atomic, so I started a fork of it. It might be worth thinking about; we've talked a little about including that in our more official Atomic documentation, or at least about the strategy of drafting off this popular existing from-scratch effort. Personally I'm not very interested in from-scratch installation, I like Ansible, but I recognize that it's important. We have a Project Atomic getting-started guide, which is also a from-scratch thing, but it needs work: it was written a while ago, there are things it doesn't do, and this sort of thing requires maintenance. That's again part of why I like the idea of drafting off somebody else's efforts: we can minimize the maintenance required. Okay.
Yeah, that's done by the OpenShift team, right? Oh, right. Okay, good. And that is a from-scratch Origin setup. That's what I thought, because I had looked around and everything points you to Ansible, which, again, I'm totally down with. Speaking of which: Minikube and Minishift. This is a great way to get started, I think. It's a single-node cluster, and personally I'd want to get started with something more than a single node, since spanning hosts is kind of the whole point, but I think there's still some value to it. You're running it in a VM, so it's very cross-platform friendly; they've got instructions for using it with all sorts of hypervisors. Now, it's not Fedora-based at all. By default you're using the boot2docker VM, so you're not using Fedora and you're not using our Docker; it pulls in a localkube binary from upstream that runs Kubernetes in a local setting. Minishift can use a CentOS VM, so that's getting closer to us. And if you use the VM driver that skips the VM entirely, and you're running on Fedora, you're using your host's Docker, so then you really are running it on Fedora; that's more of a Fedora thing. So this might be something worth Fedora pursuing if we want a really super easy way for people to get started. Yep, I think that might be two slides ahead, but yeah, that's a good one too. So this is where Minikube and Minishift live; it's the same thing, Minishift is just the OpenShift version of it. Next, kubeadm. This is pretty cool.
With kubeadm you can have a single-node cluster or a multi-node cluster: you start your first node and then add additional nodes. It's not highly available; you have your one master, unless I'm wrong, and then you add nodes to it. It's considered beta, and it's been considered beta for a little while; I guess there's a mixture of different things it uses that aren't considered fully GA yet. But I'm told this is kind of the future of deploying Kubernetes, and it's really nice. As distributed by upstream, it uses a mixture of installed packages and containers; they support CentOS and Debian in their docs and in the packages they offer. When you run it on CentOS, they have CentOS RPMs for kubeadm, for a container networking interface (CNI) package, and for the kubelet, and it uses the hyperkube image for the rest: you get the kubelet started, and it pulls down hyperkube. A month or two ago we got kubeadm into Fedora. There was an issue with the way they were doing the packaging that made their packages not work with Atomic Hosts, but we got that addressed, and you can use package layering to get kubeadm on. But in the Go source of kubeadm, the Google image repository is hard-coded, so that's something we would have to patch to use a Fedora-based hyperkube. Also, in the atomic-system-containers repo under Project Atomic, I have a pull request, with some feedback I still have to address, for a kubeadm system container that you can install on a system with no Kubernetes on it. What it basically does is drop the kubeadm binary into /usr/local/bin; you run that, it starts the services it needs, runs the kubelet containerized, and then goes and grabs hyperkube.
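The kubeadm flow I just described looks roughly like this. I'm capturing it as a script rather than executing it, and the package name and flags are assumptions on my part; check kubeadm's own docs and the Fedora package list before relying on them:

```shell
# Hedged sketch: the kubeadm flow on an Atomic host, captured as a
# script for illustration (not executed here). Package name and flags
# are assumptions; verify with `kubeadm --help` and the Fedora repos.
cat > kubeadm-flow.sh <<'EOF'
#!/bin/sh
# 1. Layer kubeadm onto the Atomic host, then reboot into the new tree
rpm-ostree install kubernetes-kubeadm
systemctl reboot

# 2. On the first node: initialize the cluster
kubeadm init

# 3. On each additional node: join using the token printed by init
kubeadm join --token <token> <master-ip>:6443
EOF
chmod +x kubeadm-flow.sh
```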
Then there's oc cluster up, which is kind of the OpenShift answer to kubeadm, and it works really well too. Now, there are options for adding additional nodes to your single node. I spent an afternoon trying to get that working, and it was one of those things where I got a bit closer, a bit closer, but never all the way; it's there, and I think it's probably just some minor thing to be fixed. But the basic flow: you run the command oc cluster up, and you're up pretty quickly with an OpenShift cluster on your one system, and then, pending oc cluster join getting straightened out, you can add additional nodes. This again is a mix of installed RPMs and container images. When I run it on Fedora Atomic, I install the origin-clients package with package layering, which gives you oc, and then it goes and pulls down containers, and it pulls down the CentOS-based container. One issue, though: origin-clients includes a kubectl tool, and that conflicts with the Kubernetes client package that comes installed by default right now on Fedora Atomic. But if you rebase to the Fedora Atomic 27 or Rawhide version of the repo, that's missing Kubernetes, so you can install it cleanly without the conflict. This would also be a good candidate for system containerization; I bet I could take my kubeadm container, change a couple of things, and bam, you'd have an origin-clients container running. So that's yet another opportunity for contribution. Okay, so contrib Ansible: in the Kubernetes contrib repo there are Ansible scripts for installing a Kubernetes cluster. This is what I've always used the most, and I've contributed to it. You can use it with system containers, and it supports multiple distros.
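The oc cluster up path above, again as a captured sketch rather than something executed here; the details around the client-package conflict may shift between releases:

```shell
# Hedged sketch: a one-node Origin cluster on Fedora Atomic via
# `oc cluster up`, captured as a script for illustration only.
cat > oc-cluster-up-flow.sh <<'EOF'
#!/bin/sh
# 1. Layer on origin-clients to get the `oc` binary
#    (its kubectl conflicts with the preinstalled kubernetes-client on
#     F25/F26; a tree without Kubernetes baked in avoids that)
rpm-ostree install origin-clients
systemctl reboot

# 2. Bring up a single-node cluster; this pulls CentOS-based images
oc cluster up
EOF
chmod +x oc-cluster-up-flow.sh
```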
On Fedora, it checks whether you're running on Atomic; if you're not, it installs the RPMs, and if you are, it assumes they're there. You can use it with system containers, though you have to install the system containers first. I actually have a fork that adds a way to just specify that I want system containers, and it does that part automatically. I was waiting to send a pull request until the system containers were actually released in the Fedora registry, but they are now, so I'm going to send it. There's a Vagrant portion, where you can run Vagrant and install onto Vagrant-managed VMs, and you can use Vagrant plugins for OpenStack and AWS to install there, but I don't know, it's a little hackier having to go through Vagrant for those things. And really, it's not a heavily active set of scripts. The sense I've gotten from talking to different people is that there are issues: for one, you can run multi-node etcd, but it doesn't support running a highly available master. And there's talk that contrib is this big, weird grab bag of a million different projects, and I guess the idea in Kubernetes is that they need to start moving to more appropriate locations; there's talk of moving it out to kube-deploy. This could use some attention, some opinions, and some discussion about its future. It's what I use most often myself when I'm testing things. There's another Ansible-based option called Kubespray, which is a Kubernetes incubator project. They've got a bunch of options for running it on AWS, Google Cloud, Azure, and OpenStack, and it does support a highly available master. They support RHEL and CentOS, and it didn't work for me out of the box on Fedora, but it was only three or four changes to the Ansible to make it work.
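The registry knob I'm about to mention for Kubespray might look something like this in the cluster group vars. The variable names are my recollection of what the project used, and the repo path is a placeholder; check the project's inventory defaults before using them:

```shell
# Hedged sketch: pointing Kubespray at a different hyperkube image via
# group vars. Variable names are from memory and the repo path is a
# placeholder; verify against the project's group_vars before use.
mkdir -p group_vars
cat > group_vars/k8s-cluster.yml <<'EOF'
# Default pulls from CoreOS; a Fedora-based hyperkube image could be
# substituted here once one exists.
hyperkube_image_repo: "registry.example/fedora-hyperkube"  # placeholder
hyperkube_image_tag: "v1.7.3"
EOF
```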
I was looking at this a week and a half ago and I saved the diff, so I'll probably send them a pull request to fix those few things so it works for Fedora. A couple of things here, though. They pull their hyperkube images from CoreOS, and interestingly, I think the reason they do that is that CoreOS builds the exact same hyperkube images upstream does, except they give the hyperkube binary the permission to bind to lower network ports; upstream's idea is that instead of 443 you'll use something like 6443, so you don't need that. But in the Ansible there's a place where you can change the registry it comes from, so if we had a Fedora hyperkube image, we could use this and pretty easily just point it at Fedora. This project seems to be much more active, so maybe those of us who've been focusing on contrib Ansible might want to think about turning our efforts here, getting Fedora working well, and getting rid of the setenforce 0. That's the thing: I haven't looked deeper into this, but I'm pretty sure it would be easy to get SELinux working there. By and large, though, people in the Kubernetes community are just not that worried about tossing a setenforce 0 into their getting-started docs; it's pretty pervasive, and that's something that also needs to be fixed. The contrib Ansible, when I first started using it, did setenforce 0 too, but it doesn't now, and it's not total rocket science to get that working. But again, it needs contribution. Then there's openshift-ansible, on the OpenShift side of this, and it's pretty awesome: you've got highly available etcd and a highly available master.
It's got a lot of activity; a lot of Red Hatters are working on it. There's a contrib repo with a ton of docs and additional Ansible scripts for running on different platforms. The link I have here is a cool blog post Dusty Mabe did, which I guess is getting a little dated now but will probably still work, on installing OpenShift on Fedora 25 Atomic Host using openshift-ansible. Again, I guess you can use RPMs; I think I've installed it using containers, and you can use the system containers. As Jerry mentioned, this is the same slide about the options for that. And again, these are the CentOS-based containers, so we have to decide whether we care about that; maybe we're cool with it. I've used this myself. Like I said, I don't test and contribute to OpenShift as heavily as I do to Kubernetes, but when I look at this compared to the state of our Kubernetes Ansible scripts, I think: gosh, that's great. I've also thought about, and talked to some people about, whether we can jump in here: could we get some modifications to these scripts so that if someone wanted to install plain Kubernetes on Fedora or CentOS, that could be tossed in among the many options that are already there? That might be a place to put our contributions and efforts instead of some of these other places. And then I've got this slide about SELinux. There is just a ton of setenforce 0 out there in instructions. It's easy to toss that in and never go back to it. And sometimes I've tested things where something has subsequently changed and it's actually been fixed, and the documentation just didn't get updated.
And that just perpetuates this idea that SELinux is going to get in your way and it's not going to work. The way I've gone about fixing a lot of things, or at least getting things to where SELinux can be enforcing, is for the things that run as containers — like network plugins, or some of the Kubernetes add-ons. Usually those are the things that give you SELinux denials. Rather than unconfining everything by turning SELinux off, you can put it in the manifest that the container runs as `spc_t`, and unconfine just that specific container. Now, it's better to have things confined properly. But I figure that rather than turning SELinux off on your entire host, it's better to unconfine the particular thing — and then, again, work on getting it confined properly. Another thing: sometimes on Fedora a plugin will be mounting a location on the host, and you might need to change the file context so that it's readable by the container runtime. There can be issues with getting that to stick sometimes. I'm not an SELinux master, but I do care about finding places where it's disabled or permissive and reversing that, and I've tried to do what I can to at least get it running enforcing on the host. For example, when you run kubeadm to initialize your cluster, you then need to choose a network plugin, and those run as containers — and just about every single network plugin has some kind of issue with SELinux, at least that I've seen. But the fix is simple: with the flannel network plugin, I made a fork of it and just added the security option to run as `spc_t`, and then at least I can have the host enforcing.
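The two workarounds above can be sketched like this — the manifest fragment and the host path are examples, not taken from any particular plugin, and the SELinux type name can vary across policy versions:

```shell
# 1) Unconfine just one troublesome container by running it as spc_t,
#    via the securityContext in its pod/daemonset manifest (fragment):
cat > selinux-patch.yaml <<'EOF'
securityContext:
  seLinuxOptions:
    type: spc_t   # super-privileged container: unconfined, but only this one container
EOF

# 2) Relabel a host directory that a plugin bind-mounts, so the
#    container runtime can read it (type name may vary by policy;
#    older policies used svirt_sandbox_file_t):
chcon -R -t container_file_t /var/lib/my-plugin
```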
But we need help spreading the word and getting fixes out there around that. So, some places to contribute: in the Atomic Working Group we set up a separate little Kubernetes SIG, and on Pagure here is our issue tracker for that. We're starting to track issues there that affect Kubernetes on Fedora — and on CentOS too, we're trying to pay attention to both. I mentioned Bodhi before: you can, at any time, go look for Kubernetes or other packages in this space that you care about, see if there's something waiting for karma in Bodhi, and really help by testing it out and giving it karma. And if it's not clear how to test it out, poke the maintainer, or poke one of us in #atomic on Freenode and ask — that's something people should be poked about. This is the contribution page for OpenShift. Jerry mentioned the container guidelines; that's a good way to help out with some of the things I mentioned that we don't have a Fedora container for — that's where you can start your journey of rectifying that. And this last one is a repo where we're doing Atomic Host documentation, and a lot of it relates to Kubernetes and OpenShift. We had a virtual FAD recently where we went through and identified a bunch of things we need, then hunted around, found reference materials, and put them into a bunch of issues — so that people who want to take one can see a blog post or something that's already been written and get it into shape. And we have an activity session this week. When is that? Josh, when's the activity session to do some doc writing for Atomic Host? Tomorrow afternoon. But yeah, contribution is really welcome there.
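The Bodhi karma workflow I'm describing looks roughly like this with the `bodhi` CLI (from the bodhi-client package) — the update ID below is a placeholder, and the exact flags may differ between client versions:

```shell
# Find Kubernetes updates sitting in updates-testing, waiting for karma:
bodhi updates query --packages kubernetes --status testing

# After actually testing the update, leave a comment with +1 karma
# (FEDORA-2017-XXXXXXXXXX is a placeholder update ID):
bodhi updates comment FEDORA-2017-XXXXXXXXXX \
  "Tested kubeadm cluster init on Fedora Atomic, works for me" --karma +1
```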
So that's it — questions? And also, some of these things I've been asking about, I'm curious to hear what people think. (And a reminder: tomorrow morning, 9:30, we're going to do a work session writing up and working over some Atomic docs, and that'll include helping to document some of this stuff.) But I'm curious how people feel about all this — feel free. [Audience question.] Yeah, it's kind of weird, because with kubeadm, when you run it, you run it from the Fedora package — that's what we're distributing — but we're distributing something that in turn goes out and grabs something else. And then when you run Kubernetes, you grab some other images from Google too. Yeah — the one trick, if we do our own images, is that we have to contend with this thing where you run kubeadm and give it an argument for a particular version. And if that version doesn't exist in our registry, I don't know — what happens then? Can we make it somehow fail over to use the upstream images? That's something we have to figure out. And there's a challenge because of the quantity of releases happening; it's just way more than we're typically set up to deal with in Fedora. [Audience:] You mentioned a lot of options — does it ever make sense for us to say we're going to support only certain ones? For example, we're only going to have a kubeadm image based on what upstream has, and if you want to try, say, Kubespray, you're going to have to figure that out yourself — we're going to be completely dedicated to making just kubeadm work. Yeah, I mean, that's a question we have to answer: whether or not we set out to say we're only going to support these particular paths, and that our contributions will only go toward what we actually support, you know.
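The failure mode I'm describing can be sketched like this — the registry path is a hypothetical Fedora mirror, and the `--image-repository` flag only exists in later kubeadm releases, so treat this as illustrative:

```shell
# kubeadm pulls control-plane images for whatever version you pin, so
# if a Fedora registry only mirrors some versions, a pinned install
# can break. Registry path here is a hypothetical mirror.
kubeadm init \
  --kubernetes-version v1.8.2 \
  --image-repository registry.fedoraproject.org/kubernetes

# If the v1.8.2 images are missing from that registry, the image pulls
# fail -- there is no automatic fallback to the upstream gcr.io images,
# which is exactly the fail-over behavior we'd have to figure out.
```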
So yeah, it's better that we say: this is what we're going to do. And this is the thing too — I'd like to consolidate as much of our effort as possible. If we were somehow able to get together with openshift-ansible and work together, doing Origin stuff and Kube stuff at the same time, that would be ideal, because at least one thing we know is that both efforts care about our distro — that's one thing we can assume, that it's running on Fedora. But yeah, there's no way to avoid supporting only certain things, because that's just the reality — we should just be clear about what we support. Because right now, when people show up to Project Atomic and do the getting-started guide, some stuff's out of date, and then there are all these other ways to do it. It's confusing, and I'm afraid we're giving people a bad experience. Also, we don't really direct people toward Origin much at all in the Project Atomic world, and there are a lot of Fedora-specific things — I've pointed some out during the course of this talk — that are just missing from the OpenShift Origin stack and experience. And again, maybe we're cool with them being missing, but I'd like us to at least be moving forward intentionally. All right, awesome. Thanks for