Hello, everybody, and welcome to this community OKD Working Group office hour. We're going to use the hour that we normally use for OpenShift Commons to bring together some of the members of the OKD Working Group, give you a really short intro and some demos of CRC, and answer all of your questions. To do this I'd like to introduce, from the University of Michigan, Jamie Magara, one of our co-chairs, and he's going to take it away and run through what's going on in OKD these days.

Hello, folks. I'm Jamie Magara from the University of Michigan, co-chair of the OKD Working Group. At the University of Michigan we're using OKD for a variety of projects: a lot of data-driven research work and development of web applications. What I'd like is for the other folks on the call here to introduce themselves. So Diane, maybe just quickly reference again who you are. All right, and I'm also going to hit the Start Recording button, which I forgot to hit, so here we go. Recording has started. And don't worry, because we keep a raw file of it too.

Well, hello, everybody. I'm the Director of Community Development at Red Hat and a long-time co-chair of the OKD Working Group, and I've been working on OpenShift and Origin since the early days, seven, eight years ago now. Happy to be here.

Excellent. And Charro? Charro Gruver, formerly an OpenShift Origin user and an OCP customer, now a Red Hat employee, almost a year now. I'm still trying to stay as active as I can with the OKD Working Group, but hashtag-day-job takes away from that. Whenever I can, I contribute with a lot of the things I do in my home lab. I'm currently also trying to maintain the CodeReady Containers builds, which we'll talk a little more about here.

Excellent. And Neal? Hey, y'all. Name's Neal Gompa.
I'm from Datto, and I'm here primarily to help bridge the gap between the Fedora community and the OpenShift/OKD community, and to help make OKD and OpenShift the best Kubernetes platform they can be. I use OKD personally and professionally: at my workplace we use it for reliable building of images in a consistent, repeatable, reproducible way, with nice auditing and tracing. Yay, OpenShift Builds. I like doing stuff with OKD, so that's why I'm here to help make OKD better.

Excellent, thank you. So let's run through a quick overview of what we're going to be talking about today. The agenda: an overview of OKD 4 and a quick update on its state; then a little about operators and OperatorHub; then the collaboration, the cross-pollination, between OKD and Fedora CoreOS; then a bit about CodeReady Containers for OKD; and then we'll open things up for questions and answers. If there's time, Diane will also talk a little about documentation in the working group and in OKD in general.

So let's get started with an overview of OKD 4. OKD is a community distribution of Kubernetes. Basically, that means it's a community-developed and community-supported distribution: when you ask a question, you're asking the community for support, and there are people contributing various components to it and even doing their own builds of OKD to add functionality they'd like. It shares the code base of OpenShift, OCP, the commercial product as it were, and it sits on top of Fedora CoreOS, with some tweaks to make them all work well together.
You can get to all of the resources I'll be talking about by going to okd.io. That's the website, with links to the repositories, to documentation, installation guides, and some recipes for some unique things, plus the ways you can communicate with the group; we'll be talking about all of those as we go.

A little more about what a community distribution of Kubernetes means here. It's automated installation, patching, and updating from the operating system up: from Fedora CoreOS all the way up to the applications and services. You've got Red Hat and community operators (we'll talk about operators in a bit); then the platform, Kubernetes itself, and the cluster management, which has your monitoring, your registry, your security; and underneath, of course, the operating system, Fedora CoreOS. It's built for hybrid and multi-cloud deployments, and even local single-machine deployments. You can get it to work in a lot of environments, the usual suspects: OpenStack, VMware vSphere, AWS, Azure, Google Cloud Platform. Basically, across the spectrum from cloud all the way to bare metal, there are options for you to use this in a variety of ways.

So what is the state of OKD 4? Right now we're at a stable release in 4.7, with new maintenance releases on a regular basis; we actually have a quicker pace than OCP, and community contributions are getting added all the time. We have a link to the repo and documentation for getting the latest stable releases, and you can also get access to the nightlies to test things moving forward. And there's collaboration with OperatorHub and the Fedora communities, which we'll talk about more in a bit.
There are operators for OKD specifically, and stuff coming from these other sources, that make for a really nice environment; if we have time I'll actually show you that. Last I checked, 146 operators were available for install in OKD just by default. It also enables early adoption of some upcoming technologies, which we'll touch on. So that's today and tomorrow for OKD. And 4.8 is up there for testing as well: if you're looking to get a sense of what's happening with 4.8, there's a channel with releases for that to test.

Now I want to talk a little about operators and OperatorHub. For folks who don't know, operators are a method of packaging, deploying, and managing Kubernetes applications in the OpenShift/OKD world. They got integrated in the 4.x versions. Basically, operators are controlled by custom resources, and they can be written in Ansible, in Go, even in shell script. They simplify the process of deploying applications and maintaining all of the associated resources. There's an Operator SDK available on various OSes that lets you create operators of your own, with documentation and examples for Ansible and Go, so you can get up and running quickly, and they're quite easy to deploy to an OKD cluster.

Let me back up a little. In essence, when OCP and OKD started down the 4.x path, essentially everything from the applications at the top down to the cluster management became operators. You've got the cluster version operator, which maintains the version of OKD that is installed, and also the various components: the kube API server, controller manager, scheduler, etcd. These are all managed by operators underneath as well.
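To make the "controlled by custom resources" idea concrete for readers, here is what a custom resource looks like; the group, kind, and fields below are invented for illustration and don't belong to any real operator:

```yaml
# Hypothetical custom resource. An operator watching this kind would
# reconcile cluster state until it matches the spec below.
apiVersion: cache.example.com/v1alpha1
kind: Memcached
metadata:
  name: demo
  namespace: my-project
spec:
  size: 3   # the operator scales the managed workload to three replicas
```

Applying a resource like this (with `oc apply -f`) is all a user does; the operator's controller loop handles everything else.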
For networking, you've got network operators ensuring that your CNI plugins are installed and your SDN is properly configured. Switching to operators really radically changed the ecosystem that is OpenShift and OKD. There's an image registry operator that keeps your local image registry (there's a built-in image registry in OKD) functioning correctly, and sets it up with a few little button presses and the application of some YAML, if you so choose. There's also monitoring for your metrics, and ingress: the ingress operator ensures your routes are properly set up and functioning. And of course storage ensures that your plugins are installed and that the storage classes exist. If you do an install on AWS, you're automatically connected to registry storage in EBS; likewise, if you're doing vSphere, you can connect to that storage as well. There are also CSI plugins, for example for Ceph: it's very easy to apply the open-source Ceph plugin to do CephFS storage on OKD. So again, it's operators all the way down, and it's very easy to monitor that way: you just look at what's happening with the operators, read their logs, and you can see what's happening in your cluster.

OperatorHub is a community-sourced index of these optional operators, and there's a list of them: Argo CD, very popular, KubeVirt, et cetera. These are really cool; if we get a chance, I'll show you in a bit how OKD connects to OperatorHub and provides these for you. There's an underlying system, the Operator Lifecycle Manager (OLM), that takes care of scope, because you can have operators that are cluster-wide or only in a namespace, and makes sure these can be updated, manually if you so choose. You can update operators independently of updating the cluster itself.
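That "look at the operators, read their logs" workflow comes down to a couple of oc commands. This is a fragment that needs access to a running cluster, and the image registry namespace is just one example of where to drill in:

```shell
# Show every cluster operator and whether it's Available/Progressing/Degraded.
oc get clusteroperators

# Then drill into a specific operator's workload, e.g. the built-in registry:
oc get pods -n openshift-image-registry
oc logs -n openshift-image-registry deployment/image-registry
```

The first command alone is usually enough to spot which part of the cluster needs attention.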
So these third-party operators can be updated on their own, and that can be managed very easily. As I mentioned, OperatorHub is integrated into the console, so you can install these operators via the interface with just a point and click, or via the oc command. Very easy stuff to manage there.

Now I want to talk a little about OKD and Fedora CoreOS. I'll cover a bit of this, and then I think Neal will contribute a little and talk about Fedora in general and that integration. Essentially, Fedora CoreOS is the underlying operating system. When you do an OKD install, the CoreOS operating system bits and pieces are downloaded using OSTree technology, with RPMs, so it's rpm-ostree. It's basically like Git for your operating system. The installer pulls down a version, adds some specific components for Kubernetes and OKD, and then deploys that to the nodes. So you always get an updated version of Fedora CoreOS on your nodes and the desired version of OKD. It's very elegant the way it's done, and it allows for seamless updates of the nodes' underlying operating system.

There's a lot of collaboration between the groups: multiple members of the Fedora CoreOS working group attend the OKD working group meetings and vice versa, so there's always communication going back and forth about how to work together and how to make changes that won't overly complicate things for the other group. I think it's a good example of how working groups can work together to solve issues and move technology forward, and both groups are open for people to join and become part of that collaboration. The OKD Working Group is on Slack: on kubernetes.slack.com we're in #openshift-dev, and there's also openshiftcommons.slack.com in general.
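Going back to the oc-command route for a moment: under OLM, installing an OperatorHub operator from the CLI comes down to creating a Subscription. The operator name and channel below are placeholders; check `oc get packagemanifests -n openshift-marketplace` on a real cluster for actual values:

```shell
# Hedged sketch: subscribe to a community operator, cluster-wide, via OLM.
oc apply -f - <<'EOF'
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: my-operator            # placeholder name
  namespace: openshift-operators
spec:
  channel: stable              # placeholder channel
  name: my-operator            # package name from the catalog
  source: community-operators
  sourceNamespace: openshift-marketplace
EOF
```

OLM then resolves the package, installs the operator, and keeps it on the chosen channel; deleting the Subscription stops updates without removing the operator.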
There's a Google group that's great for discussion, either via email or, if you prefer, the forum format in that Google group. And there are bi-weekly video conference meetings, currently on Tuesday afternoons, generally 1 p.m. Eastern, and they alternate: bi-weekly general working group meetings, and on the opposing Tuesdays, a docs subgroup meeting. If you want to contribute to OKD, or learn about OKD from a documentation perspective, those are great meetings to participate in, because we talk about ways of improving the documentation, fixing errors in it, making it clearer, better organizing the information, and documenting the great functionality and features in OKD. There are also repositories, one for the working group itself and then the actual OKD repositories. And we've now started to use HackMD for our meeting notes; I can put a link to that in the channel as well.

For the Fedora CoreOS working group, there's an IRC channel, and actually this slide is incorrect; sorry, it hasn't been updated. They are no longer on Freenode; they're now on the new network (Libera.Chat) started by the folks who broke off from Freenode. I will update that slide, and if you go to the website, it is updated there. There's an issue tracker for actual Fedora CoreOS issues, a discussion forum, and also a documentation repo that's not mentioned here; they work on their documentation as well. There's a mailing list and weekly meetings: three of the meetings each month are IRC-based, and the first meeting of the month is generally a video meeting, so people can see each other face to face and interact more directly.

And while I'm on Fedora CoreOS: Neal, do you want to chip in a little about the general relationship between Fedora and OKD?
Yeah, sure. With a big, complex technology like OpenShift, which OKD is the community distribution of, you have a lot of moving parts across the stack. You've got security features like filters and control groups, support for volumes and data access, new container technologies around orchestration, service management interfaces, provisioning interfaces, and all these other things. The goal of OKD is to be a self-healing, reliable platform to run and develop applications and services on, and the only way we can keep making it better is to consume the latest and greatest technologies and incorporate them into the OpenShift technology base: Kubernetes, the operators, all of that. So fundamentally, using Fedora as our base for OKD aligns with our ability to take advantage of the new technologies that come in through the innovation pipeline that is the Fedora Project, just as Red Hat Enterprise Linux underpins Red Hat OpenShift Container Platform. For OKD, we leverage Fedora CoreOS so we can bring in the latest technologies to support that mission of making the best container orchestration platform.

That allows for things like, I'm going to throw out a term here, cgroup v2. It's a hot thing in the container space: basically a rejiggering of how we do resource allocation and management with control groups. It's available to us through Fedora CoreOS, and because it's been available to us within OKD, within OpenShift, within the Kubernetes communities, all of us have been working together on making this a reality for all Kubernetes platforms, for all container orchestration. So Fedora CoreOS gives us that strong, stable, reliable, and fast-moving base that gives us the ability to integrate the latest and greatest technologies to make the best possible Kubernetes platform.
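As a small aside for readers: you can check which cgroup hierarchy any Linux host is running with a one-liner. This is a generic check, not OKD-specific:

```shell
# Print the filesystem type mounted at the cgroup root.
# "cgroup2fs" means cgroup v2 (the unified hierarchy Neal mentions);
# "tmpfs" means the legacy cgroup v1 layout.
stat -fc %T /sys/fs/cgroup
```

Recent Fedora CoreOS releases default to cgroup v2, which is how OKD nodes pick it up.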
And through that, we're able to provide you with pretty much the most awesome Kubernetes distribution out there, in my opinion.

That's pretty accurate, all the way down to that last statement. It is an amazing distribution, the community surrounding it is very active, and it's a great thing to be a part of. Thanks, Neal. So let's move on. Here's a list of the resources; by the way, again, if you just go to okd.io, you can find links to all of the various repositories. And now, demo time for CodeReady Containers on OKD. Here are the resources for CodeReady Containers on OKD. Charro, take it away.

I'll take over the screen share. Before I start, let's just take a brief moment to think about what might be the absolute best Java framework on the planet. He is not biased at all. Not even a little bit. It's obviously a completely well-substantiated, technically sound opinion. All right, and while he's blathering on about his Quarkus and showing off all of his Quarkus stuff, I will mention that if you ask a question or submit an issue, we now have actual swag: OKD t-shirts. So I highly encourage you to get your OKD t-shirt; submit an issue, make a pull request, and we'll send you a t-shirt as a thank-you gift. That's my contribution beyond docs. Absolutely.

OK, are you all seeing my screen OK? A browser on the right and an empty terminal on the left? Yes, you're good to go. Excellent. OK, so let's start with the downloads. The downloads are, in theory, available from okd.io; we're dealing with a technical glitch on the website, and once it's resolved you'll be able to see them there. We'll post this link in the chat, and it's also on the slide. The Fedora Project graciously gave us some disk space to hold these.
So when you go to this pub/alt okd-crc-release folder, you'll see folders for the Linux 64-bit, macOS 64-bit, and Windows downloads for CRC. The installation is pretty simple: you download the executable, add it to your path, and the first thing you run is crc setup. Now, I've pre-run this because it takes a few minutes: it has to unpack what is effectively a binary disk image of an OKD installation, a single-node cluster packaged up so that it can run on the hypervisor of your desktop operating system, be it Linux, macOS, or Windows. So you run the setup; it extracts the bundle, makes sure the hypervisor is installed and configured properly, and prepares you for starting your OpenShift cluster for the first time.

Once the setup is complete, you run crc start. This also takes a couple of minutes, depending on the horsepower of your particular workstation, because it fires up the OpenShift cluster, which means all of the operators Jamie talked about that are part of the OpenShift install go through their bootstrapping and validation processes. You can see from the logs in my terminal output that it gives you updates as this progresses on what all of the operators are doing, and when it's completed, you'll see some output text telling you how to access your now-running cluster.

One of the quickest ways to get to it is via a browser: if you run crc console from a terminal, it pops open your default browser and brings you to the login for your now-running single-node cluster on your desktop. It defaults to logging you in as the developer account. CRC is configured to create two accounts in this cluster: the default kubeadmin, which is your administrative account, and it gives you a randomly created password here for your kubeadmin account.
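Condensed, the sequence just described uses the real crc subcommands below; flags can vary by CRC version, and running this requires a CRC download, so treat it as a sketch:

```shell
crc setup                 # one-time host prep: unpack the bundle, configure the hypervisor
crc start                 # boot the single-node cluster; the operators bootstrap here
crc console               # open the web console in the default browser
crc console --credentials # print the kubeadmin and developer login details
```

The last command is handy later, when the password printed at the end of `crc start` has scrolled out of the terminal.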
And then you have a developer account whose credentials are developer/developer, so extremely secure on the developer side. I'll log in as developer/developer. It brings me to the developer perspective, which starts with a tour: the first time you log in, you can click through the getting-started tour, and it takes you through the various components of the console and how to access what. You'll also have this nice little disclaimer at the top reminding you not to use this cluster for anything in production: it is a stripped-down installation and a single-node cluster, so no availability and redundancy.

The developer perspective is limited, so if you want to configure things in your cluster, log in as the kubeadmin account by copying and pasting the password. Now you're logged in with the administrator perspective, which gives you access to everything in the cluster. With a few exceptions for some operators that have been disabled, like some of the monitoring, this is a fully functioning cluster, so you can add additional operators to it by going to OperatorHub and selecting whatever you want to install. Say you want to try out Eclipse Che: there's an operator for that. If, for example, you want to do some work with Kafka, you can install the Strimzi operator.

With all of this, bear in mind that you're running this ecosystem on your local workstation, so you're limited by the resources available there. You can tune those things: if you follow our link to the documentation on how to use CRC, you can configure the memory and other things about the virtual machine that underlies this. So if you've got a big, fat MacBook Pro with 64 GB of RAM, you can crank up the memory available to this cluster and do a whole lot more with it. All right, so that's a quick tour of this little guy. The next thing I want to show you is where all of this comes from.
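The tuning just mentioned is done through crc config; the values below are examples to size to your own workstation, and the cluster has to be restarted to pick them up:

```shell
crc config set memory 16384   # MiB for the CRC VM
crc config set cpus 6         # vCPUs for the VM
crc stop
crc start                     # restart so the new settings take effect
```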
So in GitHub, under github.com/code-ready, that's where the source code for CodeReady Containers is. If you're so inclined and want to build this thing yourself, there are a couple of projects in there you'll care about. The first is snc, which does what the acronym stands for: it builds a limited single-node cluster. If all you want is a single-node cluster that only has local access from whatever workstation you're building it on, you can actually use this to build and run a cluster. If you want to package it up as a CRC bundle, that's where you switch to the second project: after you've built the SNC, you build the CodeReady Containers bundle from the crc project.

One of these days I'll actually get around to posting a little blog entry about how I do this for OKD, because we don't yet have the CI/CD mechanics built for automating the build of CodeReady Containers. So if the CRC bundle you downloaded from the site where we're hosting it is a couple of releases out of date, that's my fault, because I haven't built and released a new one yet. One of these days we'll get the time and the community effort underway to automate this with CI/CD, but for now we wanted to provide the community with a version of CodeReady Containers that is completely unencumbered from needing a Red Hat-specific pull secret or registering with the Red Hat developer site. I would still encourage you to do that, because there's a ton of free stuff, tutorials, labs, and documentation available with a Red Hat developer registration. But we're community members too, and we're also very sensitive about our privacy and our own data, so we fully understand not wanting to register with a site just to use a piece of software, and thus we support CodeReady Containers for OKD.
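A rough sketch of that two-project build flow follows; the script names come from the code-ready/snc repository around the time of this talk and may have changed since, so check the repo's README before relying on them:

```shell
git clone https://github.com/code-ready/snc
cd snc
./snc.sh          # build the limited single-node cluster
./createdisk.sh   # package the result as a CRC bundle
```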
And I could go on all day about home labs and single-node clusters and things like that, but I'm going to stop there. Let's see if there are any questions out there. I don't know, Chris and Bobby, who are producing, if there are any out on Twitch and YouTube, but I'm not seeing any in the chat right now, so I'm going to end. Oh, here's one now. Jose, thank you for that. Jose is asking: are there any plans to add support for the ARM architecture? Interesting you should ask; I'm going to give you two answers.

The first answer is CodeReady Containers-specific. The official CodeReady Containers does run on the M1 processor, but it's running under emulation: it's still an x86-64 runtime running in the virtual machine. I haven't yet had an opportunity to test our OKD CRC on ARM. My wife does have an M1 MacBook Pro, but she hasn't let me borrow it yet to subject it to CodeReady Containers, so I might sneak around and see if I can buy my own ARM-based MacBook Pro.

The bigger answer, though, that everybody is interested in: yes, ARM64 support is in the works. I don't know exactly where it's at; I believe Vadim or Christian, who are part of our group, have some insight into where engineering is. The first goal is to support data-center-sized ARM64: the big metal that you can rent from AWS or Google, or that you can buy and deploy in your own data center, running the ARM64 architecture. And then there's support for things like the new 8-gigabyte Raspberry Pi 4s.
As soon as we get enough community effort around that, and we have a base of the operators built, I can see it happening pretty quickly, because these are so accessible now and 8 gigabytes of RAM is barely enough, but is enough, that we could get it running. Once we get the machine config operator, the cluster version operator, and a lot of those things ported over to ARM64, we'll probably start seeing some Raspberry Pi-based OpenShift clusters, which I know a lot of folks out there are interested in. The last thing I'll say on it, before I talk forever on this one thing, is: join the community and contribute, because a lot of making this work on ARM is just the muscle of lifting and shifting, compiling, fixing compile errors, and making all of the Go code run on the ARM architecture. It's not like there's an insurmountable block in our way; we've just got to get all the packages built for it, get Fedora CoreOS fully supporting it from an OpenShift perspective, and so on.

Yeah, so actually Vadim chimed in remotely on this: in essence, Christian Glombek is one of the people working on ARM64 in general, and once it's working for OCP, OKD will follow, hopefully in 4.9 or 4.10. No promises on a Pi 4, though; but in general, ARM64 is coming in future releases, again 4.9 or 4.10. Thank you for that question; that's a good one. That question gets asked a lot, probably once every working group meeting, and we do have working group meetings bi-weekly. Was it bi-weekly? Yes: bi-weekly we have the main working group meeting on Tuesdays, and you can subscribe to it on the Fedora calendar; I'll throw up the calendar URL in a second. We also have the docs meetings on the other Tuesdays, and so another great way to contribute to OKD and the community efforts is to help us improve the documentation.
So, if you go to docs.okd.io (I don't know, Charro, if you want to just pop over there for a minute), there are some decent, I will say, docs for OKD that we're currently working on, with content for pretty much every version, the latest included. But especially if you're a newbie to OKD and you're trying to use the OKD 4 documentation, I'd highly encourage you to keep a journal: note if something didn't work or if you needed more depth on something. And one more link I'm going to make you travel to is the openshift-docs repo under the openshift org on GitHub; I'll throw the link here, but let me just share my screen for a second. If you want to log an issue, yeah, you've got the OKD one; go to openshift-docs rather than OKD, and just pull up any issue that has OKD in the subject line. Here, let me see if I can give you a link to one; it's openshift-docs, there we go. Yeah, there you go; just scroll down until you find one that looks interesting.

Probably the easiest way to get involved in any open source project is to use the docs and comment on the docs. And there are lots of things here; we're not perfect. We generate our docs off of OCP, the OpenShift docs, so there's usually a stray reference to RHEL CoreOS where it should say Fedora CoreOS, or perhaps our language isn't as inclusive as it should be, and there are lots of easy fixes like that, as well as grammar fixes. If you want to log an issue in here, it would be great if you could pick one and just show it to us; we'll show our appreciation for your finding it, and even better if you can suggest the fix. We'll be happy to take those as well. So docs are a big part of community development and community building for OKD; please feel free to come in and critique our docs, as we're very happy to have your input on that. Absolutely.
And another thing worth showing: Charro, go back to the main okd.io real quick and show folks the guides, because this is a great thing. There's the official documentation for the install, and if you click on the Guides hyperlink, the second or third to last one there, there's a repo that contains installation guides, primarily for what's called UPI, user-provisioned infrastructure, on all of these platforms: AWS, Azure, GCP, home lab. I wrote one that's in there for vSphere. Basically, instead of using the automated functionality of the installer, you can use UPI. Well, that one's for IPI, yep, there's a good example there. And there are also guides for UPI, which is user-provisioned: you configure your various components ahead of time, configure your VMs ahead of time, et cetera, and then install onto them. So there's documentation that augments the official, in quotes, documentation for OKD, and it's really, really helpful and can always use contributions as well. One thing that comes out of this is that when we see folks doing things repeatedly that the installer doesn't provide for, or if you're doing UPI, there have been a couple of instances of folks coming up with scripts, or Terraform or other deployment control mechanisms, to automate and simplify tasks we see folks doing over and over. So it's a great way to contribute to the project as well.

Do we have any other questions coming in? I don't see any; people must be stunned with all the amazing stuff here, but I'd highly encourage Charro to continue. Here comes one: Chris is asking, can we make multi-master clusters with OKD 4? Oh, yes. Yes, I would hope so; I'm pretty sure that's our default, right? Yeah, it is the default. Essentially, three masters and three workers are the default for the IPI installation on any of the platforms, except when you're doing the single-node approach.
So yeah, three masters and three workers. As a matter of fact, there's a switch flipped in the installation configuration by default so that your masters do not handle any workloads, keeping that separation. You can do that, or you can do three nodes, where the nodes provide both master and worker functionality, or you can go down to a single node; but the minimum for the standard install is three masters. All right. Well, Chris, definitely get yourself a t-shirt; I know you know how to do that, and we'll make sure that happens.

And I think that actually highlights the really great thing about OKD: you can use it in production in a highly available, resilient fashion, and you can also do something small with a single node for development and whatnot. It's very flexible, including many of the tools that come with it by default, which is really great.

Yeah. We should also, sorry, go ahead. All right, yeah, go ahead. We should also keep in mind that because of how awesome OKD is, supporting multiple providers and both UPI and IPI, you can even go into the realm of mixed setups. Maybe your control plane, the Kubernetes term for the masters that actually orchestrate the setup, runs on a VPS at DigitalOcean, or on a machine you have at home, and then you have workers in AWS or Azure or GCP, or on machines somewhere else remote. With your home lab stuff, maybe you don't have enough machines to do the whole setup and you still want to play with the full architecture: the masters could be machines on your home network, and you could dynamically bring workers up and down as you need them in the cloud.
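For reference, the three-master/three-worker default discussed a moment ago corresponds to the replica counts in the installer's install-config.yaml. This fragment is trimmed for illustration, with the platform and credential fields omitted:

```yaml
# install-config.yaml fragment (incomplete; for illustration only)
controlPlane:
  name: master
  replicas: 3     # the multi-master default
compute:
- name: worker
  replicas: 3     # set to 0 for a compact cluster with schedulable masters
```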
There are all kinds of interesting permutations that you wouldn't normally be able to think about, and that's really cool, because with operators, the OKD installer, CRC, and all these other pieces, you can do whatever crazy configuration you want to achieve what you want to do. And what would be even better is that if you've come up with something neat that we haven't really thought of or done before, you can send it to us for the OKD deployment guides repo so other people can see it, maybe help improve it, make it better. And this is also an opportunity, as I think Diane and Jamie mentioned earlier, to contribute to the docs. The UPI stuff, because user-provisioned infrastructure, which is what UPI stands for, is kind of whatever you want it to be, is probably one of the least fleshed-out aspects of OpenShift and OKD deployments. So if you come up with something cool, or you've hit some stumbling blocks and have some clarifications that would make sense to put in the documentation, by all means send it to us and help us fill this out, because that'll make it better for everyone else, and you'll get a free t-shirt out of it. So there you go. Everybody wins. I know. I have to say, this is the first time we've had t-shirts for OKD. That's why we're flagging them so much here. We're really thrilled that we actually have something to give away besides our thank-yous. And we're really thrilled to get this support, and the work that Mike McEwen and the team have done on these deployment guides is amazing. The biggest issue I think we have is testing, on UPI, on all of this stuff. I mean, we have a regular cadence of releases that go out, and each release basically needs testing.
And if you have some esoteric edge-case deployment on some cloud, or not so esoteric, maybe it's Alibaba or DigitalOcean or something like that, testing means making sure that the Fedora CoreOS images are available, and if they're not there, working with the Fedora CoreOS community to help get them there and keep the latest versions of them available, continuously. There are so many moving pieces to open source projects in general, but this one especially, because it is a collaboration between the Fedora CoreOS community and the OKD and OpenShift communities. So we really are looking to grow the group of folks who have access to these different deployment target platforms and can commit on a regular basis to do this. Some work could also be done on testing pipelines: if you could build test scripts that other people could run, that would help, and there's always something else if you're interested in contributing here. We've got a great core of folks who have been working since the early days of Origin, and especially since the OKD 4 release came out, we've had a pretty vibrant and very active community, but we are active mostly in the spaces that we are currently deploying to. I am seeing someone's screen there popping up, there you go. But that's the key: if you have a platform, whether it's ARM or Alibaba or IBM Z, wherever that is, if you want OKD tested on that and you have access to it on a recurring basis, please reach out to us. Whether it's Microsoft Azure or Alibaba or one of the other cloud hosting providers, and we could go on because there are tons of them out there, we would love to have you take on helping us with the testing, and we'd help you create some of those automated scripts for doing that. All right, we've got another question here.
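One concrete place that platform testing starts is checking that Fedora CoreOS actually publishes artifacts for your target. The project publishes stream metadata as JSON (for example at builds.coreos.fedoraproject.org/streams/stable.json), listing artifacts per architecture and platform. A minimal sketch of such a check follows; the sample document below is a trimmed, made-up stand-in for the real metadata, so the release strings and platform list are illustrative only:

```python
import json

# Trimmed, made-up stand-in for FCOS stream metadata; the real document
# comes from e.g. https://builds.coreos.fedoraproject.org/streams/stable.json
SAMPLE_STREAM = json.loads("""
{
  "stream": "stable",
  "architectures": {
    "x86_64": {
      "artifacts": {
        "aws":    {"release": "39.20240101.3.0"},
        "vmware": {"release": "39.20240101.3.0"}
      }
    }
  }
}
""")

def missing_platforms(stream, arch, wanted):
    """Return the platforms in `wanted` with no published artifact for `arch`."""
    arts = stream.get("architectures", {}).get(arch, {}).get("artifacts", {})
    return [p for p in wanted if p not in arts]

# Flag platforms we care about that have no image in this stream.
print(missing_platforms(SAMPLE_STREAM, "x86_64", ["aws", "azure", "vmware"]))
# -> ['azure']
```

Pointing the same function at the live stream document (fetched over HTTP) would turn this into a small recurring availability check for whatever platform you volunteer to test.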
Self-created OKD 4 cluster, or ROSA on AWS: which would you prefer? It really depends on what you're trying to accomplish. ROSA is subscription-based and is managed by Red Hat and Amazon on your behalf. So if what you're mainly interested in is getting access to an OpenShift cluster, but you really want your resources focused on application delivery, then ROSA is the way to go: all of the site reliability engineering is taken care of for you. If your budget is really tight and you've got some really rockstar engineers and you want to roll your own, then OKD would be an approach you could take and support on your own, without the subscriptions. Let me add some color here, because I have had this thought process for my own personal use, like, I want to futz around with Kubernetes and not break the bank or break my time doing it. The way I tend to make this decision tree go is: if I'm learning about the Kubernetes platform, if I want to learn more about the underlying technology, if I want to get my hands dirty doing all the things, working with operators, working at the administrative level, developing those things, plugging them in, doing some tuning, that kind of stuff, wanting to really dig deep into the guts, then OKD on AWS is a fantastic way to do it, because you get access to all the things. You get the opportunity to really dig into the meat and potatoes of what makes a stellar Kubernetes platform for container development, orchestration, software delivery, and application delivery. And I'm not saying you can't run OKD and do your production workloads on that. I'm not saying that. That is totally fine, totally workable, and totally a thing.
However, if you are not necessarily up to the task of doing all of that, and that includes making the decisions about how you set up the cluster, the sizing of the cluster, what the machine types are going to be, how the storage is configured, how the authentication is put together, how you interoperate with your services infrastructure, how you plug it into S3, how you set up IAM roles, VPCs, and so on, I can throw in more jargon if you like, but the idea is: if you don't want to do all those things and figure them out yourself, then ROSA takes all of that away. It makes it magic. And the nice thing is the pricing: Charu mentioned the subscription model, and ROSA is pay-per-use, which means you can budget it based on how much you're going to use it. That's where I would put the decision tree. All right, Jamie, now you can say something. No, no, no, first off, let's back up for 30 seconds to say that there are multiple options for running OpenShift, or OKD, in the cloud. There's OpenShift on AWS, which is ROSA, in which your control plane is managed, basically, and you use a tool called rosa to create worker nodes and interact with the control plane. There's also a Red Hat-managed OpenShift service offering on AWS. And then there's running OKD in AWS doing an IPI or UPI install. Actually, my day job involves doing OKD in AWS, and we are going that route currently because of the ability, as Neil said, to have control over the control plane, to be able to get into the tofu and potatoes of this process, and to be able to control those nuanced things and integrate different components. And you don't lose anything by doing OKD in production or in development in AWS, but you gain knowledge, I think, of the underlying system and the ability to tweak.
And so, yeah, if you want to pay for the service of ROSA, you can. But I think you miss out on some of the nuance and some of the controls, basically. And it should be pointed out that ROSA just became an official service in the past couple of months. There are also some compliance gaps: ROSA doesn't have FedRAMP authorization yet, it's not on the FedRAMP list and things like that, and that's not coming until next year. So if you're doing any type of work with federal guidelines, you have to be mindful of those things as well. Yeah, and there's a hybrid opportunity in between those two as well, which is subscription-based OCP, the exact same code base as OKD. It runs on Red Hat CoreOS, which is slightly downstream of Fedora CoreOS. The main benefit, if you call it that, of the OCP subscription versus OKD is that you are Red Hat supported: you have access to the Red Hat engineered and vetted releases, as opposed to the community support of OKD. But like with any of the major open-source applications out there, it comes down to how confident you are being self-supported and community-supported, versus having a lifeline on the other side. And to add a little bit to what Charu said, another advantage of doing the OKD style for an AWS deployment is that you can actually make different choices. For example, let's say, I'm going to go a little deep here, you want to use Cilium for your networking. Cilium is not included in any Red Hat OpenShift product, last I checked. But Cilium is interesting: it's built on eBPF, it's super fancy with the filtering and networking stuff, actually fairly cool, and fingers crossed, someday it will actually show up in OCP. But I want to use Cilium now. In OKD, you have the opportunity to use Cilium instead of whatever is in there by default, I think it's OpenShift SDN or Open vSwitch or OVN, I forget which. But Cilium is interesting.
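As a sketch of what that swap looks like: the cluster network plugin is selected by the `networkType` field in `install-config.yaml`, and installing an alternative CNI like Cilium involves changing that value and supplying the plugin's own manifests. The field names below follow the openshift-install format; the CIDR values are illustrative placeholders:

```yaml
networking:
  networkType: Cilium      # default is OVNKubernetes (OpenShiftSDN on older releases)
  clusterNetwork:
    - cidr: 10.128.0.0/14  # illustrative pod network CIDR
      hostPrefix: 23
  serviceNetwork:
    - 172.30.0.0/16        # illustrative service network CIDR
```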
I want to use Cilium, but I like everything else about OKD, and with OKD plus Cilium I can do that. So that's where I feel OKD tends to be a much more interesting value proposition: you have the opportunity to be a lot more flexible with how your deployment is actually going to look. Whereas with OCP you lose a little bit of flexibility, normally, because you're operating on Red Hat OpenShift, and with ROSA even less flexibility, because they're managing it. But there you go. Excellent. So we're coming down to the last minute, and I just want to encourage folks to go to okd.io. And again, we have a Google group where you can ask these questions. There was a question about auditing; I'm not sure if the person meant security auditing or resource auditing or both. If you join us in the OKD Google group, we can answer your question there in detail. We're happy to get into detail. So again, okd.io links to all of the things that we've talked about. And again, we're looking for people to join us, to get involved, and to grow as a community together. So thank you so much for joining us today. Take care, all. Thanks, everybody. Thanks for being a great bunch of community members. This has really been wonderful, to have you all with us today. So thanks for taking the time. As ever, thank you for the CRC demo. Great stuff.