Welcome, everybody, to the State of OKD 4: OpenShift Kubernetes on Fedora CoreOS, the CodeReady edition. I'm Christian Glombik, a software engineer and co-chair of the OKD Working Group at Red Hat. And I'm Charro Gruver, an active member of the OKD Working Group. I'm also with Red Hat, serving as an architect here in the US. Let's have a look at the agenda for today. We'll give an overview of OKD 4, then we'll talk about the state of OKD 4, and after that we'll have a demo of CodeReady Containers, the developer image demo. So with that, let's get started with the OKD 4 overview. What is OKD 4? OKD is the community distribution of Kubernetes, specifically the community distribution of OpenShift. So it's the OpenShift code base plus Fedora CoreOS. The website is okd.io, where you'll find all the information as well. Let's go a little bit into detail here. OKD 4 is a community distribution of Kubernetes plus plus, and this plus plus is really the important thing here. OpenShift and OKD 4 bring a lot of goodies with them: automated installation, patching, and updates from the OS up. And that last part is very important. The operating system has become an implementation detail of the entire cluster. We update the machines, the underlying operating system, through the cluster and bind their life cycles together. So when you update the cluster, you also update the operating system. This is really a standout feature, and it makes things so much easier to reason about. Let's have a look at the graphic here. We have the underlying platform, obviously, which can be anything: cloud, bare metal. Then we have Linux hosts, and we use Fedora CoreOS as our base operating system in OKD. Fedora CoreOS uses technologies that really enable this use case, like Ignition for first-boot configuration. So we don't use cloud-init or anything.
We have Ignition, which is a declarative configuration that, on first boot, configures the machine specifically for the environment you're running it in. And you can configure pretty much anything with it: you can write files, you can format partitions, you can write and enable systemd services. A lot can be done with Ignition. Fedora CoreOS is also based on rpm-ostree. OSTree is a technology you may know from things like Flatpak. It's an image-based system, which means rpm-ostree takes Fedora RPMs and composes them into an image, and you get the Fedora CoreOS image, which is essentially immutable, with enhanced security as a result. rpm-ostree has been described as Git for operating systems: you get a commit hash that tells you exactly what is inside an image, and you can go from one commit to the next and update securely that way. Testing and things like that become much easier on our side, because we can always say: in this commit of the operating system, we have those packages and those files. If anything changes, there's a very clear diff between two commits. Going up from that, we have Kubernetes, obviously. Our flavor, OpenShift, is an enhanced Kubernetes. We have security pipelines and a lot of things that make maintenance of the system, and also development on the system, very easy. And on top of that sit the applications, which are what the user actually wants: to run their workloads. Both the Kubernetes system, the OpenShift system, and the workloads are highly operatorized. There's this operator pattern, which we use heavily to essentially drive the cluster on autopilot. The operator does what an administrator would usually do and makes sure that if something is not the way it's intended to be, it is reconciled to the state it actually should be in.
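To make the first-boot idea concrete, here is a minimal sketch of an Ignition v3 config, not taken from the talk: the hostname value and the hello.service unit are invented for illustration. It writes one file and enables one systemd unit, the two capabilities just mentioned:

```json
{
  "ignition": { "version": "3.2.0" },
  "storage": {
    "files": [
      {
        "path": "/etc/hostname",
        "mode": 420,
        "overwrite": true,
        "contents": { "source": "data:,worker-0" }
      }
    ]
  },
  "systemd": {
    "units": [
      {
        "name": "hello.service",
        "enabled": true,
        "contents": "[Unit]\nDescription=Example one-shot service\n\n[Service]\nType=oneshot\nExecStart=/usr/bin/echo hello\n\n[Install]\nWantedBy=multi-user.target\n"
      }
    ]
  }
}
```

Unlike cloud-init scripts, this is declarative and is applied exactly once, on the machine's first boot.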
We have core operators within OpenShift, and then on the application layer there are operators from OperatorHub that you can install for non-essential features that you want to add on top. For example, running hyper-converged workloads with VMs: there's an operator for that. Running Windows worker nodes: there's an operator for that. And with OperatorHub, there are catalogs from which you can click and install an operator very easily, and that will enable more kinds of workloads on your cluster. All right, let's go to the next slide. Here we'll just have a quick look at the platforms we support; it's essentially all the platforms. We run on bare metal, on virtualized oVirt, on OpenStack, AWS, Azure, GCP. There's vSphere, there's a whole lot of platforms we support, and we're adding to that list all the time. Next is the state of OKD, today and tomorrow. Currently, we're at the stable release 4.5. That was a big milestone for us, getting everything out, ready, and stable, and that's great. We still have a lot to do with regard to getting more operators ready to be installed on top of OKD. The main mission of the OKD Working Group is to facilitate community contributions. That is really what we want to do, and we want to see more of it: we want to enable the community to work with us and to contribute here. Specifically, the collaboration with the OperatorHub and Fedora communities is already very close, and we're constantly working with them. What we want to do is get more bespoke operators released for OKD. We're working on guidelines for that, for example, to enable the community to release their own operators for OKD as well. Another part of the mission of OKD is to enable early adoption of upcoming technologies. So in the future, we will provide guidelines and ways to use upcoming technologies like cgroups v2.
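As an aside on how the "click and install" flow works under the hood: installing an operator from a catalog typically comes down to creating a Subscription resource, which the OperatorHub page in the console generates for you. A sketch, with the channel and catalog names to be checked against the catalog entry for your cluster version:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: openshift-pipelines-operator
  namespace: openshift-operators
spec:
  # Update channel to follow; varies per operator and release.
  channel: stable
  name: openshift-pipelines-operator
  # Community catalog shipped with OKD.
  source: community-operators
  sourceNamespace: openshift-marketplace
```

Operator Lifecycle Manager then resolves the subscription, installs the operator, and keeps it updated along the chosen channel.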
Oh, and the other thing: we're going to enable Cilium as the networking stack, which I'm looking forward to seeing. This is not going to be the default, because we're stable with OKD, but if you really want to go that way, then OKD is also the platform on which you can test out those new technologies. So coming soon, we have release 4.6, which will enable a lot of these things as well. And next up is Charro. Good morning. I'm going to talk to you about something that we're very excited about. We have been collaborating with the Red Hat CodeReady team to create a version of CodeReady Containers that is built off of OKD 4.5 and Fedora CoreOS. And we're very pleased to say that after a lot of work, we finally have a release ready for you to try out. This is built from the same code base as the CodeReady Containers that you can download from Red Hat. But unlike the CodeReady Containers built for OCP, the product that shares its code base with OKD, this is a community-driven version that sits on top of Fedora CoreOS as its operating system. It gives you all the goodness that we provide with a full OKD 4 cluster, but allows you to run it on your laptop or workstation. So all you have to do is add a pipeline to your code, and you can run your cloud-native apps locally on your own device. I'm going to demo that for you now and show you a few things about how to get started. So allow me to click over to screen share. Okay, you should now be seeing my screen: an empty terminal, and a browser displaying a running instance of CodeReady Containers for OKD on my poor little MacBook Pro. This is a 13-inch i5 MacBook Pro, so it's got two physical cores, four vCPUs, and 16 gigabytes of RAM, which unfortunately really is about the minimum of what you need to have a useful experience with CodeReady Containers on your local workstation.
Unlike Minishift, its precursor, or Minikube, this really is a full instance of OpenShift, the OKD version, running in a single node on your workstation. To get this, go to our home page at okd.io. That's where everything you need to know about OKD is located. Scroll down a little way and you'll see a section for CodeReady Containers for OKD. Clicking on that link will bring you to three download links; we've provided binaries for macOS, for Linux, and for Windows. One thing you do need to take note of is this pull secret right here. The supported version of CodeReady Containers requires a pull secret that you get by registering with Red Hat. For the community edition, we wanted to remove that step, so we're providing what is effectively a fake pull secret that will work for this OKD distribution of CodeReady Containers. The last thing to note is this link right here to the Getting Started Guide. It will take you to the CodeReady Containers documentation, which will tell you a whole lot more about how to configure it, how to run it, and things that you can do with it. Once you download the binary, I will note that it is fairly large, so don't be alarmed. It contains a full qcow2 image of Fedora CoreOS, a compressed disk image that, when uncompressed, will be about 9 gigabytes in size. Like I said, this is a full-blown OpenShift distribution that you're going to be running. You'll execute crc setup to get it started. You'll have the opportunity to do some configuration, which you can read more about in the documentation. Here you can see I set my memory to 12 gigabytes for this image, and I set the number of virtual CPUs to 4; like I said, this is a Core i5 with hyperthreading, which gives me 4 virtual CPUs. When I start it, it's going to prompt me for that pull secret; this is where you paste the pull secret that you copied. After a few minutes, you'll have a running instance of CodeReady Containers.
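Summarized as a terminal session, the setup just described looks roughly like this; the memory and CPU values mirror the demo machine, and the exact crc config keys are documented in the Getting Started Guide:

```shell
# Download the crc binary for your OS from okd.io, put it on your PATH, then:
crc setup                    # prepares the host (virtualization, networking)
crc config set memory 12288  # in MiB; 12 GB, as used in the demo
crc config set cpus 4        # virtual CPUs to give the single-node cluster
crc start                    # prompts for the pull secret copied from okd.io
```

On a 16 GB machine this leaves little headroom for other applications, which is why the demo later leans on the command line rather than the web console.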
Take note of a couple of things. There are instructions here for logging into your new instance from the command line, or you can execute crc console, which will launch your browser and give you a view like I had right here. I'm already logged into my cluster. I will say that your experience on a machine like mine is going to be better if you rely more on the command line and less on the console; the console will be a little slow to refresh from time to time. If you're blessed with a larger machine with 32 gigabytes of RAM, you'll have no problem whatsoever, but for those of us still living in the 16-gigabyte world, this is what we deal with. I'm going to run through just a few things to show you some activities you can do. What I'm going to do now is set up an htpasswd authenticator, because as you can see, I'm logged in as the temporary administrative user. If I want to create another account for myself, I can use the htpasswd command to create an htpasswd file, which I just created in /tmp. I'm using the really good password of "secret password" for my new user, admin. I'm going to create a secret out of that file, put it in the openshift-config namespace, and then apply a config resource right here for an OAuth that's going to create a new OAuth provider using the htpasswd file I just created. Let's go ahead and drop that into our cluster. Now I have that new provider, and if I add a policy to that user, I now have a new cluster administrative user named admin. This is important.
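Spelled out, those steps look roughly like the following. First, htpasswd -c -B -b /tmp/htpasswd admin <password> creates the password file, and oc create secret generic htpass-secret --from-file=htpasswd=/tmp/htpasswd -n openshift-config stores it in the cluster (the secret name htpass-secret and the provider name local are my own placeholders, not from the demo). The OAuth resource then points the cluster at that secret:

```yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: local
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
```

The "add a policy" step from the demo is then oc adm policy add-cluster-role-to-user cluster-admin admin, which makes the new user a cluster administrator.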
You can use the temporary account that was created for most things, but if you want to expose the internal registry that comes as part of this, so that you can push custom images, say you're using Podman or Docker to create your own images, you'll need to log in as a regular user, because the temporary account won't work for externally accessing that registry. That's mainly why I do this. I'm not going to use it for anything right now; I just wanted to show you a few activities. What I am going to do now is deploy the Tekton operator so that I have access to OpenShift Pipelines. I'm going to create the custom resource definition first; you'll see that our custom resource definition for Tekton was just created. I'm going to apply some custom roles and a custom role binding so that in a minute we'll see some interesting activity. I'm going to create a service account in openshift-operators, and then finally I'm going to deploy the OpenShift Pipelines operator. If you saw right here, a pod just started firing up. We'll give this a minute to do its thing, then we'll apply the custom resource which is going to give us access to OpenShift Pipelines. Like I said, the console will be a little slow from time to time as you're working with this. If you have a machine that is constrained on resources like mine, especially when you're running recording software and other things, there will be times when it takes a little while to do its thing. If we scroll through here to our openshift-operators namespace, we should see our Pipelines operator coming up. There's an openshift-pipelines namespace right here that was created for us. Actually, because this is going to be a little sluggish, I'm just going to pull up the workloads here, none are found, and go ahead and deploy this custom resource, which will create our instance of OpenShift Pipelines.
You'll see now we've got some activity happening: our Pipelines controller and our webhook are spinning up. You'll notice also that we have a brand-new option over here on the left for Pipelines. When this operator finishes deploying, we will have the ability to create and deploy Tekton pipelines that are ready to receive our code. While that's happening, I'm going to do just one other thing, because there's another operator that I'm a huge fan of: the namespace configuration operator. That's maintained by the Red Hat Communities of Practice team. It allows you to create resources that get synchronized across namespaces. I use it for my pipelines so that I don't have to maintain things I want to be common across multiple developer groups in each individual namespace. Instead, I create them with the namespace configuration, a new custom resource that gets created for us, and I use those to configure the common resources. So when a new project needs to be created for a team that's going to be doing some development, all I have to do is label their namespace with the appropriate label that I've given in the namespace configuration, and all of those resources are not only created, they're also maintained: if I update something in one place, it gets synchronized across all of those projects. I'm going to drop a couple of templates in there to create that for us. When we're done here, you can see now over here that our Pipelines operator is fully deployed and running, and I just created an instance of my namespace configuration. Now, if I create a new project, my namespace, let me go ahead and switch to my namespace: no workloads in my namespace yet. I'll also show you there's nothing up my sleeve here: there are no pipelines, no pipeline runs, no tasks. I'm going to label this namespace with the Tekton pipeline label, and we'll wait a few minutes for this to refresh itself.
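As a sketch of what such a namespace configuration can look like: everything below (the label, the ConfigMap, the registry value) is invented for illustration, and the field names have changed between versions of the namespace-configuration-operator, so check its README for the exact schema of the version you install:

```yaml
apiVersion: redhatcop.redhat.io/v1alpha1
kind: NamespaceConfig
metadata:
  name: tekton-pipelines
spec:
  # Namespaces carrying this label get the templated resources below.
  labelSelector:
    matchLabels:
      pipeline: tekton
  templates:
  - objectTemplate: |
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: shared-pipeline-settings
        namespace: {{ .Name }}
      data:
        registry: image-registry.openshift-image-registry.svc:5000
```

With that in place, labeling a project (for example, oc label namespace my-project pipeline=tekton) is all it takes for the operator to create, and keep synchronized, the shared resources in that namespace.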
When this finishes running, we should see that we have some pipelines created. We may be waiting a moment for the namespace configuration operator to do its thing; let me switch over there real quick and show you. We're still waiting for the namespace operator to finish deploying. When it does, it will pick up the namespace configuration that we created, identify any namespaces that are appropriately labeled, and synchronize the objects we create from a namespace configuration across any of the projects that have those labels. As soon as that's deployed, your applications are ready to run and deploy. All you have to do, from your developer perspective, is go to your project, select the new template that we create in the catalog, give it your GitHub account and the branch you want to build from, and click go. Before I finish up here, I'm going to show you a couple of projects where you can get this information. If I spell my name correctly: it's github.com/cgruver, the tekton-pipeline-okd4 project. This is where I pulled what I'm demonstrating for you here. The documentation is still a work in progress, but at this point it is usable. If you want to deploy these pipelines into your CodeReady Containers space, go to this link right here, github.com/cgruver/tekton-pipeline-okd4, and you will be able to replicate what I'm showing you. Hopefully you have more RAM on your machine than my poor little MacBook Pro. At this point, I'm going to stop the screen share, and we'll continue with our presentation. OK, so that was our demonstration of CodeReady Containers running on your local workstation. Now we're going to switch topics real quick and talk to you about the working groups that we're all part of. We'd love to invite you to join us in the OKD Working Group. On your screen now, you should see several links.
This is a very active community of folks from across the globe; this is not just a Red Hat thing. We've got a very broad and diverse group of people, all with a common interest in the OKD distribution of Kubernetes, working together to make this project much better and much broader. We're experimenting with all kinds of new things, and if this is something you're interested in, we'd love for you to join us. And our partners in this venture are those working on the underlying operating system, Fedora CoreOS. Right, and we also have a Fedora CoreOS working group. We have regular meetings on IRC, there's an issue tracker on GitHub, we have a dedicated discussion forum on the Fedora Project discussion board, there's a mailing list, and you can find the weekly meetings on the Fedora calendar, just like OKD's. This is where we discuss the underlying operating system. Fedora CoreOS is geared towards containerized workloads, so it's very well suited for the OKD use case, but there are other use cases too. Most discussions nowadays revolve around the package set that is to be included in the compose. So if you have any wishes there, please do join the working group, or for anything else really; it's not just about packages, but about the operating system and how we build it. If you want to learn about that or have an interest, please do join. And with that, we'll leave you some links for resources. okd.io is the main site; you'll find essentially everything there. We have the docs site, docs.okd.io. We have two repositories on GitHub in the OpenShift organization. The OKD repository is our technical issue tracker, so if you run into any problems, please open an issue there. We will essentially triage those issues and send them off to the respective repositories where we think the issue really lies, or maybe it's really just an OKD issue and stays there.
And then we have the community repository, which we use to plan our OKD Working Group meetings and to collect essentially anything that revolves around the working group. And thirdly, there's the code-ready organization on GitHub, where you'll find the sources for CodeReady Containers, which Charro just showed us. So check that out too. And with that, I'd like to thank everybody for listening in. It's been a pleasure to talk to you.