Okay, so a lot of what you're going to see here I pulled directly out of the Rook project, which you should be seeing on your screens right now. The configuration we're going to deploy, the common YAML file and the operator OpenShift YAML file, those are pulled verbatim from this project; you can see the path here to get to them. And it's release 1.4, the latest release. I'm not quite brave enough to run directly out of master at this point.

So, back to the cluster that we deployed earlier this morning. You can see it is fortunately still healthy. There are a few errors being thrown, but it tends to do that in my home lab, network latency and such. Now, what I didn't tell you about the three worker nodes when I deployed them was that I actually deployed each one with an unused hard drive attached to the virtual machine, on a SATA bus. So the operating system is installed on sda, but there's an sdb sitting there that is not currently being used. What we're going to do is create a Ceph storage cluster to serve up block devices from those disks on the worker nodes.

The first step is to label those nodes to give them a role of storage node, and I just applied that label to them. If I run a quick oc describe on one of those nodes, I can show you that it now has a role of storage node. So that's step one: we need something to tell Ceph what it's going to be working with. Step two is to deploy this common.yaml, which, as you can see, creates a whole lot of boilerplate that the Rook operator is going to need. One of the things it did was provision a namespace for us, the rook-ceph namespace, which is currently very uninteresting. We're about to make it interesting by deploying the Rook operator.
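From the command line, the labeling and boilerplate steps above look roughly like this. The node names and the exact label key/value are assumptions for illustration; the two YAML files are the ones pulled verbatim from the Rook project's release-1.4 tree, as described above.

```shell
# Label the three workers so Ceph knows which nodes to use
# (hypothetical node names; substitute your own).
oc label node worker-0 worker-1 worker-2 role=storage-node

# Confirm the label landed.
oc describe node worker-0 | grep -i role

# Boilerplate (namespace, CRDs, RBAC) and the Rook operator itself,
# taken from the Rook project's release-1.4 branch.
oc apply -f common.yaml
oc apply -f operator-openshift.yaml

# Watch the operator and the per-node discover pods come up.
oc get pods -n rook-ceph -w
```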
And this will take a little bit to bootstrap itself; the operator image is pulling down and installing right now. When the operator is up, it's going to create some pods on each of the nodes that bear the label, so that it can discover the resources available on each node. And there you see the three discover pods spinning up now.

While those are coming up, let me make this a little bigger so you can see it on the screen. The cluster.yaml file is the thing that actually defines our particular Ceph cluster, and again, the Rook project has a boilerplate copy of this for you to take and modify for your own purposes. This is the version of Ceph we're going to be running, 15.2.4. We're going to have three monitor nodes running; I have set node affinity on those, so they're also looking for a role of storage node. I've assigned resources for the various components of Ceph, a limit and a request, just like you'd see in a typical deployment. And here is the piece of magic that tells it where to find the devices it's going to build the Ceph storage cluster on.

Now our operator appears to be fully bootstrapped and up and running, so the next step is to deploy our Ceph cluster on top of it. This is also going to take a little while, and you're going to see a bunch of activity here as the operator provisions the cluster. There are the three monitor instances you just saw spin up; there are the CSI plug-ins. It will start actually dealing with those physical devices and formatting the storage for its own use. Okay, you see these OSD prepare jobs? There are three of them, and when they're done, they'll go into a Completed state. Once you see that Completed state, the Ceph cluster is up and ready for use.
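A trimmed sketch of what that cluster.yaml roughly contains, given the settings described above. The full boilerplate in the Rook repo carries many more options (including the per-component resource limits and requests mentioned); the label key and deviceFilter here match this lab's setup and are assumptions for any other environment.

```shell
oc apply -f - <<EOF
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v15.2.4   # the Ceph release named in the talk
  mon:
    count: 3                   # three monitor instances
  dataDirHostPath: /var/lib/rook
  placement:
    all:
      nodeAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
          nodeSelectorTerms:
          - matchExpressions:
            - key: role        # assumed label key from the node-labeling step
              operator: In
              values: ["storage-node"]
  storage:
    useAllNodes: true
    useAllDevices: false
    deviceFilter: sdb          # the unused disk on each worker
EOF

# The three osd-prepare jobs go Completed when the cluster is ready.
oc get pods -n rook-ceph -w
```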
It looks like we're still waiting for one of the crash collectors to go into a ready state, but everything else at this point should be usable. To prove that it's usable, let's take our image registry and give it a persistent volume. I've created a storage class here that I'm going to apply. Its name is rook-ceph-block, and it uses the new Rook Ceph CSI plug-in that we just deployed as the provisioner. I'll apply that to my cluster. Now if I flip back over here and go down to Storage, we should have a storage class, and indeed we do.

Next, let's create a good-sized persistent volume claim, because I tend to use a lot of container images. We should have a persistent volume claim, and the key here is that you can see it is now bound to an automatically provisioned persistent volume that our Ceph cluster kindly handed out for us. We can see that from the command line as well; there it is.

Now remember, if you were watching earlier when we deployed the OpenShift cluster, we gave our image registry an ephemeral volume. We need to remove that ephemeral volume before we give it the new one. So, caveat here: any images you had pushed in between, you're going to lose, because we're yanking away the storage. You would have lost them anyway, because it's an ephemeral volume. I'm going to put our registry back into a Managed state and tell it to use the persistent volume claim, registry-pvc. We're also changing the rollout strategy to Recreate. Because I created a ReadWriteOnce volume, the default rollout strategy isn't going to work, because it would try to do a rolling deployment. I need it to tear down the first instance and then create a new one, so that it doesn't violate the ReadWriteOnce policy. So we just patched it. If we log back into our cluster and look at the image registry, here we go: we have an OpenShift image registry that's creating and should be binding to that persistent volume claim.
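The registry-storage steps above can be sketched like this. The PVC size is an assumption (the exact number wasn't clear in the recording), and a real Rook RBD storage class also carries CSI secret parameters that the Rook examples provide; this is a minimal sketch, not the full manifest.

```shell
# StorageClass backed by the Rook Ceph RBD CSI provisioner,
# plus a claim for the registry (100Gi is an assumed size).
oc apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: registry-pvc
  namespace: openshift-image-registry
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: rook-ceph-block
  resources:
    requests:
      storage: 100Gi
EOF

# Put the registry back under management, point it at the PVC, and
# switch to a Recreate rollout so ReadWriteOnce is not violated.
oc patch configs.imageregistry.operator.openshift.io cluster \
  --type merge --patch \
  '{"spec":{"managementState":"Managed","rolloutStrategy":"Recreate","storage":{"pvc":{"claim":"registry-pvc"}}}}'
```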
So I'll pause there. Any questions on that? I know that was pretty fast. I think you're doing pretty good here. All right. I will now deploy a MariaDB Galera cluster as a StatefulSet. We're going to deploy this StatefulSet right here, which will create a three-node MariaDB Galera cluster from a customized image that we're going to build and push into the image registry we just created.

The first thing we need to do is build our image, which is based on the official MariaDB repository and installs 10.4. Our Dockerfile looks something like this; it does a few things to set up MariaDB. The real magic happens in this shell script, which is run by the image when it starts up. It actually provisions the MariaDB cluster, detects whether or not a cluster already exists, whether it's the first node in the cluster, and so forth. I've got a short tutorial written up on this that you can read, so I won't go through it all here; we'll just kick it off.

First I need to make sure I'm logged in and I'm in the right cluster. Important safety tip: always make sure you're logged into the right cluster. I need to expose a route for the image registry. What I just did is patch the image registry operator to create a default route so that I can reach my image registry externally. Then I'm going to use Podman to log into that image registry. It succeeded. Now I'll do a Podman build to build our MariaDB image. You can see I'm grabbing the route from the image registry to tag my image, and getting ready to build so that I can push it to the registry. It generally doesn't run that fast; I ran this just a little bit ago to make sure it was going to work, and that's why the build went so quickly. It had already been built, so all it did was add a tag to the existing image. Now we'll push it to the registry. Okay.
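The route-and-push sequence above looks roughly like this. The image name and namespace are hypothetical, and skipping TLS verification is a lab shortcut for the registry's self-signed certificate.

```shell
# Expose a default external route for the internal image registry.
oc patch configs.imageregistry.operator.openshift.io/cluster \
  --type merge --patch '{"spec":{"defaultRoute":true}}'

# Grab the registry hostname from the route.
REGISTRY=$(oc get route default-route -n openshift-image-registry \
  -o jsonpath='{.spec.host}')

# Log in with the current session's token.
podman login -u "$(oc whoami)" -p "$(oc whoami -t)" \
  --tls-verify=false "$REGISTRY"

# Build and push the customized MariaDB Galera image
# (hypothetical namespace/image name).
podman build -t "$REGISTRY/mariadb-galera/mariadb-galera:10.4" .
podman push --tls-verify=false "$REGISTRY/mariadb-galera/mariadb-galera:10.4"
```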
So now our OpenShift cluster's local registry has our customized MariaDB Galera image. Let's create a namespace for it, and a service account. We're creating a service account for MariaDB because MariaDB is picky about its UID, and it's especially picky if it restarts and its UID has changed; it tends to get upset. So we're creating a service account, and we're actually going to run this privileged with the new service account so that it can run as any UID. There is likely a better way to do this, and I am open to suggestions.

Now I'm going to apply a config map that contains the MariaDB server.cnf file. By using a config map for this, I can modify the configuration of my cluster without having to deploy new images. Next I'll apply a couple of services: one is a headless service that allows the cluster to talk to itself on the necessary TCP and UDP ports, and then a load-balanced service that allows applications to talk to the cluster. I'm not sure why that's taking so long to come back. Okay.

Now, before I hit this, I'm going to switch over here so that you can see the deployment. I'm going to deploy the StatefulSet, and what you'll see in the console is an ordered deployment of the MariaDB StatefulSet. Okay. Good. PVC bound; now we have a persistent volume. And you see the first node in our three-node Galera cluster is now starting.

Let's pause for half a second. Frank's asking a question: he missed the very first part of the Rook operator installation, and he says he has no Rook operators with his IPI-based cluster installation. Is OperatorHub filtering operators based on the UPI/IPI installation choice? Maybe clarify that for him. Okay, good point. I actually deployed the operator from the command line because it doesn't show up in OperatorHub. See? Not there. Not sure why.
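A sketch of the supporting objects described above; all names, ports, and sizes here are assumptions. Granting anyuid is the "run as any UID" shortcut mentioned, and as noted there may well be a tighter-scoped alternative. The config map, the load-balanced service, and the Galera bootstrap script are omitted for brevity.

```shell
# Namespace and a service account that may run as any UID.
oc new-project mariadb-galera
oc create serviceaccount mariadb
oc adm policy add-scc-to-user anyuid -z mariadb

oc apply -f - <<EOF
# Headless service for intra-cluster Galera traffic.
apiVersion: v1
kind: Service
metadata:
  name: mariadb-galera
spec:
  clusterIP: None
  selector:
    app: mariadb-galera
  ports:
  - {name: mysql, port: 3306}
  - {name: galera, port: 4567}
  - {name: ist, port: 4568}
  - {name: sst, port: 4444}
---
# Skeleton of the three-node StatefulSet.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mariadb-galera
spec:
  serviceName: mariadb-galera
  replicas: 3
  selector:
    matchLabels:
      app: mariadb-galera
  template:
    metadata:
      labels:
        app: mariadb-galera
    spec:
      serviceAccountName: mariadb
      containers:
      - name: mariadb
        image: image-registry.openshift-image-registry.svc:5000/mariadb-galera/mariadb-galera:10.4
        ports:
        - containerPort: 3306
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: rook-ceph-block
      resources:
        requests:
          storage: 10Gi
EOF
```

The volumeClaimTemplates section is what drives the "PVC bound" messages: each ordinal pod gets its own Ceph-backed volume, which is why the cluster retains state across the ordered stop/start.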
It may be one of the ones that we need to add as a working group. Yeah, there's a community version of it on OperatorHub.io that works with generic Kubernetes, but I think we need to do some work; it's one of the ones on the priority list for us as a working group. Yeah, and that's why I deployed it using the operator configuration provided in the Kubernetes examples in the Rook project itself.

All right, our second cluster node is coming up. These do an ordered startup and an ordered shutdown, so you can gracefully stop and start this cluster and it will retain its state. When this is done, we'll have a three-node MariaDB Galera cluster, a full multi-master database cluster, running in our OpenShift with provisioned storage.

Wow. Well played. Thanks, Charles, that was pretty awesome. As I keep emphasizing in the chat, some of the operator work is among the next things on the roadmap that we're trying to get folks to work on, getting some of those default operators from OperatorHub into the community, so we'll be working on that. You filled the time very nicely. Justin Pittman's here, and he's going to try to outdo you on bare metal. Yeah.