Welcome to another session on Rook. Today I'm going to talk about what it takes to install in a cloud provider and show a little demo. The environment I have today is an OpenShift cluster running in AWS. I have three worker nodes, and on those three worker nodes I need to install Ceph with Rook, and then install an application. The Ceph mons and the Ceph OSDs are the two Ceph daemons that need to be backed by storage: the mons hold the metadata, the brains of the Ceph system, and the OSDs are where the data is actually stored. So each of those needs to be backed by an EBS volume. Once that's set up, the application runs on top of the Ceph storage protocol and can connect to the storage without any concern for which daemons it's connecting to. Whether a daemon is in the same AZ or a different AZ, it's all the same to the application; it's just getting its storage. This means that any individual AZ can fail completely and the application will keep running. If the AZ where the application itself is running fails, then there needs to be failover at the application level, which is a topic for another day.

But let's get going with the demo. Again, what I have here is a six-node cluster with three worker nodes, and we'll just focus on the worker nodes. I'll go ahead now and create the Rook cluster: we tell it to create the CRDs, create the RBAC resources, and create the operator for OpenShift (rough command sketches for these steps follow below). Now that the cluster has been initialized with the operator, let's just confirm that the operator pod is running.

Next we create a cluster CR, as we call it. This is saved in the cluster-on-pvc example. There's only one place I've modified in it for the demo, but let me get this created and then I'll show you where that is. What this is going to do is create all the mons, the OSDs, and everything else on top of the EBS storage in this AWS cluster.

Just real quick, what I want to show you in the example file cluster-on-pvc.yaml: the volumeClaimTemplates section is the place you'll need to look at to customize this for your own environment. The storageClassName tells Rook what storage class to use to provision the PVs. In this case, with EBS, we're using the gp2 storage class, and I've told it to create 100 gig volumes for each of the OSDs.

Let's go back and see what the status of the cluster is. It looks like the mons are being created, and this is just going to take a few minutes to get everything initialized, so now we'll jump ahead until the cluster is done initializing. Now we see that the cluster is created: we've got all the mons and the OSD pods, and you'll see three mon pods here and three OSDs. If I look at the PVCs, just to show you exactly which PVCs were created, there are three PVCs, one for each of the mons; those are 10 gigs each, just to store that metadata. And then these are the PVCs for the OSDs, which were the 100 gigs each.

Now I'm going to create the toolbox so you can see a little more about what's happening inside the Ceph cluster. Creating the toolbox gives us a pod where we can go query Ceph for its status. So I'm going to connect into that pod and ask for the Ceph status: health is OK, and we have three OSDs and three mons. An interesting thing here in AWS is that we can see the topology and how the OSDs are arranged across the AZs.
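To make the operator setup concrete, here is a rough sketch of the commands behind that first step. It assumes the example manifests that ship with the Rook repository (the exact directory and file names vary between Rook releases), run from the directory that contains them:

```sh
# Create the Rook CRDs, RBAC resources, and the operator (OpenShift variant).
# File names follow Rook's example manifests; adjust paths for your release.
oc create -f crds.yaml
oc create -f common.yaml
oc create -f operator-openshift.yaml

# Confirm the operator pod is running before creating the cluster.
oc -n rook-ceph get pod -l app=rook-ceph-operator
```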
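And this is roughly the shape of the section edited in cluster-on-pvc.yaml. The field names follow the CephCluster storageClassDeviceSets schema, but the device-set name, count, portability flag, storage class, and sizes shown here are assumptions matching the demo environment rather than a copy of the actual file:

```sh
# Sketch of the PVC-backed settings for an EBS-based cluster: three mons on
# 10 GiB gp2 volumes and three OSDs on 100 GiB gp2 volumes.
cat <<'EOF' > cluster-on-pvc-excerpt.yaml
spec:
  mon:
    count: 3
    volumeClaimTemplate:
      spec:
        storageClassName: gp2
        resources:
          requests:
            storage: 10Gi
  storage:
    storageClassDeviceSets:
      - name: set1
        count: 3                  # one OSD per availability zone
        portable: true            # the OSD can follow its PVC to another node
        volumeClaimTemplates:
          - metadata:
              name: data
            spec:
              storageClassName: gp2
              resources:
                requests:
                  storage: 100Gi
              volumeMode: Block
              accessModes:
                - ReadWriteOnce
EOF

# Then create the cluster from the full example file after editing it.
oc create -f cluster-on-pvc.yaml
```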
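Watching the cluster come up and querying Ceph from the toolbox looks roughly like this; the toolbox deployment name below is the one used by Rook's example toolbox manifest:

```sh
# Watch the mon and OSD pods come up, and look at the PVCs Rook created.
oc -n rook-ceph get pod
oc -n rook-ceph get pvc

# Create the toolbox pod, then ask Ceph for its status and OSD topology.
oc create -f toolbox.yaml
oc -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph status
oc -n rook-ceph exec -it deploy/rook-ceph-tools -- ceph osd tree
```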
Looking at the output of ceph osd tree, we've got all the OSDs in the same region, divided across the different AZs: one OSD in each of us-east-2a, us-east-2b, and us-east-2c. Ceph is told that the OSD's host name is based on the PVC name rather than the node name, so if a node in the cluster is replaced, that OSD can move to another node in the same AZ. This means that rolling upgrades and node replacements are simple: the OSDs can just move to other nodes available in the same AZ, and the cluster stays healthy even while you're replacing nodes.

Now, to run an application on this cluster, we don't have time for anything elaborate, so we'll just create our simple RBD application (a rough command sketch of these steps follows at the end). First we create the storage class, which gives us a pool with three replicas. Then I'll switch over to the default namespace and create the sample PVC and pod. In just a minute we should see that the pod is running, and now the pod has started. It can write data across the cluster without even being aware of which AZ that data is going to.

That's the demo for today. Hope you have a good one. Bye.
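For reference, here is a hedged sketch of those application steps, assuming the RBD example manifests that ship with Rook (a CephBlockPool with three replicas plus a StorageClass in storageclass.yaml, and a small sample PVC and pod); the paths vary between Rook releases:

```sh
# Create the block pool (3 replicas) and its storage class.
oc create -f csi/rbd/storageclass.yaml

# Switch to the application namespace and create the sample PVC and pod.
oc project default
oc create -f csi/rbd/pvc.yaml
oc create -f csi/rbd/pod.yaml

# Wait for the pod to reach Running; its volume is backed by Ceph RBD.
oc get pod -w
```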