Hello! Welcome back to another session on Rook. In this session, I want to talk about the different environments you can run Rook in, whether in your own data center or in the cloud. It's the same Rook cluster either way. Let's get going.

The first important point about Rook is that Rook is both a consumer and a provider of storage. First of all, your applications need to consume the storage. Obviously, that's why you want Rook in the first place: so your applications have access to that storage. At the top of the diagram, we see that your applications have requested storage. You've got PVCs, these persistent volume claims, under your applications, and that storage comes up from Rook and Ceph underneath them. The applications don't care how; they're just happy to have storage available.

But under the covers, there's something very important about how Rook and Ceph consume that storage. Down at the bottom layer, we see where the storage is consumed by Rook, and it's done by these daemons called the Ceph OSDs. The OSDs are object storage daemons, and they're where individual devices are used to store the local data. Above that is where Ceph forms its software-defined storage layer. So how does this work? There are different types of storage you can back these OSDs with, depending on what environment you're running in. Once Rook is configured at this bottom layer, no matter what environment that is, the storage looks the same to your applications at the top.

So what does it look like on bare metal first? If you're running in your own data center on bare metal, you're probably going to have raw devices, or maybe partitions, on those nodes, and your OSDs are going to be backed by that raw storage. No file systems are needed, nothing; just plain raw devices or partitions, and Ceph knows how to consume those raw block devices. That's it: Rook on bare metal, backed by raw devices. In the example cluster.yaml, in the Rook examples folder on our GitHub, you'll find various settings: use all available raw devices, use devices that match a certain filter, or use devices by name. There are various ways to select them, but fundamentally you're still just consuming raw devices or partitions. I'll include a quick sketch of those settings below.

So what does it look like in a cloud environment? A cloud environment is very interesting, because you might wonder: why would I even use Rook in the cloud if the cloud provider already has a storage platform available? Well, we've found that cloud environments have some shortcomings that Rook can be very useful in overcoming. First, you can have storage that spans availability zones. You can have faster failover times: seconds instead of minutes. You can have a greater number of PVs per node; some cloud environments limit that to around 30 PVs per node, and with Ceph that limit is just gone. You can also get a better performance-to-cost ratio for the size of your PVs. Your application might only need small PVs, but in cloud environments that usually comes with a performance hit. Rook doesn't suffer from that, because you can back the OSDs with a few large cloud volumes, and the size of the individual PVs you create from Rook won't affect their performance. Finally, it gives you a consistent storage platform wherever Kubernetes is deployed. Ultimately, the way this works under the covers is that the Ceph OSDs consume PVCs.
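Before we get into the details of that PVC-backed case, here is the bare-metal sketch I promised above. It's a minimal, trimmed-down illustration of the storage settings you'd find in the example cluster.yaml, not the full file; the node names, device names, and Ceph image tag are just placeholders for this example.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v17.2.6   # illustrative tag; use the version from the Rook examples
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
  storage:
    useAllNodes: true
    useAllDevices: true        # consume every empty raw device or partition Rook finds
    # ...or be more selective instead of useAllDevices:
    # deviceFilter: "^sd[b-d]" # regex matching raw device names
    # nodes:
    #   - name: "node1"        # placeholder node name
    #     devices:
    #       - name: "sdb"      # placeholder raw devices on that node
    #       - name: "nvme0n1"
```

Whichever of these settings you use, the OSDs end up sitting directly on raw devices or partitions, with no file system in between.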
So let's look at the diagram of this PVC-based setup. Basically, the devices under the Ceph OSDs are dynamically provisioned from any storage class of your choosing, so at this layer the OSDs are backed by PVCs. For example, we have a cluster-on-pvc.yaml where the example is tuned for running on AWS with EBS volumes: the EBS volumes are provisioned specifically for these OSDs, straight from the cloud environment. There's a rough sketch of what that looks like below. In the next video, we'll show a demo of deploying a cluster on PVCs in a cloud environment. See you next time.
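For reference, here is the sketch of the PVC-backed case mentioned above. It's a minimal illustration of the storageClassDeviceSets approach used in the cluster-on-pvc.yaml example, not the complete file; the storage class name (gp2 for EBS here), sizes, counts, and image tag are assumptions you'd adjust for your own cloud environment.

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v17.2.6   # illustrative tag; use the version from the Rook examples
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3
    volumeClaimTemplate:               # mons can also run on PVCs from a storage class
      spec:
        storageClassName: gp2          # placeholder EBS-backed storage class
        resources:
          requests:
            storage: 10Gi
  storage:
    storageClassDeviceSets:
      - name: set1
        count: 3                       # number of OSDs, and therefore PVCs, to create
        portable: true                 # an OSD can follow its PVC to another node
        volumeClaimTemplates:
          - metadata:
              name: data
            spec:
              storageClassName: gp2    # placeholder EBS-backed storage class
              volumeMode: Block        # the OSD consumes the PVC as a raw block device
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 100Gi       # a few large volumes; PV size inside Ceph won't affect performance
```

The key idea is the same as on bare metal: the OSDs still see raw block storage, it's just that the raw devices are now PVCs dynamically provisioned from the cloud provider's storage class.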