Thank you, everybody, for joining us for another All Things Data OpenShift Commons briefing today. We're very excited to have Annette Clewett and Kyle Bader, as well as Chris Blum answering questions in the chat, from our storage group. With 4.3 of OpenShift Container Storage coming out, they're here to give us a deeper dive into the different new features. So please, Kyle, take it away.

Thanks. So the major themes of this release of OCS are to provide more flexibility in terms of deployment, to add more flexibility in the types of devices you can use underneath OCS, and to add a new platform: the ability to support bare metal. Now, a few of these things are coming into 4.3 as tech preview and will go GA later, but we're going to go into a little bit more detail here, and then Annette will provide a nice demo.

So the status quo before 4.3, and what continues to be an option in 4.3, is to use a dynamic provisioner. For 4.2 we targeted two platforms: vSphere and Amazon EC2. In both cases we used an infrastructure dynamic provisioner that would get deployed into OpenShift automatically, and we would consume volumes, or PVs, from those provisioners and then build OCS on top of that. Now, there are a lot of advantages you get from that, which I have detailed on the slide here. We have the ability to move a PV from node to node. So if you have a node that fails, or you otherwise need to move one of those OSDs to a different node, you can do so without having to recover the data; it can just detach and attach to another node. That's a nice convenience. There are some limitations on this: if you're using EBS, the EBS PV can only move within the same availability zone. So it's not perfect.
But it is a nice thing to not have to recover data if one of your OCS nodes goes down. The other nice thing about using a dynamic provisioner is that the sizing is dynamic: because the PV is created to satisfy a particular PVC, it can be made the exact size of the request. So if you request a two-terabyte volume, you get a two-terabyte PV. Most of the volumes you get through these provisioners also generally have a higher mean time between failures. An EBS volume is generally more resilient than a single disk, and a vSphere volume PV that's coming from vSAN or some sort of SAN usually has a higher mean time between failures than just a local device. I say generally speaking because it really depends on what you have underneath your volume. The other side of the coin is that the cost, whether it's the additional services in the case of EBS or the hardware and licensing costs in the case of vSphere volumes, can be a lot higher than you would otherwise have with local devices. And EBS and vSAN are their own kind of software-defined storage systems, so if you put OCS on top of them, while that's flexible and easy to get in the door, it's not the most efficient approach to doing storage. So, knowing these shortfalls, we wanted to have options; we wanted to increase the flexibility. If the pros of the dynamic provisioner approach were less interesting to a particular customer, and solving some of the cons mattered more, we wanted to have an option. So in 4.3 we're adding the ability to use the local storage provisioner. But before I get to that: we also introduced more flexible OSD sizing. In 4.2 we had a fixed size for the OSDs.
It was always two terabytes, and storage was scaled in multiples of three. So if you needed eight usable terabytes, you would have a set of twelve two-terabyte OSDs to satisfy that. With 4.3, when you set up a new cluster, you can choose a T-shirt size for the OSDs: 500 gigabytes, for a more minimal cluster; two terabytes, which is what we had in 4.2; or a larger four-terabyte OSD size. That's another way to get to a larger maximum scale, or to a smaller minimum footprint. This was flexibility requested by customers, so we've introduced it in 4.3, and it is generally available.

The local storage that I was alluding to earlier is the ability to consume storage from devices local to the system. In the case of VMware, that's where you have hypervisor nodes with some sort of locally attached media, and you surface it into the VMs that are your OpenShift nodes. You can do that through a VMDK on a local device, where you create maybe a one-to-one mapping of VMDKs to those devices. Or, in the case of SATA or SAS drives, vSphere has the idea of RDM, or raw device mappings, which map those devices directly into the guest that corresponds to your OCP node. And finally, there's a DirectPath approach for NVMe that's effectively a PCI passthrough. That's only for NVMe drives, because NVMe is a PCIe-based protocol and you need something a little more direct for it to be efficient. For EC2, we have the instance store.
So there are a number of instance types, the predominant ones being i3 or i3en, that have locally attached SSDs you can now consume and use for OCS. There are some caveats there: you don't want to stop your instances, so you'll want to create an IAM role to prevent that from happening. And then finally, there are local SSDs, NVMe drives, and bare-metal hosts. This is all organized by the Local Storage Operator ahead of time: you give it a custom resource, it creates the PVs, and then OCS consumes them. That's what Annette's demo is going to show you here today. So, in order to give her the maximum amount of time, I'll hand it off to her.

Okay. So, taking off from where Kyle landed, that was a good explanation of the differences we want to go through in OCS 4.3. Dynamic provisioning, as Kyle said, was the feature in OCS 4.2, and for dynamic provisioning the new addition is, as he said, the T-shirt size selection. That is not local storage; that is dynamic provisioning. What we're going to take a look at here is the local storage method, and in particular we're going to do it with VMware. Starting from my vSphere client, I want to show you what I have here. I've got OpenShift installed on six worker nodes and three control-plane nodes, or masters. What I did to create the local storage is add what the vSphere client calls a hard disk. So I added one of these; you also see a raw device mapping here, but I added a hard disk, and the size is 100 gigabytes. I already did that, so I'm not going to go over it again. I did that to the first three compute nodes; those are the ones I'm going to use to create my OCS deployment. If we now go back to OperatorHub, we see that we have a new version here: 4.3.0, which actually just released today.
Before I deploy that, I want to discuss how we are creating the local storage. If we take a look at the devices we have available here, the output you see is created using a utility written by some of our own SAs, Daniel Moser and Chris Blum, who's on the call here. When I created those hard disks, I essentially created this additional device, and this is the device that we're going to use for the storage. You can find the utility here; I'll put the link into the chat, and I definitely encourage you to check it out.

If we go back to the CLI, I'll do an oc get nodes; I'm connected to the same cluster that we just looked at in vSphere, and there are our nodes. I'm currently in the local-storage project, and I have already deployed the Local Storage Operator, which you'd find in OperatorHub; if I type it here, you see it's already installed. If I do oc get pods and oc get pv here, you'll see that I have some pods in the local-storage project, and I also have some persistent volumes. The persistent volumes, as you see, correspond exactly to what we saw in the vSphere client. The way I created them was to write a YAML file for a LocalVolume CR, using the by-id values that I got from the utility. I could have just used the device path /dev/sdb, because sdb is the same on every one of the three nodes I'm going to use, but by-id is a better way to do it; in the future this ID will be discovered by the Local Storage Operator, but right now I had to go use the utility and put it here. The other thing I'm using to decide that a PV should be created for a local storage device is the key of this label. This label should be added to any OpenShift nodes that are going to be running OCS.
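For reference, a LocalVolume CR along these lines is what the demo describes. This is a hedged sketch based on the Local Storage Operator's v1 API; the by-id device path is a placeholder for the IDs the utility reports, and the node label key shown is the one OCS expects on its storage nodes.

```yaml
# Sketch of a LocalVolume CR for the Local Storage Operator.
# The devicePaths entry is a placeholder; substitute the real
# /dev/disk/by-id/... values reported for your disks.
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-block
  namespace: local-storage
spec:
  # Only create PVs on nodes carrying the OCS storage label.
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: cluster.ocs.openshift.io/openshift-storage
            operator: In
            values:
              - ""
  storageClassDevices:
    - storageClassName: localblock   # storage class the OSDs will claim from
      volumeMode: Block
      devicePaths:
        - /dev/disk/by-id/wwn-0x6000c29...   # one entry per disk
```

The label itself would be applied with something like `oc label node <node-name> cluster.ocs.openshift.io/openshift-storage=''` on each node that will run OCS.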
I've already added the label and I've already created the CR. That's why, in the last view, the PVs are already created, and you'll see that they have an Available status. All of this was done using the Local Storage Operator and the LocalVolume custom resource that I just showed you. So now we're ready to install OpenShift Container Storage. Just a note: this happens to be VMware using VMDKs, but it could just as well be AWS using i3 or i3en instances, or bare metal using local disks in a server. It's really the same process to get to this point; the disks need to be available as PVs.

Going back to the console, let me refresh. Here's OpenShift Container Storage. I do need to create it in a particular namespace, openshift-storage, and I already created that namespace. I'm going to hit Install, search for that namespace, which is here, and we see that we're on stable-4.3, which we should be. We'll hit Subscribe. As this subscription, or the ClusterServiceVersion, is coming up, we can take a look at what's happening here. We also currently make use of the CRDs in lib-bucket-provisioner; you see the provided APIs to the right there. In the next version, OCS 4.4, we will still use the CRDs, but you will no longer see the lib-bucket-provisioner operator itself. So let's take a look here and see that our CSVs are succeeding. Right project; we still have 4.3.0 installing, and we need to wait until that's finished before we can proceed to create the storage cluster. Let's do an oc get pods while we wait. What we should see is our four operators: the OCS operator, which is the meta-operator for the OCS service; the Rook-Ceph operator; the NooBaa operator; and the lib-bucket-provisioner operator. So now we're ready to create our storage cluster.
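The StorageCluster CR used in this local-storage scenario might look roughly like the following sketch (field names per my reading of the OCS 4.3 StorageCluster CRD; treat names and sizes as illustrative). In the dynamic-provisioning case, the same `storage` request under `dataPVCTemplate` is where the 0.5 TiB / 2 TiB / 4 TiB T-shirt size lands.

```yaml
# Hedged sketch of a StorageCluster CR for local storage:
# mons on the dynamic "thin" class, OSDs claiming the 100Gi localblock PVs.
apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  manageNodes: false
  monPVCTemplate:
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi          # mons need only a small volume
      storageClassName: thin     # dynamic vSphere provisioner
  storageDeviceSets:
    - name: ocs-deviceset
      count: 3                   # OSDs scale in multiples of three
      portable: false            # local PVs cannot follow a pod to another node
      placement: {}
      resources: {}
      dataPVCTemplate:
        spec:
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 100Gi     # matches the 100GB local PVs
          storageClassName: localblock
          volumeMode: Block
```

Setting `portable: false` reflects the trade-off Kyle mentioned earlier: unlike an EBS or vSAN-backed PV, a local PV is pinned to its node, so a failed node means recovering that OSD's data rather than reattaching the volume elsewhere.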
In this case, because we're doing local storage, we're not going to use the UI the way you would with dynamic provisioning, where you would go to Storage Cluster and hit Create Storage Cluster. For local storage, we need to give it different information: the storage for the mons and the storage for the OSDs. When we created the LocalVolume CR using those three by-id disk IDs, we also got a localblock storage class, and that storage class is how we're going to claim and create PVCs from those Available 100-gigabyte PVs. This is the actual StorageCluster CR that is used when you hit Create OCS Cluster over here; it's not exactly this one, but this is how it looks. We need to have mon storage. In this case, because I do have the thin storage class available, I'm going to use it to create the mon storage. I could also have created a 10-gigabyte VMDK and another LocalVolume CR and claimed the storage that way, but I'm just going to use thin because I have it available for dynamic provisioning. Then for my OSD storage, I'm going to use the new localblock storage class, which is going to claim those 100-gigabyte PVs.

One more look at our PVs before we do this: we see they're Available. Okay, so let me go ahead now and create that storage cluster. As I do that, we'll start to see some pods come up, again in the openshift-storage namespace. First off, because our operators are already created, we see our Container Storage Interface pods coming up. CSI is the new Kubernetes API through which all storage gets created, and OCS makes use of it. The first thing we need to do is land some pods on each one of the OpenShift nodes.
Now, that would be on OpenShift nodes that allow scheduling of application pods, so we're not going to see them on the master nodes, because currently the master nodes have a NoSchedule taint. But any worker node or infra node that could host applications will have both the CephFS and RBD plugins, so that volumes can be created and deleted. Those will continue to come up. If we go back to our UI here, back to the storage cluster, we can see that compared to when I first looked at this, we now have a storage cluster being created; it's version 4.3.0 and it's in a Progressing state. The other thing that happened when I created the storage cluster is that I got two new dashboards that are completely integrated into OpenShift. Right now they're not populating, because the storage cluster is still being created. But again, these are completely integrated, because OpenShift Container Storage is a Red Hat offering, and there was a lot of work to integrate it into OpenShift. If we have time: there is also a lot of alerting and metrics specific to OCS and to Ceph integrated in as well, so you get very good alerting in these dashboards if there's a problem.

Let's go back to looking here; it looks like we're starting to bring up the mons. Let me get out of here for a minute and show you the PVCs that are coming up. We've already created the storage for our monitors. Our monitors require a relatively small amount of storage, 10 gigabytes. In this case, because I had the thin storage class available, I used it in the StorageCluster CR, and you can see that the PVCs got created. If I do a watch on the PVCs, I should start to see the 100-gigabyte PVCs in a short time. Let's just make sure they're coming up, and then we'll go back and look at the pods. The monitors are not quite up; you see here we have monitors a, b, and c.
These are the Ceph monitors. There need to be three of them, and each one needs to be on a unique node, so each will be placed on a different node. If we looked at the node column, we would see that they have been placed on unique nodes: they're on compute-1, compute-2, and compute-0. So let's take another look at our PVCs. Okay, this is what I wanted to show you: remember that these are our PVCs, and if you do oc get pv, we will see that the 100-gigabyte PVs are now Bound. Before they were Available; now they're in use. And I think we're almost to the end of the deployment here. Yes, we're at the very end: we've got our three mons here, and we've got our three OSDs here. Those are the storage daemons that are mounting that 100 gigabytes and creating the Ceph cluster. And we're in the process of the last bit, creating the NooBaa pods that are going to be used to create buckets for objects.

So let's take one more look and see if we've finished. I'll show you how we know that we have finished the deployment just from looking at this view; there are a lot of different ways to validate the deployment. We are still creating here: you notice the OCS operator is Running but not Ready, which is evidenced by the 0/1. This operator will stay in that state until the deployment is completely finished. So if you're watching the CLI during a deployment, don't be concerned about this until you know the deployment is completely finished; if it stays this way afterward, then something did not go right with the deployment. One more look, and then let's go back to the UI. Given the amount of time we have right now: it does take a few minutes for the dashboards to populate. All the pods are done now. We could do a watch on this pod, but it's probably already done, so we'll just sit here, watch it, and wait for it to finish.
In the meantime, let's go back to the dashboards. Even though the OCS operator doesn't show quite done yet, we can see that our dashboard has populated. Since I used three 100-gigabyte VMDKs, it tells us I've got almost 300 gigabytes of storage. Effective storage, of course, is just one third of that because of replica 3. And our object dashboard usually takes a little bit longer to come up; you'll notice NooBaa was at the end. So I think that is the end of my demo: a successful deployment of OCS 4.3, which released today.

That was fantastic. Thank you, Annette, and thank you, Kyle. It's so nice to see 4.3 out, and what a great demo. Next week we'll have another deep dive into 4.3, so please join us then. And thank you, everybody, for another great All Things Data OpenShift Commons briefing. You can find this on the OpenShift Commons YouTube channel. Thank you, everyone.