Hello, everyone. Welcome back to another OpenShift Commons. My name is Karina Angel and I am an OpenShift product manager here at Red Hat. I'm excited to have with us today Sandy Amin, I hope I'm saying your name right, along with Akash and Suresh from IBM, and they're here to talk about OpenShift Container Storage on IBM Cloud and IBM Cloud Satellite. And Sandy, could you introduce yourself a bit more, as well as Akash and Suresh, for everybody? Sure, I'd be happy to. I work in our container team in IBM Cloud and Satellite, and we focus on the storage integration within our container service, both for Kubernetes and OpenShift, across those environments. I'm the lead architect driving various storage integrations, including OpenShift Container Storage on IBM Cloud. Akash and Suresh are part of our development team, and I'm happy to have them also participate and demo today the integration that we've done. Awesome, thank you. Very excited to have you all here. I know you have a presentation prepared as well as some demos, so let's get right to it. I'd love to hear more about OCS on IBM Cloud. I'd be happy to. Okay, so let's talk about what we're going to discuss today. We're going to discuss our integration of OpenShift Container Storage within our public container service and our IBM Cloud Satellite environments, and how that can help simplify stateful workload deployment across those environments. I'll first give some context on the offerings we have today, what they are and what they provide the user. Then we'll drill down a little bit on how we're integrating OpenShift Container Storage into those offerings, including today's supported deployment topologies and what we're thinking about supporting in the future. And then we'll finally close with demos in different areas.
So I'd be happy to talk about those today, so let's get started and get some context around our offerings in IBM Cloud and Satellite. As you all know, many companies and enterprises are already migrating and creating strategies to move to a cloud-first infrastructure environment. In some cases, those customers are developing new applications in a very cloud-native, application-modernization approach. However, some applications cannot move to that kind of model, and so they're not as suited to public cloud environments. Those customers still want the same cloud experience to deploy and manage their apps, but they need to keep those applications on premises, whether for data sovereignty requirements or simply to control their costs; it's hard to move something that's already running in their infrastructure, but they want to modernize how they manage their applications from a cloud perspective. So within IBM, we have two offerings that provide not only a managed OpenShift offering, but also the full services of IBM Cloud in any dedicated environment. Within those environments, we're also providing the ability to install, configure, and deploy OpenShift Container Storage across both the public cloud and any customer-provided infrastructure. We'll talk about those in detail in the following charts. So let's talk about our first offering, Red Hat OpenShift on IBM Cloud. That offering is a fully managed OpenShift (OCP) offering. It's available across more than 35 data centers and growing today. It integrates with all the supported IBM services, such as IBM Key Protect for keeping your key-management keys safe, and also our object storage and native storage systems as well.
It's fully managed and fully configured across the globe by our SREs, and available with all the security certifications that enterprises need today. Just to drill down a little on what IBM does for you: we help you with the full lifecycle management of deploying the OCP cluster, and we deploy it in a highly available manner. We manage all the masters for you, so you don't have to worry about that, and we manage all the updates, patch updates, et cetera. We give you control of the client data plane, and as part of the client data plane, you can customize certain aspects of it, including deploying things like OpenShift Container Storage, which we'll talk about; there are other things you can customize as well. IBM Cloud Satellite is another offering we have. It brings the power of all the IBM Cloud services, whether it's Cloud Paks or Watson or databases, and eventually our third-party services, which you can then subscribe to on a subscription basis on any infrastructure. Why would you want to do that? One reason is you might want to control your costs. Although you want the cloud experience, you probably also want to leverage your current infrastructure investments. So we allow you to take the same experience you see in IBM Cloud and deploy it on any infrastructure you require. One of the services we fully manage there is the Red Hat OpenShift managed service, just like in public cloud, and with that, we also support different customizations in the same consistent manner. We'll also provide tooling to move applications back and forth between those two environments, and to connect from an on-premises environment to IBM Cloud public services as well. One of the popular applications that we'll try to show you today is Cloud Pak for Data, which many of our customers are using, as an example application.
And that application can run in the public cloud or any other environment; we'll demo that briefly today as well. This is a little bit of a drill-down on the Satellite big-block diagram. What you'll see is that a customer creates their own location at their own site. The requirement for the customer is to provide their own infrastructure. After they provide it, our Satellite control plane takes over, helps you configure the location, and creates the clusters in those locations, managed in a highly available manner, much like we do in the public cloud environment. The customer is then free to deploy services like IBM Cloud Databases or Watson within that environment, including their own custom apps. So they can choose where they deploy, how they deploy, and what services they want to integrate within their environment. Okay. Now that we've given you some context on Cloud Satellite and the IBM Cloud OpenShift offering, let's drill down a little on the actual container storage integration that we've done. To put everything in context, within IBM Cloud we provide a lot of native storage capabilities, as most cloud providers do, such as file, block, and object. We're also heavily invested in providing our customers a very flexible set of container-native storage choices. The reason for those choices is that within a hybrid environment, many customers are trying to have the mobility to move their applications in any direction, to any cloud environment. So the ability to bring your own storage and take it with you is a very critical requirement for customers. These solutions provide a very consistent approach to managing the lifecycle of volumes, doing backup and restore, and handling stateful workload migration between clusters as well.
So these types of solutions, such as Red Hat container storage, provide much more advanced capabilities than the basic native storage capabilities do today. They're a good choice if you're trying to work across many hybrid environments, and that's why we're offering this very tightly integrated within IBM Cloud, along with other partners that we support in this space. Let's take a quick look at how we plan to offer OCS within our cloud environment for both Satellite and public cloud. We're going to be offering OCS in two packages, Essentials and Advanced. The difference between them is this: Essentials will provide the basic lifecycle management of volumes, including volume snapshots, all the things you can do through a CSI driver. The Advanced package will provide things like more advanced security capabilities, such as per-volume PVC encryption with your own keys, but also multi-cluster DR and backup capabilities. We plan to offer these on a subscription basis, and the pricing will differ based on whether you're using a VSI (a VM) versus bare metal. We're still working out the pricing, but that will be forthcoming soon once we GA. Currently we are in tech preview mode, so it's available for free for anybody to try right now in IBM Cloud and Satellite; you're welcome to try it out. We support a deployment topology called internal mode, which basically co-locates OCS with the OCP cluster and your workloads, as shown in this picture. We are considering, later this year, supporting the ability to deploy an OCP cluster and remotely connect it to a remote Ceph cluster on premises. A lot of our on-premises customers may have existing Ceph deployments, and we want them to be able to leverage those and connect to them, so they still have the same experience with OCS while using that storage remotely. So that's one mode that we're also working to deploy.
And in the longer-term future, we're working to support a kind of centralized storage OCP cluster that runs on OpenShift itself and can then be connected to by multiple managed OCP clusters, as shown in this diagram. The schedule for this is being planned, but this is the big picture of the modes and topologies that are possible and that we plan to support in the future. So let's take a quick drill-down on OCS. OCS brings together three open source projects: Ceph, NooBaa, and Rook. Ceph provides the file and block capabilities. NooBaa provides the object capabilities. Rook was designed to provide an operator to help configure and manage the Ceph aspects. As you can see, there are a lot of artifacts that get deployed, various daemon sets, et cetera. With this integration, we can now integrate IBM classic infrastructure, VPC infrastructure, or any Satellite-provided infrastructure with OCS. Any place you can run Satellite is a place you can run OpenShift Container Storage, including on IBM public cloud itself. We've done the integration so that when you deploy in public cloud or in Satellite, you have the ability to use IBM Cloud Object Storage as the backing storage for NooBaa, and also for any future capabilities around multi-cluster DR, backup, and integration with backup tooling. One of the things we've taken time to do in IBM Cloud is make the experience of integrating OpenShift Container Storage a little easier and embedded into the actual managed OpenShift experience. Because today, if you were trying to install OpenShift Container Storage, let's say you wanted to install it using local disks.
If you're going to do this from the command line, you first have to install the local storage operator, create all the namespaces, and ensure you have all the disks ready to go to feed OpenShift Container Storage so it can start creating all its resources. So what we've done is create an IBM OCS add-on within our environment, and we've tried to reduce the installation and configuration from as many as eight steps to one step. We'll demo this capability with you today, but I'll cover it briefly in the following charts as well. So let's talk about the experience you get within our user interface. Once you create an OCP cluster, you'll have an additional set of add-ons to deploy, to add different capabilities. One of those capabilities is OpenShift Container Storage. To install it, you just click the install button and it's ready to go. We even have predefined defaults, so you don't have to fill in as many values to prime OCS. Once you do that, you can target whether you want to deploy this on IBM Cloud, on VPC infrastructure, or on IBM classic infrastructure, using either remote disks or local disks. Likewise, in IBM Cloud Satellite environments, you can deploy to cloud-provided infrastructure like AWS, or to on-prem systems or your own hardware that you already have in your data center. We're really heavily focused on the ease of use and the lifecycle management of this. A little bit under the hood, our OpenShift add-on consists of two components: the OCS add-on user interface and CLI, and an add-on operator. We'll talk more about this in the next couple of charts. In the terminal window shown here, I created the cluster in one shot using the IBM command line.
What it's doing is creating an OpenShift Container Storage cluster and priming it with parameters for the desired mon size and OSD size. Once you run that command, it creates the cluster using those parameters. You can override these parameters; otherwise, we provide defaults for you so you're ready to go. You also have the ability to auto-create the OCS cluster when you run this command, or you can do that afterwards; we'll talk about that as well. When people are creating stateful applications, they're in a dev, test, and production cycle, and they have to install and tear down these clusters in a repeatable fashion. Having this integrated with the IBM command line is great for them if they're on OpenShift on IBM Cloud: they can simplify their CI/CD pipelines. We'll also have the ability to integrate with other services like IBM Hyper Protect and Key Protect, to specify the Advanced and Essentials offerings through this kind of option, and, in the future, to offer T-shirt sizes. Using T-shirt sizes, we can give our users a simplified approach: just say how big you want your cluster to be, and we'll create the cluster with that T-shirt size. They don't have to understand all the nuances of options like what an OSD size is or what the OSD storage class name is; we'll do all that for you. That's what we're trying to do with this add-on. Akash will cover this in his demo, but I just wanted to show you that the OCS add-on operator used behind the scenes of the CLI takes a single simple custom resource. You just need to prime it with a minimum of three inputs, though you can give it more.
It is a single point of control that lets you configure all the inputs in one place, without having to create namespaces and deploy different types of resources to get OCS going. We'll cover this in more detail, but it's just one simple CRD and then you're ready to go. You can take a similar CRD to the one I showed you on the previous chart; that one happens to be for VPC environments, but you can use the same CRD on AWS, feed it a storage class name, and it will go create the cluster through our tooling. We assume some defaults. We always auto-enforce three-way replicas, since most of our users need and want that for their stateful applications and workloads; all those choices are taken care of. You have the ability to override what backing storage you want, whether it's Cloud Object Storage by default, or even S3. The other thing we've done in Satellite environments builds on that operator we've written. We know that within a location, a customer will create multiple clusters, more than one, and within those clusters they're going to need the ability to deploy something like OCS multiple times. As a result, we've provided a grouping mechanism: you can define a group, and the group membership can be dynamic. As members join the group, with our Satellite templating technology, you can deploy a storage configuration. We support multiple templates within Satellite; OCS is one of them, but we can also deploy the AWS CSI driver, as shown in this chart, or the template for Spectrum Scale. Once you do this, any time a member joins a group, OCS is auto-deployed with a specific configuration that's very site-specific, with the site-specific credentials and customization, so you don't have to do this over and over again. We'll demo that today as well.
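The single-CRD flow described above can be sketched roughly as follows. Note that the `OcsCluster` kind, the API group, and the field names are my best recollection of the IBM add-on documentation, not confirmed by this talk; check the CRD on your own cluster (for example with `kubectl explain ocscluster.spec`) before relying on them.

```shell
# One custom resource; the add-on operator creates everything else
# (namespaces, operator group, subscription, storage cluster, ...).
# Kind, apiVersion, and field names below are assumptions.
kubectl apply -f - <<EOF
apiVersion: ocs.ibm.io/v1
kind: OcsCluster
metadata:
  name: ocscluster-vpc
spec:
  monStorageClassName: ibmc-vpc-block-metro-10iops-tier  # backing class for mons
  monSize: 20Gi
  osdStorageClassName: ibmc-vpc-block-metro-10iops-tier  # backing class for OSDs
  osdSize: 250Gi
  numOfOsd: 1   # the operator enforces three-way replication on top of this
EOF
```

On AWS under Satellite, the same shape would simply be fed a different storage class name, such as the one created by the EBS CSI driver template.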
So I've provided a brief overview, fairly quickly, of our offering and our integration. We're going to spend some significant time on the demos next, and then we'll open it up for questions. I'm going to start with the first demo, but before we do that, I want to show you what we're going to be demoing between Akash and Suresh. Akash is going to show you how to deploy IBM Cloud OCS on VPC environments, and he'll take you through that. And Suresh will show you how to deploy OCS on AWS to two clusters in one shot. That's going to be great. Before I end my presentation, I want to do a quick demo just to give the context. Within IBM Cloud, you can see we have different services. With one of those services, the OpenShift service, I can create clusters and inspect the add-on catalog; we'll go to that in a second. Likewise, in Satellite, I can create my location on any infrastructure, whether it's AWS or on-prem, and in a similar manner I can create an OpenShift cluster and then deploy the add-on there as well. We'll do a quick overview of the OpenShift cluster, because that will drive home the point. If I click on clusters, you can see that I already have a collection of OpenShift clusters created, and I'm going to drill down into my OCS cluster, the one running Cloud Pak for Data. Before I do that, I wanted to give you the experience of how to create a cluster today within IBM Cloud. You can select the different versions that we support, 4.5 through 4.7, and you can run on any infrastructure as I've been discussing, whether it's classic, VPC, or Satellite-enabled infrastructure, and you can make various customizations; I'm not going to go through all of them. By default we spread the workers across three zones, and you can customize that as well.
Since I've already created the cluster, and that can take more than ten minutes, let's just click on my existing cluster. You can see that I have a three-zone cluster and its health is fine. In the top-level overview page there's a set of add-ons. I've already enabled one of these add-ons; to enable an add-on, you just click install, and I've already done that. So I'm going to switch to the OpenShift console, which I've already loaded up for you. You can see that I already have OCS installed through the add-on, and that there's an application consuming it already, which is Cloud Pak for Data. If I drill down into it, you can see it has quite a lot of pods running and a collection of PVCs. If I drill down on the PVCs, you'll see that all the Cloud Pak for Data pods are running on top of OCS; some are using the file system and some are using the block storage for their use cases. So this is basically Cloud Pak running on OCS. And drilling down into the Cloud Pak application itself, I have the running application. I've fed it some sample data as well, and I just want to show you that I do have a notebook created for my data and AI needs, and show you some analysis here. The whole point of this part of the demo was to show that you can create and run a mission-critical workload, not just a toy application, on IBM Cloud with OCS, with a substantial application such as a Cloud Pak. With that context, I'd like to hand the next part of the demo over to Akash. Thanks for listening. Thank you. Thanks, Sandeep. So let me start my screen sharing. In this part, I'll be covering mainly the IBM add-on.
As Sandeep mentioned in his explanation, we have created an IBM OCS operator, and we have deployed this OCS operator as an add-on. I'll be focusing on the use case of why this was developed and how it eases the OCS deployment. As we know, today OCS can be deployed from the web console or from the CLI, and there are different artifacts which have to be created. On my current screen, you can see a list of the resources or artifacts on the right-hand side which have to be created for the OCS deployment, like the operator group, subscription, storage cluster, and a host of other resources. Each of these resources needs one or multiple parameters which have to be given as inputs by the customers or users, and they have to be created in a particular order: only once one resource is successful can we move on to the next one. We found that this involves a lot of resources or manifests to handle, with the possibility of user errors. In order to handle that, we came up with the idea of an operator which basically takes all the inputs required for the OCS deployment as inputs to our custom resource. Once the customer deploys the custom resource, we take all the inputs, and internally our operator deploys OCS and all the required resources. We also consolidate the status of all these resources and provide one final status once OCS is ready to be consumed. In the table on the screen, we have the parameters we are exposing for our IBM add-on. As you can see, these are just a handful of parameters required to deploy OCS, and we have done a mapping of which OCS-side artifacts these parameters feed into. We have parameters like the device paths, the mon storage classes, the OSD storage classes, et cetera.
So this is going to ease the user experience, and customers will only be dealing with a single custom resource. In the next slide, I'm showing how this reduces the number of steps. This is a reference installation of OCS. On the left-hand side, we have the add-on steps; as you can see, it's only one step. The customer creates the OCS cluster, which is the custom resource; that takes all the inputs, and OCS is deployed. Whereas if you go the CLI way, or the manual way, on the right-hand side, we have around eight steps that we listed when we tried it out. So this really reduces the number of steps and also the number of resources the customer has to be aware of to take care of the OCS deployment. So that's the OCS install steps, and this is the comparison for upgrading OCS, whenever we upgrade OCS from one version to the next. Here also we deal with only the one OCS cluster custom resource: we just edit the resource and set a Boolean parameter to true, and upon saving that cluster resource, it automatically gets upgraded to the next available OCS version, whatever is available. If you compare that with the manual steps, the customer needs to know which version to upgrade to, provide those things in the storage cluster resource, and update the channel. So those are some of the simplifications we have provided for the upgrade steps. Similarly, this is the uninstall of OCS. As you can see, we have only one step: if the user simply deletes the OCS cluster, that deletes all the resources which were deployed as part of OCS. That includes the storage cluster, the Ceph cluster, subscriptions, and even the local volumes if local disks are being used. So that eases the uninstall procedure, or the cleanup procedure in case of any issues. Here I'm showing the steps for scaling OCS.
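The upgrade and uninstall flows described above amount to a one-line edit or delete of that same custom resource. A rough sketch, with the resource name and the Boolean field name assumed rather than confirmed by the talk:

```shell
# Upgrade: flip the Boolean flag on the custom resource; the operator then
# moves the subscription channel to the next available OCS version.
kubectl patch ocscluster ocscluster-vpc --type merge \
  -p '{"spec":{"ocsUpgrade":true}}'

# Uninstall: deleting the custom resource tears down everything the operator
# created (storage cluster, Ceph cluster, subscriptions, local volumes).
kubectl delete ocscluster ocscluster-vpc
```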
In the case of scaling, we also just deal with the OCS cluster resource, where we increase the number of OSDs that are needed. The number of OSDs is a parameter where the customer only has to say how many OSDs they need; this is mainly about the usable storage they're specifying. Once the number of OSDs is specified or increased, we multiply it by three when setting the count parameter, which you see on the right-hand side. So these are some of the simplifications we're providing for the different stages of OCS deployment and upgrade. Now, as we saw in this entire exercise, the customer is only aware of one custom resource, called OCS cluster. There are two ways in which we can use this OCS operator. One way is to enable this add-on using our IBM add-ons, as Sandy showed earlier; I can also show that on the screen. We have different add-ons here, including OpenShift Container Storage. We enable this add-on, and after enabling it, the user creates the custom resource, the OCS cluster, with all the parameters. That's one way of deploying OCS. And recently we provided another option that further simplifies this deployment of OCS; we call it one-click install. What that means is, if the user is going to deploy OCS right after the enablement of the operator, they can do it along with the operator enablement. Basically, when the user enables the OCS add-on, they also provide all the parameters required for the deployment of OCS. Internally, what happens in the back end is that it first deploys the add-on, the operator gets created, and then our operator picks up all the supplied parameters and deploys OCS. So in a single click, it's creating or deploying OCS. But we have also made it optional.
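Scaling, as described above, is the same kind of single-field edit. In this sketch the resource name and the `numOfOsd` field are assumptions:

```shell
# Raise the usable capacity from 1 OSD set to 2; with three-way replication
# the operator sets the underlying count to 3x this value.
kubectl patch ocscluster ocscluster-vpc --type merge \
  -p '{"spec":{"numOfOsd":2}}'
```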
So the users will have two options to deploy OCS in this case. I'll be showing a small demo of one-click install here to start with. Let me show the different options that we have in the add-on. As you can see, these are all the OCS-related parameters we have exposed to the users, and if the user wants to enable the add-on and also deploy OCS, they can use all these parameters. We also have default values specified, so users can take advantage of the defaults or override them with their own custom values. One important parameter I'd like to highlight here is OCS deploy. This is one of the parameters we have introduced, and as you can see, the default value is false. What that means is that when the user enables the add-on without giving any options, with OCS deploy set to false, it only enables the add-on and deploys our operator; it won't deploy OCS. In order to deploy OCS in a one-click install, this parameter has to be set to true, and along with that, the other parameters can be overridden. I'll show the command that we're using here. As you can see, I'm enabling the OpenShift Container Storage add-on on my cluster, and I've set OCS deploy to true. Then there is the OSD size: just for the purpose of the demo, I'm taking the one parameter of OSD size, overriding the default of 250 GB to 512 GB. Now when we deploy this, if we show the add-on, it shows that the add-on is getting deployed, and after that, OCS also gets deployed. Here I'm showing that our OpenShift Container Storage add-on is deployed, and we are using the 4.7 OCS version. I can show the version of our operator: this is the operator pod, which takes all these input parameters and deploys OCS. As you can see here, we are pointing to stable-4.7.
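The one-click enable command shown in the demo looks roughly like the following. The add-on name, flag spellings, and parameter keys are assumptions modeled on IBM Cloud CLI conventions at the time; the real parameter names are listed in the add-on's documentation.

```shell
# Enable the add-on and deploy OCS in one shot by setting the deploy flag
# to true and overriding the default OSD size (250Gi -> 512Gi).
ibmcloud oc cluster addon enable openshift-container-storage \
  --cluster my-ocp-cluster \
  --param "ocsDeploy=true" \
  --param "osdSize=512Gi"

# Watch the add-on come up; the operator then deploys OCS itself.
ibmcloud oc cluster addon ls --cluster my-ocp-cluster
```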
And that is the OCS 4.7 version. We can also see all the other parameters here, and with this, it has deployed OCS. I'll show you the OCS cluster that we have. This is the OCS cluster I was talking about: the custom resource that we have exposed to the end users, the single resource which takes all the parameters in its spec. As we saw, these are some of the parameters we gave at the time of installation, and we gave the OSD size as 512 GB, which we overrode at the time of enablement. And this is the status I was speaking about, where we show one consolidated status for the storage cluster, saying it's ready and deployed; it's now in a consumable state. I can show highlights of how many different resources get deployed in the case of OpenShift here. These are all the different types of deployments, stateful sets, and daemon sets getting created for the mon devices and the OSD devices; these are some of the artifacts I spoke about earlier, which would otherwise have to be deployed by the user. One important resource is the storage cluster, on the OCS side. Here, as you can see, it shows the ready state, which means it is ready to be used. So these are some of the benefits I wanted to cover in this demo, highlighting the user experience with our IBM OCS operator. I will now hand over to Suresh to cover the Satellite-related demo using OCS. Thank you very much. Yeah, so let me share my screen. Hi, I'm Suresh, and for the next ten minutes I'll be walking us through deploying OCS across multiple clusters using the Satellite storage templates. To begin with, this is my Satellite location, where I've attached AWS instances as hosts, and I've also assigned these hosts to two of my OCP clusters. These are the two clusters I've assigned those hosts to, and additionally, I've added both of these clusters to this cluster group.
So we can see that both of these belong here, and now I can deploy OCS across all clusters belonging to this group. The Satellite storage templates are provided and tested by IBM or third-party vendors, and IBM provides us with tooling to install these storage drivers on the clusters in our Satellite locations, as described in the template. If your preferred storage provider does not have a template, we can create our own storage configuration template. These are the Satellite storage templates currently available. Under OCS we have the OCS local and the OCS remote templates. The OCS local template can be used when we want to use the local storage present on our nodes, and the OCS remote template when we want to use remote block volumes. In this demo I'll be using the OCS remote template. I've already installed the AWS EBS CSI driver on both my clusters using the AWS EBS CSI driver template; this will allow us to provision remote block volumes. The first step towards using a Satellite storage template is creating the Satellite storage configuration. When we create a storage configuration, we include the values for a set of parameters that are specific to that template. This is the command I'll be using today to create the Satellite storage configuration. Here I've provided values for some of the parameters, like the mon size, OSD size, mon storage class name, and OSD storage class name; this storage class was created as part of our AWS EBS CSI driver install. We can also view this in the UI. These are my configurations, and this is the configuration that was created; we can also see the parameters we provided here in the data section. Once we create our storage configuration, we can use it to assign storage across our Satellite clusters. This is the command I would use to assign the storage configuration.
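The configuration-create step can be sketched roughly as below. The template name, template version, flag spellings, and parameter keys are assumptions based on the Satellite storage CLI of that era, and the storage class is a placeholder for whatever the EBS CSI driver template created; `ibmcloud sat storage template ls` would show the real values.

```shell
# Create a Satellite storage configuration from the OCS remote template,
# pointing both mons and OSDs at the EBS-backed storage class.
ibmcloud sat storage config create \
  --location my-satellite-location \
  --name ocs-remote-config \
  --template-name ocs-remote \
  --template-version 4.7 \
  -p "mon-size=20Gi" \
  -p "osd-size=100Gi" \
  -p "mon-storage-class=my-ebs-storage-class" \
  -p "osd-storage-class=my-ebs-storage-class"
```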
So here I've provided the group that I've added my clusters to, the configuration name that we created earlier, and a name for the assignment. We can also view this in the UI. Here we can see that the subscription has been created and it's been deployed on both the satellite clusters. Now let us take a look at some of the resources deployed when we assign the storage configuration. Internally, the template uses the IBM OCS operator that Akash mentioned, so we can look for the OCS operator pod. We can see that the pod has been created and it's running. We also have the custom resource that's created, and in the spec section we can see that it's taken the parameters that we provided while creating the configuration. Now let us verify the OCS installation. Looking at the pods in the openshift-storage namespace, we can see that all the pods are running, and let me also show the storage cluster. We can see that the storage cluster is also in the ready state. I'll quickly also show the web console for both of these clusters. This was one of the clusters belonging to that group, and in its web console we can see that OCS has been successfully installed, and we can get other details in the persistent storage tab. Similarly, this was the other cluster that was part of that cluster group, and if we check its web console, we can see that again OCS has been installed successfully. So with satellite we can bring our own infrastructure to IBM Cloud and run IBM Cloud services on it. The satellite storage templates can be used on multiple types of infrastructure, like AWS, Azure, GCP, and on-prem environments, amongst others. In conclusion, the advantage of using satellite storage templates is that they provide the ability to create a storage configuration that can be deployed across multiple clusters, instead of deploying OCS on every single cluster individually.
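The assignment and verification steps walked through above might be sketched as follows. The `ibmcloud sat storage assignment` flags and resource names are assumptions for illustration; the `oc` checks mirror what the demo shows on screen.

```sh
# Sketch: assign the previously created configuration to a cluster group,
# so OCS rolls out to every cluster in the group at once.
# Group, config, and assignment names are illustrative assumptions.
ibmcloud sat storage assignment create \
  --group my-cluster-group \
  --config ocs-remote-config \
  --name ocs-remote-assignment

# On any member cluster, the template drives the IBM OCS operator.
# Verify the rollout the same way the demo does:
oc get pods -n openshift-storage           # OCS operator and Ceph pods should be Running
oc get storagecluster -n openshift-storage # the storage cluster should report Ready
```

Because the assignment targets the group rather than an individual cluster, adding a new cluster to the group later picks up the same storage configuration without repeating the install.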
So this concludes the demonstration, and now I'd like to hand control back to Sandy. Thank you. Thank you, everybody. I wanted to say thank you to Akash and Suresh for doing the demos. And I'd like to open it up for any questions. I've seen some healthy discussion in the chat. If you want to ask any more questions live, we'd be happy to take them at this point. Sandy, I wanted to make sure we address something first, because it's in your slides and we didn't mention it at the beginning of the call. OpenShift Container Storage is going through a rebranding to OpenShift Data Foundation. So I just wanted to put that out there as people look at the slides after the briefing: if you see ODF, that is also OCS. Correct. Yeah, the branding. The branding is partly because there are going to be these two tiers, a basic capability and an advanced capability, and those advanced capabilities will grow over time with additional features, as will the basic capabilities, is my understanding. But the actual operator names are not going to change from what I showed, from a technical resource standpoint. Thanks. Did I address that correctly? I believe you addressed that correctly. I just wanted to throw that out there because it appears in the slides, and we'll see it across not just these presentations but many different assets. So, looking back at the questions, we had some great ones. Akash addressed some of them in the chat, but I wanted to cover them live as well, so Sandy can add some input, and for those that aren't viewing the chat: the t-shirt sizes are a combination of cluster size and the number of nodes, and we're yet to find out more; I'm reading your answer, Akash. Let's go back to Keith's question: can you expand on the OCS t-shirt sizes? What's the minimum and what's the maximum of the cluster size? Is this in terabytes or nodes? Go ahead, Akash.
We've been thinking more about the size itself, and then based on the size we would calculate how many OSDs you need for the environment that you're deploying on. So we have to figure out all those choices and all the storage classes, because many of our users are application users. They're not storage users, and sometimes they don't even know what block storage is. They just want a read-write or read-only interface for their PVC; that's all they really know. So that's what we're targeting. Exactly. Would you like to add anything, Akash? No, I think Sandy covered it, so I'm fine. Another great question: the demo showed a lot of great features, so can you talk about what DR features, disaster recovery, you have? Do you have replication? Do you have integration with backups? Can you discuss that further? Okay, so within the essentials package that we'll be offering, it does have replication within a zone or across a metro area, within a metro cluster, so that's all synchronous. There's work going on in OCS where they're actually developing multi-cluster DR, with both asynchronous and synchronous capabilities. And there will be some time difference involved in copying the whole cluster state, or partial cluster state, from one cluster to another. That includes the actual data and the application resources, so you can make one cluster equivalent to the other cluster, and that can be done across regions. That is still going to be coming out a little later this year, but that's how we're going to offer it: we will initially offer the essentials package and then roll out the advanced package as those capabilities come up. Thank you. That's good to hear that that's on the roadmap; those are very important features that I know a lot of people are looking for. Let's roll into another question about CSI.
So many storage providers have CSI plugins. Can you share the top two benefits of using OCS over those plugins? Yeah, so a lot of storage providers are hardware systems, and so they have a data gravity to where they are located. A software-defined solution such as OCS can be deployed right alongside your application, and as a result you can take your storage and your application and move them anywhere. So portability is one of its benefits. And then many of the container-native storage solutions have the advanced capabilities that I mentioned, such as the security features that continue to grow, because people care about encryption; they care about using the native key management systems that are out there as they deploy across different providers; they care about disaster recovery. All those capabilities are typically not provided by CSI drivers themselves. OCS, or ODF, as Karina said, is an overall packaging of capabilities; CSI is one of the capabilities that OCS provides for its storage features, and that will be a growing list over time. Thank you. We had a good question about how, since OCS is managed through its operator, the CSI storage is managed. You touched on it a little, but go ahead, Akash, you can take that. Yeah, so in the case of our operator, as I said, the operator is kind of a wrapper on top of the existing OCS operator and the CSI driver, so it makes the user experience much easier and reduces user errors. In the early days, when we didn't have this operator, we saw many user errors and queries from our users, so basically this operator simplifies that deployment. You can consider it like a wrapper on top of the existing operator. I'm curious, and I'm just looking through the chat, there's a great discussion going on, about billing, and that's something a lot of people care about.
Sometimes people get surprised with billing, especially when it comes to storage, as you're replicating volumes and so on. So how do you have billing set up, and how granular is it? Can one of you talk about billing briefly? Sure. Right now we're in the tech preview phase and we don't actually charge for anything, so it's a great time to try it out, and it's the same experience. When we roll out our GA and do billing, it'll be based on the nodes where you have OCS, or ODF, installed. It'll be based on the types of nodes: where it's a virtual server instance (VSI), billing will be calculated based on the vCPUs per VSI. We're still coming up with the actual charge for that metric, but once we have it, billing charges will be calculated based on that. And for bare metal, it'll be a straight monthly charge, also based on node instances. So wherever you install OCS, it's based on the number of nodes, and there'll be a charge for that. You can use it as long as you want, tear it down, and you only pay for what you use. Initially we'll have hourly plans for VMs and monthly plans for bare metal. We may look at other plans in the future, but at this point those are the plans we're targeting, at least initially. And with billing you get support, right, both from IBM and Red Hat? Yes, support comes through IBM and Red Hat: you first go to IBM, since you're purchasing through IBM Cloud, and you can use the IBM ticketing system to open tickets, but we're backed by Red Hat and their teams as well. Now, Sharisha, I didn't have a question to ask you. Is there something that you would like to bring up in our remaining minutes? Putting you on the spot. No, I don't have anything to add; everything's been covered. I didn't want to have you feel left out.
All right, Akash, Sandy, is there anything that you would like to leave the audience with, some key talking points or key business objectives on using OCS with IBM Cloud? I would say, if you're looking for a solution that's already integrated and easy to deploy, you should consider it. We do have other alternatives in IBM Cloud, and you can look at those, but OCS is integrated into the user experience, and we simplify the deployment across the kinds of environments where we run it. And if you're thinking about a hybrid environment, OCS is a good choice with IBM offerings using Red Hat. So give it a try and give us some feedback. And thanks for walking us through the technical side, being able to deploy it and scale it; that's all really important for all of us to see. So thank you. Thanks a lot, everyone. If you have any more questions, we'll have Chris close us out on the live streams. Thank you all.