Hello all, and welcome to the KubeCon presentation on Commvault Distributed Storage for Stateful Containerized Applications. My name is Abhijit, and I am a Distinguished Engineer at Commvault. Joining me today is Shahgyu Wang, a Principal Engineer at Commvault, who will demonstrate the key capabilities we have developed for container ecosystems. Storage can sometimes be seen as an obstacle that adds overhead and complexity to application workflows, but data fuels applications, and persistent volumes have allowed stateful applications to become mainstream. Distributed storage is changing the way we think about storage, with a focus on data wherever it lives. We do this by centering on three key architectural principles: one, an agile infrastructure that unifies block, file, and object storage in a single platform; two, location transparency of data; and three, a fully programmable infrastructure that integrates seamlessly with your existing workflows. All of this without compromising on enterprise-grade resiliency, security, governance, scale, or performance.

Commvault Distributed Storage is a software-defined storage technology that supports a breadth of workloads. While I would love to talk more about this, the focus of this presentation is limited to how we deliver on this promise for container ecosystems. Let's take a look at how Commvault Distributed Storage adds value to your Kubernetes environments.

First, let's talk about simplified install and healing. We recently launched our official operator to manage all our storage components in Kubernetes and OpenShift clusters. This includes day-one installs and seamless upgrades in a predictable way that fully aligns with how you've been managing your Kubernetes clusters already. Our solution is programmable with Kubernetes APIs and industry-standard orchestration and visualization tools.
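As a sketch of what such an operator-based install can look like, an OLM Subscription is typically all that is needed. The package name, channel, and catalog source below are illustrative assumptions for this example, not the product's published values — check OperatorHub for the actual entries.

```yaml
# Illustrative OLM Subscription for installing a storage operator from
# OperatorHub. Package name, channel, and source are assumed values.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hedvig-operator            # hypothetical package name
  namespace: openshift-operators
spec:
  name: hedvig-operator            # hypothetical package name
  channel: stable                  # hypothetical update channel
  source: certified-operators
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic   # upgrades applied as they publish
```

With `installPlanApproval: Automatic`, OLM keeps the operator current as new versions land on the channel, which is what makes upgrades predictable in the sense described above.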
Our solution is also CSI-compliant, supporting both block and file storage along with other capabilities such as online volume expansion, snapshots, and clones.

Next, location transparency of data for application portability. A Commvault Distributed Storage cluster can span racks, data centers, public clouds, or even a mix of private and public clouds. A single distributed storage fabric is presented to the Kubernetes clusters, with policy-driven data placement to ensure that data is protected and available to your applications within the determined SLA.

Finally, enterprise-grade storage for stateful applications. For security, Commvault Distributed Storage fully integrates with any KMIP-compliant key management system, enabling users to bring their own keys. We provide storage efficiency through features such as inline global deduplication and compression, which can be dynamically configured through storage classes. We also support point-in-time snapshots and clones for restoring old data or for test/dev environments.

Before I hand it off to Wang for a quick demonstration of the platform, let me summarize what we will be showcasing in the demo. You will see how we can effortlessly manage the installation and upgrade of storage components in Kubernetes using our enterprise operator. You'll also see how multiple Kubernetes as well as OpenShift clusters can provision storage out of the same multi-site Hedvig cluster through a single distributed storage fabric. We will showcase how you can provision application persistent volumes with the desired level of fault tolerance across racks and data centers with ease. And finally, we will showcase other capabilities such as seamless expansion, snapshots, and cloning of volumes, with absolutely no application downtime. Throughout the demo you may observe references to Hedvig. Please note that we are actively renaming Hedvig to Commvault Distributed Storage.
Having said that, I'm going to hand this off to Wang so that he can quickly take you through a demo of all these capabilities.

Thanks, Abhijit. Hello, everyone. In this part of the presentation, I will demonstrate how Commvault enables stateful applications in Kubernetes. For this demo, we have a Commvault Distributed Storage cluster spanning two on-prem data centers and one data center on AWS. And we have one AWS EKS cluster and one on-prem OpenShift cluster.

Let me first show how easily we can install the Hedvig CSI driver on OpenShift using the operator. The Hedvig operator is listed on OperatorHub and can be installed directly from the OpenShift UI. With one click on Install, you will get it. After the install is done, go into the Hedvig operator. You can easily create a Hedvig deploy object, which contains the detailed information about your Commvault storage cluster. I have already created a YAML file for the multi-site cluster we are using; it references the nodes in that cluster. From the Hedvig deploy object, the operator automatically deploys the CSI driver and the proxies for managing both block and NFS volumes. In the rest of the demo, I will use oc or kubectl commands to showcase how Commvault Distributed Storage is fully programmable through Kubernetes APIs.

Now let's create an Elasticsearch application using Commvault storage. Here is the client machine pointing to the OpenShift cluster. To use Commvault storage, we first need a storage class. In the storage class, you can choose the replication policy to decide where you want to place your data and how many copies you want to keep, and also turn on enterprise-level features like deduplication, compression, or encryption. Here, I already have a storage class for NFS. The replication policy is data-center aware, and the data will be placed across all three data centers in the Commvault cluster. I already have a YAML file for Elasticsearch.
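A minimal sketch of what such a storage class might look like. The provisioner string and the parameter keys (replication policy, deduplication, compression) are assumed names for this illustration; the actual CSI driver documents its own parameter names.

```yaml
# Illustrative CSI StorageClass sketch. The provisioner and parameter
# keys are assumptions for this example, not documented driver values.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: hedvig-nfs-dc-aware
provisioner: io.hedvig.csi           # hypothetical driver name
parameters:
  backendType: hedvig-nfs            # file (NFS) rather than block
  replicationPolicy: DataCenterAware # one copy per data center
  replicationFactor: "3"
  dedupEnable: "true"                # inline global deduplication
  compressed: "true"
allowVolumeExpansion: true           # enables the online resize shown later
reclaimPolicy: Delete
```

A PVC that names this class in `spec.storageClassName` then gets a volume whose placement and efficiency settings follow the policy above.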
This YAML file uses the storage class we have and mounts the volume at the Elasticsearch data path. Okay, let's check whether the application is up and running by issuing a curl command to the Elasticsearch service. See, we get a response from Elasticsearch. Now let's log into the container. You see, the Commvault NFS storage is mounted at the Elasticsearch data path.

We have seen how persistent volume data can be replicated across multiple sites. Let's explore a scenario where you want your data to be located within a single site, for a pre-production or test use case, and how we provide the flexibility to do so. Here, we have an AWS EKS cluster in the us-west-2 region. The CSI driver and the Hedvig proxy are already set up here, and the storage class for block is already created. It will place the data only in the US West 2 data center. Now, let's create an application using this storage class. This YAML file creates a Postgres container using the storage class we created before and mounts the volume at the Postgres data path. Now, in the Commvault UI, you can see that all the data for this block device is located only in the AWS data center. The container is up and running. Let's go into the container. You see, the Commvault block storage is mounted at the Postgres data path, and the size of this persistent volume claim is 10 GB.

Now, sometimes people need to expand a volume as their data grows. With Commvault, we can easily do the resize, and you don't need to take down your application. Here, I will show you. The PVC is 10 GB; I will change it to 20 GB. You can see the PV has already been resized to 20 GB. And if you come to the Commvault UI, this is the block virtual disk we created before for this Postgres container. The virtual disk size is now 20 GB. Okay, now the PVC is also resized to 20 GB. Let's check the size inside the container by running df -h.
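The claim and the online expansion step can be sketched as follows. The claim name and storage class name are illustrative assumptions carried over from the demo; online expansion also requires `allowVolumeExpansion: true` on the class.

```yaml
# Illustrative PVC for the Postgres volume. Editing
# spec.resources.requests.storage on the live PVC (for example via
# `kubectl edit pvc postgres-data`) triggers an online expansion when
# the StorageClass has allowVolumeExpansion: true.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-data                      # hypothetical claim name
spec:
  accessModes:
    - ReadWriteOnce                        # block volumes are single-writer
  storageClassName: hedvig-block-uswest2   # assumed class name
  resources:
    requests:
      storage: 10Gi                        # change to 20Gi to grow online
```

After patching the request to 20Gi, `kubectl get pvc postgres-data` reflects the new capacity once the filesystem has been grown, with the pod still running throughout.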
You can see that within the container, the block device has also been resized to 20 GB, and you didn't need to take down your application at all. We can also create a snapshot of the volume, in case you need to roll back your data, very easily. Here I have a YAML file to create the snapshot for the Postgres volume we created before. Now the snapshot is ready to use. If you come to the Commvault UI, you can easily find the snapshot we just created for this block virtual disk. Later on, you can refer to this snapshot or create a clone volume from it very easily. Let's now create a clone from it. I have a persistent volume claim YAML file which uses the snapshot we just created. Now you can see we have a new persistent volume claim, which is 20 GB. And if you come to the Commvault UI, you will find a new block device created here. The detailed information shows it is a disk cloned from the snapshot we created before. That's all I wanted to show here. Thanks, everyone.

Thanks a lot, Wang. That was a very smooth and wonderful demo. For our audience, if you have any more questions, or if you would like to discuss any particular feature or anything else further with us, please visit us at our virtual Commvault booth. Feel free to also connect with us on our Slack channel, and to get more information about our complete offering for containers, you can always visit commvault.com/containers. Thank you all, and have a nice day.
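The snapshot-and-clone flow follows the standard CSI snapshot API; a sketch under assumed names (the snapshot class, claim names, and storage class below are illustrative, not values from the demo):

```yaml
# Illustrative CSI VolumeSnapshot plus a clone PVC restored from it.
# All names here are assumptions for the sketch.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgres-snap
spec:
  volumeSnapshotClassName: hedvig-snapclass   # hypothetical class
  source:
    persistentVolumeClaimName: postgres-data  # PVC to snapshot
---
# Clone: a new PVC whose dataSource points at the snapshot above.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-clone
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: hedvig-block-uswest2      # assumed class name
  dataSource:
    name: postgres-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  resources:
    requests:
      storage: 20Gi    # must be at least the snapshot's size
```

Because the clone is an independent PVC, it can be mounted into a separate test/dev pod without touching the original Postgres volume, which is the use case described earlier.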