 Is it audible? Good? Very good. All right. Welcome, everyone. Thank you for coming out post-lunch. I'm glad you all could make it to the session. My name is Rahul Sharma, and I'm the principal product manager for our container products at Dell. Today I'll be talking about our open source storage solutions for cloud-native stateful applications. Just by a show of hands, we've got a few people in the audience here: how many of you have stateful applications running in your environments at your organizations? Good to hear that. And how many of you, by a show of hands, have more than one cloud running in your environment, like public cloud, private, more than one public? All right. So let's just agree that we're all in a multi-cloud world. We had this announcement at Dell Tech World, and we talked about how it's not going to be just public cloud or just private cloud; it's going to be a blend of these environments. And we found research to support that: research from IDC found that more than 90% of companies worldwide will rely on a mix of private, public, and more than one public cloud environment. We see that reflected in a lot of our customer environments today. Speaking of this multi-cloud environment, in a stateful application context, what our customers are looking for is really a consistent experience across private cloud and public cloud, and that same consistency in setting up storage for their stateful applications. So when you think about stateful applications, what do you believe are the most significant challenges for the stateful applications that customers are running in their environments? We see a number of them. High availability is a key one. Stateful workloads and persistent storage have been around for a while, but the problem of high availability is not fully solved. 
We'll talk about how at Dell we're approaching this problem and what kind of products we have out there to address it. We see a lot of customers who have traditional roles: some have storage admins or storage engineers who are still getting up to speed with containers, cloud-native skill sets, and Kubernetes skill sets. On the other hand, we have a set of customers that are really good at setting up their cloud-native environments with Kubernetes and containers, but they don't have as much knowledge of the storage side of the equation. So we're trying to fill the gap there. Compliance and security, as we heard in the keynote this morning, is a major concern with stateful applications. Cost control is another one that comes up in many, many customer conversations. We hear all kinds of cases, but one that stands out to me is a mid-stage startup that told us, "We started off in the public cloud and thought we'd have the benefit of that flexibility and elasticity." What they found was that ultimately they just lost complete control of their costs. They came to us and asked, "How do we rebuild this in our private cloud environment, where we can actually control those costs?" So that's another major challenge that we see. And finally, application portability and deployment. With containers, portability is practically a founding principle, in terms of mobility and portability. But the missing piece is the mobility of the volumes that are associated with those containers and cloud-native applications, and we'll talk about how we're solving for that. In terms of the environments that we see customers running today for their stateful applications, we see customers with their container orchestrator, Kubernetes, running on bare metal. 
We see some customers with a virtualized cloud-native environment, and with some customers we see a blend of the two. That's really the most common operating model we see customers using today. Now, irrespective of the model, what they need underpinning that environment is the infrastructure elements: the compute, the storage, and the networking. What I'll be focusing on today is how we are enabling customers to set up persistent storage for their cloud-native applications. We have three key products in this space today. We have our CSI drivers, based on the CSI specification from the Kubernetes community. We have our container storage modules, which offer advanced enterprise data services built on top of the CSI spec and the CSI foundation. And from a data protection standpoint, we have PowerProtect Data Manager, which can protect your Kubernetes or containerized workloads. I'll be focusing on just our CSI plugins and our container storage modules. Speaking of our CSI plugins, we've got a CSI plugin for all kinds of Kubernetes workloads. For block, we've got scale-out software-defined storage with PowerFlex. We've got really high-end block storage in PowerMax, which many financial services institutions use. And we've got unified storage, with block and file on a single platform, in PowerStore as well. In terms of vVols, for any of you who use a Tanzu environment, you can leverage vVols and SPBM, and we support that through our NFS CSI drivers, or you can use CNS to access block storage. And again, in terms of file shares, if you have an environment where your pods directly access NFS shares, we've got PowerScale storage, or if you want to use the CSI driver to spin up your NFS environments, we've got PowerScale storage out there as well. 
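To make that provisioning flow concrete, here's a minimal sketch of how a Dell CSI driver is typically consumed from Kubernetes: a StorageClass that points at the driver's provisioner, and a PVC that dynamically provisions a volume on the array. The provisioner name and the storage pool parameter below are illustrative assumptions, not verified values; check the driver documentation for your specific platform.

```yaml
# Hypothetical StorageClass for a Dell CSI driver (names are assumptions).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powerflex-block
provisioner: csi-vxflexos.dellemc.com   # PowerFlex CSI driver; varies by platform
parameters:
  storagepool: pool1                    # array-side pool to carve volumes from (assumed name)
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true              # enables the CSI resize operation
---
# A PVC against that class; the CSI driver creates a matching volume on the array.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: powerflex-block
  resources:
    requests:
      storage: 8Gi
```

Any pod that mounts `db-data` then gets array-backed persistent storage, regardless of which Kubernetes distribution it runs on.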
From an object storage standpoint, we just announced ObjectScale at Dell Tech World, which is going to be a new object storage platform, so for any of your object storage needs, you can always use that. Today, your Kubernetes clusters can access it through the object store APIs. In the future, as some of you may know, there's a new specification coming out called the Container Object Storage Interface (COSI). We'll have a COSI driver that will enable you to set up your object storage in a private cloud environment with a Kubernetes cluster. So that's coming out shortly. When it comes to Kubernetes distributions, whether it's EKS, Rancher Kubernetes Engine, Azure Stack HCI, Google Anthos, Tanzu, OpenShift, or Mirantis Kubernetes Engine, we basically support and qualify all of the major distributions out there. And it doesn't matter what workload you're running on them, whether it's a database, an analytics workload, or an AI/ML kind of application; we support all of those today from a stateful application perspective. And of course, we've got HCI products as well, things like VxRail and PowerFlex, plus the storage arrays I described, and we've got data protection that is qualified on most of the popular Kubernetes distributions out there. From a Tanzu perspective, like I described, if you're using policy-driven dynamic provisioning, you can do that on Tanzu with our storage in the backend. If it's block, you can use vVols or VMFS with SPBM. If it's file, you can use our NFS CSI drivers to serve file storage in the backend for your Kubernetes clusters running on Tanzu. And we're seeing a lot of adoption of this kind of configuration picking up. So if you have any questions about that, feel free to have a chat offline or come to the booth later and we can dive into the details there. 
So now let's talk about what's not covered by the CSI spec or the CSI drivers. Apart from the standard volume creation, deletion, update, resizing, and cloning operations, what's not covered? Monitoring is not covered; we're addressing that through observability. DR and replication are not covered; we're plugging that gap through our container storage modules for replication, which are purpose-built for our storage arrays, leveraging the native replication capabilities in the arrays. From an authorization perspective, there is Kubernetes RBAC available today, but what it doesn't do is quota management. We're plugging that gap with our authorization CSM module: storage admins can set up tenants, assign them a JWT token, set up a quota, and enforce that quota for those tenants. In terms of resiliency, what we're doing is identifying any kind of hardware failure, Kubernetes control plane failure, or network failure, and rescheduling pods automatically. We've automated that workflow so that whenever there's a node or hardware failure, it can automatically reschedule the pod, unmap the volume from it, and remap the volume to the rescheduled pod on a healthy node. In terms of volume placement, that's a module on the roadmap for now, but the idea is that whenever a developer fires off a PVC request, we would match that PVC with an array based on performance characteristics, whether it's SSD, HDD, or NVMe, or even based on the available free capacity on the arrays, so that there's better load balancing. And then we also added an enhancement to the snapshot feature through the CSI driver, volume group snapshots, which basically enables crash-consistent backups of your applications. 
So what you can do is take a snapshot of a group of volumes together, where you have your application deployed, and you can do that through the sidecar that we've created, so that you maintain referential integrity and have a crash-consistent backup of your applications. So now let's dive into some of the details, and I'll even provide a demo for each of these modules. For the observability module, we are leveraging the OpenTelemetry collector, which scrapes the performance data and the capacity utilization data from the storage arrays. It stores that data in Prometheus, and we visualize it in Grafana dashboards. What it shows customers is storage pool consumption, system IO performance, the provisioned volume topology, and all of the key characteristics that you'd want to observe from a storage perspective for your stateful applications. The way to deploy it today is through Helm charts; we're also adding it to our operator. So let's look at one of the demos. You see we are using Rancher Kubernetes Engine here. We've got our OTel collector available in the marketplace; you deploy the collector. We've also got our Grafana dashboard set up here with Prometheus in the backend, and you see all of the key usage characteristics: memory usage, file system usage, topology data. We're using our PowerFlex storage array in the backend here for demonstration purposes. You can even see the topology view within Kubernetes, which shows you all of the provisioned volumes and which array they're connected to. And here is a view of the capacity utilization through the observability module. In the future we'll be adding customizations to these dashboards as well. And here you can see all of the performance characteristics of the arrays that have been made available through the observability module. So let's talk about replication. 
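Before moving on, here's a sketch of what the volume group snapshot request described a moment ago might look like: the group is typically selected by a label on the PVCs, and a single custom resource asks for a crash-consistent snapshot of all of them together. The apiVersion, kind, and field names below are assumptions based on Dell's CSM repositories; verify them against the current CRD before use.

```yaml
# Hypothetical volume group snapshot request (CRD and field names are assumptions).
apiVersion: volumegroup.storage.dell.com/v1
kind: DellCsiVolumeGroupSnapshot
metadata:
  name: wordpress-vgs
  namespace: prod
spec:
  driverName: csi-vxflexos.dellemc.com   # which Dell CSI driver owns the PVCs (assumed)
  pvcLabel: app=wordpress                # every PVC carrying this label is snapped together
  memberReclaimPolicy: Retain            # keep member snapshots if the group object is deleted
```

Because all member volumes are snapped at the same point in time, the resulting backup preserves referential integrity across the application's volumes.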
This really is one of the key differentiators for us when it comes to supporting stateful applications and containerized workloads. I'm not sure how many of you have heard about our SRDF capability on PowerMax, our block storage array. For containerized workloads, we basically extended that SRDF capability on PowerMax to Kubernetes through our CSM replication module. We've got a sidecar that runs on the controller, on the master nodes and the worker nodes. And we've created a command-line utility called repctl that you can use for failing over volumes to your DR site. You can fail them back, you can re-protect them, and it supports both a stretched cluster configuration and a replica cluster configuration. Similarly, for your file storage needs on PowerScale, we're leveraging SyncIQ today. In the same way, we'll keep leveraging our array replication capabilities and extending them to Kubernetes. On to authorization: like I mentioned, Kubernetes today has a native RBAC capability, but what it lacks is the ability to do quota management and enforcement. The way we enable that is by deploying a proxy between our CSI drivers and the storage system, where we enforce role-based access and usage rules. What the admins can do is create tenants and assign them JWT tokens; the tenants access the storage using those tokens, they're assigned a quota, and that quota can be enforced by the storage admin. We also enable credential shielding. So if you've got multiple developers or multiple tenants running on a cluster, they can't just use up all of the storage that you've assigned to a Kubernetes cluster; you can control the usage of storage through the authorization module. We enable this today through another command-line utility we've created called karavictl. Karavi, by the way, was the internal code name for our container storage modules. 
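Circling back to replication for a concrete sketch: replication is typically requested per StorageClass through driver parameters, after which repctl drives failover and reprotect. The parameter keys and the repctl invocation shown below are illustrative assumptions sketched from the description above; consult the CSM replication documentation for exact names and values.

```yaml
# Hypothetical replication-enabled StorageClass (parameter keys are assumptions).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: powermax-srdf
provisioner: csi-powermax.dellemc.com
parameters:
  replication.storage.dell.com/isReplicationEnabled: "true"
  replication.storage.dell.com/remoteClusterID: "dr-cluster"          # target Kubernetes cluster
  replication.storage.dell.com/remoteStorageClassName: "powermax-srdf" # class to use on the DR side
  replication.storage.dell.com/rpo: "Five_Minutes"                     # SRDF-style RPO (assumed value)
# Volumes provisioned from this class are grouped into a replication group;
# failing that group over to the DR site would then be driven by repctl,
# roughly (flags are assumptions):
#   repctl --rg <replication-group> failover --target dr-cluster
```

Keeping the replication intent in the StorageClass means developers request DR-protected storage the same way they request any other volume.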
So on some of our GitHub repos you might see references to that. Let's look at a demo again. We're using Rancher Kubernetes Engine, we've deployed the Helm charts for our authorization CSM module, and we're using a CockroachDB database for this demo. You'll see that the storage admin creates a tenant and assigns a quota to that tenant, and then the tenant gets a bit naughty and tries to exceed that capacity; you'll see that this is denied, because we've given the admin the capability to enforce the quota that's been created. So you see the admin sets the quota limit to three gigs, you see the tenant listed here, and one of the tenants goes in and tries to create a volume. They try to exceed that volume capacity, and then you see that the request was denied: not enough quota available. This really just demonstrates how we're extending capabilities that are available on our storage platforms today but that the CSI spec lacks. And by the way, all of this, like I've already mentioned, is open source, free of charge for all of our customers today. So let's talk about resiliency and how we enable that. Today, you set up a Kubernetes cluster and you have a hardware failure. We're demonstrating that right now: we have a Kubernetes cluster with three masters and two workers running on vSphere. We disabled a VM just to show what happens to the pod, and you'll see it gets stuck in a terminating state. Nothing happens. So how do you solve that? Generally, you have to drain the node and manually intervene to reschedule that pod onto another healthy node. We spoke to customers, they brought this problem to us, and we thought about how to fix it. 
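The tenant-and-quota flow from that demo looks roughly like the following from the storage admin's side. The karavictl subcommands and flags here are illustrative assumptions reconstructed from the demo narrative, not verified syntax; check the CSM Authorization documentation before relying on them.

```shell
# Hypothetical karavictl session (subcommands and flags are assumptions).

# 1. Register a tenant and bind it to a role that caps usage on the array.
karavictl tenant create --name team-a
karavictl role create --role-name dev-role --quota 3Gi   # the three-gig limit from the demo
karavictl rolebinding create --tenant team-a --role dev-role

# 2. Issue the tenant a JWT; workloads present this token to the
#    authorization proxy sitting between the CSI driver and the array.
karavictl generate token --tenant team-a > team-a-token.yaml

# A PVC from team-a that would push usage past 3Gi is then rejected by the
# proxy with a "not enough quota" error, as shown in the demo.
```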
The result of that was the creation of this resiliency module, which is essentially a pod monitor sidecar that we created, and what it does is make the persistent storage more resilient to failures. This pod monitor is a sidecar to the CSI driver, so when you're deploying the drivers with the Helm charts or the operator, you can deploy the resiliency sidecar as well. It's deployed on both the controller pods and on the masters and the workers, and it looks out for any kind of hardware failure. It puts a taint on the failed node, reschedules the pod onto a healthy node, unmaps the persistent volume in the backend, and remaps the volume to the healthy pod, and all of that is done in the backend in an automated fashion. So let's look at a demo for that as well. Again, we're using an RKE cluster here, and we've deployed our resiliency sidecar in the backend. We're using CockroachDB deployed on a three-node environment. Then you'll see that we manually disable the network connectivity on one of the nodes, and in the background, automatically, the sidecar will have put a taint on that node. It will have rescheduled the pod onto another healthy node, and the volume will also automatically have been unmapped and remapped onto the healthy pod. All of that happens in the background: you see we just manually intervened and killed the node, and soon after you'll see it's up and running again, like nothing happened. So especially in cases where you have databases spread across multiple nodes, and you want to automate that resiliency piece and have a truly highly available application. 
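As a sketch of how workloads opt into the resiliency sidecar: once the pod monitor is deployed alongside the driver, protection is per-pod, typically via a label on the workload's pod template. The label key and value below are assumptions based on Dell's CSM resiliency documentation; check the exact convention for your driver.

```yaml
# Hypothetical StatefulSet fragment opting its pods into pod-monitor protection
# (the podmon label key/value is an assumption).
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cockroachdb
spec:
  serviceName: cockroachdb
  replicas: 3
  selector:
    matchLabels:
      app: cockroachdb
  template:
    metadata:
      labels:
        app: cockroachdb
        podmon.dellemc.com/driver: csi-vxflexos   # tells the pod monitor to watch this pod
    spec:
      containers:
        - name: cockroachdb
          image: cockroachdb/cockroach:latest
          volumeMounts:
            - name: datadir
              mountPath: /cockroach/cockroach-data
  volumeClaimTemplates:
    - metadata:
        name: datadir
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

On a node or network failure, the monitor taints the failed node, the pod is rescheduled, and the backing volume is unmapped and remapped to the new pod, with no manual drain required.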
I think this piece is really useful in those kinds of use cases. So again, just showing those nodes, and you see it's back up and running again in that automated way. Now, zooming out a little bit and talking about our storage products: ultimately what we're trying to enable is performance, resilience, and scalability. And when you compare that with many of the products out there today, you see that there really is no match for our storage products, especially for containerized workloads. With PowerFlex, you can get up to 2.2 million IOPS. In terms of resiliency, I just demonstrated that we have a CSM replication module, and we also leverage our array-native replication capabilities like SRDF on PowerMax, which really enables disaster recovery and high availability. And in terms of scalability, on PowerMax you can scale up to 64,000 vVols, and you can have up to 40,000 NFS exports per cluster on PowerScale. There really aren't many alternatives out there when it comes to that level of scale. So now that we've talked about what's available today, we also wanted to talk about what we have coming out in the future. We talked about monitoring through observability, DR through replication, quota management and RBAC through authorization, hardware failure detection and recovery through resiliency, volume placement through intelligent volume placement, and crash-consistent snapshots through volume group snapshots. What's next? We talked about application mobility, or portability and deployment, as a key challenge. We're solving that through the application mobility module, and I'll dive into a few more details about it. 
In addition, another challenge we've heard from customers is: how do they enable encryption for containerized workloads? So we will have another module coming out, called CSM Secure, which is going to enable data security. It will enable encryption of volumes, and it will enable integration with external key managers such as HashiCorp Vault. So let's talk about application mobility and how we enable that. Today we enable customers to run our persistent storage in the backend for on-prem Kubernetes clusters using our CSI drivers. Customers that have hybrid cloud environments and are using multiple container management platforms or Kubernetes distributions, like Rancher, OpenShift, Mirantis, and Tanzu, can use our CSI drivers as well to expose our persistent storage in the backend. For public cloud environments, we just announced at Dell Tech World that we'll be bringing our storage operating systems to the public cloud. So even in the public cloud, if you have Kubernetes clusters and cloud-native applications running, you'll be able to have a similar, consistent experience. Now that you have those standard CSI operations enabled, you would also need some of the enterprise data services I spoke about, like observability, replication, authorization, resiliency, and intelligent volume placement; we'll enable those in each of these environments. Now, what's missing? What's missing is the capability to move not just the metadata of the applications or the containers, but the volumes associated with those containers. We will enable that through application mobility. Through that module, you'll be able to move an application from on-prem to the hybrid cloud, or on-prem to the public cloud. And if you have a repatriation use case, we'll support that kind of use case as well through application mobility. 
So we put a little demo together here that I wanted to show; let me just disable the audio here. Imagine a use case where you have an on-prem environment and an environment in the public cloud on AWS. What you want to do, let's say, is a blue-green deployment kind of use case, or you just want to do test/dev on an application and take it from the on-prem environment to the public cloud. How do you do that with just a single command? What we did was wrap up some components in the backend: we use some open source tools like Velero and Restic, and we wrapped them with our controller. On the left is the on-prem environment, and on the right is the environment in AWS. With a single command, you can move the metadata of the application and the volumes to that public cloud environment. You see we've already configured the application; we're using a WordPress application with a MySQL database in the backend. We've already configured the components of the application mobility module on both the source cluster and the destination cluster, and we're demonstrating right now the key components in the AWS environment. You see we've got our application mobility controllers installed, and we've got the volumes configured on PowerFlex, our block array. This is the example WordPress blog that we've created, with a test entry in it. Now we'll use a single command to move the entire application from the on-prem Kubernetes cluster to the AWS EKS cluster. You can have any source cluster or any target cluster, and with a single command you'll see that soon the objects start backing up, or being cloned, to the S3 environment in AWS. You'll then see the volumes start showing up on the right, in the AWS EKS environment, as well. 
Then we'll demonstrate that we've automated the orchestration of that application from S3 to EBS volumes, orchestrating that directly with an EKS cluster, so that the application is up and running directly in that AWS EKS cluster. You see right now that the volumes are backed up to the S3 object store, and we see the application data there as well. Now we'll go to the EKS command line, pick up the external IP, and look at the application through that external IP, and you'll see that the same blog we were running on-prem is running exactly the same, with that test entry, exactly the same. There could be many use cases for this. You could have a resource augmentation use case where you need to run some additional GPUs in the cloud, or you have a really bursty kind of environment with very high CPU requirements: you let it run in the cloud, and once that season is over, you repatriate back on-prem. All of those tasks are orchestrated through a single command line, and that really is the value here. You'll see this module come out in the near future. We did talk about it at Dell Tech World, and we'll have a tech preview coming out soon for you to test. Speaking of encryption and security, as I mentioned, we'll have a CSM Secure container storage module, which is going to have a sidecar running that can encrypt your mission-critical volumes. We'll also enable integration with external key managers like HashiCorp Vault. So you can create your keys through HashiCorp Vault, use a key to encrypt a volume, and seal that key so that no other developer or tenant can modify that volume; then you can unseal that key and use the volume in a decrypted configuration as well. So that's again a future-looking module that you'll see come out in the near future. 
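Since CSM Secure is still future-looking, here's only a generic sketch of the Vault side of the key workflow just described: creating an encryption key in Vault's transit secrets engine that an external consumer could use for envelope encryption of volume data. The Vault commands are standard transit-engine usage; how CSM Secure will actually consume the key is not yet public, so treat the integration itself as an assumption.

```shell
# Standard HashiCorp Vault transit-engine usage (the CSM Secure integration
# is not yet released; this shows only the key-manager side).

vault secrets enable transit                 # turn on the transit secrets engine
vault write -f transit/keys/csm-volume-key   # create a named encryption key

# Encrypt/decrypt small payloads (e.g. a per-volume data encryption key)
# through the transit API:
vault write transit/encrypt/csm-volume-key plaintext=$(base64 <<< "volume-dek")
vault write transit/decrypt/csm-volume-key ciphertext="vault:v1:..."
```

Because the key material never leaves Vault, sealing access to the key effectively locks the encrypted volume against other tenants, matching the seal/unseal behavior described above.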
All of the demos I presented today are available on YouTube. We've got a blog as well, that one of our VPs runs. It's a really cool blog, and I encourage you to check it out: volumes.blog. All of the details I spoke about today, you can find on that blog as well. In terms of our GitHub repos, I just wanted to provide a few links here, along with the link to our docs site, in case you want to dive into additional details. So with that, yep, we're all done. And by the way, we do have a booth on the expo floor, so in case you want to come have a chat with me, or with Jen, or any of our other colleagues, feel free to come over and we can dive into additional details. With that, let me open it up for any questions. [In response to a question:] So like I said, with all of these container management platforms today, Dell has compute, network, and storage products that support these Kubernetes distributions. If a customer is looking to recreate their cloud-native environment from a public cloud on-prem, we actually have reference architectures published for Rancher and for OpenShift, and I think the team is also working on vanilla Kubernetes environments. Like in the case I spoke about, that's what our recommendation was as well: they can use those reference architectures to recreate those kinds of environments. You're welcome. Any other questions? All right, we'll be here and at the booth as well, so let me know if you have any additional questions. Thank you.