In this three-part demo, we'll cover some key enterprise capabilities of the Dell EMC storage and data protection solutions for Kubernetes workloads. In software development, it is always a good idea to have realistic test cases. In this demo video, we are going to look at how you can take a copy of the production data of a containerized application and use it to spin up a test or dev instance of the application, so that the application can be tested with the latest production data. The storage platform here is PowerMax with its Container Storage Interface (CSI) driver, Kubernetes is the container orchestration platform, and we are using GitLab for the DevOps automation. Here is a sample application that we forked from GitHub. It is a bare-bones web app that manages a list of todo items, with SQLite as the backend database to store the todo item information. And here is where the magic happens: this is the persistent volume claim for the SQLite database that is part of the todo application. Here we are using a tag variable of the application instance to dynamically allocate the right storage volume. If the tag indicates a production instance, a claim is made against the production volume using the storage class SC, and in a little while we will see how to pass PowerMax as the value for SC. If it is not a production instance, a snapshot of the production volume is used instead. If the snapshot PV does not exist, Kubernetes creates a snapshot of the production persistent volume and then uses it. This way, every non-production instance gets its own persistent volume that is the latest copy of the production data. And here is the YAML for the snapshot definition. The snapshot class used here will be PowerMax-SnapClass; again, SC is the variable, and we will see how PowerMax is passed in for it. Let's see this in action. Here is the production version of the app.
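The tag-driven claim described above can be sketched as a Helm template. This is only an illustrative reconstruction: the resource names, the `tag` and `sc` value names, and the volume size are assumptions, not the actual chart from the demo repository.

```yaml
# Hypothetical Helm template for the tag-driven claim; all names and
# sizes are illustrative, not taken from the demo's actual chart.
{{- if eq .Values.tag "production" }}
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: todo-db-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: {{ .Values.sc }}          # e.g. powermax
  resources:
    requests:
      storage: 8Gi
{{- else }}
# Non-production: snapshot the production volume and claim against it
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: todo-db-snap-{{ .Values.tag }}
spec:
  volumeSnapshotClassName: {{ .Values.sc }}-snapclass
  source:
    persistentVolumeClaimName: todo-db-data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: todo-db-data-{{ .Values.tag }}
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: {{ .Values.sc }}
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: todo-db-snap-{{ .Values.tag }}
  resources:
    requests:
      storage: 8Gi
{{- end }}
```

With this shape, each non-production tag gets its own snapshot-sourced PVC, which matches the behavior described in the narration.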
Let me add a few items that are in fact a must in today's world. Now let's spin up a dev version of this application to make some changes to the app. Before we do that, a quick check on PVCs and PVs in the namespace: you can see the persistent volume claim and the persistent volume for the production instance, and there are no snapshot volumes here. Cool. Now I create a new branch called colors, and the change I'm going to make is to change the color of the title on the web page. I can do a quick diff to make sure, and then commit and push the change. This creates a new pipeline in GitLab. Let me navigate to the build, and here is the helm command for the application deployment in Kubernetes, where we set things like a unique namespace for the instance and pass in the relevant variables. Of course, the variable SC, which stands for storage class, is being set to PowerMax. The build and deployment are completed, and here we can see that the color of the title has changed. And if you notice, the data is the latest production data, since it's a snapshot of the production volume. Now we can go back and check that the new snapshot volume has been created, and we can also see the corresponding persistent volume and volume claim in addition to the production objects. I can also check the Unisphere environment to see that the snapshot has been created as part of the deployment, within the same storage group. This same workflow can be used with any of the Dell EMC storage platforms listed here; these are the respective snapshot storage classes for each platform. In this second demo, we'll see how Dell EMC PowerStore can be used to deploy containerized applications on VMware Tanzu Kubernetes Grid. Here we click on create a new namespace, select our WCP cluster, specify the name, and then just click create. We can see that the namespace creation completed successfully and the Kubernetes status is active.
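A GitLab deploy job of the kind shown in the pipeline might look roughly like the sketch below. The job name, chart path, and value names are assumptions made for illustration; only the idea of deriving the namespace and tag from the branch and setting `sc=powermax` comes from the demo.

```yaml
# Illustrative .gitlab-ci.yml deploy job; chart path and value names
# are placeholders, not the exact pipeline from the video.
deploy:
  stage: deploy
  script:
    - helm upgrade --install "todo-${CI_COMMIT_REF_SLUG}" ./chart
        --namespace "todo-${CI_COMMIT_REF_SLUG}" --create-namespace
        --set tag="${CI_COMMIT_REF_SLUG}"
        --set sc=powermax
```

Because the namespace and tag are derived from the branch name, each branch (such as colors) gets its own isolated instance with its own snapshot-backed volume.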
At this point, I'm adding the Dell EMC PowerStore storage policy I just created to our namespace. This will allow me to create the TKG persistent volumes on my Dell EMC PowerStore vVol datastore. Next, I'm logging in to my vSphere Kubernetes environment using the kubectl command, authenticating as the administrator@vsphere.local user. Then I'm switching to the new namespace I just created using the kubectl config use-context command. By running kubectl get sc, we can see the PowerStore policy; it appears as a storage class within the namespace. As you can see, I'm using this storage class in the manifest file of my TKG cluster. It is used for both the control plane and the worker nodes. You can also see the namespace and the TKG cluster name. Now I'm just going to create it, and then we can go back to the vSphere UI and see what's actually happening in our namespace. We can see that the new TKG cluster has been created and the control plane VM has been built out but hasn't been powered on yet. I will speed things up a little, as this process takes about 10 minutes to complete. By navigating to NSX-T, we can see that a tier-1 gateway is being created for the namespace, as well as a new load balancer. Now let's connect to our PowerStore cluster and navigate to the vVols tab. Here you can see that we have multiple vVols, and each of them belongs to a particular TKG VM. The type of vVol provisioned depends on the type of data being stored: config vVols store standard VM configuration data such as VMX, log, and NVRAM files; data vVols store data such as VMDKs, snapshots, and clones; and swap vVols store a copy of the VM memory pages when the VM is powered on. Now let's look into the guest TKG cluster I just created by specifying the Tanzu Kubernetes cluster name and namespace. We can also take a look at the nodes to make sure we are in the right context. Here we can see the single control plane node and the three worker nodes.
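The cluster manifest referenced above might look something like this minimal TanzuKubernetesCluster sketch. The cluster name, supervisor namespace, VM class, and Kubernetes version are assumptions; only the pattern of using the PowerStore-backed storage class for both node pools reflects what the demo shows.

```yaml
# Minimal sketch of a TKG cluster manifest; names and versions are
# placeholders, not the exact values from the demo environment.
apiVersion: run.tanzu.vmware.com/v1alpha1
kind: TanzuKubernetesCluster
metadata:
  name: tkg-demo                   # assumed cluster name
  namespace: powerstore-demo       # the supervisor namespace created earlier
spec:
  distribution:
    version: v1.20
  topology:
    controlPlane:
      count: 1
      class: best-effort-small
      storageClass: powerstore-policy   # the PowerStore storage policy
    workers:
      count: 3
      class: best-effort-small
      storageClass: powerstore-policy
```

The single control plane node and three workers here match the node counts seen later with kubectl get nodes.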
Now, let's create a stateful application inside the TKG cluster. For the purpose of this demo, I am deploying an application called YugabyteDB, which consists of three master nodes and three worker nodes. Each has a persistent volume claim, which will be backed by a single virtual volume. We can see that within a few seconds, all pods are up and running. We can access the application UI by navigating to the service address and specifying the application port. Persistent storage is presented through the VMware CSI driver, called CNS (Cloud Native Storage). CNS uses existing storage options for storage provisioning in a new way: first, it is based on storage policies, as we saw before, and furthermore, it uses first-class disks instead of standard disks. Dell EMC PowerProtect Data Manager and PowerProtect DD appliances are industry-leading data protection software and purpose-built hardware. You can deploy PowerProtect Data Manager capabilities in your Kubernetes environment to protect complete applications with the benefits of application consistency and highly efficient data reduction. You can do this both for on-premises and for public cloud deployments of Kubernetes. For this demo, we have chosen a Kubernetes deployment in AWS to show the data protection and management workflows. Our sample application is a MySQL database for a WordPress blog. Let's take a quick look at the app namespace that we are protecting; you can see the details of the pods, PVCs, config maps, etc. for the application. To enable application consistency, we introduced the concept of application templates. We are already shipping templates for MySQL and MongoDB, and more will be added in the future. You can also modify these, or create new ones yourself based on the requirements of your database. In the application template YAML file, the calls required for quiescing the database are under the pre-hook section, and the unquiescing calls are under the post-hook section.
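The one-PVC-per-pod pattern described above is what a StatefulSet's volumeClaimTemplates section provides: each replica gets its own claim, and each claim maps to its own vVol-backed volume. This is a minimal generic sketch with placeholder names, image, and sizes, not the actual database manifests from the demo.

```yaml
# Minimal StatefulSet sketch: one PVC (hence one vVol-backed volume)
# per pod; image, names, and sizes are illustrative placeholders.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db-worker
spec:
  serviceName: db-worker
  replicas: 3
  selector:
    matchLabels:
      app: db-worker
  template:
    metadata:
      labels:
        app: db-worker
    spec:
      containers:
      - name: db
        image: example/db:latest      # placeholder image
        volumeMounts:
        - name: data
          mountPath: /var/lib/db
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: powerstore-policy   # the PowerStore storage class
      resources:
        requests:
          storage: 10Gi
```

Deleting or rescheduling a pod reattaches the same claim, so each node's data survives on its dedicated virtual volume.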
The nice thing about these templates is that they are agentless, meaning the hooks do not require any agent to be pushed onto the production cluster to make the quiescing and unquiescing calls. To use this template for our application, we also make sure that we are using the right application label. Now let's create an app-consistent protection plan for this MySQL application. We'll go to the PowerProtect Data Manager UI and create a protection policy for the app using the policy wizard. Select Kubernetes for the application type. Leave the default crash-consistent option, since the application consistency is coming from the template YAML file that we already assigned to the application using the appropriate label. Here we select the MySQL app that we want the policy to apply to. Under Schedule, we specify the backup frequency and retention policies. Now let us go ahead and initiate a backup in the backup now wizard, using the policy we just created. Once the job is done, we can check the new copy of the application created on the Data Domain target. Now let us see how we can use this copy of the application deployment to spin it up on an entirely new Kubernetes cluster. Let me go back to the Data Manager UI and initiate a restore using the asset restore wizard. You select the cluster to restore the app to; here I'm selecting the cluster called target for the app to be restored to. And here I'm selecting the entire application to be restored, not just the database PVCs. You can restore to one of the namespaces that are already available on the cluster, or you can specify a new namespace for the restored app. The PVCs are already included for restore. I continue, and on the summary tab you can check the details and hit the restore button. Once the restore job is done, we can go back to the command line of this new cluster and check the newly created namespace and all the objects of the application's StatefulSet: the pods, PVCs, config maps, etc. Thank you for watching.