Hi, this is Mohamed Safe-Shake. And this is Alina Ryan. Today we'll be demonstrating how to dynamically provision storage resources for Windows containers on OpenShift. For this demo, we'll be using the Azure File volume plugin to provision ReadWriteMany persistent volumes. First, we'll show a shared persistent volume storage solution between Windows pods, and then we'll add a Linux workload to the mix.

Before you begin, you'll need to be logged in to an OpenShift Container Platform cluster, and the cluster must include one Windows worker node, one Linux master node, and one Linux worker node. To start, ensure you're logged in as the system admin by running `oc whoami`.

Because we're using the Azure File plugin, we first need to create a cluster role that allows access to create and view secrets, and then add that cluster role to the service account.

Next, we define and create a storage class. After creating it, you can confirm it exists in the web console by navigating to Storage, then Storage Classes, where you'll see the class you just created.

Next, we define and create a PVC, specifying the storage class name, our requested access mode (here, ReadWriteMany), and our requested storage size. Create the PVC, then confirm in the web console, under Storage, then Persistent Volume Claims, that the status is Bound for both the persistent volume and the persistent volume claim. You can also check in the terminal by running `oc get pv` and confirming that the status is Bound there as well.

Next, we define and create a workload. Ensure it targets Windows, specify the container mount path, and make sure it references the PVC claim name from earlier.
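The manifests aren't shown on screen in the video, but the cluster-role, storage-class, and PVC steps above might look roughly like the following sketch. All resource names, the SKU, and the storage size are illustrative assumptions; the service-account binding follows the documented pattern for the in-tree Azure File provisioner.

```shell
# Confirm you're logged in with admin rights.
oc whoami

# Cluster role that can create and view secrets (the Azure File
# provisioner stores account credentials in a secret), bound to the
# persistent-volume-binder service account.
oc create clusterrole azure-secret-reader \
  --verb=create,get --resource=secrets
oc adm policy add-cluster-role-to-user azure-secret-reader \
  system:serviceaccount:kube-system:persistent-volume-binder

# Storage class backed by the in-tree Azure File plugin.
# Name and skuName are assumptions for this sketch.
oc apply -f - <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azure-file
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
EOF

# ReadWriteMany PVC that references the storage class above.
oc apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-file-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: azure-file
  resources:
    requests:
      storage: 1Gi
EOF

# Confirm the claim and the dynamically provisioned volume are Bound.
oc get pvc
oc get pv
```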
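A Windows workload wired to that PVC could be sketched like this. The deployment name, image, mount path, and taint/toleration values are assumptions (the toleration shown matches the taint commonly applied to Windows nodes on OpenShift); only the PVC claim name must match the claim created earlier.

```shell
# Illustrative Windows deployment mounting the RWX Azure File volume.
oc apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: win-webserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: win-webserver
  template:
    metadata:
      labels:
        app: win-webserver
    spec:
      # Pin the pod to a Windows worker node.
      nodeSelector:
        kubernetes.io/os: windows
      tolerations:
        - key: os
          value: Windows
          effect: NoSchedule
      containers:
        - name: win-webserver
          image: mcr.microsoft.com/windows/servercore:ltsc2019
          command: ["powershell.exe", "-Command", "Start-Sleep -Seconds 360000"]
          volumeMounts:
            - name: shared
              # Assumed mount path; the demo refers to a "shared" location.
              mountPath: C:\shared
      volumes:
        - name: shared
          persistentVolumeClaim:
            # Must match the PVC created earlier.
            claimName: azure-file-pvc
EOF

# Wait for the pod status to reach Running.
oc get pods -w
```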
Go ahead and create that workload. You can check its status in the web console under Workloads, then Pods and Deployments, or in the command line by running `oc get pods` and waiting until the status is Running. Our pod is up now that the status shows Running.

Next, we exec into the Windows container's PowerShell, and here we navigate to the file path we specified as the PVC mount path earlier. The location here, `shared`, represents the dynamically provisioned PV with the ReadWriteMany access mode we requested. We create a test file here and then exit.

Next, we return to the web console, go to Pods, then Deployments, click the name of the Windows deployment, and scale it up to two replicas. We give that some time to deploy the second pod. Once its status is Running, we exec into the new Windows container's PowerShell. Here we navigate to the same file path and notice that the file we created in the first pod is readable and writable from this pod as well. We overwrite our original test file and then exit the command line.

Next, we demonstrate failover persistence with no data loss by scaling our pods down to zero and then back up. Once the deployment is up and running again, we exec into the Windows container's PowerShell and attempt to access the same file we created in our first deployment. We navigate to the path specified by the mount path and, once again, we can see the shared storage, even though the pods that originally created the files have been deleted.

Next, we'll demonstrate Windows workload to Linux workload communication. We'll deploy a Linux workload that runs at the same time as our Windows workload on a different node, showing that the storage is available to all nodes across the cluster, regardless of what type of underlying OS the containers are running on.
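The exec, write, scale-up, and scale-to-zero steps above might look like this on the command line. Pod names, the `C:\shared` path, and the file name are assumptions carried over from the earlier sketches; substitute the real pod name from `oc get pods`.

```shell
# Exec into the first Windows pod's PowerShell (pod name is illustrative).
oc exec -it win-webserver-<pod-id> -- powershell.exe
# Inside PowerShell:
#   cd C:\shared
#   New-Item test.txt -ItemType File -Value "hello from pod 1"
#   exit

# Scale up to two replicas; the second pod mounts the same RWX volume,
# so the test file is readable and writable there too.
oc scale deployment win-webserver --replicas=2
oc get pods

# Failover test: scale to zero, then back up, and re-check the file.
oc scale deployment win-webserver --replicas=0
oc scale deployment win-webserver --replicas=1
oc exec -it win-webserver-<pod-id> -- powershell.exe -Command "Get-Content C:\shared\test.txt"
```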
Here we have created a Linux pod using the same PVC as our Windows pods, and you can see that they are hosted on different nodes. Now we enter the shell of our Linux pod and navigate to the mount path specified in the pod's YAML. As you can see, the two files we created from our Windows containers are accessible, and their contents can be read.

Next, we perform a write from this Linux container and then take it down, in order to show failover persistence with no data loss. We delete the Linux pod that wrote the test file, leaving our cluster with only Windows containers. As you can see, the persistent volume still exists, and we'll confirm that the data written by our earlier pods is still there. We again enter the PowerShell of one of the Windows containers, and we've just verified that we can read the data written by the now-deleted Linux pod, along with the older data that the other Windows containers had written.

So there you have it: a ReadWriteMany persistent volume claim shared by Windows containers on OpenShift.
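The Linux side of the demo could be sketched as below. The pod name, image, and `/shared` mount path are assumptions; the key point is that the pod references the same PVC as the Windows deployment, so the same files are visible from both operating systems.

```shell
# Illustrative Linux pod mounting the same RWX claim.
oc apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: linux-reader
spec:
  nodeSelector:
    kubernetes.io/os: linux
  containers:
    - name: linux-reader
      image: registry.access.redhat.com/ubi8/ubi
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: shared
          mountPath: /shared
  volumes:
    - name: shared
      persistentVolumeClaim:
        # Same claim the Windows deployment uses.
        claimName: azure-file-pvc
EOF

# Confirm the Linux pod landed on a different node than the Windows pods.
oc get pods -o wide

# Read the files the Windows containers wrote, then write one back.
oc exec -it linux-reader -- sh -c 'ls /shared && cat /shared/test.txt'
oc exec -it linux-reader -- sh -c 'echo "hello from linux" > /shared/linux.txt'

# Delete the Linux pod; the data persists on the shared volume and
# remains readable from the Windows containers.
oc delete pod linux-reader
```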