Hello, and welcome to another Portworx demo. Today's demo will show zero-RPO (recovery point objective) disaster recovery across two different Kubernetes clusters in Amazon Web Services. To be specific, we'll have an active site running Kubernetes 1.15 and a DR site also running Kubernetes 1.15. Portworx will be deployed across them as a single storage fabric with two cluster domains, a primary and a failover. This allows Portworx to keep these Kubernetes clusters in sync: Kubernetes objects are migrated between clusters, and the data in the single fabric is replicated across them. This lets Portworx and Kubernetes work together in case of a failure. We can turn the DR site into the active site, and because all the data is already there, we get a zero RPO and a very low RTO (recovery time objective).

So here are our two clusters. On the left you'll see our primary cluster; on the right you'll see our secondary cluster, the DR site. Each cluster has three workers: our primary has workers 1 through 3, and our DR site has a second set of workers 1 through 3. They're both running Portworx, and as you can see, Portworx forms a single cluster with all six nodes across both Kubernetes clusters. If we go to the DaemonSet, you can also verify this by seeing that Portworx is running with three pods, one on each worker in each cluster, for a total of six between them. From inside a Portworx DaemonSet pod we can get to the pxctl CLI by exec-ing in and running a few commands. Here we're going to show the cluster domains we talked about in the diagram. You can see workers 1, 2, and 3 belong to a domain named primary, and the second set of workers belongs to the DR domain. Both are active because both are healthy, even though our application is currently being served from the primary site. You can see that we have a demo namespace in our primary, but it doesn't yet exist in our DR site because we haven't paired them together or had a failure scenario. You can see the deployments exist in this demo namespace.
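As a minimal sketch of those checks, assuming Portworx is installed in `kube-system` with the pod label `name=portworx` (both vary by install), the steps above look roughly like this against a live cluster:

```shell
# Pick one Portworx pod from the DaemonSet (namespace and label are assumptions;
# adjust for your install)
PX_POD=$(kubectl get pods -n kube-system -l name=portworx \
  -o jsonpath='{.items[0].metadata.name}')

# The DaemonSet should show three pods per Kubernetes cluster (six nodes total)
kubectl get daemonset portworx -n kube-system

# Exec into the pod and check status: all six nodes across both Kubernetes
# clusters should appear as a single Portworx cluster
kubectl exec -it "$PX_POD" -n kube-system -- /opt/pwx/bin/pxctl status

# List the cluster domains (primary and the DR domain) and whether each is active
# (the exact subcommand can differ between Portworx versions)
kubectl exec -it "$PX_POD" -n kube-system -- /opt/pwx/bin/pxctl cluster domains show
```

These commands require a running stretch cluster, so treat them as a guide rather than a copy-paste script.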
Our demo application is a web front end talking to a Postgres database backed by Portworx. It's being served via Route 53, and you can see that the primary cluster is the healthy endpoint. We can go ahead and interact with the application by going to this DNS name and clicking around on the screen. Each click adds a database record to Postgres, backed by Portworx, for each logo we have here, so we have eight records in the database currently. If we go over to the CLI, we can also view our domains using storkctl; you can see that they're both currently active.

Now, to prepare for a failure scenario, we have to create what's called a migration schedule. This migration schedule targets the namespace and a cluster pair pointing at our DR cluster, and it includes resources but not volumes, because our volumes are already there: we're running in stretch-cluster mode. We can create this schedule and then watch the active migrations happening between the clusters. Here you can see our first migration is already in the application stage, because it doesn't need to move volumes; it's a single fabric, so the replicas are already over there. Once the objects are moved, we can refresh our DR site and see that the demo namespace was just created. That's because the schedule is now continuously copying Kubernetes objects, such as deployments, pods, and secrets, from one cluster to the other in case of a disaster scenario. To show this, we can go to the primary site and edit our labels.
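A minimal sketch of that migration schedule, assuming a Stork ClusterPair named `remotecluster` already exists in the `demo` namespace (all names here are illustrative, not the demo's actual manifests):

```yaml
apiVersion: stork.libopenstorage.org/v1alpha1
kind: SchedulePolicy
metadata:
  name: one-minute               # illustrative name
policy:
  interval:
    intervalMinutes: 1           # matches the one-minute cadence used in the demo
---
apiVersion: stork.libopenstorage.org/v1alpha1
kind: MigrationSchedule
metadata:
  name: demo-migration           # illustrative name
  namespace: demo
spec:
  schedulePolicyName: one-minute
  template:
    spec:
      clusterPair: remotecluster # assumed name of the pair to the DR cluster
      namespaces:
      - demo
      includeResources: true     # copy deployments, secrets, and other objects
      includeVolumes: false      # stretch cluster: data is already on the DR domain
      startApplications: false   # keep pods scaled down on the DR side until failover
```

After applying this with `kubectl apply -f`, `storkctl get migrations -n demo` lists each scheduled run, and `storkctl get clusterdomainsstatus` shows both domains as active.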
We can add another label here directly, called demo-label, and update the deployment. Since the migrations run continuously every one minute, as our schedule dictates, you can see the next migration occurring right now. Now it's done, and if we go back to the DR site and refresh, we can see that our deployment now has that label. The schedule will keep updating the DR site as the primary changes, making sure it has all of these changes. Note that the pods aren't running on the DR side yet, though, because we haven't had a failure. So let's go and create one.

Here we have a small script that will deactivate our primary cluster; you can see it's running a deactivation command. In case of a failure, you deactivate the failed domain the same way, and the pods start turning over so they can run on the DR site. You can see them shutting down on the primary site and coming back up on the DR site. Now all three web pods are running and Postgres is up, in a matter of 30 seconds. Here we can go to Route 53 and show that the primary becomes unhealthy initially, and if we refresh, the DR site takes over as the healthy endpoint. There we see that the DR site is now healthy, so Route 53 understands that the application is now running in the DR site. When we refresh, all our data is still there, and we can go ahead and interact with the application and add more records to the database while it's on the DR site.

Now, if your primary site comes back online, say you recovered from that failure, you can fail back to the primary site. Activating the primary kicks off a process that syncs any deltas, any new data written while the application was on the DR site, back to the primary. Once that data is written back to the primary, it becomes active again and the applications start up there. You can see them spinning down in the DR site once again and spinning back up in the
primary. In just a few seconds they're all down in the DR site and back up on the primary site. So now we've recovered our data, including any new data written to the DR site, back to the primary. Our DR site becomes unhealthy because it's not serving the application anymore, the primary site becomes healthy once again, and we can go ahead and interact with the application with all our data still there.

Now that it's back on the primary, the migrations get turned back on, so in case there's another failure in our primary site, we want to make sure our migrations are continuing to work. To show that, we can go ahead and edit the deployment and add a third label. Perfect, now that's being updated. We can go back and look at our migrations, which should be running; there we go, that migration just finished. If we go to the application, we can see that our third label is now there, so we're ready for another failure and have continually been backing up our cluster.

Thank you for watching. Portworx can be installed via install.portworx.com, which gives you a way to generate a spec with everything needed to deploy Portworx on Kubernetes. Also visit our docs at docs.portworx.com and our socials at @portworx on Twitter. Thanks for watching; until next time.
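For reference, the failover and failback steps narrated above can be sketched with storkctl. The domain and namespace names (`primary`, `demo`) are the ones assumed in this demo, and the exact subcommands may vary with your Stork version, so check the Portworx synchronous DR docs before running these:

```shell
# Failover: mark the failed primary domain inactive
# (run against the surviving DR cluster)
storkctl deactivate clusterdomain primary

# Scale the migrated applications up on the DR cluster; this restores the
# replica counts that the migrations carried over
storkctl activate migrations -n demo

# Failback, once the primary has recovered: reactivate its domain so Portworx
# resyncs the deltas written on the DR side, then the apps move back
storkctl activate clusterdomain primary
```

These commands require the live stretch cluster from the demo, so they are a sketch of the flow rather than a standalone script.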