Hi everyone, thank you for joining our session and welcome to KubeCon EMEA 2021. I'm Diane Patton, a technical marketing engineer here at NetApp, and with me is Jaymon George, who's also a technical marketing engineer. We support the Kubernetes initiatives at NetApp. Today we're going to talk a little bit about cloud native, application-aware data management within multiclouds. As we all know, Kubernetes originally started out supporting only stateless applications, but it's not just for stateless anymore. Many enterprises are starting to deploy Kubernetes with more stateful applications, including databases, but there are a lot of challenges around deploying Kubernetes with a stateful application. And we can see that enterprises are driving stateful application growth in Kubernetes: if we take a look at this survey from CNCF in 2020, approximately 55% of enterprises are running Kubernetes with stateful applications. That could include file workloads and CI/CD pipelines, in addition to what we all think of as stateful, which is databases. But there are some challenges around this. Containers are very different, as we all know, than VMs. A VM and its application are generally bound together, whereas containers were made the exact opposite way: we don't want to bind the application to the underlying OS. And that makes it a little more challenging to support applications the way we do in VMs today. So again, the containers can come and go, but we still need to support those applications and their data, including snapshots and backups, and we need to give those applications portability and disaster recovery if it's needed. So these are some of the challenges that we know about with enterprise workloads in containers. And to start off with, there's data persistence.
One of the main features of Kubernetes is the ability to scale up and scale down, and with that we're able to destroy and redeploy pods. The problem is, if we've got underlying data, we don't want to destroy the data along with the pod. So we need to look at storage persistence within Kubernetes: we need to make sure that even if a pod is destroyed, its volume is still available when a new pod gets deployed, perhaps on a different worker node. This is also why it's very important to look at your access modes. RWO, ReadWriteOnce, only allows access to a volume from one worker node. What happens if, in a specific deployment, a pod gets destroyed and redeployed on another worker node? Or a rolling upgrade is another good example; you might need access to that volume from more than one pod at once. So one of the things you might want to look into is running something like NAS, with the ReadWriteMany (RWX) access mode, in order to support those rolling upgrades, or the ability to dynamically redeploy a pod on a different worker node and still have access to the underlying volume. Along with the persistent storage need, we also need to think about how we bring that into a hybrid multi-cloud experience. You want to have a similar experience wherever you go, whether that's within a cloud, across clouds, or even on-prem. That is very important when we run Kubernetes workloads. Another important thing to think about, especially when we talk about multi-cloud, is that there's a big aspect of application migration, application movement between them. Traditionally, we all thought container workloads, Kubernetes workloads, could be moved anywhere easily.
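To illustrate the access-mode point above, here is a minimal hypothetical PersistentVolumeClaim sketch (the claim and storage class names are our own, not from the session): a ReadWriteMany claim backed by an NFS-style storage class lets pods on different worker nodes mount the same volume during a rolling upgrade, which ReadWriteOnce would not allow.

```yaml
# Hypothetical PVC; the storage class name "nas-rwx" is an assumption --
# use whatever RWX-capable class your CSI provisioner exposes.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-content
spec:
  accessModes:
    - ReadWriteMany      # mountable from multiple nodes, unlike ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: nas-rwx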
That was certainly true for stateless applications, but when it comes to stateful applications, which many of our enterprise customers are actually running as production workloads, it's not that easy anymore. There's a lot we need to understand. How do I move my application and its data from one Kubernetes cluster to another, whether that's within a cloud, across clouds, or from on-prem to cloud? We need to understand how an application is constructed in Kubernetes, compared to how we do that in a typical bare-metal or virtualized environment. That's why we think Kubernetes data management requires application awareness to do that job for you. In a virtualized application, all the configuration lived within the VM itself, but in Kubernetes, all the application metadata, what we call Kubernetes objects, is constructed along with the application. So if you look at the application's ConfigMaps, Secrets, StatefulSets, and persistent volume claims, we need to understand how it's actually been created and configured within Kubernetes. When we migrate the application, we should be able to migrate the configuration, its data, and the application state. That's an important part of migrating applications between clouds. Otherwise, what we end up doing is moving the data and then manually copying over all of those Kubernetes objects, which is not practical. So when we think about real-world use cases, we should have a way to automate that and get your application, its data, and its Kubernetes objects into the destination cluster. Right, and we want to be able to take a look at that application and view everything as one whole application. We want to take all of those Kubernetes objects, the PVs, all of that, and manage it together as one entity.
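To make the "application awareness" point concrete, here is a hypothetical sketch (all names are ours, not from the demo) of how a stateful app's configuration lives in separate Kubernetes objects rather than inside a VM image. Migrating the application means migrating all of these objects plus the data in the claimed volumes, not just the container.

```yaml
# Hypothetical StatefulSet: config, credentials, and storage are all
# separate Kubernetes objects that travel with the application.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: demo-db
spec:
  serviceName: demo-db
  replicas: 1
  selector:
    matchLabels: {app: demo-db}
  template:
    metadata:
      labels: {app: demo-db}
    spec:
      containers:
      - name: db
        image: mysql:8.0
        envFrom:
        - configMapRef: {name: demo-db-config}   # configuration as an object
        - secretRef: {name: demo-db-secret}      # credentials as an object
        volumeMounts:
        - {name: data, mountPath: /var/lib/mysql}
  volumeClaimTemplates:                          # per-replica PVCs hold the data
  - metadata:
      name: data
    spec:
      accessModes: [ReadWriteOnce]
      resources: {requests: {storage: 20Gi}}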
Another thing I want to add here, Diane, is that whether your application is running in containerized form or the traditional way, virtualized or on bare metal, the data is important. The data cannot be corrupted or lost, even when you are in Kubernetes. That's an important part for any business. The next thing we need to think about is: what is my business continuity model? I'm getting fantastic features from Kubernetes, I'm in Kubernetes, but how do I continue my business even when there is a disaster? What is my strategy for that? These are really key aspects to consider before even thinking about migrating an application to Kubernetes, or running an application in Kubernetes. And then there may be situations where there's a need for a large amount of compute. How do I handle that? If my data center doesn't have the capability to provide enough compute, what do I do to meet my compute needs in a short amount of time? That's where customers may look into migrating the app from on-prem to cloud, or from one cloud to another. Maybe they even have another data center with more capability to run these kinds of applications, and they're thinking about how to get the application from data center one to data center two. What are the strategies to do that if and when I need it? And some enterprise customers have a strategy of running an application in one data center for two months, and then the same application and its data need to run in another data center, their colocation or DR site, for the next two months. Exactly, so we need the ability to take whatever application we're running, be it on-prem or in a cloud, and run that same application along with its data in another cloud, or move it between on-prem and the cloud and back. And of course, we'd like to do all of this with one easy, uniform experience.
There is a solution out there to address the challenges we've gone through so far, and Diane is going to walk you through how we are addressing those challenges for our customers at NetApp. Thanks, Jaymon. So as Jaymon mentioned, we've been talking about multi-cloud and the ability to manage workloads on different clouds, all as one experience. This is the dashboard of Astra, which we were talking about; it shows that it's managing two applications and three different clusters. If we look at those clusters, we can see Astra has a cluster registered in Azure, and it also has two clusters registered in GKE, and they are all managed from the same interface. Looking at the cluster running on AKS, we can see that Astra knows the PVs from that cluster, the storage classes that Trident installed, and the CSI provisioner. If we take a look at one of the GKE clusters, we see very similar information; the storage classes that Astra installed are there as well. So we have a very consistent experience between GKE and AKS. If we take a look at the apps now, we can see the apps being managed; we're using WordPress as an example in this case. We have one WordPress instance running on Azure and another instance running in GKE. If we look at the Azure one and its data protection, we can see that we've got a snapshot already there, and if we take a look at the backups, there's also a scheduled backup that's been taken; we can configure the protection policy however we want. Next, if we go to storage, we can see the PVs and all of the resources being managed along with this application. And then if we take a look at the one on GKE, we see a very similar setup: we've got the data protection, which we can set up however we want, again with snapshots and backups; we can configure it if we like, go to the storage, and see the PVs and all the resources being managed by Astra together.
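For context on the snapshots mentioned above: Astra drives protection through its own interface, but the underlying Kubernetes primitive is the standard CSI VolumeSnapshot. Here is a minimal hypothetical sketch (the snapshot class and PVC names are our assumptions, not from the demo):

```yaml
# Hypothetical CSI snapshot of an application's PVC; a CSI snapshot
# controller and a VolumeSnapshotClass must exist in the cluster.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: wordpress-snap
spec:
  volumeSnapshotClassName: csi-snapclass      # assumed class name
  source:
    persistentVolumeClaimName: wordpress-content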
So again, it looks exactly the same as what we just saw with AKS. Next we'll take a look at WordPress, which is currently installed on the GKE cluster, and we can see that we have a blog here. Moving back to Astra, we're going to go to that specific application, the WordPress on GKE, and we're going to clone it. We just open the dropdown and hit clone. We fill in the clone details, we'll just name it "clone" so we can keep straight which is which, pick the destination compute cluster that we want, and then we just confirm it and it starts cloning. After a few minutes, the clone has completed, and if we click on it, we'll see all of those resources have been copied to the new cluster. The PVs are there; under data protection we can set up our own protection policy on that new clone if we'd like. Then we'll go over to services and ingress, and we're just going to grab the IP address of the new load balancer. We can log into that, and we can see that all of the content that was on the original cluster has now been copied over to the new cluster, and look, we can still see Mr. Fluffypants. So we have shown the ability to migrate that application between one cluster and another and have the same user experience between different clouds. Thank you for watching. We hope this was informative. Come by our virtual booth, we'd love to hear from you, and enjoy the rest of KubeCon EMEA.
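For readers following along: within a single cluster, the clone step demonstrated above can be sketched with the standard CSI dataSource field, restoring a new PVC from a snapshot. This hypothetical example uses names of our own invention; cross-cluster clones like the one in the demo also require copying the application's Kubernetes objects and data between clusters, which is what Astra automates.

```yaml
# Hypothetical same-cluster clone: a new PVC populated from an existing
# VolumeSnapshot (assumed to be named "wordpress-snap").
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-content-clone
spec:
  accessModes: [ReadWriteMany]
  resources: {requests: {storage: 10Gi}}
  dataSource:
    name: wordpress-snap
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io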