All right, let's get started. I'd like to thank everyone who's joining us today. Welcome to today's CNCF webinar, Backup and Mobility for Kubernetes Applications. I'm Christian Jens, Cloud Consultant and CNCF Ambassador, and I'll be moderating today's webinar. It will be presented today by Vaibhav Kamra, co-founder and CTO at Kasten. A few housekeeping items before we get started: during the webinar, you are not able to talk as an attendee. There is a Q&A button at the bottom of your screen, below the presentation. If you have a question, click the Q&A button and use it to ask your questions. Feel free to drop them at any point in time, and we'll try to get as many of them answered as possible at the end of the presentation. This is an official webinar of the CNCF and as such, it is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct. Basically, be a nice person to all of your fellow participants and presenters. With that, I will hand it over to Vaibhav to kick off today's presentation. Thank you, Christian, and welcome, everyone. We're gonna talk about backup and mobility for Kubernetes applications today. Here's what we're gonna cover. The first two sections are really setting up what the problem is: I'd like to talk about the application patterns that we're seeing in Kubernetes, and then backup and mobility, what they are and why they're important, and we'll tie those two together. Once we get that done upfront, I'm gonna spend most of my time describing what it takes to really get this right in Kubernetes. We're gonna do a lot of demos here, so hopefully we can make it really interesting.
What I'm hoping is that by the end of the talk, all of us have a good understanding, as we're deploying applications in Kubernetes and managing more and more clusters, whether as developers or operations folks, of what we should be thinking about in terms of backup and mobility, how to approach the problem, and how to look at our applications and go from there. And then finally, as Christian mentioned, we'll do Q&A. There is the Q&A tool and we'll do Q&A at the end, but I'll try to keep an eye on questions that come in during the talk, in case something needs to be addressed immediately. There's a link over here where you can send us feedback as well after the webinar is done. We'd really appreciate it; it would help us make sure we're getting the right information out there, and if there's other information you'd like to see, we can get that feedback too. It should only take 30 seconds, and we also have a prize drawing from it that we'll do on Friday. So that's the link right there, and I'll also show this at the end of the webinar. A bit of background about myself, and why I'm here talking about this topic. My name is Vaibhav Kamra. I'm the CTO and co-founder at Kasten. At Kasten, we focus on customers that are deploying applications into Kubernetes, and we help them with their day-two operational challenges. Backup and mobility is a big part of that, and that's why I'm here talking about this. In my past life, I was at EMC, where I led engineering for data protection products, specifically the CloudBoost family of products that does data protection in cloud environments. Before that I was at Maginatics and Microsoft. I've worked on distributed file systems, databases, and storage my whole career, so this topic is really close to me. So let's first look at what kind of applications we're seeing in Kubernetes, and talk about what the patterns are.
Let's talk about how these applications are storing data, reading data, and dealing with data: what are the patterns we're seeing? The first pattern we see is that the applications and the data services they're using are all part of the same bundle. They're all deployed in Kubernetes, and they're typically in the same namespace. When I talk about data services, I really mean any kind of component that's handling data in that application. It could be something like a Postgres database or a NoSQL database, or it could be something as simple as a block volume exposed as a persistent volume in Kubernetes. If you look at the picture there, I've got two applications, each in their own namespace, and each application has its own data services deployed in that same namespace. The second pattern, which is a variation of the first one, is where the data services and the application components are all still deployed in Kubernetes, but they're typically managed or deployed separately, in separate namespaces. We start to see this as things scale. You might have separate teams that are managing those data services, and the application components interact with them using control or data APIs. If you think about the operator pattern that we see for databases, we start to see that a lot more over here. And then finally, there's a third pattern where the application components are in Kubernetes, but the data services they're using are actually deployed or running outside of Kubernetes. You could be using, for example, a cloud service such as AWS RDS or Google Cloud SQL, or you might have some data services that are deployed in VMs, outside of Kubernetes. We're seeing that as well.
And over time, what we're observing is that there's motivation, there's a need, to move towards the patterns on the left, because it helps unify how you manage and deploy all your infrastructure. You don't really wanna be doing something with Kubernetes for part of your infrastructure and then using different tools and processes for something else. So we are seeing that shift over time when we work with customers. But the fact of the matter is these are all three very valid patterns that we see, and the high-level point is that there is data being used by these applications, which brings us to the next point. Given that there's data and these applications are dealing with it, it becomes very necessary to make sure you have systems in place to recover the applications and the data if things go bad. That's really what drives the need for a backup and recovery solution. When we talk about things going bad, it could be bugs in our code, or malicious issues: ransomware that came in and encrypted something or deleted data. Infrastructure is always failing, and that can result in data loss. But there are other things as well. Application misconfiguration: we're all trying to do GitOps here, but there are always cases where you see production environments with configuration pointing to dev/test, or vice versa. How are you gonna recover from an issue there? Or just compliance and auditing reasons, right? If you wanna go back six months and understand exactly what the state of your application and data was at that point, how are you gonna do that? But it goes beyond just thinking about backup and recovery when you have these kinds of applications running in your environments. Mobility is really important as well, and mobility is really the ability to move an application and its data to a different environment.
Having this ability, having solved this challenge, will help you deal with disasters and site failures. It'll allow you to avoid lock-in and make your applications truly portable, so you can run them in any environment you might wanna run in. And then there's a third pattern we're seeing a lot of: instead of upgrading clusters or infrastructure in place, a lot of customers would like to spin up a new version of a cluster environment and just migrate everything over. How do you do this? That's really what mobility enables in these environments. So that's really setting up the problem. We talked about what our applications look like, why we need backup and recovery, and what mobility is. So if we're thinking about this, what are the key elements we need to get right when we're implementing this in our environments, when we're making sure we have these abilities in our clusters? The first one tends to be automation. We're on this cloud-native journey, and a lot of what we're doing is really to make sure we can scale. We're not building all of this infrastructure to manage one or two applications on a single cluster or two. We're doing this for a lot of applications and clusters. Having any manual actions in here will just not scale. They won't scale with our environments, they won't scale with our developers who are now making changes at a much faster pace, and they just tend to be error-prone. So you wanna make sure that any workflows you have for this are fully automated. That's key. The second, which really ties into this, is making sure you're not dealing with static jobs anymore. You're not configuring static jobs to go protect something or move something around. You really want a more declarative way of doing this: a policy-based approach which ensures you're covered today, but also covered going forward.
It can flag where you aren't covered and where you are. It also gives you the flexibility of controlling how much data you're gonna keep around, and for how long. Costs can really add up when you start snapshotting volumes or databases around the clock, 365 days a year. So you wanna make sure you have that flexibility in place. Security and encryption, this is table stakes: supporting all our authentication requirements. Whether you're using OIDC or Kubernetes authentication, you wanna make sure any solution you build has support for that and integrates well. You wanna be able to distinguish between who your users are, have things like role-based access controls, and make sure all data that you're protecting is always encrypted in flight and at rest. And then finally, I think this one is really important: we need to think about what all our environment needs are going to be. What kind of infrastructure are we gonna use, both now and going forward? What are the data services our application developers are gonna wanna use? A lot of what we do in this world is to give them the flexibility and control over how they wanna build their applications, using best-of-breed for everything. And any solution you implement for your operational requirements, you wanna make sure it's not restricting that. So thinking about this is really important. Then take all of this, and, one of the things I'd like to highlight, think about how you're gonna do this at scale. As I mentioned, you don't wanna do this only once. You wanna make sure this can be replicated for all your needs going forward: multiple teams, multiple applications, multiple clusters, multiple environments. That's really what you wanna be thinking about. Okay. So now we've talked about the use cases.
We've talked about, at a high level, the things we should be thinking about in our environments. Let's deep-dive into each one of these. I'm gonna start off with backup. For each of these, what I'd like to do is highlight a few things that are interesting when it comes to Kubernetes and the kinds of applications in our environments, jump into a demo to show an example of how you would do this, and then come back and go through the other use cases. When you're talking about backup, one thing to consider is how you're gonna capture the data from your applications. We talked about those three application patterns and the variety of data services we're using; how are you gonna extract the data from there? There tend to be different techniques, and you wanna have the flexibility to support all of those, right? This can be affected by what kind of applications you have, what kind of data services, and what kind of consistency requirements. So let's talk about that a little bit. For example, where I've got a microservice that's using MySQL and Postgres inside the cluster, I might wanna use storage snapshots to capture the data, and that's a very valid technique. Based on my application or consistency requirements, that may not be enough, though. Instead of using storage snapshots, I might wanna use a data-service-specific tool, either because of consistency requirements or for efficiency needs. If you start thinking about replicated systems, snapshotting every volume isn't really the most efficient way to do things at that point. So this is something you need to think about when you're looking at backup: how am I gonna capture the data? There's a blog post I'd like to point you to on the Kasten website that really talks about this, what the different flavors of that are and how you can go after them.
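As a rough illustration of the storage-snapshot technique on its own, this is what a plain CSI volume snapshot request looks like in Kubernetes; the namespace, PVC name, and VolumeSnapshotClass name below are placeholders, not values from the demo:

```shell
# Hypothetical example: snapshotting a MySQL PVC directly via the CSI snapshot API.
# Names here (cncf-mysql, data-mysql-0, the snapshot class) are illustrative.
cat <<'EOF' | kubectl apply -f -
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mysql-data-snap
  namespace: cncf-mysql
spec:
  volumeSnapshotClassName: csi-gce-pd-snapshot-class
  source:
    persistentVolumeClaimName: data-mysql-0
EOF

# Wait for READYTOUSE to become true before relying on the snapshot:
kubectl -n cncf-mysql get volumesnapshot mysql-data-snap
```

This captures only one volume; as the talk points out, a tool doing this per application still has to capture all the other resources and, where consistency demands it, switch to a data-service-level dump instead.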
The other thing to think about is: what is the application? Where is all the state and the data? In the traditional world, we've been very used to all of our data being on, let's say, a VMDK or a volume that's attached to a VM. All our certificates and all the information we need is in there. That's no longer true over here. We're no longer worried about just the bottom-left side of that slide, which is just the data components. You have to start thinking about all the other information and state you need to be able to bring up this application at some point in time. If you look at a real production application like the one I have there, we see applications with 500 or 600 components, and all of that is required. So you need to make sure you're capturing all of that, not just the data. So let's actually switch to a cluster and do a demo of this. The way I'd like to do this is: I have a cluster set up in GKE. I've installed K10, which is a Kasten product that does all of this, in there, and I'm gonna use that to demonstrate the things I just talked about. We're gonna install a simple MySQL application in there. It's got a single pod that is using a PVC, and underneath we're using Google Persistent Disk. It's got a config map and it's got a secret. What we'll do is snapshot that application, and then we're actually gonna move that snapshot into object storage. Then we'll come back to the slides and do a few other variations as well. In this case, I'm gonna leverage storage snapshots, so the pattern we talked about earlier, the one on the left. All right, so let me switch to my terminal window. This, as you can see, is my GKE cluster. I'm gonna go list my namespaces over here, and when I look at my namespaces, it's pretty much a fresh cluster.
I have the default namespace, I've got the kube namespaces, and then I have this kasten-io namespace which has K10 installed. If I go look at what I'm using for storage classes here, I have my GCE persistent disk storage class installed over here. Let's take a quick look at what K10 also looks like just after I've installed it in the cluster. This is our K10 dashboard that comes up when you set it up in your Kubernetes environment. Just an overview of what you're seeing on the screen: we automatically discover all the applications in the cluster. In this case, there's one application, which is that default namespace we've discovered. You can set up policies to do all of the work we just talked about, and we'll look at that in a little bit. Over here, you see all of your activity. We also support authentication and RBAC. In this case, I'm logged in as an admin user, and you can also configure things such as where you wanna write the data. In this case, I've configured the system so that the data can go to AWS S3. I'm just gonna grab my cheat sheet here and switch back to the cluster and kubectl. What I'm gonna do here is, in the CNCF MySQL namespace, using the upstream Helm chart, I'm just gonna install MySQL, right? With a single volume, and the size is just one gig. So let me run that command. After that runs, you'll see that the CNCF MySQL namespace was created. Let me clear my screen, switch to that namespace, and see what happened there. You can see there's a pod that came up, there's a service, a deployment, a config map, and a secret. And just to make sure: there's a volume over there that's bound, using that standard Google Persistent Disk storage class we talked about, right? So if we switch back to Kasten over here, what you can see is that we've automatically discovered that application.
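The install and inspection steps just described might look roughly like this from the terminal; the chart reference, namespace name, and value overrides are assumptions standing in for the presenter's cheat sheet:

```shell
# Sketch of the demo's install step (chart repo, namespace, and values are illustrative).
kubectl get namespaces
kubectl get storageclass

# Install the upstream MySQL chart with a single 1Gi persistent volume:
helm install mysql stable/mysql \
  --namespace cncf-mysql \
  --set persistence.size=1Gi

# Verify what got created: pod, service, deployment, config map, secret, and a bound PVC.
kubectl -n cncf-mysql get pods,svc,deploy,configmap,secret
kubectl -n cncf-mysql get pvc
```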
We flagged it as unmanaged. So let's go look at what that is. It's unmanaged because it's not protected by any policies. We've automatically discovered that and flagged it. So let's go ahead and create a policy at this point that will help us implement backup. I'll switch to this tab over here. I'm gonna call it my backup demo policy. What I'm gonna tell the policy to do is snapshot every hour; we could configure a sub-hourly frequency if needed. I can configure how long I wanna keep my snapshots, so let's say I wanna keep all my hourly snapshots in the cluster. And then what I also wanna do is, for the hourly snapshots that I take, I actually wanna move all of them into S3. So I just went ahead and did that. I could also specify whether I wanna keep all of them in S3, or keep only the daily ones in S3; I have that flexibility here. And instead of just saying apply this to this specific application, I'm gonna make this a little broader and say: apply this to anything that was deployed using Helm. Anything that gets deployed using Helm actually has a Kubernetes label, so I'm gonna say just use that label. I have the flexibility of backing up everything in the application, but if I wanted to filter and say back up only PVCs, or only secrets, or don't back up secrets, I have that flexibility. And I'm gonna go ahead and create that policy, okay? This is really gonna do what we just talked about, and I'll jump back into that. If you look at this policy as well, just so you know, it is a Kubernetes resource, so I could have done this through kubectl as well, and that's what the CR for this looks like. Switching back to the dashboard, what's happening right now is we're doing everything we just talked about: we are discovering all of the application's state and data that's in Kubernetes as well as outside of Kubernetes, and we're leveraging volume snapshots to actually capture the data.
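Since the policy is a Kubernetes custom resource, the same thing could be done from kubectl. The sketch below is an approximation of such a policy CR; the exact field names and retention schema vary across K10 versions, so treat it as illustrative only:

```shell
# Rough sketch of a K10 backup-and-export policy as a CR.
# Field names and values are illustrative and may differ in your K10 version.
cat <<'EOF' | kubectl apply -f -
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: backup-demo-policy
  namespace: kasten-io
spec:
  frequency: '@hourly'          # snapshot every hour
  retention:
    hourly: 24                  # keep hourly snapshots in-cluster
    daily: 7                    # keep dailies longer
  selector:
    matchLabels:
      app.kubernetes.io/managed-by: Helm   # apply to anything deployed via Helm
  actions:
    - action: backup            # take the snapshot
    - action: export            # move each snapshot to the configured S3 profile
EOF
```

The label selector is what makes the policy apply automatically to future Helm-deployed applications, as the demo shows a moment later.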
So if you look over here, and this is actually almost complete, we've gone ahead and taken a volume snapshot of the data. But we've also captured all of the other Kubernetes resources that are important, that we're gonna need when we have to bring this application back. Close that. What you'll also see now is that after the backup is complete, there's an export task pending. This is gonna get kicked off shortly, and it will take that snapshot, which is still very local to that environment (it's still just a volume snapshot with the data), and actually move it into object storage. And because we had said move every snapshot, this will run after every snapshot operation. So that's started to happen over there, and that should take a little while as well. Let's go back to the slides for a second while that's happening. What we just demonstrated was using storage snapshots to capture the data. But what if you wanted to use something else, a data service tool? Let's say, since we're deploying MySQL, we'd like to use mysqldump. Or if you had that third pattern where the data is somewhere else, you might wanna use some kind of API to extract the data from an external service. So let's demo how that works in this environment. What we're gonna use here is, again, Helm to deploy MySQL, but this is MySQL with one additional piece of information: there's a blueprint in there that describes how to capture the data. The blueprint is created using this framework called Kanister. Kanister is an open source framework, and what it allows you to do is specify how you want to capture the data, and what data you don't want to capture, in a very straightforward way, just using a YAML file. And if you go look at the Kanister project, there are blueprints over there for Mongo, MySQL, Elasticsearch, Postgres, and Cassandra, with more such as RDS coming in as well. So it's very extensible.
You can take those and use them, or you can write your own, okay? Switching back to my cluster, what I'll do over here is, let me get that cheat sheet again, and I'm gonna install, again using Helm, a version of that upstream chart that has that blueprint in there. I'm gonna put this in the CNCF MySQL Kanister namespace, so let me go ahead and do that, all right? If I look at my namespaces now, I have this new one. And I've got my Kanister-enabled MySQL pod that is coming up, and I've got a secret and a config map. Let me go look at my storage again, and I have another one-gig volume in there, all right? And that pod is now running. We go back to our K10 dashboard. The first thing you'll notice is that the export we had done to the object store has completed, so this snapshot has now moved to object storage. But we've also gone ahead and discovered another application, so it's three now. What you will notice, though, is that that application is not in the unmanaged column; it is actually compliant. It's being managed over here, and the reason is that the policy we created earlier has automatically been applied to this application. When that policy runs next time, it's gonna back up this application as well, so you don't have to keep setting up new jobs and going in. Let's go ahead and just do a manual backup over here. Again, you can choose what you wanna back up, whether it's everything or just a few resources. I'm just gonna back up the entire application at this point, and switch back to the dashboard. What you'll see here is that we have triggered that backup action, and it actually completed fairly fast because this was MySQL with hardly any data. But you'll notice something different: it doesn't have a volume snapshot anymore. It has this Kanister artifact. If you go look at what that Kanister artifact is, it's actually a MySQL dump that we took.
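To make the blueprint idea concrete, here is a heavily trimmed sketch of what a Kanister blueprint's backup action can look like for MySQL; the container image, credential handling, and phase arguments are placeholders rather than the actual chart's blueprint, which lives in the Kanister repo:

```shell
# Illustrative Kanister blueprint fragment: capture data with mysqldump
# instead of a storage snapshot. Image, env vars, and paths are placeholders.
cat <<'EOF' | kubectl apply -f -
apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: mysql-blueprint
  namespace: kasten-io
actions:
  backup:
    phases:
      - name: dumpToStore
        func: KubeTask
        args:
          image: kanisterio/mysql-sidecar:latest   # placeholder image
          command:
            - bash
            - -c
            - |
              # Logical dump of all databases, compressed, ready to push
              # to object storage by the surrounding tooling.
              mysqldump --all-databases -u root -p"${MYSQL_ROOT_PASSWORD}" \
                | gzip > /tmp/dump.sql.gz
EOF
```

The point is that the capture logic is just declarative YAML plus a command, which is why swapping mysqldump in for a volume snapshot requires no change to the rest of the workflow.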
So we did not use a storage snapshot to capture the data; we used mysqldump to do that. If folks wanna go check it out, Kanister is the open source repo that has those blueprints in there. Right off the front page, you should be able to find something for MySQL. This was actually the Helm chart that I used; it's just a copy of the upstream chart with this additional blueprint in there. And that blueprint really describes how to capture the data, which is using mysqldump. Right? Okay, so let me switch back. And hopefully I'm not going too fast; I know there's a lot of information here, but I'll try and slow down a little bit. So we finished that, and this is really what we saw: what you need to capture, which is the entire application, all the data and all the resources; how you can do this in a policy-based way so that you're always covered; how you would configure retention; and, most importantly, how you would do this for any kind of data service that developers are gonna throw at us, right? Anything we'd wanna use, we can support in a very extensible fashion. Now let's talk a little bit about recovery as well. How does recovery work? What's important over here? Again, I'm gonna use the same cluster to demonstrate this: my cluster, which we just saw, with MySQL installed and K10. I'm actually gonna pull that restore point from object storage, and I'm gonna restore the application in place. What we'll do is repave all the infrastructure: we'll remove the existing application and replace it with a version from a previous point in time. Switching to my cluster again, I'll do this through the dashboard. If I go to my applications, go look at that original application that I had created, and look at my restore points, what I'll see is that I actually have two restore points.
They're both actually the same data, except one is the one we have staged inside the cluster and one is the one we had moved to object storage. I'm just gonna use the one that's still inside the cluster, because it'll be faster. I can use that to either restore in place, or I could clone if I wanted to. I have the flexibility to choose exactly what I wanna bring back. Maybe I only wanna bring back the volume, or maybe just a config map or a secret. In this case, I'm gonna bring back everything, so I have all of these selected, but I could unselect them if I wanted to. And I'm gonna hit restore. Once I do that, that's really all I need to do to trigger that restore operation. If I go back to the dashboard, we'll see that the restore operation has begun. Really what we should do is go look at the cluster to see what's happening there. I look at the namespaces here, I go look at my MySQL namespace, I can go look at the volumes, and let's just watch these so you see what's happening over there, and go back and check on the status. What I should start to see is that the existing application, the namespace, we're gonna tear that down. We're gonna remove the volume we were using, and we're gonna bring back the same PVC that we had originally, which is the MySQL PVC. But this time it's gonna use that volume snapshot to create a new volume underneath. So that's done. What we should start to see now is a new pod that we just started, and that is gonna use that same PVC over there. And that's come up. So that's really all it takes to go ahead and restore the entire application as we need it from our backup. And you can actually see that it created a new volume from that snapshot as well. So again, we talked about full application recovery and the flexibility that we should have in our environments. I think this is really important.
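The watching-the-restore sequence above can be followed from a terminal with commands along these lines (namespace name as in the demo, used here illustratively):

```shell
# Watch the in-place restore happen: the old pod is torn down and a
# replacement comes up against the same PVC name.
kubectl -n cncf-mysql get pods --watch

# In other panes: the PVC keeps its name, but is rebound to a new volume
# provisioned from the snapshot.
kubectl -n cncf-mysql get pvc
kubectl get pv
```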
I've worked with a lot of cases where you don't wanna bring back everything, but you really need one piece of information that you've lost. You wanna bring back that one file, that one PVC, that one database table, and you need that flexibility. And then finally, you want everything to just be automated, and you want it to be extensible. I think that's really important here. Okay. Finally, let's talk about the mobility use case as well. When we talk about mobility, what you really wanna make sure of, and we've talked about this before, is that you have freedom of choice. You can support your application running in any of these environments, but not just running: you have the ability to move an application and its data between these environments, whichever ones are important to you. You might only be in the cloud, or only on-prem, or you might have a combination of those. You might be using different data services. You might be using different storage types. So you wanna be able to support all of those. The way we're gonna do this, again through the demo, is we're gonna take that same application that we had in the GKE cluster, using a Google Persistent Disk, and we're gonna take that snapshot and actually move it into this new cluster, which is in Amazon EKS. This cluster is a fresh cluster with just K10 deployed. We will import our restore point there, we'll restore the application, and we'll bring back the same application, but this time it'll be using EBS, because there is obviously no Google Persistent Disk there. So let me switch back over here. This was my GKE cluster window. I'm gonna switch to the other terminal window, which is EKS, and let's see what we have over here. Again, a fresh cluster. All it has installed is kasten-io, which is the K10 namespace. This is our dashboard running in EKS. You can see that there's just that single default namespace application, nothing else configured over here.
I'm logged in to K10, and I've configured it so it can import from that S3 bucket that we were exporting to from the other cluster. That's all I have here. Let's go ahead and do that import. The way I do that, again, is policy-based. I can go here to policies, create a new policy, and I'm gonna call this import demo. In this case, instead of snapshotting, it's an import action. I do want it to restore anything it imports immediately. You could also only import restore points and stage them for restores later on if you wanted to, but I'm gonna restore immediately. I'm gonna do this on some frequency, so you can configure it to happen periodically if you want to; for example, set up policies to do nightly refreshes and configure the frequency. And then I need some config data from the other cluster to establish the connection. So what I'm gonna do is go back to my GKE cluster, go look at that policy that I created, and there's this piece of text that gives me what I need. It becomes very easy to establish that connection. So I paste that text over here, it detects where it should be pulling from, and I create the policy. That's really all that's needed to set up this connection between the two environments. These clusters are not federated; all they need is to be able to talk to the same object store location. Going back to the dashboard over there, we should see, as we've seen before, an import job start to kick in fairly soon. As soon as the policy kicks in, and sometimes that can take a few seconds. And it looks like that happened; I'm talking faster than our dashboard is updating. So the import happened, and what you can see is it brought in all those artifacts that we had talked about for that application: the namespace, secrets, config maps, all of that stuff. And what it's doing now is initiating that restore. So let's go back to my terminal and see what's happening here.
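On the target cluster, the import policy just created in the UI corresponds roughly to a CR like the sketch below; the profile name is a placeholder, the receive string is the config text copied from the source cluster's policy (deliberately left as a placeholder here), and the exact field names may differ across K10 versions:

```shell
# Rough sketch of an import-and-restore policy on the target (EKS) cluster.
# Field names and the profile name are illustrative.
cat <<'EOF' | kubectl apply -f -
apiVersion: config.kio.kasten.io/v1alpha1
kind: Policy
metadata:
  name: import-demo
  namespace: kasten-io
spec:
  frequency: '@hourly'               # could be nightly for dev/test refreshes
  actions:
    - action: import
      importParameters:
        profile:
          name: s3-import-profile    # placeholder: points at the shared S3 bucket
          namespace: kasten-io
        receiveString: '<paste the config text copied from the source policy>'
    - action: restoreAfterImport     # restore immediately instead of staging
EOF
```

Note that nothing federates the two clusters; the shared object store location plus this pasted config string is the entire link between them.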
If I look at the namespaces over here, I can see that the CNCF MySQL namespace has now been created. Let's switch there. This is my EKS cluster, so let's look first at the storage classes I have in this cluster: I have only EBS available. Let's go look at the PVC that got created. I now have my MySQL PVC, and you can see it's using that storage class, which is EBS. And if I go look at my pods, I had a restore job that actually ran, and that's just finishing. Now I've brought up a MySQL pod, which is running and has that data in there. And it's not just the pod, it's everything, everything I need to bring up MySQL and have my application running. All of that information is there. So let me switch back here and talk about what we saw: how you can clone an entire application across clusters in different environments, really build a solution that abstracts all the infrastructure specifics from your application, and get true portability. We didn't use any kind of storage overlay. What we did here is we used the native storage from each of our environments, which is best in class, without having to install something else, like a software-defined storage (SDS) solution. So there's no extra management, no performance impact, and no cost from that. And finally, this is all automated. You can set up a policy to do this one time, or you can do it periodically. That really helps with things like, if you start thinking ahead, DR testing, or nightly prod-to-dev/test refreshes so you can run some kind of sanity checks or performance tests. A lot of use cases get enabled through this. I did use our Kasten K10 product to do all of this; what I'm really trying to do is highlight how to think about all these use cases and what the best way might be to implement this. Our K10 platform is built for Kubernetes, runs inside Kubernetes, and supports a rich ecosystem.
We support all sorts of on-prem and cloud environments, all storage types, all data service types. It plugs in seamlessly with Kanister, so if you have a data service or a cloud service you wanna extract data from, you just write a straightforward blueprint and it plugs in. And there's a link there for a starter edition, if any of you wanna try it out; the link is right up there. And that's really all I had. So I'd like to take time now for Q&A. Please do use that link to send us any feedback; we'll register you for the prize drawing. We're also gonna be at KubeCon. I know a lot of you are going to be there, so we'd love to talk to anyone who'll be there and has questions about this. Thank you again. And I'm gonna start looking at the Q&A page now to see what we can talk about. Yeah, thanks for a great presentation. Let's do the questions. As mentioned, there's the Q&A tab. Please let me first ask one question that I still have in mind. So the first question is: for persistent volume backup, is there some script used to quiesce the guest file system in the container for data consistency on restore, like VM backup solutions based on vSphere snapshots? That's a great question. So K10, at least what you saw, and we used K10 for this, has native integration with a variety of storage systems out there, and we take care of all of that. So you don't actually need to add any script, or anything else, to quiesce the file system. We do support extending the workflow using Kanister, so if you wanted to do something more interesting, for example, let's say you wanted to quiesce your application, you could easily plug into that just by writing a Kanister blueprint, and we would actually run that in the right place. Christian, do you wanna give me questions or should I just take some from the list? As you like. I'll take some. There are some really interesting ones here, so thanks everyone for that.
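A Kanister blueprint for an application-quiesce hook might look roughly like the sketch below. The `backupPrehook` action name and `KubeExec` function appear in the Kanister/K10 documentation, but the blueprint name, template references, and the MySQL command here are illustrative assumptions, and a production blueprint would need to hold the lock for the duration of the snapshot rather than in a single exec session:

```yaml
apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: mysql-hooks           # referenced from the application via an annotation
  namespace: kasten-io
actions:
  backupPrehook:              # runs before K10 takes the volume snapshot
    phases:
      - func: KubeExec        # exec a command inside the application's pod
        name: flushTables
        args:
          namespace: "{{ .Deployment.Namespace }}"
          pod: "{{ index .Deployment.Pods 0 }}"
          command:
            - bash
            - -c
            # Flush pending writes to disk so the snapshot is consistent.
            - mysql -u root -p"$MYSQL_ROOT_PASSWORD" -e "FLUSH TABLES;"
```

The point of the blueprint model is that the workflow extension lives alongside the application definition, so K10 runs it at the right step without any out-of-band scripting.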
I'm gonna take a few here. There's a question around whether there's any way to test backups. And the answer is yes, absolutely. That's really what I talked about with policies and automation, where you can easily set up policies to do chained actions. So once you take a backup, you can restore it in a different environment if you wanted, and run any kind of verification you wanted. Again, you can plug any verification post-step you want into the workflow, and we'll run it. Customers are also doing this for, as I mentioned, the nightly dev/test refresh; that's an example of what folks are doing there. There's a question: can Kasten do Spring Boot pods along with MySQL pods? And the answer is yes. We include anything in the application, so any kind of pods, deployments, any kind of data; we integrate with that and we can back those up. There's a question, going back to my first slide around application patterns, about whether you should deploy databases outside of Kubernetes. I think every team has their own requirements, their own ways of getting there, and their own reasons for doing that. What I've found over time is that you wanna unify all your processes and tooling, and even how teams are organized, and so you really don't wanna have two sets of infrastructure that you're managing. So we're seeing over time that customers who are really successful at this over the long term are moving everything inside Kubernetes. There's a question: does K10 support STS, and also multiple PVCs in the app? Absolutely. My assumption is that STS here means AWS STS, to support roles. We do have that support: we support using AWS roles rather than having to configure IAM users and policies. And multiple PVCs, that's actually a really great question. What I used for our demos was MySQL, because it's just a very simple one to get started with.
But we're seeing a lot of applications with a large number of PVCs, data services such as Cassandra or Mongo, or even Postgres with replicas or clones. And K10 is aware of that and supports all of those. Question around AKS support: do we support AKS? Yes, yes we do. Interestingly enough, my backup demo over here was on an Azure cluster, just in case one of my environments didn't work; I was gonna use Azure. So we support all the cloud environments out there. We also support on-prem, things like PKS, or VMware Tanzu, I guess, is the new name, in those kinds of environments. There's a question around access control setup for Kasten: is that possible? And I'm actually gonna repeat the question because it's very interesting. What is the access control setup? For example, and this is very common in enterprises, you have multiple namespaces, you have projects or engineers who should only see their own data and shouldn't see anyone else's. And we do support that. We support full RBAC, which means you can dictate who has permissions to set up policies, you can control who can see data, and whether they can back up only their own applications. So we have all of that flexibility in there. And I think it's actually very important, and I'm glad you raised it, because it goes back to my earlier slide about security: when you start looking at multiple applications and multiple environments, you need those controls. There's a question about migrating from AKS to OpenShift: can you support that? Yes, that would essentially be the demo I did, and we support that out of the box. There's a question here around, sorry, I'm just going through the list over here, there's a question around whether all of this works through the Kasten server.
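The per-team access control described above can be expressed with standard Kubernetes RBAC. The sketch below assumes K10's policy resources live in the `config.kio.kasten.io` API group, and the namespace and group names are illustrative; check the K10 RBAC documentation for the exact resource names:

```yaml
# Allow a team to view, but not create or delete, backup policies
# in its own namespace only.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: k10-policy-viewer
  namespace: team-a             # the team's own namespace
rules:
  - apiGroups: ["config.kio.kasten.io"]
    resources: ["policies"]
    verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: k10-policy-viewer-binding
  namespace: team-a
subjects:
  - kind: Group
    name: team-a-developers     # maps to a group in your identity provider
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: k10-policy-viewer
  apiGroup: rbac.authorization.k8s.io
```

Because the Role is namespaced, engineers in team-a can see their own policies and backups without any visibility into other teams' namespaces.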
So, and this is really important, when we thought about how we were going to build K10 and what's really important for this environment, and this goes back to that operating-at-scale question, you really need to look at what the architecture looks like for data capture and data movement, and how things are going to scale as your clusters scale. Our K10 server is installed inside Kubernetes. It is cloud-native. It scales as your environment scales, and it's able to leverage the best practices for cloud-native applications. So not everything here is centralized; there's no external service we're running everything through. It's all done inside your Kubernetes environments, and you have full control over which nodes you want the workflows to run on and how to get the best performance out of the system. I think that's most of them. I wanna thank everyone for all the really great questions. I've put my Twitter contact in there, and I'm also on LinkedIn. Happy to talk to anyone. We'll also be at the booth for any more questions. Really looking forward to KubeCon in San Diego. Please email or tweet me any questions you might have, and we'll be happy to follow up with you. Perfect. Thank you so much for the presentation. That's about all the time we have for today. The webinar recording and slides will be online later today; just give it maybe a few hours. We're looking forward to seeing you at future CNCF webinars. Have a great day. Thank you so much.