And we'll get things going. Thanks, everyone, for joining us today. Welcome to CNCF's live webinar, Kubernetes Data Protection Requires Orchestration, Kanister.io Delivers. I'm Libby Schultz, and I'll be moderating today's webinar. I'm going to read our code of conduct and then hand over to Michael Corsi, solution architect, and Mark, principal cloud native product manager, both with Kasten by Veeam. A few housekeeping items before we get started. During the webinar you are not able to speak as an attendee, but there is a chat box down the right-hand sidebar where you can leave any of your questions. Feel free to drop them in there; we'll get to as many as we can at the end. This is an official webinar of the CNCF and, as such, is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io under Online Programs. They're also available via your registration link, and the recording will be available on our Online Programs YouTube playlist. With that, I will hand things over to Michael and Mark. Thank you, Libby. All right. Hello, everybody. So Michael and I will each add a little bit more in a moment, but we both represent Kasten, which has been a CNCF Platinum sponsor for a number of years. We're always at KubeCon, so come and find us. We have a number of open source projects, one of which we will speak about today, as well as contributing directly to Kubernetes itself. We'll show you that at the end with the resources. I'm coming to you today from Austin, Texas. And Michael, where are you based? I'm living in a part of France called Normandy that you Americans should know for good reasons. Yeah. Yes. And Michael, give us a tiny bit more of your career, if you would. Yes, sure. 
So I was a JEE architect for a few years, really on the development side. Then I started to help different teams with their DevOps processes. And one day, Kubernetes entered the scene, and we had to deal with that, and we had to deal with the protection of the workloads on Kubernetes. That was one of my tasks. We were looking at solutions, like Kasten and other kinds of solutions, for protecting workloads on Kubernetes, and then I naturally moved to Kasten, which is a tool really focused on that. So mainly now I can describe myself as a solution architect at Kasten. I help our customers deploy Kasten on their infrastructure and integrate it with the different kinds of databases they have to manage and their different kinds of processes. Not to speak too much about myself, but that's a good summary of what I'm doing now. Great. And that's exactly where Kanister comes in: customizing solutions to do data protection operations, which is exactly the heart of our presentation today. So we'll give you a quick overview of what the data protection challenges are on Kubernetes, and then how Kanister helps solve that problem. We'll wind up with some conclusions, give you all the resources we've cited in this presentation, including the GitHub repository with the demonstration code, and then we'll get to your Q&A. All right, let's get started. So when we start talking to people about how they ensure their organization's ability to continue running no matter what, we get into all of the challenges of data protection. And the time-honored rule is the 3-2-1 backup strategy: you need three copies of your data, on two different kinds of storage, and at least one of those has to be off-site or offline. 
Bringing that forward into the cloud-native world with Kubernetes reveals a whole new set of challenges for these old problems, because we see our customers and prospects at all different stages of adopting Kubernetes and of maturity with cloud operations. So we're going to get into all of that: stateful versus stateless workloads; why a lot of people think etcd backup is the way to protect Kubernetes, but it's not; what an application-consistent backup actually is, and why you would need that instead of just backing up a Kubernetes cluster. And ultimately, we're going to get to exactly how Kanister solves this problem in an open source manner to give you application-consistent data protection. So first and foremost, this is the Data on Kubernetes community report; we'll link to it at the very end. We see basically everybody growing over the last couple of years to get past stateless applications and finally add stateful applications, which means they have storage and their state is important, and we'll talk about exactly those workloads. As soon as you're successful with Kubernetes, more workloads, more complex workloads, and more traditional workloads follow on, so we see that growth exploding. So the first myth and question a lot of people bring up when they meet us is: isn't everything stateless on Kubernetes? Why do I need backup, recovery, disaster recovery, and so on? And the answer is that the cluster itself has a lot of state, not just etcd, which we'll get to in just a moment, but all of the secrets and all of the configuration. And even back in 2020, the CNCF found that 55% of their respondents were already running stateful workloads. We know that has grown quite a bit since then. The majority of those stateful workloads, to jump ahead, are primarily databases. 
And we see that even back in 2022, a third of everybody was already running many different variations of databases and stateful caches. Another workload that we see growing a lot more recently is traditional virtual machines running on top of Kubernetes with the KubeVirt project, another CNCF project, and that is an incredibly stateful workload. The reasons for that are many, but ultimately our customers' workloads grow, they grow in complexity, and they bring more traditional workloads along. But the next point is that even if you don't have stateful workloads, it's not enough to treat everything as stateless, because in many organizations and many industries you are regulated: you must be able to audit, you must be able to prove what the cluster configuration was, and doing that by hand or with a wiki document is not enough. Actually backing up and restoring, having the ability to restore your workloads and their configuration, is required for audit purposes. That's all there is to it. And if you're not yet at the point where you need to be audited, if you are successful, you certainly will get there. Michael, do you want to add anything here? Yes. Saying that everything is in GitOps does not mean that everything is actually in the Kubernetes cluster. There is a real distance between the intent, everything as GitOps declares it, and what you really have. From a legal point of view, you can't say, "I can prove that my state was this one, because look, my GitOps repository looks like this." You need to prove what was actually on Kubernetes, and only on Kubernetes. So Kubernetes, basically, is the source of truth. 
And from my experience, what I can say is that even if good organizations have very good GitOps processes, when you are facing urgency, when you are facing obsolescence of an application, when you are facing, sometimes, ignorance, many people change things manually. We see many cases where people deploy through their GitOps process, then make some manual change. And it's very important to be able to track that as well, even if you don't have stateful workloads, as you said. Agreed, agreed. So we fully endorse GitOps, but you need to add the ability to audit and, more importantly, to do disaster recovery to keep your business running and keep your data, because the configuration in code is not enough, right? We always have our persistence. And that's what we see over and over again, and our business roughly doubled last year alone. So yes, this is the reality. And the final myth that we also have to explain to customers is that just because they're on public cloud, which has great uptime, don't get me wrong, they're not fully protected. Clouds have outages, and customers still need to satisfy audit requirements. Ultimately, every public cloud provider wants you to do disaster recovery. That can be to another region inside that public cloud, of course, but we have many customers trying to figure out how to use Kubernetes to run hybrid and multi-cloud workloads. So ultimately, disaster recovery, auditability, and stateful workloads form the maturity journey that everybody takes. And we hope you're along those lines too, because you will need Kanister later on. So we've covered this very briefly; we'll have links for these at the very end. Let's get to the next major concern that most people have. When they learn about Kubernetes, one of the first things they learn is how to get onto the cluster and how to insert custom resources. 
And then they hear that backing up etcd is the way to preserve the state of everything. And while this is true, in reality we have never seen anybody successfully take an etcd backup and restore it all back to a cluster once the cluster is in a bad state. That's all there is to it. I'm happy to be disproved, but I would argue that even if I'm wrong, that's a one-percent use case. We do see customers use etcd backups for forensics and audits, even development and test scenarios. But restoring etcd in production, when the cluster is constantly drifting and constantly changing its state? An etcd backup is only as good as the last backup you took, and when you do the restore, you'll go back to that point in time, but that doesn't represent where the cluster is now. That drift is not captured. In practice, the Kubernetes project is also trying to lock down the control plane as much as possible, such that you don't even have access to take etcd backups, and the other Kubernetes vendors are doing this as well. So if you are beholden to this strategy, we would only advocate that you test whether or not it actually works: you're only as good as your last restore. But the truth is that this is not the right way to do backup, recovery, disaster recovery, or auditability. It's part of it, but it's not sufficient. Michael, anything to add here? I would say that if you are in a situation where you think, "oh, I should restore etcd," most of the time it's too late. You are already in a very, very bad situation. The best thing is to rebuild another cluster somewhere and restart your application from your backup. That's a much better strategy. If you have a backup system, recreate your Kubernetes cluster somewhere else and restore your application, but don't try to restore the etcd backup. I must say that I was an OpenShift administrator for three years, and I never, ever restored an etcd backup. Never. It never happened. 
I never needed to do that. Sometimes I lost clusters; thankfully I had backups, but I never tried to recover from an etcd backup. It's just a bad idea. Yeah. That's not to say you can't use GitOps to populate things on that new cluster, of course, but you'll need to restore the state as well. And, not or. So this is where we get to the final contention, and the takeaway we'd like you to have: it's really the applications on a Kubernetes cluster that are important. The clusters should become cattle, should become ephemeral, should not be the logical unit of how you do backup, recovery, and auditing. They are one of the logical concerns, of course, but the applications themselves, their state and their configuration, are what everybody is after. When you approach this from an operations standpoint, a traditional backup standpoint, or an infrastructure standpoint, Kubernetes finally allows us to deal with the entire vertical stack, top to bottom: application, infrastructure, and all the operations together. This is revelatory for a lot of people, but they still approach the ability to maintain their business in traditional ways, and that is no longer necessary. Again, Kanister will help us solve this, but let's justify exactly why we need to talk about application-consistent backups. Most people think that once they have persistent workloads, persistent volumes, and persistent volume claims, all they need are CSI volume snapshots, and that's good enough. But the truth is, while that may be a crash-consistent snapshot of the storage on disk at that point in time, it is often, I would argue almost always, not good enough. Now, you can bypass CSI, the Container Storage Interface, go directly to your storage provider, and use their volume snapshots; that's one thing some of our customers do. 
A second stage a lot of our customers try, to make it a more generic operation across any storage provider so they can go multi-cloud, multi-provider, and even across multiple versions of Kubernetes (as you know, CSI changes, everything changes; even that etcd backup between different versions of Kubernetes potentially won't work), is generic backup: they mount the file system and basically copy every file. But this, and the next step, the traditional CSI volume snapshot, can all fail on consistency, in the sense that until the application has quiesced and all the data is at rest on that storage medium, you do not have a proper backup. You may well restore it and not get what you thought. So we see backups and restores fail because the backups weren't crash-consistent, let alone application-consistent. The next level up from that is logical backups, where you use the application's own backup facility if it exists, or even a backup operator if one exists, to take an application-consistent backup. And this is certainly a better state, but we'll show you that it's not the final state of how to achieve this properly. So say you exec into a MySQL container in a pod and run mysqldump; then you need to get that artifact off the cluster. What happens is that those logical backups, those dump files, grow and grow and grow, and they are not incremental. You have to figure out a whole new way to manage everything just to get to incrementals. And why would we need incremental backups? Because otherwise we need to bring down the database, get everything logically flushed to disk, and start over every time. Actually, I'm getting ahead of us; that's the system backup. Long story short: this is a good first step, but it's not good enough, and it's not the state of the art compared to what we have in the more traditional world of bare metal and VMs for backup. 
System backups are really where we are in that more traditional world, and we don't have that, exactly, on Kubernetes. This is where Kanister comes in. We need to actually stop the application, lock the application, or flush everything to storage, one way or another or some combination of these things; bring things in and out of load balancers; scale everything up and down to make sure no transactions are in flight. Then a CSI volume snapshot works. And then we need to invert all of those operations, to unlock everything and get the application back to a fully running state. Once we have this orchestrated set of operations for system backups, we then need to orchestrate all those notions and figure out how to do deltas, incremental backups, because that's how we get to the shortest backup windows and the least amount of storage required for all that backup and recovery. So this is where the complexity of orchestration comes in for an application-consistent backup, and that's how you finally get to what everybody expects to happen on Kubernetes but is not currently the state of the art: performance, storage efficiency, and so on. Any other comments there? Oh yes, all you said is so true. I could just add one little thing from my experience: when the storage is not working anymore, because it happens, sometimes you lose the storage for many reasons, most of the time the snapshots are not working anymore either. You are not able to restore from the snapshot, and you absolutely need an off-site copy of your backup. You already said that, but I'm just saying it's a common pattern: when you have a disaster, it's often a storage disaster, and you just can't use your snapshots anymore. It's not always the case, but it's a common pattern, unfortunately. 
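The quiesce, snapshot, unquiesce sequence just described maps naturally onto Kanister Blueprint phases. As a rough, untested sketch (ScaleWorkload and CreateVolumeSnapshot are real Kanister functions, but the argument lists and phase names below are illustrative, not taken from a production blueprint):

```yaml
# Illustrative Kanister Blueprint fragment: quiesce -> snapshot -> unquiesce.
actions:
  backup:
    phases:
      - func: ScaleWorkload          # take the app out of service so no writes are in flight
        name: quiesce
        args:
          namespace: "{{ .StatefulSet.Namespace }}"
          name: "{{ .StatefulSet.Name }}"
          kind: StatefulSet
          replicas: 0
      - func: CreateVolumeSnapshot   # with the app quiesced, the snapshot is application-consistent
        name: snapshot
        args:
          namespace: "{{ .StatefulSet.Namespace }}"
      - func: ScaleWorkload          # invert the first phase: bring the app back online
        name: unquiesce
        args:
          namespace: "{{ .StatefulSet.Namespace }}"
          name: "{{ .StatefulSet.Name }}"
          kind: StatefulSet
          replicas: 1
```

Scaling to zero is the bluntest form of quiescing; for a database that supports it, a lock-and-flush command via KubeExec avoids the downtime.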
Great. We've now shown you everything you need to achieve on your journey to mature data protection on Kubernetes, all the characteristics of the solution. But let's describe exactly how we can start to address this. Michael, you were going to tell me a little bit about how you used to solve things in a quick and dirty way. Could you take it? Yes, I can give you my experience. I remember, in the beginning, we were running all our Kubernetes clusters on AWS, so we had EBS storage available, and our first solution was to create a script. It was a Lambda function on AWS, and we were taking a snapshot of every EBS volume. That was our solution. And one day we had a disaster and had to recover, and I remember it was a nightmare, because it was very difficult to make the relationship between the EBS volume snapshots and the actual PVCs used by the application. We had to recreate that mapping, and it was very difficult. Also, we were dealing with several destinations for storing the backups. At first, we were storing things on AWS S3, but then, for legal reasons, we were told that we should send the backups to on-prem S3 storage. So we had to change all the code to make that possible; we even had to rewrite the library. That was a real pain. And then, how could we handle logical backups? For example, when you try to back up a MySQL database with mysqldump, you need to establish a connection between your client and your database. How do you do that? Do you create a port forward? Do you open a route? Do you try to do it from inside the Kubernetes cluster? All these questions were pretty difficult to solve, and we quickly felt that, for all those reasons, we needed a framework. We needed something generic, something that solved this problem. Right. And so we've enumerated some of the problems our customers typically have when they write a quick and dirty backup script. 
It may actually work; that's not the issue. The issue is: does it work for everybody else? Is it available? Is it flexible? Are you going to maintain it? Who else has the skill set to run it? Is it delegated to everybody else in this world of DevOps and platform ops? Can a developer run it? Who can run it at four in the morning when you are on vacation, and so on? This goes on and on, and that's why we have a company. But Kanister, as we'll show you next, starts to address all of this as a flexible framework that gets you an application-consistent backup. So let's go on to the next slide and we'll show you Kanister now. As alluded to, you want to be able to work with any sort of application; you can't hard-code everything for one application in one cluster on one provider. That just won't scale. Kanister is a cloud-native solution. It's an open source project, Apache 2.0 licensed, available on GitHub. It follows the Kubernetes operator pattern, in the sense that you use a Helm chart to install the Kanister controller onto your Kubernetes cluster, and it introduces three new custom resource definitions: a Blueprint, a Profile, and an ActionSet. Michael, could you take us into a little more detail about how we use it? Yes, yes. To start from something simple: the Profile is where you put your data. Is it in an S3 bucket? An Azure Blob? A Google Cloud Storage bucket? An S3-compatible bucket? Where, in which region, with which credentials? The Profile is all about that. When we do a backup, we give the profile information so that the backup system knows where to put the backup. Then comes the Blueprint. Blueprints and ActionSets always come together. You can see a Blueprint as a library of functions, functions that define a backup, a restore, or the deletion of an artifact. 
And an ActionSet is the actual invocation of a Blueprint action. So you can see a Blueprint as a function, or a library of functions, and an ActionSet as an invocation of one of those functions. You always create an ActionSet saying which workload I'm working on, with which Blueprint, which action in the Blueprint, and which Profile. These three things together create the backup orchestration activity. That's how I would define these three big custom resources. Yeah, awesome. And remember, the whole goal of this for disaster recovery is to get those backup artifacts off of the cluster, to be disaster-recovered anyplace else. Right. Most customers pull down example Blueprints, customize them for their needs, and, after installing Kanister on their cluster, upload some Blueprints and set their Profile configurations. Then the ActionSets are the actual invocations that we trace for the lifecycle of executing a Blueprint with its runtime arguments and its Profiles, to do a backup, a delete, or another CRUD-like (create, read, update, delete) operation on your artifacts, application by application. All right, that's a quick overview of what it does. Let's get a little more detailed. How do you interact? Once you've installed Kanister on a Kubernetes cluster, you can usually use kubectl ("kube cuddle" or "kube control"; it's a religious discussion how to pronounce that), but we also have a Kanister CLI tool called kanctl, which helps facilitate the lifecycle of Blueprints, Profiles, and ActionSets. You can use kubectl as well. Once we've loaded Blueprints and Profiles, those CRDs, onto a cluster, the Kanister controller is constantly watching for ActionSets to be created. When one is, its runtime arguments basically say: with this Blueprint, do that action, with that Profile, and any other runtime arguments. 
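To make the Profile concrete, here is a minimal sketch following the Kanister Profile schema; the bucket, region, and secret names are illustrative placeholders, not values from the demo:

```yaml
# Illustrative Kanister Profile: tells the controller where backup artifacts go
# and which credentials to use.
apiVersion: cr.kanister.io/v1alpha1
kind: Profile
metadata:
  name: s3-profile
  namespace: kanister
location:
  type: s3Compliant          # also supports other object-store types
  bucket: my-backup-bucket   # placeholder bucket name
  region: us-east-1
credential:
  type: keyPair
  keyPair:
    idField: aws_access_key_id
    secretField: aws_secret_access_key
    secret:                  # reference to a Kubernetes Secret holding the keys
      apiVersion: v1
      kind: Secret
      name: s3-creds         # placeholder secret name
      namespace: kanister
```

Pointing the Profile at a different bucket, or an on-prem S3-compatible endpoint, is a one-object change; the Blueprints stay untouched.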
And the example we will be illustrating is with MySQL: please back it up to, or restore it from, an S3 bucket. A very simple use case, typical for everybody, but obviously we can change everything with the Profile, and we can do many more operations than just backup inside the Blueprint. Okay. So we create an ActionSet that invokes all of these things, binds all these things together. The Kanister controller retrieves all those objects, the Blueprint, the Profile, et cetera, creates an action plan, and then starts executing it. Those individual actions can be anything like a KubeExec-type operation: a shell command, anything we can do through kubectl, any CLI, any API. And that's how we bind not just to what's inside the cluster, but to anything outside of it, because we often do have to orchestrate external systems, DNS, load balancers, et cetera. The Kanister controller continues to execute each of those actions, doing all those operations, in this case typically against our MySQL database instance in its pod: it gets the MySQL instance appropriately stopped and flushed to disk, takes a volume backup or a logical backup, here a mysqldump, and gets it off of the cluster to the backup location in S3. While executing all this, Kanister tracks everything and updates the ActionSet with its status. Ultimately, when the artifact is created and exported off, we reach the final state and a completed ActionSet. And that's roughly how Kanister works. Anything I missed, Michael? No, no, this is perfect. The only thing I could add is that the ActionSet is how you're going to track your operations. Every time you create a backup, you create a new ActionSet. Every time you create a restore, you create a new ActionSet, and so on. 
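The binding just described, which workload, which Blueprint action, which Profile, is exactly the shape of the ActionSet object. A hedged sketch, with placeholder names for the Blueprint, Profile, and workload:

```yaml
# Illustrative ActionSet: invokes the "backup" action of a MySQL Blueprint
# against a StatefulSet, shipping artifacts to the referenced Profile.
apiVersion: cr.kanister.io/v1alpha1
kind: ActionSet
metadata:
  generateName: backup-mysql-   # controller appends a unique suffix
  namespace: kanister
spec:
  actions:
    - name: backup              # which action in the Blueprint to run
      blueprint: mysql-blueprint
      object:                   # the workload being protected
        kind: StatefulSet
        name: mysql
        namespace: mysql-test
      profile:                  # where the backup artifact should land
        name: s3-profile
        namespace: kanister
```

Because this is just a Kubernetes object, creating it can go through kubectl, kanctl, a GitOps pipeline, or any client of the API server.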
So if you want the history of all your backup and restore activities, you just list the ActionSets and you know what happened, what failed, what succeeded, and so on. Good. Shall we start showing everybody this in action now, Michael? Certainly. All right, let me stop sharing and hand it over to you. Let's go for a small demo. Let me know if you can see my screen. Can you see my screen? Yes. Okay, cool. So this is going to be a really command-line demo, but yeah, that's how we use Kanister. I am deploying the solution on an OpenShift cluster, to be accurate on ARO, which means OpenShift on Azure, and I am in the namespace mysql-test. In this namespace I have a Helm chart deployed, a MySQL Helm chart. So I've got the pod, which is actually a pod of a StatefulSet; I do have a StatefulSet, of course. I also have a PVC, because it's a stateful workload, and I also have a Secret, which holds the credentials to the database, this one here. Now I can show the content of the database, just so you see that I do have some content inside. Let's exec inside the pod of the StatefulSet, and I'm going to connect to the MySQL database. Let's see the databases: I have the usual system databases, and also the test database, which is something I created. Inside this test database, if I use it, I have some tables; actually, for the demo, it's just one table, the pets table. And if I do a SELECT * FROM pets, I can see one row, a hamster. All that to say I have that data in my database, and I want to take a backup of it. So I'm going to use Kanister for that. First I need to create a Profile. I already created one, so I can show you the profile I created with a get profile. 
This one is an S3 profile, which means it's S3-compatible; actually, it's a profile that points to an AWS S3 bucket. And I also have a Blueprint. The Blueprint is how you define the operations performed when you do the backup, and it was created by just creating a Blueprint object. So I can find my Blueprint; it's here, the MySQL Blueprint, and we're going to have a quick look at it. It's not the goal of this presentation to go into the details of the Blueprint, but we can take a quick look. A Blueprint is made up of different actions: a backup action, a delete action for when I delete my backup, and a restore action. If I go to the backup action, the important thing is that it's actually a mysqldump: I'm doing a mysqldump of my database, I zip the dump, and I push it to the profile location. That's exactly what I'm doing. And once I'm done, I save the path to this dump so that I can reuse it later. Now I can just demo a backup. What does it take to create a backup? Let's first grab the profile and make sure it's good. Yes, correct; the profile is there. And I'm going to create an ActionSet, the famous ActionSet we've been speaking about. What I'm doing is invoking the backup action of the MySQL Blueprint, which lives in the controller's namespace, against the StatefulSet, which lives in the mysql-test namespace under the MySQL release name. Then I send all that to the profile location I already showed you. So you see, doing a backup has become something very simple and very easy to follow. Now let's do it. Okay, the ActionSet has been created; you see its name there, and I can grab the name into a variable to make things easier. So what we want to do now is follow up and see how the ActionSet is going, and this is very simple. 
The only thing I have to do is a get actionset, and what I can see is that the backup is complete, so it succeeded. Let's have a look at the bucket. If I reload, I see a new folder. We are just seeing your... oh, I'm sorry, I was showing you my AWS console. Okay, well, why don't you cut over to that, and I'll explain a tiny bit more. Because what you can see here as a CLI invocation also has a full API behind it: this is how you can put backup and recovery into GitOps. We don't displace GitOps; we augment it with the data protection operations, with Kanister. And so you're absolutely right, this is an object, and we can see its content. It is really a Kubernetes object: an ActionSet with a name, a namespace, and so on. And you can see the three elements: the Blueprint, the Profile, and the object on which we are acting. And in the status, you can see that we created the dump here, on this S3 bucket, and we have the status of the backup: the state here is complete. So now let's imagine that I lose my data because, I don't know, I made a human error; it can happen. Let's imagine that I'm removing my MySQL database. If I do a get po, you see: nothing. If I do a get sts: nothing. And let's say I also remove the PVC. Oh really? No chance. Let's see, get pvc: everything is gone. So I need to restore. The first thing I'm going to do is reinstall the whole thing, the database and the PVC, but this will be completely empty; there won't be anything inside this database. Yes, it's creating now, so we need to wait for the pod to be up and running. 
Now, we're just doing this on the same cluster, but it could be any cluster at this point. Yes, we could restore that onto another cluster; that would be perfectly possible. Okay, so my pod is now running. I could go inside the database and show you that the test database does not exist, that the pets table does not exist, but I'm pretty sure you trust me. So now I'm going to create a restore. This time my approach is different: I'm going to create an ActionSet again, but I'm not going to define the Profile, the Blueprint, or the object, because I'm going to work from the previous ActionSet. I want to restore the backup created in the previous ActionSet. So the only thing I'm going to do is create an ActionSet with the restore action and with that ActionSet's name. Let's do it. And, getting back to auditing, this is exactly how you can target the appropriate point-in-time ActionSet record to get the database back to where you need it for an audit, at any point in time. You just have to find the right ActionSet. Yeah, that's true. Something I did not show: I could also have done a get actionset in the kasten-io namespace, which is where I put all my ActionSets. You see many ActionSets; some of them failed because I was just doing some fine-tuning. So you can always list the different actions, the backups and my restore attempts and so on, to follow up on your operations. So let's do that now; let's create the restore action. Yes, you can see we did a dry run of our restore earlier today to make sure everything was right. So now, yes, there's a warning because we are introducing a new CRD called RepositoryServer, but never mind; everything works fine. I can grab the new ActionSet, this one, the restore of the backup, and I want to put that in a variable, so I'll just execute this command. Get actionset: yes, this is the one. And if I check the value of this ActionSet... 
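Restoring "from the previous ActionSet" means the backup's output artifacts become the restore's input artifacts (the kanctl CLI's `--from` flag wires this up automatically). A hedged sketch of what the equivalent raw object could look like; the artifact name and path are illustrative:

```yaml
# Illustrative restore ActionSet: same binding as the backup, plus the
# artifact recorded in the backup ActionSet's status, passed in as input.
apiVersion: cr.kanister.io/v1alpha1
kind: ActionSet
metadata:
  generateName: restore-mysql-
  namespace: kanister
spec:
  actions:
    - name: restore
      blueprint: mysql-blueprint
      object:
        kind: StatefulSet
        name: mysql
        namespace: mysql-test
      profile:
        name: s3-profile
        namespace: kanister
      artifacts:                 # copied from the backup ActionSet's output
        mysqlBackup:
          keyValue:
            path: backups/mysql/dump.sql.gz   # placeholder artifact path
```

This is why deleting the backup ActionSet objects matters for lifecycle management: they are the ledger that restores and audits draw from.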
You see that it completes, which means that it's successful. And what I can do now is connect to the database and check that my data is back. So I'm going to connect to my pod, connect to my database, and just check the databases. And it is there. And I can just... oh no, let's do `use test` and `select * from pets`. You're right, `use test` and `select * from pets`. And no surprise, we get our data back. So to summarize: once the framework is installed and the blueprints are okay, the only thing you need to do to back up is this, `kubectl create` an ActionSet. And when you need to restore, the only thing you have to do is this: the restore consumes the previous ActionSet, the backup ActionSet. So yeah, that's my demo for the moment. Thank you, Michael. All right. If you would stop sharing, I'll finish up. This one. Okay. All right. So let's conclude and wrap up. What we've shown is that Kanister is a cloud native, open source, extensible framework for Kubernetes data protection. Please adopt it. Please join us. Please improve it by joining us at kanister.io. We have community bi-weekly meetings on Zoom. We have a Slack channel that you can join to ask questions; our engineering team and many other customers and adopters are there. That's how the community interacts and figures out what we need to do next on the roadmap. For instance, I think I saw the completion percentage doesn't really seem to make sense; there might be some bugs there. We have a partner in KubeCampus that does a lot of Kubernetes training, and they have a tutorial for using Kanister. Please come to kanister.io and you'll be able to get to all of these references. The MySQL database blueprint is available on our GitHub in the project, and today's webinar materials themselves, the code that was executed, are also available there.
I've also linked to the references we cited earlier: the CNCF survey about going from stateless to stateful; the Data on Kubernetes Community 2021 report, again on the increase in stateful workloads and which databases are being run; and the Datadog Container Report, also for more database insight. We are actively involved with the Kubernetes community and Kubernetes engineering in the Data Protection Working Group, which has a charter and a white paper on all of the data protection concerns that need to be worked out between the storage provider community, the application provider community, and so on. So please come and join us there. In particular, we are leading a lot of the effort for Kubernetes Enhancement Proposal 3314, which is to introduce changed block tracking to the CSI volume snapshot operation. If you're interested in helping us get that spread and adopted, please join us or even comment on the design; we are in the prototype phase right now. So that gives you, I think, a great overview of what Kanister does, where it does it, and how it does it, which is most important, because we're helping all of our customers and the entire CNCF and Kubernetes community grow in their data protection maturity so that they have disaster recovery in an application-consistent way. It's not easy, but with a community like this, we're solving it, and we're solving it at scale. So, Michael, any other final comments? The last thing I would say, from my experience: don't try to implement a backup solution yourself. Don't try to implement a backup framework, even a very small backup framework of your own, because it's an incredibly difficult problem to solve. It's better to rely on frameworks that have experience in this matter, and we do have a lot of experience. And believe me, the Kanister framework has been built right, really built right, but you need some experience and exercise to understand it.
Yeah, we'd like you to make mistakes with Kanister and recover from those mistakes, and achieve data protection that much sooner, rather than relearn all of the mistakes that we've already corrected with our community and proved in many more use cases than even what we imagined. Because we do data protection at Veeam for traditional workloads, and because Kasten does it for Kubernetes specifically, we created Kanister so that we didn't have to do a specific integration for each and every provider and each and every application. This is leverageable by anybody in any scenario. This is our contribution to the entire community, and we hope you find it as valuable as we and our customers already have. All right, Libby, how are we doing? We are doing good. We are ready for some Q&A. So if anyone has questions, go ahead and drop them in the chat; we have about 10 minutes to answer anything. Any burning questions? So I think we did a good job on timing then. Excellent job. Anyone have questions for Mark and Michael? Either we did an excellent job, or people are still trying to figure out the right next question to ask. Well, long story short, as we said, we've got lots of resources. Come to kanister.io and learn about it. And not only that, you'll be able to get this entire presentation and our video recording. Libby, hopefully a little later today, I'll send you the final version of these slides right after we finish. Yeah, perfect. All right, no questions. Is this our final answer? You can reach out to Michael and myself through GitHub, through Twitter, et cetera. And our first name dot last name at veeam.com. Awesome. Oh, here we go. "If I lose not only app data, but also the cluster as well, do I have to restore ActionSets before I can use Kanister to restore data?" This is a very good question. Yes, thank you for asking that.
Actually, I showed you that I use the backup ActionSet to recover, but it is not mandatory to start from another ActionSet. You can just create an ActionSet out of the blue. Let me share my screen again. Yeah, stop sharing. Go ahead, Michael. Yes, show this window. Okay. If I just execute `kubectl get actionset -n kasten-io`... this last one, it was the restore one. Yes, okay. And now if I look at the content of this ActionSet, you see that you can perfectly provide the information yourself. You don't have to rebuild from a previous ActionSet; you can create this restore ActionSet directly by providing the information, and that will work as well. The `--from` option is just there to make it easier, but in the end, what you create is a plain ActionSet where you provide the artifacts on which you want to work. I hope this answers the question. I'll add a tiny bit more there. So yeah, if you have a brand new cluster, you would need to get some basic things installed, such as Kanister, get those profiles and blueprints loaded, and then, yes, you can start with your ActionSet. But that's a GitOps operation in my opinion, not hard to do. And we have the Helm chart, so it's really easy to do. All right then. Libby, we'll thank you for your time. Thank you for having us today. Well, thank you both, Mark and Michael. Thank you everyone for attending, and those of you who view this later, thanks for watching. We'll get this up as soon as possible. And join us again for another live webinar with CNCF, or any of our online programs that we post weekly. Thank you both so much again, and we'll see you next time, everyone. Thank you. Thank you, Michael. Goodbye.
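As a sketch of what Michael describes, a restore ActionSet written from scratch, with the backup artifact supplied by hand instead of copied from a previous ActionSet, might look roughly like this. The artifact key (`mysqlBackup`) and the path value are placeholders; the exact keys must match what the blueprint's backup action emits as output artifacts:

```yaml
# A standalone restore ActionSet: all fields provided directly,
# no reference to a previous backup ActionSet needed.
apiVersion: cr.kanister.io/v1alpha1
kind: ActionSet
metadata:
  generateName: restore-
  namespace: kasten-io
spec:
  actions:
    - name: restore
      blueprint: mysql-blueprint    # placeholder blueprint name
      object:
        kind: StatefulSet
        name: mysql
        namespace: mysql
      profile:
        name: s3-profile            # placeholder location profile
        namespace: kasten-io
      artifacts:                    # input artifacts, normally copied from
        mysqlBackup:                #   the backup ActionSet's status outputs
          keyValue:
            path: "backups/mysql/dump.sql.gz"   # placeholder object path
```

This is exactly what makes recovery on a brand new cluster possible: reinstall Kanister, the blueprint, and the profile, then hand the restore action the location of the dump yourself.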