Okay. Thanks, everyone, for joining us. Welcome to today's CNCF live webinar, Protect Your Stateful Kubernetes Workloads with Kanister. I'm Libby Schultz and I'll be moderating today's webinar. I'm going to read our code of conduct and then hand over to Michael Cade, Field CTO, and Mark Lavi, Senior Cloud Product Manager, both with Kasten by Veeam. Mark, I hope I didn't just butcher that.

No, you're good. Thank you.

Okay. A few housekeeping items before we get started. During the webinar you are not able to speak as an attendee, but there is our chat box. Feel free to drop your questions in there, say hi to us, and tell us where you are listening from. We'll get to as many questions as we can throughout and at the end. This is an official webinar of the CNCF and, as such, is subject to the CNCF code of conduct. Please do not add anything to the chat or questions that would be in violation of that code of conduct, and please be respectful of all of your fellow participants and presenters. Please also note that the recording will be available later today, as well as the slides, on the CNCF Online Programs page at community.cncf.io under Online Programs. They're also available via your registration link, and the recording will be on our Online Programs YouTube playlist on the CNCF channel later today as well. With that, I will hand things over to Michael and Mark. Take it away.

Awesome. Mark, why don't you kick things off and introduce yourself?

Great. Thank you, Libby. Thank you, Michael. Hi, everybody. I'm Mark Lavi, and I'm the Principal Cloud Manager at Kasten. Kasten is a business unit of Veeam, so you will see us say Kasten by Veeam. We certainly have commercial products, but we've long been involved in the Kubernetes community with engineering efforts in the CNCF and elsewhere. Today we'll be focusing on our open source project, Kanister, which has just joined the CNCF as a sandbox project. I'm responsible for working with our communities, our external resources, and our upstream and downstream projects. Over to you, Michael.

Yeah, cheers, Mark. And hey, everyone. I've been at Veeam Software for just over eight and a half years now, and my full focus is data protection across all the different platforms that we know and love: physical, virtual, cloud, cloud native. If there's data in it, then we generally need to be protecting it. Over the last two or three years there's been a huge focus on cloud and cloud native environments, workloads, and platforms. What we want to get to today is Kanister: a refresh of where we are from a Kanister perspective, but also a community project that I created as part of a hackathon, which is really about visualizing the state of Kanister inside your cluster. With that, I'll pass it back over to Mark for the first few slides, at least.

Great, Michael, you'll be my virtual clicker. All right, so we've already got the introductions out of the way. Our goal today is to talk about data protection on Kubernetes. We'll talk about the problem space that Kanister addresses and also give you some updates on how the project is progressing. We'll certainly demonstrate it, and we'll show you exactly how to visualize the blueprints, which are our artifacts of operations and orchestration, with the blueprint visualizer that Michael has created.
We'll wrap up with references to all of the points we've made, ask you to join our project, and conclude with any Q&A we haven't addressed along the way. So let's get started. Michael, why don't you address this?

Yeah, so this was a slide that we put together based on the whole stateless versus stateful conversation that we've had for a long time within the community. This is what I always go back to: if we think about Kubernetes from a release point of view, we have three releases a year, and generally speaking, over the last three, four, five years, each one has had something related to storage within our cluster. Granted, that might have been the CSI, it might have been an in-tree provisioner, and it might have been things that we don't even use anymore and that have been deprecated out of the Kubernetes code. But as we get closer to today, in terms of maturity, we see things like the volume snapshot going GA, and even more over the last few years around volume health monitoring, cloning technologies, and the enhancements around the CSI, the Container Storage Interface, and what we can do there. And, not to steal Mark's thunder over the next few slides, it's no longer a question of stateless versus stateful, because really the conversation is: is the state in the cluster, or is the state outside of the cluster? There aren't many of us running purely stateless workloads within our Kubernetes clusters, because that static website is going to be attached to some sort of storage, whether it's a database or some other data service, to hold its information. So really this is to highlight that storage in Kubernetes is a first-class citizen: people are running storage as a data service within their cluster, as well as outside of it, and we'll get to that later on as well.

All right, let's go to the next slide. So yes, what are those stateful workloads? From number two onwards on this chart, which is from the Datadog container report, they're databases for the most part. There are certainly many, many more persistent workloads out there, but databases are pretty much what we see coming onto Kubernetes clusters. Let's move on.

Just one thing to add there, Mark: I have that conversation regularly, especially with someone in the community running stateless workloads or thinking that their Kubernetes cluster is stateless. I ask: do you have any persistent volume claims in your cluster? And that soon unravels into things like observability, Prometheus metrics, et cetera. Well, are they important? Should we be protecting them? Ultimately we find out, well, there is some state in there. And that's okay. It's okay to run stateful workloads within your Kubernetes cluster.

Certainly, we have many, many different customers and use cases, and we'll cover three major patterns that we've seen. These dashed boxes represent a Kubernetes cluster with multiple pods and workloads, stateful or stateless. In the first case, on the left, we basically have the databases and everything stateful completely over the network, off of the cluster. The second, middle case is where the databases are on the cluster, but in separate pods, maybe with dedicated storage or even dedicated workers and so on, isolated in that regard.
But really where we see development and test, and the most agile workloads, operations, and development practices, is when the databases are co-located with, and in the same pods on the same cluster as, the workloads, and the whole thing can be deployed as one atomic unit. Usually this is great for development and testing; you might scale it differently for production. But for a quick, atomic deployment, this is the most efficient way, and it's how almost all of our developers are working today: everything on the cluster, everything declarable and controllable through Kubernetes and cloud native operations, including the storage. So we see people very much going from off-cluster to on-cluster to in-pod as a whole unit of deployment.

Yeah, and to add to that, it really depends on the skill sets that you have and the performance required, because the further right you go here, the closer your data service sits to your application, which gives you the most control: you have all the access to the database and the data service, you have the application there as well, and you're tweaking the configuration. Now, if you've come from a virtual machine hosting that database, it might be easier to offload that control and administration to a PaaS-based service such as Amazon RDS, or another PaaS like MongoDB Atlas, or something along those lines. And the middle ground could be a dedicated infrastructure Kubernetes cluster alongside your application clusters, with that infrastructure cluster serving out storage to all of the individual applications that require it. So, three different options that we see out there in the wild, but the big thing we'd like you to take away is that it doesn't matter whether it's inside the cluster, a dedicated infrastructure cluster, or a PaaS-based system: we still need to think about protecting that workload. Data protection is key. Backup is relatively boring, and I've worked in it long enough to be able to say that, but recovery is ultimately what gets us out of trouble when bad things happen. And I don't just mean the topical stuff, the ransomware; yes, that is a bad failure scenario, but we all make mistakes, and that's when we tend to want to go to our recovery points.

Right, and as you'll see, Kanister can address all of these situations, especially for orchestrating off-cluster concerns. So, great, we've already started to address that stateful is the reality, and if it isn't for you yet, it probably will be, because you've been successful with stateless today, and we've seen this growing. We also see customers saying, hey, I have GitOps, I don't need to take care of all this. But the truth is the cluster has state. You need to deal with forensics, and you probably need to deal with auditability, and so on. Those again are persistent artifacts that we need to get off of the cluster, because you can't prove what the deployed state actually was unless you audit it.
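For reference, Michael's earlier "is there actually state in your cluster?" question can be answered with a quick kubectl query; anything it returns is state living on the cluster that you may need to protect. This is a generic check, not something specific to Kanister:

    # List every PersistentVolumeClaim across all namespaces -- each one is state in the cluster
    kubectl get pvc --all-namespaces

    # If the CSI snapshot CRDs are installed, this shows which snapshot classes are available
    kubectl get volumesnapshotclasses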
And then yes, everybody says, well, I already have RDS, I already have Azure, I already have these workloads on clusters with a managed service provider. Well, the truth is everybody has outages, and all of the cloud providers recommend that you have disaster recovery. That means taking your data off of those workloads; those persistent artifacts need to come off of those clusters, out of those clouds, to another party, and maybe come back into those clouds, certainly. But there needs to be an independent copy, with no single point of failure; that's basic disaster recovery 101. We'll talk a little bit more about that now, but please carry on, Michael.

Yeah, so on that myth three, Mark, about the public cloud: I was just at AWS re:Invent last week, and they're very vocal about the shared responsibility model. Yes, they're going to keep the infrastructure up, they're going to keep the lights on for you, but that data is your data. If you drop a table, if you make a bad mistake and accidentally delete something, that's your problem; you've made that deletion. That's the shared responsibility. And then from the GitOps perspective: if you lose the cluster, that's fine, you might have all of your application in GitHub or some other source code repository, but that database won't be stored in there. So how do we go from version one to version two of our application? Great, we've got all of that in our source code repository, but the database is the key part, and probably the most critical data within that application.

Okay, on to this one. So yes, a lot of people then say an etcd backup is my way of backing up a cluster, and we want to bust this myth. I believe that yes, an etcd backup is a requirement for most everybody; however, it is not a disaster recovery solution for restoring the state of your applications, because of the persistence we've already mentioned. More to the point, I've never seen anybody actually restore it directly and then get what they needed, because the cluster drifts, the applications drift, the data on disk drifts, and you will not get that back from etcd. So yes, etcd backup is part of the solution, but it needs to be put in the right context, and the truth is we really put it in the forensics and auditability context, not backup and disaster recovery.

All right, one more thing, because we have a lot of developers here bringing their applications to Kubernetes, and they need to know a little bit about data protection. The 3-2-1 backup rule is the time-honored way to look at disaster recovery. Michael, do you want to tackle this?

Yeah, I think this is fundamental, and think of it as a methodology: it doesn't matter whether you're using Kanister, K10 as a commercial product, or any other data protection tool on any platform. The key thing is that we need backup because we need to be able to recover from whatever failure scenario we may hit, and it might be a dramatic one that makes the news, but it equally might be one where we make our own mistakes and something happens. This actually came from a photographer way back; I don't know if you know the name, Mark, I can't remember it.

I'm drawing a blank. I will add that reference later on.
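To illustrate the etcd point concretely: taking the snapshot is straightforward, which is exactly why it is worth doing, but the result contains only the cluster's API object state, not the data sitting in your persistent volumes. A typical snapshot command looks roughly like this; the certificate paths shown are kubeadm defaults and will vary by distribution, so treat them as placeholders:

    # Point-in-time snapshot of etcd (adjust endpoints and certificate paths for your cluster)
    ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key \
      snapshot save /var/backups/etcd-snapshot.db
    # Note: this captures Kubernetes object state only -- no PersistentVolume data is included,
    # which is the gap Mark describes above.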
But the methodology stays the same: if we make sure we've got three copies of our data, on two different media types, with one of them offsite, and immutable I would add, then we have pretty much covered any failure scenario. Think about the three copies of data: I have my production copy that we're working on right now; I have a backup of that on a different media type, maybe off to object storage, maybe a snapshot as well for a really fast recovery point; and then I make sure that one of those copies is offsite, away from my original, and that I'm leveraging immutability, whether that's the Object Lock API from an S3 point of view, or some sort of immutability flag on Linux, or Azure Blob Storage immutability. There are a lot of options out there for that offsite copy, and that protects us if something happens to production: a copy on different media to production, plus that copy offsite as well.

Which then leads us into the next piece: why do we need to think about this? Because if we think about that failure scenario I mentioned, whether it's the accidental deletion or the topical ransomware attack, we have a failure. Now, if we're only taking a backup every 24 hours, and we would like a recovery point of so many minutes or hours, we have to consider how far back we need to go. With a 24-hour schedule, if you make a mistake now, you may have to go back up to 24 hours, to the last backup, as your recovery point. Which then leads on to: how long does it take to actually recover the workload and the data? If that's petabytes of data, it's going to take a lot longer. Ultimately, that gap in the middle, how far back and how long, is really going to define how much data you lose and how long your business, your application, is actually offline. So it's a very simple diagram. Traditionally, and I mean back in the physical world, we would back up every day, and maybe you would take the tape offsite and it would live in the boot of your car for the evening, or maybe even the week, and then you'd come back the next day and it would go around again, from an admin's point of view. Now that we're in the world of cloud and virtualization and other PaaS services, maybe we don't have to do that every 24 hours; maybe we can get down to hours, and down to minutes on some workloads, and keep the longer schedules for those we don't see as so important. So it's about squashing that RPO and RTO down. The failure is going to happen, whatever failure that may be, so think about how far back you need to go from the last backup, how long it's then going to take to recover, and where, as well, because if the failure scenario is a catastrophic one and we've lost a whole site, then we have to think about how we build the new site and how we recover that data from the offsite location. Those are all things to be thinking about when it comes to data management. Mark, over to you.

Sure. So let's now talk about this, because if we're talking about 3-2-1, right: three copies, say CSI snapshots of my persistent volumes on the cluster, with two of those copies getting off of the cluster at the very least, right?
Two copies in a second place, and then one of those has to be way out of that availability zone, right? Geographically separate. So that's the way to think about 3-2-1. But now, what is the backup? With Kubernetes we have all of the infrastructure concerns and all of the application concerns together, plus a lot more, with network and storage providers in the mix across the entire stack. Traditionally, back in the bare metal and VM days, just stunning the VM with the hypervisor and taking a snapshot of the disk was good enough for a crash-consistent backup. But with Kubernetes we can get to an application-consistent backup, because it is not good enough to just snapshot a disk; we still have to get all of the data at rest on that disk, and that potentially requires the entire application. So we have a lot of methodologies and techniques, and they can progressively be combined to get to whatever granularity we need; sometimes it's a combination of these factors that gets us to an application-consistent backup. Those databases, the website, the front end, the back-end logic, the load balancing, and any other stateful information need to be persisted and at rest before we can even use the snapshots built into Kubernetes for Container Storage Interface providers. And once everything is frozen, and this is where Kanister comes in to help us orchestrate all of these concerns, we can finally take a snapshot. But you may also need a logical dump from the application, because we're not dealing with one piece of data on one node; we're probably talking about a distributed database if we're doing this properly. There's no such thing as a useful snapshot of just one little shard of a cluster; that's not going to work. So we probably need logical tooling to persist all of it onto disk, and then we can use system tools or snapshots, or a combination thereof, to get it all persisted and then take a snapshot off of it. We can use advanced technologies like changed block tracking to make this an efficient backup, so it doesn't take a lot of time, therefore decreasing our RPOs, and doesn't take a lot of storage, therefore not growing our bills exorbitantly, and it can also be transmitted over the network efficiently, for the two and the one in our 3-2-1 backup rule, to make sure that we have disaster recovery. So there are a lot of concerns to balance, and Kanister helps us address all of them together, in orchestration. Let's move on.

So yeah, as soon as people start to address how to back up their persistent volumes, they start scripting it, using the storage provider's tools. But this rapidly becomes a single point of failure in code as well, because we've targeted all of the pets, all of the single points of failure, all the single providers in a solution, which in a disaster recovery situation may be on a different provider, a different zone, a different availability, with different quotas and different storage providers. In fact, even migrating between clusters we consider almost like a disaster recovery event. All of those challenges mean that the scripts are going to be brittle, bespoke, not well managed, and not distributed like a cloud native application. And for all of that manageability and those enterprise needs for disaster recovery, a small script
will not meet the needs; it meets a temporary need just to get started, but rapidly we reach the point where Kanister is needed to address all of these factors. So let's move on.

Cool, so let's get into it. Mark and I have both mentioned Kanister probably 25 times now, but we haven't told you what it is. So Mark, do you want to take a stab at this one first, then I'll follow?

Sure. So, in the interest of telling you what we're going to tell you, then telling you, then telling you what we've told you: we've already told you that persistent workloads, primarily databases but not exclusively, are really here, and a concern for almost all of our customers and adopters of the project. Kanister allows data protection operations to be done in a completely cloud native, extensible way to achieve application-consistent backup and recovery. It has been an open source project since 2017, and we'll go into it, but we already have example blueprints that cover many of the databases and database services the community needs. And we see this growing: customers are asking, how do I deal with my new vector databases, and so on, is there a Kanister blueprint for that? Well, we want to collaborate, and we want to solve this problem in a cloud native manner for all persistent workloads from this point onwards. So we're hinting at blueprints, and we're about to go there next.

Yeah, exactly that. The other thing I always have to highlight is that this doesn't have to be inside of the Kubernetes cluster: services like Amazon RDS, MongoDB Atlas, and basically any database external to the Kubernetes cluster can still be captured by a blueprint, and we'll talk about what that blueprint enables us to do as well.

Remember, there are three major patterns of deployment with stateful workloads, and yes, Kanister accommodates all of them.

So, an execution walkthrough, and I'm going to be very brief here because I know we've covered this in the past. Kanister as a standalone project is a Helm chart deployment. Super simple: you'll see on the right-hand side here that we have a helm repo add for the Kanister repository, a kubectl create namespace for kanister, and then a helm install command. You'll also see that Kanister is made up of CRDs, custom resource definitions, and that's what we're really going to dive into. But the key part, and the extensibility of Kanister, really comes from those example blueprints, or blueprints in general. You can see a list of examples down on the left-hand side from the GitHub repository; if you go to kanister.io, that will get you to the GitHub, and there are examples in there that you can use either straight out of the box, or, better, modify towards your own data service.
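To make the deployment Michael describes concrete, the Helm-based install maps roughly to the commands below; the repository URL and chart name follow the Kanister documentation, so double-check docs.kanister.io for the current versions:

    # Add the Kanister Helm repository and refresh the index
    helm repo add kanister https://charts.kanister.io/
    helm repo update

    # Create a namespace for the controller and install the Kanister operator chart into it
    kubectl create namespace kanister
    helm install kanister kanister/kanister-operator --namespace kanister

    # Confirm the custom resource definitions (Blueprints, ActionSets, Profiles) are registered
    kubectl get crds | grep kanister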
So, in order for us to instruct a blueprint to go and do something: within a blueprint we have three actions, a backup, a restore, and a delete, and we require an ActionSet. An ActionSet is kind of an if-this-then-that: it calls on that blueprint to go and do something. So an ActionSet is an instruction for what we're going to do. You can think of it a bit like a job, although a job would be a continually orchestrated affair that happens on a schedule; these aren't scheduled unless you use something like a cron job, and each time you create a new ActionSet. The ActionSet is the instruction we give the controller to trigger, via a blueprint, what we want to do. That blueprint then uses its function to quiesce the workload, the database service, in or outside of the cluster, and then we would offload that data if we wished; we do have some examples that are snapshot-orientated, but generally speaking, to adhere to that 3-2-1 rule, we drop one copy out into object storage or a cloud snapshot type environment, and then we report back to our ActionSet to say that everything is good. We would also use that same ActionSet mechanism if we wanted to run a restore.

So, breaking down the blueprint, since it's the key part, and this just summarizes what Mark touched on around consistency. Crash consistent: think about physical systems, maybe even your physical PC. If you were to pull the power cable out, maybe not so much today but definitely 20 years ago, the likelihood of it coming back in a fresh, clean state would be impressive; if it was Windows, it would probably come back with a blue screen of death and have to do a check disk to confirm everything is good. So crash consistent might be all we can achieve; it might just be a case of saying, right, I need a storage snapshot, and bang, we take it, because there's no other option. But we know there's a risk that when we bring it back, especially with transactional workloads, it might not come back in a good state; we might lose some of those orders, some of that important data. What we really want to get to is application consistent, where we freeze the data service, offload to a storage snapshot, and then unfreeze the data service. Yes, there's going to be a small delay while we're frozen, but then we let things go again, and that's better than crash consistent. Then there's database consistent, which gives us the ability to leverage logical dumps via something like pg_dump or mongodump, allowing us to offload a full copy of that database in a consistent fashion to the object storage location. That's great, but every time you do that to 100 gigs worth of data, it's a hundred gigs every single time, which takes time. Again, that might be all we can get to, or we might be looking at a full application capture, using a combination of tools and layers to achieve it.

So, as I mentioned, when Kanister is deployed within our Kubernetes cluster as a Helm chart deployment, Blueprints and the rest are seen as custom resources. There are three custom resources, well, actually four, but I'm going to talk about three: we've got Blueprints, we've got ActionSets, and we have Profiles. ActionSets are the instructions I mentioned, Blueprints are how we do something, and Profiles are where we store it. These are all YAML manifests within our Kubernetes cluster.
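An ActionSet is itself just a small YAML manifest. The sketch below is a simplified example of what a backup ActionSet can look like; the blueprint, workload, and profile names are made up for illustration, and the exact fields should be checked against the Kanister documentation:

    # Hypothetical backup ActionSet: "run the backup action from mongodb-blueprint
    # against this StatefulSet, storing data via the named Profile"
    cat <<'EOF' | kubectl create -f -
    apiVersion: cr.kanister.io/v1alpha1
    kind: ActionSet
    metadata:
      generateName: backup-mongodb-
      namespace: kanister
    spec:
      actions:
      - name: backup
        blueprint: mongodb-blueprint
        object:
          kind: StatefulSet
          name: mongodb
          namespace: mongodb-app
        profile:
          name: s3-profile
          namespace: kanister
    EOF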
Now, within the Blueprint, as I mentioned, we have three different actions. We have a backup action, and, to describe what's happening here: under that backup action we have a set of instructions that says, this is what I want you to do when this is triggered against my MongoDB database, in this instance. It hits the StatefulSet name, which gets to the pods within my StatefulSet, connects in, uses a MongoDB dump command, and then archives that out to a profile. The profile and everything else is already defined as part of our ActionSet, but really what we're doing here is instructing this container image to take a copy, a backup, of that data in a consistent state and offload it into our profile, our object storage profile. You're only triggering these as and when you need them, whether it's a backup ActionSet, a restore ActionSet, or a delete for a retention period.

The restore action, if you hadn't guessed, is the reverse operation: we take that copy of data that we stored in our profile, pull it back, and restore it into our data service. Again, here we're using native tools such as mongorestore to bring that consistent copy back into the data service. This could also be used from a migration point of view, going from one cluster to another, so we can start using that copy of data for clone purposes as well.

And then finally, quite simple, is the delete action. Think of the delete action as being about the retention period. We could keep running these ActionSets on a daily basis for the next 100 years, but you probably don't want to keep that many recovery points, so we want to introduce a retention period and delete accordingly, and we can do that with the delete action as well.
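For a feel of what's being described, here is a heavily trimmed sketch of a MongoDB-style Blueprint showing just the backup action. It is modeled on the public examples in the Kanister repository, but the image name, host, and paths are placeholders, so treat it as an illustration rather than a working blueprint:

    # Hypothetical, trimmed Blueprint: one backup phase that runs mongodump and
    # streams the archive to the location defined by the Profile via kando
    cat <<'EOF' | kubectl apply -f -
    apiVersion: cr.kanister.io/v1alpha1
    kind: Blueprint
    metadata:
      name: mongodb-blueprint
      namespace: kanister
    actions:
      backup:
        outputArtifacts:
          mongoBackup:
            keyValue:
              path: '{{ .Phases.mongoDump.Output.backupPath }}'
        phases:
        - name: mongoDump
          func: KubeTask
          args:
            namespace: "{{ .StatefulSet.Namespace }}"
            image: ghcr.io/kanisterio/mongodb:latest   # placeholder image
            command:
            - bash
            - -c
            - |
              backup_path="backups/{{ .StatefulSet.Name }}/dump.gz"
              mongodump --gzip --archive --host "{{ .StatefulSet.Name }}" \
                | kando location push --profile '{{ toJson .Profile }}' --path "${backup_path}" -
              kando output backupPath "${backup_path}"
    EOF

A real blueprint would also define restore and delete actions, as described above, typically pulling the archive back with kando location pull and piping it into mongorestore.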
Okay, anything to add there, Mark? Did I miss anything?

No, keep going.

Cool, so that brings me on to a little hackathon project that I created. So far the screenshots that you've seen are all CLI. I do have access to a cluster that has Kanister installed, so we can jump into that as well. But one of the things I wanted to do: the command line is great if you're familiar with it, but I come from an operations and infrastructure background and I tend to quite like UIs, maybe more than a lot of people who prefer CLIs, and I feel like you can get a lot of feedback from a visualizer. So this was part of a hackathon, 24 hours long, and I'm no developer by any stretch. When we look at the command line, yes, I can go through here: you can see that I've got several blueprints, several ActionSets, some restores and some backups, and some profiles; some in Azure, some in GCP, some in AWS, and some S3-compatible storage. Great stuff. I can interact with that, and I can start using kando, a CLI tool for Kanister; maybe that's one for the next hackathon, how I can incorporate that. But basically, what I wanted was a way to visualize these resources. Not so much the profiles and the blueprints, they don't tend to change too much, but I did want to see what's in my cluster, and in particular the ActionSets, because these are what's running on a scheduled, cron-job basis. I wanted a bit more visual feedback: if it's good, then we're all green, because everyone knows green is good; but if it's bad, then I want to see when it's bad, so I can drill down and work out what the problem is. A stretch goal was to hover over an ActionSet and get a bit more, essentially the kubectl describe on that ActionSet, to understand what the problem is; maybe next hackathon we'll get to that.

So let me just walk you through it. It's very simple, and forgive me, I'll share a link to the GitHub repo; don't judge me too much, I'm not a developer, I'm a hacker at best. I wanted a navigation pane to get to our Kanister resources: the GitHub, kanister.io, docs.kanister.io, as well as the Slack channel that Mark is very active in, to speak to the rest of the Kanister community. That navigation bar was a quick win to get around. I wanted to display my profiles, and notice I used icon graphics to distinguish which cloud they're in, and they give me the details so I can see what marries up to my object storage locations. Then I have my blueprints; here I have four or five blueprints that I've added, and again the code uses some if-selections and icons to indicate which workload each blueprint protects, and I'll show in the demo that there are a load more I've factored in. And then finally, the ActionSets. As I mentioned, I wanted that visual: I wanted to know the status, the type of ActionSet, which namespace, so which application, it was protecting, and which profile it was using, marrying that profile up to what we see over on the left-hand side. So with that, hopefully this will allow me to play; as soon as we see the mouse moving... yeah, it's moving.
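The CLI view Michael flicks through corresponds to listing the three custom resources directly; using the fully qualified resource names avoids any ambiguity, and the namespace here assumes Kanister was installed into a namespace called kanister:

    # List the Kanister custom resources the visualizer renders
    kubectl get blueprints.cr.kanister.io -n kanister
    kubectl get profiles.cr.kanister.io -n kanister
    kubectl get actionsets.cr.kanister.io -n kanister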
Okay, so on the GitHub repo you'll see a readme; this walks you through how to get started. There are really two key ways to deploy this and use it with your Kubernetes cluster: one is to deploy it with the YAML files that you see within the repo, and the other is to run it locally, either in a Docker container or using the binary. What it does is connect to your kubeconfig, or it runs deployed within your environment. So basically the readme gives you two options as to how and where you want to deploy it. What I've done is deploy it within my Kubernetes cluster, in my kanister namespace, so it runs alongside my Kanister deployment. I feel like this is stopping... okay. So yeah, you can see that we've got the manifests: we create a deployment, role-based access control, a service account, and a service, and the service is what we gain access through. If we go to kanister.vzilla.co.uk you'll see a live version of this; you won't get into the cluster, but there is a live version you can get to. Again, if you wanted to use a Dockerfile you could do that as well, because not everyone wants to run the binary locally. The readme also goes through the Kanister deployment options, so for anyone who's just getting started and wants to get hands-on with Kanister and play around with some examples, it also covers how to deploy Kanister using the Helm chart. It also covers adding profiles to the cluster; the documentation does as well, but for this project I wanted it to be clear and concise, to try and win the hackathon. As you can see, we use kanctl, the Kanister CLI, and these are the example blueprints from the Kanister GitHub; I've just simplified them so I can quickly show what they look like in the UI.

So if I run kubectl get pods, you'll see that I have a kanister-visualizer pod already running; it's been running for four days leading up to this demo. Very simple deployment, nothing too fancy, no persistence on this. And I also have a service, which, if you can copy it quickly enough, will get you there as well, but kanister.vzilla.co.uk will get you there too. This is in an EKS cluster. As you can see, my Kanister visualizer and my Kanister deployment have nothing in them as part of this demo; if you went there now you would see a different story. Obviously, like all good developers, I put a dark mode into the interface to get extra points. I also added a button on the left-hand side that gives an About Project page. You can read through that, but ultimately it says what I'm telling you now: it's a visualizer that lets us see what the CLI shows. Then it has those web links that I touched on in the navigation bar. But what we really want to see is some profiles, some blueprints, and some ActionSets in our cluster. So with that, if we come back into here, we've also got an application that we run in here. In fact, before we do that, we're going to add our profiles and our blueprints, but we're not yet going to create our ActionSets. As you can see here, it's just a very simple bash script, so I'm not sharing any secret keys; it goes through and adds your S3 profile, your Azure Blob, your Google Cloud Storage, and a couple of blueprints.
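The profile creation that the bash script wraps can also be done one at a time with kanctl; the flags below follow the Kanister documentation for an S3-compatible profile, with obviously placeholder credentials and bucket details:

    # Create an S3-compatible location Profile in the kanister namespace (placeholder values)
    kanctl create profile s3compliant \
      --namespace kanister \
      --access-key "<AWS_ACCESS_KEY_ID>" \
      --secret-key "<AWS_SECRET_ACCESS_KEY>" \
      --bucket my-kanister-backups \
      --region eu-west-1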
And we can now go and create those ActionSets, but we'll need that profile name, so I wanted to display that as part of the UI. So with that, if we refresh this page: magic, we've now got some profiles to send our backups to, and we've got some blueprints to associate with our workloads, but we have no ActionSets, nothing sending anything anywhere, no backup plan yet. You can see here, yeah, I've got one profile in Azure and one in GCP, just to show how agnostic Kanister is about where we send our backups. You've got blueprints for MySQL, for Elasticsearch, for Redis; if you go and look at the examples you'll see a lot of these, and a lot more logos are coming later on. If we don't have an icon for a blueprint, we use that top one; that's why I had it in there, so if it's a bespoke blueprint that isn't listed, it just uses that.

Now, I want to go and create that ActionSet, a backup of my mission-critical application, so let's just see. Okay, so I have a MySQL test; obviously mysql-test is very important to me and I want to make sure that the data is... let's see if Michael comes back. There he is.

Oh, sorry, I don't know where I got to. So, I've got that mysql-test, mission-critical application, that I want to protect, and I'm saying: from the namespace kanister, take this MySQL blueprint, point it at the StatefulSet mysql-test/mysql-release, and then I'm hitting this profile, an S3 one, W65QL, and I'm using a secret so I can get inside of that mission-critical SQL environment. So I've run that, and you can see, yeah, all good; in fact, it's running. Again, I wanted to show a progress bar in there; the fact is there's not enough data in here for it to really show much, so by the time I refresh this and have another look, it will be done. It tells me the profile I'm sending it to and what namespace it's for. I think demo-me should click refresh now, and we see that it's complete. Good stuff, all green.

However, I'm now going to simulate a... do I simulate a restore first, or a failure? Maybe it's a restore. Okay, so let's say that I dropped some data. I didn't dive into the MySQL database, but in theory it's going to be this process: we've made mistakes, mistakes were made. So we would then simply create an ActionSet which is of type restore this time. And again, if we go into here, it shouldn't take very long; you can see that we're already complete, because there's really no data in there. In theory, in a longer demo, I would go in, show you some important data, take a backup, then cause a problem and restore it. So now let's create an ActionSet that I know will fail: I'm going to call another blueprint against my MySQL database. Well, that's not going to work, because it doesn't have the capability to protect that workload. So let's run it. And again, this lets me test that the status is now red; I can visually see that there's a problem and go and investigate what it is. Okay, so what else did I show? I'm sorry?

I've never been so happy to see a failure.

What's that, sorry?

I've never been so happy to see a failure result.

Yeah, and again, this comes down to showing, from a hackathon point of view, that testing is important. So I've shown you that much; I've shown you the profiles. That's a good demo. Are we ready to move on?
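The demo steps Michael clicks through map to kanctl commands along these lines; the blueprint, workload, profile, and secret names here are hypothetical stand-ins for his environment, so substitute your own:

    # Trigger a backup: run the MySQL blueprint's backup action against the StatefulSet, storing data via the profile
    kanctl create actionset --action backup \
      --namespace kanister \
      --blueprint mysql-blueprint \
      --statefulset mysql-test/mysql-release \
      --profile s3-profile \
      --secrets mysql=mysql-test/mysql-credentials

    # Restore from a previous backup ActionSet, referenced by name
    kanctl create actionset --action restore --namespace kanister --from "backup-xxxxx"

    # Investigate a failed ActionSet -- the same detail the visualizer's stretch goal would surface on hover
    kubectl describe actionsets.cr.kanister.io <failed-actionset-name> -n kanister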
One stretch goal was, when I went to that ActionSet and it failed, to be able to hover over it and see something like "could not render object reference: failed to render template: app not found", just something that would give you a pointer to go and investigate what the problem was. I think another thing I could do is add the whole stream of example blueprints that we have available into the interface, but maybe I'll do that later; I'm conscious of time as well. I know that we've got... no, maybe I didn't. Okay, let's not play it again. So there was one more slide about how to get involved. It's on my own GitHub at the moment; I would love feedback about what to do next, or anything we should be doing and collaborating on. And then I think it's over to you, Mark, for the wrap-up.

All right then. Okay, so Kanister is a project that we started and open sourced back in 2017, and it has grown a small community. We worked with the CNCF to submit it as a sandbox project just back in June; the vote was in September, it was accepted, and we're working on all of the onboarding now. We've also got a press release announcing this; that happened around KubeCon Chicago just a few weeks back, and we also announced our first independent software vendor, ISV, support of a Kanister Blueprint: that's EnterpriseDB with their Postgres database. And we had some of our adopters give a customer testimony, if you will, of what they've been using Kanister for, so check that out. We'll have a slightly updated version of these slides with all of these references so you can follow up on them. But our community is growing. We have bi-weekly Zoom meetings, and we have an active Slack channel where our engineering team and other contributors are answering questions for all the new people coming in and learning about this brand new sandbox project. We have a number of example blueprints; I think we showed you a bunch of icons, but go check them out, and this is where the community is contributing updates, improvements, and new workloads all the time. We've also pointed to a number of references on earlier slides, and we're actively involved in the Data Protection Working Group of Kubernetes. As a CNCF Platinum member, our mission is not only to the community and not just to open source, but to advance Kubernetes engineering as well, and this is our contribution, with Kanister as part of it, in addition to all of our commercial activities. So when we drive things forward, such as other CNCF projects like KubeVirt, and ask how those VMs are backed up and managed properly, you can now guess that behind the scenes, Kanister is one of the tools we put in our arsenal for data protection of any type of Kubernetes stateful workload. Kanister is that escape valve that allows us to orchestrate any concern, on cluster or off cluster, with built-in tools or with sidecar containers, with any logical tooling that Kanister doesn't carry itself. We've built an extensible system that removes the need to do this the wrong way, or in a proprietary way, and does it in a completely vendor-neutral, cloud native way for application-consistent backup and recovery. That's been our contribution to the entire community. Please join us, please come to kanister.io.
I've put a link in our chat for how to get to Michael's visualizer instance, I can't speak right now, and we will point to all of the code base as well. This is also our second Kanister webinar; if you wanted to see a more extensive demonstration of how we set everything up, I'll point you to our July one here. And this set of slides is already in our GitHub repo, so go to kanister.io, click over to the Git repositories, maybe go up to the demo repository, and you will see the details of all of these links. We also have a hands-on lab at KubeCampus, and a nearly 70-page white paper taking you through how to become a blueprint maintainer and creator; it's focused around our Elasticsearch blueprint. The white paper requires registration, as does the hands-on lab, but almost everything else is purely open source. And if you have any questions, come and join us in our Slack instance and we're happy to get you on board. So, thank you for your attention today. Thank you, Michael, so much. Thank you, Libby, so much for hosting us. We're really excited to grow Kanister with you, and we're really excited to solve application-consistent backup and recovery for stateful workloads on Kubernetes. This is why we contributed it, and this is just the tip of the iceberg for how to solve this problem. Thank you so much.

Thanks Michael, thanks Mark. Great presentation. Thanks, everyone, for participating. You know where to find everyone if you want to follow up, and we'll go ahead and wrap up. The recording will be online later today; again, you can access it through this link or by visiting the CNCF YouTube channel. Everybody have a great rest of your day, and we'll see you again next time for another CNCF live webinar. Thanks so much, Kasten crew. Thank you, everybody.