Thanks, everyone, for joining us. Welcome to today's CNCF live webinar, "Integrating Backup into Your CI/CD Pipeline." I'm Libby Schultz and I'll be moderating today's webinar. I'm going to read our Code of Conduct and then I'll hand over to Michael Cade, Senior Technologist and member of technical staff at Kasten by Veeam. A few housekeeping items before we get started. During the webinar you're not able to speak as an attendee, but there is this lovely chat that everyone is commenting in; thank you for doing so. Keep saying hello and put your questions there, and we'll get to as many as we can, either at the end or whenever Michael takes a little break. This is an official webinar of the CNCF and as such is subject to the CNCF Code of Conduct. Please do not add anything to the chat or questions that would be in violation of that Code of Conduct, and please be respectful of all of your fellow participants and presenters. Please also note that the recording and slides will be posted later today to the CNCF online programs page at community.cncf.io under Online Programs. They will also be available via the registration link you used to get into this webinar today, and on our online programs YouTube playlist, which is linked in the chat. With that, I will hand it over to Michael to kick off today's presentation. Thank you, Libby. Hey everyone, and everyone on the recording as well. In terms of questions, I'll get to as many as possible.
It's super important to get that feedback and answer those questions, because I appreciate that a lot of us are at different points in the journey, from complete rookies to pros when it comes to the cloud native landscape. We're going to be talking about a couple of areas that are probably a little bit more advanced, but I'm going to try to simplify them as best I can so that people get an understanding of where we're taking this project. I've spoken about Kanister before as a project; I've just put some of the links in the chat, and I'm going to touch on some of those areas as part of the session. But really the premise of the session is about integrating backup into your GitOps flow. I put "CI/CD pipeline" in the title, but it's more your CD pipeline, and I'm going to get into why: you always see CI and CD put together, but actually CI is one thing and CD is another. They're closely linked, but CD is about deployment, about delivery of your applications. Just before we move on to the next slide and start walking through a bit of the theory before the live demo: I'm Michael Cade, a Senior Global Technologist, and I live within the office of the CTO at Veeam, really concentrating on our cloud native ecosystem, both open source and commercial products. If we don't get round to answering any of your questions today, you can find me on Twitter at @MichaelCade1; I'm more than happy to get involved and help with any queries about anything we're doing over at Kasten from an open source perspective. So, I mentioned continuous integration and continuous deployment generally being two separate methods, or ideologies, that we have when we're creating and delivering our applications. Continuous integration is where we create our applications.
We put the code into some sort of version control, we build it, maybe we push it to Docker Hub, and we test it to make sure everything's good; then the software is available on Docker Hub for us to take. The more I speak to our community, the more I find that bit is done by software engineers and software delivery companies, and we're seeing more and more commercial and open source products become available on Docker Hub, or any container registry for that matter. But we can also think about how software is delivered from a more traditional ecosystem point of view, where it lands as ISOs or different binaries that we then want to be able to leverage. It's really that release phase that is common to all of us, regardless of your background: whether you're an operations person looking after platforms as a platform engineer or infrastructure admin, one of the people ultimately keeping the lights on and delivering the "as a service" and the fundamental infrastructure underneath it. The release has always been there: we've built our virtual machines and physical machines, we've installed software on top, the release cadence has always been whatever it was from the software vendor, and we've deployed that out into our ecosystem or environment. Or you're coming from a developer point of view, where you're the people actually writing the code, version controlling it, releasing it, building it, testing it and looking after it. Or you're in DevOps, where you kind of linger in between and have a bit of everything to concentrate on. The real focus for me today is the bottom part of the diagram, and hopefully you can all still see my slides building out here. It's really everything from that release onwards.
In this example I'm using Docker Hub as my source, but really that could be any container registry, any binary, anything that you're deploying in your environment. We're going to have that update: version one comes out, version one gets controlled through, potentially, our own version control system, and then we deploy it, most likely into a staging environment first. We validate that version one is good, version two comes out, we're happy with how it looks in staging, and then we release and deploy into production. Hopefully that makes sense; it's a bit of a high-level overview of CI versus CD. But why am I coming on here and starting to talk about backup? If we think about the left-hand side, we've got our developers creating their version-controlled software and putting it onto GitHub, and they're potentially then releasing that out into your environment using, in this case, another open source tool called Argo CD, which is focused on the continuous deployment of your services. That gives us our Kubernetes application, for example, and everything it consists of, whether that be Ingresses, Services, Deployments, StatefulSets, PersistentVolumes. Version one comes out, version two comes out, and for the most part a sync can be great; it's going to keep updating for you. That's one of the big reasons GitOps has been such a focus area for a lot of businesses over the last 12 to 18 months: we're now at the stage of asking how we automate that deployment better and in a much more efficient manner. But then think about it: I just mentioned persistent volumes as part of that. So let's say we've got some sort of database, a MySQL database, with data being written into it.
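As an aside, the Argo CD piece of that flow can also be expressed declaratively rather than through the UI. Here's a rough sketch of such an Application; the repo URL, path and namespaces are placeholders I've assumed for illustration, not the demo's actual values:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: mysql
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/argocd-kanister.git  # placeholder repo
    targetRevision: HEAD
    path: base                      # the directory holding the manifests
  destination:
    server: https://kubernetes.default.svc
    namespace: mysql
  syncPolicy:                       # no 'automated' block, so syncs are manual
    syncOptions:
    - CreateNamespace=true          # create the mysql namespace on first sync
```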
Potentially it's customers writing data to that database, but equally it could be people internally; either way we're writing into that persistent volume. MySQL is the application, the backend database, I'm going to use in my example as well. So what happens there? We've got the application in version control, so I can roll back and roll forward from version one to two to three, and I can even roll back to version two if things go wrong and don't look right. But version control is not going to be looking after your MySQL database, which has external user input, whether from internal users or customers; ultimately that data doesn't get stored in version control. That's the concern: our version control, our GitHub repositories out there, will bring back all of our Kubernetes objects and artifacts. However, if something has manipulated the data, then we're not going to have a copy of that data. And here's why this is really interesting: we have the concept of ConfigMaps within Kubernetes, which allow us to interact with external services but also external data sets, and it's quite common that those ConfigMaps get used to manipulate data. It might be that we want to remove something, or change the way we see that data, or maybe we're shipping data from one database to another. Well, if you've made a mistake through that code control and, obviously not through choice, deleted all of the data, then that data is not captured at that point. So one of the key areas we've been thinking about from a Kanister open source project point of view is: how do we make this simpler for the people deploying the application in our environment, whether that's the developer, the
DevOps engineer or the platform engineer and everything in between? How can we make sure that before we go from version one to version two, we have a level of protection against that? That's where this comes in: how do we provoke, how do we start, a backup job, or some sort of data move, before we make any version control or code changes, so that if anything was to happen in the code, we've got a copy of that data in a safe location? I keep saying "data"; it's been highlighted, bolded and capitalized all the way through this, and I'll keep saying it: any persistent data or volumes used by applications are generally not captured through version control. There might be some exceptions in terms of logging systems or messaging queues, but you're not really going to want to store those in your lean, efficient GitHub repository. Basically, for any StatefulSet in our environment, such as a relational or NoSQL database, you're not going to be storing the database, or the data within it, in a version control system. So it requires the entire application stack, including the data. Now, we've already established that we have all the code, and that's great; in other words, we can always deploy versions one, two and three again, and that should always be an easy roll. For anything more orchestrated around that, you might want to look at other products that do it in an all-encompassing way. But the data is the most important part, if you couldn't tell, and the data and its dependencies within the stack need to be discovered, tracked and captured in a way similar to what we do with version control; version control such as Git will not capture what that is. The way we're going to do that in our demonstration in a short while is we're going to use something from Kanister.
It's called an ActionSet. An ActionSet can be triggered via a cron job, out of band, so we just make it happen on a daily or hourly basis using a cron job within Kubernetes (and we're looking at how we can better orchestrate that). But it also allows us to use our CD pipelines to trigger the ActionSet, which then lets us offload that data into an external data service, whether that's something like AWS S3 or an S3-compatible store such as MinIO. Kanister is the linchpin that can move that application data out of the container and into a solid external source. So, the scenario. I'm not going to use the staging area, because although I could, it would just confuse what the demo looks like. I've already deployed Argo CD, and what I'm doing here in particular could be done on any Kubernetes cluster anywhere in the world: it could be EKS, it could be AKS, managed Kubernetes. I'm actually going to use minikube, another open source framework for deploying Kubernetes, and I'm going to use Git on my local Linux machine. We're going to make some updates to our application and deploy it to our Kubernetes environment, our minikube environment. We're also going to deploy MySQL and add some external data. Imagine, and I'll show you what this looks like, that a user dives into our container, connects to it, and pushes some data into the database. In fact, think of this as a vet clinic with a database: we're going to push some data into that database. Then something is going to modify that data. I'm not going to simulate a ConfigMap; I'm literally just going to change that data, in a way you'll see when we get to it. But then let's say that mistakes were made: someone dropped the table.
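For reference before the demo: the ActionSet mentioned a moment ago is itself just a Kubernetes custom resource, so anything that can create a resource (a cron job, a CD pipeline hook, a human with kubectl) can trigger a backup. A minimal sketch, assuming a Blueprint called mysql-blueprint and a location profile already created in the kanister namespace (the names here are illustrative):

```yaml
apiVersion: cr.kanister.io/v1alpha1
kind: ActionSet
metadata:
  generateName: backup-mysql-    # the controller appends a random suffix
  namespace: kanister
spec:
  actions:
  - name: backup                 # which action in the Blueprint to run
    blueprint: mysql-blueprint   # the Blueprint that knows how to dump MySQL
    object:                      # the workload the action targets
      kind: StatefulSet
      name: mysql
      namespace: mysql
    profile:                     # where the backup data lands (e.g. MinIO/S3)
      name: s3-profile
      namespace: kanister
```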
That's exactly what I'm going to do: I'm going to dive in and drop the table, or imagine some other failure scenario has made that mistake for us. We no longer have access to that data; by all accounts, in that application, our data is gone. So what we're going to do is leverage Kanister to back everything up to an S3 bucket. I'm actually going to use MinIO, also hosted on the minikube cluster. That's not best practice, because if you lose the whole minikube cluster then you're out of business: you don't have an external copy of your application data. But for the purposes of the demo, think of it as an external copy of the data, as simulated in this image. Then we're going to leverage Kanister again with another ActionSet. Within Kanister we have an ActionSet that enables us to back up, and we also have an ActionSet that allows us to restore; they use a similar API, it's just about being a push or a pull, however we're getting that data back into our environment. So then we go back, we fix our mistakes, i.e. we restore, or we change that ConfigMap, or whatever needs to happen so that our code and our version are good again, and then we restore the data back into our environment. I'm going to run through this, and we might run through it again at the end just to make sure we're clear on the execution walkthrough and how Kanister works. Basically, and I haven't actually done this bit yet, I'm going to deploy Kanister using a Helm chart into our minikube environment; you can see there that's known as the controller, the operator that we have in there. Then we're going to leverage something called a Blueprint.
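To give a flavor of what a Blueprint contains before we look at the list, here's a heavily simplified sketch of a MySQL backup action. The sidecar image and tag, the service DNS name and the password variable are assumptions for illustration; the real stable blueprint in the Kanister repo is more complete:

```yaml
apiVersion: cr.kanister.io/v1alpha1
kind: Blueprint
metadata:
  name: mysql-blueprint
  namespace: kanister
actions:
  backup:
    phases:
    - func: KubeTask            # run a one-off pod with the given image/command
      name: dumpToObjectStore
      args:
        image: ghcr.io/kanisterio/mysql-sidecar:0.71.0   # assumed image/tag
        command:
        - bash
        - -o
        - pipefail
        - -c
        - |
          # Dump all databases and stream the dump straight into the
          # location profile (MinIO/S3) using Kanister's kando helper.
          mysqldump --all-databases -h mysql.mysql.svc.cluster.local \
            -u root -p"${MYSQL_ROOT_PASSWORD}" \
            | kando location push --profile '{{ toJson .Profile }}' \
                --path mysql-backups/dump.sql -
```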
You can see an example of all the stable Blueprints that have already been created for the various data services out there. I talked about relational databases and NoSQL databases, but you'll see from that list they're not the only ones: we've got things like Elasticsearch, and I know there's a Kafka one being worked on as well. For any data service using a database workload or some sort of persistent volume, we can generally use a Blueprint to lift data from A to B, and that's the quite exciting thing about where Kanister is at the moment. Then we create the ActionSet I mentioned. An ActionSet consists of an action, such as backup; it references the Blueprint we're going to use; and it's a set of instructions that basically says: I want you to do this with this particular application. If it's MySQL then do x, y, z as part of the dump; it could be Postgres and do a pg_dump, etc. Then it pushes the result out into our profile, which is the one thing I don't have on here, but we'll get to it. So the ActionSet comes in, either run when you want to run it or as part of your cron job; in our execution walkthrough, our demo, we're going to be using Argo CD to trigger it. The controller then looks for the Blueprint we're asking for (again, we'll go and deploy that as well), executes a Kanister function against that database workload, and then lets us offload the data into object storage, or take a cloud snapshot from that point. I believe this can also go to NFS, but don't quote me on that; I need to check, as I've not done it. Object storage is normally the de facto place for those backups, which is a good place to be keeping an offsite copy of our data. Then, from a recovery point of view, we leverage an ActionSet again and basically perform the reverse of that flow.
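In practice that reverse direction is usually kicked off with kanctl, pointing at the completed backup ActionSet so the restore inherits its artifacts. A sketch of the two commands involved (the ActionSet name is a placeholder; substitute the one from your own cluster):

```shell
# Find the completed backup ActionSet in the kanister namespace
kubectl get actionsets.cr.kanister.io -n kanister

# Create a restore ActionSet derived from that backup
kanctl --namespace kanister create actionset \
  --action restore \
  --from backup-mysql-abc12   # placeholder: the name of your backup ActionSet
```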
That's the reverse of what we just went through. Let me just check the time; okay, we're good on time. So, you've seen we've got Argo CD already deployed, and we've got MinIO with a kanister bucket already deployed. Just a couple of things first; I'll check if there are any questions. I think we're good. Did the slides progress at all? If not... okay, cheers Ivan. Okay, cool. You should be able to see this inception screen at the moment, which is not pretty for anyone. Just a couple of resources: I have posted them in the chat, but that might have been early on, so I want to run through a couple of places where you can go and find out more about Kanister. One is kanister.io, super simple; it will give you a bit more information about what we're going to show. I'm going to show you one aspect of how Kanister can be used, but Kanister can absolutely be used as a standalone tool to just take those point-in-time backups of your application data. You'll find that you can go and fork it on GitHub, give it a star, interact with us, have a discussion, give us your ideas, contribute; you're more than welcome to get involved with what we're doing over here, and the deployment instructions are also there. Another one is docs.kanister.io; this goes into a little more detail, and it could be another great area for contribution: run through it, see what it looks like, give us feedback on the documentation. One of my biggest pet peeves, gripes, is documentation; I want it to basically walk me through how to get a project up and running, and I know documentation has been a real focus for a lot of the projects we're working on across the cloud native landscape. And then obviously there's the GitHub page where the project resides. You can see that it's very active; three days ago was the last commit.
The releases keep going up; it's a Helm chart deployment, so go and have a play, and everything's written in Go. You'll also see a list of community applications as well as stable applications; if you go into examples and then stable, you'll see that long list I had in the slides. Also take a look at some of the other community work we have at the moment, like using Kanister to protect AWS RDS. The data service doesn't always live within the Kubernetes cluster; maybe the application, the front end, the stateless part, does live within the Kubernetes cluster and the data lives outside. Well, we can use Kanister to protect that too. You'll also see things like Cassandra in here, as well as a demo system called time-log. What else was I going to show on here? Ah yes, the steps I'm going to walk through from a demo perspective. This is actually my own repo; I don't know if I linked this, so I'll link it quickly now so you can all see and get ahead of what we're going to be doing. Basically, what we're going to do is install Kanister. In fact, roll back one: first we're going to deploy minikube. Again, this will work on any of your x86 Kubernetes clusters; I'm just using minikube because a lot of us don't have access to AKS, EKS, GKE, managed Kubernetes services. Being able to use minikube on pretty much any desktop machine or laptop gives us a huge opportunity to leverage this and actually see it, because I think that's one of my big things about learning, and learning in public: it needs to be accessible. So, how do we install Kanister? We run through adding the Helm repository and creating the namespace, and we'll do that shortly. For deploying Argo CD I've put the steps in, but I'm not going to do that today; I've already deployed it.
As you saw, it's already there. Then we're going to create a bucket in MinIO; deploying MinIO I've also already done, and there are loads of great resources out there on how Argo CD and MinIO get deployed, so I probably don't need to go into that too much. Then we're going to create a Kanister profile using kanctl. kanctl is the CLI for Kanister (original naming, I know), and it gives us the ability to create those three things we first spoke about: your profile, interacting with those Blueprints, and creating ActionSets. So we're going to create the Kanister profile using kanctl, and then we're going to create, or deploy, a Kanister Blueprint. You can see that I'm taking it directly from that stable mysql-blueprint.yaml; we can go and look at that again in the GitHub repository, and then we'll just confirm that we've got it. At that point I could just use Kanister to protect my application without using Argo CD, but that probably wouldn't fit the bill for this session. So what we're going to do is go into Argo CD and create a new MySQL application, our pet clinic database. We'll run through those steps, and then we're going to create some data; you can see here that we're just going to add one hamster into our database. Then we're going to sync our project, which will deploy our application out into our Kubernetes cluster, our minikube cluster. And then we're going to introduce the bad change into our application. This is actually going to be us deploying a MySQL client, and you can see here, I'm pretty sure I made it quite clear: we create a database called test in the previous step, and we're simply going to drop that database as part of the deployment of the MySQL client. Then we commit and we push.
Think of that as version two of our code set. Then we dive in, and you'll see we don't have a test database anymore. So then we obviously need to be able to restore, and this is where we take advantage of kanctl as well. I'll touch on where you get kanctl: there's the ability to grab it from the releases page on GitHub, but you can also leverage something like arkade. arkade, from Alex Ellis, is an open source application marketplace, a Kubernetes marketplace, and you'll find Kanister available in there. I think that was it; me jumping around probably isn't helping. So yes: we're going to restore our database and get our data back. Let's jump into Visual Studio Code. Just to confirm our environment: I have a minikube profile called simple-mc-demo, we're using the Docker driver, we're using containerd as the runtime, and you can see the Kubernetes version, the ports, and how many nodes; we have that one node. You can also see that we have these namespaces: we have Argo CD, we have our default namespaces, and we have MinIO. Obviously we don't have any Kanister yet, so the first thing we want to do is add the Kanister Helm repo. Now, I already have that; if I do helm repo list, you'll see that I have Kanister in play already, and I have just done an update beforehand, so I'm deploying from the latest Helm charts available. So let's go ahead and create our namespace, and then we're going to deploy Kanister with these commands: helm install, my release, in the kanister namespace, the kanister-operator chart, which we'll see in a sec, with an image tag of version 0.71. It should work with 0.72 as well, but I'm not that daring; 0.72 only just came out, as you saw, three days ago, and I haven't tested it. I wanted the demo to go as it should.
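Pulled together, those install steps look roughly like this; the release name is arbitrary, and the full 0.71.0 tag is an assumption on my part:

```shell
# Add the Kanister chart repo and pull the latest charts
helm repo add kanister https://charts.kanister.io/
helm repo update

# Create a namespace and deploy the Kanister operator into it
kubectl create namespace kanister
helm install myrelease kanister/kanister-operator \
  --namespace kanister \
  --set image.tag=0.71.0
```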
Okay, so we can go and deploy that; it doesn't take very long at all to spin up. Before we do, though, let's check out some custom resource definitions that get deployed as part of the Helm deploy of Kanister. You can see that there are three (maybe I'm jumping the gun a little, but hopefully you can see them): we have our ActionSets, which are what we use to back up but also to recover; we have our Blueprints, which are there to integrate, or interact, with our database; and we have our Profiles, which are where we're going to store our backups. We create these custom resource definitions so that we can leverage them through the Kubernetes API. If I then quickly run a kubectl get pods in the kanister namespace... okay, good stuff, less than a minute and we're up. We've deployed Argo CD, like I mentioned; we've deployed MinIO and created our kanister bucket, and we're in there already, so we don't need that step. Now we need to run kanctl, so let's clear the screen so we can concentrate. You can see here that we're going to create a new profile. If I do a kubectl get against the full profiles CRD API, you'll see there isn't a profile in our namespace yet, but we want to create one, and this will create the location profile for us to store our backups in that MinIO. I've already set these environment variables with the commands up above, so in theory, unless I've cleared out my cache, which is absolutely possible, I should be able to run all of this. It creates a secret as part of that, using the access key and the secret key, and it creates our profile. If I go back and get those profiles again, in theory there should now be one, called something like s3-profile-6vnv9. Okay, so we've confirmed that we have that profile; I've jumped ahead a little. Notice as well, here in the instructions, there's something I need to change, and maybe anyone on the call wants to help.
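For anyone following along, that profile step with kanctl looks roughly like this, assuming MinIO is reachable in-cluster and the access and secret keys have been exported as environment variables beforehand; the endpoint and bucket name are assumptions for this demo setup:

```shell
# Create an s3compliant location profile pointing at the in-cluster MinIO.
# The endpoint below assumes MinIO's in-cluster service DNS; adjust to yours.
kanctl create profile s3compliant \
  --namespace kanister \
  --access-key "${MINIO_ACCESS_KEY}" \
  --secret-key "${MINIO_SECRET_KEY}" \
  --bucket kanister \
  --endpoint http://minio.minio.svc.cluster.local:9000 \
  --region us-east-1

# Confirm it exists; the full CRD name avoids clashes with other 'profile' kinds
kubectl get profiles.cr.kanister.io -n kanister
```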
Feel free to dive in and contribute that fix to my repo. If I do a kubectl get profile, I think it's "profiles", and then -n kanister... the reason I had to use the whole CRD API string is that other applications can also use a "profile" resource. I just got rid of the application that was using it; it was actually Kasten, which also uses profiles, so there would have been two profile kinds available, and the short name would have come back and said that doesn't work. Okay, so next up we want to create a Kanister Blueprint. Kanister uses Blueprints to define these database-specific actions; in our instance we're going to be deploying a MySQL application, so I want to create that Blueprint based on the stable one we have in the GitHub repo. We can check that in our environment and make sure it has been created; no spoilers here, it's definitely going to be there, because it just said "created". Okay, now we're going to flip to Argo CD to deploy our app, and we're going to create a namespace called mysql. Actually, I'm not going to do that manually, because we can do it as part of the app deployment. We create, in Argo, a new project: MySQL app, project name mysql, using the default project. So first I need to jump into here, take that, jump into here, and we're creating a new app. We're going to call it, what did it say, mysql; I've kept it quite simple. Project name mysql, namespace mysql, super simple, right? The mysql project is going to be default, and we're going to use a sync policy of manual. In your own testing you might want to make this automatic, so any new changes can automatically be rolled out within your environment. We haven't created that namespace, so we want to tick that box; the instructions state to create it, but I'm going off-piste a little. The repository URL is my argocd-kanister repo, and I think everything lives in base. Yep, everything lives in base, so back to that.
Sometimes, if you've ever seen me do this demo before, we could use base, or we could just say "everything in our whole Git repository": we could just put dot, and that goes recursively through everything; that might save someone a couple of seconds going through the documentation. Cluster URL is the default cluster URL, plus our mysql namespace. We could also set directory recurse here to go through everything, but before we do that, let me go back down here and clear this, and show that I have... okay, no mysql namespace yet, sorry. Let's do a get pods against mysql; that might fail because there isn't one. No, we're good at the moment. So if I hit create... okay, I now need to go into this, and it's saying it's out of sync, but you can see here that it's going to deploy MySQL: the mysql namespace with a secret called mysql, a service called mysql and a StatefulSet called mysql. If I then want to sync that and start things going, we can just hit sync and then synchronize, and if I go into here we should, in theory, as long as nothing breaks and I haven't done anything wrong... well, good stuff. What you'll see here is that we've actually already kicked off a Kanister PreSync hook, and I'm going to show you what that looks like as well. If we have a look at our Argo now, you suddenly see a load more things that have appeared: we have our kanister-presync service account, we have a Job, we have role-based access, an admin role and an operator. Okay, good, I'm glad that happened. Basically, if we go into base and look at what that kanister-presync Job looks like, you're going to see two annotations: they make this a PreSync hook at sync-wave 2, so that before any sync of anything else happens, we run this kanister-presync job in order to take that backup.
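That hook wiring comes down to two annotations on the Job. A trimmed-down sketch of what such a manifest might look like; the service account name, image and the mounted ActionSet manifest path are assumptions for illustration:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  generateName: kanister-presync-
  annotations:
    argocd.argoproj.io/hook: PreSync        # run before Argo CD syncs the app
    argocd.argoproj.io/sync-wave: "2"       # ordering relative to other hooks
spec:
  template:
    spec:
      serviceAccountName: kanister-presync  # needs RBAC to create ActionSets
      restartPolicy: Never
      containers:
      - name: create-actionset
        image: bitnami/kubectl:latest       # any image with kubectl works
        # ActionSet manifest mounted or baked into the image (illustrative path)
        command: ["kubectl", "create", "-f", "/manifests/backup-actionset.yaml"]
```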
If we run through it, you'll see that it references an image, deploys a container, and runs a few things that look very familiar to a profile and an ActionSet. Notice that we didn't create an ActionSet as part of getting Kanister ready; I'll walk through that, because we're actually going to do it here. What else does it do? It then pushes that data out. So, everything looks good; that pod is just coming up. If I go into here again, we'll see that MySQL is 1/1. Let's go back; that should refresh any second. And if we go into our MinIO browser, there's nothing there, and this might kick me out: there was nothing to back up, because there wasn't actually a MySQL database to back up yet, so our ActionSet would have gone in there and said, there's nothing to back up. So, what's going on here? Let's just hit refresh. Okay, everything's good. So what does it want us to do next? Let's clear that. Our application is now running, version one has been deployed, and we've got a safe kanister-presync job that we know is running, but we don't have any data, because there's no data in our environment right now. Once deployed, we check the service account; that was the "SA" I was thinking of earlier: can the kanister-presync service account create an ActionSet and read the profile we created? We set this up as part of our Job, so I'm hoping it comes back and says yes. We're checking those two permissions, and the answer should be yes for both. Good stuff. Okay, now we're going to actually create some data; think of this as the external data set, where someone is coming through, maybe via a web portal, and submitting some information they want stored in this database. What we're going to do is connect into our pod; this is us simulating wanting, or needing, to be in there. So let's copy this and run it.
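The data-creation step is just a mysql session inside the pod. Something along these lines; the pod name, the password variable and the table schema are assumptions I've sketched to match the pet-clinic idea, not the demo's exact statements:

```shell
# Open a MySQL session in the running pod and insert a pet record
kubectl exec -n mysql -it mysql-0 -- \
  mysql -u root -p"${MYSQL_ROOT_PASSWORD}" -e "
    CREATE DATABASE IF NOT EXISTS test;
    USE test;
    CREATE TABLE IF NOT EXISTS pets (
      name VARCHAR(50), species VARCHAR(50), born YEAR);
    INSERT INTO pets VALUES ('Puffball', 'hamster', 1999);
    SELECT * FROM pets;"
```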
We have Puffball, a hamster that was born in 1999, showing there, and now we can come out of that, because what we want to do next is simulate a code change coming into our repository; by re-syncing your project, you're going to trigger the creation of a backup. Okay, so let's minimize that and sync again, which is not going to change anything in our database, but it is going to go and create that kanister-presync job again, and when this finishes we're going to see something in MinIO. So what else does it say? Oh, that's for the bad bit; we'll do that in a sec. Let's just take a look in here... oh, as if by magic, there are my mysql-backups, and you'll see that if we drill down we've got a date, we've got a time, and we've got that MySQL dump. Now, we could take that and go and restore it into any other MySQL, and that's another session for another time: it gives us the ability to be quite fluid in where we take that data. Obviously what I'm using is very insecure and not best practice, for the sake of time and demo, but you can see here that everything's good; we've synced, and everything is okay. So, back in here, and hopefully everyone's still following along with my sporadic changes of windows: let's imagine you create a mysql-client app in your code which is going to drop your database. That's a mistake, but mistakes happen. So I want to create this pod; in fact, here's one I made earlier. Now, if I just go and drop that in there, and I do a git add, and a git commit saying "adding a bad change", and a git push... I don't think this will show you anything. No, good stuff. And then, if we go back in here and back to our base, we should see "added a bad change", and in here we should see our mysql-client. Okay, good stuff. Now, because we have it set to manual, if we press refresh it's going to say we're out of sync, because we're missing this mysql-client pod. So if we now hit Sync and
Synchronize, and go back to our environment, clear, and do a k get pods -n mysql, we can put a watch on there, because, remember, we're creating that client, but we're also going to take a backup first. We are going to take a backup before we deploy our new mysql-client, covering us; think about this as going from version one to version two, where some of our code is going to manipulate our database, our actual core data service. When that's done, we want to jump back in, and remember, I showed you as part of that mysql-client that it's going to drop the test database that we created (I should be more original with how I name these databases as part of the demo). I'm being impatient, but I think that's because there's still stuff going on. Let's just check; let's go back in there. Okay, there are the two, so we've got one backup that went before and one that went after. Okay, but let's go and check whether we have any data in our database, or actually, did the pod just delete itself straight away? Yeah, it was purely there to be malicious. So let's now jump into our mysql-1, or mysql-0, sorry, the pod that we have, and simply go and check our databases: show databases. Oh, and now our test database is no longer here, and, oh, the horror, the sync has deleted the database. But fortunately Kanister protected your database in the execution of the pre-sync. Okay, so what have we got left? Ten minutes. Ten minutes to get the data back. So, first of all: the sync above represents a simple change in code that could affect our data, and at this stage the bad mysql-client YAML should be removed or configured correctly before continuing with the restore process. This is really me saying that this could be a ConfigMap, this could be anything, but whatever you've just
committed in terms of code has manipulated your data and caused some bad stuff. So what we want to do is drag that out of there and put it back where it was. Let's exit out of here again, do a git add and a git commit -m saying "removing bad mysql client", and then push that up. Good stuff; it did it before it timed out on the last bit. Okay, so before we restore, we should check that we've got those restore points. We looked at them inside MinIO, inside our object storage, but actually let's take a look at the ActionSets stored within the data... sorry, within the kanister namespace. Okay, so we have two; they look very different from those in terms of naming, because we want to be able to name and differentiate when these were taken. So, when we have the list, we choose the correct ActionSet to restore from. I want to go back to this one, because that one's the latest that we know was taken before the code change that manipulated our data. So let's run kanctl; let's set the namespace. I'm going to cheat a little bit and just go and find what that ActionSet is called again, which is this one; copy that and drop it in there. So, what are we doing with kanctl? kanctl is used to create your Profiles, in terms of where you're going to store your backups, but it's also used to create your ActionSets. Now, if we weren't using our Argo CD pre-sync hook to create our ActionSet, we would be able to use kanctl to create it here, but you'll see here that the action is a restore, not a backup, from our backup ActionSet. So we can hit enter here, and we can see that it was created. And if we now run (how lazy can I be?) kubectl --namespace kanister describe actionset, and describe that restore ActionSet... you see now that we want to go and pull this ActionSet, because we created a new one called... I've missed the "r" off the name there; need that; right, paste.
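For reference, running kanctl --namespace kanister create actionset --action restore --from with the name of a backup ActionSet generates a new ActionSet custom resource, roughly shaped like the sketch below. The names, Blueprint, Profile, and artifact path here are illustrative assumptions, not the demo's exact values:

```yaml
apiVersion: cr.kanister.io/v1alpha1
kind: ActionSet
metadata:
  name: restore-backup-abc12-xk9f2   # kanctl derives this from the source ActionSet's name
  namespace: kanister
spec:
  actions:
  - name: restore
    blueprint: mysql-blueprint       # same hypothetical Blueprint the backup used
    object:                          # the application resource being restored
      kind: StatefulSet
      name: mysql
      namespace: mysql
    profile:                         # object-store Profile created earlier with kanctl
      name: s3-profile
      namespace: kanister
    artifacts:                       # input artifacts carried over from the backup action
      mysqlBackup:
        keyValue:
          path: mysql-backups/example-timestamp/mysql-dump.sql.gz   # hypothetical dump location
```

The --from flag is what copies the backup's output artifacts into the restore's input artifacts, which is how the restore phase knows which dump in object storage to pull back.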
Okay, so now what this is going to do is give us a description of everything that's happened: we executed the restore action, we executed it from the blob store, meaning object storage, we completed that restore phase, and then we updated the ActionSet's restore status to complete. Okay, that seems pretty good, so let's now go back into our pod, and, as if by magic, we've restored our mission-critical database. Jump back in and run that same simple command again; we don't need to create anything, I just want to show it. There you go; there's another change that needs doing. So we're going to connect to MySQL and show databases, and you see here that we've got test. Let's just make sure that our little hamster is there. I didn't select the database: use test. Then select * from pets. Good stuff; Diane will be happy that we now have our data back in our database. And then, if we go over to Argo, remember we took away this mysql-client, so if we go and sync that now, in theory, well, it won't be deployed again, but a new backup will take place. If we come back into our base, you'll see that our mysql-client has gone; we made that change; you see "removing the bad mysql client". So, back in here, we should be seeing that another Kanister pre-sync will take place, and we should have another backup in here eventually. Yeah, all good. And then version two, or version three, has now been deployed, and we're back in business. But we've also made sure that the operations teams can still use ActionSets to set their cron jobs to go off at 8 p.m. every night if they want, or every hour; and now we're also enabling our developers, our application owners, and our DevOps teams to automate and integrate into their CD pipelines the ability to protect their workloads as well. If the version gets updated, we want to take a backup, and that doesn't stop the operations team being able to do the same thing. Okay, just
on time. So, let me see, are there any questions at all? Sorry if the screen share was glitchy; I have done a recording of this, so I'll make sure that I have that available as well as part of the README. I see some people are all good and some people are having issues with audio, so I'm just seeing if there are any questions. Yeah, any questions, anyone? Nobody? Maybe I said it too quickly. I think you were just very thorough; I think you did a great job. All right, well, there are no questions, Michael, so if there's anything you have to wrap up with, I'll let you do that, and then we can give everybody a couple of minutes back. The recording will be on YouTube, at the link on the YouTube playlist, sometime this afternoon after this ends, and you'll be able to find it through your registration link as well. Yeah, so, anything else? Yeah, just to finish up, hopefully I'm sharing my screen again. Kanister is obviously a big focus for us from an open source perspective; some of these features are already in, and I just want to highlight these three big features. File store destinations: I mentioned NFS, and maybe I just blew the cover off that, but that's one of the destinations, as well as others. Also, being able to encrypt, dedupe, and compress those backups using another binary called kando. And then improving our Kanister Functions to manage certain data services, or data service operators, such as K8ssandra from DataStax and their open source initiative. Again, this is a big shout-out: this is a community effort. Anyone that wants to take a look, ask questions, learn more, or contribute is very welcome; go and take a look, see what you think, and ask the questions. And I think, in closing, please take a look at the project; I think there's some really cool stuff in there. This is just a little bit of the puzzle, like I said; we could go into a Kanister overview, which we've done in the
past. There are some other areas around protecting things like AWS RDS, or data services outside the cluster. The way in which we make this better is feedback from the community, and contributions, so spread the word. The focus here is that it's an open source framework for application-level data management, specifically on Kubernetes. So, yeah, I think with that, that's probably a good place to finish it, Libby. All right, well, thank you so much, Michael, and thank you everyone for joining us. Again, you will find the recording later today along with the slide deck, so just look for that, and if you have any questions, join us on Slack or reach out directly. We'll see you next time. Thanks, everybody.