Hi. This is your host, Swapnil Bhartia, and welcome to another episode of TFiR. Let's see. CloudCasa has announced the launch of CloudCasa for Velero, a new offering to manage and run backups at enterprise scale. To discuss this launch and see it in action, we have two guests from CloudCasa by Catalogic: Swanand Shinde, Senior Director of Engineering, and Sebastian Glaub, Cloud Architect. Swanand, Sebastian, it's great to have you both on the show. Hi. Thanks for having us. Thanks for having us, yeah. Talk a bit about what CloudCasa for Velero is and what specific problem you folks are trying to solve for customers. OK, so CloudCasa for Velero allows Velero users to manage their clusters using our software-as-a-service platform. By plugging directly into Velero, CloudCasa provides a user-friendly, intuitive UI for Kubernetes cluster management, utilizing Velero as the backup tool. That means Velero users are no longer required to use the CloudCasa agent for backups and restores; they can continue to use Velero while benefiting from our solution. So what are the benefits? First of all, users get multi-cluster and multi-cloud management in one place. They don't have to jump across different environments and access different clusters to see what's going on. They can simply go to CloudCasa and manage everything from there. They also get real-time job monitoring. Whenever they run a backup or a restore, they will see it right away in our UI. They can click on the backup job and see live progress there. The UI itself is user-friendly and very intuitive. When you define a backup, you just go step by step. Same with a restore. We also allow Velero users to do advanced cloud recovery. We can take a Velero backup and restore it to the cloud, creating a cluster, installing Velero, setting up everything, and basically, in a couple of minutes, giving the user a fully working environment.
Velero users also get better compliance and governance with failure alerting and reporting. We designed an alerting system: whenever a backup or a restore fails, or there is a broken connection, we'll create an alert in our UI, but users can also specify email alerts. So whenever something goes wrong with a backup, they will get an email so they can troubleshoot. And that leads to my last but not least benefit, which is faster troubleshooting because of a central log collection system. Whenever a user adds a cluster to CloudCasa, they will see all their backup jobs and all their restores. They can simply click on a backup or restore job, there will be a download-logs option, and they can download a CSV file. So they don't have to access the storage or access the cluster to get the logs; they can simply get them from the UI. And the best part of all these benefits is that Velero users get them without worrying about infrastructure management, without causing any disruptions, and without having to undergo any migration process. So yeah, these are the key benefits in my opinion. Excellent, thanks for talking about that. Now it's time for us to see some action there. Swanand, please go ahead and show us the multi-cluster manager for Velero. Brilliant, thank you very much. Very excited to show this new feature where CloudCasa is now going to work with existing Velero deployments. The interface hasn't changed much in terms of registering a cluster with CloudCasa. So if you have a Kubernetes cluster that's running a Velero deployment at present, it's very simple to add such a cluster to CloudCasa. You just go to Configuration, Clusters. I have already added one of the clusters that I have; it's already registered over here. But registering a new one is equally simple. You just give it some name.
If you want CloudCasa to manage the Velero deployment on this particular cluster, you select this option; otherwise you just leave it as it is, and then you register. It is as simple as that. You do get a command which you have to execute on the client Kubernetes cluster, and that's it. It will be registered just like any other cluster. So for folks who have been using CloudCasa so far, registering a cluster which has a Velero deployment is virtually no different from what you were doing earlier. Once that is done, you will see your cluster go into the active state, and there is a bunch of information that you can see about your cluster. But before that, if I go to the clusters page, you can see that there are two clusters. If I install the agent, you'll see a lot of information about it. But even through this page, if you just have a look, you're already told what version of Velero is running and what the Kubernetes version is. And if you happen to click one of these clusters, you will be given some more information. You will be told what namespace your Velero is running in, the various backups that you have run using your Velero deployment, a list of restores, and a list of recovery points. Now, if you look at the concept in Velero, you have to define a backup custom resource in order to run a backup. But CloudCasa has tried to simplify that a little bit. When you define a backup, CloudCasa treats it as defining the scope of the backup, so it is called a backup definition. You can take that backup definition and run a backup based on it any number of times. Even though Velero doesn't allow that, CloudCasa is intelligent enough to interpret your runs of the same backup. So what you see here is a simple list of all the backup definitions that you have.
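For reference, this is what the backup custom resource mentioned above looks like in plain Velero. A minimal sketch; the resource name and target namespace here are hypothetical, not values from the demo:

```yaml
# A minimal Velero Backup custom resource, sketched for illustration.
# Names are hypothetical; CloudCasa's "backup definition" corresponds
# roughly to the scope expressed in a spec like this.
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: simple-backup-for-demo
  namespace: velero          # the namespace Velero is installed in
spec:
  includedNamespaces:
    - my-app                 # scope: which namespaces to protect
  snapshotVolumes: true      # snapshot persistent volumes
  storageLocation: default   # backup storage location (BSL) to write to
```

In plain Velero, a Backup object represents a single run, so re-running the same scope means creating a new Backup resource with a new name each time; that is the repetition CloudCasa's reusable backup definitions are meant to remove.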
But you can also see a list of all recovery points, which means if you had run 10 backups, you would see those 10 recovery points listed over here, along with a short summary of each backup. So in this example, you can see that this backup ran, that it was successful, and the start and end times. What's most important is the settings feature. You don't have to go to every Velero deployment and try to figure out how your Velero server is configured. You can clearly see which features have been enabled for that particular Velero deployment. You can see a list of environment variables. And most importantly, you can see all the Velero plugins that are currently deployed on the server, along with their versions. So, for example, assume you have 10 Kubernetes clusters. Through this mechanism you could very easily see which of those clusters is running Velero 1.8 versus 1.10, which of them are running older versions of the Velero plugins, and so on. So essentially, the bottom line of this entire feature is that if you have 15 clusters, you will have a clear view of how they are configured through a single pane of glass. And then, of course, you could delve further into each one of them and look at its configuration. What's amazing is that you also get a bunch of metrics. For example, in this particular case, it is clearly telling me that all the volumes I have in use are protected using snapshots, so my snapshot coverage is 100%. This is where the value addition happens if you use CloudCasa: it will clearly tell you if you have missed any persistent volumes in the scope of your backup definition, so that you can include them and make sure you are protecting those volumes.
So yeah, this entire demonstration shows how you can manage multiple clusters which are running Velero. By the way, if you are a CloudCasa user, you will see your normal CloudCasa clusters listed over here as well; they just won't have the Velero logo beside them. So again, overall, through a single pane of glass you can seamlessly manage your Velero clusters alongside clusters that have been protected using CloudCasa. Thank you. Now I think it's time to see the demo for creating new backups and restores for Velero custom resources through the user interface. Notice the simplicity with which I can create a new backup, okay? I'll just go to Define Backup and select the namespaces. You can see that all the namespaces on my cluster are listed here automatically; this is the discovery which runs behind the scenes. So I select the namespace which I want to protect. There are a bunch of other configurations which can be used to further refine the scope of the backup. I can either choose to take a snapshot of the persistent volumes or just leave it as it is. Do I include all the cluster-scoped resources? Yes, no, or I'll just leave it on auto. Whatever backup storage locations I have automatically get listed over here. Backup storage locations in Velero are nothing but the targets where your backup needs to be deposited. Right now I have the default BSL. I just click Next. If I have to define any hooks, I'll be able to define them if I need them, and then I just give it a simple name: test simple backup for demo. Right, and I simply click Create and Run. You see that the backup has already started. I can click on this particular job that is running, and it gives me a lot of information about my backup. The activity log tells me how my backup is progressing. The PV details page gives me information about all the persistent volumes which are being snapshotted. Wow, so my backup actually finished.
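As just described, a backup storage location is itself a Velero custom resource pointing at an object-storage target. A minimal sketch; the provider, bucket, and region below are assumptions for illustration, not values from the demo:

```yaml
# A minimal Velero BackupStorageLocation (BSL), sketched for illustration.
# Provider, bucket, and region are assumed values.
apiVersion: velero.io/v1
kind: BackupStorageLocation
metadata:
  name: default
  namespace: velero
spec:
  provider: aws              # object-storage plugin to use
  default: true              # make this the default BSL
  objectStorage:
    bucket: my-velero-backups
  config:
    region: us-east-1
```

The BSLs that CloudCasa lists in the backup wizard are discovered from resources like this on the cluster.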
And if I go to my cluster, I can verify that this backup has actually happened. You can see this backup CR which has been created as simple backup for demo. I can get more information about it by describing the CR: the various things that are in the status, showing me that it's completed, and so on. But isn't this a much better way of visualizing your backup jobs? The story doesn't end here. If something goes wrong, okay. By the way, look at the activity log and the amount of information that is automatically being populated from the backup. It tells you the number of resources that have been backed up, the number of namespaces that have been protected, and the number of CSI PVs that have been protected. It tells you which volume has been snapshotted, and also whether the snapshot is really complete or still in an incomplete state, and so on. And the best part, I mean, all of us use this feature the most: if something has gone wrong and I want to see what went wrong, it's a single click to download the logs of this particular job. They get downloaded as a CSV file and you can simply filter the log messages; you don't have to execute any other commands. And yeah, this makes debugging and triaging very simple. So this concludes the demo of defining a backup. We can quickly go to the restores. Defining a restore is no different, right? You just select the restore, then select the recovery point which needs to be used. You can avail yourself of the various other configuration options and select a namespace. Okay, this namespace was there in my recovery point. I will leave the rest of the options at their defaults. By the way, you can also restore to another cluster, and that's going to be covered by Sebastian after this. But for the sake of this demonstration, I'm just going to restore to the same cluster from which I had taken the backup. What I'll do is try and rename the namespace.
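Under the hood, a restore like this is a Velero Restore resource, and renaming a namespace during restore is done with a namespace mapping. A sketch with hypothetical names:

```yaml
# A Velero Restore resource that renames a namespace on restore.
# Sketched for illustration; backup and namespace names are hypothetical.
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: demo-restore
  namespace: velero
spec:
  backupName: simple-backup-for-demo   # recovery point to restore from
  namespaceMapping:
    source-namespace: target-namespace # restore contents under a new name
  preserveNodePorts: true              # keep service node ports, if desired
```

The CloudCasa restore wizard fills in a spec like this from the options chosen in the UI.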
So the earlier namespace was test CSI snapshot; let me rename the contents of this particular namespace to demo restore. Again, if you want to preserve node ports, there are a bunch of other pieces of configuration which you can leverage. Just give it a name, and that's it. Your restore should start. Excellent, thanks. Now it's time for us to see the easy button for ad hoc backup runs in Velero. Right, so you see that for this particular cluster we had earlier defined a simple backup, right? This one. Now, if I have to run this backup again, it is as simple as going to Actions and clicking Run Now. You just run it and your job should start. All right, so the job has started and the backup is progressing. Again, if you had to do this through Velero, you would have to create a new backup CR and specify the scope of the backup all over again. But all of that has been simplified if you're using CloudCasa. As Swapnil already said, it's just about choosing the backup and deciding to run it now. Yeah, so that concludes the demonstration of the Run Now option as well. Sebastian, Swanand, thank you so much for taking time out today and showing these demos to us. And as usual, I would love to have you folks back on the show. Thank you. Thank you very much. Thank you, Swapnil.