Hello, good morning. My name is Murali Balcha, and I'm the CTO of Trilio Data. Our product is called TrilioVault, and we'll discuss how TrilioVault can be leveraged as an enabler for the hybrid cloud. Let's start with what OpenStack deployments typically look like at our customer sites. Most of the time, customers deploy multiple versions of OpenStack, because that's how the journey goes with OpenStack. They may have Juno, Kilo, or Liberty in production, and then they want to upgrade to a newer version. So they set up the new OpenStack version, test it out, and then gradually do a rolling upgrade, or gradually fail over the workloads onto the new OpenStack. That is the OpenStack journey; that is how you deploy and manage OpenStack. So when we talk about hybrid cloud, the hybrid cloud concept and its challenges hit much closer to home than the usual private-cloud-versus-public-cloud picture. The challenge here is the same one you face when tying a private cloud to a public cloud: we should have the ability to take a workload that is running on a Kilo-based OpenStack and restore it to a Newton-based OpenStack. That is how the journey goes. So we'll talk about how we can leverage TrilioVault to enable this kind of use case, and in the future we are also thinking about how we can integrate AWS or other public clouds. The common questions customers ask are, obviously: how do I migrate workloads running on an older version of OpenStack to a newer version? Or, with two different OpenStack clouds set up at two different geolocations, how do I fail over, or how do I recover a workload running at one site to the other site? Or how do I fail over a complete tenant from one cloud to a different cloud?
So the underpinning, essentially, is that they are looking for workload migration between clouds, whether between the same kind of OpenStack clouds or between two different kinds of clouds. When we step back and ask what a typical workload is: a workload can be multi-VM. You have multiple VMs, they have some network connectivity, each instance has some security groups applied, and there is some persistent storage attached. That is the typical definition of a workload. You should be able to capture this workload, and then you should have the ability to move it between any clouds in your deployment. Customers want to do this for various purposes: it could be a DR scenario, test/dev purposes, archiving, or some other use case they want to realize as part of their application lifecycle. And typically they want to use cloud storage that is accessible to all the clouds in their deployment. Whether it's between OpenStack clouds or between two different kinds of clouds, they should be able to recreate the workload on the target cloud. For example, if they were going from OpenStack to AWS, or vice versa, they would need to reconstruct all the resource types captured in the backup and recreate them on AWS. In other words, they should be able to translate OpenStack resource types to AWS resource types and recreate the workload on the target cloud. So how do we accomplish all of this? Briefly, what is Trilio? Trilio is data protection for your OpenStack. Just like any other service, such as Nova or the networking service, Trilio is an add-on service to OpenStack.
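The workload definition just described, multiple VMs plus their networks, security groups, and volumes, can be written down as a small data structure. This is an illustrative sketch with hypothetical names, not Trilio's actual backup format:

```python
# Illustrative sketch of a multi-VM workload blueprint (hypothetical names):
# several VMs, their network connectivity, security groups, and volumes.
workload = {
    "name": "web-app",
    "vms": [
        {
            "name": "web-1",
            "flavor": "m1.small",
            "networks": ["frontend-net"],
            "security_groups": ["web-sg"],
            "volumes": [{"name": "web-1-data", "size_gb": 20}],
        },
        {
            "name": "db-1",
            "flavor": "m1.large",
            "networks": ["backend-net"],
            "security_groups": ["db-sg"],
            "volumes": [{"name": "db-1-data", "size_gb": 100}],
        },
    ],
}

# Capturing the workload means persisting both the data (volume contents)
# and this metadata, so the whole thing can be recreated on another cloud.
print(len(workload["vms"]))  # 2
```

The point is that a backup which captures only disk contents is not enough to move a workload; the metadata above is what lets a target cloud rebuild the topology.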
So it comes with a Python wrapper for the CLI, we built a RESTful API from the ground up, and it registers with Keystone. It also appears in the Horizon dashboard: we surface a tab called Backups, and it's completely tenant-driven. Now, what do we do as part of data protection? Unlike other data protection or backup solutions, we capture the entire blueprint of your application, including the VM images, the network settings, the Cinder volumes, and everything that goes with your workload definition. And our backup captures are very efficient. The first time, we capture a full backup of your entire application environment, and after that we only capture incrementals; we are forever incremental. Once you capture the full backup at T0, we only capture incrementals going forward. How do we store the backup images? Our backup images are stored as QCOW2 images. As most of you know, QCOW2 is the KVM virtual machine image format. All our backup images are stored as QCOW2 for a couple of reasons. One is that QCOW2 is space efficient: any time you have zeros in your backup images, it discards them and stores the images in a storage-efficient way. The other is incrementals: a snapshot can be represented within QCOW2 as an overlay file, so we build a chain of the base image plus the incrementals. The nice thing about this is that every incremental is an overlay file on the previous backup, so all our backup images are fully formed and I can access any point in time without moving data. For example, if I want to restore a single file from a backup image, I can do that. So our backup capture is very efficient, and the restore process is also very efficient. And we support various backup targets.
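The overlay mechanism above can be sketched in a few lines of Python. This is a simplified toy model, not Trilio code: each incremental records only the blocks that changed, and reading a block at a given point in time walks the chain back toward the base, much like QEMU resolves unallocated clusters through a QCOW2 backing chain.

```python
class BackupChain:
    """Toy model of a QCOW2-style backing chain: one full backup plus
    forever-incremental overlays, each storing only changed blocks."""

    def __init__(self, full_backup):
        # layer 0 is the full backup: {block_index: data}
        self.layers = [dict(full_backup)]

    def add_incremental(self, changed_blocks):
        # a new overlay holds only the blocks changed since the last layer
        self.layers.append(dict(changed_blocks))

    def read(self, block, point_in_time):
        # resolve a block at a point in time by walking the chain from the
        # newest relevant overlay down to the base image
        for layer in reversed(self.layers[: point_in_time + 1]):
            if block in layer:
                return layer[block]
        return b"\x00"  # unwritten blocks read as zeros (sparse)


chain = BackupChain({0: b"AAAA", 1: b"BBBB"})   # T0: full backup
chain.add_incremental({1: b"bbbb"})             # T1: block 1 changed
chain.add_incremental({2: b"CCCC"})             # T2: block 2 written

print(chain.read(1, 0))  # b'BBBB'  (block 1 as of the full backup)
print(chain.read(1, 2))  # b'bbbb'  (block 1 at the latest point in time)
print(chain.read(0, 2))  # b'AAAA'  (unchanged, resolved from the base)
```

This is why every point in time is "fully formed": no incremental ever has to be merged back into the full backup before a restore can read it.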
We support NFS, we support Swift natively, we support the Ceph object store, and we are working on adding S3 as a backup target for our data protection service. Now, for the demo, we have two OpenStack clouds: one is the production cloud and the other is the DR cloud. I configured OpenStack Swift as the backup target, so all the backups, both the fulls and the incrementals, are stored on Swift. What I'm going to do is import the entire backup image, including all the points in time, into the new cloud and then restore it there with just a click of a button. So let me start the demo. In this case, we are creating a backup job, and this backup job includes five VMs. We have various instances here, and on the Backups tab we create a backup job with those five VMs. Each of these VMs has a different flavor and configuration: some of them boot from a Cinder volume, some boot from ephemeral storage, and they have different network settings and security groups assigned. Every time we capture, we capture the entire application blueprint, so the backup store includes not only the data but also all the metadata that goes with the workload. Now I'm going to show you how the objects are stored on the Swift storage. This is the Swift storage; essentially, we use Swift efficiently to store our backup images. Just browsing the Swift storage by itself doesn't tell you much, because we split the objects into smaller chunks to manage the backup store efficiently. Okay, so at this point we have a five-VM workload saved on Swift. The next step is to transfer this workload to your second cloud and restore it. For that, we run a CLI command.
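The chunking just mentioned can be illustrated with a small sketch. This is my own illustration with hypothetical object names, not Trilio's code: a large backup object is split into fixed-size segments, in the spirit of Swift's large-object support, which stores a big file as many smaller segment objects plus a manifest listing them in order.

```python
SEGMENT_SIZE = 4  # bytes, for illustration; real deployments use MB-GB segments

def split_into_segments(name, data, segment_size=SEGMENT_SIZE):
    """Split one backup object into Swift-style segment objects.
    Returns (manifest, segments); the manifest lists segment names in order."""
    segments = {}
    for i in range(0, len(data), segment_size):
        seg_name = f"{name}/{i // segment_size:08d}"
        segments[seg_name] = data[i : i + segment_size]
    manifest = sorted(segments)
    return manifest, segments

def reassemble(manifest, segments):
    """Rebuild the original object by concatenating segments in manifest order."""
    return b"".join(segments[seg] for seg in manifest)

manifest, segments = split_into_segments("workload_1/snap_0", b"full-backup-bytes")
print(len(segments))                                       # 5 segment objects
print(reassemble(manifest, segments) == b"full-backup-bytes")  # True
```

This is also why raw browsing of the container looks meaningless: you see segment objects, not whole backup images, and only the manifest ties them together.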
The CLI command essentially takes the backup image that is on Swift and reassigns it to a new tenant on the new cloud. First, it discovers all the backup jobs on the backup store, in this case Swift. Then we run the command to transfer ownership of the backup image from tenant one on cloud one to tenant two on cloud two. Now the backup on Swift that belonged to tenant one on cloud one has been successfully transferred to tenant two on cloud two. Let's log in as tenant two on the second cloud. Go to the Backups tab, and you see that backup imported into this cloud, with the full history of the backups that were taken on cloud one. Now we go to the restore operation, and here we need to map the resources captured in the backup to the target side: the target may have a different network type, it may have a different volume type. Once you map all the resources captured during the backup to the target cloud, you just say restore, and it orchestrates the whole thing, restoring all the volumes, all the VMs, all the networking, all the IP addresses. At the end of the operation, you have your entire application up and running on a different cloud. Okay, so you have the entire workload restored onto the other cloud. This is not a simple migration tool; think of it as a way to realize your hybrid cloud. You have multiple clouds in your environment, and you have different workloads. You use one object store, here it's Swift, as your backup store; that is where you persist all your backups. And then you have the flexibility to take any point in time and restore it to a different cloud. It could be for DR purposes, it could be for test, it could be for other purposes.
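The resource mapping step in that restore can be sketched as follows. This is a simplified illustration with made-up names, not Trilio's actual API: before anything is recreated, the captured blueprint is rewritten with a per-resource-kind mapping from source-cloud names to target-cloud names.

```python
def map_blueprint(blueprint, mapping):
    """Rewrite a captured workload blueprint for the target cloud.
    `mapping` is {resource_kind: {source_name: target_name}}; any resource
    without an entry keeps its original name."""
    restored = []
    for vm in blueprint:
        restored.append({
            "name": vm["name"],
            "network": mapping.get("network", {}).get(vm["network"], vm["network"]),
            "volume_type": mapping.get("volume_type", {}).get(
                vm["volume_type"], vm["volume_type"]
            ),
        })
    return restored

# Blueprint captured on cloud one (hypothetical names)
blueprint = [
    {"name": "web-1", "network": "prod-net", "volume_type": "lvm"},
    {"name": "db-1",  "network": "prod-net", "volume_type": "lvm"},
]
# How tenant two on cloud two wants those resources mapped
mapping = {
    "network": {"prod-net": "dr-net"},
    "volume_type": {"lvm": "ceph"},
}
print(map_blueprint(blueprint, mapping)[0]["network"])  # dr-net
```

Only after this translation does the orchestration recreate volumes, VMs, networking, and IP addresses on the target cloud, which is what makes the same mechanism work across clouds with different storage and network backends.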
So think about our solution as an enabler for the hybrid cloud in your deployments. That's all I've got. Thank you. Any questions?