Hi, good afternoon, everyone. My name is Aishad Galore. I work for Huawei, and I will be talking about crossing the hybrid cloud with Karbor. So let's start.

First, a bit about hybrid cloud and hybrid cloud migration. A hybrid cloud is basically defined as any combination of on-prem infrastructure with some service that you run on a public cloud. Even if you're just using Gmail in your business, that's already officially considered a hybrid cloud. So where do we find hybrid clouds? We find them in traditional data centers, in the private cloud, and in the public cloud.

And what are we using hybrid clouds for? We use them for services: external computation, for example machine learning with Spark, text-to-speech like Amazon Polly, external email like Gmail or Office 365, or deep learning like IBM Watson. We consume hybrid cloud in platforms: platform services like Cloud Foundry or AWS Elastic Beanstalk, orchestration engines like Kubernetes, and storage services in the cloud, for example external object stores like AWS S3, managed database services like AWS RDS, Google Bigtable, or Spanner, cold storage like AWS Glacier, and so on. And we have infrastructure: compute, network, storage, the classic infrastructure as a service. We use infrastructure in use cases like cloud bursting, where the on-prem side can start consuming services from the cloud when there aren't enough local resources; for disaster recovery plans; or side by side, private and public together, to increase the uplink to the internet or just to save costs.

So let's look at the current state of workloads. The world now basically divides into two paradigms. On one side is the traditional one, where you have bare metal or virtual machines in the VMware style. This is characterized by having everything managed by a dedicated operations team that takes care of what we call pets: very specific machines with names, static and constant IP addresses, and much smaller numbers of them. Traditional workloads are monolithic: you have one operating system, and inside you have many services, like Java Enterprise Beans together with a database and additional batch processes, all packaged together in one image running inside the same VM or bare-metal machine.

On the other side, we have cloud-native applications, the new kid on the block. Workloads are containerized, in Docker for example. They are dynamically managed and orchestrated as cattle, versus the pets we see in the traditional paradigm. And they are heavily oriented to microservices, which are decoupled and stateless and basically less affected when something goes down.

So, extending to the public cloud: let's say we are an enterprise with some legacy software. We basically have three options for moving to a public cloud or a hybrid model of consuming resources. On one side, we have lift and shift: basically, take the stuff, move it, run it on the cloud. At the other extreme, we have redesign: completely rewrite everything, redesign everything as cloud native, and go with Kubernetes and so on. And in the middle ground, we have refactoring, which is the evolutionary approach.
So I'll just touch on what each of these means.

Redesign: redesign the application to be cloud native, write it as microservices, with continuous integration and continuous delivery, containerized, orchestrated, and everything we know from Kubernetes. The problem is that this is a slow process. You need to rewrite all the software. It's very expensive, and it's very intrusive to your business in terms of development processes that need to change, skills that need to change, and the way things are thought about, which has to move to a completely new paradigm. And the risk is very high as well, of course, because you're taking your core business logic and rewriting it.

Refactoring: take just the parts of the application where it makes sense and move them into the cloud. You make the necessary changes, maybe you rewrite, maybe you redesign, but only parts of the system, and some of it you just leave as is. The problem is this takes quite a lot of time. It's also very difficult for some applications, it's expensive, and it's unpredictable. There are many stability issues, because now you're taking something that works and combining two different paradigms, and that never works really well. The risk is high here too.

The third option, lift and shift, maybe makes the most sense if you want to be quick and safe about it: lift your application as is and just deploy it in the cloud. It sounds simple and quick, and it is quick. It sounds like it's low risk, but is it really? Do you really know what's going to happen? Are you sure your application is going to work the same? It sounds cheap, but is it really cheap? It's not certain, because it's inefficient: your application will not be optimized to run on the cloud. You just take it and move it, so things might break, and it definitely will not utilize cloud resources properly. Then costs start to pile up, and businesses move back to on-prem because it doesn't really work as promised.

So, Karbor. Karbor is a Big Tent OpenStack project which we started, I think, a year and a half or two years ago. Karbor basically does three things. First, it exposes a data protection API. It's not backup software; it's a data protection API which provides a framework of plugins that allow you to protect any kind of protectable. A protectable is anything you can think of in your OpenStack application that you want to protect: a VM, a volume, an image, your settings, your network configuration. Each of these can be protected with any provider, where a provider is a solution that maybe you buy from someone, maybe you use an open source one, or maybe you write it yourself. And it can be protected at any bank, where a bank is the place the data is kept; if you're on a private cloud, then the public cloud could be where you protect your data. The third thing is the data protection service itself, which runs this whole workflow.

It's important to mention that, unlike how data is typically protected today, these data protection capabilities are exposed to the tenant and not only to the admin. In Karbor we took this approach: we divided the roles between the tenant and the admin.
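To make that plugin model concrete, here is a minimal sketch of the three roles just described: protectables (what can be protected), protection plugins (how it is protected), and banks (where the protected data is kept). The class and method names below are illustrative assumptions, not Karbor's actual plugin API.

```python
# Illustrative sketch only -- class and method names are hypothetical,
# not Karbor's real interfaces. They mirror the three concepts above.
import abc


class Protectable(abc.ABC):
    """A resource type that can be protected, e.g. a VM, volume, or network."""

    @abc.abstractmethod
    def list_resources(self, context):
        """Enumerate resources of this type in the tenant's project."""

    @abc.abstractmethod
    def get_dependent_resources(self, context, resource):
        """Return resources this one depends on (e.g. the volumes of a VM)."""


class ProtectionPlugin(abc.ABC):
    """How a given resource type is protected by a specific provider."""

    @abc.abstractmethod
    def protect(self, resource, bank_section):
        """Copy the resource's data and metadata into the bank."""

    @abc.abstractmethod
    def restore(self, resource_definition, bank_section):
        """Recreate the resource from what was stored in the bank."""


class BankPlugin(abc.ABC):
    """Where checkpoints live, e.g. an object store such as S3 or Swift."""

    @abc.abstractmethod
    def put_object(self, key, value): ...

    @abc.abstractmethod
    def get_object(self, key): ...
```

A vendor would supply its own ProtectionPlugin implementations, while the bank can be any object store the admin configures.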
We see the admin as the person who actually owns the backup solutions, who pays the subscriptions. They know what they have, and they know what they want to provide to their tenants. The tenant is the one who decides what to back up, what to replicate, what to protect, and where, from the options the admin provides. In many senses, this is just like how the rest of OpenStack works.

Another thing we say is that the data you want to protect is actually bigger than just the data the application is generating. Protecting only the data the application produces is just backup, right? But what about the resources, like the network and the servers? What about the resource metadata, the configuration you are constantly investing in? Data protection today mainly revolves around the application data, but the metadata changes over time. You could say: my approach is, I have a template. I've written a Heat template, and if something breaks, if my system burns down, if a meteor hits my site, I already have backups of my volumes, and I'll just run this Heat template on the other site and everything will work. But that doesn't work, because the system is more complicated than that. There are many changes that maybe were never synchronized between the sites: the versions of the software, the settings, the firewall rule where someone blocked a port and forgot to update the template. Things break; we know it never works. So what you really want is something that adapts, that takes constant notice of all the changes that happen and treats your deployed application as an organism that changes all the time. A small sketch of that idea follows below.

Some highlights for Karbor. As I mentioned, it's completely pluggable; that's the whole purpose. We are not implementing the backup solutions, we are just defining the interfaces. We can protect any OpenStack resource, and it's very versatile: multiple protection providers can coexist in the same deployment. We are not limiting the differentiating options that each protection provider offers, so we are not enforcing a lowest common denominator. Through our plugins, we expose to the tenant all the options that exist in the products implementing the protection itself.

For vendors, the value is standard APIs they can implement to introduce their software into OpenStack very painlessly, as opposed to today, where they have to implement many lines of code across many OpenStack components and keep chasing the interfaces. For operators, it's the possibility of providing a data protection service. We see this as a very substantial service, because cloud providers using OpenStack today are missing services that help new customers onboard easily into the cloud. If the customer is not running cloud-native software, and most of them are not, how do you bring them on? This is something that can make onboarding much easier, because a customer can start with just disaster recovery, using Karbor to protect their on-prem deployment on the public cloud, and later use this same capability to migrate, which I'll cover in the next few slides.
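To illustrate the "application as an organism" point, here is a minimal sketch of capturing the whole resource graph, metadata included, at protect time instead of only the application's volumes. The function names are hypothetical; this is not Karbor's actual implementation, just the shape of the idea.

```python
# A minimal sketch: walk dependencies (VM -> volumes, network -> subnets,
# ...) from the plan's root resources so the checkpoint captures the full
# topology as it exists right now, not as a template written months ago.
def build_resource_graph(root_resources, get_dependencies):
    """Return every resource reachable from the protection plan's roots,
    mapped to its direct dependencies."""
    graph = {}
    stack = list(root_resources)
    while stack:
        resource = stack.pop()
        if resource in graph:
            continue  # already visited
        deps = get_dependencies(resource)
        graph[resource] = deps
        stack.extend(deps)
    return graph
```

Because the graph is rebuilt on every protect operation, configuration drift such as a changed firewall rule is captured in the next checkpoint instead of silently diverging from a static template.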
So, as I mentioned before, we have a few terms in Karbor. A protectable is an item you can protect; this is one kind of plugin. A protection plugin is how you protect it: copy the data, create a template from it, dump it from the database, and so on. And the bank, again, is where you actually put the data.

Take a typical three-tier application, and I'm purposely giving an example of a legacy application and not a cloud-native one: a database tier, an application tier, and a web tier that receives communication from outside. It seems almost trivial to look at this and say, what's the problem? I'll just recreate it every time; I'll just back up the volumes. But what Karbor does is actually look inside and map all the interdependencies between all the components you see there. You have the OpenStack project. You have two web servers, and each web server has a security group and a Linux image. You have three networks, web, database, and application, plus a router. On each of the other networks there is another component: an application server on the application network, a database on the database network. The database has an image, it has volumes, and so on. So there are a lot of protectables, and each of these requires a different kind of protection. You will protect a VM image differently than you will protect a data volume, and you will protect a router differently than you will protect a security group, and so on.

The basic flow is: you start with a protection plan. A protection plan is built from what the admin offers but set up by the tenant, who decides what to protect. Then there is a protect operation, which creates a checkpoint in the bank. A checkpoint is basically the group of all the data that makes up your backup. And there is the restore operation: whenever you want, you go to the Karbor service, point it at the bank where you've been keeping your checkpoints, and you can restore from one of them and rebuild the project. I'll skip these slides, just leaving them up a few seconds so they are captured on the video, and you can look at them afterwards.

OK, jumping to the protection provider. The protection provider looks a bit complicated, but actually it's not. It is the selection the admin makes from all the backup software, storage and volume options, and bank options they have. Let's say they have S3 for backing up your data, and they also have Dropbox or something else. They can decide which to use and build a profile, which we call the protection provider, which is the mix and match of all the options. This is what the tenant eventually selects. You could look at it as a kind of offer, or a plan, like a subscription plan, for what the tenant can actually use.

Now I'll move on to hybrid cloud with Karbor. The main concept: we already have this protection plan that keeps our whole application protected, and we can recover from a checkpoint whenever we want. We get back to the recovery point objective of the last checkpoint we took, and everything is reconstructed as a single snapshot in the new cloud environment. So we can use that to do a migration. Basically, it means we will create a plan, mark the application we want to move, and protect it.
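To make the plan, protect, and restore flow concrete, here is a hedged sketch against Karbor's REST API. The endpoint paths, payload shapes, and port follow the v1 API as I understand it, but treat them as assumptions and check the Karbor API reference for your release; all IDs and the token are placeholders.

```python
# Sketch of the basic Karbor flow: create a plan, protect it into a
# checkpoint, then restore. Endpoints and payloads are assumptions based
# on the v1 API; verify against your deployment's API reference.
import requests

KARBOR = "http://karbor-api:8799/v1/<project_id>"  # placeholder endpoint
HEADERS = {"X-Auth-Token": "<keystone-token>"}

# 1. The tenant creates a protection plan from a provider the admin offers.
plan = requests.post(f"{KARBOR}/plans", headers=HEADERS, json={
    "plan": {
        "name": "three-tier-app",
        "provider_id": "<provider-id>",
        "resources": [
            {"id": "<server-id>", "type": "OS::Nova::Server", "name": "web-1"},
            {"id": "<volume-id>", "type": "OS::Cinder::Volume", "name": "db-data"},
        ],
    },
}).json()["plan"]

# 2. Protect: run the plan, creating a checkpoint in the provider's bank.
checkpoint = requests.post(
    f"{KARBOR}/providers/{plan['provider_id']}/checkpoints",
    headers=HEADERS, json={"checkpoint": {"plan_id": plan["id"]}},
).json()["checkpoint"]

# 3. Restore: rebuild the application from the checkpoint, possibly on
#    another cloud, by pointing at the target cloud's identity endpoint.
requests.post(f"{KARBOR}/restores", headers=HEADERS, json={
    "restore": {
        "provider_id": plan["provider_id"],
        "checkpoint_id": checkpoint["id"],
        "restore_target": "http://target-keystone:5000/v3",
        "parameters": {},
    },
})
```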
We will create a checkpoint, and the checkpoint will keep the data in a bank that is already in the public cloud. It will save all the metadata, all the configuration changes, everything we have put so much effort into stabilizing. And then you just go and restore it in a new place in the public cloud.

So basically, it looks like this. In the first step, the application is on the private cloud; we see it on the left. It's just an application, it doesn't matter which. There is a Karbor service running, and we have a bank which was defined by the admin somewhere. The first step is to use the Karbor API to protect the application in its entirety, including all the settings. This creates a checkpoint in the bank. At this point the application is on the private cloud and the checkpoint is on the public cloud, right? What we need to do now is just restore the application from the checkpoint in the public cloud, using the Karbor service there. Now we have the application running on both, but only the application on the private cloud is actually receiving client requests from the internet. So the last step is to start switching external clients over to the application running on the public cloud, either gradually with a DNS load balancer or by switching them all at once; that is entirely the decision of the person doing the migration. A sketch of this workflow follows below.

So basically, that's it. We are always looking for additional contributors to the Karbor project, so if you want to join us, we are welcoming new contributors. And if there are questions, I will be happy to answer them; if anyone wants to ask, just use the microphone over there. OK. Thank you very much.
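Here is a minimal sketch of the migration workflow just described, assuming the Karbor calls shown earlier are wrapped in a small client. The KarborSite and DnsBalancer interfaces and their method names are hypothetical stand-ins; only the orchestration logic is the point.

```python
# Sketch of the private-to-public cutover: protect on the private cloud,
# restore on the public cloud, then shift traffic. All client classes and
# method names below are hypothetical wrappers, not a real library.
import time
from typing import Protocol


class KarborSite(Protocol):
    def create_checkpoint(self, provider_id: str, plan_id: str) -> str: ...
    def wait_until_available(self, checkpoint_id: str) -> None: ...
    def restore_checkpoint(self, provider_id: str, checkpoint_id: str) -> None: ...


class DnsBalancer(Protocol):
    def set_weights(self, private: int, public: int) -> None: ...


def migrate(plan_id: str, provider_id: str,
            private: KarborSite, public: KarborSite, dns: DnsBalancer) -> None:
    # Step 1: protect the application in its entirety; the bank already
    # lives in the public cloud, so the checkpoint lands there.
    checkpoint_id = private.create_checkpoint(provider_id, plan_id)
    private.wait_until_available(checkpoint_id)

    # Step 2: restore the application on the public cloud from the checkpoint.
    public.restore_checkpoint(provider_id, checkpoint_id)

    # Step 3: shift external clients over gradually (or flip all at once).
    for share in (25, 50, 75, 100):
        dns.set_weights(private=100 - share, public=share)
        time.sleep(300)  # observe error rates before shifting more traffic
```

Whether to shift traffic gradually via weighted DNS or switch everything at once is, as noted in the talk, entirely up to whoever runs the migration.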