Okay. Hello, everyone. I'm Sagi, the Karbor PTL. This is Yuval, one of the leading core contributors for the project. Karbor is an OpenStack project for data protection. We want to solve the problem of application protection: how you protect your cloud application. In this talk we're going to cover what it means to protect a cloud application, how we look at protecting an application, and how we actually do it. So let's talk about why data protection. There are various events that can create the need for it. One of them is a natural disaster, another is hardware failure, but the most common ones are things like power outages and human error, which means someone manually changed something, and that causes you to need to go back to a previous point in time. So let's look at the two main ways of using OpenStack, or any cloud. You've got pets, which are applications you just set up by hand. You have a name for each VM you're using; you know its job, you know its function. For this kind of use case, the classic way of doing application backup works, and I don't think there's a lot of need to go into it. But when you're looking at cattle, which is what OpenStack is really about, there are a lot of questions about why you really need application-level data protection. Because in the classic case, what you do is create a template, like Heat or TOSCA, orchestrate it, and then it's in your cloud and just runs by itself. So why do I need something to back up my application? For one thing, every application has data, something it needs to save somewhere. It has a database, it has file storage, it has object storage. Something in your application needs to be preserved.
Also, if you want to change your application in any way, the way you do it today is you modify it and see that it works, and then you need to change the template. After you change the template, you need to test it, and after it works, you have to hope it will continue to work. What Karbor wants to do is take all of that away. You just modify your application in any way and tell Karbor: this is the new state, this is where I want to save how the application looks and works. From that point on, restore just works automatically. You get the application back as it was, similar to what you would get if you manually maintained templates and saved the data by hand. So let's define data protection. We couldn't find a really good definition, so we made up our own; hope you all agree with it. Data protection is a set of measures taken to ensure data is reliably recoverable on demand. What we really want is not just to get back the bits you had on disk; you need to be able to recover. You need to be able to get your application up and running when you need it up and running, not just look at a hard drive with all the data on it and hope that if your users could connect to it, you would have everything you want. So what we treat as data is not just storage. We look at all the resources you have: not only the disks but the VMs. We look at metadata. We look at how things interact, which means networking, which means your user accounts. It means everything around your application that makes it work. If you look around, you'll see a lot of companies selling a lot of products about getting your application back up, and it's not just hard drives. We want all of that to run, work, and be recoverable under OpenStack. So let's look at an example application.
So I've got the three basic layers: a database, an application layer, and the web tier where people connect. It looks simple. I only have about four VMs, I can scale if I built it correctly, and the network seems quite simple. So what's the problem? If we drill down to what's actually managed by OpenStack, we can see everything is much more complicated. You've got the project that contains all of it: who's the administrator? Who can do what in my cloud? What are the permissions? What's the policy for modifying and changing my application? I've got the VMs, the metadata about the VMs, the security groups they're a part of, the network configurations, the routers, the subnets, everything. It's a whole graph of interconnected things that all need to work properly for the application to be available. We also have protection aspects that are specific to backup. It's not just how my application looks and runs now. It's: if I have a failure, when is it going to be back up and running? How is it going to be brought back up? How far am I going to have to roll back if I ever have a disaster? If my entire site goes up in flames, where is it going to be recovered? Is it going to work? How much is it going to cost me? And if you look at how to solve these problems, there's a lot of diversity, and we all know we're in open source: diversity is good. You can do backup, you can do replication, and you can do a mix of both. You can do differential backups, incremental backups. There are a lot of ways to solve the problem, and each solution is valid for a specific set of requirements. There's no need to limit you to just one thing, to back up your application with a specific vendor in a specific way. You want to be able to take your applications and use the protection method that's correct for you.
So that's what we're trying to solve. Our goals are to be pluggable: what you want to protect, how you want to do it, and where you want to store it should all be completely pluggable. There's no way we're going to force you to use a specific product or vendor, or limit you to protecting only specific resources. We also want to be versatile about the use cases, which means we don't want to address just one specific disaster-recovery, protection, or backup use case. We want you to be able to integrate our solution into your workflow and into your disaster recovery plan. And we want an open architecture, which comes with being part of OpenStack. A lot of solutions exist, but if you're not open, you're not going to integrate properly with the applications. I think it's kind of obvious, but my manager told me to put in this slide: you get a lot out of this. Vendors are able to integrate directly with us, which means that if someone already uses Karbor and a vendor has integrated with Karbor, they get all the benefits of that protection solution directly through us. Because we don't limit use cases, we don't aim at the lowest common denominator: you can get all of a vendor's features through Karbor without limiting yourself to some narrow set of APIs. Operators can now offer tiered backups to their tenants, which means backup strategy is not just the cloud administrator's job. Every tenant can make their own choice about how to protect their own application, and the users themselves, once providers are configured for them, can decide when and how to back up as simple users, not just administrators. Right, so I'll speak a bit about Karbor's components and how we built Karbor in order to answer all these questions. Karbor is based on a pluggable architecture.
It means that we have three types of plugins which you can write in order to extend Karbor and help it cover new areas. The first one is called the protectable plugin. What is a protectable? A protectable is about discovering resources which Karbor can protect. This can be a server, a volume, a share, any resource that Karbor can protect. The protection plugin is about extending how Karbor protects resources and how it restores them. So you can have a protection plugin for how to back up and restore a server, a volume, or a share; for any new protectable that you write, you need to write a matching protection plugin. And you have the bank plugin, which dictates where Karbor puts its data. It's a generalization of an object store: it can be Ceph, it can be Swift, it can be S3, any object storage that you'd like. Karbor's components work together in this basic flow. First you have the protection plan, which is a recipe for creating checkpoints. Checkpoints are stored in the bank and contain all the information sufficient to perform a restore. And a restore represents a running restore process for your application. These three stages actually happen across a large span of time. You may protect your application at one point in time, keep the checkpoint in the bank for a period of time, and only when you need it do you perform a restore. Let's talk a bit about the bank. The bank is a pluggable generalization of an object store. You can extend Karbor by writing new bank plugins in order to dictate how and where Karbor puts its data. It's responsible for where Karbor puts its metadata and usually the backup data as well. We have the checkpoint. A checkpoint is stored in the bank, and it is sufficient for performing a restore. Karbor is responsible for creating this checkpoint in the bank, which contains the data, or maybe references to where the data is located.
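To make the three plugin types concrete, here is a minimal sketch in Python. All class and method names here are invented for illustration; this is not Karbor's real plugin API, just the shape of the contract each plugin type fulfills.

```python
# Hypothetical sketch of Karbor's three plugin types (names invented).

class ProtectablePlugin:
    """Discovers resources of one type that can be protected."""
    resource_type = None

    def list_resources(self):
        raise NotImplementedError


class ProtectionPlugin:
    """Knows how to back up and restore one resource type."""
    def protect(self, resource, bank, checkpoint_id):
        raise NotImplementedError

    def restore(self, bank, checkpoint_id, resource):
        raise NotImplementedError


class BankPlugin:
    """Generalized object store where checkpoints live."""
    def put(self, key, value):
        raise NotImplementedError

    def get(self, key):
        raise NotImplementedError


# An in-memory bank, standing in for a Swift, Ceph, or S3 backend.
class InMemoryBank(BankPlugin):
    def __init__(self):
        self._objects = {}

    def put(self, key, value):
        self._objects[key] = value

    def get(self, key):
        return self._objects[key]
```

The point is the separation of concerns: discovery, protection mechanism, and storage location can each be swapped independently.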
And with this checkpoint, Karbor is able to later restore your application. A restore is an object representing a running, or possibly completed, restore process. From it you can get information on whether the restore completed successfully, for all the types of resources that were restored. So we said that we have a checkpoint stored in the bank, and it holds all the information sufficient for performing a restore inside your deployment. What do we need in order to create this checkpoint, this magic checkpoint which contains all the information? That's the protection plan. It's a recipe for creating a checkpoint. It consists of the resources you want to protect, a provider which dictates how to protect them, and parameters: maybe you want to protect with a specific network configuration, or a different one. As we said, it's a recipe for creating a checkpoint, and it contains resources, which you get by querying protectable types. So if you write a new plugin, for example for protecting a new type of resource, then Karbor can explore and find those resources, add them to a protection plan, and then create a checkpoint containing the data of the new resource type you've written. The protectable plugin is about finding resources. So if you have a type of resource we haven't covered yet, for example Trove, other OpenStack resources, maybe even non-OpenStack resources, you can write a new protectable plugin in order to find them. For example, right now we have protectables for images, so you can explore all the images in your deployment, as well as your servers, your volumes, and your shares. Right. And the protection plugin is maybe one of the most important parts of Karbor. This is where you connect your existing or new data protection software, the actual implementation, into Karbor. You can have your own implementation of volume backup and restore connected to Karbor through this plugin.
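A protection plan, as described above, is essentially plain data: resources, a provider, and parameters. The sketch below shows that structure; the field names and the resource-type strings are illustrative assumptions, not Karbor's exact schema.

```python
# Hypothetical sketch of a protection plan as plain data (names invented).
from dataclasses import dataclass, field

@dataclass
class Resource:
    type: str          # e.g. "OS::Nova::Server" -- illustrative type string
    id: str
    name: str = ""

@dataclass
class ProtectionPlan:
    name: str
    provider_id: str   # which protection provider executes this plan
    resources: list = field(default_factory=list)
    parameters: dict = field(default_factory=dict)  # per-type options

# A plan protecting one server and one image, with a per-type parameter.
plan = ProtectionPlan(
    name="nightly",
    provider_id="os-infra",
    resources=[Resource("OS::Nova::Server", "vm-1"),
               Resource("OS::Glance::Image", "img-1")],
    parameters={"OS::Cinder::Volume": {"backup_mode": "full"}},
)
```

Because the plan is just a recipe, executing it at different times yields different checkpoints, each reflecting the application's state at that moment.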
This plugin, the protection plugin, is about the actual implementation. For example, if you want to protect a volume, right now we have a Cinder backup protection plugin: it takes a volume and backs it up using the Cinder backup mechanism. But you can write new protection plugins using your proprietary or open-source implementations and connect them to Karbor. Protection plugins are also responsible for restoring your data back into your deployment. And after you've written protection plugins, protectable plugins, and bank plugins, in order to find your resources, say exactly how you want to protect and restore them, and say where to protect them to, you compose a protection provider, which includes a plugin for every protectable you've written plus a bank. You can mix and match in order to create your own protection providers: one provider for highly valuable resources, one provider for less valuable resources, which is less costly. Then you have two, three, maybe ten protection providers. You can offer them as tiers to your users and maybe charge money for them. And they can fit different types of resources: more sensitive resources will use one provider; those requiring encryption will use another. Here is an example of one reference provider which we currently have. It includes plugins for Cinder, Glance, Neutron, Nova, and Keystone. Once you have this, you can select resources using Karbor's pluggable architecture, query resources using protectable plugins, and use this provider to protect and restore them. So to sum up, we talked about our goals, and we've now given you a bit about how we realize them in terms of protectables, checkpoints, and the bank. Now we're going to talk about how it looks from a more architectural point of view. If you look at the solutions you have now, you've got backup systems that already exist.
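The mix-and-match idea behind providers can be sketched as follows. This is a toy model with invented names: the plugins are just labels here, where in a real deployment each would wrap an actual backup implementation, and the tier names are made up.

```python
# Hypothetical sketch of protection providers offered as tiers (names invented).

class ProtectionProvider:
    """Bundles one protection plugin per resource type with a bank."""
    def __init__(self, name, plugins, bank):
        self.name = name
        self.plugins = plugins   # {resource_type: plugin}
        self.bank = bank

    def plugin_for(self, resource_type):
        return self.plugins[resource_type]


# A premium tier: replication-based volume protection, expensive bank.
gold = ProtectionProvider(
    name="gold",
    plugins={"OS::Cinder::Volume": "replicated-volume-plugin",
             "OS::Nova::Server": "server-plugin"},
    bank="swift-bank",
)

# A budget tier: plain Cinder-backup-style protection, cheap bank.
bronze = ProtectionProvider(
    name="bronze",
    plugins={"OS::Cinder::Volume": "cinder-backup-plugin"},
    bank="cheap-object-store-bank",
)
```

An operator exposes these as a catalog; a tenant picks the tier whose cost and guarantees fit each application.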
You've got some disaster recovery system that might look at the volumes and the images. From the cloud user's point of view, there are some things that can be backed up: you could say, I want to back up a VM, or a volume of that VM. If you're looking at application-level backups, you've got a lot of solutions, like replication for the database, or getting your file system replicated across sites. You've got a lot of solutions, but they're all managed by different people and are not part of a unified plan. So if you've followed us along, you can see that that's where we want to put ourselves: where all of the different users can configure all of the different backup plans into a single unified protection solution. The way we do it is with our API service, similar to a lot of other OpenStack services. We've got the plans, which is where you configure all this. Plans are site-local, which means they're not pushed into the bank or made part of your backup. Protectables are where you define your resources. It may sound like you always have to write them yourself, but we supply a lot of them by default. The project is young, but the most important OpenStack resources are covered, and you can write your own for your own special application needs. After you invoke something like a backup, it all goes to our protection service. What it does is figure out how your application looks: it uses the protectable plugins to figure out what your VMs are, what the VMs are connected to, how the network is laid out. It builds this graph of how everything fits together, similar to what we showed on a previous slide, and then runs all of the plugins so that the correct backup system is used for each resource. And in the checkpoint we save all of the metadata about how everything is put together and what was saved where. So this is a very simplified view of how it looks. The user asks the protection service.
The protection service runs all of your backup plugins, which drive the backup services you already have. It puts a checkpoint in the bank, which, as we said, is metadata plus some of the data of your application. Once you want to restore, the data is in the bank, and it has all the information about how it was saved, what plugins were used, and where everything was stored. Then a protection service, maybe even on a different site, can get all the data through that information and restore your application as it looked when it was backed up. We're going to watch a short demo of the system in action. Right. What we're going to see right now is a short demo of how Karbor does a cross-site protect and restore of an instance. We're looking at two sites, two different deployments. We create an instance on site A and protect it to the bank, and later on we restore it into the second site. So right now we create a Nova instance. It has been created. Once it is built, I'll start building a protection plan and upload a checkpoint into the bank using that plan. The Nova instance is building right now. It has finished. Now we go into Karbor's dashboard, which includes protection plans. I create a new protection plan. In the protection plan screen, I can choose a protection provider, which is currently the OpenStack infrastructure provider, and I choose the resources. For each resource, I can choose different parameters; right now we'll use the defaults. We choose the instance and the Fedora image to protect, and then we tell Karbor: protect now, protect this plan using the OpenStack infrastructure provider. This plan resides only on this site. The checkpoint is being created right now; it is being protected. And after the checkpoint has been protected, we will see it on the second site.
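The protect-then-restore flow just described can be sketched end to end. Everything below is an invented toy, not Karbor code: a fake volume plugin, a dict-backed bank, and a protection service reduced to two functions. The key idea it demonstrates is that the checkpoint's metadata index in the bank is all a second site needs to drive the restore.

```python
# Hypothetical end-to-end sketch of protect/restore via the bank (invented names).
import json
import uuid

class DictBank:
    """Toy bank shared (conceptually) between two sites."""
    def __init__(self):
        self._objects = {}
    def put(self, key, value):
        self._objects[key] = value
    def get(self, key):
        return self._objects[key]

class FakeVolumePlugin:
    """Pretends to back up a volume by writing its 'data' to the bank."""
    def protect(self, resource, bank, checkpoint_id):
        key = f"checkpoints/{checkpoint_id}/{resource['id']}"
        bank.put(key, f"data-of-{resource['id']}")
        return key
    def restore(self, object_key, bank):
        return bank.get(object_key)

def protect(plan, plugins, bank):
    """Walk the plan's resources, dispatch each to its plugin, index the result."""
    checkpoint_id = str(uuid.uuid4())
    saved = []
    for res in plan["resources"]:
        key = plugins[res["type"]].protect(res, bank, checkpoint_id)
        saved.append({"resource": res, "object_key": key})
    # The index alone is enough to drive a later restore, even on another site.
    bank.put(f"checkpoints/{checkpoint_id}/index",
             json.dumps({"plan": plan["name"], "saved": saved}))
    return checkpoint_id

def restore(checkpoint_id, plugins, bank):
    """Read the checkpoint index from the bank and replay each plugin."""
    index = json.loads(bank.get(f"checkpoints/{checkpoint_id}/index"))
    return [plugins[entry["resource"]["type"]].restore(entry["object_key"], bank)
            for entry in index["saved"]]
```

Note that `restore` takes only the checkpoint id and the bank: the site performing the restore never needs the original plan, which is exactly why plans can stay site-local.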
Right now, we see that there is no instance on the second site. And now we see the checkpoint from the second site, in the protecting state. Because checkpoints are located in the bank, which is available to both sites, both sites see the checkpoint. Now the checkpoint has been protected, and it is fully available on the second site. So we tell Karbor to restore this checkpoint on this specific site. We see the two resources that have been protected, the Nova instance and the Fedora image. We see that we can set different parameters for the restore, maybe a different network configuration. Once the restore has begun, we'll see the image being uploaded and the Nova instance being built. Right, so the restore has been created. We'll see right now that another instance is being created. This is the same instance that we saw on the first site. Here it is. It will complete successfully, and we'll have the same instance and the same image we had running on the first site, running again on the second site. The instance is up and running. An important thing to note, as you've seen, is that we detect the relationships between the different entities. When the user selected the virtual machine, Karbor automatically detected that the image is a dependency of that VM, which means this VM will not run without this image. This also means that if you change this VM, replace the image, or attach a volume, all of that will be automatically detected by Karbor the next time you invoke the plan, and all of it will be backed up. This dependency checking and application-graph building happens on every checkpoint, not just at plan creation, which means everything is as dynamic as you want it to be. If the user had selected the entire project, every VM and everything related to the project would have been automatically backed up, for everything Karbor has plugins for, obviously.
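The dependency detection shown in the demo amounts to a graph traversal: start from the resources the user selected and pull in everything they cannot run without. Here is a minimal sketch of that idea, with an invented function name and a hardcoded dependency map standing in for what the protectable plugins would discover live.

```python
# Hypothetical sketch of dependency expansion at checkpoint time (names invented).

def expand_dependencies(selected, deps):
    """Return the selected resource ids plus everything they depend on,
    transitively. `deps` maps a resource id to the ids it cannot run without."""
    result, stack = set(), list(selected)
    while stack:
        rid = stack.pop()
        if rid in result:
            continue
        result.add(rid)
        stack.extend(deps.get(rid, []))
    return result

# The server depends on its image and an attached volume; picking just the
# server protects all three. Since the map is rebuilt on every checkpoint,
# attaching a new volume later changes the result automatically.
deps = {"server-1": ["image-1", "volume-1"]}
```

Selecting an entire project is the same operation with the project as the root of the graph.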
I think we're ready for questions. Please go to that mic; I was told that's the only way this can go. If there are no questions... No questions? Right, so yeah. Sure. Please go to the mic over there. Sorry, just a quick one: you've got cross-site restore. If you're using Cinder backup as your data protection, how are you doing the cross-site restore on that? We actually spoke about how we're going to do that last summit. For this case, we can use cinder manage to let Cinder know about the volume and then restore from it, just attach it. It doesn't go through Cinder backup: if the volume already exists on the storage, you can add it to the database, for those who know what cinder manage is. Any more questions? Actually, I don't know how much time we have. Which parts do you already cover with plugins for Karbor, and what's next? Which parts are you intending to cover? Right, so currently we have protectable plugins for the resources Karbor can detect: volumes, servers (Nova servers), projects, and images (Glance images). We intend to extend to Manila shares and Trove databases. For protection plugins, we currently have a plugin for backing up and restoring volumes using Cinder backup, plugins for protecting and restoring servers and images, and soon Neutron networks. And a Swift bank plugin. As for what we want to protect next, Yuval gave a fairly complete list of the future protectables. We actually have a working session on it to see what people want; things like Trove and Manila have had a lot of requests already, so they're high on the docket. One of our goals for this summit is to figure out what users really want and need. Things like Neutron, Nova, Cinder, and Glance were obvious; for the rest, it's a bit harder to know which other services are most common. Any more questions?
So just to sum up, we've given a really broad overview of how everything works. There are a lot of inner workings you can go and explore, and you can talk to us either in the working sessions or just in the hallway. We hope you'll all check the project out and come send patches. Thank you.