Hello everyone. Good afternoon. Hola. I want to know how many Spanish speakers are here? Okay, very good. My name is Xiang Xinyong. Me llamo Xiang Xinyong. I'm from Huawei, I have 8 years of experience in storage and data protection, and I have contributed to OpenStack for more than one year. Today my partner and I will introduce this topic about cross-cloud migration of container persistent data.

Hello everyone. My name is Zhu Rong, and I come from 99 Cloud, China. I'm a core contributor of Murano and Solum.

This is the agenda. Firstly, we will introduce what container persistent data is. Secondly, we will introduce when we need container persistent data. Then we will introduce the scenarios and background for migrating container persistent data, and after that, what can be used to migrate it. Next, I will show how to migrate container persistent data, and the last part is the demo. First, my partner Zhu Rong will introduce the basics.

As we know, containers are stateless, but the data is valuable. There are two types of data: ephemeral data and persistent data. Ephemeral data is lost after the container is removed; by contrast, persistent data is still available after the container is removed. We will take the container's union file system as the example of ephemeral data, and the container volume as the example of persistent data. In Docker, the union file system is backed by a Docker storage driver, and Docker volumes use Docker volume drivers.

Let's start with the container union file system. Look at the first picture. When a container is instantiated, a read-write layer is added on top of the read-only image layers. Any change made while the container is running is reflected in that read-write layer; the underlying image layers are never affected. As soon as a change is made to a file in an underlying layer, a copy of it is written to the read-write layer. This is referred to as copy-on-write. Using a container layer provides several benefits; the most important two are speed and minimizing the storage space needed. When a new container is started, Docker does not clone the read-only base image; it just creates a new read-write layer and refers to the base image. This can be done very fast.

The next picture is about the union file system. The union file system avoids duplicating a complete set of files each time you run an image as a new container, and it isolates the changes to a container's file system in its own layer. If you didn't have a union file system, a 200-megabyte image run five times as five separate containers would, for example, consume one gigabyte of disk space.

So what is a container volume? A volume is not controlled by a storage driver. A volume is initialized when a container is created. If the container's base image contains data at the specified mount point, the existing data is copied into the new volume upon volume initialization. Volumes can be shared and reused among containers. Changes to a volume are made directly, and those changes are not included when you update an image. Volumes persist even if the container itself is deleted. A minimal sketch of both behaviors follows.
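To make the contrast concrete, here is a minimal shell sketch of the two behaviors described above, assuming a stock Docker installation; the container and volume names are purely illustrative:

```
# Data written to the container's read-write layer dies with the container:
docker run --name c1 ubuntu bash -c 'echo hello > /tmp/data.txt'
docker rm c1                      # the read-write layer, and data.txt, are gone

# Data written to a volume persists after the container is removed:
docker volume create mydata
docker run --rm -v mydata:/data ubuntu bash -c 'echo hello > /data/data.txt'
docker run --rm -v mydata:/data ubuntu cat /data/data.txt   # still prints "hello"
```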
So why do we need container persistent data? The stateless nature of containers offers a myriad of possibilities when it comes to scaling and disaster recovery. But it also carries a risk: for data inside the container, if the container dies, everything in its read-write layer is lost. And usually the application needs persistent data storage, maybe a database or audit logs.

So when do we migrate container persistent data? In scenarios such as business changes, workload rebalancing, and hardware or system upgrades, and maybe the scaling of a cluster or network and disk faults. In all of these scenarios we may need to migrate container persistent data.

And what can be used to migrate container persistent data? Usually there are two ways: the first is the data volume container, and the other one is Flocker. A data volume container is used to share continuously updated data with other containers. It is the simplest way: a data volume container is an ordinary container that provides data to other containers through mounts. As for Flocker, Docker's plugin mechanism dispatches to a third-party volume driver that performs the actual work, and the volume can be mounted when a container is created. Flocker is one such third-party Docker volume driver for data volumes.

So next is backup and restore with a data volume container. First we create a new data volume container; let's use postgres as the example. Next, we create a container that refers to the data volume container using the --volumes-from argument. Then we can back up the data volume container, as in the sketch below. When the backup is finished, we prepare a new data volume container, and at last we restore all the data into it. That makes the volume persistent.
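The exact commands are on the slides rather than in the recording, so here is a minimal sketch of the flow just described, following the standard Docker backup/restore pattern; the container names and paths are illustrative:

```
# 1. Create a data volume container (postgres as the example)
docker create -v /var/lib/postgresql/data --name pgdata postgres /bin/true

# 2. Run a container that refers to it with --volumes-from
docker run -d --volumes-from pgdata --name pg1 postgres

# 3. Back up the data volume into a tarball on the host
#    (stop the database first for a consistent backup)
docker run --rm --volumes-from pgdata -v "$(pwd)":/backup ubuntu \
  tar cvf /backup/pgdata.tar /var/lib/postgresql/data

# 4. Prepare a new data volume container (for example, on another host)
docker create -v /var/lib/postgresql/data --name pgdata2 postgres /bin/true

# 5. Restore the backup into the new data volume container
docker run --rm --volumes-from pgdata2 -v "$(pwd)":/backup ubuntu \
  bash -c 'cd / && tar xvf /backup/pgdata.tar'
```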
This is the Flocker architecture. Flocker is an open-source container data volume manager. By providing tools for data migration, Flocker gives ops teams the tools they need around stateful services like databases. When you run a database, you can use Flocker to migrate its stateful data. Flocker manages Docker containers and data volumes together: when you use Flocker to manage your stateful microservices, your volumes follow your containers when they move between different hosts in your cluster.

So next my partner, Xiang Xinyong, will introduce the next part.

Okay, thanks, Zhu Rong. Firstly, we will talk about the relationship between containers and OpenStack. You know, Magnum can deploy container clusters in OpenStack and can use container orchestration engines like Docker Swarm, Mesos, and Kubernetes. And in OpenStack there is also a project named Zun, which manages the API for containers.

Let's talk about storage in OpenStack. There are a lot of projects related to storage in OpenStack: for example, Cinder provides block storage, Manila provides the shared file system service, and Swift provides the object storage service. And you know there are lots of Docker volume drivers, like Flocker, like Convoy, like REX-Ray, and there is also a project in OpenStack named Fuxi; it's a Docker volume driver too.

Okay, let's get back to the topic of how to migrate container persistent data across clouds. You know, there are many techniques for migration. Firstly, we can simply copy the data by hand, and secondly we can back it up and restore it. Today we will show a demo in OpenStack of migrating a persistent data volume between clouds, like in this picture.

Before that, I will introduce an OpenStack project. This project is Karbor. Karbor is an official OpenStack project, and its former name was Smaug. Karbor is a native name for the koala. This animal is very cute, it lives in Australia, and it has a pouch, so it means Karbor can protect everything in OpenStack. Karbor is also designed to provide a high-level framework that integrates a lot of plug-ins, including vendor plug-ins.

This is the Karbor API. It defines what can be protected and how to protect it, which means a protection plan and a protection provider. A protection provider includes two parts: one is the bank plug-in, and the other is the protection plug-in. The protected data is stored in the bank, and we call it a checkpoint; it's a little like the checkpoint in runC. Karbor also defines when to protect, which is about scheduled operations, and how to restore.

This is the Karbor architecture. First you can see the resource APIs. In the left part is the operation engine service. It provides trigger engines, like time triggers and event triggers, defined by the user: if a user defines once a week or twice a week, something like that, the operation engine service will trigger the protection plan. The most important service is the protection service. It includes the workflow engine, and it also includes the protection plug-ins and the bank plug-ins. Karbor defines this high-level framework to provide a window for writing code that performs the protection actions. The bank is used to store the data, including the metadata, and the bank could be Swift or S3 or Ceph and so on. There are also resource plug-ins, which define which resources can be protected; Karbor aims to protect any resource in OpenStack. And you can see the checkpoints, again a little like runC's.

Now I will introduce the solution for migrating container persistent data across clouds using Karbor. You can see we have two clouds, cloud 1 and cloud 2. The container is running on a host, the container cluster is deployed by Magnum, and the volume is provided by Cinder. The volume is mounted on the host, so the container can use it. For example, if we run a database container, the volume will store the database data, like we mentioned in the previous slides.

Firstly, we use Karbor to protect the volume into the Karbor bank; we call the result a checkpoint. On the other side, we use Karbor to restore from the checkpoint. When the volume is restored, the container can mount it and continue to work. So this is the migration process: checkpoint and restore.

This is the environment that exists in cloud 1. As I said, we use Kubernetes and Docker, and the volume is mounted on a VM or bare metal. We developed a Karbor volume plug-in to migrate the data into the bank; here the bank is Swift. In OpenStack cloud 2, we use Magnum and Heat to deploy the environment, which also includes Kubernetes and Docker. Then we restore the persistent data from the Karbor bank in OpenStack cloud 2. This process completes the migration of the container persistent data, and we can deploy the database container in cloud 2 and see that the database is still working.
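As a rough illustration of that checkpoint-and-restore flow, the steps with the python-karborclient CLI look something like the sketch below. Treat it as an approximation: the provider, plan, volume, and checkpoint IDs are placeholders, and the exact argument format varies between Karbor releases.

```
# On cloud 1: create a protection plan covering the Cinder volume
karbor plan-create mysql-volume-plan <provider_id> \
  '<volume_id>'='OS::Cinder::Volume'='mysql-data'

# Trigger protection now; this writes a checkpoint into the bank (Swift here)
karbor checkpoint-create <provider_id> <plan_id>

# Poll until the checkpoint status becomes "available"
karbor checkpoint-show <provider_id> <checkpoint_id>

# On cloud 2, configured against the same Swift bank: restore the checkpoint
karbor restore-create <provider_id> <checkpoint_id>
```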
At the last step, we check whether the database container still works. We prepared a demo to show that. It's a little small. First, this is cloud 1, deployed with Kubernetes; we use one master and one node. Here we will log in to the Kubernetes master and the minion.

Firstly, we create a Cinder volume and record the volume ID. This is the Kubernetes deploy file; we put the Cinder volume ID into it. Then we use kubectl to create the container cluster; actually it's a MySQL cluster. You can see the Cinder volume is used: the container will use the Cinder volume. This is the container ID. We log in to MySQL and create a database, which we call crosssite, and then create a data table, crosssite1. This is the table structure in MySQL.

Then we create a protection plan in Karbor: we choose the volume resource and create the protection plan. We can protect it now, and that generates the checkpoint in Karbor. You can see it is still protecting. When the checkpoint becomes available, we can do the restore on the other side, site 2. You can see that Swift holds some data written by Karbor.

Now we restore in site 2, restoring from the checkpoint. The checkpoint ID is the same, because the two clouds share the same Swift, which we also call the bank. When the restore is finished, you can see the volume is restored on the other side. This side has already deployed the container cluster. We modify the Kubernetes deploy file and insert the Cinder volume ID; this volume was migrated from cloud 1. We use the same command to create the MySQL cluster and mount the Cinder volume into it. You can see this volume is in use. The cluster is running; it takes quite a while to deploy the Kubernetes cluster. Then we log in to the cluster minion to check the database. We have already logged in to MySQL, and you can see we have the database crosssite and the table crosssite1 in this container cluster.
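The deploy file itself is hard to read in the recording, so here is a minimal sketch of those demo steps, assuming the in-tree Kubernetes "cinder" volume plugin of that era; the names, size, and password are all illustrative:

```
# Create a 10 GB Cinder volume and note the ID it prints
cinder create --display-name mysql-data 10

# Pod spec that mounts the Cinder volume at MySQL's data directory
cat > mysql.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: mysql
spec:
  containers:
  - name: mysql
    image: mysql:5.6
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: password            # illustrative only
    volumeMounts:
    - name: mysql-data
      mountPath: /var/lib/mysql
  volumes:
  - name: mysql-data
    cinder:                      # in-tree OpenStack Cinder volume plugin
      volumeID: <cinder-volume-id>
      fsType: ext4
EOF

kubectl create -f mysql.yaml
```

On the restored side, the same file is reused with the migrated volume ID, and logging in to MySQL should then show the crosssite database and the crosssite1 table again.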
So what's next? You have seen there are a lot of ways to migrate between clouds, so we think we need a new project, like this one: Data Migration as a Service. You can see there are a lot of resources in OpenStack, like containers, VMs, volumes, shares, and images, and these resources may need to be migrated within one cloud or between clouds. So we propose this migration workflow: firstly, find the resource entities; for example, we can compose VMs, containers, volumes, and shares. Then we specify the migration target, then we create a migration group, and then we launch the migration, which creates a migration task that runs until the migration is finished. So this is our idea about migration in OpenStack.

We can discuss it in more detail in the design session; if you are interested, you can take a photo of this slide for the design session. We can also discuss it in the IRC channel. Thank you very much. Any questions?

I would like to ask how does it work with Keystone? I mean the identities between both clouds.

Yeah, Keystone is a problem. You know, we used the bank in Karbor, and the bank could be Swift, S3, or Ceph. In this demo, because Karbor has a bank plug-in and it's very flexible, we can configure the address, the username, and the password to connect to the bank. So in this demo, we shared the same Keystone. That's it. We could also use Amazon S3 as the bank. Any other questions? Okay. Thank you very much. Thank you.