OK. Good afternoon, everyone. I'm very glad to have this chance to share our latest contributions and solution proposals for the OpenStack-powered hybrid cloud, the Open Hybrid Cloud solution. I'm Dennis Koo, Chief Architect of Huawei's Cloud Computing Solution, and I will be the main speaker for this session. My colleague Leo Lee, a hybrid cloud architect, will assist me with the live demo. It is well known that the key benefits of cloud computing, such as TCO savings and business agility, come from the IT infrastructure consolidation enabled by a cloud operating system like OpenStack, in particular through a unified management entry point and standardized APIs for on-demand infrastructure as a service. The larger the range of resource consolidation, the higher the levels of resource automation, utilization, and elasticity that can be achieved to satisfy rapidly changing business and market requirements. Before the introduction of OpenStack, legacy virtualization technologies like VMware and Xen servers could only be used for small-scale resource consolidation, based on small-cluster virtualization with unified management. After the introduction of OpenStack, the range of consolidation was enlarged to the whole data center, covering multiple clusters and even availability zones, meaning that multiple clusters of virtual machines can be managed and automated through a single pane of glass, the OpenStack service consoles, for all cloud tenants. The scalability of each single cluster also grew, from several tens of physical hosts to several hundreds of physical nodes per cluster or AZ.
Furthermore, as introduced by Huawei's OpenStack cascading proposal, it becomes possible to consolidate the resources of multiple data centers at the infrastructure layer into a unified logical pool covering multiple deployment sites and OpenStack instances. That is covered by the Tricircle project, also led by Huawei. Adhering to the same philosophy, is it possible to adopt the same architecture to enlarge the infrastructure consolidation range even further, to other third-party heterogeneous clouds? Our answer is yes, it is fully feasible. In the following part of this session we will show you the working mechanisms and architecture behind it, to what extent we can actually achieve cross-heterogeneous-cloud consolidation, and how smooth an experience can be achieved via the unified management portal of the OpenStack dashboard. It is well known that hybrid cloud brings common benefits to enterprise private cloud construction. First of all, it enables extending or scaling resources from the private cloud to the public cloud, especially to satisfy cloud-burst requirements for temporary large-scale resource demand, and it enables dev and test in the public cloud while smoothly migrating the whole certified environment back to the private cloud without changing any configuration. This is one of the typical application scenarios of hybrid cloud, focused on the users of a private cloud.
Another major usage scenario that has been identified is private-to-public-cloud backup and recovery, featured by backing up cold data or non-confidential data from the private cloud to the public cloud, without the necessity of building one's own NAS or massive storage within self-owned data center sites. Another possibility is deploying different sets of applications in different cloud points of presence: for example, mission-critical applications and back-end databases located within the private cloud, while non-mission-critical workloads, or workloads with temporary resource needs and bursty traffic patterns, are placed in the public cloud, with all the relevant workloads sharing unified IP addressing, security planning, and governance across both the private and public cloud environments. Besides these benefits to the private cloud, is there any value added in public cloud scenarios? Our suggestion is that hybrid cloud is equally applicable to the public cloud, as a way to build differentiators for public cloud operators. One potential case is multinational enterprises, who will be a major customer type for the public cloud services of telecom operators and other cloud carriers. It might be possible, for example, for T-Systems to offer public cloud services to the China or Japan branch of a German company without building T-Systems data centers in the territory of China or Japan. The operator can simply leverage local public cloud resources and use the hybrid cloud to establish connectivity between the two clouds, in order to offer better interactive cloud services, such as virtual workspaces, to remote cloud tenants in close access proximity to the users, without building its own data centers there.
Another useful case might be seamlessly moving workloads from a non-OpenStack public or private cloud to an OpenStack-based public cloud. These migrations should be as seamless as possible, so that the duration of service interruption is minimized and the level of service continuity is as high as the provider can achieve. And the final benefit for the public cloud hybrid scenario is, of course, satisfying temporary bursty traffic requirements when there is a shortage in the operator's self-owned physical resource pools. So how about the technical readiness? Is there any blocking point for achieving these ideal usage scenarios of hybrid cloud? We think there are still lots of challenges ahead. First of all, there is the challenge of consistency of capabilities, API exposure, and interaction across all the different types of clouds, covering OpenStack as well as AWS from Amazon, vCloud from VMware, and Azure from Microsoft. As can be seen, in various dimensions, including image types, metadata capabilities, data volumes, security rules, and the APIs themselves, there are lots of differences in available capabilities across these heterogeneous cloud choices. So it is really hard for any cloud tenant or administrator to get a truly unified management experience across all these heterogeneous cloud back-ends, and when a back-end capability is simply not available, it is even more difficult for a purely software-based adaptation layer to smooth over the differences. The second biggest challenge is addressing: mainly that network address planning and security policies are not managed in a unified way across cloud boundaries.
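To make the first challenge concrete, here is a minimal sketch of a capability matrix for heterogeneous back-ends. The cloud names are from the talk, but the specific capability entries and function names are illustrative assumptions, not an authoritative feature comparison:

```python
# Hypothetical capability matrix; the entries are illustrative assumptions,
# not an authoritative comparison of these clouds.
CAPABILITIES = {
    "openstack": {"image_formats": {"qcow2", "raw"}, "security_groups": True},
    "aws":       {"image_formats": {"ami"},          "security_groups": True},
    "vcloud":    {"image_formats": {"vmdk", "ova"},  "security_groups": False},
}

def needs_image_translation(src_cloud: str, dst_cloud: str, image_format: str) -> bool:
    """True when moving an image between clouds requires a V2V format conversion."""
    dst_formats = CAPABILITIES[dst_cloud]["image_formats"]
    return image_format not in dst_formats

def missing_capabilities(cloud: str, required: set) -> set:
    """Capabilities a back-end lacks, which a pure software layer cannot fake."""
    caps = CAPABILITIES[cloud]
    return {name for name in required if not caps.get(name, False)}
```

The point of the sketch is the last function: when the back-end genuinely lacks a capability, an adaptation layer can detect the gap but not paper over it, which is exactly the difficulty the talk describes.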
ACLs and communication matrices need to be set up manually after any adjustment or migration of the relevant workloads between cloud presences, so rescheduling workloads across clouds brings tremendously complicated manual effort in network management. The third challenge is the difficulty of moving workloads across clouds with a single one-click trigger for the whole migration task. First of all, the image types and formats across these different clouds are quite different. Also, the layer 4 to 7 networking capabilities, including security groups, firewalls, and load balancing, are not managed uniformly across the clouds. This brings lots of difficulty in moving workloads around the clouds, and especially in automating tasks such as scaling workloads without changing their networking and security attributes during the whole procedure. The fourth challenge is, of course, the potential security breaches that could be introduced when establishing secured tunnels between interconnected cloud resource pools: in the interconnecting network channels, at the anti-DDoS borders of the public cloud offering, and through potential attacks across tenant boundaries within the same hosted environment. So it is also crucial to guarantee the security governance of the private cloud, or the extended cloud environment, when it requests additional resources from another cloud. So how can these key challenges finally be handled, and what potential architecture could satisfy these challenges and key requirements?
Our answer is: by adopting the same philosophy of cascading, or multi-site OpenStack, by means of a unified cloud broker sitting on top of a series of heterogeneous cloud back-ends, no matter whether those back-ends are provided by a self-constructed data center or by some third-party heterogeneous cloud already in place, including other third-party OpenStack-based clouds or fully heterogeneous clouds like Amazon's, vCloud, Azure, et cetera. This unified cloud broker serves as the unified API entry point, no matter what type of cloud back-end sits behind it, and also provides the same service catalog, service orchestration, uniform OpenStack API exposure, and, most importantly, quota management for all tenants across all the underlying cloud resource providers. The result is unified and flexible deployment across all clouds, seamless workload movement between clouds, very easy scaling across cloud boundaries, and, of course, fully guaranteed security. Diving deeper into the cloud broker: at the top of this end-to-end OpenStack hybrid cloud is the cascading-OpenStack-based umbrella, the unified entry point for resource quota management, API exposure, and the service catalog covering all the back-end hybrid clouds. This cloud broker layer is fully standardized and cascading-OpenStack-based; it is also named Tricircle, and is used to orchestrate and connect multiple clouds, no matter what semantic differences sit behind this unified cascading OpenStack umbrella. And underneath this unified umbrella, what about the cloud gateways that work in synergy with the unified entry point of the cloud broker?
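The broker's cross-cloud quota role can be sketched in a few lines: one tenant quota enforced at the unified entry point, before any request is dispatched to a back-end. The class, tenant name, and limits below are assumptions made for illustration, not the Tricircle implementation:

```python
# Illustrative sketch of broker-level quota management; names and limits
# are assumptions, not real Tricircle code.
class QuotaBroker:
    def __init__(self, tenant_limits):
        self.limits = tenant_limits      # tenant -> {"vcpus": n, ...}
        self.used = {}                   # (tenant, dimension) -> amount in use

    def reserve(self, tenant, dim, amount):
        """Reserve quota before dispatching to any back-end cloud.

        Returns False when the request would exceed the tenant's limit,
        regardless of which underlying cloud would have served it.
        """
        current = self.used.get((tenant, dim), 0)
        if current + amount > self.limits[tenant][dim]:
            return False
        self.used[(tenant, dim)] = current + amount
        return True
```

Because the reservation happens above the back-ends, a tenant cannot bypass its quota by spreading requests across AWS, vCloud, and OpenStack AZs.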
Here, as can be seen, the cloud gateway consists of, first of all, the key controlling-layer components from OpenStack, leveraging the key elements and building blocks of Nova, Cinder, and Neutron and speaking fully standardized OpenStack to the cloud broker. Underneath these fundamental compute, storage, and networking services, there is a series of crucial building blocks: the cloud drivers, which adapt the fundamental resource-invocation API capabilities to the back-end heterogeneous clouds, including AWS, vCloud, and other possible future cloud back-ends; the V2V gateway, for the necessary cross-cloud image translations; the hyper-switches, for establishing the cross-cloud overlay networking; and the border gateways, especially to guarantee security and establish highly secure, encrypted tunnels across the clouds. And of course the storage gateways, which can be used for workload migration and real-time or incremental disaster recovery across cloud boundaries. The standardized cloud gateway is simply deployed on top of each heterogeneous back-end cloud, corresponding to a specific OpenStack AZ. And here is a deeper explanation of the unified networking across the clouds: it enables one network across all the clouds, with unified IP address management, layer 2 and layer 3 connections across the clouds, encrypted security channels across the clouds, and multi-tenant overlay VXLAN layer 2 or layer 3 interconnectivity between the clouds.
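The cloud-driver idea in the gateway can be sketched as a thin adapter interface: the gateway's OpenStack services speak one interface while each driver translates to its back-end's API. This is an illustrative model with assumed class names, not actual Nova or gateway code; a real AWS driver would call the EC2 API, while the stub below only records requests so the sketch runs without credentials:

```python
from abc import ABC, abstractmethod

class CloudDriver(ABC):
    """Adapts OpenStack-style resource calls to one back-end cloud's API."""

    @abstractmethod
    def spawn(self, name: str, image: str, flavor: str) -> str:
        """Create an instance on the back-end; return its back-end ID."""

class FakeAWSDriver(CloudDriver):
    # Stand-in for a driver that would call EC2; records requests only.
    def __init__(self):
        self.instances = {}

    def spawn(self, name, image, flavor):
        instance_id = f"i-{len(self.instances):08x}"
        self.instances[instance_id] = {"name": name, "image": image, "flavor": flavor}
        return instance_id

class GatewayCompute:
    """Stands in for the gateway's Nova: one OpenStack entry point per AZ."""
    def __init__(self, drivers):
        self.drivers = drivers  # AZ name -> CloudDriver

    def boot(self, az, name, image, flavor):
        # The caller sees only standard OpenStack semantics; the driver
        # chosen by the AZ hides which heterogeneous cloud actually runs it.
        return self.drivers[az].spawn(name, image, flavor)
```

The design choice matches the talk: because the translation happens inside the gateway, everything above it, broker, dashboard, and tenant tooling, stays 100% standard OpenStack.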
And with the introduction of the so-called storage gateway, which is actually an overlay storage layer between the heterogeneous back-end cloud's native underlying storage and the upper-layer virtual machines, all the intermediary IO attempts are intercepted by the storage gateway building blocks beneath the guest operating system, making it possible to do the necessary incremental snapshot backup and disaster recovery across cloud boundaries. At the same time, it is necessary to enable the relevant API exposure in addition to the fully standardized API sets. And for the purpose of cross-cloud workload migration, the V2V gateway is also optionally present, in order to handle the task of fully automated V2V translation across the different VM image types. And finally, of course, there is Docker. Here is the Docker-over-OpenStack hybrid cloud illustration, where each tenant gets a dedicated master, a Docker container scheduler based on Kubernetes. OpenStack service orchestration and workflow managers like Murano can be used to automatically deploy this container resource management and scheduling, as well as the surrounding VM environment, for all these microservice Docker containers. Specifically, by running Docker applications distributed on top of OpenStack, we seamlessly combine the benefits of fully orchestrated storage and networking with those of agile, guest-OS-decoupled Docker deployment on top of various kinds of hybrid clouds, with a very seamless user experience. And compared to the VM-based hybrid cloud approach, this Docker-based hybrid cloud solution needs no further VM image translation between the different hypervisors.
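The incremental-snapshot idea behind the storage gateway is simple to sketch: only blocks changed since the last snapshot need to cross the cloud boundary. A real gateway intercepts block IO below the guest OS; here, purely for illustration, volumes are modeled as lists of byte blocks:

```python
# Hedged sketch of incremental snapshot replication; volumes are modeled
# as simple block lists, not real intercepted block devices.
def diff_blocks(prev: list, curr: list) -> dict:
    """Indexes and contents of blocks changed since the previous snapshot."""
    return {i: curr[i] for i in range(len(curr)) if i >= len(prev) or prev[i] != curr[i]}

def apply_increment(replica: list, increment: dict) -> list:
    """Apply an incremental snapshot to the remote replica volume."""
    out = list(replica)
    for i, block in increment.items():
        while len(out) <= i:
            out.append(None)   # grow the replica for newly written blocks
        out[i] = block
    return out
```

Shipping only the diff is what makes the periodic cross-cloud backup in the later disaster-recovery demo affordable compared to copying the full volume every cycle.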
Also, there will be no so-called hypervisor drivers visible to the end users' guest environments. OK, so that's just a brief overview of the key architecture employed to support the OpenStack-powered hybrid cloud. Next, we would like to give you a live demo and show to what extent these seamless experiences and unified workload management can be achieved across all these heterogeneous clouds via the fully standardized OpenStack dashboard, Horizon. As can be seen on the right-side screen, this is a fully standardized OpenStack dashboard with just a slight modification and enhancement, to reflect the specific availability zones corresponding to the specific heterogeneous back-end clouds: an Amazon cloud in Singapore, plus two heterogeneous clouds deployed in Shenzhen, China, one OpenStack cloud and one VMware vCloud. This is a fully live demo, with all operations really performed, without any simulation. We will demo five major cases. The first is the unified management of multiple clouds via this Horizon-based portal, as can be seen. OK, so I will launch an instance from the OpenStack dashboard to any underlay cloud. I choose the AWS cloud, give it the name "test", select an image, and select the key pair, security group, network, and user data. All of these are standard OpenStack messages, so we can use the same messages to launch any instance to Amazon or vCloud. During this demo case, you can observe the fully unified user experience, without any difference in the operational capabilities on the different types of back-end clouds.
Because we are using the cascaded, native OpenStack, deployed as an OpenStack-based cloud gateway natively inside the third-party heterogeneous clouds like AWS and vCloud, all of your operational capabilities are 100% OpenStack compatible. There is no need for any cross-heterogeneous-cloud adaptation, and no loss of incompatible capabilities. OK, this is done, and we can continue to the next one. So, just within minutes, you can see that a single virtual machine can be seamlessly provisioned in any of the back-end clouds. And back on the initial dashboard, we can also see, after provisioning, the resource distribution across the different back-end clouds in several key dimensions, including the number of virtual CPUs, RAM, volumes, and volume capacity, shown as percentages for the specific cloud tenant. This fully leverages the ecosystem and capabilities of the OpenStack community. For the next demos, we adopt an open-source online shopping system based on Magento, an open-source online shop for fashion goods. We previously deployed this online shopping system, the so-called "e-more" system, on top of the OpenStack-based private cloud. In that cloud, for example, during a rush hour, a promotion day, or a Christmas holiday event, it may be required to acquire a huge amount of resources in a very short period of time, enabling cloud burst across the boundary between the OpenStack private cloud and the Amazon public cloud.
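The per-tenant resource-distribution view from the dashboard reduces to a small aggregation: given per-AZ usage, compute each back-end cloud's share of each dimension. The AZ names and numbers below are made up for illustration:

```python
# Minimal sketch of the dashboard's per-tenant distribution view;
# the AZ names and usage figures are illustrative only.
def distribution(usage: dict) -> dict:
    """usage: az -> {dimension -> amount}; returns az -> {dimension -> percent}."""
    totals = {}
    for metrics in usage.values():
        for dim, amount in metrics.items():
            totals[dim] = totals.get(dim, 0) + amount
    return {
        az: {dim: 100.0 * amount / totals[dim] for dim, amount in metrics.items()}
        for az, metrics in usage.items()
    }
```

For example, a tenant with 2 vCPUs in the AWS AZ and 6 in the OpenStack AZ sees a 25% / 75% vCPU split, exactly the kind of percentage breakdown shown in the demo.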
As demonstrated by this graph, this is enabled by deploying additional web front-end instances in Amazon while keeping the load balancer and the back-end databases within the private cloud, so that all the interconnections, especially the big layer 2 connectivity between the load balancer and the newly extended web instances, are guaranteed and fully automated without any human configuration. OK, we will start the operation of this cross-cloud web app scaling. We don't have much time, so maybe we just launch the instances and go on to the next demo. You can see we have a load balancer here, with a floating IP already bound to it. We connect to this IP, so we can see the web server is working. Now, for the test, I will remove the members from this load balancer and then scale in new members from Amazon. Here, I just delete this member from the load balancer on the right side, so we can see this server will go down. OK, one moment. So, before extending to the Amazon web instances, we are deleting the web front end from the OpenStack private cloud, to demonstrate to you that after extending the web front end into AWS it is working again. OK, so I have scaled two new web front ends out to Amazon. You can see there are two newly, automatically launched instances in Amazon, and they will also automatically join the load balancer of our website. Three minutes later, this website will come back; it takes a while for the web instances to launch and initialize in AWS. You see, Amazon is also doing this: this is the AWS portal, as can be seen, spawning the relevant instances in real time. OK, so I think we can go to the next one. We only have five minutes; we will return to check the results. For the next case, we are going to use another interesting application, the online editor Etherpad.
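The scaling step just demonstrated can be sketched as a load balancer whose member pool spans clouds, so adding AWS-hosted web instances is just a member update. This is an illustrative toy model, not the Neutron LBaaS API; the class, addresses, and round-robin policy are assumptions:

```python
# Toy cross-cloud load balancer model; illustrative only, not Neutron LBaaS.
class LoadBalancer:
    def __init__(self, floating_ip):
        self.floating_ip = floating_ip
        self.members = []            # (cloud, address) pairs
        self._next = 0

    def remove_members(self, cloud):
        """Drop all members hosted in one cloud, as in the demo's delete step."""
        self.members = [m for m in self.members if m[0] != cloud]

    def add_member(self, cloud, address):
        """Join a newly launched instance, regardless of which cloud runs it."""
        self.members.append((cloud, address))

    def route(self):
        """Round-robin the next request; None when no members are up."""
        if not self.members:
            return None
        member = self.members[self._next % len(self.members)]
        self._next += 1
        return member
```

The key property mirrored from the demo: because members are identified only by a reachable address on the unified layer 2 network, the balancer never needs to know or care that the new members live in Amazon.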
We will use this specific application for migration across clouds, as well as for geographical redundancy across the three heterogeneous private and public clouds: OpenStack, VMware, and AWS. OK, first we check this server; I think it's already ready now, and I will switch to it. Etherpad is another open-source project, and we have already deployed it; it is originally deployed in a vCloud instance, and we will try to migrate it out of vCloud. We also need to enter the vCloud client to check the instance. This is the vCloud client, with five instances here, and the Etherpad server is this one. We just change something: "This is a migrate test from vCloud to OpenStack." OK, we check that this Etherpad content is saved, and now we just click Migrate, from vCloud to OpenStack. Within about two minutes it will be migrated from vCloud to OpenStack. This is not a live migration; it's a volume-based migration. You can consider it a cold migration, but without any persistent data loss during the migration. OK, you see the first Amazon website is coming back; now it is working, auto-spawning the newly extended web instances. And this one, the migration, you can see it is in progress, so the Etherpad is reconnecting to the service; it needs about two minutes, and it's still migrating. OK, can we continue with the next one? Yes: the disaster recovery. Here, we are trying to demo cross-cloud disaster recovery from OpenStack to AWS. We have the same Etherpad application; we first create a pad and call it "dr". We will enter some customized content to check that, after failover, we can recover that content exactly as the original input, from OpenStack to AWS. This "dr" pad will be incrementally backed up from OpenStack to AWS every 10 minutes.
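The volume-based "cold" migration just described follows a simple sequence: stop the source VM, copy its volume across the cloud boundary, and boot a new VM from the copy in the target cloud. The sketch below models that sequence with toy stand-ins; no real vCloud or OpenStack calls are made, and the class and method names are assumptions:

```python
# Toy model of the volume-based cold migration; no real cloud APIs involved.
class Cloud:
    def __init__(self, name):
        self.name = name
        self.vms = {}                # vm name -> {"volume": bytes, "running": bool}

    def boot_from_volume(self, name, volume):
        self.vms[name] = {"volume": volume, "running": True}

def cold_migrate(src: Cloud, dst: Cloud, vm_name: str) -> None:
    vm = src.vms[vm_name]
    vm["running"] = False              # stop first: no writes during the copy
    volume_copy = bytes(vm["volume"])  # the full volume crosses the clouds
    dst.boot_from_volume(vm_name, volume_copy)
    del src.vms[vm_name]
```

Stopping before the copy is what makes this a cold migration with no persistent data loss: the service is briefly down, but the volume the target boots from is byte-identical to the source's final state.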
To do this demo, I will run a manual backup to check it. Here, the relevant volume is incrementally snapshotted and synchronized from OpenStack to AWS, and then, based on these incremental snapshots and the original volumes, we can recover the whole volume and virtual machine, booting it in AWS. OK, I will shut down this OpenStack "dr" service to simulate a crash, a real disaster recovery situation. You can see the "dr" service goes down here, and then we recover to Amazon. In this specific demo case, the cross-cloud disaster recovery is achieved by means of the so-called overlay storage gateway sitting between the virtual machines and the underlying native storage of the heterogeneous clouds, including vCloud and AWS, so that all the relevant IO attempts can be synchronized in real time, or asynchronously in an incremental-snapshot manner, from one cloud to the other. OK, you can see the migration test is finished; the Etherpad is now migrated from vCloud to our OpenStack. The DR needs, I think, two more minutes, but we won't run out of time. OK, so we finish with the last demo case: containers across the three heterogeneous clouds, deploying Docker containers irrespective of the back-end cloud types sitting behind. Do we still have time to do this? Maybe a short one. OK, you can see here we have a cluster, pre-deployed across multiple different clouds: a Docker environment spanning three different clouds. We can just deploy a Docker application into this environment. Here we are using the online blogging system WordPress, packaged as Docker containers, deployed across all three heterogeneous clouds: OpenStack, VMware, and AWS. OK, now it starts to deploy across the different clouds. It takes about one minute, so we won't wait for it.
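The container case reduces to a tenant-level scheduler (Kubernetes plays this role in the talk) spreading an application's containers across the three heterogeneous clouds. The simple least-loaded placement policy below is an assumption made for illustration, not the Kubernetes scheduler:

```python
# Illustrative cross-cloud container placement; the least-loaded policy is
# an assumption for this sketch, not the real Kubernetes scheduler.
def schedule(containers: list, clouds: list) -> dict:
    """Assign each container to the currently least-loaded cloud."""
    load = {cloud: 0 for cloud in clouds}
    placement = {}
    for container in containers:
        target = min(clouds, key=lambda c: load[c])  # ties go to list order
        placement[container] = target
        load[target] += 1
    return placement
```

Because the containers carry their own userland and the overlay network spans all three clouds, the scheduler can place them anywhere without the image-translation step that VM-based hybrid cloud requires.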
OK, so finally, a quick summary and recap of the key ideas of this session: a unified OpenStack API and ecosystem covering heterogeneous public and private clouds is fully feasible by means of this cascading-based architectural approach. It can achieve consistent experiences and capabilities across all these heterogeneous clouds, covering unified layer 2 and 3 networking, unified app configuration and networking management, one-click disaster recovery and migration across clouds, and Docker-based lightweight, agile container deployment across clouds. If you have more interest in the detailed demo cases and the working mechanisms behind them, you can visit the Huawei booth, where we are always waiting for you with all the relevant detailed explanations. We are also going to propose some incubation projects for this OpenStack-powered hybrid cloud, featured by several proposals such as the heterogeneous cloud adapters for AWS, Azure, and vCloud, the V2V service, and the hyper network. Basically, the user-plane capabilities will be left to vendor-specific implementations; here we are just trying to organize the standardization of the controlling layer, the API service layer, as an open-source group underneath the unified OpenStack umbrella. OK, thank you all. That's all for our demonstration today. Thank you.