I happened to invent the OpenStack cascading solution and am the designer of the cascading architecture. We have finished the PoC and published the source code to the StackForge project, so you can get the source code from StackForge. Thank you.

First of all, we would like to give background information on the driving forces behind the emergence of the OpenStack cascading mechanism. That will be followed by a Vodafone case study introduced by Mr. Alexis, then a deeper dive into the solution proposal for OpenStack cascading by Joe Huang, and finally a live demo remotely connected to our OpenStack demonstration centers in China.

So firstly, we would like to talk about the overall background and driving forces, especially the underlying philosophy of cloud consolidation: eliminating the isolated resource pools that exist under the administration of large enterprises. We believe that the once-isolated resource and data silos, each dedicated to certain applications, will be consolidated into unified resource and data pools shared by multiple applications. An OpenStack-based cloud operating system serves as the core engine enabling this consolidation, and it brings key benefits to OpenStack cloud adopters. First, improved business agility and management simplicity: the greater the level of consolidation, the more agility and simplicity you get. Second, with larger resource pools and optimized resource utilization, you gain a TCO advantage over competitors. Third, with a wider range of resource automation, you will need a consolidated, unified API entry point, one that is vendor-neutral and compliant with de facto industry standards.

Due to history, or to step-by-step deployment strategies, many large enterprises have already built up a series of OpenStack instances, mostly distributed across geographically dispersed data center sites. For most large businesses these OpenStack instances are still independent of each other, so automation, instance management, and orchestration remain isolated in a stovepipe model. We urgently need to consolidate them into a unified cloud for future-oriented cloud transformation.

How can we achieve unified orchestration and automation across multiple OpenStack instances and data centers? There are possibly two viable ways. The first is a proprietary consolidation or orchestration layer sitting on top of a series of OpenStack instances. This approach exposes a proprietary northbound API to upper-layer orchestration and management software and to the whole OpenStack ecosystem of applications. The other approach ends up with a unified, OpenStack-compliant API exposure serving as the unified orchestration layer. We believe this is the ideal transformation target compared with the first. And there is one more point: even without addressing multiple physical data center sites, within the same data center there may be standalone PoD (point of delivery) deployments from different vendors, all of them based on OpenStack.
So how can these multi-vendor, or even multi-version, OpenStack PoD deliveries be efficiently consolidated behind a single API entry point? In other words, how do we consolidate multiple modular OpenStack cloud-in-a-box deliveries? This is another real challenge we are facing. Besides the traditional approach of expanding host by host, we offer another viable solution: expansion by modular, pre-integrated PoD deliveries. This is the second important driving force.

And of course, we also encounter situations where a single cloud tenant's virtual data centers or workloads are deployed across geographically separate data centers. We then need fully automated network deployment and configuration, at both Layer 2 and Layer 3, without human intervention. Even traffic optimization between the virtual data centers demands OpenStack API-driven network automation.

We are also especially confronted with the challenge of deploying a single OpenStack instance across multiple physical data centers. The reason is that the lack of sufficient bandwidth in the wide-area network creates many potential problems for OpenStack's internal message-queue communication between the API servers, controllers, compute nodes, and networking nodes deployed at different geographical sites. This would require difficult RPC decoupling of key standalone OpenStack components that sit physically apart in different data centers. There are also challenges for troubleshooting, configuration, and rolling upgrades, because of the tight coupling between the key OpenStack components spread across a large geographical area, and because of the large number of VM workloads on top of that OpenStack platform. So we definitely need a more loosely coupled, horizontally scalable architecture to solve this problem.

Okay, so next I would like to hand over to Alexis for a Vodafone case study on the relevant cascading requirements.

All right, the mic is working, that's good. So basically, when we started working together with Huawei, we asked ourselves a few questions about what the business of doing cloud inside Vodafone is. I first want to describe the situation we have today in Vodafone, which I think applies to many large multinational companies. We have 40-plus operating companies. We are acquiring companies and selling some of them; it's quite a dynamic environment. What happens with that is that the level of technology, the level of maturity, and the networking behind these companies are all very diverse. An integration exercise every time is both time-consuming and can become quite difficult if you try to push standards into operating companies that they may not be comfortable with. Another thing that is always important for us is the local relationship. Acquiring a new operating company shouldn't necessarily mean breaking their relationships with existing vendors. So whoever they buy servers from, whoever they buy software from, we are looking for a way of integrating all of these together without breaking what works in those environments.
So I made a slide of the ideal view of where we'd like to get, and OpenStack is really something we feel is going to help us achieve it. We'd love to achieve two things. One is a global standard API for accessing resources across the business, which also holds when we acquire new operating companies. The other part, very important for us, is global access. With full OpenStack-to-OpenStack orchestration, we can create global networks based on overlay networks, so all we need is basic IP connectivity between the sites. We can optimize these links when they need to be optimized, of course. But the sort of awareness in OpenStack that a tenant can span multiple countries and multiple sites, while respecting all the local deployments and their complexities, would be a very good way of pushing cloud inside Vodafone and truly benefiting the business. And I repeat this in every keynote I give at OpenStack conferences: the key for us is really the APIs. The abstraction behind the APIs is of course important, and we love the work the OpenStack community is doing to make OpenStack work. But even more than that, the de facto standard that the OpenStack API represents is, for us, the driving force for pushing OpenStack into further places. One last note before I hand back to Huawei: when you see these OpenStack deployments here, you should think not only of data centers but also of network elements, that is, network function virtualization. You should look at the broader telecom space as potentially being driven by OpenStack APIs. You could provision compute resources just as you could provision network resources, all globally, all instantly, and all with one standard. It's quite a vision, but I hope we can achieve it together.

Hello, this is Joe. We have just learned the requirements for OpenStack cascading: multi-site, multiple OpenStack instances, and multiple vendors. We also want a unified cloud with the OpenStack API, because it is ecosystem-friendly, and we need cloud data center network automation. So what's the answer? The intuition is that if we want to keep the OpenStack API in the cloud, why not use OpenStack to orchestrate OpenStacks? The idea is very good, but is it feasible? Yes. The feasibility comes from the fact that OpenStack provides a very good architecture: each service has a pluggable and extensible design, so you can plug any back-end into each service. For example, for Nova you can use KVM as a back-end, or vCenter as a back-end. So why not use Nova as the back-end of Nova? Likewise, why not Cinder as the back-end of Cinder? The same goes for Neutron, Glance, and Ceilometer. When we used this idea to make OpenStack cascading come true, it turned out to be feasible: we just needed to add some back-ends to OpenStack. We introduced components called proxies. The Nova proxy is the back-end for Nova, and it manages the Nova service of a cascaded OpenStack. The same applies to the Cinder proxy, L2 proxy, and L3 proxy. In the cascading layer, OpenStack just works as a usual OpenStack, and the proxy transfers each request to the proper cascaded OpenStack. This is how OpenStack does the scheduling between different OpenStacks. We also use the L2 proxy and L3 proxy to do cross-data-center, cross-OpenStack networking. Okay.
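To make the "Nova as the back-end of Nova" idea concrete, here is a minimal sketch, assuming python-novaclient; the class and method names are hypothetical illustrations, not the actual PoC code published on StackForge.

```python
# Hedged sketch: a compute back-end that, instead of driving a local
# hypervisor (KVM, vCenter), forwards requests to the Nova API of a
# cascaded OpenStack. Class and method names are hypothetical.
from novaclient import client as nova_client


class NovaProxyDriver:
    def __init__(self, auth_url, username, password, project):
        # One proxy node is configured per cascaded OpenStack, so it
        # authenticates against that site's identity endpoint.
        self._nova = nova_client.Client('2', username, password,
                                        project, auth_url)

    def spawn(self, name, image_id, flavor_id, network_id):
        # The cascading layer schedules the VM to this proxy node; the
        # proxy re-issues the request as a standard Nova API call.
        return self._nova.servers.create(
            name=name,
            image=image_id,
            flavor=flavor_id,
            nics=[{'net-id': network_id}])

    def get_state(self, server_id):
        # Poll the cascaded site so the cascading layer can keep the
        # instance status in sync.
        return self._nova.servers.get(server_id).status
```

Because the proxy speaks the standard OpenStack API southbound, anything that exposes that API, regardless of vendor, can sit underneath it.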
In this way, the cascading layer works like one flat OpenStack. We just reuse the same OpenStack mechanisms for scheduling and orchestration: scheduling compute nodes, scheduling Cinder volumes, and doing the network automation through L2 population, with a small update for DVR. In this session there is not enough time to dig into the technology in detail, but we can have a face-to-face talk at the Huawei booth and see a live demo of how it works.

Let me give you an example of how the cascading works like a flat OpenStack. We know that when a VM is added to a network, L2 population is activated to set up the networking for VMs located on different compute nodes. In cascading, if we create a new virtual machine, each cascaded OpenStack internally runs its own L2 population. At the cascading layer, the L2 proxy polls the status of the new VM's port, and if the port status is up, the L2 population process is activated in the cascading layer to carry the information from one cascaded OpenStack to the other. So we use the same mechanisms for orchestration, scheduling and L2 population for network automation: everything that has already been designed into OpenStack is reused to do the cascading, cross-OpenStack orchestration.

And because cascading uses the standard OpenStack API to orchestrate the underlying OpenStacks, a lot of benefits come from the architecture. For example, any cascaded OpenStack could crash while the other parts of the cloud keep running. Even if the cascading layer crashes, all the cascaded OpenStacks can still be managed through the OpenStack API and CLI, so the cloud is always workable and manageable standalone. We can also integrate different vendors' infrastructure in a plug-and-play manner, because we use the standard OpenStack API. If you have one more data center to join the cloud, it's very simple: you just add one proxy node and register the OpenStack API endpoint in Keystone, and the new OpenStack joins the cloud. Very fast, just like a USB dongle, plug and play. Because we use the standard OpenStack API, we also remove the vendor lock-in issue: you can deliver converged infrastructure with OpenStack built in and integrate it into the cloud very quickly. It also reduces the upgrade, operation, and maintenance challenges. For example, if there is a bug, you have to make a patch; if you have only one big OpenStack, you have to patch the whole cloud. But with OpenStack cascading the responsibility is very clear: if vendor one's infrastructure has a bug, just ask vendor one to fix it, and the other parts need not be involved in the patching. It is also a scale-out architecture. At first the architecture was not designed for scalability, but we found that it could scale out very, very big. In our lab, we are running a simulation test to expand the OpenStack cloud to one million virtual machines, integrating 100 OpenStack instances into the cloud. If you are interested in the architecture, welcome to join us.
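To illustrate the plug-and-play join step just described, here is a hedged sketch using the Keystone v2.0 client from the Icehouse era; the region name and URLs are illustrative placeholders, not values from the actual deployment.

```python
# Hedged sketch: register a new cascaded site's Nova endpoint in the
# shared Keystone, i.e. the "register the endpoint" step of the
# plug-and-play join. Region and URLs are placeholders.
from keystoneclient.v2_0 import client as ks_client

keystone = ks_client.Client(
    username='admin', password='secret', tenant_name='admin',
    auth_url='http://cascading-keystone:35357/v2.0')

# Find the existing 'compute' service record.
nova_service = [s for s in keystone.services.list()
                if s.type == 'compute'][0]

# Register the cascaded OpenStack as one more region/endpoint; the
# new proxy node is then configured to point at this region.
keystone.endpoints.create(
    region='shenzhen-dc1',
    service_id=nova_service.id,
    publicurl='http://shenzhen-nova:8774/v2/%(tenant_id)s',
    adminurl='http://shenzhen-nova:8774/v2/%(tenant_id)s',
    internalurl='http://shenzhen-nova:8774/v2/%(tenant_id)s')
```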
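Similarly, for the L2 population example above, here is a minimal sketch of what the L2 proxy's polling loop might look like, assuming python-neutronclient; `notify_cascading_layer` is a hypothetical stand-in for triggering L2 population in the cascading layer.

```python
# Hedged sketch: poll port status in a cascaded OpenStack and, when a
# new port comes up, hand its host (VTEP) information to the cascading
# layer so L2 population can propagate it to the other sites.
import time
from neutronclient.v2_0 import client as neutron_client

neutron = neutron_client.Client(
    username='admin', password='secret', tenant_name='admin',
    auth_url='http://cascaded-keystone:35357/v2.0')

known_up = set()


def notify_cascading_layer(port):
    # Hypothetical callback: in the real design this would trigger the
    # cascading layer's L2 population with the port's host IP.
    print('port up:', port['id'], 'on host', port.get('binding:host_id'))


while True:
    for port in neutron.list_ports()['ports']:
        if port['status'] == 'ACTIVE' and port['id'] not in known_up:
            known_up.add(port['id'])
            notify_cascading_layer(port)
    time.sleep(5)  # poll interval; the real proxy may use notifications
```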
Okay, so we have a dedicated Huawei exhibition booth at the marketplace, and we welcome all of you to join the in-depth discussions there. The final part is the live demo, which is remotely connected from China. We have actually deployed three instances of OpenStack, geographically about 1,000 kilometers apart in different Chinese cities.

The first use case we want to demo is the consolidation of a brand-new OpenStack instance into the unified, cascaded OpenStack cloud. We will use desktop sharing. This is the management entry portal for the multiple sites of OpenStack. Among the various administration operations, you can simply enter the domain name or IP address for a newly added OpenStack instance, and within one minute that OpenStack will be consolidated into the existing cascading OpenStack. This is quite different from the traditional approach of bare-metal provisioning and machine-by-machine expansion: it allows you to keep providing online cloud services while adding another OpenStack resource pool with just a few minutes of operations, which makes system expansion much more efficient under a unified OpenStack environment. As can be seen, the newly added OpenStack instance in Shenzhen has now been successfully added to the consolidated pool. Here is the summary of the OpenStack software information, based on the Icehouse version of OpenStack, along with the relevant resource information.

Okay, that was the first demonstration case. Secondly, we would like to demonstrate a scenario where Mr. Alexis is deploying a unified cloud tenant's virtual data center across multiple physical data centers. Here the cloud tenant logs into his dedicated self-service portal. Due to time limitations we will not do the actual creation of virtual machine images. As can be seen in the unified subnet address information, the addresses 192.168.20.22, .21, and .19 belong to the same subnet but actually span cities 1,000 kilometers apart: Xi'an and Shenzhen, northern and southern Chinese cities, within the Huawei lab. We will now use ping operations to show that, within this one subnet of three virtual machine nodes, the latency is quite different. Within the same physical data center, the latency is less than one millisecond. For the same subnet stretched over a big Layer 2 connection between cities 1,000 kilometers apart, it is 56 milliseconds. This is one example of allowing a geographically dispersed VPC to be deployed across multiple data centers.
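As a minimal sketch of the latency check in this demo, the following pings each VM in the stretched subnet and reports the round-trip summary; the name-to-address mapping is an assumption for illustration, matching the demo's 192.168.20.x subnet.

```python
# Hedged sketch: ping the three VMs in the stretched Layer 2 subnet
# and print the rtt summary line (Linux ping output format assumed).
import subprocess

VMS = {'local-vm': '192.168.20.22',
       'xian-vm': '192.168.20.21',
       'shenzhen-vm': '192.168.20.19'}  # illustrative mapping

for name, ip in VMS.items():
    result = subprocess.run(['ping', '-c', '3', ip],
                            capture_output=True, text=True)
    # The summary line looks like 'rtt min/avg/max/mdev = ...'
    rtt = [l for l in result.stdout.splitlines() if l.startswith('rtt')]
    print(name, rtt[0] if rtt else 'unreachable')
```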
The third case demonstrates an active/standby, or active-active, load-balanced application: virtual applications deployed in a geographically redundant manner, with zero interruption on failover. For this case we will use slides to give extra explanation of how it works, since it is more complicated. Thanks to cascading, we get the benefit of a unified cascading-layer OpenStack: with almost the same syntax and semantics of the OpenStack API, we create the relevant volumes for both the active virtual machine and the standby virtual machine, and we use an extended OpenStack API to enable shadow volumes between the physically separate data centers, that is, replication of the volume and virtual machine data content. Then, upon failure of the active one, service traffic is automatically steered to the newly available standby virtual machine. Here we are using an NFV virtual IMS as the example workload, to enable continuous core session control without interruption. As can be seen, the active and standby virtual machines, which would traditionally sit within the same Layer 2 network, are now split across different data centers but deployed within a single VXLAN Layer 2 network, so the load-balancing node can steer the traffic just as if they were within the same subnet in the same data center. And as can be seen from the shared screen, the core sessions are not interrupted even upon the catastrophic failure of one data center.
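As a hedged sketch of the volume setup for this case: the standard Cinder API creates the active and standby volumes in availability zones assumed to map to the two data centers; the zone names are illustrative, and the cross-DC shadow-volume replication itself used an extended API that is not shown here.

```python
# Hedged sketch: create active and standby volumes via the standard
# Cinder v2 API, one per availability zone (zone names illustrative).
from cinderclient import client as cinder_client

cinder = cinder_client.Client(
    '2', 'admin', 'secret', 'admin',
    'http://cascading-keystone:35357/v2.0')

active = cinder.volumes.create(size=100, name='ims-active',
                               availability_zone='az-xian')
standby = cinder.volumes.create(size=100, name='ims-standby',
                                availability_zone='az-shenzhen')
# An extended (non-standard) call would then pair the two volumes for
# cross-data-center replication; that API is not sketched here.
```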
The final case illustrates VM mobility across multiple data centers, also based on the cascaded OpenStack. This case is quite similar to the previous zero-interruption geographic redundancy, except that the virtual machine moves seamlessly, with its whole running context as well as its volumes, within one minute from one data center to another. We again use ping operations to check the continuity of service and to show that the service interruption during the whole VM migration procedure from one DC to another is less than one minute: after one minute of broken ping connectivity, it recovers. In this specific case we are using Windows virtual machines to simulate a VDI service: a user on a business trip from one city to another has his virtual machine roam along with his travel. That is just one example; in other cases you might move virtual applications, or certain DevOps applications, between the production data centers and the test-and-dev data centers. These are just some showcases demonstrating the potential benefits of OpenStack cascading. Okay, thank you.

So lastly, we would like to welcome you to join the dedicated Design Summit session tomorrow afternoon on OpenStack cascading, especially its pros and cons in comparison with the Cells technology. And you are also warmly welcome at the Huawei booth for the live demo of cascading and deep-dive discussions. Thank you. Any questions? Yeah, please.

[Question about networking between VMs in different data centers.] Yes, okay. For the network, we use VXLAN to connect virtual machines in different data centers. With VXLAN, you only need to know the remote host IP for the tunnel endpoint, and the L2 population carries the host IP information to the other data center, so a virtual machine in one data center knows where the other virtual machine is located. For communication between networks located in different data centers we likewise use tunneling, and we use router updates, just like the DVR process, to update the information.

[Question about large-scale testing of the cascaded OpenStack.] Yeah. As demoed, we showed three cities 1,000 kilometers apart, and we have already proved that the latency is acceptable for normal operating expectations, for cloud administration and operation targets. For scalability and performance, we have also established a performance and load-test simulation environment in order to simulate one million virtual machines cascaded across 100 OpenStack instances, to verify the horizontal scalability of this approach.

[Question: are all features implemented, or is more on your roadmap?] Currently, because we just finished the PoC, for Nova almost 80% of the features are done; it is PoC code, not production code. For Cinder, almost 90%. For Neutron, because all the features rely on DVR, only about 30% to 40%, since we have not finished the north-south networking, and the rest, like Load Balancer, Firewall, and VPN, are pending; we have just finished the basic L2 and L3 cross-OpenStack networking. We have also accomplished Glance image-management cascading: we use multiple Glance instances to organize a distributed image service. For Ceilometer, we have just started studying it. We hope we can contribute the cascading to the community, so we have published all the code and asked for a session this afternoon. Oh, and Cinder is also already supported in cascading mode; Cinder is actually the simplest one.

[Question about Keystone.] We use a global Keystone service for authentication and authorization. [So Keystone is in one data center? Do you go for distributed, like the other services?] One Keystone service, but that one Keystone service can be deployed as distributed, or active-active, or in disaster-recovery mode. Actually, at the current stage, the cascading layer and the cascaded layer, the native OpenStack layer, share the same Keystone; this is the current approach. But in the future we anticipate the possibility of Keystone federation, or even using the same model to support hybrid cloud, with the OpenStack cascading layer as the unified orchestration reference architecture that adapts heterogeneous hybrid clouds to the same unified information model and API.

[Question: do you authenticate in site one, and in site two?] No, it is a global Keystone service; we do not deploy a Keystone in each site. A global service, maybe not one instance, but one global service. This is very important because you have to keep a consistent global view of the resources, of projects, roles, quotas, domain information, and so on. If you have no global service, it is very, very difficult to maintain consistency across the distributed system. Right, I understand the question: we generally use PKI tokens. In site one, you have the global Keystone.
[Question: how does one centralized Keystone perform? Is it highly available, or globally replicated?] It is currently a centralized deployment. In fact, we have updated Keystone in order to get a reasonable distribution of the tokens that are issued.

[Question: I suppose a running VM does not depend on Keystone, but you either distribute the data everywhere, or it may take a lot of time when you start a volume that is not within the specified availability zone. How do you manage that?] You mean a failure case, or... okay, I understand. For example, for a virtual machine creation request, the availability zone is an optional parameter. If you do not specify an availability zone, the system will choose one for you. And for all other operations, like restart, reboot, pause, or resume of a virtual machine, it works just like in a general OpenStack: the API server routes the request to the proper compute node, here the proxy node, based on the host ID. For availability zones, one proxy node represents one cascaded OpenStack, so all the virtual machines created in an availability zone are attached to the proxy node that belongs to it, and all the later operations can be routed to the proper cascaded OpenStack. Apart from that, common settings like flavors and host aggregates can also be managed; we have several methods for that and have already finished some of them. We can have a face-to-face discussion. In principle, if the request to the cascading API does not specify an AZ, we enhance the scheduling and filtering layer of the cascading layer to run the corresponding scheduling algorithm, and we record the binding relationship between the underlying OpenStack instance and the instances you provision through the OpenStack API.

[Question: so when the availability zone is specified in the request, you do the filtering again and then do some kind of scheduling for the specific data center?] Yes, it is used to route the request. Each proxy node is configured to manage one cascaded OpenStack. [Each proxy maps to one?] Yes, one proxy per cascaded OpenStack, but you can have multiple proxy nodes pointing to the same cascaded OpenStack. We can have a face-to-face discussion; welcome to the Huawei booth.
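To summarize the availability-zone routing just described, here is a minimal sketch; the zone and proxy names are illustrative assumptions.

```python
# Hedged sketch: each availability zone in the cascading layer maps to
# one proxy node, and each proxy node manages one cascaded OpenStack.
AZ_TO_PROXY = {
    'az-xian': 'proxy-node-1',      # manages the Xi'an OpenStack
    'az-shenzhen': 'proxy-node-2',  # manages the Shenzhen OpenStack
}

DEFAULT_AZ = 'az-xian'  # chosen by the scheduler when no AZ is given


def route_request(requested_az=None):
    """Return the proxy node that should handle a VM request; later
    operations (reboot, pause, resume) carry the instance's host ID,
    which names the same proxy node, so they route back to the same
    cascaded OpenStack."""
    az = requested_az or DEFAULT_AZ
    return AZ_TO_PROXY[az]


print(route_request('az-shenzhen'))  # -> proxy-node-2
```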