Great, you can stop the music. Okay, good evening everyone. I know it's always tough to give the last presentation before the beer. Anyway, my name is Eshed Gal-Or, I'm working for the Huawei European Research Center, and I will be talking about the hybrid cloud. Let's start.

So, with a show of hands, anyone here attended the multi-site session two hours ago? Okay. So I will review the parts of it that are crucial for this one.

What is a hybrid cloud? Over the last 15 years of data center evolution, we've seen workloads move from bare-metal data centers to virtualized data centers, then to private clouds and VPCs. Then we started doing cloud burst, extending our private data center story into the public cloud, and recently we've been talking about hybrid clouds, which essentially means connecting public clouds and private clouds together in some way to gain additional resources. Today we're talking about the hyper cloud. The hyper cloud is a hybrid cloud over clouds, and I'll explain what that means.

How does it look? Basically, you have some top management OpenStack API where the users, the tenants, are working, and on the bottom you have multiple public clouds, maybe private clouds as well. Some resources reside in a specific cloud; some resources extend across different clouds.

What can you do with it? First of all, a single pane of glass, which is what customers want. You have resource management, monitoring, and dashboards in one place, user and role management in one place, an image repository, which is always a headache, and the network topology. You don't need to take photos, by the way; I have it on SlideShare.
I'll give you the link at the end.

Cross-site networking, which is the holy grail: extending the L2 and L3 networks across different clouds, allowing you to have the same subnet and the same IP range in different clouds. That's very important if you want to move VMs across clouds without reconfiguring everything.

Quality of service: you can do rate limiting per tenant when you're crossing the WAN. A heterogeneous cross-cloud overlay is very important, because you may have different vendors, different clouds, different overlay types on each public cloud, maybe even on each private cloud; it's very important to be able to cross between them. Security alignment: you want cross-cloud network access control, and you have to use the same security groups; otherwise, when you move a VM from one place to another, things will stop working. And a distributed firewall service.

Geo-elasticity, which, as my colleague said earlier, is a fancy name for redundancy. You want geo load balancing, and you want the resources you need for load balancing in the right place. I mean, if traffic suddenly comes in from North America, you want your resources in North America and not in APAC. You want to automatically optimize the service for the best user experience while keeping OPEX lowest, because the cloud business is about OPEX. You want to maintain regulatory conformance if you cannot move resources out of your country; you know these issues. And quality of service and SLA constraints: if your service has a specific SLA or specific latency restrictions, you want it to run in the right place.

Zero configuration. When you move a VM today, you move the VM and then you need to move its image, all its volumes, all its networks, all its configuration, everything. That's a headache.
So we're talking about moving the VM along with all its ancillary configuration: the security groups, IP address, volumes, images, firewall entries, everything.

Load balancing pools: you can define load balancing by subnet, meaning you create a subnet, and every machine that acquires an address in that subnet, no matter on which cloud it resides, automatically becomes part of the load balancing pool, without configuring it manually every time.

Images. Images are a tough problem. You want images automatically synchronized: when you move a VM from one cloud to another, you want the image to already be there, the right image in the right version, and you want automatic format conversion. You don't want to deal with that manually.

So what can we use this for? I have a few use cases, nothing fancy. Cross-cloud application deployment: you have an application, and you want to deploy the database on your private site, the web front-end instances on other sites, and the application back end on yet another site. That's possible with this. Cross-cloud application migration: you want to move an application from your private site into the public cloud, or from a public site back to the private cloud. Cross-cloud application scaling, or cloud burst as we know it: you have an application running under a certain load.
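The load-balancing-pools-by-subnet idea from a moment ago can be sketched minimally. This is my own illustration, not the project's actual API: the subnet, VM names, and cloud labels are all invented.

```python
import ipaddress

# Hedged sketch of "load balancing by subnet": pool membership is decided
# purely by the hyper subnet, never by which cloud hosts the VM.
# The subnet and all VM entries below are illustrative, not from the talk.
POOL_SUBNET = ipaddress.ip_network("10.0.1.0/24")

def pool_members(vms):
    """Return names of VMs whose hyper IP falls inside the pool's subnet.

    `vms` is an iterable of (name, hyper_ip, cloud) tuples; the cloud field
    is deliberately ignored -- that is the point of subnet-based pools.
    """
    return [name for name, ip, _cloud in vms
            if ipaddress.ip_address(ip) in POOL_SUBNET]

members = pool_members([
    ("web-1", "10.0.1.5", "aws"),
    ("web-2", "10.0.1.9", "azure"),
    ("db-1",  "10.0.2.7", "private-openstack"),
])
```

Here `web-1` and `web-2` join the pool automatically, even though they live on different clouds, while `db-1` stays out because its address is in another subnet.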
Now you want to scale it out, to different clouds, without doing all the configuration that would normally require.

Add clouds dynamically: you want to extend your hyper cloud while it's running, so you can add more sites, public sites as well as private sites, and of course you may want to take a site out, for example to move from AWS to Azure because of price or something like that.

Cross-cloud disaster recovery: some kind of calamity destroys one of your private sites, and you want to move the workload to your disaster-recovery public site.

Cross-cloud containers. Containers are becoming more and more handy these days, and one of the problems with containers is, first, that they don't work with VMs on the same network, which is a problem, and second, that you certainly cannot deploy one container in this cloud and another in a different cloud and still have them communicate. So, something like this: parts of your application deployed in different containers on different sites, and they can still communicate with each other. About containers, we're going to have a session on Kuryr, the new project; Gal, my colleague, is going to talk about it tomorrow, so I recommend going.

So what is the challenge in making this work? This is the Tower of Babel. It was an amazing project while everyone was speaking the same language, but when people started speaking different languages, it didn't work. Connecting public clouds is basically the same. They have different APIs, and that's by design. They have different features, also by design, because that's a differentiating value. So cross-cloud activities are very complicated to do with just API conversion. It's a headache, as you can see with this guy up there trying to connect several buildings.
It doesn't work. The answer is cross-cloud consistency: you need an over-cloud management layer in order to maintain this consistency; otherwise it just won't work.

Why am I talking about this at an OpenStack meeting? Because we're going to make everything OpenStack, and that's why we will be able to use OpenStack to do it. So what are the building blocks we're going to use? First of all, OpenStack, that's a big surprise. We're going to use Tricircle, the multi-site management project, which was presented two hours ago; I will touch on it here because most of you did not see that. And the last building block is Data Protection as a Service, a new project that we are also launching; another colleague of mine will be talking about it on Thursday, and I will give you the link soon.

Okay, so how we make this work with all those public clouds is by using jackets; I will explain about that. And of course, if you want to see the DPaaS, sorry, Data Protection Service, it's a nice acronym, make sure to attend that session; I will give another pointer at the end.

So let's talk about Tricircle a bit. The picture is very similar to the one I showed before, but the bottom side is all OpenStack. Tricircle is actually an OpenStack project, and it even has a production environment right now. It essentially gives you one top OpenStack to manage multiple bottom OpenStacks.

So how does it work? This is actually the experimental branch, not the one that is currently in production; I won't go over that one, just this one. Basically you have a top side, as you can see, and multiple bottom sides, and on the top side you have an unmodified OpenStack management layer.
That's basically the APIs of OpenStack without any modifications. Then you have an OpenStack adapter, which hooks and intercepts all those API calls and handles them within all the rest of these green boxes you see. I don't want to go too deeply into it, but eventually all your activities, like launching a VM and everything that precedes it, are handled and distributed through the cascading service on the top side and the cascaded services on the bottom sides; you have those two services, and eventually the work is performed down on the bottom side. This layer gives you the consistency of doing everything related to the life-cycle management of your OpenStack resources as a transaction.

Tricircle cross-site L2 connectivity. Tricircle talks about stretching your L2 network across the OpenStack instances, and we need that for the hyper cloud. This is how it is supposed to be done: currently the proposal is to use something we call a border gateway. It can be anything, hardware or software. We are working on a reference implementation using some extensions to the Neutron L2 gateway; this is something we are currently discussing with the L2 gateway team. So you have multiple compute nodes on one side and multiple compute nodes on the other side, and the border gateway is basically capable of mapping your tenant tunnels on one side to the tenant tunnels on the other side and moving the data between them. It's done through either L2 population on both sides, or flooding.
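The border gateway's core job just described, re-keying a tenant's tunnels from one side to the other, might be sketched roughly like this. The tenants, site names, and segmentation IDs are invented for illustration; the real gateway does this per frame in the data plane, not in Python.

```python
# Hedged sketch of the border gateway's mapping: translate a tenant's
# overlay segmentation ID from one site's tunnel space to the other's.
# All tenant names, site names, and IDs are made up for illustration.
TUNNEL_MAP = {
    # (tenant, site) -> local segmentation ID on that site
    ("tenant-a", "site-1"): 1001,
    ("tenant-a", "site-2"): 2042,
    ("tenant-b", "site-1"): 1002,
    ("tenant-b", "site-2"): 2077,
}

def translate_segmentation_id(tenant, ingress_site, egress_site):
    """For a frame that arrived on `ingress_site`'s tunnel for `tenant`,
    return the segmentation ID to re-encapsulate with toward `egress_site`."""
    if (tenant, ingress_site) not in TUNNEL_MAP:
        raise KeyError(f"unknown tunnel for {tenant} on {ingress_site}")
    return TUNNEL_MAP[(tenant, egress_site)]
```

Whether the gateway learns which remote tunnel to pick via L2 population or via flooding is, as said above, left open to the implementation.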
It's open for implementation. And then you can see that you have the same subnet on both sides.

So, a bit about the hyper cloud now. In the hyper cloud architecture, as you can see, you have the hyper cloud management on top, which is basically the OpenStack-plus-Tricircle management layer. On the bottom, on the left side (or your right side) you see an OpenStack instance; this is basically what Tricircle already does. The others are public clouds; I've picked AWS and Azure, but of course it can be anyone. The idea is that whatever runs inside OpenStack on bare metal, the under-the-hood core services, needs to run inside VMs on the public cloud, because you don't have access to their core. So everything runs as a VM: we have VMs that are actual workload VMs, and we have a VM that runs our jacket, which is the part of the system that makes AWS, and a VPC on AWS, look like an OpenStack resource.

The hyper tenant, and this is very important: this is the abstraction we use. When you are creating a VPC on, let's say, AWS, you have a tenant in AWS and you create multiple VPCs, up to five if I remember correctly, and you can define the CIDR, the network IP range, for each VPC; from that point on, Amazon will manage those IPs for you. You cannot do static assignment. So I've put here two tenants on AWS, two VPCs on one of them, one VPC on the other, and another tenant on OpenStack, just a regular OpenStack tenant. What I want to do is define this new entity called the hyper tenant, which is basically a tenant that runs across, like an overlay on top of, multiple provider tenants. What does that mean, and how do we do it?
We manage it with Tricircle, basically with all the users on a single Keystone right now. All the bottom OpenStack instances connect to the top Keystone, where all the tenants are managed. Of course, in this case we have the public clouds, where we don't have access to tenant management, so there we have a different tenant than the one we are actually talking about. So our jackets talk to the Keystone on top (or a federated Keystone in the future) and do all the local activities for that hyper tenant using the local provider tenant from Amazon, acting locally on its behalf. Well, now I have to say the same thing I just said, so excuse me if I'm redundant here.

Okay, the hyper network. A hyper network is this overlay on top of different clouds. In this case, on the left side is Amazon, decomposed into all its entities down to the VM: you have a region, you have a tenant, you have a VPC, you have a subnet, you have VMs. The VMs are given allocated IP addresses by Amazon, as I said; you cannot change them, you just decide your network range and that's it. On the other side we have the OpenStack tenant, a network you create, and a subnet you decide, whatever you want. We want to have the hyper subnet, basically putting all those VMs on the same subnet. In this case it's the same as the one in OpenStack, because in OpenStack we can do whatever we want; in Amazon it's more challenging. So this is the same hyper subnet.

So how do we do that in Amazon, or in other public clouds? That's the big question here. We will use OpenStack for management. I know these slides are a bit complicated, but it gets worse.
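As a rough sketch of the hyper tenant abstraction just described, one logical tenant resolved to a local provider tenant per cloud: the class shape and every name here are my own assumptions for illustration, not the project's actual API.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the hyper tenant abstraction: one logical tenant
# overlaid on per-cloud provider tenants. All identifiers are invented.
@dataclass
class HyperTenant:
    name: str
    # cloud name -> local (provider-side) tenant the jacket acts as
    provider_tenants: dict = field(default_factory=dict)

    def local_tenant(self, cloud: str) -> str:
        """Resolve which provider tenant to use when performing an
        operation on `cloud` on behalf of this hyper tenant."""
        return self.provider_tenants[cloud]

ht = HyperTenant("acme", {"aws": "aws-account-7", "openstack": "project-42"})
```

The jacket on each cloud would consult a mapping like this (fed from the top Keystone) before doing any local operation on the hyper tenant's behalf.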
So bear with me. On the right side, you have OpenStack. You have four hosts here, four physical machines: two running Nova compute nodes, one running the network node, one running the controller node. Typical OpenStack. On the other side, you have four virtual machines running strange animals we call the hyper node, the hyper controller, and the hyper switch.

So let's take off the hood and look inside the engine. What's inside? I know it looks complicated, but actually it's not. Most of it is this kind of gray color, and that's all OpenStack. It's just OpenStack: an OpenStack compute node, an OpenStack controller, an OpenStack network node. You can see it's the same on both sides. In this example I've used Dragonflow which, if you are not familiar with it, is a Neutron implementation using an SDN controller. It's a very nice project and I recommend you look into it, but it's of course not mandatory; you can also see the Q agent, the Neutron agent, which is basically the default implementation of Neutron.

So basically it's the same; we didn't change anything. We deployed the compute node, the network node, and the controller node inside VMs on the public cloud, we made a small addition to the Nova driver, which I will show soon, and we added an additional bridge in the OVS; I'll show it.

So, another complicated slide. Here we can see Amazon on one side, OpenStack on the other side, and the cascading service from Tricircle, which we mentioned before, managing both of them. What you see is that each VM in Amazon now basically has two IP addresses: it has one IP address it got from Amazon, that's the PIP, the provider IP, and it also has a hyper IP address, which is the IP that we actually want to use. It's within the same subnet as OpenStack, right?
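A minimal sketch of this dual addressing: each VM record pairs its Amazon-assigned provider IP with its hyper IP on the stretched subnet. All the addresses below are invented for illustration.

```python
# Hedged sketch of dual addressing: every VM on the public cloud carries a
# provider IP (PIP, handed out by Amazon) and a hyper IP on the stretched
# subnet. Names and addresses are illustrative only.
VMS = {
    "vm-1": {"pip": "172.31.4.17", "hyper_ip": "10.0.1.5"},
    "vm-2": {"pip": "172.31.9.23", "hyper_ip": "10.0.1.6"},
}

def hyper_to_pip(hyper_ip):
    """Find the provider IP that is actually routable inside the VPC for a
    given hyper IP (the mapping the hyper node must apply when translating)."""
    for vm in VMS.values():
        if vm["hyper_ip"] == hyper_ip:
            return vm["pip"]
    raise KeyError(hyper_ip)
```

Workloads only ever see the hyper IPs; the PIPs exist so that traffic remains routable inside Amazon's network.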
And we have a router, the Amazon router, which makes sure that everything being sent goes through our hyper node, our compute node proxy. From there it goes through the hyper switch to the other side, through the border gateway, same as in Tricircle. The hyper controller basically makes sure that we can do the lifecycle management locally: all the creation of VMs, creation of resources, tying everything together. This is the control part. It feeds directly from the cascading service and uses the Keystone on top. And of course there is also the Nova driver there, which is responsible for creating additional hyper nodes as we go along and need them, to create more subnets and more hyper nodes.

And now, another complicated slide. We have this big orange box, that's Amazon. Actually, it doesn't matter whether the other side is another public cloud or an OpenStack; once the traffic gets out, it doesn't really matter what the other side is. It could be the same setup, or just native OpenStack.

So here's how we did it. First, we edited the routing table: we created a custom route table, which is an Amazon capability, and we placed our hyper node, which gets an IP address from Amazon, as the default gateway in that custom route table. You can see it's the default gateway for everything that is supposed to go on the hyper subnet. So we make sure that if VM 1, in this case, tries to ping VM 2 on its hyper IP address, it will reach the hyper node, and at the hyper node we do the translation from this IP to something that is routable to the other side. You can see the red box: an additional bridge that we've put into the hyper node, which is based on the compute node, the Nova compute node of OpenStack. This bridge does the address translation.
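The translation this bridge performs might look, very roughly, like a pair of OpenFlow rules of the following shape. The match/action syntax follows ovs-ofctl conventions, but the exact rules, priorities, and bridge name are my assumptions, not the actual implementation.

```python
# Hedged sketch of the extra OVS bridge's translation, expressed as
# ovs-ofctl-style flow strings. Priorities, addresses, and the bridge
# name are invented; the real rules live in Huawei's (to-be-released) code.
def nat_flows(hyper_ip: str, provider_ip: str):
    """Build a flow pair: rewrite the destination hyper IP to the routable
    provider IP on the way out, and restore the source on the way back."""
    outbound = (f"priority=100,ip,nw_dst={hyper_ip},"
                f"actions=mod_nw_dst:{provider_ip},NORMAL")
    inbound = (f"priority=100,ip,nw_src={provider_ip},"
               f"actions=mod_nw_src:{hyper_ip},NORMAL")
    return [outbound, inbound]

flows = nat_flows("10.0.1.6", "172.31.9.23")
# Each string could then be installed with something like:
#   ovs-ofctl add-flow br-hyper "<flow>"
```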
We did that by adding a bridge in the OVS, and we added a few OpenFlow rules there which do all this trickery. We're going to release this as an open source project soon, so everything I say here you will be able to look into in the code. You also see the L2 pop; that's optional. You can do pre-population of L2, teaching both sides the MAC addresses of the other side, or you can just do it with flooding; it depends on which implementation you choose to deploy. In order to make the hyper node, the VM that actually hosts the hyper node, able to inject itself into the data path (because no one is addressing packets to this VM; you're sending to another one), you just disable the source/destination check in Amazon, and then it will let you get past those security checks that Amazon puts on all the VMs.

I'm not going to talk about the hyper image and the hyper volume in this session, because there would not be enough time for it. I do have a couple of screenshots, and I see I actually finished much earlier than I planned. So how does it look right now? The user experience basically hasn't changed from what you're used to; you just get another drop-down, the data center, when you're launching an instance. Here we see AWS Tokyo, vCloud, Beijing, and so on. This is actually taken from a working proof of concept that we have in Huawei. Afterwards you can see all your instances, and you see that basically what Tricircle does is map an availability zone to each site.
So what you see here is that each public cloud has an assigned availability zone, and that's how you use the existing OpenStack abstraction to manage all of it. And this is basically it for me, because, you know, people want their beer cold. If you want, check out these two sessions tomorrow. All right, I'll leave time for questions. And if you want to grab this on SlideShare, there's your link. Now, hoping for questions.

Yes: what if your Amazon controller stops? Okay, so basically we had the same question in the Tricircle session, where it wasn't Amazon, it was just a site or something. It's basically a policy that you need to decide: if one site goes off the map, do you want things to stop working completely, or do you want everything that still works to keep working? You only lose the control plane: you cannot manage your running instances, you cannot stop them, you cannot create new ones on that side, but everything that is already deployed keeps working, because the controller is not on the data path. You don't need it running in order to have stuff that is already deployed keep working.

HA for the controller? That's a good question; he's my boss, by the way, so thank you. Yes, HA for the controller is also possible.

More questions? Yes. Good question; it's actually two questions, and I will repeat them if I can. The question was: you have two VMs, and correct me if I'm wrong, you have two VMs in Amazon, and you don't want to go through the hyper node when these VMs want to talk to each other; or rather, you do want them to only be able to talk to each other using their hyper IP addresses. Okay. So first of all, let me emphasize something.
I missed saying this before: the hyper node that I showed is just one of multiple possible solutions for making this work. It's basically an agentless solution: you put the hyper node in as an actual node, and you need to go through it. There are other alternatives we've explored, for example running the hyper node functionality inside the VM as an agent, which is good for performance, but customers don't like it because it introduces software pieces that run inside the VMs. It's also possible to do it in other ways, for example using a container that runs inside the VM, or other nested virtualization approaches; there are other solutions as well, and with those it's easier to do VM-to-VM traffic on the same side without going through the hyper node. In this use case, you need to go through it. And to make sure that your VMs cannot talk to each other over the provider IP addresses, it's a security rule that you put in; you just block it. But in real life, if someone actually deploys this kind of system, I'm not sure they will even be aware of those provider IP addresses, because basically they moved an application from one place to another, the application keeps running, and nobody is actually logging into the VMs and looking at the IP addresses; they just expect it to work. So I understand the question, but I'm not sure if I answered it.

Any more questions? Yes, Neil. Okay, thank you. So, this is actually something that we are currently working on for production. We are doing it in open source, but we are also basing a commercial product on it.
We have several customers with whom we are currently in a late stage of proof of concept and prototyping, and we also use it in the Huawei public cloud, HWS, to some extent. Of course, Huawei itself has a compelling use case for this kind of solution: they run their own e-commerce site, Vmall, where they sell their mobile devices. When they have a big sale, there are certain dates where the traffic is too large and they cannot meet the surge on their own public cloud, so they spin up VMs in the Alibaba cloud. So they basically need this for themselves as well as for other customers.

Question? Okay, so I'll repeat the question; let me try to rephrase it. Migrating the application, not just a VM, right? I mentioned it in the use cases; this is what we want. We want to migrate an application and not just a VM, because you need to migrate it with all its ancillary configuration: the security groups, the IP address, the firewall configuration, everything, plus the attached volumes and the image. Because we are moving it with its IP address and making sure everything basically works the same way, and putting aside for a minute the fact that you may now have traffic going over a path you didn't want, this basically solves most of your problems: the VM has the same IP address, it just runs in a different place, on the same L2, so it can still work the same way; it doesn't even know that it moved, it may just see some degradation of performance. But part of what we are doing here is actually Tricircle, and also the data protection, which is a different project, is handling the protection of the application itself. I don't want to touch on it here, because there is a separate presentation for it, the one my colleague is going to give on Thursday about data protection, which is going to touch on all these
ancillary pieces and the ability to protect the application, not just the VM.

Any more questions? Beer! Thank you.