Hi, everyone. Thanks for showing up, even though it's the last day of the summit and this is probably the last or second-to-last session you're catching, so we appreciate you taking the time. My name is Dmitry Novakovsky, I'm a solutions architect at Mirantis EMEA. Here with me today are my colleague Evgenia Schumacher, who is our partner integrations lead, and my ex-colleague Maxim Datskovsky, who is a solutions engineer at HP Japan. What we will do today is a sort of follow-up to the session which Evgenia and I did in Paris last year. We'll talk about three things. Thing number one: we will describe the overall idea of multi-hypervisor OpenStack, when it might be useful, how it looks, and why people are actually doing it. Then I will hand over to Evgenia, and she will show you the reference architecture for a multi-hypervisor OpenStack installation which we have been promoting and using with the Mirantis product called Mirantis OpenStack. We will also cover some aspects we discussed in Paris: there are certain limitations on what you can do with multi-hypervisor OpenStack, and certain constraints on what technologies you have to use inside your environment to make it work. And eventually we will show you a beautiful pre-recorded demo. Yes, it's OpenStack Summit, so I prefer to pre-record things: sometimes you cannot get to your environment, or whatever. So we have it pre-recorded, and Maxim will provide the commentary. I did the introductions already, so before we actually begin, I want to ask a couple of questions. First, can you please raise your hand if you've seen our talk in Paris six months ago? Yeah, hi. Second, can you please raise your hand if you have something other than KVM in your environment, such as vSphere or Oracle VM? Wow.
Yeah, that's nice. And can you please raise your hand if you've tried to integrate it with OpenStack, or want to integrate it with OpenStack? Wow, amazing. Our marketing did a good job, because in Paris it was more like: we don't care. So, multi-hypervisor OpenStack: what and why. The idea is pretty obvious to anyone who has been using OpenStack for some amount of time. We have the component called Nova, the component called Cinder, and the component called Neutron, which manage compute resources, storage resources and networking resources. The idea of multi-hypervisor OpenStack is to manage more than one type of hypervisor underneath a single Nova installation, and obviously to provide some facilities for tenants to distinguish which hypervisor type they want to schedule their virtual machine to. The use cases, I mean, there are tons of them, but these are the ones I've seen talking to customers about this over the last year, the motivations why people are trying to do it. First of all is the one which, again, our marketing powerfully enabled, called enabling developers, right?
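The tenant-facing facility mentioned above, letting users pick a hypervisor type, is usually implemented with host aggregates exposed as availability zones. A minimal sketch, with made-up aggregate and host names, written to a script file rather than run against a live cloud:

```shell
# Sketch of separating hypervisor types for tenants; aggregate, AZ and
# host names here are illustrative, not from the talk.
cat <<'EOF' > multi-hv-azs.sh
#!/bin/sh
# One aggregate per hypervisor type, each exposed as its own availability zone
nova aggregate-create vmware-aggregate vcenter-az
nova aggregate-create kvm-aggregate    kvm-az
# The vCenter proxy node joins one aggregate, plain KVM computes the other
nova aggregate-add-host vmware-aggregate vcenter-compute-1
nova aggregate-add-host kvm-aggregate    kvm-compute-1
EOF
chmod +x multi-hv-azs.sh
```

Tenants then simply see two availability zones in Horizon or the CLI and pick one at boot time.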
So many people today want to start trying OpenStack. They want an AWS-like experience in-house, despite running VMware without vCAC or vCloud, and they want their developers to be able, at least in test environments, to leverage programmatic interfaces for building the infrastructure for the applications they develop. In such a case the idea of running OpenStack on top of VMware actually makes sense, because if you have an existing VMware infrastructure, you can get OpenStack APIs running there, and the infrastructure becomes completely abstracted behind OpenStack from the developers in your company. That's actually the story which VMware is heavily promoting with this VIO product: the overall idea there is that you get the OpenStack API on top of your existing VMware infrastructure, and you can start doing these DevOps, agile kinds of things. The second idea is to avoid vendor lock-in. This is a typical scenario specifically for multi-hypervisor deployments, and the idea here is that when you run your compute, or even your storage infrastructure, from a single vendor, at some point you might find yourself in a situation where scaling further at this vendor's pricing becomes steep. Vendor A may tell you: okay, now you have to pay me twice as much this year. We all know who that vendor was, having done this a couple of times. The idea of multi-hypervisor OpenStack enables companies to be in control of what infrastructure they will use underneath the OpenStack API, and to say: Mr. Vendor A, I'm happy that you want to make more money on me, but my developers are using the OpenStack API, my portal is integrated via OpenStack.
So if I want to scale further without you, I will just add KVM nodes on commodity hardware, and it will just work, and nobody will notice the difference. Of course they will, partly; that's a further discussion. Doing heterogeneous infrastructure in such a way enables the company, or the infrastructure owner, to be in control of what decisions will be made, what infrastructure will be procured and exposed behind the OpenStack APIs. The third use case, which I also see people talking about and deploying multi-hypervisor OpenStack for quite frequently, is to support particular application needs, because there are still applications which, in Paris, we called classic. We didn't want to call them enterprise or legacy, so we were calling them classic. I will actually speak about them a little bit later, but these are the applications which are what we call non-cloud workloads, though we can also call them traditional workloads. The picture here, and you can download the slides later, basically summarizes what a traditional workload is and what a cloud workload is, and the point I like here the most is the SLA. The SLA of an application which is cloud-ready does not depend on the availability of a particular VM. The SLA of an application which is traditional, or legacy, quite often depends on a particular VM being not dead: being running, being responsive, making sure its job is done. So that's an indicator of whether you can easily lose a VM or not. A typical web application, three-tier, right: front end, back end, workers using a message queue, and a replicated database layer on the back.
That's a cloud-ready workload. An application which expects the infrastructure to keep it up and running, or to restart it very fast if the VM died, is not a cloud-ready application. What a multi-hypervisor OpenStack cloud enables us to do here is use features from VMware, such as HA, such as Fault Tolerance, and apply these patterns to the workloads which are running on OpenStack. You create a virtual machine using the OpenStack API or the OpenStack UI, Horizon, and with a number of tricks, which we will show later, you can place it on a vSphere cluster which is HA-enabled. In such a situation, if an ESXi host in the HA cluster dies, the virtual machine will be restored using the tools which VMware has beautifully developed over the last 15 years to keep workloads alive. So this is one of the ways to bring your traditional, legacy workloads to OpenStack faster: just connect OpenStack to an infrastructure which is able to support the VM's SLA, unlike a standard KVM deployment. In case any one of you is interested in getting a good summary of cloud-ready applications versus non-cloud-ready applications, and the design patterns for them,
I highly recommend this book, Cloud Architecture Patterns by Bill Wilder. I was actually submitting a speaking proposal about it for the summit, but nobody wanted to see it, so you'll have to read it on your own; otherwise I would have given you a 40-minute summary. So yeah, I highly recommend this one if you want to be able to clearly contrast and compare, to consult your customer or your user on how to develop an application in a cloud-ready manner. So, the possible combinations. The typical configurations which we see in discussions of multi-hypervisor OpenStack are these four, and, well, some of us are not very happy about the amount of Docker talk in this summit and in these keynotes, so I will start from the end: we can combine KVM and containers in the same OpenStack cloud, but we will not be talking about it here, because you have all probably already seen your Docker talks at this summit; there have been just too many of them. The typical use case here is KVM plus vSphere; that's the configuration we will focus on for the rest of this presentation, because vSphere, as I was saying, enables many HA features, many things which enterprise applications need, and thanks to the effort which VMware has been making over the last two years, they have pretty decent drivers and pretty consistent configurations upstream, so that vendors like Mirantis, like Canonical, like Red Hat can easily leverage this. KVM plus Xen: over the last year and a half at Mirantis I've seen one customer we've been discussing this with. They had a Xen workload, a workload optimized for Xen; they were actually leveraging PCI passthrough on the Xen hypervisor, passing cores from an NVIDIA K2 directly into the VM. Quite cool stuff.
It's really rendering inside the VM. So we were discussing the configuration of doing KVM plus Xen with them; we ended up doing two separate clouds, because Xen was tricky at that point to combine with Neutron. So we had to do a separate cloud with Neutron plus KVM, and a separate cloud with Xen plus Nova Network. The third use case, which we also sometimes see in the field, is combining KVM plus Oracle VM. The motivation here is quite simple: Oracle VM as a hypervisor is a solution for getting a certified virtualized environment for Oracle applications, right? So if you're running an OpenStack cloud, if you're transitioning your organization into using OpenStack as the main cloud management platform, and you have Oracle applications, many of them are perfectly capable of running in a virtualized environment, and Oracle even promotes that, using Oracle VM. If you want to preserve the certified state, and make sure that when you call Oracle for support they will not be unhappy that you are running something on KVM, or even on vSphere, here is the option: you can combine Oracle VM and KVM in the same OpenStack cloud and have a separate availability zone based on Oracle VM only. So yeah, and no containers this time, sorry folks. With this, I would like to hand over to Evgenia to explain the typical reference architecture of OpenStack plus VMware based installations that we've been doing and supporting in our product over the last year and a half. Thank you, Dmitry. Can you hear me, guys? Good. Yeah, as Dmitry mentioned, we are going to focus mostly on VMware plus KVM multi-hypervisor environments with OpenStack running on top of them, and the reason is pretty simple: most of the enterprise customers who actually decide to start adopting OpenStack are big VMware shops.
They are big VMware shops So what they want to do is a first step is kind of get Stay with their VMware Environment and just to deploy open-stack on top of it and manage everything through open-stack APIs and then start adding KVM Resources there. So let's take a look at this nice picture that actually was stolen from one of VMware presentations and just kind of To just take a look and and see what VMware did for open-stack which drivers and plugins were written What and how we can use so first of all I start with Nova. We have a V center sender driver as you know guys they used to be your success driver now It's only the center. So the way it works the Nova compute for vCenter is simply a Proxy service that translates open-stack API calls into vCenter API's calls and it gets connected not to a particularly success but to vCenter cluster so you can get all the things that vCenter gives to you like all the HA and all that stuff, right? So for cinder and glance there is a VMDK driver So you if you want to use the center you have no other options You have to use VMDK driver just to be able to have all those benefits from from from vCenter in its functionality and the most interesting part. I guess it's Networking so here you see at this picture you see neutron with NSX driver though It's like NSX plug-in you guys probably saw a presentation that Marcos did on Monday about new NSX plug-in that Vmware team released so now The picture will look a bit different, but still there is a plug-in that Works well with all other stack of VMWare solutions. So that's the kind of the whole story around Open-stack or VMWare products in Open-stack Right and let's get moved to the way Multi-hyper-wise environment usually looks like that's a very like high-level picture of like what we usually What our customers usually ask for it what usually we suggest them So we can say that there are there are two availability zones in one. 
So a user can decide in which availability zone he wants to provision the VM, depending on the particular use case. If a client runs a more production-type workload, he will definitely go to vCenter; if it's something more like dev and testing, it might go to KVM. And as Dmitry already mentioned, it all depends on the types of applications you're going to run on your VMs. So that's the very high-level picture you guys see. And what does the user get from VMware? For virtual machines running on vCenter, it's VMware HA, DRS, vMotion, and in case a customer is using NSX, and you have NSX in your company, which is pretty pricey, so guys, I assume you are very wealthy, well, if you have it, you can use all the benefits of a commercial SDN solution, right? And where's the catch? So I guess I can start, and Dmitry, you can interrupt as many times as you want. Okay, good. As you might see in this picture, you don't see any networking things.
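Choosing an availability zone at boot time, as described above, is a single flag on the client side; a sketch with placeholder AZ, image and flavor names, again written to a script file rather than run against a live cloud:

```shell
# Illustrative boot commands; AZ names, images and flavors are placeholders.
cat <<'EOF' > boot-two-azs.sh
#!/bin/sh
# Production-type workload goes to the vCenter-backed zone
nova boot --availability-zone vcenter-az --image ubuntu --flavor m1.small vmware-vm
# Dev/test workload goes to the KVM-backed zone
nova boot --availability-zone kvm-az     --image cirros --flavor m1.tiny  kvm-vm
EOF
chmod +x boot-two-azs.sh
```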
It's only about how the compute resources are organized, right? And the catch is always in networking. If we do a sort of retrospective and look at what we were discussing during our talk in Paris, we said that the only viable option we had at that moment, that we could use in environments with both vCenter and KVM running, was NSX, in particular NSX-MH, which is the multi-hypervisor version of NSX. You probably know there is also NSX-V, which is a version of NSX specific to vCenter only. So that was the only option with all the enterprise-level features that a commercial SDN provides, and with a plugin written for OpenStack supporting all those features. It still works like that, but not everyone wants to invest a lot into a very pricey solution, and we are all OpenStackers, so what we want is a sort of open-source option, because we want the freedom to decide whether we pay or not. This whole slide is based on our own experience at Mirantis. So the next line is Contrail. Why Contrail? There are two versions of Contrail, you know, guys: OpenContrail, the open-source version of it, and the Juniper version of Contrail, which has commercial support. Right now it's not very easy to get the same level of functionality with Contrail that NSX provides. Contrail so far supports ESXi only, not vCenter, so you won't be able to get the HA and all these enterprise features that people like when they use VMware. Contrail is not there yet, unfortunately, but we assume that in a couple of months, or maybe half a year, nine months, we will see a lot of traction, because there is an open-source community there and people are really interested. And Nova Network: well, it's always an option.
You can always use Nova Network instead of Neutron, and that's what we actually do for some of our customers now; we used to do that, and we still have this option even now in our distro, I don't want to over-promote Mirantis OpenStack here, but yes, when it comes to multi-hypervisor support, Nova Network is one of the most viable options that you have. It works, but it has its own limitations; it's not Neutron, so we can't give people everything they need. The other thing that a lot of people have been asking about, and I know Gabriel, for example, back in October we had a really interesting conversation in Budapest about it: hey, what about DVS, is it working, is there something? That's, I guess, one of the hottest topics, because having a driver, or a plugin, I would say an ML2 driver, in Neutron for the distributed virtual switch which vCenter uses is the kind of thing that everybody is looking for, because if you have it, you can use Open vSwitch for KVM resources and the distributed vSwitch for vCenter. So what's the state of the art now? Actually, there is a number of different DVS drivers available, and surprisingly, you can just count them: one, two, three... seven of them, with different levels of implementation. Unfortunately, the blueprint that was created a long time ago in the Neutron project seems abandoned, because nobody wants to own it; a lot of people are trying to contribute, but it always gets stuck. There are three types of implementation, each with their own limitations, and all of the code is available on GitHub, so you guys can go ahead and check it. We actually played with one of the implementations; it worked, with some limitations, of course. The VMware team provides a DVS driver as a part of the NSX plugin.
I guess now it's all on Stackforge. And just in case, folks, if you get the slides later: what is underlined is actually a link to GitHub for each particular driver, so you can go ahead and use any of these, play with them if you're interested. Use ours, yeah. Well, maybe it's not a very community way of doing things, but we at Mirantis realized that in order to move things forward we needed to implement another version of a DVS ML2 driver for vCenter. So we actually implemented one, and we put it on Stackforge as well; that's the one we are going to use, and later today Maxim will show you the demo with this driver installed and configured. And also, last but not least, the HP Helion OVSvApp option. I will let Maxim comment on that, so if you guys are interested, you can come up after the talk and Maxim will answer your questions. So all these options are available; they are not part of Neutron, but they are on Stackforge, which means they are open. So go ahead, feel free to use them. The key here is that you don't get any of these, other than the VIO driver, out of the box. If you install VIO, you get the option to configure your driver out of the box. If you install Mirantis OpenStack today, we don't yet ship the driver out of the box, but we will in the next couple of months. If you want to use it today, it's on Stackforge, and it's tested against Mirantis OpenStack, so with Mirantis OpenStack you can already get it working, just in a semi-automated way. Thank you, and I guess my part is done, so I will pass the ball to Maxim to show you... To Dmitry. Oh, sorry, Dmitry starts. Okay, you guys are going to figure it out. Anyway, if you have any questions about my part, I'll be happy to answer after we are done. Thanks. Yeah, we'll have some Q&A time at the end.
So what are we going to do in the demo today? On the screen right now you see the environment which we built and recorded the demo on. Basically, it's a couple of hypervisors; technically there will be three, Maxim will explain it. We have two vCenter clusters and one isolated KVM node, all connected to the same switching. The ESXi nodes run DVS, the KVM node runs OVS. We have Mirantis OpenStack 6.1, a development release, running on top. We have configured two mechanism drivers underneath the ML2 plugin for Neutron: the DVS one from Mirantis and the OVS one from the community. We have also put an HAProxy load balancer in front of them, using the standard OpenStack Load Balancer as a Service. We have created two VMs, each of them running on a different hypervisor, and we can put them in the load-balancing pool; Maxim will show it slightly later. The demo scenario will be like this: we will create two VMs. Actually, they are already created, so we will not be going through that. Both VMs will sit in different availability zones but in the same subnet, so one VM will go to the VMware availability zone and one VM will go to the KVM availability zone, but they will share the same subnet; it's like they share a VLAN. We will show that both VMs can ping each other, we will show that the VMs are members of the same load-balancing pool, and we have configured both VMs to run HTTP and expose a simple web page which shows the host name of the VM. So what we will do is run a script which polls the load balancer, and you will see that both VMs are responding and acting as members of the same load-balancing pool. Yeah, I was told that talks at summits are not good talks unless you show some config files, so here is the config file of my ML2 plugin for Neutron.
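A reconstruction of what such an ML2 fragment typically looks like, assuming the Mirantis vmware_dvs mechanism driver; the section and option names may differ between driver versions, and all values here are placeholders:

```shell
# Sketch of the ml2_conf.ini shown on the slide; exact option names vary
# between driver versions, and all values are placeholders.
cat <<'EOF' > ml2_conf.ini
[ml2]
type_drivers         = vlan
tenant_network_types = vlan
# Two mechanism drivers side by side: DVS for ESXi, Open vSwitch for KVM
mechanism_drivers    = vmware_dvs,openvswitch

[ml2_vmware]
# The DVS driver talks to vCenter with the usual host/credentials triple
vsphere_hostname = vcenter.example.local
vsphere_login    = administrator@vsphere.local
vsphere_password = secret
EOF
```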
The only interesting part here is the line mechanism_drivers, which has two drivers enabled, vmware_dvs and openvswitch, and the section ml2_vmware, where you can see that my DVS driver will be talking to vCenter using the standard approach with host name, API credentials and so on. Yeah, by the way, the limitation which we have in the DVS driver today, and which you need to be aware of if you are going to start experimenting with it, is that we haven't yet implemented support for security groups. There are different approaches to how this can be done, given how it works in VMware by default; we are evaluating different ones. We don't want to hairpin traffic through a VM running on the ESXi cluster that enforces standard iptables-based rules, so we are looking for a better way to do security groups. Right now, if you install this driver, security groups will work on OVS on the KVM nodes; they will not work on DVS on the ESXi nodes. Just track the development; we will get this done in the next couple of months. So with this I've shown the content, and I'm going to run the video, which Maxim will provide commentary to. Nobody sees the video yet, right? No, everyone sees the video. Very good, go ahead. Okay, we have two different types of hypervisors here: vCenter with two nodes, and KVM with one node. Let's check what instances we have. The first one is a VMware instance running in the vCenter availability zone, and another instance is running in the KVM availability zone. They have the same subnet shared between them, and floating IPs to reach them. So let's log in to the KVM instance, and we will check that we can ping our VMware instance from the KVM instance.
It's okay. And let's check the details: we run a traceroute to check that we don't pass through any router between them, so they're truly on the same subnet. Also, we created a Load Balancer as a Service and put our nodes in it; the load balancer uses HAProxy with HTTP and the round-robin algorithm. Okay, let's get the IP address of our load balancer. Oh yeah, let's check that we have our virtual machines inside the load balancer pool. Okay, we've got our IP address, and let's go to an external virtual machine to check that we can reach our load balancer. Okay, we have connectivity to it. Then we can run a script to poll our load balancer. Okay, so we have responses from our two back ends, one from KVM and another from VMware, and with the round-robin algorithm they respond one after another. Okay. Let me just make sure everyone understands what happens here: each response is from a different virtual machine. So, for example, if this was the web tier of your application, you would easily be able to have front ends spread across KVM nodes and ESXi nodes, all sitting in the same subnet, talking to each other, and exposed under the same load balancer.
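The LBaaS v1 setup and the polling script from the demo can be sketched like this; the subnet name, member addresses and VIP address are placeholders, and the commands are written to a script file rather than run against a live cloud:

```shell
# Rough reconstruction of the demo's LBaaS v1 setup; subnet, member IPs
# and the VIP address are placeholders.
cat <<'EOF' > lbaas-demo.sh
#!/bin/sh
neutron lb-pool-create --lb-method ROUND_ROBIN --name web-pool \
        --protocol HTTP --subnet-id demo-subnet
neutron lb-member-create --address 10.20.0.11 --protocol-port 80 web-pool  # KVM VM
neutron lb-member-create --address 10.20.0.12 --protocol-port 80 web-pool  # vSphere VM
neutron lb-vip-create --name web-vip --protocol --protocol-port 80 \
        --protocol HTTP --subnet-id demo-subnet web-pool
# Poll the VIP; with round robin the two hostnames should alternate
for i in 1 2 3 4 5 6; do curl -s http://10.20.0.100/; done
EOF
chmod +x lbaas-demo.sh
```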
So that's heterogeneous; that's use case number two, right: avoiding vendor lock-in and providing heterogeneous infrastructure. Yeah, so that's it with the demo, and what we will do now is come back to the slides and do some summary. So, conclusions. Multi-hypervisor OpenStack today is possible and useful. You can do it with different hypervisors combined with KVM, and you can actually use it to do a number of things: enable your developers, reduce the amount of money you spend on your proprietary infrastructure, and also cater for the different needs of the applications that you have, so you can actually speed up the adoption of OpenStack in your organization. Since Paris, I'm really glad to see that a number of companies, even though we did it in a not very structured way, have actually contributed and made an effort to enable proper DVS programming from Neutron. So today there is a number of driver options to try, and I will show on the next slide, for those who want to start trying this, that it's not like it was six months ago, when the only way to run multi-hypervisor with ESXi was to use NSX, the multi-hypervisor NSX. It's not the case anymore; now there are open-source drivers. By the Tokyo summit, I believe we will have two more gaps closed. We will have the option to run this with Contrail, which is quite a nice SDN, directly competing with NSX, but which is lacking some features today, one of them being multi-hypervisor support. And the Mirantis driver will solve security groups by that time, and I believe we will also start shipping it with the Mirantis OpenStack distribution, so it will be a completely out-of-the-box solution. How do you get all this goodness today?
Three options. Either get Mirantis OpenStack, you can download it with a 30-day trial from our site, and use the guides on GitHub for how to install the DVS driver on top. You can also try VIO; VIO is very nice, I mean, I've seen the demos, the design work, the UX of the installation is very nice. If you plan to run not actually multi-hypervisor, but OpenStack on top of VMware, that's a great option. It's not really multi-hypervisor, but it will cater for a couple of your use cases as well: enabling developers and getting OpenStack running on top of VMware. And of course, if you feel adventurous, you can go do-it-yourself. All the drivers are out there for Neutron; you can go ahead and configure them, usually it's a couple of lines. Just make sure that you configure not only Nova and Neutron, but also Cinder and Glance, following the guides. That's it with the slides; let's open up for questions. Do we still have time? I believe we have about seven minutes. So, any questions? Everyone wants to go home? Well, in terms of Nova, it's actually hard for me to comment; I haven't heard of any of the drivers, at least of those which I've been listing here from the VMware side, being moved out. The significance is obvious: if you had configured your OpenStack cloud to use the old ESXi driver a couple of years ago, and now it's deprecated, you would have a migration effort to transition to the new driver from VMware. I don't expect the VMware driver to get moved out of OpenStack, given the level of contribution of the company, so I don't think it's a risk to introduce it today. In terms of Neutron, well, Neutron is more dynamic, but again, the ML2 abstraction layer is there and I don't think it will go anywhere.
So again, if you invest into building out a cloud using the Mirantis driver today, for example, there is a certain risk, because we haven't yet upstreamed the driver, so yes, you have only us to depend on for keeping it running. But it's open source, it's on GitHub, so worst case, if we go somewhere and decide not to maintain it anymore, you can maintain it yourself. The API is quite stable over there, so it's not a huge effort to bring it to a new version of OpenStack. That's a very good question. Evgenia, what about IPv6 support? No such plans so far. It's a very good question, but I guess the same question was actually asked during the session that the VMware team did, and the answer was: not yet, we don't even look at it yet. Well, business-wise, obviously, for every company the investment in developing this driver or that driver is checked against traction on the market, right? So today we have some deployments which we are doing in such a manner; later, when people ask for IPv6 and are ready to pay for it, we will do it. Okay, okay.