Good afternoon, everyone. I think we can start the session. Before we begin, just for technical reasons, we won't be able to play the whole slide deck in full-screen mode due to some display issues, but we'll go through everything we planned to discuss. Good afternoon, how is everyone doing? Good. Did you have your coffee and everything for the evening? So you're ready for multi-hypervisor? Thank you for selecting this session on managing a multi-hypervisor OpenStack cloud with a single virtual network. We submitted this session because we saw a need for it, and the response we got back from you confirmed it was worth having here and discussing further.

Before I go into the session, let me introduce myself. My name is Dheeraj, and I work for PLUMgrid. We are an SDN and NFV solution provider for OpenStack clouds. We are, I would say, a market leader for SDN and OpenStack networking for SaaS-type platform service providers, retail, and finance, and these have been very unique cases: people started with default OpenStack and the default Neutron options, but then, as they built out their clouds and looked at multi-hypervisor situations, some of them went for a more mature, more stable, more differentiated OpenStack solution. We provide PLUMgrid ONS as that SDN solution for OpenStack clouds. With me I have Omar, my colleague at PLUMgrid. He heads sales engineering, leads people through these deployments, and he will walk you through how we deliver the solution.

Moving forward: in OpenStack we have seen that people have different hypervisor options out there. They have Xen, they have Hyper-V, they have ESXi, and then KVM. Generally with ESXi we have seen that, as people deploy their clouds, they want to bring their traditional ESXi workloads and VMs into their OpenStack environment with minimal disruption while they invest in these new types of cloud workloads. So a lot of people have traditional ESXi workloads; they are building their clouds, they have many cloud options, and as they move towards cloud workloads, which are predominantly KVM, they want to connect the ESXi workloads with KVM. For that, networking has to enable this transition, where people keep leveraging their traditional workloads for their apps but are able to migrate them towards the cloud side of the story.

As we talked more to enterprise IT and DevOps teams, they told us they are adopting cloud quite a bit in their private data centers, and they keep coming to us asking how their old, traditional ESXi workloads can fit into this OpenStack environment and then move forward. Let me share this slide from the SDxCentral survey that came out recently, I think a couple of weeks back. What we see is that, across different types of clouds, people are using various hypervisors, and ESXi is one of the largest hypervisors out there in clouds, right?
There is no denying it, and the reason people use it is that they want access to and utilization of their traditional ESXi workloads, and they want those to be an integral part of their cloud strategy. In conversations around any type of cloud management platform, whether it is OpenStack, vRealize, or anything else, people want to have the ESXi option for their old workloads; the question is how it connects into the new cloud management platforms.

They gave us a few key reasons why they think VMware ESXi is important, because you could tell them: just move over to KVM, put everything there, it should work; there will be a little bit of work to be done, but it should work out for you. They have given three very key reasons why they want to keep using traditional ESXi workloads and then have a path to migrate off ESXi as more and more of their workloads move to KVM.

The first one is app workload optimization. What they have mentioned most of the time is that, for a single app they are developing, if there is a component for which they want to leverage ESXi, they cannot go back and port everything over to KVM or another hypervisor. They want to use ESXi; they have optimized their physical infrastructure, or fine-tuned their whole ESXi environment, around that particular subcomponent of the workload, and they want to keep it that way and use it for their apps. In the same app they also have KVM: for their web server or app server they are using KVM, and they want to connect it to a database server that has been running on ESXi from the get-go, so they want to connect both of them.

The second reason they shared is in-house hypervisor development and deployment knowledge. For the last ten or fifteen years they have worked extensively with ESXi, to the point that people really know it. They have been trained, they have gone to VMworld, they have learned the new technologies; for some of them, their virtualization journey started with ESXi. They want to keep and leverage that knowledge base. The training cost of moving people off VMware onto something new, just for the traditional workloads, is too much, so they want to keep it and leverage it as much as possible.

The last one, which has been shared quite a bit, is multi-year licensing agreements. They have purchased licensing for ESXi, they are in the process of getting new licenses or leveraging the old ones, and they want to keep their cost down. These multi-year licenses basically force them, or incentivize them, I would say, to keep using their ESXi environment, and for the new cloud, when they look at these agreements, they ask how much of the cost already sunk into their data center they can leverage for the new cloud strategies and implementations.

What happens here is that, as they look backward, they see that virtualization made them successful, but it came with a vendor lock-in type of situation: VMware ESXi locked them into ESXi across the board, and then they had to stick with VMware for the cloud as well. For the cloud they have different options, such as vCloud and OpenStack, and they don't want to go with vRealize or vCloud or whatever proprietary cloud management platforms are out there.
What they are looking at is using more and more OpenStack-type clouds. So as not to repeat the mistake, or, I'll say, not to repeat the history of vendor lock-in with cloud management platforms, they want to use OpenStack. But they still want to use their traditional ESXi workloads with that cloud management platform, so that they can keep leveraging their existing knowledge and the app workload optimizations they have already done on the traditional side. As they move on, they have KVM, they have containers, they have bare metal, all coming together, and they want to move onto that strategy so that, as more and more workloads and applications are built for the new clouds, they can transition seamlessly.

The survey brings out the same point about OpenStack. As you can see, OpenStack is by far the most popular cloud management platform in the survey: 58% of people said they want to use OpenStack as their cloud management platform and are building strategies around it. VMware vRealize is at 48% as a choice, most probably for the reason from the previous slide: they have built proprietary clouds around it. One other point that was brought up is about multiple cloud management platforms: as you can see, people don't want to manage several cloud management platforms. This reinforces the idea that OpenStack is the most popular cloud management platform, that people are investing more in it, and that they want to converge their clouds on OpenStack, while still keeping access to their traditional, existing workloads alongside the new apps they are developing for their cloud.

So as we look into this, people are looking at OpenStack. Then the question was: in the cloud, what has been the biggest problem? And what came out is that networking is the biggest cloud problem, the most immature component within a cloud management platform. This holds true for VMware, for OpenStack, and for everything else, and I think the mixture of having two different types of hypervisors working within a single cloud management platform like OpenStack contributes to this 76% number, right?
Most probably that 76% number would be a little lower if the networking of multiple hypervisors were resolved; it would definitely bring the number down quite a bit. The second problem was ease of management, at 15%. When I look back two or three years within the OpenStack community, ease of management was also a very high percentage: people had a lot of issues rolling out OpenStack clouds, managing them, orchestrating them, spinning up projects, and so on. Today, with the work of outstanding distro partners like Rackspace, Mirantis, RDO, RHOS, and the community itself, the ease-of-management problem has been dropping substantially, and I think the same will happen for networking too.

So, on networking: why is it a problem, and how do we see it in an OpenStack cloud with multiple hypervisors? We went to the OpenStack multi-hypervisor examples. If you go to the deployment guide on OpenStack.org, this is the diagram they give for how you can orchestrate an OpenStack control plane for a multi-cloud type of environment, and it is multi-cloud because you have two types of hypervisors running there: one is ESXi, the other is KVM. Let me walk you through this diagram and how they position it. On the left-hand side you have a proprietary cloud, which has a proprietary physical network, proprietary storage on top of it, and for compute you are running ESXi, which is a pretty standard model for the traditional workloads. Then you have to put in another proprietary virtual network switch that works with ESXi, and we all know the best proprietary virtual network switch that works with ESXi is VMware's own networking solution. That plugs into the OpenStack control plane, and that's how you manage it. But what you can see here is that you have built an entire separate cloud under the OpenStack control plane. Basically you are now building two clouds: one is the proprietary cloud, and on the right-hand side is the cloud that is promoted and most adopted within OpenStack, the open cloud. In the open cloud you have a physical network, x86-type servers, and KVM, an open source hypervisor.

Sure, it wasn't working for me, so let's see if it works now. I'll try again; that didn't work either. Let's try this way. There are a few seats available here too if you want. Thank you. In the back, can everyone see it now? It crashed again. Okay, I think we won't do troubleshooting here; we'll go with this one, the resolution suggested from the back. Okay, back to square one.

Continuing, this is a new chapter: OpenStack multi-hypervisor examples. You go to OpenStack.org, you go to the manual, and they have specific examples; this is one of them, and most probably one of the oldest they have. From the start of OpenStack itself I remember these questions. People used to ask us quite a bit: "I have ESXi workloads, I am developing my cloud strategy, I'm developing all these applications for cloud, but I need to use ESXi." So this is the diagram that came out, and this is exactly the same diagram.
We just recreated it in PowerPoint. What you can see here is that there is a proprietary cloud under the OpenStack control plane. It is ESXi-based, it has proprietary storage, it has a proprietary network, and then the proprietary virtual network switch, which is most probably where they think about software-defined networking, and it's based on NSX. That talks to the control plane. On the other side you have the open cloud, which is based on KVM, has multiple storage options, and has a physical network. Everyone desires to go towards the right side; they want to invest more in the right side, but they are always looking at what already exists for them in their data centers and how they can make it work with the control plane.

What we have seen is that, generally, people have to build two clouds: one entire stack on the VMware side of the story, and on the other side the open cloud. People have always thought about how they can make their lives easier, and a few things can be done here. One is the networking: you have a proprietary network switch and you have an open switch, so you can combine those and have one networking layer. Then at the top layer they also ask us: can I make it faster, can I make it automated? So they are looking at templates too. They don't want to learn two template policies, one going towards the proprietary cloud and one going towards the open cloud. They want a single template from which they can orchestrate the networking and all the configuration and orchestration of the IaaS layer for their whole cloud, irrespective of which hypervisor they are using.

With PLUMgrid ONS, people have been able to combine both clouds and make it one open cloud. This is where the Open Networking Suite, our PLUMgrid solution, comes into the picture. People have deployed it across the board, doing the networking for the ESXi part of the story and also putting it on the KVM side: you build one networking layer that takes care of both hypervisors, use the policy- and template-driven orchestration provided by OpenStack (basically Heat), and automate the entire process. You don't need to worry about learning two types of networking, two types of cloud policies, two types of orchestration. A single OpenStack, and through OpenStack you manage everything.

Before I jump into how it is done, let me explain a little about PLUMgrid ONS, and why people decided on ONS rather than investing on both sides. PLUMgrid ONS is a software-defined networking solution with no hardware dependencies: we render the full network inside the compute nodes. You have your compute nodes, your physical infrastructure, and your entire network, your full network, is rendered on the compute nodes where you are already running the hypervisors. In traditional solutions you have a switch or a router running in the compute node, but then you have your layer 3 to layer 7 functions running up in switches or northbound, right?
That creates the traditional situation where you have only very low-level functionality running in the compute nodes. As you scale and make your clouds bigger, all your traffic has to go through some DHCP, some DNS, a firewall, a NAT, and all of that has to go up north, which creates scalability issues, performance issues, and those types of problems. In the case of PLUMgrid ONS, all of that is rendered in your compute nodes; it's as simple as that. No traffic leaves your compute nodes to go up to centralized appliances and come back. What you are able to do is create a distributed data plane, and you get scalability and performance: as you build your cloud and add more nodes, these virtual network functions create your network topologies within the nodes, and you keep scaling with that. With ONS you get the PLUMgrid network function library out of the box, with router, DHCP, DNS, and NAT, and we also provide the ability to insert third-party virtual network functions, like firewalls and load balancers, into the network topologies you are building in software.

So as you are building your software-defined network, you are building the topology for your app. What you do is build your virtual domains. What is a virtual domain? It is a logical data center that we build for your network. An admin or a user can define their virtual domain, which connects things however they have designed their topology. In the case on the left side, you can see a person who wants bridges and routers and then a load balancer on the way into the cloud; anyone with access to OpenStack can build that topology and be done with it. Someone else wants a different topology that has DHCP or a firewall; they can build that virtual domain. Further, you can replicate and clone these virtual domains as you scale to more tenants: if tenant A is similar to tenant F, you build the same virtual domain for them, and you can keep replicating it further from there.

For us, the underlying technology here is based on a Linux Foundation collaborative project named IO Visor. Its importance is that, as I mentioned, we render your full network on your compute nodes, and that is because of IO Visor. We upstreamed our core code to the Linux Foundation as part of IO Visor; it is essentially a universal virtual machine running in the Linux kernel at runtime. As you build your network topology, it is built at runtime, so it doesn't create any static platform problems. With IO Visor you can build your network at runtime as you build the virtual domains, tear them down, build them again, and scale them as you move forward. And because it runs within the Linux kernel, you don't have the performance penalty of crossing between kernel space and user space; all of this is processed within kernel space, so you get the efficient CPU and bare-metal performance you are looking for in the compute nodes.
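Before we get to the demo, one way to picture a virtual domain driven from OpenStack is as a small Heat stack that you instantiate once per tenant and then clone. This is only a minimal sketch, assuming standard Neutron resources; the resource names and CIDR are made up for illustration and are not PLUMgrid's actual templates.

```yaml
heat_template_version: 2015-04-30
# Minimal sketch: one tenant "virtual domain" expressed as a Heat stack.
# Names and CIDR are illustrative assumptions, not taken from the talk.

parameters:
  tenant_cidr:
    type: string
    default: 10.10.100.0/24

resources:
  tenant_net:
    type: OS::Neutron::Net

  tenant_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: tenant_net }
      cidr: { get_param: tenant_cidr }

  tenant_router:
    type: OS::Neutron::Router

  router_if:
    type: OS::Neutron::RouterInterface
    properties:
      router: { get_resource: tenant_router }
      subnet: { get_resource: tenant_subnet }
```

Launching this same stack again with a different CIDR parameter is, roughly, what cloning a virtual domain for another tenant looks like from the OpenStack side.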
So what are we demoing here? We are going to demo connectivity between KVM and ESXi workloads within a single tenant. We are going to show you how it is done with an OpenStack Heat template, and it is all standard Neutron API calls that we are making. The setup is simply an OpenStack distro installed on compute nodes: one of the compute nodes is running ESXi, the second is running KVM, and then you have three controllers. We have a gateway that connects this topology to the outside world, and you have VMware vCenter running, managing the ESXi.

Just to give a recap of what Dheeraj was explaining: on the physical side we have a standard OpenStack deployment. You can use any distribution; this particular demo is a recorded video based on Mirantis. Then you have compute nodes where your hypervisors are. On the OpenStack controller nodes we also run the PLUMgrid directors, which are the management part of our solution. On the KVM compute nodes we have the PLUMgrid IO Visor running as a kernel module, and on the ESXi compute nodes we also have the PLUMgrid IO Visor. Additionally, we have the PLUMgrid gateway; that's the icon which connects this VXLAN fabric to the Internet, or to a legacy IP environment that does not support virtualization. To make this solution work in an ESXi environment, we have VMware vCenter running. What makes the solution work are a couple of additional pieces of software: one is a PLUMgrid VMware agent, which runs on the directors; the other is the standard vCenter driver in OpenStack, to which we have applied a patch that makes the whole thing happen.

What we will demonstrate is, on a single OpenStack project, we create two networks and connect them via a router. Then we spin up a VM connected to a network on one side, which is going to be an ESXi VM; the second VM goes on the other network, running on KVM, and we simply ping across. A very simple demo, but a very powerful message. You can also apply any standard OpenStack API: you can use security policies, you can use LBaaS; whatever standard OpenStack supports, the solution can support. It's about a six-minute run, and as it goes through I will try to talk about what is happening.

First we give a bit of an overview of the setup. From the Fuel node we look at where the various control nodes and compute nodes are; you can see one of them is running VMware and the others are running KVM. Then we switch to OpenStack Horizon, where you can look at the setup of your system: under System you can see the various hypervisors, and there is one node running VMware ESX and a bunch of other nodes running KVM. You can also get a view of this on the PLUMgrid GUI, which gives an overview of the PLUMgrid zone: as I was showing you, there are three nodes running PLUMgrid directors, two other nodes running as gateways, and several other compute nodes which will onboard the VMs. The resolution is not very clear, my apologies; I'm happy to explain afterwards too.
Now, this is the Heat template, the YAML file we created; through this Heat template we are going to make standard Neutron calls when we run it in OpenStack. This is where we create the networks, create a router, and connect them. If you want, you can add external connectivity; if you want to do security groups, you can do whatever you would do in standard OpenStack. We called it "two bridges and provider"; that's just the file name. Then we go back to Horizon, and in a matter of a few seconds, in the standard OpenStack way, you upload the YAML file, give the stack a name, apply the tenant's credentials, either as an admin or as the tenant, enter the password, and hit Launch. It then goes ahead and makes standard OpenStack Nova and Neutron calls, and the topology starts getting built in OpenStack. As this is happening in OpenStack, we are also dynamically building, on the PLUMgrid UI, the virtual domain I was showing you in the previous slide. Again, this is happening in the background; the virtual domains are getting built. It's done.

Now we go ahead and launch an instance. The first instance we launch is with the VMDK file, which is expected to land on the ESX node. You can see we select the VMDK file and associate this one with the 100 network. While this instance is coming up, we can go back and look at PLUMgrid again: the topology that has already been built in OpenStack shows up as a virtual domain on the PLUMgrid side, and we take a quick look at that. The first time, the icons in the virtual domain may look a little unorganized; you are welcome to drag and drop them however you want, then right-click Deploy in our GUI, and that saves the view of the virtual domain the way you want it. After that you can go back and forth and the view will always look the way you arranged it. Looking at the networks: remember, 100 is the ESX network and 200 is where the KVM is.

Now we launch another instance, this time with a standard image, and we boot it up on the KVM node. If you want, you can boot up instances through the Heat template too; this is just to demonstrate that the connectivity is there. So the VMware VM came up with the .100.2 address, and the other one, the one on KVM, came up with .200.2. I go to vCenter, console into the VM that got 100.2 (the IP address here is 10.10.100.2), and simply ping 10.10.200.2. If you want, you can apply security groups, you can do LBaaS integration, you can create whatever complex topology you want; it's a very simple solution and it supports all of that.
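The demo's actual YAML file is not reproduced in the slides, but a standard Heat template along these lines would create the two networks, the router, and the two instances just described. This is a hedged sketch: image names, flavor, and CIDRs are placeholders, and only the overall shape (two networks joined by a router, one VM per hypervisor) comes from the demo.

```yaml
heat_template_version: 2015-04-30
# Sketch of the demo topology: network 100 (ESXi side) and network 200
# (KVM side) joined by a router, with one server on each network.
# Image names, flavor, and CIDRs are placeholder assumptions.

resources:
  net_100:
    type: OS::Neutron::Net
  subnet_100:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: net_100 }
      cidr: 10.10.100.0/24

  net_200:
    type: OS::Neutron::Net
  subnet_200:
    type: OS::Neutron::Subnet
    properties:
      network: { get_resource: net_200 }
      cidr: 10.10.200.0/24

  router:
    type: OS::Neutron::Router
  router_if_100:
    type: OS::Neutron::RouterInterface
    properties:
      router: { get_resource: router }
      subnet: { get_resource: subnet_100 }
  router_if_200:
    type: OS::Neutron::RouterInterface
    properties:
      router: { get_resource: router }
      subnet: { get_resource: subnet_200 }

  esxi_vm:
    type: OS::Nova::Server
    properties:
      image: vmdk-test-image        # placeholder name for the VMDK image
      flavor: m1.small              # placeholder flavor
      networks:
        - network: { get_resource: net_100 }

  kvm_vm:
    type: OS::Nova::Server
    properties:
      image: cirros-qcow2           # placeholder name for the KVM image
      flavor: m1.small
      networks:
        - network: { get_resource: net_200 }
```

Launching this stack from Horizon, as shown in the recording, results in the same Nova and Neutron calls the presenters describe; nothing PLUMgrid-specific appears in the template itself.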
So I'll hand it back to Dheeraj to complete the rest of the slides, and we're happy to take any questions at the end.

As we showed, even for a multi-hypervisor example, you can make the networking easy to implement and use in that kind of OpenStack cloud environment. PLUMgrid ONS gives you, out of the box, a straightforward, simple, templated way to automate the whole orchestration of the cloud: the networking and then the workloads for the apps. With this you can make sure your traditional ESXi workloads and your KVM workloads serve the same apps as you bring them together.

Here is one deployment case study. We cannot disclose the name of the customer, and I can't run the animation on this slide, but the point is that they deployed PLUMgrid ONS on their OpenStack cloud. They had the PLUMgrid directors, edges, and gateway, the LCM, and VMware vCenter all deployed. They had a few Microsoft SQL Server VMs running on ESXi and Apache web servers running on KVM, and they were pushing apps onto their website and looking to build more and more of them. With virtual domains, what they did was this: when one of their customers came and asked for a specific kind of app, they went into the library, created a NAT function with the routers and switches, built the VMs, spun up the Apache and SQL Server instances, and gave that customer a specific virtual domain, a virtual network profile that works with their app. Then a second user came and asked for the same thing but with different requirements for the router, the NAT, and the configuration; they spun that up through the Heat template and created another virtual domain for that customer. This extends further and further: you can have many more tenants, OpenStack projects of yours, coming in down the line, and you keep using your traditional VMware workloads and your KVM workloads in your cloud.

In summary, what we want to point out is that SDN within an OpenStack multi-hypervisor environment has to fulfill four things. If there is anything to take away from this presentation, it is these four points. One: micro-segmentation, via virtual domains, can be done in OpenStack with both ESXi and KVM; it is not limited to a vCloud or vRealize type of environment, nor to KVM-only environments. If you have a multi-hypervisor environment, micro-segmentation can run in OpenStack with a single SDN. Two: you can apply all your security policies and group permissions out of the box, the OpenStack ones, across a multi-hypervisor environment; there is no need to do security separately for the ESXi environment and then again for the KVM environment. Three: you can automate the whole process, which then takes only the few seconds or minutes your VMs need to spin up with the Heat templates. And four: you can optimize your existing hypervisor investments.
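As a rough illustration of the second point, applying the same security policy to both hypervisors, a standard Neutron security group defined once in Heat can be referenced by servers on either the ESXi or the KVM network. This is only a sketch; the group name and rules are assumptions, not taken from the case study.

```yaml
heat_template_version: 2015-04-30
# Sketch only: one security group reused for VMs on either hypervisor.
# The name and rule set are illustrative assumptions.

resources:
  web_sg:
    type: OS::Neutron::SecurityGroup
    properties:
      description: Allow HTTP and ICMP to the web tier
      rules:
        - protocol: tcp
          port_range_min: 80
          port_range_max: 80
        - protocol: icmp
```

A server resource on either network would then simply list web_sg in its security_groups property; the policy is defined once in OpenStack rather than separately per hypervisor.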
If you want to learn more, or want to discuss your own environment where you have this situation, with ESXi workloads, but you are now moving towards cloud and building more of your applications on KVM, stop by our booth, T69. We have demos running there, you can pick up the book where we discuss this case study along with user testimonials and more, and we will be more than happy to help answer your specific questions. I think we are out of time, but are there any questions? Thank you.