Sure, thanks Ashraf. Even I have trouble saying my name sometimes. So, this session is going to be about OpenStack networking evolution with OpenContrail, and we'll talk a little bit about the container networking updates we have made to the product and what we have contributed to OpenContrail from that perspective. Sukhdev, why don't you go ahead?

I'm Sukhdev Kapoor. I'm a distinguished engineer at Juniper, part of the OpenContrail team. I'm one of the newer members of the team, and in the spirit of the openness of OpenContrail that Randy was talking about a little while ago, I'm here to announce a new project I've kicked off. I'll take a few minutes to make the announcement, then I'll step aside and let him continue with the presentation, and I'll take questions after the presentation if that's all right.

So: networking-opencontrail. For those of you familiar with Neutron, you know what this is; for those who don't, this is a brand-new project I've kicked off under Neutron to integrate OpenContrail with Neutron. The goal is to comply with OpenStack governance and become part of the Neutron stadium, so this becomes like any other project within the Neutron community. This is the piece that has been missing so far: OpenContrail was available only as a monolithic plugin. By bringing it into the Neutron stadium we level the playing field with everything else, which gives you the ability to run OpenContrail in multi-vendor deployment scenarios, just as you would with other projects like ODL or ONOS. What will it have?
So it will have a new set of ML2 drivers; it will have a full suite of service plugins to be fully compatible with the Neutron services; and it will have its own third-party continuous-integration system to make it fully testable and fully integrated with Neutron. With this you have the ability to run OpenContrail alongside, say, an F5 load balancer or Palo Alto Networks: you can pull the distro out of Neutron and test and integrate it in a true multi-vendor environment. That's what it gives you.

Will it change the existing monolithic plugin? That's probably the obvious question in your mind. The answer is absolutely not. This is a brand-new community initiative I have taken on: to write the charter and run with it. Essentially it gives a new front end to the same functionality you already have, but now you can come in through Neutron in a true multi-vendor deployment. Going forward there will be two options, either/or but not together: a configuration knob will let you deploy OpenContrail in Neutron mode, or you can continue to use it as a monolithic plugin as it is today.

Why should you be excited? You can now deploy OpenContrail from Neutron along with any other driver or service plugin, or continue to deploy as you do today, so a dual-mode option will be available. As you know, OpenContrail offers a whole lot more functionality than what is available through the Neutron APIs, so that could be a slight downside of this path, but it does give you the ability to mix and match different vendor drivers. The choice is yours, and the goal is to bring both distros as close to par as possible. So now, call for action.
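To make the ML2-driver idea concrete, here is a minimal, hypothetical sketch of the shape such a driver takes. None of these class or method parameters come from the actual networking-opencontrail repo; a real driver would subclass Neutron's `MechanismDriver` base class, and the stub API client stands in for the Contrail config API.

```python
class FakeContrailApi:
    """In-memory stand-in for a Contrail config-API client (illustration only)."""

    def __init__(self):
        self.virtual_networks = {}

    def create_virtual_network(self, net_id, name):
        self.virtual_networks[net_id] = name

    def delete_virtual_network(self, net_id):
        self.virtual_networks.pop(net_id, None)


class OpenContrailMechanismDriver:
    """Hypothetical sketch of the shape of an ML2 mechanism driver.

    A real driver would subclass Neutron's MechanismDriver; here we only
    show the postcommit hooks mirroring Neutron resources into the
    Contrail config database.
    """

    def __init__(self, api):
        self.api = api

    def create_network_postcommit(self, context):
        # Called after Neutron commits the network to its own database.
        self.api.create_virtual_network(context["id"], context["name"])

    def delete_network_postcommit(self, context):
        self.api.delete_virtual_network(context["id"])
```

The same pattern extends to the service plugins mentioned below (router, load balancer, VPN, firewall), each translating a Neutron resource into its Contrail counterpart.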
This is a community program, and I'm essentially the one trying to kick it off and get it going. We need contributors; this is a huge effort. We need to write ML2 drivers, a router service plugin, a load-balancer service plugin, a VPN service plugin, a firewall-as-a-service service plugin, a BGP VPN service plugin, everything. There's a lot of development effort, so this is not something that's ready to go or that you can pull down next week and start using; this is a kickoff. So come help us out, participate, work with us, become a core developer and influence the future: you get to make your own changes, test in a different environment, and push the changes.

How can you contribute? I've already kicked off the project; it was approved and the repo is already available. It's empty: if you went and looked for it, there is nothing in there yet, because I only got it off the ground a week or two before coming here. We're going to start dropping code in, and as we do, we'll announce it and it will be visible. In the meantime, if you have questions or want to reach out about where to start, I'm always on IRC; my handle is sukhdev, and you can ping me on the Neutron channel. I'm pretty active on the OpenStack IRC channels, so you can find me there, or you can drop me an email at my Gmail address. That was my quick announcement; I'm going to step aside, let the presentation continue, and take questions at the end.

Thanks, Sukhdev. Let me just plug in this slide; give me one second, sorry about that. Okay, it works. So, thanks Sukhdev. Just to reiterate, we have always had an integration with OpenStack; right from the beginning we
used to have a core plugin, and we still have it. What we are doing is providing more flexibility and more choices to our customers by starting this networking-opencontrail project and moving into the ML2-plugin way of doing things.

What I'm going to focus on now, for the next 20 to 25 minutes, is the latest developments in OpenContrail. We are going to have a release in a couple of weeks, 4.0, which Randy alluded to as well. We'll talk about containerization, both of the manageability plane and in supporting containerized workloads, and the philosophy behind that, because a lot of container networking solutions are coming up these days; I'll try to explain the value we bring into this environment. Feel free to ask questions at any time.

All right. Randy talked about globally ubiquitous networks and how we want to add more value on top of that. When you look at OpenContrail, what is the vision? The idea is to connect people to applications and applications to applications, get the right people talking to the right applications, and have visibility into all of it. At the end of the day it's about having people on one side and their apps on the other, and providing connectivity, security, and manageability.

Notice there's no mention of infrastructure. That's because it shouldn't matter whether you have a multi-site data center; whether your virtualization environment runs VMs or containers, or you have bare-metal appliances; whether your solution runs in a telco PoP or in a public cloud environment, which some of our customers are already doing, and I see some of them in the audience as well. And if you listened to the OpenStack keynotes, there was
quite a bit of talk about edge computing, the Internet of Things, and so on. From our perspective the idea is to keep the infrastructure agnostic, keep the endpoints and all that detail abstracted from the end user, and provide connectivity, while at the same time giving you enough tools that, as a network administrator or a security administrator, you can still enforce policies and make sure your deployments comply with your requirements and regulations. At the same time, provide manageability: very good visibility into how you're operating, whether there are any hot spots, and proactive alerting on any kind of issues.

We talk about vision, but we have actually completed quite a bit of work toward this goal, and very soon you'll see some announcements around security: how we are enhancing our existing network-policy framework to extend it to security policies and be more infrastructure agnostic. So that's the basic high-level overview of how Contrail applies in your environment. Any questions or comments? Good.

All right, how many of you are familiar with, or let me ask this, how many of you are not familiar with the OpenContrail architecture? Okay, there are a couple, so let me go over it briefly. What is OpenContrail? It's an open source SDN solution delivering secure multi-tenancy, and we do that with overlay-based network segmentation. Like any typical software-defined-networking solution, we have a control plane and a data plane: a controller, the OpenContrail controller, and a forwarding element called the vRouter, which runs on the compute nodes where the endpoints are. The controller typically runs as three
instances in an HA cluster, and it plays multiple roles: config, control (which is basically the BGP control plane), and analytics. Randy mentioned that when he used to think about overlays, his main concern was losing visibility into the underlay: if something goes wrong, what correlation information will I get? We understood that right from inception, so we packaged an analytics module into the solution. What the analytics module gives you is underlay/overlay correlation, along with a rich set of tools you can use to really analyze your flows and get a lot of information out of them. So the controller has three main roles, and in a little bit we'll talk about these roles in the context of containerization: how things have evolved and whether any functionality is impacted.

BGP is the control-plane signaling: when a VM comes up, we have to announce reachability to that endpoint, so we use the BGP control plane. It's highly scalable (the internet runs on it), so we can support massive-scale data centers, and it's easily extensible: if you have multiple regions, you can use BGP federation to federate across them. We use XMPP to talk to the vRouter agent, which runs on the compute node and programs the forwarding tables in the vRouter. A lot of our value-add comes from what we have done with the vRouter on the compute node; it's a functionally very rich forwarding plane, and the same vRouter can support a container, a VM, or any kind of virtual network function. We also have bare-metal support today: if you have bare-metal appliances and want to terminate your overlays on the top-of-rack switch, we can do that with Juniper switches today using OVSDB, and we are also extending it to NETCONF, basically having an EVPN- and VXLAN-
based fabric. A key element of this SDN solution is the gateway: interfacing with the internet and handling traffic going outside your data center. Programming the gateway for overlays is built into the workflow, so no manual steps or additional scripting are needed; when you create a virtual network, we automatically do whatever is needed on the gateway through the workflow, again using NETCONF to provision the gateways. And the core of OpenContrail is that the control plane, whatever the feature set may be, is implemented using standardized protocols, so that at the end of the day you can run multi-vendor environments if that's the choice you want to make. Everything here is standards-based. That's a high-level overview of the Contrail architecture, and the goal is to have this architecture deliver the logical segmentation you're seeing here.

When the OpenStack summit started, we launched a blog about our 4.0 release, where we talk about the upcoming evolution of the product. There are two key themes. One is containerization of our own control plane: the OpenContrail software itself, the controller software and the forwarding element. Just like any application software moving toward containerization, we are taking our control plane and containerizing it. Why? It's easy to use, easy to deploy, easy for lifecycle management, and easily upgradable; the packaging dependencies are gone, and it's all well contained and more manageable. So we are containerizing the control plane for easier manageability. What does that mean? Like I said earlier, the controller plays multiple roles: config, analytics, control. So we are going to have
three primary containers: a controller container which plays the config and control roles, a container for analytics, and one for the analytics DB. Those are the three main containers performing the controller function. Optionally we also package an HAProxy load balancer; we work with a lot of different partners for load-balancing functionality, like F5 and Avi Networks, but we ship an optional HAProxy if you want to use it. Then there's the vRouter agent, which runs on the compute node; its job is to talk to the controller and program the forwarding plane, and it's going to be containerized as well. These can be deployed on bare metal or in VMs: if you have OpenStack clusters with VMs, you can deploy the controller containers inside the VMs; we'll talk about that in a minute. There is no change in functionality, no change in APIs or anything like that; the only thing that has changed is the form factor, the packaging of how the controller is delivered.

What are the benefits? Primarily lifecycle management: all dependencies are now contained, the typical advantages people look for in containers. And as our control plane evolves, we'll evolve the containers too, going from a single container for the control function to a microservices-based architecture, so that different elements can be upgraded independently. The first step we've taken is containerizing the functions we have, still as single containers. Randy briefly mentioned a bit of complexity in deploying and using Contrail; with the containerized approach it becomes very easy to provision Contrail, and we are going to
package Ansible-based deployment scripts and playbooks, so it's pretty simple to deploy Contrail; there's no complexity involved. Integration with third-party provisioning tools is simplified as well: some of our customers use Chef, some use Helm, and so on, and it's easily integratable. Those are the benefits of containerizing our control plane.

The other thing we are doing is around container networking: how do we support containerized workloads? Now we're not talking about the control plane but about the workloads. Take this example: assume a use case where someone brings up a Kubernetes cluster, leveraging our OpenContrail integration for Kubernetes that's releasing in a couple of weeks, and they want OpenStack on top of it. This is a model I think Mirantis is also doing with MCP, where they run the OpenStack control plane in containerized form on top of Kubernetes. And let's say they want to run multiple clusters or multiple tenants on top of that. There are a lot of layers here, but these are real use cases, for whatever reason. Now, most of the solutions in the market would require a separate networking control plane for each of these layers. Whatever your motivation for a layered architecture, we want to deliver networking with a single control-plane solution, a single SDN solution, across however many layers you want. That means flexibility. Especially in the telco world, not everybody has containerized their software, so you're always going to have interaction with bare metal; heck, we even see mainframes in some of the
environments we go into when we talk to customers. So there are going to be bare-metal machines, VMs, and containers, and they all need to be on the same network with access to one another. The key objective is to give you that flexibility without adding overlay over overlay over overlay. With a single control-plane solution, what we are addressing is a nested environment, to whatever degree of nesting you may do. That's a very big value we provide, it's resonating well with our customers, and I can confidently say it's not something anybody else can deliver today. In a couple of minutes we'll see how exactly we deliver it.

So when we started looking at onboarding containerized workloads, providing connectivity and security across the board, what were the key drivers? We have been fortunate to be successful in OpenStack deployments for the last four years; in the latest OpenStack user survey, Contrail/OpenContrail is the number-one commercial SDN, and we're really glad about that. It means we are so broadly deployed that we have learned a lot from customer deployments, and we have added quite a few features into the product. What we want to do is bring that into containerized environments, which are still not mature in those terms. Kubernetes is still maturing; there are features that are not yet available (we'll talk about them soon). As these orchestration systems mature, there are gaps, and we want to bring in our feature set to help you transition today, or to help different environments coexist, while still providing multi-tenancy and different levels of isolation. Those are the
features we are going to bring in. From an enterprise perspective we also want to be completely seamless, both in migrating from other virtualized environments to containers, and in the sense that you have a developer workflow and an administrator workflow. We want to keep those completely independent and transparent, so developers can do what they want without having to be infrastructure-aware, and the admin workflow keeps going independently of the developer workflow: you don't have to tell a developer, hey, change your deployment YAML so we can add these networking primitives. Completely seamless, with the developer and administrator workflows independent.

I'm going to skip this; how many here are familiar with Kubernetes? Okay. Just a quick recap: Kubernetes has namespaces, which we should not confuse with tenancies; a namespace is a way to organize things. There's the notion of a service, which is what's visible externally, and it's back-ended by pods; pods are the instances delivering the service, and each pod can have multiple containers. That's the basic Kubernetes architecture.

So what are we doing in terms of Kubernetes integration? Kubernetes by default has a cluster mode with a flat network, which means everybody can see everybody. If you don't want that, the only alternative is a complete whitelist model: you say deny-all first, so nobody can talk to anybody, and then every time a new instance comes up you have to start adding policies to say, okay, now A can
talk to B, or B can talk to C, and so on, which again is complex. What we are doing with our implementation is bringing in isolation. It's not just "everybody talks to everyone" or "shut everybody off". Within a namespace, you have organized your pods with some logical reasoning, so probably they can all talk to one another; in that case you can use namespace isolation, where each namespace is a virtual network of its own and you can't go outside the namespace. But say you want finer granularity: you can isolate down to the level of your pods. And lastly we have custom, user-defined isolation: heck, I want to pick this VM, this pod, that bare-metal machine and put them all together in the same virtual network; you can do that, and there are users with use cases who want to, the more advanced ones in some sense. It's not about introducing complexity; it's about coming to terms with the fact that there are going to be different types of infrastructure, and like Randy mentioned, we want to be ubiquitous and infrastructure agnostic and let you make your transitions at whatever pace you want. So isolation is a key thing, and that's where a lot of our customers see value as we talk to them.

The other gap in Kubernetes today that we address is distributed load balancing. I talked about the service notion on the earlier slide. In a native Kubernetes cluster, a service is back-ended by an HAProxy, which then distributes load across the pods, whereas with Contrail we have ECMP load balancing, done natively in the data path: distributed load balancing. What does that mean? You don't have an HAProxy to manage.
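The ECMP idea can be sketched roughly as follows. This is an illustrative flow-hashing toy in Python, not Contrail's vRouter code, and all names here are made up: every packet of a given flow hashes to the same backend pod, so load spreads across pods with no proxy sitting in the path.

```python
import hashlib

def ecmp_next_hop(flow_5tuple, pod_next_hops):
    """Pick a backend pod for a flow by hashing its 5-tuple.

    Hypothetical sketch of flow-hash ECMP: deterministic per flow,
    so one flow always lands on the same pod, while different flows
    spread across the available pods.
    """
    key = "|".join(map(str, flow_5tuple)).encode()
    digest = hashlib.sha256(key).digest()
    index = int.from_bytes(digest[:4], "big") % len(pod_next_hops)
    return pod_next_hops[index]

# Example: three pods backing a service, one TCP flow toward it.
pods = ["10.0.1.2", "10.0.1.3", "10.0.1.4"]
flow = ("192.168.0.5", 54321, "10.96.0.10", 80, "tcp")
```

A real data path hashes in the forwarding plane per packet; the point of the sketch is only that the "load balancer" reduces to a deterministic next-hop choice, with nothing to deploy or manage.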
The service becomes a logical notion; there is no network function back-ending it. So you've taken one more element to manage out of your equation, and it's all done natively for layer 4. You still need HAProxy, which I'll talk about in a minute, as an ingress in Kubernetes if you want application-level load balancing, for example on your HTTP links. I talked about multi-tenancy; we also bring the notion of floating IPs, distributed SNAT, and all the cool features we built back in the OpenStack world. We are leveraging all of that and bringing it to Kubernetes. As Kubernetes matures and tries to address all the use cases, these are table-stakes features, and we are bringing them in even before Kubernetes itself can provide them. But we do this without changing Kubernetes primitives: keeping the developer workflow independent of the administrative workflow means we stick to what Kubernetes provides and still give you this flexibility. We are not introducing custom annotations or labels into Kubernetes environments. Those are some of the key highlights.

From an architecture perspective, this is the basic Kubernetes architecture: you have the API server, replication controller, and scheduler on the master (control) node, and then the minions, where pods come up with containers, each node running a kubelet. What we are doing with the OpenContrail integration: first of all, there is the notion of CNI, the Container Network Interface. Like OpenStack has ML2, CNI is a standardized way of integrating networking vendor plugins into Kubernetes environments, so we are going the CNI way. We have supported containerized workloads for a while now, but with the introduction of CNI we want to leverage
that. So the way it works is we have a kube-network-manager component which listens to the API server. When pods get created, kube-network-manager informs the Contrail controller. One thing I didn't mention earlier: in native Kubernetes clusters, IP address management is done per node, whereas with OpenContrail you get a centralized IPAM, which is what you're typically used to; that's another gap we fill. So when a pod comes up, we do the IP address allocation there, and the Contrail controller informs the vRouter agent running on that compute node. The CNI plugin is a short-lived binary: when the pod is created, it plugs the pod into the vRouter, assigns IP addresses, and then exits. That way the pod is connected to the vRouter and is ready and accessible. That's the workflow at a high level that a product manager can deliver; sorry, that was a bad joke. On a serious note, we have quite a few of our technical leads here in the room, so if you want to go into more detail about the implementation, we can have that discussion offline as well.

Like I said, we want to leverage Kubernetes constructs and not create our own annotations, because that would introduce a lot of customization into your workflows. So how do things map? A Kubernetes namespace maps to a single project if you're doing namespace isolation, or to a shared project if you're doing a flat cluster-based deployment. Pods map to virtual machine interfaces (VMIs, an existing concept if you're familiar with OpenContrail). A service maps to ECMP load balancing; like I said, the service is a logical notion, with no virtual or physical function doing the load balancing. The ingress concept maps to the HAProxy load balancer
which I mentioned earlier that we also package; but again, if you want a different back end for your load-balancing functionality, we work with a lot of different vendors. As for network policy in Kubernetes, it's still evolving; this is the first time they've introduced network policy, whereas in OpenContrail we have always had rich network policies (we do service chaining using policies, and so on). A Kubernetes network policy maps in the Contrail world to security groups; if you want advanced policies you can still use Contrail to define rich network policies, but basic Kubernetes network policies map to security groups.

I'll quickly go over OpenShift and then we can open up for questions. What we're doing is leveraging our Kubernetes implementation and integration to integrate with OpenShift. OpenShift is Red Hat's PaaS, built on top of Kubernetes, and it addresses a lot of the PaaS-level gaps Kubernetes had and has; that's why we see quite a bit of traction with OpenShift. We have a pretty good partnership with Red Hat, and we had customer use cases where it was very relevant that we integrate with OpenShift, so a lot of what you heard about the Kubernetes environment applies here as well. I'm going to skip the OpenShift overview and talk about where OpenContrail plays a role in an OpenShift environment. This is a layered architecture of the value OpenShift delivers. OpenShift comes prepackaged with a multi-tenant SDN, as they call it; again, when it comes to multi-tenancy and SDN, we have a lot more features than what OpenShift provides natively. The functionality we plug in for is the router notion and the OpenShift SDN's isolation; we are augmenting that functionality using Contrail. Now,
in the OpenShift domain the mapping is very similar to Kubernetes: namespace to single project or shared project, pods to virtual machine interfaces, services to the ECMP load balancer, and what they call the router; the notion of ingress load balancing in Kubernetes was actually upstreamed by the OpenShift team, who always had an admin router concept with similar functionality. And network policies map to security groups, same as before. So we are leveraging a lot of what we did with the Kubernetes integration to integrate with OpenShift, be it OpenShift Origin or the enterprise version. That's basically a high-level overview of what we are doing with container networking. I think we have five more minutes, so I want to open up for Q&A; please feel free to ask about networking-opencontrail, the containerized workload support, or the containerized control plane.

Hi guys. You were talking before about the integration where we have OpenStack with something else inside of it, like Mesos, and then Kubernetes, and then VMware, just stacking this whole crazy chain together. That's super fascinating to me, because I've been thinking about how to make one flat network, so to speak, without just having 87 nested tunnels. You said you were going to talk a little more about that; was that in a following presentation, or...?

So, what I skipped is basically the gory details, or not the gory details, but the details of exactly how we are implementing it. We are running out of time, but we have technical folks in the room and I can talk to you as well about how we do it. Let me just take one example. Say you have an OpenStack deployment and now you want to try out a Kubernetes cluster: you bring up this Kubernetes
cluster on top of your OpenStack environment, and now you want to connect to a VM in OpenStack. Actually, let me pull up a slide; it's easier than trying to explain. Give me a minute. Sorry, we're going to go over five minutes. Okay, let's talk about it outside, but the point is this: the way it works is you have the Contrail controller residing in the OpenStack environment, and the Kubernetes master node running in an OpenStack VM. What you don't need is the entire Contrail controller container running again in the Kubernetes master node; it can run outside. And when you look at a compute node that has both a container and a VM running on it, you have a single vRouter agent that connects the container to the VM. When we publish the slides, I'll make sure I add that slide in as well, so you get the real picture of how we do that. The objective: what we typically see is OpenStack clusters with nested Kubernetes on top; what I showed is obviously an extreme case, layering three or four different orchestration systems, but people are doing it. That's what even one of the Gartner analysts we talked to yesterday said.

Two quick questions, if I can, about networking-opencontrail: first on timelines; and second, will Juniper then have two product lines, basically, the monolithic OpenContrail and the ML2 OpenContrail? Is that the plan?

So what we're doing is working through the actual details; the devil is in the details. Essentially we don't intend to change anything in the back end, with possible refactoring of the front end, so the API layer will remain exactly the same; everything below is identical. There are just two ways
to come in: you can come in through Neutron through the drivers, or you can come in through the web GUI, the way it comes today. Let me add to that: it's a single product, but we'll be packaging both, and you'll have a config option available to pick which way you want.

Oh, I see, so that's what the configuration option is for. Exactly. And how about the timelines? Timeline is a good question, and that's where we need your help. We need contributors: like I mentioned earlier, there are the ML2 drivers and there are many service plugins, so we're going to need help. The more contributors we get, the quicker we can get going. We've only kicked off the work on the ML2 driver side of it, so hopefully we'll have that sooner.

All right, so are you planning to formally follow some sort of PEP 8 policy or something like that in the Python code? You mean the Python code standards? Yeah, or otherwise I would volunteer that as a suggestion, because it makes the code readable, and that's a little bit missing today. Well, you will have an opportunity to become a core of this project, and there you go, you can enforce it. It will be through open governance, so it's not behind closed doors; it will be like any other project in OpenStack, community driven: the community decides, the community does everything.

Anything else? So the first target is the ML2 driver; that's the first thing, and we have already started looking at it. The biggest issue is the source of truth. When you come through Neutron, the Neutron database is the source of truth: all resources, all drivers, all plugins work off the Neutron database. In OpenContrail, the source of truth is the OpenContrail database. Once we are able to separate that out, it's a matter of just adding the code, and ML2 is our first attempt to create that separation. Once we achieve that, it's a matter of bringing in more warm bodies, and we can be doing four service plugins simultaneously, as opposed to doing one and then going on to the next one and so forth. Yes, this will address all of that. I found a lot of bugs like that too. The more bugs you find, the more you get into it.

One thing I also wanted to reiterate, which I did not cover: one thing you didn't see is the Mesos integration. The reason is that there's a Riot Games session later in the day today; please attend that one. They will be talking about how they are using Mesos and how they're using OpenContrail in that environment, so definitely sit in on that one.

There's a question, yes. I've seen some value in multi-hypervisor support, but at the same time it seems that it's kind of being phased out, at least for VMware. What are the plans regarding that? Are you referring to VMware closing off access to the hypervisor API? For example, a customer with a multi-hypervisor environment, where they start integrating OpenStack, and OpenStack supports that orchestration, and then they want to bring in a solution. Oh, that's fine. Let's take the example of OpenStack and vCenter. We have a lot of different ways of using Contrail to integrate these two clusters. OpenStack has a vCenter driver, so you can have OpenStack on top and a vCenter cluster as compute underneath; in that environment we can still function, with the vRouter running in user space on those compute nodes. You can also have pure vCenter-based clusters, where you have the Contrail control plane and the vRouter running in user space on the ESXi nodes, and alongside that an OpenStack cluster with KVM and the vRouter on KVM, in which case it runs in kernel, and we can still bridge these two clusters, either with the vRouter in a gateway kind of role, or what we call the vCenter gateway functionality. So we have a lot of different ways of onboarding VMware, a lot of different ways to support coexistence of OpenStack and vCenter clusters.

And what other hypervisors have you seen? The next thing we are addressing is Hyper-V; that implementation is actually ongoing, and we should have it done sometime this year. That's the next hypervisor we are targeting, but typically you see KVM, ESXi, and Hyper-V. But during transitions, where we want to help a customer with OpenStack, for example? Like I said, you can have both OpenStack and VMware environments together, and they can coexist; we have a lot of different ways of doing that, and there are quite a few blogs on opencontrail.org that talk about it. Okay, thank you.
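To recap the Kubernetes/OpenShift-to-OpenContrail object mapping mentioned at the top of the Q&A, here is a minimal sketch. The dictionary and helper function are purely illustrative documentation aids of my own, not part of any OpenContrail or Kubernetes API:

```python
# Illustrative only: how Kubernetes/OpenShift objects map onto
# OpenContrail constructs, as described in the talk. These names
# are documentation aids, not real OpenContrail API types.

K8S_TO_CONTRAIL = {
    "namespace": "project (single or shared)",
    "pod": "virtual machine interface(s)",
    "service": "ECMP load balancer",
    "ingress / openshift router": "load balancer",
    "network policy": "security group",
}

def contrail_construct(k8s_object: str) -> str:
    """Return the OpenContrail construct a Kubernetes object maps to."""
    return K8S_TO_CONTRAIL[k8s_object.lower()]
```

The same table drives both the Kubernetes and the OpenShift integrations, which is why the speaker describes the OpenShift work as largely a reuse of the Kubernetes integration.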