Okay, so I'll do a quick introduction. I don't have any Ironman pictures, sorry about that. So, Dimitri Desmet — I'm part of NSX, which means I'm part of the network and security division within VMware. Yes, VMware does more than compute virtualization. It also does storage, it also does networking, and I'm part of the networking piece. We'll spend the next 40 minutes talking about Neutron, the network piece within OpenStack. I actually come from Nicira, an acquisition you may have heard about if you've been around a while — it was about four years ago. So I've been in the network virtualization business for, I don't know, five years. If you have questions, we'll have a Q&A at the end, so feel free to ask then. This session will focus on why you would spend extra money on a vendor for network and security services when you could do it for free within Neutron — why there are multiple vendors in this vertical within OpenStack, within Neutron, and why I believe VMware NSX is really the best choice today to fulfill this network and security piece within OpenStack. That will take me 15, 20 minutes. Then I'll do a demo showing you all of that live — it's a live demo — and we'll finish with key takeaways and Q&A. Actually, it's not a very large audience, so if you want to ask questions during the session and make it interactive, feel free, that's cool.

So, why use a network vendor for Neutron? I'll go fast here. Who knows OpenStack? Who plays with OpenStack in the audience? Yeah, pretty much everybody, which I would expect. Who is using OpenStack in production, for real — not your lab with 10 VMs? Okay, so now we've dropped 90% of the audience. And who knows Neutron? Who does the Neutron piece? Okay, pretty much everybody, that's cool. Who loves it?
Who thinks it's the best thing on earth? Okay, who thinks it's okay? Who thinks it's crap? Oh, okay. So, quickly, what is OpenStack? I won't really tell you what OpenStack is — you know — but at a high level it's pretty cool, it's amazing. This comes from the official OpenStack website: it's virtualization of your compute, storage, and network, with a beautiful open API on top so you can program all of that dynamically from your applications with the tools you love, or use OpenStack Horizon if you want, and it works on anything you have underneath. Pretty neat. Now, that's the view for your CTO — he can stop there. The people doing the real work have to digest the fact that it's plenty of components that all talk to each other, and that makes your life, if you do OpenStack, much harder. There are people who know all of that very well — I'll call them the OpenStack warriors. Like Mark, who started on Cactus; I'm not as old as him, I started on Diablo. Some of those people are within VMware, but not only VMware — Mirantis, Red Hat, some enterprises and vendors have those real geeks who know how all that stuff works, and that's fine. But in the enterprise, you don't have those guys, and you will never hire them, because they are not interested in working for you anyway. Yet you want OpenStack, because you've seen it's cool. So that's why you go to those guys: Mirantis has Fuel, Red Hat has their own thing. At VMware, we have our own OpenStack distro, VIO, where you click, click, click, and that beautiful OpenStack is up and running — you don't have to understand the stuff on the left. If you want to, you can, because at the end of the day it's running OpenStack, but there is a wrapper that makes your life easier.
Neutron, which is the topic here — oh, I should check the time; okay, six minutes in. Neutron, at a high level, is pretty cool as well. It's a server that talks to a database and talks over RabbitMQ, a message queue, to configure all the network and security services. Pretty neat, pretty simple — any CTO can understand that. In real life, take basic L2 switching: you cannot make a simpler network. The logical view is simple — it's a blue line and you plug your VMs into it. On your KVM host, it's this piece, with a bunch of Linux bridges where you plug your VMs — and you must have those Linux bridges because you want security, and the iptables filtering happens there. Then you have your OVS — I guess, since you're here, you understand Open vSwitch — plus plenty of other bridges. That's how it's implemented, and that's what you have to understand if you want to make this work in this beautiful OpenStack. There are gurus who understand all of that — we have a bunch of them at VMware, and we contribute to Neutron and all that — but even if you have those gurus, and I'd argue in the enterprise you'll have a hard time finding them, you still have challenges if you follow this. By the way, this comes from OpenStack; I didn't make it up. This really is the reference architecture of OpenStack. You'll still have challenges with performance and troubleshooting, because of all those logical bricks — all the steps the traffic has to go through within your hypervisor. That's why you struggle to push a lot of traffic out of a single KVM hypervisor, and why troubleshooting is hard. If something is not well programmed because something did not work as expected within Neutron — the message queue was overloaded, or something else happened — then go for it, troubleshoot that. Where is the issue?
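Troubleshooting that plumbing by hand means poking at each layer in turn. A few typical commands on a KVM compute node — bridge and port names vary per deployment, so treat this as a sketch of the process, not exact output:

```shell
# Show the OVS bridges and ports Neutron created (br-int, tunnel/VLAN bridges)
sudo ovs-vsctl show

# Show the Linux bridges sitting between each VM tap device and OVS
# (they exist so security groups can be applied with iptables)
brctl show

# Dump the datapath flows actually programmed, filtering on the Neutron
# MAC prefix to find one VM's traffic
sudo ovs-dpctl dump-flows | grep fa:16:3e

# Inspect the iptables rules implementing a port's security group
sudo iptables -S | grep neutron
```

Every one of those layers is a place where the programmed state can disagree with what Neutron intended, which is the troubleshooting pain being described.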
What is badly configured? The packet walk is just a nightmare. Do we all agree on that? Okay, cool, I'm not the only one thinking it. And that's only L2 — I'm not even talking about L3 and everything else. L3, I'll go fast: it's this giant thing, and I won't spend time on how it works. So we've covered only L2 and L3. You have to master this if you want to run it in production — not in your lab, where you don't care if it dies because you just reinstall a new OpenStack. In production, you have to understand it. And I didn't talk about distributed virtual routing, DVR, which exists now in OpenStack. I didn't talk about security. I didn't talk about load balancing. So yeah, if you have trouble sleeping, you can start learning all that. Now, if you don't want to learn it, or you don't have the skills, or it's not your job anyway, that's why you go with a vendor: it makes your life simpler to offer those services within Neutron. You'll have a number to call when stuff happens — and in OpenStack, there is always stuff happening — an 800 number, somebody to help you. You get better performance; I won't go deeply into why we can push more, but yes, higher performance with vendors — at least that's what they claim, and I can explain why we at NSX deliver it. Management and troubleshooting, that's a key piece — for a lot of enterprise customers, that's why they go with NSX: to be able to manage the stuff, to understand what's going on, to have alerts, the beautiful green, red, orange lights, those kinds of things. And stability. Okay. So why NSX? I just explained why you'd want to put something under your Neutron to deliver the services. Why NSX? I'll go fast, because I really want to show the demo. Whatever distro you use — or no distro at all, installing from source —
you'll have a hard time getting scale numbers from them: how many things you can have on your Neutron, how many hypervisors are supported, how many switches, routers, security groups, distributed routers. They'll make claims, they'll do a POC with some Heat templates and it will work — but at scale, with multiple concurrent users and so on, will it really scale? They cannot give you a number. We're a vendor: for each release, we run scale tests and longevity tests to validate that we support those scale numbers. HA: Neutron is pretty painful to make highly available; with NSX it's HA by design, you don't have to do anything special. I know we're at an OpenStack event, so everybody loves KVM, and that's cool — I get a paycheck from VMware, but I like KVM too; I didn't say love, but I like it — and NSX works on different kinds of hypervisors. Today we support KVM on Ubuntu and Red Hat, we obviously support ESX — look at my sticker — and we're working on Hyper-V. It's really a multi-hypervisor solution. You may have some compute that is more critical than the rest — it's not all DevOps workloads where, if the VM or the hypervisor dies, who cares. With vSphere — nothing about networking here — the fact that you run your VMs on vSphere for compute gives you all the vSphere goodness: HA, DRS, and all the beautiful things of vSphere. So our customers can run everything on KVM, everything on VMware, or a mix — some tenants, or pieces of a tenant, on KVM and other pieces on vSphere. A mix of both is fully supported, with the same feature set. What can we do in terms of network and security services? We can do L2, we can do L3, we can do no-NAT, we can do NAT — or floating IPs, I should say, we're at OpenStack here.
We can do communication with instances — your VMs on whatever hypervisor, ESX or KVM — and communication with your physical servers, like a real server running your database or whatever; L2 communication is possible between the two. We support firewalling — or security groups, I should say — and it's stateful, not stateless: a real firewall where we check the TCP handshake, the sequence number, the acknowledgment number, and all those beautiful things. And load balancing. So we support all of that, and obviously it's all configured from your beautiful OpenStack — Horizon, CLI, API — with Neutron talking to NSX to implement it inside NSX.

Something else that's pretty cool: in the enterprise, nobody loves NAT. Feel free to contradict me if you disagree, but from what I've seen, 100% of enterprise customers hate NAT. It's painful for the logs, it's impossible to know who is who — NAT sucks. In OpenStack, you always do NAT and floating IPs, because there is no nice integration with the real world — there is work in progress, but as of today there's no easy integration, and I guess it will stay like this for a while. With NSX, we can make it very simple. Within NSX — nothing to do with OpenStack — you have this NSX logical router connected to your physical world, to the physical routers you love: Cisco, Juniper, Cumulus, I don't know, we love them all. And we establish a BGP adjacency with them. OpenStack is not aware of that; you do it on day zero. Of course, my NSX router advertises nothing at first, because nothing is connected to it, but the adjacency is there. Then, from OpenStack, you create your networks — say a two-tier topology, with no NAT. Here's what Neutron, with our NSX driver, will do:
It will plug your blue tenant logical router not into the physical world on a VLAN, but into my NSX logical router. The adjacency already exists with your physical router, so you don't touch anything — nothing on NSX, nothing on the physical router. Automatically, your physical router learns those subnets, and now you have communication to those subnets without any human intervention. That's the trick. If you still want to do NAT, you can, and then the NSX router advertises the floating IPs instead. It's supported in both cases, NAT and no NAT, and the advertisement is automatic — that's unique, and it makes customers happy, because they don't like NAT, as I said.

Performance. That's what it looks like in stock OpenStack; with us, it's simply this. So it's much easier to understand what's going on, and the performance is much better, because you don't have all those internal steps within the hypervisor — this left side is converted to this right side. And if you want to add security groups — security, which everybody does — it's simply this. The internal architecture of your KVM host is much, much simpler, which in the end means it can push more.

Routing. We support distributed routing. I'll go fast on what it is. The logical view is: you go from VM to logical router to VM. Physically, without distribution, you go from the VM on your KVM1 to the Neutron server and back — you get this ping-pong. It works fine, no big deal, it's just that performance sucks and you use a lot of your fabric for nothing. With distributed routing, you go directly from hypervisor to hypervisor: if they're on two different racks, you go from rack two to rack one; if the two VMs are on the same rack, you go straight from KVM to KVM; and if they're on the same hypervisor, you never leave the hypervisor — even though, again, the logical view is that you're doing L3. That's the beauty of distributing services.
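On the physical side, the day-zero piece of that no-NAT trick is just an ordinary BGP neighbor statement. A hypothetical sketch in FRR/Quagga-style syntax — the ASNs and 20.20.20.x addressing are made up for illustration, and the NSX side is configured through its own UI/API:

```
! Physical router: peer with the NSX logical router once, on day zero.
router bgp 65000
 neighbor 20.20.20.2 remote-as 65001   ! uplink of the NSX logical router
!
! Nothing else to do here. As tenants create no-NAT topologies in OpenStack,
! the NSX router starts advertising their subnets (or, with NAT, just the
! floating IPs), and this router learns them over the existing session.
```

The point is that the physical network team touches this exactly once; everything afterward is driven by tenant actions in OpenStack.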
For north-south, when you want to go out, we scale very, very high, especially compared to default Neutron, where you'll have a hard time going over one gig. How do we do the magic? The tenant logical router in OpenStack is distributed, again. But when you go to the physical world, remember, you go through that green NSX logical router at the top. That's a logical view — in reality it can be multiple servers implementing that green logical router. Each one supports DPDK, an Intel library that helps the NIC process a huge number of packets per second, so each one can do 80 gig. And with the power of distribution via ECMP, different VMs go through different routers — up to eight of them. So 80 gig times 8, which is much more than you'll ever need, especially in the enterprise.

Okay, last thing, maybe the most important, and then I'll do the live demo: management. When something breaks — VM1 cannot talk to VM2 — it's a pain. I guess I'm a little bit of a geek as well, because I understand the beautiful ovs-dpctl dump-flows piped through grep to see the OpenFlow state on your OVS, and I do that too. But I guess I'm not a real, real geek, because I hate doing it — and with NSX I don't have to. What we have, and I'll show you in the demo where it will be easier to see, is a real mapping between what's going on in the logical view and the physical view, and we can trace a flow's traffic path easily, with a few clicks. I'll show you what that means in the live demo. Okay, all good? That was the lecture. Now the real stuff. Demo — and 20 minutes left, perfect. Before doing the demo, let me show you what I'm going to present. Right now I have nothing in my OpenStack lab.
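As an aside before the demo, the ECMP arithmetic above can be written down as a tiny sketch — the 80 Gbps-per-edge and 8-way ECMP figures are the nominal numbers quoted here, not a benchmark:

```python
# Nominal figures from the talk: each edge node pushes ~80 Gbps with DPDK,
# and ECMP spreads flows across up to 8 active edge nodes.
PER_EDGE_GBPS = 80
MAX_ECMP_PATHS = 8

def aggregate_north_south_gbps(edges: int) -> int:
    """Aggregate north-south capacity under equal-cost multipath."""
    if not 1 <= edges <= MAX_ECMP_PATHS:
        raise ValueError("this sketch assumes 1 to 8 ECMP edge nodes")
    return edges * PER_EDGE_GBPS

print(aggregate_north_south_gbps(8))  # 640
```

Individual flows still hash onto a single edge, so ECMP raises aggregate capacity rather than single-flow throughput — which is fine, since the claim is about total north-south traffic.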
I have not created any logical routers, networks, subnets, or security groups — nothing exists in my OpenStack; it's empty. But I have this NSX logical router connected to my physical router, with BGP established. My physical router doesn't know any routes, because my NSX logical router doesn't advertise anything yet, but the adjacency is there and they know each other. I will deploy this basic application — two networks, one router — first with SNAT, with floating IPs, I should say, and you'll see that automatically my physical world learns those IPs and I can reach the VMs via their floating IPs. Then I'll do another one without SNAT, without any floating IPs, and automatically my real world will learn those subnets — the two 10.22 subnets — with no human intervention whatsoever, just the tenant creating the stuff. I'll use a Heat template to create all of it, because I don't want to do 200 clicks in front of you. Then I'll take the use case where the tenant, user two, calls you and says: hey, my web VM cannot talk to the SQL server on my other VM, and it's because of you — your cloud doesn't work. And I'll show you what a cloud admin running NSX would do to investigate that, the easy way.

So let's go. I have nothing now — no switches, just the VLAN connected to my real world — and no logical routers except the top one, which is not known in OpenStack; OpenStack doesn't know about it, and it's connected to my physical router. You can see — follow the mouse — that I configured BGP on this NSX logical router, and it will advertise whatever gets created; right now there is nothing. That's what you see here. And in OpenStack, you'll see I have nothing yet — my Wi-Fi is super slow — just the external network. And I guess I got kicked out. Oh, by the way, do you see it? It's Mirantis. NSX can run on whatever OpenStack distro.
I could have used DevStack, but DevStack is not really production. I could have used VIO, the VMware OpenStack distro, but I want to show you that it runs on anything, on the OpenStack distro you love — and I'm not here to explain why VIO is better than others. It just works; as far as NSX is concerned, I don't care, use whatever you want. Routers: I have nothing. And on my physical router — which I lost, a timeout — the physical router is a Vyatta; if it were a Cisco or a Juniper, that would be beautiful as well. Show BGP neighbor 20.20.20.3: the BGP state is up. Show ip route bgp: it has not learned anything. My physical router has learned nothing yet via BGP, which makes sense — nothing exists in OpenStack.

Now let me log in as user1. Have I been kicked out or not? Yes — user1. So it's live; as you can see, it's not recorded. And I'm not sweating, I'm not shaking, so: full confidence. Here's a Heat template. Something pretty cool: there is no lock-in. The only lock-in you'll have with NSX is that you'll love it so much you won't use anything else. All the tools you use to create your stuff — Heat, for instance, or any other tool, or Horizon click, click, click — there is nothing proprietary. You can see in that script — maybe it's too small for you — that I'm just creating networks using standard Heat constructs. I create security groups, I create floating IPs, I create VMs. It's the typical thing you'd do without NSX anyway; it just happens that your OpenStack is configured with the Neutron NSX driver. Ctrl-A, Ctrl-V, and let's do it quickly: call it user1, password, whatever. Beautiful. Heat does its job: it's creating the network, the router, the floating IPs, all that stuff. Actually, I can see it here in this nice view — it's not fully finished; I have the router, I have the two networks. Here we go.
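The template scrolls by quickly in the demo, but a minimal Heat template doing the same kind of thing looks roughly like this — resource names, the CIDR, the external network name, and the image/flavor are placeholders; the point is that only standard OS::Neutron and OS::Nova resource types appear, which is the "nothing proprietary" argument:

```yaml
heat_template_version: 2015-04-30

resources:
  web_net:
    type: OS::Neutron::Net
    properties: {name: web-tier}

  web_subnet:
    type: OS::Neutron::Subnet
    properties:
      network: {get_resource: web_net}
      cidr: 10.22.1.0/24

  tenant_router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info: {network: external-net}

  web_if:
    type: OS::Neutron::RouterInterface
    properties:
      router: {get_resource: tenant_router}
      subnet: {get_resource: web_subnet}

  web_vm:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.tiny
      networks: [{network: {get_resource: web_net}}]
```

The same template runs unchanged against a vanilla Neutron backend or the NSX driver; only the cloud's Neutron configuration differs.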
And the VMs are coming — typical OpenStack stuff, I won't spend time on it. What happened in the backend, in my NSX? If I click correctly, I now have the two networks — remember, the web tier and the DB tier — as logical switches in my world. That's pretty neat, because for troubleshooting we use tags: the Neutron plugin gives NSX all the OpenStack context, so you can quickly see, oh, that logical switch, NSX heat-net blah blah, is actually the OpenStack network created by tenant one, with this UUID, which is the Neutron network UUID. Easy to map when you have thousands of them — and we have filtering capabilities. You can see the ports on that switch: one for DHCP, one toward the router, one toward my VM. Same for the other switch. You can see my router — refresh, refresh — the tenant user-one router, connected up to that big green NSX router, remember, and down to the logical switches. I created two logical switches, so why is there a third one? It's for the metadata service; we can talk about that later. All of it has been done automatically — it's beautiful. And now, if I look at my physical world: look at that, the two floating IPs are known in the physical world, and I can access them.

Now let's do the second one quickly, before I run out of time — definitely quickly. User two. Let's log out and log in as user two. And oops, you can see the password — that's fine, we're all friends. Let's create the same topology for user two, but this time without NAT. Okay: user two, password. Beautiful. It will create the same stuff, and in 20, 30 seconds it will be done. And then, remember the story I told you: the user calls you and says, hey, you know what, my web VM cannot talk to my SQL VM — and obviously, it's because of you. Everything is because of you.
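The tag mapping just described — which is also what the investigation below leans on — is easy to picture as data. A sketch; the tag key names and values here are invented for illustration (the real plugin uses its own tag scopes), but the lookup idea is the same:

```python
# Hypothetical shape of the metadata the Neutron plugin attaches to each NSX
# logical switch: the OpenStack project and the Neutron network UUID travel
# along as tags, so a cloud admin can map switches back to tenants.
switches = [
    {"name": "heat-net-web", "tags": {"os-project-name": "user1",
                                      "os-neutron-net-id": "9c1d..."}},
    {"name": "nonat-db",     "tags": {"os-project-name": "user2",
                                      "os-neutron-net-id": "264f..."}},
]

def switches_for_project(switches, project):
    """Return the logical switches tagged with the given OpenStack project."""
    return [s["name"] for s in switches
            if s["tags"].get("os-project-name") == project]

print(switches_for_project(switches, "user2"))  # ['nonat-db']
```

With thousands of switches, filtering on these tags is what makes "whose switch is this?" a one-step question instead of a UUID-matching exercise.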
And then, what you would do in real life: you'd ask the guy, hey, give me SSH or root access to your VM so I can run some tcpdump. Or you'd go onto the KVM host and do the beautiful ovs-dpctl dump-flows piped through grep to check whether the OpenFlow state is really there, then dig through iptables — yeah, go for it. Or first go to the tattoo shop to get your big geek tattoo. But you don't have a tattoo, because you're not a full, full geek. So instead — sorry, before going there — you ask: okay, what's your tenant? He tells you: I'm user 2, and my logical switch is blah blah. So you go to his logical switch — you know it's his because of the tag, user 2 — and you take the IDs of his two VMs: the web VM, ending in 264, and the DB VM, ending in F4C. You go to the tool and enter them — wait, I think it's the other way around. And you see that the IP address and MAC address of each VM are filled in automatically — you don't have to enter them, because OpenStack knows the IPs and gave that information to NSX; we're friends. And you say: okay, let me try for you, Mr. Customer User 2 — you tell me it's MySQL, so port 3306 — and let's trace. When I click Trace, the NSX manager talks to the hypervisor — whichever it is, KVM or ESX — and tells it: inject this packet. It injects it for real, just after the VM's vNIC, with a flag that marks it as a test packet — and I don't know why it's taking so long — and then the hypervisor processes the packet and forwards it, if it can, all the way to the end. Live demo, I guess. Why doesn't it like me? And you will see — or you should see — quick, what's going on? Oh, I completely lost access to my lab. No. Okay, beautiful.
That's the beauty of a live demo — although usually it's never like this; well, not usually. I just refreshed the page; sorry, I'll redo it. 6-4... what did I say, 2-C-4? No, it's not this one. Oh, that's why it wasn't working — there was no communication possible; I picked the wrong ports, sorry about that. I should write them down. So: nonat-web, which is F4C, and the DB one, 2-C-4. F4C to 2-C-4 — yeah, that's much better. That's why it took so long before: it would have timed out, because no connection is possible between user one and user two; they are completely isolated. Port 3306, and let's go. Actually, I could have spotted the mistake from the IP addresses reported there. So the NSX manager asks the hypervisor to do it, and you can see — quick this time — that VM1 is on KVM1. The logical view is: VM1 on KVM1, to that logical switch, to that logical router, to the other logical switch, to reach VM2, which is on ESX — I actually have a mix of ESX and KVM, and communication between ESX and KVM is all good. So the logical view looks fine; logical and physical both look fine. But when KVM1 received the traffic, it was dropped right at the beginning. Why? Because of a firewall rule. You check the firewall rule and you see — I'll go fast, because now I'm running late — okay: this heat-net is the one, scroll down, and: oh, Mr. Customer, in your script you didn't allow MySQL, 3306 — you made a typo and put 3307. And the tenant can fix that directly in OpenStack. So now I'm tenant 2 — not the admin, tenant 2. I go to the DB security group, manage rules, find the typo I made in my Heat template, delete that incorrect rule, and add 3306, allowed only from the web tier. Now I have my security group in OpenStack, which is obviously translated to an NSX firewall rule, which is here.
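The fix the tenant makes in Horizon corresponds to a one-character change in the Heat template. A hypothetical fragment of what the corrected template might look like — resource names are placeholders, and only standard OS::Neutron::SecurityGroup properties are used:

```yaml
web_security_group:
  type: OS::Neutron::SecurityGroup
  properties:
    name: web-tier-sg

db_security_group:
  type: OS::Neutron::SecurityGroup
  properties:
    name: db-tier-sg
    rules:
      - direction: ingress
        protocol: tcp
        port_range_min: 3306   # the broken template said 3307
        port_range_max: 3306
        remote_group_id: {get_resource: web_security_group}
```

Using `remote_group_id` rather than a CIDR is what restricts MySQL access to the web tier only, which is exactly the rule added by hand in the demo.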
And now it has the right TCP port — I guess it hadn't synced yet — and you can see it here. Something cool: if you go back to the tools and do the same traceflow as before — and again I forgot the port IDs; I think it's this one to my DB... was it this one? Yes — and redo the test. The customer could test from his application and it would work as well, but I can also test from here. Again, the NSX manager talks to KVM1, because that's the host with VM1, and asks it to inject the packet. Where does it go? It's received by KVM1. KVM1 does the first step, the security group, and it matches the firewall rule that says: yes, this traffic is accepted — you can see the rule ID, and if I click it, I go straight to the firewall rule that accepted it. On the left you can see where you are in the logical view. Then it does the switching — I'm still inside KVM1 — then the routing — on the logical view I've moved, but physically I'm still inside KVM1, because we have distributed routing, DVR. Once the routing is done, it goes to this logical switch, still inside KVM1. Then it needs to reach VM2, which is on ESX. How do I get from this box to that box? Through the real world, encapsulated. That's what you see here: you go to the real world, to ESX1, through the physical network. If there were any issue in the physical world, you would see this in red, because the communication between the two would fail and you would see nothing received on the other side. Then you reach ESX1, the security group — the DFW — and you go up toward the VM. When it gets up to the VM, we send it as far as the VM's vNIC and drop it there: the NSX agent inside ESX knows it's a test packet, so it never actually delivers it to the VM. But if you sniff the real network, you will see the packet — it's a real packet that is sent.
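To make the trace output concrete: what comes back is essentially an ordered list of per-hop observations like the ones just walked through. A sketch of reading such a result — the field names here are invented for illustration, not the real NSX API schema:

```python
# A trace result as a list of hop observations: security group check,
# switching, routing, tunnel hop, and final delivery at the vNIC.
def first_drop(observations):
    """Return the first observation that dropped the packet, or None."""
    return next((o for o in observations if o["verdict"] == "dropped"), None)

# Before the security group fix: KVM1's firewall drops the packet immediately.
broken = [
    {"node": "KVM1", "step": "firewall", "verdict": "dropped", "rule": 1007},
]

# After the fix: every hop forwards, ending with delivery at the vNIC on ESX1.
fixed = [
    {"node": "KVM1", "step": "firewall",  "verdict": "forwarded", "rule": 1008},
    {"node": "KVM1", "step": "switching", "verdict": "forwarded"},
    {"node": "KVM1", "step": "routing",   "verdict": "forwarded"},
    {"node": "ESX1", "step": "firewall",  "verdict": "forwarded", "rule": 1008},
    {"node": "ESX1", "step": "delivered", "verdict": "delivered"},
]

print(first_drop(broken)["rule"])   # the rule ID to go look at
print(first_drop(fixed) is None)    # True: the packet made it end to end
```

The value of the tool is exactly this: the first "dropped" observation pins the failure to a node and a rule, instead of leaving you to bisect the path by hand.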
It's just sent from vNIC to vNIC, stopping before it reaches the VM. I find that much cooler than ovs-dpctl and tcpdump and all that. Now, if you love those, you can still do them, even on ESX — that's fine, you can keep your tattoo and still do that. All good? Okay, I think I have three minutes left, so quick key takeaways.

What I've argued for the last 35, 40 minutes is really this: Neutron offers network and security services, and honestly, between you and me, it's made by geeks — we have a bunch of them, three guys over there — made by geeks, for geeks. In the enterprise, you don't have that many geeks, so enterprises — again, feel free to correct me if you disagree — use vendors. Why do I believe we are the right choice, the best choice, for your Neutron? Because of all of this. Multi-hypervisor support: if you have KVM, beautiful, I support it; if you have ESX — and many people in this building cannot say the same — I love that too, and it works as well; and you can combine the two, as I demonstrated. It's great on stability: NSX has close to 2,000 customers in production — not a lot of people here can say that. High-performance support: you can call us, we're a phone number away, and we understand Neutron — not just NSX, Neutron itself; we're number five, six, whatever, among Neutron contributors. And OVS, Open vSwitch, which KVM uses even if you don't use NSX — we invented it at Nicira, and we are still by far the key contributor, so we know OVS inside out. And we offer — and I think this is the most important thing, it's why customers select us — well, it's not really performance; I mean, for some of them it is.
But for most of them, it's day-two operations. They don't have a tattoo, they don't know how to operate that stuff, and they want a simple tool. And you can use it on VIO, beautiful, or on the OpenStack distro you love — it's not tied to the VMware OpenStack distro. For the network guys in the room: I'm old as well, and people see me like this — if you ask my colleagues, or my wife, unfortunately she sees me like this — but I want to be seen like this, and we can help you a little bit. OpenStack helps you a lot, but if you want to look cool, we can also give you tattoos to look like the real geeks. I have some — you won't become a real geek, they only last a couple of weeks, but for two weeks you'll look cool. And that's pretty much it. Q&A now, if you want. I think one more minute left. All good? Yeah.

So the question is: do we support micro-segmentation — which is a marketing term we invented at VMware, which is cool, so I guess you're a bit of a VMware addict, that's nice. Yes, we support it. What's the technical definition of micro-segmentation? It's the ability to secure traffic anywhere, even within the same subnet, the same L2 domain. OpenStack with security groups can do that — you can secure traffic even between two VMs in the same domain. And what you get, which is pretty cool, is that the rules are stateful: we check SYN and ACK, sequence numbers, acknowledgment numbers. So yes, fully supported, and it's all cool. Any other question? Yeah. The question is: which NSX did I demo? We have two flavors of NSX: the NSX flavor for vCenter, and the NSX flavor for multi-hypervisor. Here, I'm not at VMworld, I'm at OpenStack, and I know you love KVM, for good or bad reasons. And obviously, what did I demo here?
I used the multi-hypervisor NSX flavor, so I could demonstrate all the network and security services on KVM as well. If you use the other flavor, I can demonstrate that only on ESX, okay? Any other question? Yeah. So the question is: how do you do NFV, or service insertion? Today, the way it's done is with routing — you build your network so the flow goes through your appliance. I didn't talk about roadmap at all today; everything I showed is out of the box today. So it is available today, if you arrange your routing to send traffic through your appliance, but we're working on a nicer, more optimized way. Yeah — I missed the question, can you repeat? Oh, the question is: how do you provide services beyond a stateful layer-4 firewall — advanced security services? With NSX we have what we call an ecosystem: the ability to offer advanced services with our partners — antivirus, for example, which does not exist natively in NSX, or deep inspection with Palo Alto, Check Point, Fortinet, Symantec, McAfee, whatever — beautiful vendors, I love them all. Within OpenStack, we can do that, but — the two flavors again, remember, multi-hypervisor and V — we can do that today on the V flavor; we cannot do it yet on the multi-hypervisor flavor. Does that mean we don't want to? No — we're getting there, it's just not available in that flavor yet. If everything underneath is ESX, I can do it, and you can magically have Symantec, or Palo Alto or whoever, do its beautiful work — but not with KVM in this flavor today. Talk to me next year. Hyper-V support: we have that in demos. How does it work for us? Like KVM. Why? Because Hyper-V made a big move: they now support OVS on their platform.
So you can have OVS running on Hyper-V, and since the way we manage KVM and deliver all those services is through OVS, we'll do the same with Hyper-V. We have it in the labs, but it's not fully QA-regressed yet — and you won't wait long: by the end of the year we can demo Hyper-V in PoCs. Then you'll see three stickers: Hyper-V, VMware, KVM — or maybe VMware will be first. Okay. By the way, we're looking for beta testers on Hyper-V, so if you're interested, come see me. Oh, yeah. So the question is about management: at VMware we have zillions of products, I can't even name them all. We can natively do some cool things for management and operations, like what I showed you — click, click, click, nice UI and graphics, natively in NSX. But we also have other products that provide management, monitoring, and more advanced things: vRealize Operations, Log Insight, and vRealize Network Insight, VRNI. VRNI has a plugin to talk to NSX — yes, to the NSX-V flavor today, not the NSX-T flavor. So today I cannot use VRNI with the NSX flavor for KVM. I expect I will, but I don't know their roadmap, so I don't know when that will be available. If you love VMware and you're a VMware addict with V everywhere and a VMware sticker tattooed on you — VRNI is already available. If you have a little penguin, wait a little bit. Okay. Okay, thanks a lot.