Okay, thanks for joining. Let's start with the introductions. I'm Dimitri Desmidt... And I'm Yves Fauser. We are both from NSX, the network and security piece within VMware, and today we'll talk about Neutron, the network and security piece of OpenStack.

Before starting, a quick show of hands: who in the audience considers himself more on the network and security side? Not so many people, so I'm not sure you're in the right session. And who believes Neutron is pretty cool, a piece of cake, simple and all good? Two people. Okay, because when I think of OpenStack and Neutron, I picture myself more like this. And I'm not the only one — the clicker doesn't work — if you look at the OpenStack user survey from 2016, out of all the projects, Neutron together with Ceilometer was considered the most challenging and very complex to use. And if you look at the latest one, released just a week ago (I did not vote, so my vote is not in this one), it's still the same: Neutron is considered pretty hard and needs to be reworked and made simpler. So I don't feel that dumb; I'm not the only one who finds it a complex project to get up and running and doing what you want.

There are reasons for that, and that's what I'll explain in the next five minutes. What you're looking at now comes from OpenStack; I didn't make it myself. This is the Neutron reference architecture just to do L2 — and you cannot do something simpler than L2. If you do L2 in a Visio, you just draw a line and plug VMs on it; pretty simple. In the physical world, you just create a VLAN and plug physical servers into it; pretty simple. You've done that all your life, or your dad did that all his life. In Neutron, this is what it looks like. Don't tell me that's simple. It's a bunch of Linux bridges and a bunch of Open vSwitch instances, and you — or OpenStack for you — plug everything pretty much where it should be. But if something bad happens, good luck troubleshooting that. Some people are good at this; some are gurus and understand that diagram. I gave up a long time ago trying to understand it. And even if you do understand that piece, you still have challenges with performance and troubleshooting, because with so many hops inside the KVM host you love, the whole datapath slows down. And when OpenStack or KVM doesn't do what it should, then yeah, good luck.

And that's only L2. Now if you do L3, just basic routing, this is what it is in your Visio, and this is what it is in OpenStack. I want to save time, so I won't go through how it works, but this complex machinery is what Neutron builds for you. And I'm not showing distributed L3, I'm not showing security groups (the DFW), I'm not showing load balancing, I'm not showing QoS. Just with basic L2 and basic L3 it's already a nightmare — sorry, I should not say nightmare, but for me it's already a nightmare. Maybe some of you understand that piece; you didn't raise your hands, so I guess you do. But for the people who don't understand it, or don't want to, that's why they use a vendor plugin for Neutron instead of the reference implementation: to get a simpler implementation. That's what was in the survey.
To get support when something bad happens — you want to be able to call someone to help you. To get better performance, because all the hops we just talked about kill the throughput you can get out of your hypervisor. Troubleshooting, as I said before, and I'll show you a couple of things later; it's just painful. And for all of that, plus scalability — to support many VMs and many hypervisors — and high availability. I put that one in italics because some people may claim, and they're right, that you can build the Neutron reference architecture in a fully scalable, highly available way, but it adds even more complexity to the implementation. So you want something simple that works, with high performance, easy to troubleshoot, and with support, and for that you go with a vendor. NSX is one of them.

Now, why do I believe NSX is a great plugin for Neutron? That's what I'll explain in the next two minutes. First of all, NSX comes from a vendor. So okay, you have to pay for it, but what do you get out of it? Each release of VMware NSX has to go through tens of thousands of tests — functional tests, but also scale and longevity tests. So at the end of the day, you have some reassurance that it should work at your place as well. High availability, which is very complex to do in OpenStack Neutron, you get out of the box; you don't have to configure anything, it's there by design. It also gives you flexibility, which is a big motto of OpenStack: choice. Using VMware NSX does not mean you must use ESXi for the compute. You can use ESXi, you can use KVM — Red Hat, Ubuntu, we love them all — and you get the same feature set of network and security services whether you use KVM or ESXi behind it for your compute.

In terms of services: even though OpenStack Neutron supports a bunch of things like QoS, distributed routers and load balancing, many, many people don't use them because of complexity, bugs, or interoperability issues between one and another. What do you get out of NSX? You have L2, obviously, with overlays. Distributed routing, so DVR if I use the OpenStack wording. NAT and no-NAT — if you're in the enterprise, NAT sucks; lots of people don't want floating IPs, and with NSX you don't have to: you can do no-NAT and it works great, and I'll show you in a minute how we do that. You have L2 connectivity in the same subnet between VMs and physical servers. You have security groups with stateful firewalling. And you can have load balancing, all of it driven from OpenStack. You don't talk to NSX; you talk to OpenStack with the front-end OpenStack APIs you love, doing click, click, click in Horizon, or your Heat template, or whatever you use. And because the NSX plugin is configured in Neutron, it translates that to NSX, and NSX builds that beautiful network topology behind the scenes. All of that exists when you use NSX behind Neutron.

Now, simple integration into your physical world. We have something pretty unique that lets you avoid floating IPs, so avoid NAT. When you deploy NSX in an OpenStack environment, you pre-deploy, outside of OpenStack, a logical router — that's what you have at the top — and you link it to your physical routers. The NSX top-tier router has a BGP adjacency with your physical routers and advertises all the subnets you have in OpenStack.
On day zero it obviously advertises nothing, but when tenants start deploying things from OpenStack — like those two networks and this logical router — the NSX plugin automatically plugs the tenant router into the top-tier NSX router. So now the top-tier router advertises subnet A and subnet B. You don't need floating IPs; you can do no-NAT, and your physical world automatically learns subnets A and B without touching anything on the physical side, in OpenStack, or in NSX. And if you want to use NAT, then only the NATed addresses are advertised: the plugin is smart enough to know, oh, the green tenant is configuring its router with SNAT, so I will only advertise the floating IPs, and so on and so forth. Every topology we support is advertised the right way into the physical world. Much simpler.

I'll finish with performance, and I'll go quickly because we want to spend some time on Kubernetes, which is the new, cool stuff everybody talks about, and we do some pretty cool stuff there too. Performance: on the left of the screen you see the reference architecture, that beautiful bunch of boxes and wires. What do we do inside your KVM host with the NSX plugin? We simply control the Open vSwitch, and we gave up on all those Linux bridges. That gives you much, much higher performance, because you don't have those internal hops: we can saturate two 10-gig NICs out of each KVM host you have and love. And obviously, if you also love ESXi, we do it on ESXi — I say KVM because, yeah, it's OpenStack here today; at VMworld I don't use that word too much — but we support both, and at the same scale. In the diagram on the left I'm not showing the reference architecture with security groups, but on the right I'm showing ours, and it's not using Linux bridges and things like that. It simply plugs the OVS firewall, with conntrack for stateful firewalling, into the Open vSwitch, and that's it. So adding security doesn't drop the performance: you can still saturate your two 10-gig NICs. That's the performance out of your hypervisor.

Now for routing, we support distributed routing. Let's go through the animation quickly. In the logical view, the blue VM goes to its default gateway, the logical router, which forwards to the yellow VM — yellow, green, whatever. That's the logical view. But in the physical view, inside your physical fabric, traffic actually goes from hypervisor to hypervisor, because the router is distributed across all the KVM hosts. And if the blue and the green VMs are on the same hypervisor, the traffic doesn't even leave the hypervisor; it stays internal, even with routing. We support that.

For the north-south traffic, when you need to go from a VM to the outside world or from the outside world to your VM: on the left you have the traffic as you would imagine it, from your VM to the logical router to the physical VLAN in the external world. On the right is the physical path. Let's use the animation. The first blue VM goes to its default gateway and out to the outside world through what we call an edge node, which hosts your top-tier router going to the physical world. Each edge node — it's not a VM — runs DPDK, so it can push up to 80 gig of traffic, and the edge nodes actually run in cluster mode.
So if you have another VM, with obviously another MAC address, it will be hashed and may end up on another edge node. So you don't have 80 gig of north-south traffic overall; you have 80 gig per edge node, and we support a cluster of eight. So really high throughput for the north-south traffic as well.

Troubleshooting — I'll show you that in the demo, but if you go with the reference architecture and your VM one cannot talk to VM two... I mean, if you're in the Neutron space, you've done those beautiful commands: ovs-dpctl, ovs-vsctl, ovs-ofctl and all those dump options. Then you do your magic grep to find the right flow, to see what the action is, blah, blah, blah. And five days later you figure it out and okay, it's working again. If you're really good at this, maybe it's only one hour, but it's still painful. What we do with NSX, because NSX is the SDN that controls the OVS, is let you do it from a nice UI — click, click, click — and see the path in the logical world. So when VM one, on the KVM host you love, wants to talk to VM two, on the ESXi host you should love — because we support KVM and ESXi, even in a mixed environment — it goes through the logical switch, the logical router, the logical switch, and on to the second hypervisor. That's the logical view, and on the right side you see each step done by NSX — the DFW, the switching, the routing — and those steps are really executed. I'll show you that in the demo; it will be simpler.

Okay, five more minutes, so just on time for the demo; we'll do the Q&A session at the end. My demo is pretty much the typical environment you have at customers. You have the top-tier router I explained before, which is BGP-peered with the physical router in my lab. At the beginning I have nothing — that's what I have in my lab, nothing — so my physical router knows nothing. I won't do this first scenario, to save time; I'll just do the second one: no-NAT, which is popular in the enterprise. I'll deploy a VM on one logical switch, another VM on another logical switch, a logical router, and plug it to the external world with no SNAT. And you will see that the physical router learns those two subnets, okay? If you want to see SNAT, I can also do it for you at the end. And this one I'll do quickly: let's say the tenant is calling you because its blue VM cannot do MySQL to the green VM, and I'll show you how to troubleshoot that quickly without running twenty lines of commands with five greps in them.

What I've done so far is very similar to a session I did six months ago in Barcelona; you have the link to the YouTube video of it. It was using Mirantis, just to show you that — yes, VMware has a great OpenStack distro called VIO, and NSX works great with it — but it works with any distro. If you don't want VIO for whatever reason, because you want to use somebody else for good or bad reasons, or you build OpenStack yourself because you're a real man, then it works on anything, as long as it's OpenStack. Last time I showed you Mirantis; this time I'm showing you Red Hat OpenStack 10, which is Newton, but roughly the demo is the same. So let's go there. Tink, tink, tink, tink, tink. I guess I'm still logged in — yeah, I'm logged in as... no, I'm not. user1, VMware1!. So that's Red Hat OpenStack 10, this one. So Newton.
I'm logged in as user1 and I will deploy what I showed you, and I do it via Heat because I don't want to do 200 clicks. It's the same Heat template you could use even if you don't have NSX. What do I do in this Heat template? I create a network, I create a subnet, I create a second network, I create a second subnet, I create a router with no NAT, I plug the router into the two subnets, I create my security groups to accept HTTP for my web VM and MySQL for my DB VM, and I create the Neutron ports and the two VMs. So it's the Heat you would write without NSX; nothing NSX-specific in it.

Let's do that quickly. Launch. Here we go. Next. Demo one, whatever password. Oh, just before doing this — here we go, I lost it — let me go to my physical router. It's a Vyatta, so you can see it's really physical. show ip route bgp: that's the physical router in the physical world, and it learns nothing via BGP, because in my OpenStack this is the first tenant and nothing is in OpenStack yet. So let's deploy that stack. Here we go, it's deploying. In Horizon it's deploying; it's not finished, but we can already see my two switches and my logical router. If I refresh — because I had only one VM a moment ago — here we go: my two VMs, one on each logical switch. And if we look at the VMs, they are deploying, or maybe already done — yeah, one is done, one is not, but we're almost there. And now if I look, those subnets have been learned by my physical world, and I can access my VMs with no NAT, no floating IPs, thanks to the trick I explained before. Pretty neat.

Now, if you haven't seen NSX yet, let's look at what happened. In OpenStack it's the default stuff you've been doing for years: your VMs and your networks are here, with those very friendly UUIDs — like, that's my name and that's the friendly UUID. In NSX, those switches are — if I search for "switch", here we go — those switches are here, and to make your life as an OpenStack cloud admin easier, we use the name created by the tenant plus the OpenStack UUID, so things are easier to find, and you have a bunch of tags that tell you who created the object and its UUID. Same thing for the router. And you have this beautiful filter — search, sorry. If you want to see everything from tenant one, you just go here, and you can see all the logical switch ports, the DNS, the DHCP created by OpenStack, the routers, the VMs, everything. So even with a large OpenStack you can find things easily.
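Going back to the Heat template described at the start of this demo, a minimal sketch of such a two-network, no-NAT stack might look like the following. This is not the template from the demo: resource names, CIDRs, image and flavor are placeholders, and disabling SNAT on the gateway (enable_snat: false) may require admin privileges depending on your Neutron policy.

```yaml
heat_template_version: 2016-04-08
description: Illustrative two-tier app behind a no-NAT tenant router (placeholder values).

resources:
  web_net:    {type: OS::Neutron::Net, properties: {name: web-net}}
  web_subnet:
    type: OS::Neutron::Subnet
    properties: {network: {get_resource: web_net}, cidr: 172.16.10.0/24}
  db_net:     {type: OS::Neutron::Net, properties: {name: db-net}}
  db_subnet:
    type: OS::Neutron::Subnet
    properties: {network: {get_resource: db_net}, cidr: 172.16.20.0/24}

  # Tenant router uplinked to the external network with SNAT disabled, so the
  # subnets are routed (and advertised northbound by NSX) instead of NATed.
  tenant_router:
    type: OS::Neutron::Router
    properties:
      external_gateway_info:
        network: ext-net        # pre-existing external network (placeholder name)
        enable_snat: false

  web_if:
    type: OS::Neutron::RouterInterface
    properties: {router: {get_resource: tenant_router}, subnet: {get_resource: web_subnet}}
  db_if:
    type: OS::Neutron::RouterInterface
    properties: {router: {get_resource: tenant_router}, subnet: {get_resource: db_subnet}}

  # Security groups: HTTP for the web VM, MySQL for the DB VM.
  web_sg:
    type: OS::Neutron::SecurityGroup
    properties:
      rules: [{protocol: tcp, port_range_min: 80, port_range_max: 80}]
  db_sg:
    type: OS::Neutron::SecurityGroup
    properties:
      rules: [{protocol: tcp, port_range_min: 3306, port_range_max: 3306}]

  web_port:
    type: OS::Neutron::Port
    properties: {network: {get_resource: web_net}, security_groups: [{get_resource: web_sg}]}
  db_port:
    type: OS::Neutron::Port
    properties: {network: {get_resource: db_net}, security_groups: [{get_resource: db_sg}]}

  web_vm:
    type: OS::Nova::Server
    properties:
      image: cirros             # placeholder image
      flavor: m1.tiny           # placeholder flavor
      networks: [{port: {get_resource: web_port}}]
  db_vm:
    type: OS::Nova::Server
    properties:
      image: cirros             # placeholder image
      flavor: m1.tiny           # placeholder flavor
      networks: [{port: {get_resource: db_port}}]
```

You would launch a template like this exactly as any other Heat stack, from Horizon's "Launch Stack" dialog or with the openstack CLI; nothing in it refers to NSX, which is the point the demo makes.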
And the last thing — I have minus one minute. When VM one cannot talk to VM two, somebody calls you, and instead of asking for SSH access to run some tests, or going to the KVM host and running those beautiful ovs-ofctl commands, you can simply come here, select VM one and VM two — you can see the real IP and MAC addresses behind the scenes — and do a ping, which is what I'll do, or something else. What this does is really send a packet: the NSX manager talks to the KVM host and tells it to forward that packet. So if you sniff the wire, you will really see that packet, carrying a flag that marks it as a test packet, and it goes through all the steps — the DFW, the switching, the routing — up to the far-end hypervisor. And when it has to be delivered to the VM, because of that flag it won't actually be delivered, but the packet really went through.

And if I try something that is not accepted for firewall reasons — let's say SSH, which is not in the security group policy I allowed — then it takes a little time, because the packet really goes through each element and the results are collected, and it tells me: dropped at the very beginning, because of the firewall rule, with the ID of the firewall rule. Or, if it was not working because KVM1 cannot talk to KVM2 — the data-plane communication is broken — you would see the traffic go in, then get dropped when it has to cross to the other KVM host. So it makes your life much, much easier. Any questions? Anyway, Q&A is at the end. Okay, switch to the other laptop, please. Thank you.

Awesome. Okay, let's talk a bit about Kubernetes and what we are doing there. Before I go into the details of what we have built, let's go through some of the challenges, and honestly I don't see too many differences between the OpenStack challenges and the container challenges — so you are the right audience to understand them. If you look at the reference implementation, you have a lot of ports everywhere and no central point of management: how do you troubleshoot individual ports, where do you see counters in a central place, and so on. It's the same for most container implementations out there: you have thousands and thousands of containers with a lot of ports, but where do you see them? Where do you see the traffic counters? How do you redirect traffic? How do you troubleshoot? That's one thing we want to address: every container — or pod, in Kubernetes speak — will have a dedicated interface in NSX that you can see in the central management system, just like OpenStack instances or vSphere VMs.

Some Kubernetes plugins today don't support network policy yet, and network policy itself is still in beta. So in a lot of cases, today's Kubernetes clusters are wide open: every pod, every container can talk to all the others, no matter whether they are in different tenants or not. This is being addressed by the Kubernetes community with the network policy feature, but again, not every plugin supports it. What we add on top of network policy support is admin rules, where you can predefine sets of rules that apply to the whole cluster no matter what the tenant does: you don't allow traffic into specific pods, you don't allow traffic from a specific IP, so you can also black-hole the bad guys — a sketch of such a policy is shown below. And then, of course, we want to automate everything, as we just saw with Heat: if your user, who is a developer, deploys something in Kubernetes, he shouldn't have to ask for anything first, shouldn't have to call the network guy and say, can I have some VLANs please? It just happens automatically; he doesn't feel that there is a network implementation doing magic in the background.
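For context, a plain Kubernetes NetworkPolicy of the kind referred to above might look like this minimal sketch; the namespace, labels and port are illustrative, not taken from the demo.

```yaml
# Minimal sketch: only pods labelled role=frontend may reach pods labelled
# app=db on TCP/3306. On the Kubernetes 1.6 clusters of this era the resource
# lived under extensions/v1beta1; networking.k8s.io/v1 is the later stable API.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-frontend      # illustrative name
  namespace: nsx-secure        # illustrative namespace
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes: ["Ingress"]
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 3306
```

The cluster-wide "admin rules" mentioned in the talk are an NSX-side addition on top of this tenant-facing API, not part of the NetworkPolicy object itself.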
So what are we building? First of all, a different topology than what most network implementations for Kubernetes do today: we build a topology per namespace. In Kubernetes, the namespace is the tenancy construct, and what we decided to do is build a topology per namespace. In this example we have the namespace foo and the namespace bar, and we create separate objects in NSX — logical routers, logical switches — to support the pods in each namespace. IP addressing is also done per namespace, so a pod in namespace foo has a different set of IP addresses than a pod in namespace bar. That also addresses the use case we saw in Dimitri's example, where we had a no-NAT and a NATed environment; we make that possible with Kubernetes too. When you create the namespace in Kubernetes, you will be able to define whether it should be a NATed or a no-NAT namespace, and in the no-NAT case you just have direct routing from the logical switch to the first-tier router, the tier-1 router, to the tier-0 router, to the physical network. So what we just saw in the OpenStack demo is equally applicable here.

Now, since all the pods have their own interface into NSX, we also have counters, we can redirect traffic, we can send traffic with a remote-SPAN monitoring session to a centralized system, we can count flows, et cetera. The Traceflow tool we just saw in the OpenStack demo works for containers too. One additional detail: we are also implementing an IPAM in NSX for the container use case. When a new namespace is created, we address it out of IP blocks that are administered in NSX, as you will see in the demo.

Okay, the central component that we are building is the NSX Container Plugin. This is a piece of software that we give you as a container image, and you run it in your Kubernetes cluster as a pod as well. It sits between the Kubernetes API and NSX, and it creates the NSX objects when it sees them being created in Kubernetes. What is nice about Kubernetes is that you can watch for things: you can watch for new namespaces being created, you can watch for new pods being created, and then you react upon it — you create your topology when you detect a new namespace. That's what we do here. And this will not be limited to Kubernetes: we are also actively building an integration with Cloud Foundry right now, and we are looking into Docker libnetwork, which is part of Docker Datacenter, and into Mesos. Right now, though, the priority is on Kubernetes and Cloud Foundry.

Some of you might already have heard of the Container Network Interface, CNI. On the previous slide you didn't see CNI, right? You saw that we talk directly to the Kubernetes API to create objects, and that's true, but we also use CNI, which is an interface spec on the node that runs the containers, to talk to the network implementation; we use it for a specific case shown on the next slide. CNI is supported by all of the frameworks we just mentioned; only Docker itself decided to do a different spec, but we're also working with Docker to support that one. So where CNI is used is on the node itself. And here we work on the assumption that the node is a VM, an instance in OpenStack. In this case, with the usually used Kubernetes plugins, what might happen is that you get double encapsulation: your underlying OpenStack IaaS network solution gives you VXLAN overlays, and on top of it you might run, again, VXLAN overlays. How do we make sure that doesn't happen?
Basically, we use a local VLAN ID from the node down to the hypervisor to signal the individual port. For those of you who saw the Kuryr project: that's actually the same method — and by the way, we also support the Kuryr project, as you will see on a later slide. So every time a pod is created, the kubelet service running on the node calls our CNI plugin, the CNI plugin builds this pipe up and assigns a local VLAN ID, and here is the virtual port we get on the hypervisor. I'm running really fast so that I can show you the demo.

So, here are my Kubernetes nodes. I have a master and two nodes, and they are deployed using OpenStack, specifically VIO here — it would also have worked in Dimitri's KVM-based environment with Red Hat, Mirantis, or SUSE, one of our partners. Each of those node VMs, and the master, has an interface to the management network and to the pod network. The management network is where they communicate with each other, so node one talks over the management network to the Kubernetes master; but when we create pods, the pod network is used to send the traffic. The IP addresses you see here you can pretty much ignore, because they are fake. This is something I'm working around right now: we don't really need those IP addresses, but the way I deploy today with Terraform forces me to put an IP address there. Normally those interfaces would not need an IP address, because all the networks for Kubernetes are created in NSX-T alongside the OpenStack environment.

So what you see here are the management network and the pod network that we just saw in OpenStack, created by Terraform. Now, you also see these three other networks here, and those are our pre-created logical networks for Kubernetes. If I look at Kubernetes and list my namespaces, I already have the three predefined namespaces that come with Kubernetes 1.6: default, kube-system and kube-public. And those are automatically created here in NSX: as soon as the NSX Container Plugin starts, it sees those namespaces in Kubernetes and creates a topology for each of them. If I create additional namespaces here — we'll call them nsx-open and nsx-secure — and refresh, they pop up here. So here we have nsx-open, and it has one port in it right now; that one port is the logical port for the logical router. So obviously we also created the logical routers nsx-open and nsx-secure here. If I look at the logical router configuration, I see that an IP address was assigned to the router port: this is the default gateway that the containers, the pods, in Kubernetes are using. Where did that IP address come from? As I said, it comes from the IPAM in NSX. Here you can see we have a Kubernetes IP block with a specific CIDR, and we carve a subnet out of that block for each of the namespaces we just created. By the way, those are /27s, so at some point you would run out of IP addresses if we deployed too many pods — but that's not an issue: we just create another logical switch with another subnet, and another, and another. It grows with the number of pods we have and shrinks when those pods disappear.
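As a rough illustration of the per-namespace NAT choice and NSX IPAM just described, requesting a no-NAT namespace might look something like the sketch below. The annotation key is purely hypothetical — a placeholder for whatever the NSX Container Plugin actually exposes — so check the plugin documentation rather than copying it.

```yaml
# Illustrative only: a namespace asking the NSX container plugin for direct
# (no-NAT) routing. The annotation key/value is a placeholder, not the
# documented NCP syntax.
apiVersion: v1
kind: Namespace
metadata:
  name: nsx-open
  annotations:
    ncp/no-snat: "true"   # hypothetical annotation key
```

The plugin watching the Kubernetes API would see this namespace appear, carve a subnet out of the NSX-managed IP block, and build the tier-1 router and logical switch for it, exactly as described above.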
Okay, next, let's look at what we will deploy now. This is a replication controller which starts four replicas of a pod running this container, a web server serving an NSX demo page. Then we create a service — a Kubernetes service is an east-west load balancer and the way to do service discovery in Kubernetes — so my service will be visible inside the cluster as the nsx-demo service. And then we create an ingress, which is an ingress load balancer that looks at the URL, the host name I'm using, and sends traffic to the right pods, my web front end. One thing I want to point out before I create it is the labels here: the only label we have is app: nsx-demo. That is significant, because when we create those pods and go to nsx-open — that's already this one — you will see that we create these logical ports, and if I look at the logical port created for that container pod, you see we assigned a specific IP address to it, but we also copied all the Kubernetes labels down as tags on the NSX logical port. And we use those for the firewall rules, as you will see later in the demo. Importantly, yes, we have counters here; I could set up my SPAN session, my port mirroring, I can export flow records, et cetera, et cetera. And one last thing on that view — let me refresh so that my pods are up — you can also see that each of those container interfaces has a VLAN ID that identifies it on the node: it has a parent interface, which is the node, node two in this case, and it has this VLAN tag that makes it unique locally on that node.

Okay, now — I don't need to watch that, it's already created — I will create another one, and the only difference between the two is that I have an additional label here, secgroup: web-tier, okay? Let's create it; otherwise it's exactly the same spec. Refresh; we see that nsx-secure now has two interfaces. Let's go into it, and here, obviously, we copied the label, and we have secgroup: web-tier down here. How is that used? We have a group here — and by the way, these are the groups that Neutron created as security groups for the firewalling — but we also have a predefined one called kubernetes-web-tier, where I defined that whatever carries the tag scope secgroup with the value web-tier is a member of that group. So this is a predefined admin rule which matches all the logical ports in nsx-secure that carry this label, and I can now use it in my firewall. Here I have a simple rule that says: drop web-to-web traffic, because I don't expect web servers to talk to each other. So I will drop all pod-to-pod traffic in the nsx-secure deployment.

So let's look at the ingresses — I don't need to watch that. The first one listens on a URL called nsx-open, and the other one listens on nsx-secure. Let's go to the nsx-open one first. Here's our NSX demo, and since I'm a bad guy I created a little — yeah, that's what happens when you're on a Mac and try to do a Ctrl-C on a PC — a little port-scan app, because I'm that kind of guy. It scans all the neighbors I have on specific ports. As you see here in the open case — it might be a bit small, but I guess you get it — port 80 is open: I can see all my other web containers, because I don't enforce any policy in nsx-open.
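For reference, the replication controller, service and ingress walked through above might look roughly like the following; names, image and host are illustrative placeholders rather than the exact manifests from the demo.

```yaml
# Illustrative sketch of the three objects described above (placeholder values).
apiVersion: v1
kind: ReplicationController
metadata:
  name: nsx-demo-rc
spec:
  replicas: 4
  selector:
    app: nsx-demo
  template:
    metadata:
      labels:
        app: nsx-demo          # copied down as a tag on the NSX logical port
        # secgroup: web-tier   # the extra label used in the nsx-secure variant
    spec:
      containers:
      - name: nsx-demo
        image: example/nsx-demo-web:latest   # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nsx-demo-svc
spec:
  selector:
    app: nsx-demo
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1   # Ingress API group in the Kubernetes 1.6 era
kind: Ingress
metadata:
  name: nsx-demo-ingress
spec:
  rules:
  - host: nsx-open.example.com   # placeholder host name
    http:
      paths:
      - path: /
        backend:
          serviceName: nsx-demo-svc
          servicePort: 80
```

The part to notice is the labels block: whatever labels the developer puts on the pod template are what the NSX Container Plugin copies down as tags on the logical port, and those tags are what the predefined admin firewall group matches on.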
So now let's do the same with nsx-secure. What was it again? 10.0.5... I'm not as good as you. And here we go — there should be one answer, from whichever pod I'm in. Oh, 10.0.5... okay, that was a bad demo if I'm using the wrong one. 10.0.5, you say? Yeah. I cannot see, it's too small — I think now it's good. Okay, one should react. One reacts, which is the container I'm running the port scan from; the others are closed. Okay, and that's pretty much what I wanted to show you. Where's my presentation? Okay.

Some FAQs you might have before we go into the actual questions. Why are we not using the Neutron API in this integration? Basically, this integration is also meant to work in a pure vSphere environment that doesn't have any IaaS; it's meant to work on Photon Platform; it's meant to work in other environments, in public clouds, that don't have OpenStack. That's why we do the integration this way. However, if you want to use Kuryr, we happily support that: Kuryr needs the VLAN-aware-VM feature to do this piping, let's say, between the containers and the hypervisor, and that is fully supported in NSX-T. We support the cloud provider using LBaaS; however, as you saw, we mostly look at Ingress right now as our main solution. And yes, we will support network policy from day one. Here we go.

Okay, so, first of all, those were two live demos, so I think that deserves a round of applause — and now we open for questions. Any questions? Was it super clear? Yeah, if you can go to the mic.

The protocol you had between the edge and the physical — BGP — can that be swapped out for OSPF? No, the only dynamic routing protocol we support today is BGP. If that's something you are looking for, we are looking at enhancing it, but so far customers have been fine with BGP. Okay, thank you — I'm not the network guy; I know we use OSPF, so I just thought I'd ask.

You may have said this, but I may have missed it: are you using overlay networking for all the container cases, or is there a non-overlay case? Yes — the NCP creates this topology where it builds its overlay networks from hypervisor to hypervisor, but from the node VM itself, from the container down to the hypervisor, it's a local VLAN tag that gets popped off as soon as it arrives. That's the VLAN-aware-VM requirement.

And can you explain a little more about the individual logical topology per namespace? Yeah, the reason we do that is to have the flexibility to decide: here's a namespace where we want to NAT the IP addresses, and here's one where we don't, because we have customers who say: for most of the test/dev use cases I just want capacity and I don't care whether it's NATed or not, but for my few production use cases I want direct routing to my backend database, I want to see the real IP address of the pod in my logs, et cetera. The topology per namespace gives us that flexibility. And honestly, it's also easier to grasp from a normal networking mindset.

So when you say separate topology, it's just an independent set of VXLAN tunnels? Yeah, right. It's not two separate physical network topologies, no. Yeah, not even physical, but when you say logical, the separate topology is essentially these ABC tunnels and XYZ tunnels.
So if you compare it to something else, you would compare it to an installation where all the IP addresses for all the nodes are spread all over the place and the only tenancy is done with firewalling and network policy — that's the difference with a topology per namespace. Okay. We got kicked out — yeah, we got kicked out, but we'll stay around, so if you have other questions, come up to the stage, that's fine; I think for the recording they want to keep it at 40 minutes. So thanks for your time and enjoy the rest of your day.