Okay, let's go. All right, let's go. Hi, my name is Andrew Webstow. I'm a systems engineer in the networking and security business unit at VMware, focused on NSX and some of the interesting use cases it enables. I've got some of my colleagues here with me today, Thomas from product management and Marcos as well, and we're going to cover a range of things. I'm going to give a bit of an overview and some technical coverage of the NSX integration with Neutron. We're going to cover a little bit of VIO and Kubernetes as well and what that means for NSX, and Marcos is going to show us that dangerously live a bit later. So I'll keep my bit short and to the point, but it'll help set the context for everything, and then we'll get into some of the cool techy stuff. Let's get going. Just a quick overview of what we're talking about today, and this will make a lot more sense when we get to the demo: we're talking about OpenStack as our cloud consumption layer. That's the management layer over a set of infrastructure, and we're putting the VMware SDDC underneath it. That's our hypervisor; that's our software-defined networking platform, which is NSX (and we'll go into more detail on the NSX side of things); and that's our storage, with vSphere underneath OpenStack. So OpenStack is our cloud management layer that provides the API and the Horizon interface to drive all of the virtual infrastructure, and the virtual infrastructure is VMware's software-defined data center solution. Specifically for NSX, what we have here is our manager. That provides the API for NSX; that's what Neutron talks to when we're driving the infrastructure. We've got our control plane separate to that, so we're decoupled.
Our control plane is a set of controllers that you deploy. It holds all of the state of the virtual infrastructure and helps orchestrate things like overlays and provision configuration changes to the infrastructure. The control plane is split into two components. We have the central controllers: the VMs you deploy that take care of the state orchestration of the platform. Then we've got the local control plane: a component that translates calls from the central control plane down to the type of hypervisor we're using. If you move down the stack into the data plane, you'll see I've got ESX and KVM listed. NSX works with both ESX and KVM, and with bare metal and public cloud and a number of other endpoints as well. That's why we've decoupled the central control plane from the local control plane: we have an architecture that can scale irrespective of the hypervisor or endpoint type you've got, whether that's bare metal, physical, or what have you. So that's our architecture, and it scales quite nicely. That's what NSX looks like, and that's the virtual infrastructure layer underneath OpenStack; we're driving everything through Neutron specifically for NSX. So let's go into a little overview of what an environment looks like from an architectural perspective, and then we'll get into the integration points. When you're building a vSphere environment, we have hosts, and we pull them together into what's known as clusters in VMware speak. Essentially that takes all of your compute resources, memory, CPU, disk and network, and combines them into one giant virtual host. There are things we can do in our hypervisor when it comes to containers and VMs in a cluster format that allow you to move workloads around live, the vMotion we've had for a decade, and some of those vSphere features are brought to you through the cluster construct in NSX.
I've just got a couple of points here around some of the different design options we have with switching. Typically we recommend three clusters, separating function. You separate management from your edge functions. An edge function in NSX is our egress from the NSX environment, our north-south connectivity, and it provides the resources for some of the stateful services; some of this will make more sense in a moment. Then we've got our compute clusters. Typically we scale them independently, sometimes on a tenant basis or project basis, and you're flexible to have as many of those as you want. That's the typical architecture. At smaller scale you can start to collapse these, but it's generally good practice to separate your management environment onto a few hosts on the side, so you can change control it separately and you've got availability for those components, including OpenStack in this case. So that's the vSphere side. Let's jump into VIO now. I've mentioned Neutron a couple of times. NSX was born out of VMware's 2012 acquisition of a company called Nicira. That acquisition brought a number of engineering capabilities and some code written by Nicira, which became the Neutron project in OpenStack. That was a Nicira invention, incubated and iterated on through VMware, so that's the heritage of Neutron. We've been involved in the OpenStack community for quite some time, even prior to that, and that solidified our involvement there. Some other aspects as well, like Open vSwitch, came about as part of the efforts through Nicira. Over the years, though... NSX is not new. It's six years old; we're at version six or so of NSX now, and we've gone through a lot of changes and updates to the platform as we've learned new things from our customers. So it's been around a while. Now, OpenStack is a use case for that networking.
Neutron is the component that's driving our APIs. As you can see in the diagram here, we've got Neutron, the networking project, and it's driving the API of the NSX manager. Anything you can provision in Neutron (routers, security groups, load balancers, networks), NSX is going to fulfill those API requests from OpenStack. That's the integration point. The nice thing is it's one driver for Neutron. It comes packaged with OpenStack, in this case VMware Integrated OpenStack, our commercial distribution. If you want OpenStack and you like the idea of VMware infrastructure underneath, we package it all up. It's tested, it's got lifecycle built in, and we'll provide you support for it. That's our OpenStack; it's called VIO. Neutron comes bundled with this plugin to drive NSX, and it means you don't need to touch it. You don't need to swap out the plugin or go and do custom ML2 things, because you've settled on NSX as your endpoint. It's one driver and away you go. Same for storage, with Cinder and Glance, and Nova for placement. That's all through vCenter, and those are the abstraction points I wanted to highlight in this picture. So one set of drivers, and they're the same for every VIO deployment on Earth, which makes stability good and simple; we know exactly what's going on if you want that kind of functional behavior in your OpenStack platform. Now, we built these drivers upstream and they're open source. If you want more flexibility than an opinionated distribution like VMware Integrated OpenStack offers, or you want to do something more bleeding edge with some of the projects, you can build your own upstream OpenStack and still leverage those same drivers, or you could go with another distribution if you wanted to as well.
You have that freedom of choice, and you don't have to give up the benefits of the virtual infrastructure OpenStack is driving underneath. You can leverage the same drivers through these integration points, and you can also use KVM as a hypervisor. Perhaps you want NSX specifically, the software-defined networking component VMware offers under Neutron, but you want to run KVM hypervisors; you can mix and match those drivers accordingly and away you go. So that's what it looks like there: different components touching each other. Okay, now NSX. That was just a bit of an overview. There are things you do in Neutron specifically, and NSX behaves in a certain way when you start firing those calls off. You're going to create networks, and we're going to attach instances to those networks. A network in NSX is an overlay: a software-defined layer two domain that you can attach instances to. Then we create other networks with other instances; perhaps we want to break out a multi-tiered application or something like that. DHCP services: again, this is something you would do in OpenStack. It's driven through NSX, which provides that function for all of the logical switches in this case. Distributed firewalling: this is when you create security groups in Neutron. That gets translated through NSX to our distributed hypervisor firewall, so every single node can enforce stateful firewalling per virtual NIC of the VM in the host. Routers: Neutron routers. We again translate those calls into NSX, and NSX now provides the routing function underneath OpenStack. You can use NAT or no NAT, and we can start to do interesting things by chaining networks. We also have the concept of a tier-zero router: that's the router in NSX that talks to the physical world, right?
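To make that Neutron-to-NSX mapping a bit more concrete, here is a minimal sketch of those resources as an OpenStack Heat template. This is illustrative only: resource names, the CIDR, and the external network name are made-up examples, and the comments reflect the talk's description of how NSX fulfills each object, not literal API output.

```yaml
heat_template_version: 2016-04-08
description: Sketch of a tenant network chained to a router, per the NSX topology above
resources:
  web_net:
    type: OS::Neutron::Net          # realized by NSX as an overlay logical switch
    properties:
      name: web-net
  web_subnet:
    type: OS::Neutron::Subnet       # DHCP for this subnet is served through NSX
    properties:
      network: { get_resource: web_net }
      cidr: 10.0.1.0/24
  tenant_router:
    type: OS::Neutron::Router       # fulfilled as an NSX router; the uplink
    properties:                     # ultimately reaches the physical world via tier-0
      external_gateway_info:
        network: ext-net            # assumed external network name
  web_iface:
    type: OS::Neutron::RouterInterface
    properties:
      router: { get_resource: tenant_router }
      subnet: { get_resource: web_subnet }
```

The same objects could equally be created through Horizon or the `openstack` CLI; either way, the single Neutron plugin described above turns these API calls into NSX constructs.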
Cool, so let's move this along so we can get to the demo. There's a bunch of supported topologies; I'm not going to read through all of them. But we can do NAT, no NAT, routing; we support BGP and everything else from a networking perspective, plus load balancing and security under Neutron for some of those micro-segmentation and high-availability use cases. Now, VIO. VIO is our distribution of OpenStack. It comes bundled with all the drivers I mentioned, but with it we have an additional component called VIO Kubernetes. VIO Kubernetes is another management component that comes with VIO and allows you to deploy Kubernetes clusters on top of VIO. It provides the full lifecycle of those Kubernetes clusters: scale up and down, upgrades, create and destroy, all of that sort of thing. So if you want to deploy Kubernetes on top of VIO, it comes built into the product, and it's called VIO Kubernetes. Plus we provide you support, right? So it's a supported Kubernetes lifecycle manager built on top of VIO, and it gives you the things mentioned here on the slide. Again, this is all on our infrastructure: the software-defined data center, vSphere, NSX and vSAN. Now this is where it starts to get interesting, because Marcos is going to demo this in quite a bit of detail. Traditionally, in a past life, we've had VMs; they've had NICs attached to ports, and they've got their connectivity and their network services. But now with Kubernetes, we're talking pods, and with NSX we can make container pods on Kubernetes and traditional VMs both equivalent, first-class citizens when it comes to networking services. In this diagram, just to go through it a little, we're creating our routers. There are tier-zeros for northbound connectivity, and we've got what are essentially tenant routers, the tier-ones, which are multi-tenanted.
You can configure these per Kubernetes namespace, right? You get a router, and it's distributed, again, because that's what NSX does in our hypervisor. Then we can create logical switches and attach Kubernetes container pods to those logical switches. One of the benefits is that we now bring the distributed firewalling function, security groups and whatnot, directly to container pods in Kubernetes. This is all done through our network container plugin. So there are a few different things going on here, but the takeaway is that NSX allows you to provide the same networking services to container pods in Kubernetes, through our plugin, that you would for traditional VMs. With that, I'll hand over to Marcos to explain what we're going to demonstrate. Thank you, Andrew. So what we have here, we're going to do a demo of our Kubernetes and NSX-T integration. I'm going to do it with my computer; we're going to be switching laptops in a moment. But let me just go through the application first. This application, by the way, was written by a colleague of ours, a German guy, super smart, and it's a very fun application. It's called Planespotter. Okay, it's a three-tier app that has a web tier, which in this case is implemented in Kubernetes; an app tier, where we do all the application processing; and then a database tier. The database tier consists of a relational database implemented in MySQL and also an in-memory database implemented in Redis. So that's basically the application. What it does: on the web page, you can search for planes from a static data set hosted in the MySQL database. You can search who built this plane, which airline owns this plane. It's a very simple query application logic that he implemented in the app tier, and we're fetching that data from the MySQL database.
But if we integrate the Redis piece, what he's done in this application is correlate the plane you're looking for with the actual airborne state of that plane. Is that plane flying? And if so, where is it right now? He's doing that by pulling live GPS and tracking data from the internet and storing it in Redis, and then his app tier correlates the static data from the MySQL database with that dynamic data. It will tell you whether the plane you're looking for is actually flying or not. A very interesting application that demonstrates the capabilities of NSX across multiple endpoints. And what are those multiple endpoints? In this particular demo, I've put the static database in an OpenStack instance. It lives as an OpenStack instance inside of VIO; it's a VM. And everything else I've deployed in Kubernetes: the web tier, the app tier and the Redis tier are implemented as a Kubernetes application. This shows that NSX can see and treat Kubernetes pods and VMs the same way. In other permutations of this demo (we've been showing this application at VMworld and some other places), people have had the front end running in AWS and the back end running in Cloud Foundry or Pivotal Application Services, just to show that we understand all these different endpoints in NSX. And we've seen versions of this demo where Redis is implemented on a bare metal server, right? In this case, we're talking about container pods and VMs, and again, the VM is an OpenStack instance owned by VIO, and everything else is implemented in Kubernetes. So let me switch laptops here. See if this works. Perfect, let me just open my RDP again to get full screen, and I'll walk you through the demo in a moment. Again, very simple application. Nothing is created as of now.
What I have done here is a combination of products, all the products that Andrew talked about. I have used VIO Kubernetes to create a very simple cluster, a very simple geometry: one master and two Kubernetes workers. VIO Kubernetes works on top of OpenStack, our OpenStack. So what does that look like in OpenStack? That Kubernetes cluster is just three instances sitting on a Neutron network and connected to the rest of the world with a Neutron router. This is the OpenStack view of that Kubernetes cluster. You can also see here the MySQL database, which I configured just using OpenStack APIs and where I put all the static data representing the manufacturers and owners of the planes we're going to be searching in a moment, right? And finally, remember this is all running on top of VMware infrastructure. At the end of the day, all these Kubernetes clusters and OpenStack instances are vSphere VMs, which your vSphere admins have been managing, troubleshooting and optimizing for years. So I have a compute cluster and a management and edge cluster. My management components are listed here: I have my OpenStack, I have VIO Kubernetes, I have the controllers for NSX, all of that running in the management cluster. And in my compute cluster, I have my Kubernetes cluster, a master and two worker nodes, as well as that OpenStack instance, okay? So it's a multi-layer solution. So let's go ahead and run this demo, okay? First it's populating the variables in my environment, and then let's go ahead and create a namespace. As Andrew mentioned, when you create a namespace in Kubernetes (and again, the developer or application owner working with Kubernetes doesn't even know this is integrated with NSX; he's just creating a namespace), the mere fact of creating the namespace drives automatic network configuration in NSX. I'll show you the before and after. This is the NSX view. I don't know if you can read that.
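On the Kubernetes side, the namespace step described above is nothing more than a one-object manifest (the namespace name here is a made-up example):

```yaml
# Minimal namespace manifest; "demo-ns" is an assumed name for illustration.
apiVersion: v1
kind: Namespace
metadata:
  name: demo-ns
```

Applying this with `kubectl apply -f` (or simply running `kubectl create namespace demo-ns`) is all the developer does; per the talk, the NSX container plugin watches the Kubernetes API and reacts by provisioning the logical switch and router port shown in the NSX view.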
Let's see if we can make it a little bigger. Basically I have what we call logical switches, which are layer two networks, and this is the before. If I refresh, I should see a new logical switch created here, called the Planespotter logical switch. This is the network that was automatically provisioned. And if I look at what's connected to that logical switch, right now I only have a logical router port. So the mere fact that I created a namespace drove the automatic creation of a layer two network and a router on that logical switch, and now the system is waiting for me to start scheduling pods on top of that automatic topology, which we're going to do momentarily, okay? So let's go ahead and create the app tier first. By the way, this application is on GitHub with very clear instructions on how to deploy it. It's a lot of fun. So I went ahead and created the app tier. If you remember that diagram, I'm creating that middle box, the one with all my application logic, which obviously created a deployment, a config map and a service. I'm showing here the service, called the Planespotter service, and I'll show you why that's important in a moment. Then I have a pod spec in that YAML that basically says my application tier is two Kubernetes pods, okay? Which are showing in running state right there. So if I refresh the number of logical ports on my logical switch, I now see two ports. The reason a logical port is important in NSX is that it's the same construct we use for bare metal, VMs, cloud native applications and containers. A logical port is a logical port. It's a first-class citizen, and NSX doesn't distinguish between one mapped to a VM and one mapped to a container. So with this we demonstrate that we can treat containers, VMs and bare metal in the same manner. So let's continue to build our application. Now I'm going to deploy the front end.
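The deployment-plus-service pair described above would look roughly like this. This is a hedged sketch, not the actual manifests: the real YAML (including the config map) lives in the Planespotter repo on GitHub, and the image name, labels and ports below are placeholders.

```yaml
# Sketch of the app-tier objects; names and image are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: planespotter-app
spec:
  replicas: 2                        # the two app-tier pods seen in "running" state
  selector:
    matchLabels:
      tier: app
  template:
    metadata:
      labels:
        tier: app
    spec:
      containers:
      - name: app
        image: example/planespotter-app:latest   # placeholder image reference
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: planespotter-svc             # the "Planespotter service" shown in the demo
spec:
  selector:
    tier: app
  ports:
  - port: 80
    targetPort: 8080
```

Each of the two pods that this creates shows up in NSX as a logical port on the namespace's logical switch, which is why the port count in the demo jumps from one to three (router port plus two pod ports).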
And the front end is a web tier, and the web tier has an ingress. If you're familiar with Kubernetes, an ingress is a layer seven load balancing rule. Let me show you the services and the pods and then I'll get back to that ingress. So now I have a new service called frontend, and I should now have four pods in my application, because I've added two more for the web front end, right? So if I go to NSX and refresh, I should see five logical ports: one for the router and four for the Kubernetes pods I've created. This will come up momentarily. While we wait for the containers to be in running state, let me show you the ingress. In Kubernetes, the typical implementation of an ingress requires an ingress controller and a service type and all that; an ingress is a layer seven load balancing rule. When you integrate Kubernetes with NSX, this gets implemented as a layer seven load balancing rule, an HTTP or HTTPS redirect rule, inside an NSX load balancer. So we don't use the open source load balancers of Kubernetes; that's another component we replace when you do Kubernetes on top of NSX. So let's go ahead and refresh the application, and it's up. At this point, I should be able to go to that URL and hit my application. There it is. The person who wrote the app was kind enough to add a little widget here that shows the application health, and right now everything is shown as running except for that Redis tier. So I'll show you the before and after. This is going to appear broken because it always fails when it tries to connect to the database for the first time. It didn't in this case. And let's go ahead and search for planes from American Airlines. I have to use a US airline because this is FAA data. So there it is.
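The ingress just described might be sketched like this. Hostname, paths and service name are assumptions for illustration; the point from the talk is only that, with the NSX integration, applying such an object programs an L7 rule on an NSX load balancer instead of an open-source ingress controller.

```yaml
# Hypothetical ingress for the web tier; host and backend names are made up.
apiVersion: extensions/v1beta1      # the ingress API group in common use at the time
kind: Ingress
metadata:
  name: planespotter-frontend
spec:
  rules:
  - host: planespotter.demo.local   # assumed hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend     # the "frontend" service created above
          servicePort: 80
```

From the application owner's point of view this is standard Kubernetes; the substitution of the load balancing back end happens entirely underneath.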
So those are all the planes owned by American Airlines that are in this MySQL database, which, again, is an OpenStack instance. And as you can see, are any of them showing as airborne? The answer is no. That's very unlikely; it's very unlikely that the largest airline on the planet doesn't have any planes flying right now. That is because we have not integrated the Redis piece into the application, which we're going to do now. Let's go ahead and provision Redis, and the TCP stream converter that he wrote. This will provision two more pods, for a total now of seven pods and three services. So let's go ahead and show that. I should now have my application fully operational. Let's check NSX again and make sure the pods are up and running. They're still coming up, so let's give it a couple of seconds. And again, what we're going to see once the Redis tier is up is that we'll be able to correlate planes from American Airlines, the manufacturer data, the models, things like that, with whether or not that plane is airborne, flying. So this will come up momentarily. It takes a while. While we wait for this to come up, think about some of the questions you may want to ask us. We have the product management team represented here in the audience, so they can tell you about the roadmap and plans for additional integration services, both for Kubernetes and OpenStack. OK, so now my application is up. Let's go to the app health widget. And yes, Redis now shows running. We're going to do a search again for American Airlines. OK. And I have a few minutes left. So there it is. In the first table, one of the planes in the inventory shows as airborne. Let's go ahead and click on that guy. It's a flight from Chicago O'Hare to McCarran in Las Vegas. So someone is going to have fun this weekend. And it also shows the altitude and all that.
So this combines static data and dynamic data, correlated in my application. And what I cannot emphasize enough is that this application is multi-endpoint. We're talking about Kubernetes pods and OpenStack instances in the same app, served by the same networking back end, which in this case is NSX. OK, so that's the application. Another thing I want to show, now that we have our application working, is the tagging capabilities that are also part of our Kubernetes integration. In this case, I have annotations, metadata, in the YAML of my app tier, where I have added a label. This is user-controlled, and you can restrict who does what with standard Kubernetes role-based access control. In this case the application owner, me, decided to add a label identifying the Planespotter app. Those labels get propagated to NSX, to those logical ports. If I go to NSX and take a look at the logical ports mapped to the app tier, I'll see that those ports are tagged with the label I just showed you, controlled from the YAML. The reason these tags are important is that they're used to define membership criteria for security groups in NSX. I have created a firewall rule in NSX that right now says my app tier can talk to my DB tier. Remember, the app tier's containers are Kubernetes pods and the DB tier is a VM, on the MySQL port, which is TCP 3306. And right now, that traffic is allowed. Let's see how those two things talk to each other. We have a utility here called Traceflow. You've probably seen it before; I can say, OK, I want to understand the relationship between a container, in this case one of the app tier pods, that guy right there, and the MySQL VM I have in OpenStack. And for that, I'm going to inject a synthetic packet that will emulate MySQL traffic on TCP 3306. So let's go ahead and trace that traffic. Remember: live demo.
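The label-to-tag propagation just described starts from an ordinary snippet in the pod template metadata. The exact key and value used in the demo's YAML may differ; this fragment is illustrative only:

```yaml
# Illustrative fragment of the app-tier pod template; label keys/values are
# assumptions. Per the talk, the NSX container plugin copies these Kubernetes
# labels onto the corresponding NSX logical ports as tags, and those tags can
# then serve as membership criteria for NSX security groups.
template:
  metadata:
    labels:
      app: planespotter
      tier: app
```

That is what makes the firewall rule possible: the rule's source group matches ports carrying the app-tier tag, regardless of whether a given port backs a container or a VM.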
OK, so let's wait a second here while it retraces the traffic. And again, this traffic is allowed; we know it's working because we just searched the MySQL database. But what I want to prove to you is that NSX has this utility that can show you the end-to-end connectivity between a container and a virtual machine owned by the same NSX fabric. It tells you exactly all the layer 2 networks and the layer 3 routers that sit between these two endpoints. In this case, the container is running on ESXi host 5 and the virtual machine is running on ESXi host 4, and there's this fully distributed logical topology connecting the container to the virtual machine. NSX gives you every hop. Now let's go ahead and break the application. I'm going to inject a problem here in the app: I'm going to change my firewall rule from allow to drop. When I do that, I'm blocking app-to-DB traffic. Let's see if that actually happens. If I go to app health, the connection to MySQL now shows a problem. If I go to search for that data, yes, the application is broken. I broke the application. Very unfortunate picture that this person chose there. But anyway, if I retrace the exact same traffic, because I want to understand how app and DB are interconnected and why the application is broken, NSX in a moment will tell me: OK, yes, there's a firewall rule in NSX blocking that traffic. That firewall rule is rule 16504, which, if I go back to my firewall, is that rule right there. So I injected a problem, a problem I made up, but I'm using the traceability capabilities of NSX to display the end-to-end connection between completely distinct and separate endpoints: a container in a Kubernetes pod, and a VM in an OpenStack instance. So with that, I think that's my demo.
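For traffic that stays inside the cluster, the same kind of "app tier may reach the DB tier on TCP 3306" intent could be expressed as a standard Kubernetes NetworkPolicy, which this sketch illustrates. Labels and structure here are assumptions; note that in the demo above the DB tier is a VM, not a pod, which is why the rule was written directly in the NSX distributed firewall instead.

```yaml
# Hypothetical NetworkPolicy for an all-in-cluster variant of the app:
# only pods labeled tier=app may reach pods labeled tier=db on TCP 3306.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-app-to-db
spec:
  podSelector:
    matchLabels:
      tier: db                # policy applies to the DB-tier pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          tier: app           # traffic permitted only from the app tier
    ports:
    - protocol: TCP
      port: 3306              # MySQL port
```

Flipping such a rule between allow and deny would produce exactly the symptom shown in the demo: the app health widget reports the MySQL connection failing, and a trace pinpoints the blocking rule.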
I have an extra demo here for network policy, which I'll be happy to show you offline, but I want to leave time for Q&A. We also integrate network policy into our NSX integration with Kubernetes: you define a network policy in YAML, and we automatically drive the creation of firewall rules in the NSX distributed firewall. Let's go ahead and fix my app, because I don't want to leave it in a bad state. So I'm going to change the firewall rule back to allow, and then this should all be green and good to go. OK. And I think that's the last thing I have in my demo. Now we're going to open for Q&A. Hopefully this was an interesting integration. We showed self-service provisioning of Kubernetes clusters with VIO Kubernetes, obviously running on top of OpenStack, and then NSX-T serving the connectivity, security and elasticity needs of endpoints of distinct kinds, containers and VMs in my example. Any questions or comments? Any NSX customers in the audience? Thank you. Yes, so the question is, does this work for any Kubernetes distribution? The answer is yes. It obviously works with the Kubernetes we include in our own Kubernetes-as-a-service or containers-as-a-service solutions, like VIO Kubernetes, VMware PKS, VMware Container Service, et cetera. But it can also be integrated into a do-it-yourself Kubernetes, absolutely. We also integrate with OpenShift; we have a certified integration with Red Hat OpenShift and their Kubernetes distribution. Any other question? Did you try Kata Containers in your Kubernetes cluster? No, there is no support today for Kata Containers in the Kubernetes cluster. While we don't really care about the runtime in the Kubernetes cluster, Kata is specifically not supported as of right now. Good question. Good question. We need to repeat the question into the mic for the YouTube viewers. The question is, do you support Kubernetes running on KVM? The answer is yes. We support Kubernetes clusters running on KVM.
We support Kubernetes clusters running on bare metal. The NSX-T platform supports everything you saw here. It could be a KVM VM, or it could be just a bare metal Linux host running OpenShift or Kubernetes. Yes. Any other question? Comment? Come on, come on. No, nothing? Well, with that, we're going to hang around here for another five minutes or so if you want to talk one-on-one. Like I said, this demo is on GitHub. Just Google "GitHub Planespotter" and you'll find it, with very clear instructions on how to deploy it. It's a fun app, and one that demonstrates the value of NSX. So thank you so much for your time.