Good afternoon, everyone. My name is Muhammad al-Sakhawi. Today I'll be speaking about Neutron, but before I do that, let me start by telling you where I work and what we do. I work for SHARCNET, a multi-university consortium in Canada whose core business is high-performance computing. I'm also part of the national Compute Canada team, which is in charge of cloud operations across Canada, so we deploy lots of clouds, both for HPC and as public and private clouds. Our largest cloud deployment is 7,000 cores; it's named Arbutus and it's located in Victoria, BC.

Why are we having this session? The first time I attended the OpenStack Summit, there were lots of sessions, but I was looking for one that actually tells you how Neutron operates. Neutron is a big beast; we all know that. It's a core component of OpenStack: you cannot have a functional OpenStack deployment without Neutron. You could choose to ignore Cinder, Heat, or Ceilometer, but you cannot ignore Neutron. It's also a very hard component to follow. There are lots of pieces that form Neutron: the L3 agent, the ML2 agent, the DHCP agent, the Neutron server, the metadata agent, and so on. And there is a lot of native Linux technology involved to make it all happen. You have to know tap devices, veth pairs, Linux bridges, Open vSwitch, and network namespaces; there's a lot of technology you have to understand before understanding Neutron and its operations. And understanding is definitely key to troubleshooting. Without it, when a problem happens there is no way you can fix it, or even find where it sits.

So before we start, I'll first explain the five main native Linux technologies used by Neutron: tap devices, veth pairs, Linux bridges, Open vSwitch, and network namespaces.

A tap device is a software-only interface, basically a virtual interface. You attach it to a user-space program, which can then send and receive Ethernet frames through it. It's used by KVM and other hypervisors so that you can have connectivity to your VMs.

A veth pair is a pair of virtual NICs attached to each other by a virtual wire. They're usually used to connect different network entities, such as Open vSwitch bridges, network namespaces, or Linux bridges.

A Linux bridge is a virtual switch in Linux; I think it's the oldest one. You can add physical and virtual interfaces to it, and it operates at layer 2. If you're like me and you forget what layer 2 and layer 3 are: layer 2 is MAC addresses, layer 3 is IPs, and layer 4 is where TCP and UDP sit.

Open vSwitch is a more sophisticated Linux switch. You can add physical and virtual interfaces to it, and the big advantage it has is that you can apply OpenFlow rules to it. OpenFlow rules can manipulate traffic at layer 2 inside the switch itself, so it doesn't just forward traffic; you can have some intelligence in there as well.

Network namespaces are the last concept I want to explain. They are isolated network stacks in Linux; you can think of them as different installations of Linux from the network perspective. You can add different interfaces to each of them, different iptables rules, and different routes.
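As a rough illustration of these five primitives outside OpenStack, here's how you could create each of them by hand on a Linux box (all device names here are made up for the example):

    # tap device: a software-only interface a user-space program can attach to
    ip tuntap add dev tap0 mode tap

    # veth pair: two virtual NICs joined by a virtual wire
    ip link add veth0 type veth peer name veth1

    # Linux bridge: a layer-2 virtual switch
    brctl addbr demo-br            # or: ip link add demo-br type bridge
    brctl addif demo-br tap0       # plug the tap into the bridge

    # Open vSwitch bridge: like a Linux bridge, but programmable with OpenFlow
    ovs-vsctl add-br demo-ovs
    ovs-vsctl add-port demo-ovs veth0

    # network namespace: an isolated network stack with its own links and routes
    ip netns add demo-ns
    ip link set veth1 netns demo-ns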
So they are totally isolated from the other network stacks on the same host. This is a very big advantage of network namespaces for OpenStack, because in OpenStack your users create their own networks and set their own IPs, and they want them totally isolated. This is the way OpenStack implements routers and DHCP: Neutron uses namespaces to implement the DHCP agent and the L3 agent as well.

Now that we understand the basic native Linux technologies that explain how Neutron operates, let's look at what Neutron actually does. Neutron allows users to create networks and routers. It provides an API so that you can request those kinds of services, and it manages a database on the back end. It routes layer 3 traffic for your VMs. It provides floating IPs that you can assign to your VMs. It switches layer 2 traffic, that is, MAC-addressed traffic. It provides DHCP for instances, so an instance can start up and get an address. And it provides metadata to your instances, so you can inject your keys into them, for example.

If you want to map which parts of Neutron do what, it looks basically like this. The Neutron server is in charge of handling the API and the database; no VM traffic flows through it. The L3 agent routes layer 3 traffic for the VMs and also lets you attach floating IPs to them. The ML2 agent does the switching of layer 2 traffic. The DHCP agent controls the DHCP process so that your VMs can get a DHCP address, and the metadata agent is what allows you to inject keys and other metadata into your instances.

So if you look at Neutron in a very primitive diagram: the Neutron server is an API plus database handling. The ML2 agent is a service that relies on Linux bridge or OVS, which is Open vSwitch, and on VXLAN or GRE tunnels, depending on what you choose. The L3 agent relies on network namespaces to operate; it relies on iptables and routing tables. And the DHCP agent relies on network namespaces and dnsmasq. I understand there are lots of ways to deploy OpenStack Neutron; you could use Linux bridge entirely, or just Open vSwitch. However, the most common deployment methodology for OpenStack, which is what Packstack uses as well, is Open vSwitch for the integration and tunneling bridges.
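If you want to see that mapping of agents to hosts on a live deployment, one quick check (output varies with your release and client) is to list the agents:

    # which Neutron agents are alive, and on which hosts they run
    neutron agent-list                   # older python-neutronclient
    openstack network agent list        # newer unified client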
Now that we understand the pieces of Neutron, what they do, and which native Linux technologies they rely on, let's have a look at how a VM is wired up when you deploy it in OpenStack. When you create a VM, you first have the VM itself with a virtual NIC inside it. It generates traffic, and the traffic goes down to a tap device, which we explained previously. The tap device, as you can see, is the first point of entry and exit for the VM, so it's the best place to apply any kind of security group rules. Basically, OpenStack applies iptables rules on top of the tap device to implement the security group rules for this particular VM. The tap device is connected to a qbr, which is a Linux bridge. These naming conventions are identical in every OpenStack deployment, so they won't change; the only difference is that you'll find UUIDs, unique IDs, after each name. But you'll find a qbr in every deployment that has security groups enabled. The qbr, a Linux bridge, basically connects the tap device and the qvb/qvo veth pair. qvb and qvo are a veth pair, basically a virtual wire: you send traffic in one end and it comes out the other.

After the filtered traffic leaves the qvo interface, the one at the bottom, it goes through the integration bridge. The integration bridge is an Open vSwitch bridge, and it's where VLAN IDs come in: traffic gets VLAN tagged here, and if it's destined to stay in the host, this is the exit point, the end point for it. There's no further step. So if you're sending unicast traffic to another VM on the same host, it ends here. If the traffic is destined outside the host, it has to go through the patch interfaces, which are patch ports inside Open vSwitch, and then on to the tunneling bridge. As the name says, the tunneling bridge is responsible for sending traffic over tunnels, GRE or VXLAN depending on what kind of tunneling you use, to the rest of the compute hosts and network nodes you have in the environment. It creates a mesh of tunnels between this compute host and every other compute host and network node.

So, to recap what I just said: VM1 has a vNIC. iptables rules are applied to the tap device to reflect the security groups. qbr is a Linux bridge; it connects the tap and qvb. qvb and qvo are a veth pair. The integration and tunneling bridges are Open vSwitch bridges, and they're connected via a patch interface.

That's the logical view, but we all know how to use command lines, so let's see it reflected there. For the first part, VM1 has a tap interface attached to it: run ps -ef on the compute node and you'll find the qemu process (depending on your hypervisor, of course), with the tap device, its file descriptor, and its MAC address. For the next portion, the tap device connecting to the qbr: brctl show lets you see the qbr bridge, with a UUID after its name, and its two interfaces, the qvb and the tap. To go a bit further and verify that iptables rules are properly applied to the tap interface to reflect the security groups, run iptables -L and grep for the tap interface name: there is a particular security group chain created for it. So the traffic is filtered at the tap interface using iptables rules.

Going a bit further still, we want to see the OVS parts, br-int and br-tun. If we run ovs-vsctl show, we find br-int at the bottom. There's a patch connection, patch-tun, which connects it to the next bridge, there's the qvo port, and there's a tag associated with it. The tag is basically the VLAN ID I mentioned earlier, and this is the purpose of this bridge: it VLAN-tags the traffic and makes it possible to communicate with the VMs on the same host. And there is br-tun: you can see patch-int, the other side of that patch connection, and you can also see the port for the VXLAN tunnel. As I mentioned, br-tun creates a mesh of tunnels between this host and the rest of the hosts in the environment.
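Put together, a quick inspection session on a compute node looks something like this (the UUID fragments are illustrative; yours will differ):

    ps -ef | grep kvm                  # the qemu process names its tap device, e.g. tap3b0a29ae-9c
    brctl show                         # qbr3b0a29ae-9c with ports tap3b0a29ae-9c and qvb3b0a29ae-9c
    iptables -L | grep tap3b0a29ae-9c  # the per-VM security group chains reference the tap
    ovs-vsctl show                     # br-int with port qvo3b0a29ae-9c (tag: 1), patch-tun, and br-tun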
So now we understand how a VM looks in OpenStack, physically and logically. If we proceed further and look at how traffic flows through OpenStack, there are what I think of as five different traffic flows: an instance trying to communicate with another instance on the same network, where they can live on the same host or on different hosts; an instance trying to reach another instance on a different network; an instance trying to get a DHCP address; an instance trying to reach the public network, basically the internet or any other public network you have in your deployment; and you trying to reach the instance over a floating IP, basically SSHing into it.

We'll take the first scenario: an instance trying to reach another instance on the same network and the same host. In this case we were lucky enough that both our instances live on the same host and on the same network. One note to mention here: in OpenStack you connect your VMs to subnets, so I'm assuming the same network and the same subnet, because normally when you change subnets you have to go through routing. So in this case VM1 and VM2 live on the same host. VM1 generates traffic; the traffic goes to the tap interface and gets the security group rules applied on top of it; qbr takes it down to qvb; qvb sends it to qvo; it goes down, gets VLAN tagged at the integration bridge, and then goes the same path in reverse to the other VM. I'm only talking here about unicast traffic, because there is also broadcast traffic for every kind of communication, like ARP requests to learn the MAC address; so it's only unicast that takes this path.

Let's take an example: VMs test and test2, with IPs 10.0.0.5 and 10.0.0.9, living on the same host. If we look at the same scenario and focus on the physical implementation, we go to the host and run brctl show: we find two bridges created, each with two interfaces, the tap and the qvb. And if we look at the definition of the integration bridge, we find the two qvo interfaces with the same VLAN tag. So this is how traffic flows from one instance to the other: it goes down one side and back up the other, as long as they know each other's MAC addresses. Same VLAN ID, and traffic flows normally.

So that was the same host. Sometimes we're unlucky and our VMs end up on different hosts, in which case traffic has to go through a different layer, which is the overlay network. Traffic flows from the test VM down to the tap interface, gets its security rules applied, and goes to the qvb and qvo interfaces. This time it has to exit the host, so it goes through the patch interface, and from there it goes over the overlay network. Now you have to remember that compute host 1 does not know where the other VM, test2, lives, so it has to send to every host you have in your environment, telling them: if you have this MAC address, take this traffic.
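That flooding travels over the tunnel mesh, which shows up in ovs-vsctl show on br-tun as one VXLAN port per remote host. A typical entry looks roughly like this (the IPs are illustrative; the port name encodes the remote IP in hex):

    Bridge br-tun
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
        Port "vxlan-c0a80a0b"
            Interface "vxlan-c0a80a0b"
                type: vxlan
                options: {in_key=flow, local_ip="192.168.10.10", out_key=flow, remote_ip="192.168.10.11"}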
On the destination host, traffic goes back up through the patch interface, up the qvo/qvb connection, through the tap interface, and from there to the test2 VM. One way to think about the overlay network here is as a big highway connecting two cities, in this case two compute hosts, with lots of lanes in it. The way OpenStack does it, every network's traffic gets its own unique lane; it doesn't all go through one big pipe. It's segregated by tunnel IDs, and I'll explain that in a second.

So there are two questions here. The first, as I just mentioned: how does traffic from different networks get isolated? You have an OpenStack deployment with 100 users, everyone is creating their own networks, and traffic flows over the same highway, the VXLAN tunnel. How do you segregate it? You do this using VXLAN tunnel IDs. br-tun is actually intelligent: it maps a VLAN ID on the host to a VXLAN tunnel ID, and it makes the VXLAN tunnel ID unique per network. This is the way it segregates the traffic of user A's network from user B's.

The other question: isn't it wasteful to send the traffic all over the place? As I mentioned, traffic is sent to every host, because compute host 1 does not know where compute host 2's VMs live, so it has to send everywhere until it finds the VM it's looking for. And actually, yes, it is very wasteful. So what br-tun does is learn. The first time, it floods; when it gets a packet back from the VM, it adds an entry to its flow rules recording which host that VM lives on, so the next time it only sends to that host instead of the whole mesh. This is very economical: if you have 100 compute hosts in your deployment and you do a ping, the traffic would have to be sent 100 times every time if this learning didn't exist.

So let's take an example: VMs test and test2, with the IPs mentioned, living on different hosts. This time the test VM sends the traffic down until it reaches the qvo. Looking at the qvo definition with ovs-vsctl show, you can see there's a tunneling bridge and an integration bridge, and the integration bridge has the qvo interface with tag=1. But we know that in this scenario the traffic has to exit the host, because test2 does not live on the same host. So let's look at the br-tun OpenFlow rules. We run ovs-ofctl dump-flows and grep for the VLANs. We see there's some sort of translation happening: as I mentioned earlier, traffic goes down, gets its VLAN tag stripped at this bridge, and is then sent over the dedicated lane in the tunnel created with the other hosts. As you can see up here, dl_vlan=1, actions=strip_vlan,set_tunnel:0x39. So it maps everything coming from the tagged interface with tag=1 onto tunnel ID 0x39. Basically, it makes sure that traffic coming from a single network gets its own lane over the tunnels created with the rest of the compute hosts and the network nodes as well. And the same goes the other way around: anything arriving on the dedicated lane, tunnel ID 0x39, gets VLAN tagged with tag=1, so it can find its way to the qvo interface.
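A minimal sketch of what those two rules look like in a dump (output trimmed; flow tables and output port numbers vary by release, and the VLAN and tunnel IDs follow this example):

    # outbound: strip the local VLAN tag, set the per-network tunnel ID
    ovs-ofctl dump-flows br-tun | grep dl_vlan=1
      ... dl_vlan=1 actions=strip_vlan,set_tunnel:0x39,output:2

    # inbound: anything arriving on tunnel 0x39 gets the local VLAN tag back
    ovs-ofctl dump-flows br-tun | grep tun_id=0x39
      ... tun_id=0x39 actions=mod_vlan_vid:1,resubmit(,10)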
So the idea is: outbound, the VLAN ID is removed and the tunnel ID is set to 0x39; inbound, the VLAN ID is added back if the traffic arrives on tunnel ID 0x39. On compute host 2, following the same scenario, we focus on this part: ovs-vsctl show shows the integration and tunneling bridges, and the qvo port has tag=2. You can notice now that although the VMs are on the same network, they actually have different VLAN IDs on different hosts. The VLAN ID is only significant within a single host: if you have multiple VMs on the same host on the same network, they'll have the same VLAN ID; on different hosts it can differ. If we also look at the dump-flows for this particular bridge, br-tun, we find it does the same thing: traffic is also sent over tunnel ID 0x39. So the two hosts communicate over the same lane when it comes to the bigger highway, the VXLAN tunnel. Again: VLAN IDs are removed outbound, because a VLAN ID is only significant within its own host; the tunnel ID is set to hexadecimal 0x39; and inbound, the VLAN ID is added back for traffic arriving on tunnel ID 0x39.

So now we've covered the first scenario, two instances on the same network, on the same host and on different hosts. What about different networks? This time we definitely have a router, because they are different networks. And by networks here I mean subnets, because I'm assuming a single subnet per network. This is how it looks in the logical view, but in a more in-depth view it looks like this: traffic flows from VM1 down to br-tun, over to the network node, up into the router namespace on the network node, down again through the tunneling bridge, over to the tunneling bridge of the second compute host, and then up to the other VM. So the idea is that tunnels are created between the compute hosts and the network node; the network node takes the traffic and routes it to the right compute host.

If you look deeper into the network node, you'll find it has the same assembly of integration bridge and tunneling bridge, so it can tunnel to the rest of the environment. You'll also see it has br-ex, though you can choose whatever name you want for the external bridge, and you can have multiple ones if you want. This is your entry point to the public network. There's a qdhcp namespace, which is created for every DHCP-enabled subnet you have in your environment, so you can provide DHCP addresses to your instances. And there's a qrouter namespace, created for every router you have, with multiple interfaces in it: the qr interfaces and the qg interface. There's a qr interface for every subnet you connect to your router, and qg is the default gateway for the router.

So on the network node, the router is implemented by the L3 agent inside a network namespace named qrouter- followed by whatever ID is specified for it. There are multiple network interfaces in it: a qr interface for each network subnet you attach to your router, routing tables that specify how the traffic flows between the interfaces, and one gateway interface, which is qg in this case.
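As a quick sketch, here's how those namespaces and the router's routing table look on the network node (UUID placeholders and the public gateway IP are illustrative, and the route -n columns are trimmed):

    # on the network node: one qrouter- per router, one qdhcp- per DHCP-enabled subnet
    ip netns
      qrouter-<router-uuid>
      qdhcp-<subnet-uuid>

    # routing table inside the router namespace
    ip netns exec qrouter-<router-uuid> route -n
      Destination    Gateway       Genmask          Iface
      10.0.0.0       0.0.0.0       255.255.255.0    qr-<id1>    # first attached subnet
      192.168.3.0    0.0.0.0       255.255.255.0    qr-<id2>    # second attached subnet
      0.0.0.0        172.24.4.1    0.0.0.0          qg-<id3>    # default route to the public network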
So, as sketched above, if you go to the network node and run ip netns, you'll find there actually is a qrouter namespace that you can enter to see in more depth how it looks. Now let's take another example: two instances, different networks, different hosts. We have two VMs, as you can see up here, and two networks they are connected to, with a router in between, and the router is connected to a gateway. Since this router is connected to two different networks, it has two qr interfaces, and since there are two DHCP-enabled subnets in this case, we have a qdhcp namespace for each of them with a tap interface inside it. As you can see, there is a router up here which connects them. The IPs are 10.0.0.16, if people can see it in the back, and 192.168.3.3 for this one, and they're both connected to a router which also has a gateway connection to the public network.

To look at the qrouter namespace, you can always easily go to any of the network nodes and run ip netns exec qrouter-<id> bash, so you get inside it this way. Run ifconfig and you'll find the qr interfaces and the qg interface you want. Each of them has an IP specified for it, and each has a MAC address specified as well, so we can say that the traffic flows between them according to the routing table that exists inside this namespace. You have to remember this namespace is totally isolated from the other namespaces on the network node, so traffic flowing through it never sees traffic from other networks or other routers in the deployment. If we run route -n inside the namespace on the network node, we find that one destination network goes through this qr interface, the other destination network goes through that qr interface, and our default gateway is on the qg, basically our gateway to the public network. So we can say the router has three interfaces, two qr interfaces and one qg interface, with the IPs specified on them.

So, back to our example: compute host 1 has this VM with this IP on it, and if you want VM1 to communicate with VM2, which is on a different network, it has to go through the router in this case. Let's have a look at the definition of the integration bridge here: we find it has a qvo interface with a certain tag. But we know, again, that the traffic will have to exit the host because it's destined for something outside it. If you run ovs-ofctl dump-flows, you find there's actually a specific rule for the gateway of this particular VM. As you can see, there's a MAC address in the rule that matches the qr interface on the router on the network node, so any traffic headed down toward the gateway of this network is pushed onto tunnel ID 0x43 and sent over its own dedicated lane in the VXLAN tunnels.
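A hedged sketch of what that gateway rule can look like among the learned flows (the MAC, table number, and output port are illustrative and vary by release and configuration; fa:16:3e is Neutron's default MAC prefix):

    # find the qr interface's MAC inside the router namespace first, if you like
    ip netns exec qrouter-<router-uuid> ip addr | grep -A1 qr-

    # then match it in br-tun's flows on the compute host
    ovs-ofctl dump-flows br-tun | grep fa:16:3e:12:34:56
      ... dl_vlan=1,dl_dst=fa:16:3e:12:34:56 actions=strip_vlan,set_tunnel:0x43,output:2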
The same goes for the second compute host: there's a qvo definition with a VLAN ID specified for the qvo in ovs-vsctl show, and if we look again at ovs-ofctl dump-flows for the tunneling bridge, we find that traffic destined for the gateway also goes over a specific tunnel ID. So the idea is that every host communicates with the network node through its own dedicated tunnel; the network node takes the traffic, puts it into the qrouter namespace, and moves it between the connections, the interfaces, so that the VMs reach each other.

Back on the network node: we know we're getting VM1's traffic on the 0x43 tunnel, hexadecimal 43, and VM2's on hexadecimal 39. If we look at the definition of the integration bridge, we find two qr interfaces, the ones that represent the connections to the subnets attached to the router, and we find that each of them is tagged differently, which is what we expect because they are connected to different networks. As I mentioned, VLAN IDs are significant within a single host, so they have to be unique per connected subnet. If we run route -n inside this router namespace, we find this interface connected to this network and that interface to that network. And if we look now at the tunneling bridge: it should be doing the reverse of what the compute hosts did, taking anything coming in on a dedicated lane and moving it up to the right VLAN ID, so that it moves correctly to the correct interface. And indeed, we see the translation happening: anything coming in on tunnel ID 0x43 gets VLAN tagged with VLAN 3, so it goes to this interface, the correct network. The same goes for the other one, where the VLAN ID gets set to 2, and in this case the traffic reaches the correct interface. The routing happens inside the qrouter namespace: traffic flows from one compute host to the right interface, from the other compute host to the right interface, and gets routed between them.

General notes on the qrouter: there's one qr interface per network subnet attached; the default gateway is the qg network interface; namespaces make it totally isolated, so whenever you create a router, it's totally isolated from the other routers in the environment; and the instances can be on the same or different hosts. Whenever there's a change of subnet, traffic has to flow through a router; there are no special circumstances for living on the same host or different hosts. A change of network definitely means a router is involved.

This one I'll mention really fast: an instance trying to get a DHCP address. This time we focus on dnsmasq and the qdhcp namespace, which is created for every subnet that has DHCP enabled. Let's take the same example, two VMs, two networks. As you can see here, each network has its own qdhcp namespace with a tap device connected to that network. If we do ps -ef and grep for dnsmasq on the network node, we find two dnsmasq processes, each listening on a certain tap interface. And if you run ip netns on the network node, you'll find that a qdhcp namespace is created for each of them. So the idea is that to implement DHCP, you need a qdhcp namespace, you need a dnsmasq process running, and you need a way to reach it from the compute host.
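A sketch of checking that on the network node (UUID fragments illustrative; the exact dnsmasq flags depend on your Neutron release):

    # two DHCP-enabled subnets -> two dnsmasq processes, one per qdhcp namespace
    ps -ef | grep dnsmasq
      ... dnsmasq --no-hosts --interface=tap<id1> --dhcp-range=... (first subnet)
      ... dnsmasq --no-hosts --interface=tap<id2> --dhcp-range=... (second subnet)

    ip netns | grep qdhcp                     # one qdhcp-<subnet-uuid> namespace each
    ip netns exec qdhcp-<subnet-uuid> ip addr # the tap holding the subnet's DHCP server IP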
If you go inside one of these namespaces with ip netns exec, and I picked the first one, you'll find there's a tap interface inside it, with an IP specified on this network, network two in this case, and a MAC address specified for it as well. So to get DHCP on this network, we need a path between the compute host and this tap interface, which is very similar to what we just did with the router: basically, local VLAN IDs get translated to VXLAN tunnel IDs and moved over to the network node, and the traffic goes up the same chain until it reaches the tap interface, which provides the DHCP address. So tunnel ID mapping happens between the compute host and the network node, and dnsmasq is what hands out the IPs.

We've now covered the instance-to-instance traffic, same network and different network, and the instance-to-DHCP-agent flow. We still have to cover instance to public network, and you trying to connect to your instance over floating IPs.

For an instance reaching the public network: we all know that the VM only knows about its gateway, so we know the traffic will end up at the router, the qrouter namespace, and the qrouter namespace's routing table says the default gateway is my public connection, the qg. So in this case, anything trying to reach the public network goes through the qg interface. We have to look further at how the qg interface is connected: we know the traffic will have to flow through this part, but how is qg connected to the actual br-ex bridge? What happens is that qg is actually a VLAN-tagged interface as well; it's tagged 1 here, and qg sends traffic to br-ex through the patch interface. So if we go back here, qg gets the traffic, sends it down, it goes through this way and exits from here. br-ex does the usual stripping of the VLAN ID, because a VLAN ID is only significant within the host; once you go outside, there is no value in VLAN IDs. Then it sends the traffic to the physical network. Incoming traffic is the opposite: it gets VLAN tagged so that qg receives it.

The last scenario we'll talk about is floating IPs. So far I've only mentioned the qrouter using routing tables, but I mentioned at the start that iptables NAT rules are also used to implement the routers in OpenStack. For inbound traffic, when you're connecting over a floating IP, the traffic goes first to the public network, from the public network to br-ex, and from br-ex up to the qg interface. If you go inside any of the qrouter namespaces and run iptables -t nat -S, so we're looking at the NAT table and what's implemented in it, you'll find lots of DNAT rules translating a given public IP to a given private IP. And the same goes outbound: if you connect out from a VM that has a public floating IP associated with it, it goes outbound through its own IP; but if you connect from something that has only a private IP associated with it, it goes out through the router in this case.
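Roughly what those rules look like inside a router namespace (the floating IP 172.24.4.3 and the fixed IP pairing are illustrative; the neutron-l3-agent chain names are what the L3 agent installs):

    ip netns exec qrouter-<router-uuid> iptables -t nat -S | grep 172.24.4.3
      # inbound: floating IP -> fixed IP
      -A neutron-l3-agent-PREROUTING -d 172.24.4.3/32 -j DNAT --to-destination 10.0.0.16
      # outbound: fixed IP -> floating IP
      -A neutron-l3-agent-float-snat -s 10.0.0.16/32 -j SNAT --to-source 172.24.4.3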
So now, the demo. I know I was a bit fast, but there was lots of material to cover. I'll go through outbound traffic first, and then inbound traffic to a VM. Here, as you can see, there is a test instance I just created, with the private IP 10.0.0.16, and for its traffic to go outbound it has to follow this path to the public network. This is a router that has only one interface attached to it, and this is basically where the qr interface sits. So this is the qr interface you would expect to find inside the router namespace, and its connection to this side is the qg interface you should expect. Basically, the qr interface exists because we connected this to this, and the qg interface exists because we connected this to that.

I'll open up the console; I have it here, and I'm trying to ping yahoo.com. I actually emulated a certain IP for it, because I created a cloud within a cloud. Now we're looking at the compute host. First we look at the KVM process: ps -ef and grep kvm, and we see the instance is running. Then we look at ovs-vsctl show, where we can find the qvo definition for this particular VM; it has a tag, tag=2, as you can see here. I'm tcpdumping on that interface, and you can see the ping happening at this level. If I go back and just replace the qvo with tap, the UUID being the same, I'm now looking at the pre-filter traffic, the traffic as it exits from the VM itself. I have also added a listener on br-int, an interface I add on each bridge so I can listen to all the traffic going through it, and the same goes for the tunneling bridge. So the traffic flows tap interface, qvo, integration bridge, tunneling bridge, and then exits until it finds its way to the network node, the router, and then out to the public network. Right now I'm tcpdumping here on eth0, which is the actual physical interface. The reason I specify udp is that VXLAN runs over UDP, so right now I'm catching the VXLAN traffic. You see here VNI 57; that's the tunnel ID for the VXLAN, and it's 0x39 in hexadecimal.

Now we look at the network node, which received this traffic. We do a tcpdump with the udp filter, and basically we see the VXLAN traffic coming from the other host, the compute host, with the same VNI; it's the unique lane on the highway. Then we go through the listeners, the ones specifically created on every bridge: tcpdump -i on the tunneling bridge, and we can still see the pinging happening; we look at the integration bridge, and it's happening there as well. Next we can look into the router itself. You can see here there's a qdhcp and a qrouter namespace; I went into the qrouter namespace to see what kind of interfaces it has. It has a qg interface for its public connection, and there is a qr interface, the one the traffic is coming in on from the compute host. I'm doing another tcpdump on it, and then the traffic goes out through the qg interface. Now I'm showing the external bridge; basically everything has to flow through the external bridge, and I'll do a tcpdump on the physical interface attached to it, which is eth3. But this actually won't work, and there's a reason for that, which I'll explain in a second. As I mentioned, a namespace totally isolates its network from the rest of the environment. Right now I'm inside the qrouter namespace, because I used bash to go inside it, so I cannot see eth3, which actually lives outside it at the root level. I have to exit the namespace, and then we're able to do the tcpdump on eth3, and now I can see the traffic going out.
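The same trace, condensed into the commands used (interface names illustrative; run each on the host indicated):

    # compute host: follow the ping up the stack
    tcpdump -i tap<id> icmp        # straight out of the VM, before filtering
    tcpdump -i qvo<id> icmp        # after the security group rules
    tcpdump -i eth0 udp            # VXLAN-encapsulated; look for the VNI (57 = 0x39)

    # network node: watch it arrive and get routed
    tcpdump -i eth0 udp            # same VNI arriving
    ip netns exec qrouter-<router-uuid> tcpdump -i qr-<id> icmp   # into the router...
    ip netns exec qrouter-<router-uuid> tcpdump -i qg-<id> icmp   # ...and out the gateway side
    tcpdump -i eth3 icmp           # run this OUTSIDE the namespace; eth3 isn't visible inside it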
So this is basically the traffic flow for this instance: from the VM to the tap interface, the qvb/qvo veth pair, the integration bridge, and the tunneling bridge; over to the network node and up the same chain; out through the qg to the br-ex external bridge we have; and then on to the public internet.

The next demo I have is actually for traffic the other way around: inbound traffic coming in, where you're using a floating IP to connect to the instance. You can see here that we have the test VM, 10.0.0.16, with a floating IP in the 172 range, and we have this topology. Traffic has to flow first through the public network, then the router, then our network, and then reach the instance. In this case I have a VM sitting on the public network, sharing the same IP range as the VM's floating IP. I'll start a ping first, so I'm pinging from the summit1 machine to the VM's public IP, and now we'll follow it the other way around. We look at the network node: traffic is coming in through the br-ex bridge, so we do tcpdump -i eth3 and we see the traffic coming in. If we do ip netns, we find the router namespace; we go inside it again and see the traffic flowing first to qg and then to qr, and from qr it goes again through the integration bridge and the tunneling bridge to the compute host. So we see that qg is receiving the traffic, and now it's reaching the qr interface, which represents the network the VM is connected to. Now we look at the compute host: we're receiving the traffic on br-tun, so basically it has exited the network node, come back to the tunneling bridge on the compute host, gone up to the integration bridge, up again to the tap interface, and then into the VM. You can see here the qvo interface, the one attached to the integration bridge, with VLAN tag 2, and if you look at the tap device, the tap device is getting the traffic. That's basically it. Thank you very much for your time today, and if you have any questions, we'd be happy to answer them.