Hello everybody, my name is Thomas Richter. I work for IBM Research and Development in the Linux Technology Center, and in this session we are going to talk about software-defined networking using the VXLAN device driver. The agenda will be centered on the VXLAN device driver. It's an IETF draft on its way to becoming an RFC standard. It was released into the Linux kernel late last year and saw a couple of extensions in the Linux 3.8 kernel, the so-called DOVE extensions. DOVE stands for Distributed Overlay Virtual Ethernet; these are basically add-ons to the VXLAN driver that make it really usable for software-defined networking. So I'll show you a little bit about the VXLAN details. We need that for the next item, the principle of operation. We'll go through a few examples of how we create, migrate and delete virtual machines, then some advanced usages like multicast, broadcast and detection of VMs, and finally we'll talk a little bit about the management tools we need to get the most benefit out of VXLAN-defined networking. And at the end of the session, if we have time, I'll have a few words about related and future work. Okay, so where are we right now? Datacenter virtualization. We have a few hosts, A and B for example, and the customers run virtual machines on them. Big datacenters have many customers, and here we see them separated by virtual machines in different colors. This is quite standard now. What the customers really want is a network which belongs to themselves. Right now they share an underlying network, and what we really want is an own logical network. This own logical network can have individual address spaces for IP and MAC addresses. So each customer might have MAC addresses which happen to be the same as a different customer's. Of course, interconnections with other customers should be possible as well.
So the logical network should belong to me. What this really refers to is an overlay network: I want a logical view of my network on top of the existing underlying network, and I want to use the existing underlying infrastructure. Okay, what we also want is central management and control. The customer would like to be able to control his logical network. It should be reliable, of course, and ideally it should cover long distances. So it shouldn't matter whether the two datacenters or the two machines are next to each other in one room or across the globe. Optionally, we want the ability to define policy. For example, a specific machine shouldn't be able to talk to another machine; or if we go across a long-distance network, we might want to chain in an appliance which does encryption for us before the traffic goes onto some unknown service provider network, and it will be decrypted on the other side before it reaches our destination virtual machine. Those are the needs of the customer, and I'm going to show how we can realize that using VXLAN. Okay, VXLAN is short for Virtual eXtensible Local Area Network. It basically operates by encapsulating data: the data leaving the machine is encapsulated and sent over the network to the participating hosts, which are called virtual tunnel endpoints, and the virtual tunnel endpoints are interconnected using standard IP infrastructure. VXLAN provides a 24-bit network identifier, which is prepended to every packet leaving a VXLAN device driver. This defines what we call a VXLAN segment. The idea is that a virtual machine can only communicate with another virtual machine if they are in the same VXLAN segment. In different VXLAN segments, the same IP and MAC addresses can be reused. Also very important: the virtual machines are unaware of the encapsulation, so there's no need to change anything in the software stack.
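The segment isolation just described can be sketched in a few lines of Python — a toy model, not kernel code, and all MAC and IP values are made up: because forwarding is keyed by (VNI, MAC), two tenants can reuse identical MAC addresses without colliding.

```python
# Toy model of VXLAN segment isolation: entries are keyed by
# (VNI, MAC), so the same MAC address may appear in two segments
# and still map to different tunnel endpoints.
fdb = {}

def learn(vni, mac, vtep_ip):
    """Record which VTEP hosts this MAC within the given segment."""
    assert 0 <= vni < 2**24           # the VNI is a 24-bit identifier
    fdb[(vni, mac)] = vtep_ip

learn(42, "52:54:00:00:00:01", "10.0.0.1")   # tenant red
learn(43, "52:54:00:00:00:01", "10.0.0.9")   # tenant blue, same MAC

assert fdb[(42, "52:54:00:00:00:01")] == "10.0.0.1"
assert fdb[(43, "52:54:00:00:00:01")] == "10.0.0.9"
```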
Good. I'll focus on the recent extensions and the typical flow scenarios for which we have implemented them and put them into the latest kernels. I'll show you the mapping of virtual machine addresses to the destination endpoint, so the virtual machine knows where to send the traffic, and of course very important is the management of the virtual tunnel endpoint. So we are going to talk about these items. But first I have to talk a little bit about VXLAN basics so that we are on the same page. If you create a VXLAN device, you usually create it on top of some physical Ethernet device. Pretty simple: we have two hosts A and B, my VXLAN devices carry the addresses 20.0.0.a and 20.0.0.b, and the underlying Ethernet devices have some other IP addresses; it doesn't really matter. You create these VXLAN devices with a command, and in there is ID 42, meaning this is the network identifier we are going to use to talk between my tunnel endpoints here and here. The next thing is we specify a multicast address. What happens when the VXLAN device is created? Two things. First it creates a tunnel endpoint. The tunnel endpoint has a port number, so it is a UDP socket endpoint. The port number in the kernel right now is 8472; the IANA-assigned number is 4789, but the Linux 3.8 kernel uses 8472, and later ones as well. The next thing is we join a multicast group. This is needed for actually sending out packets: whenever a packet goes down there, we encapsulate it with this VXLAN header — what is in there, I'll show you in a second — it goes over the network, and the receiving side receives the encapsulated packet, takes off the header, does some checking on it and moves it up to the receiving device driver instance. And here you can see we have the inner packet. For example, if we do a ping, then we have this inner packet from the VXLAN device.
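The eight-byte header the driver prepends can be illustrated with a small Python sketch following the RFC 7348 layout — a flags byte with the I bit, reserved bits, the 24-bit VNI and a final reserved byte. This is for illustration only, not the kernel's actual code path.

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Pack the 8-byte VXLAN header: flags with the I bit set (0x08),
    24 reserved bits, then the 24-bit VNI and a final reserved byte."""
    assert 0 <= vni < 2**24
    return struct.pack("!II", 0x08000000, vni << 8)

hdr = vxlan_header(42)
assert len(hdr) == 8                              # eight-byte header
assert hdr[0] == 0x08                             # I flag set
assert int.from_bytes(hdr[4:7], "big") == 42      # VNI recovered
```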
A VXLAN header is prepended, and then we have the outer packet for the transmission over the IP infrastructure. Okay, some more details, unfortunately. What happens if, for example, we have our two hosts A and B and I want to send a ping packet from 20.0.0.a to 20.0.0.b? So I'm on a virtual machine; I use my logical network, so I know nothing about the underlying infrastructure. The first thing that happens is, of course, that we don't know the destination MAC address — the MAC address of 20.0.0.b is unknown. What do I do? The next thing is I send out an ARP request. Okay, this now hits the VXLAN device driver. It doesn't know what to do; the only thing it knows is that it has to prepend the VNI number in this VXLAN header. It will do so, and then — well, there was this multicast address when we instantiated this device driver. Since there's nothing better to do, we use this multicast group and send out the data packet on the wire. Host B, having done the same setup, will receive this multicast packet. It will find the VXLAN header and will look at the VNI number we used when we instantiated the device driver; hopefully it is the same, then it will accept the packet and route it onwards to the VXLAN device driver. The device driver is pretty happy because it can answer the request for 20.0.0.b's MAC address. Okay, we could do the same trick — put the ARP reply in, send it out via multicast, and the whole thing goes back to host A. But we can do better, because when we receive this packet here, we actually have two headers. We have the outer header with the source IP address of the sending tunnel endpoint, and we have the inner source MAC address of the sending endpoint. So we can combine those two values, and we know where to send something destined for 20.0.0.a.
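The learning step just described — pairing the outer source IP with the inner source MAC so the reply can go unicast — can be modeled like this (illustrative addresses, not kernel code):

```python
fdb = {}  # inner source MAC -> VTEP (outer source IP)

def snoop(outer_src_ip, inner_src_mac):
    """Learning step: the receiving VTEP remembers which tunnel
    endpoint the inner MAC address came from."""
    fdb[inner_src_mac] = outer_src_ip

# Host B receives host A's multicast ARP request:
snoop("192.168.1.10", "52:54:00:00:00:0a")

# The ARP reply can now be sent as a directed unicast to host A's VTEP.
assert fdb["52:54:00:00:00:0a"] == "192.168.1.10"
```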
Okay, this guy was kind enough to send us his MAC address, so we know where to send the reply to, and we also have the destination endpoint because the packet came from this guy. So we know host A is the one hosting this virtual machine or network endpoint. So what happens is that instead of sending back a multicast packet, it sends back a directed unicast packet, saying: okay, I know where you are, here is your ARP reply. With this we hit one of the main things of the VXLAN device driver, the so-called forwarding database. The forwarding database is responsible for knowing which MAC address is hosted by which tunnel endpoint. So it's a mapping between a destination MAC address and the tunnel endpoint, which is the source IP address of the sender. This forwarding database can be shown using a new command in the iproute2 tool chain, called `bridge fdb show`, and there you can see all the entries of the VXLAN device driver. In this case, for example, we have the MAC address of the host at 20.0.0.a, and the destination to send it to is 192-and-so-on dot a. And in the last line here we see the catch-all MAC address of all zeros, which is an invalid MAC address. It means: if we do not know where to send a packet, we send it out on the multicast group to make sure everybody in my network receives the packet and has a chance to reply to it. So it's a catch-all. This is the status of the VXLAN device driver as it was introduced late last year, early this year. Now we want to use that for software-defined networking, and for that this device driver lacks a few features. The drawbacks are pretty simple: I want a control plane.
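The catch-all behavior of the forwarding database described above can be sketched as a dictionary with a default entry. The addresses here are made up; the real database is what `bridge fdb show` displays.

```python
ALL_ZERO = "00:00:00:00:00:00"

fdb = {
    "52:54:00:12:34:56": "192.168.1.10",  # learned unicast entry
    ALL_ZERO: "239.1.1.1",                # catch-all -> multicast group
}

def next_hop(dst_mac):
    """Unknown destinations fall through to the all-zero catch-all
    entry, i.e. the packet is flooded on the multicast group."""
    return fdb.get(dst_mac, fdb[ALL_ZERO])

assert next_hop("52:54:00:12:34:56") == "192.168.1.10"
assert next_hop("52:54:00:de:ad:01") == "239.1.1.1"
```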
If I want to control which machine is part of my logical network, I really don't like the idea of everybody just plugging into a multicast address. We want a central point of management to allow hosts to participate in my logical network. The next thing is multicast routing. If you span the globe, you rely on a long path, but you cannot be sure that every hop in between actually supports multicast routing. In fact, a lot of vendors don't do that, so your packet might get dropped somewhere in between. Next, multicast addresses are pretty limited; we don't have as many multicast IP addresses as we want VNI numbers. For logical networks you could use VLAN: VLAN has a 12-bit ID, so it gives you 4,096 networks. The VNI number can be up to 24 bits, so that would be 16 million logical networks, but you don't have 16 million multicast addresses. So that's not going to work. Okay, what went into the 3.8 DOVE extensions? The first thing is what we call an L3 miss. An L3 miss means: if I want to send something to a destination IP address, I need some help. The help is the ARP request to find out the destination MAC address. We don't do that anymore by sending a multicast packet; instead we trigger a netlink message from the VXLAN device driver to some — let's call it userspace helper program — asking for help: here is a destination IP address I don't know, so tell me the MAC address. What is expected? It is expected that this userspace helper program will send us a reply in the form of an entry in the neighbor table. So the idea is: if I don't know a destination MAC address, give it to me and please enter it into my neighbor table. That's the L3 miss. For the L2 miss, we have a MAC address, but we don't know the corresponding tunnel endpoint. So we need some more help again.
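The L3-miss round trip can be modeled as a tiny event exchange — the driver raises a miss, the userspace helper answers by filling the neighbor table. This is a sketch only; the real mechanism uses netlink messages and `ip neigh` commands.

```python
neigh = {}    # destination IP -> MAC (stands in for the neighbor table)
events = []   # stands in for the netlink notification channel

def resolve(dst_ip):
    """Driver side: return the MAC if known, otherwise raise a miss."""
    if dst_ip in neigh:
        return neigh[dst_ip]
    events.append(("l3miss", dst_ip))
    return None

def helper_reply(dst_ip, mac):
    """Helper side: answer the miss by adding a neighbor entry."""
    neigh[dst_ip] = mac

assert resolve("20.0.0.2") is None                   # first lookup misses
assert events == [("l3miss", "20.0.0.2")]
helper_reply("20.0.0.2", "52:54:00:00:00:02")
assert resolve("20.0.0.2") == "52:54:00:00:00:02"    # now resolved
```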
The idea is again not to do a multicast, but to ask our userspace helper program which tunnel endpoint it is, and the expected action from the userspace helper is a netlink reply in the form of a command that enters the correct entry into the forwarding database of the VXLAN device driver. Okay, the next feature we added is no-learning. I just introduced you to the idea that the ARP reply is done by unicast by investigating the inner and outer header. We don't want that in a user-controlled environment, so we disable that type of snooping. And then we put in some optimizations, proxy and route short-circuiting. They are useful, though it's not that you can't live without them; they are just optimizations for when the VXLAN device driver is used as an outgoing interface of a Linux bridge. The proxy option is there to have the VXLAN device driver send out the ARP reply itself if it has a mapping, instead of sending the request all the way to the destination machine and back again. And short-circuiting is used if you have virtual machines acting as routers; if we have time at the end, we might come to that. It's just an optimization to save the first hop. Okay, so the key point in the VXLAN device driver is the forwarding database. It is keyed by destination MAC addresses, here a unicast MAC address. In this case we only have one single entry because we have one single endpoint. The information contained in it is the destination IP address, a VNI number and a port number, in case of cross-network talk; we come to that later. So this is just the mapping we need to send the packet. More advanced is multicast and broadcast. What happens if we do a multicast or broadcast is that we have several recipients of each packet. The VXLAN device driver knows which tunnel endpoints need to receive those multicast packets, and it will copy and create an outgoing packet for each destination.
So it does a lot of copying and will send all the packets in parallel to the destinations. The parts here in blue are add-ons done in the 3.8 kernel to support the DOVE extensions. And of course we have commands in the iproute2 tool chain: `bridge fdb add` to create an entry, `bridge fdb del` to delete an entry naturally, and `bridge fdb append` if we have a multicast MAC address and want multiple destinations. And `replace` is an optimization: if you change the destination endpoint, we previously ended up deleting the old entry and adding the new entry, and that caused a few race conditions, so we added the replace command to update this atomically. Good. What is this all good for? Let's go through a few examples of how we actually use VXLAN in reality. This is the command to instantiate the VXLAN device driver. As you can see, there is no multicast address anymore: we have the network ID, we use L2 miss, L3 miss, some optimizations, and no learning. So what happens if VMA is going to send a packet to VMB? First of all, it wants to know: what is the destination MAC address of 20.0.0.b? At the initial startup, we don't know. What happens is that we see this L3 miss message coming along. The userspace helper is hopefully around and will put an entry into the ARP table, telling us: this is the MAC address of the target. Next, the packet is routed further to the VXLAN device driver, and it will ask its forwarding database where to send data for this target MAC address. The database is initially empty, so we'll see an L2 miss message saying: where is the tunnel endpoint? No problem, somebody will fill it in for us, and the packet is sent to the target device over here. On host B's side, the setup was done in a similar way. No problem: this guy will receive the information and will send its reply back.
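The broadcast fan-out mentioned above — one `append`-style forwarding database entry with several destinations, one packet copy per tunnel endpoint — looks roughly like this toy model (made-up addresses, not the kernel implementation):

```python
BROADCAST = "ff:ff:ff:ff:ff:ff"

# 'bridge fdb append'-style entry: one MAC, several tunnel endpoints
fdb = {BROADCAST: ["10.0.0.2", "10.0.0.3"]}

def transmit(dst_mac, frame):
    """Duplicate the frame once per destination VTEP, as the driver
    does for broadcast and multicast MAC addresses."""
    return [(vtep, frame) for vtep in fdb.get(dst_mac, [])]

copies = transmit(BROADCAST, b"ping")
assert [vtep for vtep, _ in copies] == ["10.0.0.2", "10.0.0.3"]
assert all(frame == b"ping" for _, frame in copies)
```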
And this is how we actually set up our virtual machines: we are using a virtual bridge connected via macvtap, and the underlying device is the VXLAN device driver. Once both tables are set up correctly, traffic between VMA and VMB can flow, and it doesn't matter if you go across the globe or between two boxes next to each other. Okay, so that was a simple situation. Now let's make it a bit more complicated. Assume we have our two machines again, and we are migrating VMA from host A to host C. Nothing really changes. What happens is that we have to delete the forwarding database entry on host A — pretty simple. Next we have to copy the database entries forward to host C — also a pretty simple notification. We use libvirt and QEMU, so the libvirt migration events can be used to detect that. But there is a disadvantage: there are many other hosts which are neither the target host nor the source host of this migration. The other hosts in the network have to know that the destination endpoint for this guy has changed from host A to host C. So everybody participating in my logical network has to be made aware that the destination endpoint of virtual machine A has moved. This is something we'll come back to later. But this is the basic idea: by simply manipulating those entries, virtual machine A can talk to virtual machine B again. The next thing is a pretty easy task: we remove the virtual machine. This is not daunting at all — again, we simply delete the entries in the forwarding database on host C. More complicated, of course: the other network participants have to be notified, and their forwarding database entries have to be deleted so this guy cannot be reached anymore. Good. VXLAN supports broadcast and multicast as well. So let's assume we are running on host A and want to talk to VMB and VMC. The entries are pretty simple; they are the same.
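Coming back to the migration scenario above, its forwarding-database side can be sketched as follows — every other participating host has to repoint the VM's MAC from host A's tunnel endpoint to host C's. This is a toy model with made-up addresses; in reality the agent would issue a `bridge fdb replace` on each host.

```python
VM_MAC = "52:54:00:aa:00:01"

# Per-host forwarding databases; VM A is initially reachable via
# host A's VTEP at 10.0.0.1 (all addresses illustrative).
hosts = {
    "B": {VM_MAC: "10.0.0.1"},
    "D": {VM_MAC: "10.0.0.1"},
}

def migrate(vm_mac, new_vtep):
    """Repoint the entry on every participating host, 'replace'-style."""
    for fdb in hosts.values():
        if vm_mac in fdb:
            fdb[vm_mac] = new_vtep

migrate(VM_MAC, "10.0.0.3")   # VM A moved to host C
assert all(fdb[VM_MAC] == "10.0.0.3" for fdb in hosts.values())
```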
But this time we have to talk from host A to host B and to host C simultaneously. If you run a broadcast ping, your destination MAC address will be all Fs. What we need is some mapping in our forwarding database so that if I talk to the destination MAC address of all Fs, the packet is sent to several destinations — the destinations being the other participants in my network. So by some kind of magic, somebody is going to add those entries for me when I create virtual machines. Good. The next thing is the multiple-customer setup. Assuming we have several... Yes? [Audience] When you talked about migration, you didn't mention how the hosts which were aware of the previous address are notified. — I'll come to that later; I'm just laying a little bit of groundwork for it. One slide back — sorry, one slide forward. Okay. Right: for example, when you create the virtual machines, you would know what your other target endpoints are, and you can create those entries in advance. Okay, so let's talk about our multiple-customer setup. Here again we have several customers, the red one and the blue one, and both have the same setup: we use a little Linux bridge and we have two instances of the VXLAN device driver, and again the same on the other side. What happens now is that you have to make sure which instance of the VXLAN device driver is used. For example, if you talk from VMX, it goes down to this bridge and down to this VXLAN device driver instance. And down here you see virtual network number 4. This is the virtual network number this VXLAN device driver instance was created with. So going outbound, the traffic will be prepended with this VNI number and will travel through the network to the other side.
Here it will be received by the VXLAN device driver, which checks the header. If the header's VNI number is 4, the packet goes to this instance, up to this bridge and to those virtual machines. And on the other side, if you go down this path, you use a different instance of the VXLAN device driver with a different network number; it goes out on the physical interface and is received again on this side. There it will see VNI 4, so it goes up this side to the receiving guy. So the VNI number in the VXLAN header is effectively the fan-out that makes sure traffic does not interfere between different customers. What can also be done — this is something we call cross logical network traffic — is that we can define so-called domains. A domain is a set of logical networks under one central control. What becomes possible in this case, for example, is that I have an entry in the forwarding database saying I want to send traffic from this machine using VNI number 3, targeting this machine over here, say 21.0.0.b. The idea is that we have an entry in the neighbor table saying we want to talk to this box — so we do the mapping between the IP address and the destination MAC address — and then we have this entry in the forwarding database, but now with an additional field, the VNI number. This means that traffic leaving my VXLAN device, instead of using my default VNI 3, will go out on the wire with VNI number 4 in the header. So the receiving side assumes that the traffic was sent in VNI 4, it goes up this path, and this actually enables us to talk from red to blue. This is the way we could support cross logical network traffic if we want to. That's not the default.
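The per-entry VNI override for cross-network traffic can be sketched like this (a toy model; the VNI values 3 and 4 follow the example above, the MAC addresses are made up):

```python
DEFAULT_VNI = 3   # the VNI this device instance was created with

# One forwarding entry carries its own VNI, overriding the default:
fdb = {"52:54:00:bb:00:01": {"dst": "10.0.0.2", "vni": 4}}

def egress_vni(dst_mac):
    """Use the entry's own VNI when present, else the device default."""
    entry = fdb.get(dst_mac)
    return entry.get("vni", DEFAULT_VNI) if entry else DEFAULT_VNI

assert egress_vni("52:54:00:bb:00:01") == 4   # cross-network entry
assert egress_vni("52:54:00:bb:00:99") == 3   # normal segment traffic
```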
You have to explicitly set it up and make the forwarding database entry accordingly with your `bridge fdb` command. But if you want to, you can do it. And since we have different subnets here and there, we can also use the standard routing tools to do IP routing. Good. One last thing, and then we are through all these scenarios: you sometimes want to talk to external connections. What we can do is have a virtual machine — for example here, going down this path — talking to this guy, and this guy now has two connections; it is called the gateway. One side of the connection is attached to my logical network, and the other side is a macvtap device attached to, for example, a NIC or some other Ethernet device which is connected to the internet. The same trick can be done if you want to talk via this gateway from the macvtap device to the underlying network. So we actually have a connection from here down to here, then we go back to the underlying network, and then we can talk to hosts which are directly connected via the macvtap interface to the physical NIC. This is how we can set up connections to the external internet and the underlying physical network. Okay. So those are the ground rules. What else do we really need? I have skipped one thing so far: we need some aid to actually define the logical network. As I said in the beginning, we had this anonymous userspace helper program. What's the idea behind it? The idea is pretty simple. On each machine which is part of our logical network we have a tool running, called the agent. The agent is there to manipulate the forwarding database entries and the neighbor table entries of the VXLAN device driver. So the idea is that every host which participates in our logical network has such an agent running.
The agent is responsible for detecting that a virtual machine has started. This can happen at boot time via, for example, a DHCP snoop: a virtual machine boots, attaches to the Linux bridge and gets its IP address via DHCP. [Audience] When you say it gets the VM's IP and MAC address, you mean it learns it — it doesn't get it from the Linux bridge? — Well, there are two ways. You can actually define IP addresses on the Linux bridge which are assigned when you start a virtual machine. [Audience] No, let's say it goes to an external DHCP server. — Okay, we haven't done that, because what we have done is attach it to the bridge and use the bridge to reply to the DHCP request and supply the virtual machine with an IP address. [Audience] Because in Open vSwitch you can register hooks that will give you that. — Okay, basically it would be nice if the agent can learn that, correct? Yes, it will be able to learn that, because if the bridge is responding to the DHCP request, there's a DHCP offer, and the agent can snoop on that and learn it. The other possibility would be a static IP address on the virtual machine. In that case we normally have a gratuitous ARP — an ARP packet sent out at boot time to make sure my IP address is unique. So I send out an ARP request and do not expect a reply; if no reply comes in, I use this IP address because nobody else claims it. These are the two possibilities we can use to find out the virtual machine's data like IP address and MAC address. This can then be forwarded to something I call the agent manager. The agent manager has a constant connection with its agents to find out which machine resides where. So it learns that VMA has been started, and it knows which other agents are around manipulating my logical network.
So it will send this information back to the other agents in my logical network, and they will feed the corresponding entries into the forwarding database and the neighbor table. If the virtual machine wants to do multicast, that is also possible, because the agent can do some IGMP snooping. It will notice that the virtual machine joined or left a multicast group, and this information can then also be forwarded to the agent manager and distributed throughout my logical network. So the agent is the key point in defining my logical network; the agent manager's connection to all the agents makes sure the data is shared and spread throughout the logical network. It is an important program, so we have to make sure it is reliable. The idea is to have multiple instances: basically a master responsible for all incoming requests and a few backups, and they watch each other using some heartbeat monitoring. If the master is dead, some backup will become master and take over. This component can also be used to define policies. For example, if VMA should not talk to VMB, the agent can simply withhold the corresponding entries. For that, the agent has to have the information about which logical machine should be able to talk to which machine. And if the agent manager realizes that the destination is far away, it can also feed the corresponding information to the agents and make the setup in such a way that the machine will send its data through some appliances, which will do some type of compression or encryption if the traffic is going over a wide area network; on the other side, the reverse will happen. So this is the central hook to define your policies and to define your logical network. Okay, and as I said before, there's a constant data exchange between the agents and the agent manager.
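The policy enforcement just described — the agent manager simply withholding entries for forbidden pairs — can be modeled in a few lines. The VM names and the denied pair here are hypothetical.

```python
# Pairs of VMs that must not talk to each other (hypothetical policy).
denied = {frozenset(("vmA", "vmB"))}

def distribute(src, dst, vtep_ip, fdb):
    """Agent-manager side: only hand out a forwarding entry if policy
    allows it. Withholding the entry means the VMs never learn how
    to reach each other."""
    if frozenset((src, dst)) in denied:
        return False
    fdb[dst] = vtep_ip
    return True

fdb = {}
assert distribute("vmA", "vmB", "10.0.0.2", fdb) is False  # denied pair
assert distribute("vmA", "vmC", "10.0.0.4", fdb) is True   # allowed
assert fdb == {"vmC": "10.0.0.4"}
```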
The agent also hooks into the libvirt daemon to watch for migration events. Whenever a machine is shut down for migration, the agent can notify the manager that the machine is going down and being migrated away, and on the other side the receiving agent can instantly notify the agent manager that the virtual machine has been received and is back online, so it can feed the corresponding information into the forwarding database entries. That is the basic idea of the control plane for this type of logical network which we are building with the VXLAN device driver. Okay, I'm nearly done, but there are a few details to mention. If you are using standard IP, we are adding a little header. It's only eight bytes, but those eight bytes might break your maximum transmission unit. So what we do on the outgoing interface is first make sure the header fits into the outgoing packet, by making sure the data is not larger than the maximum transmission unit minus the header, and then we set the Don't Fragment bit before we send the packet out on the wire. The tunnel endpoints' connection is UDP only. That's not really a big problem, because if your virtual machines are using TCP, the virtual machines' TCP stacks will make sure the data is transferred reliably and consistently. If the underlying transport loses a packet — well, tough luck; TCP inside the virtual machine will find out and retransmit the packet. And if the virtual machine is using UDP, it has to have some other way to make sure the data is transmitted correctly, because UDP itself is not reliable; you have to cope with data losses when using UDP. The next thing is security. This is a bit more complicated.
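To make the MTU point above concrete: the eight bytes are the VXLAN header alone, but the encapsulated inner Ethernet frame also carries its own Ethernet header, plus the outer IP and UDP headers, all of which count against the physical MTU. A quick sketch of the resulting inner MTU on IPv4:

```python
VXLAN_HDR = 8    # the eight-byte VXLAN header from the talk
UDP_HDR   = 8    # outer UDP header
IP_HDR    = 20   # outer IPv4 header, no options
ETH_HDR   = 14   # the encapsulated inner Ethernet header

def inner_ip_mtu(outer_mtu=1500):
    """Largest inner IP packet that still fits after encapsulation."""
    return outer_mtu - (VXLAN_HDR + UDP_HDR + IP_HDR + ETH_HDR)

assert inner_ip_mtu() == 1450       # the usual 50-byte VXLAN overhead
assert inner_ip_mtu(9000) == 8950   # jumbo frames leave more room
```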
Of course, the best attack would be to hack your agent manager and make it distribute the wrong entries. So there has to be some security in the agent manager to protect its database, so nobody can manipulate the database. A very simple attack would be to change the VNI number and thereby transfer the data into a totally different network. We don't want that. And of course you have to have secure communication between the agent and the agent manager, so that the data feeding the database entries is not manipulated on the fly. Next: if you employ some type of middleboxes — be it virus scanners or firewalls in between — they don't know about VXLAN. They have to know that there's an outer packet and an inner packet, and they really have to concentrate on the inner packet, because that's the payload, not the outer packet. A few more things. IP version 6 support is already in, but it's missing multicast support. So if you use IP version 6 as the underlying protocol, you don't have the full function right now; IPv4 is okay. As I said, it's in kernel 3.8, but if you want to try it out on your own, 3.10 is better because some fixes went in recently. If you are using iptables, ebtables or traffic control commands, you can do everything on the host as before, because if you are manipulating the outgoing interface on the host, VXLAN is completely hidden at this level, so you can use those programs as you do today. There are some alternatives. For example, you can use VLAN. For those who know it, VLAN is this four-byte header inserted by hardware. If you want to use VLAN, you have two disadvantages. First, your network range is limited to 4,096 or so, because you can only use 12 bits of your VLAN header for logical networks. And of course, you have to make sure VLAN is enabled all the way from source to destination.
[Audience] No, you can enable it only on the switches if you work in access mode. — Okay, but it has to be enabled through to the destination as well, right? [Audience] Yes. The next thing is an alternative standard, IEEE 802.1Qbg, edge virtual bridging, which is something similar. It has the disadvantage that you have to configure your hardware accordingly. First of all, part of it is non-standard behavior, because it requires a switch that is able to send traffic back out of the same port it received it on — this is an extra add-on. And of course you have to configure your switch accordingly. Then the switch has the disadvantage that it sees all the MAC addresses of the virtual machines. If you have a large data center with many virtual machines, your switch has the problem of a very large table size, and the resulting spanning tree protocol to remove redundant paths can become pretty large and slow. This is a disadvantage of IEEE 802.1Qbg. Okay, so let me summarize what we have achieved with VXLAN as the workhorse for software-defined networking. We have location-independent addressing: it doesn't matter where the machine resides, it keeps the same IP address and the same MAC address, regardless of whether it moves one block or one continent. You have a logical network which scales nicely. You are independent of the underlying physical network and protocols, so you can use any protocol you want in between, and you can use the existing IP infrastructure in place today. You have no virtual machine addresses whatsoever on any external devices or switches. You have no VLAN limitations — the limitation is basically the number size in the VNI header, which is 16 million; that should last a while. You do not rely on multicast routing; that's another big plus point. And you have address-based isolation.
So in the data center, different tenants or different customers have totally isolated networks and can use the same addresses if they want to. It's totally isolated, and it's all done in software. You don't need to do anything in hardware. It's all one device driver, which you insmod and attach to your virtual bridge, and then you can use all these features. Okay, so a few words about related and future work, and I'm done. VXLAN is basically an overlay transport protocol, and there are a few others around. I'm not very familiar with all of them, I must say, but they share the same concept: an inner packet for the VM and an outer packet for the transport interface. There is NVGRE, network virtualization using generic routing encapsulation. GRE has been around for quite a long time, as you can see from the RFC numbers. The main difference is the transport: VXLAN uses UDP, whereas NVGRE uses its own protocol directly on top of IP. That's all I can say about that. The next one is STT, Stateless Transport Tunneling, specially designed for NICs which have hardware support for sending and receiving large buffers. So if you have a network card which can do TCP segmentation offload or large receive offload, this protocol uses those hardware features to segment the packets into MTU-sized chunks, transport them over the network, and do the checksum calculation all in hardware. This could be done also for VXLAN? Can it? In newer kernels, there are a few drivers which know how to do that for the encapsulated traffic as well. Is that in? Okay, oh, that's nice to hear. Good. And as future work, we are thinking about integration with OpenStack, and maybe using Open vSwitch instead of the Linux bridge we are using today. Okay, so this is basically the end of my talk. If you have more questions, I'm happy to answer them if I can. Yes? 
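The setup summarized above (load the driver, attach the VXLAN device to a virtual bridge) can be sketched with iproute2. This is a minimal example, not from the talk; the device names, physical interface, and VNI are made-up placeholders, and it needs root and a vxlan-capable kernel (3.8 or later).

```shell
# Assumed names: physical NIC eth0, bridge br0, VNI 42 -- adjust to your setup.
modprobe vxlan                                  # load the VXLAN device driver
ip link add vxlan42 type vxlan id 42 dev eth0   # VXLAN device for segment/VNI 42
ip link add br0 type bridge                     # virtual bridge the VM taps attach to
ip link set vxlan42 master br0                  # attach the VXLAN device to the bridge
ip link set br0 up
ip link set vxlan42 up
# VM tap devices enslaved to br0 now reach peers in VNI 42; the VMs
# themselves see plain Ethernet and are unaware of the encapsulation.
```

On older iproute2 versions you would use brctl to enslave the device instead of `ip link set ... master`; either way, the guests need no changes.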
Yeah, what does VXLAN bring to the table that VPLS hasn't had for years already? Is it what? VPLS? I'm sorry, I don't know much about VPLS. It's an application on top of MPLS that does pretty much exactly that; it's been around for six, seven years. I'm sorry, I'm unable to answer this question. Maybe we can talk offline later and you can show me a little bit. Okay. You can use VPLS efficiently in the case of virtual hosting. Okay, yeah, you need some sort of host support if this is the case. All these kinds of projects, VXLAN, NVGRE, all these things: for the people that have been working on them, this has been an ongoing effort for many years. [Inaudible audience discussion about tunneling outside the segment and across the internet.] The difference is that it runs directly over IP. Pardon? It runs directly over IP. Yeah. Okay. What other questions? What happens when the agent manager isn't able to provide the correct VNI number? Oh, okay. So in case you don't have any backup, you probably won't be able to communicate. I mean, the existing communication will continue until the forwarding database entries time out. Well, the agent is basically only notified when something changes inside the network. So if there's no change at all, the agent manager may never need to be reached: if you don't start a VM or don't migrate a VM, the agents have the database entries already distributed to the VXLAN device driver. That's what I mentioned. So the idea, at least in our scenario, is that we have several of them. 
There's a master and a backup, and they are under constant heartbeat monitoring. Once the master fails, the backup takes over. And obviously you can have multiple slaves and one master, or similar. More questions? What do I need to run this, apart from a current Linux kernel? You need nothing else. I have, in fact, run it on a bare Linux kernel plus, for the virtual machines, the libvirt daemon, and the VXLAN device driver. You need an agent. Yeah, where do I get that from? Ah, that's a good question. This is something I can't say much about. I knew this question was coming; let me refer you to item number two, what we call the software-defined networking white paper. This is the statement of direction for where we are going. So the agent is not open source? The agent is not open source, that's right. Five minutes? Five minutes to stop? Okay, I'll wrap up pretty quickly. Is there an IBM product which is based on this agent, or is it only a research project? Well, actually, I can't say much about this; this is all I can say, and it basically answers the question, right? It's not just a research project. Maybe we can talk offline later on. Okay, any more questions? We have to wrap up, but I'm around for the next three days, so if there are more questions, talk to me, or maybe just right now. So thank you very much for listening to my presentation. I hope you enjoyed it.