So here we are going to talk about OpenStack and its integration with Tungsten Fabric. Just a show of hands, how many people have heard about Tungsten Fabric? Quite a few. Good. For those who have not heard about it, there will be a little refresher here. So what are we going to cover today? First of all, let's introduce ourselves. I'm Sukhdev Kapoor, a distinguished engineer at Juniper Networks, and I'm also on the Technical Steering Committee for Tungsten Fabric. And with me is Krzysztof. Why don't you introduce yourself? Thank you, Sukhdev. I'm Krzysztof Kajkowski, director of engineering at CodiLime. CodiLime is a software house building software for network-related projects, and I've been working with Sukhdev for some time on the project we are going to show you today. So, Tungsten Fabric is an SDN controller, one that is very heavily deployed in production. It runs as a monolithic Neutron plugin, and we have been trying to get it to run with ML2 for a while. The two of us, with the help of a few others, took it upon ourselves to bring it to ML2 so that you could run different kinds of workloads. So we're going to talk about that. I'm going to start with a brief architectural overview of Tungsten Fabric and some of the features it offers, and then we'll get into the live demo. Having said that, let's get going. So, Tungsten Fabric architecture. Everything in Tungsten Fabric works with a logical view of network management. If you look at the top right-hand side, you see two virtual networks and different types of workloads. The concept in Tungsten Fabric is that you create ports and you attach anything to those ports: a virtual machine, a Kubernetes pod, a bare metal machine, a top-of-rack switch.
So you create ports, you attach any type of workload to them, you put them in a network, and you define the policies. You simply say these workloads belong to this network and those workloads belong to that network, and then you define the policies: this network is allowed to talk to that network, or I'm going to deploy certain services, so these workloads should talk to those workloads through this chain of services. That's the logical view, and that's how the entire Tungsten Fabric network management works. Now, in reality, what happens is, and I'm going to start from the bottom of the slide and work my way up, those workloads I just talked about can be anywhere. They can be sitting in the same rack next to each other. They could be 20 feet apart. They could be miles apart, a cloud apart. They could be in a different cloud, or on your own premises. It doesn't matter; it's still the same concept. You create a network, and wherever the ports physically live, you simply say these ports or these workloads belong to this network, please apply this policy, and we take care of everything. So in this case, what we're showing on the right-hand side is a vRouter. In Tungsten Fabric, everything on the hypervisor is vRouter-based. It's not OVS; it's layer 3, so you don't require additional routers when you're using Tungsten Fabric. The router is everywhere by default. The vRouter runs in the hypervisor and manages all the policies and all the flow of data: which workload is allowed to talk to which workload. All the policy management happens at a distributed level. And, as the bottom of the slide shows, it could be running in Azure, in Amazon, or in your local data center.
It doesn't matter, right? On the left-hand side of the screen are your bare metal machines. Again, you have a top-of-rack switch and a bunch of bare metal machines, and they get treated exactly the same way. Physically they live in different places, but logically they belong to the network, and based on the policy they will all communicate. Now, the core of the SDN controller is the Tungsten Fabric controller itself. That's a centralized entity which holds the configuration. It's a single pane of glass: one place where you configure all your policies and define your deployment. The control plane is distributed, but logically it is again one central place. That's where all your BGP policies, all your prefixes, all your routes get decided, and from there they get distributed and passed on to the vRouter agents on the different nodes. Analytics works the same way: end-to-end visibility into what is happening anywhere in whatever is being managed. So those are the key components. And when you go north of it, that's the orchestrator. The orchestrator can be OpenStack, it could be Kubernetes, it could be anything. Some top-tier providers have their own GUI which they run on top of it. It doesn't matter; it's all REST-based APIs. You come in and you can configure whatever you want. And if you notice, on the right-hand side of the controller there is a loop with BGP. The controller works in a federation mode, so you can scale it horizontally as much as you want. Some people are running this in very, very large deployments: there are 35-plus million mobile workloads running in production networks, and tens of thousands of bare metal machines being managed in production with this SDN controller. So it's fairly powerful. At the bottom center is a gateway; that's how traffic gets in and out.
And again, we use BGP and a standard IP fabric to get in and out. So that's the basic high-level overview of the architecture. Here is the vRouter architecture. The vRouter sits in every hypervisor. The control node, which I showed at the top of the previous slide, talks to the vRouter agent, which sits locally on every compute node managed by Tungsten Fabric. The agent deals with the configuration, the VRFs, and the policy table. I'll touch briefly on the policy table later and explain how powerful the Tungsten Fabric policy framework is. Essentially you have virtual machines running, and the vRouter runs in the kernel; that's your forwarding plane. Packets which come in are decapsulated based on the VXLAN VNI or MPLS label and get passed on to the appropriate routing instance, and that's how traffic moves in and out of a given hypervisor. That's the basic crux of the forwarding. Having said that, by default the vRouter runs in kernel mode; if you install Tungsten Fabric as is, without any additional configuration, that's what you get. Now, for higher performance, if your applications can support DPDK, the vRouter supports that too, and it will give you much higher throughput. In that case the vRouter runs in user space and takes full advantage of the DPDK libraries. You can also run in a hybrid mode, where the vRouter runs in the kernel and, if you have VNFs which support SR-IOV, you can leverage SR-IOV workloads alongside it; this gives you somewhat better performance. And lastly there is the SmartNIC offload. In that case the vRouter itself runs on the NIC, which completely offloads your CPU.
So now you can fully utilize your CPU, and this is where you get the highest throughput. If you're doing edge deployments or remote compute, where you need low latency and very high throughput, that's the right way to go. So those are the four deployment models. One thing I haven't touched on so far is that Tungsten Fabric not only deals with network virtualization, it deals with full-blown fabric management as well. It does zero-touch provisioning. It will deal with bare metal machines and their life cycle; it uses Ironic underneath for that. But it can also fully provision the switches and routers, the leaf and spine switches, everything. Essentially you physically wire the pieces together, then you come in and provision it, and it will configure everything for you. So it's a single SDN controller which works for your virtual machines, pods, and bare metal machines. It works with Kubernetes, it works with OpenStack, and it deals with your edge deployments. Another thing I haven't covered, because I didn't come here to talk too much about Tungsten Fabric itself, just to give a little primer, is that we do have multi-cloud support. With the same controller you can manage multiple sites, multiple edge sites, or multiple clouds; that's what this slide is showing. The last thing I wanted to touch on, and I'm not going to read this list, is that in addition to basic networking it has a lot of advanced networking features, and they're listed down there. It's fairly rich. I've been contributing to Neutron for the last 7 to 10 years, and a lot of people have always asked, hey, you guys have a lot of features, why don't you upstream them? The truth is that nobody has really spent the cycles to do that.
So if you're using pure OpenStack-based deployments, you will miss out on some of those features. For those features, you typically configure them from the Tungsten Fabric GUI rather than from OpenStack Horizon. But some of those things we are trying to bring into OpenStack, and that's one of the things we're going to talk about today. Here is one feature I really wanted to touch on, because it drives home what kind of capabilities the Tungsten Fabric policy framework can offer. Here is a typical deployment scenario: you are developing a three-tier application, say a financial app. You have a web portal your customers log into, on the back end you have an application tier managing the real application, and you have a database. You develop it, you stage it for deployment, and you run it in production. For each of these you would normally define a policy for how the workloads can talk. So what we said was: instead of having a separate policy for each stage, why not have a single policy which works regardless of the deployment stage? That reduces complexity, simplifies management, and improves scalability. Once you define that policy for an application, for any stage, it doesn't matter where you scale it out or how your cloud grows. You could be going into the Amazon cloud, you could be going to Kubernetes, Mesos, or bare metal; it doesn't matter where you're going. Once you have defined the policy, it will just seamlessly apply anywhere. This is what I was emphasizing in my first slide when I said it doesn't matter: once you have these ports and you put them into networks, it doesn't matter where the workloads are.
Just based upon your policy, it will deal with how the workloads can communicate. That's what I was referring to. So here is a very quick use-case example. Here is the same app in the development stage. The policy at the top says: allow HTTP traffic between the web tier and the app tier. That's your one single policy. You have a development stage where web can talk to the application, and you have the same in production. Based upon the policy, the web can talk to the application no matter where it is, whether in the development stage or in the production stage. Now suppose you say: no, I don't want my developers messing around with the production deployment. Here you simply add a match on the deployment tag, and that's it. What this policy now says is that web can talk to app, but only within a given deployment. We have a bunch of standard tags, and you can also define your own custom tags and build any kind of policy and rules around them. Now let's say it's distributed geographically: you have two sites, the same deployment, with developers running at both sites developing applications. The same policy applies; nothing changes. It says the web can talk to the app anywhere, as long as it's in the same deployment. Over there is one deployment, here is another, and it will just work. And now what if you want to prohibit that? Sorry, I clicked twice. Say you don't want to allow that, for geographical reasons or whatever, and you want only the local deployment to be able to talk. You simply add a match on the site tag, and now it will restrict traffic based upon the site.
Otherwise the policy is exactly the same. So this is how you can control your deployments. Now let me give you a real-life example. Say this is a large, distributed financial application, where the web servers are running behind, let's say, 100 networks, and similarly the app servers, because they're geographically distributed, are running under another 100 networks. Now imagine trying to configure these policies using OpenStack security groups. How many lines of configuration? A gentleman is smiling here; he knows exactly what I'm talking about. You would require pages and pages of configuration, or of policies you would have to write to manage that, and think how error-prone that would be. Here you need just one simple policy. And what happens once the deployment is running, 100 networks on each side, and you add a site and have to add, let's say, one more network, the 101st? With security groups you would have to go back and reconfigure a lot. Here, nothing changes. Reduce it to 50 networks, add another 50 networks: nothing changes. The policy was defined once, and that's it; it's going to work. That's how powerful this policy framework is. And here is another example, where you allow the applications to store to a database, and the policy says store it locally. It doesn't matter where the application is running or which stage it is running in; it should be able to write to the database. Again, this time you simply match the site, and the local site can push data wherever you want. I just wanted to give you a little feel for why this SDN controller is so powerful and why a lot of people like to use it in production deployments. Now, I'm going to quote a gentleman sitting in the room.
If you were watching the keynote speeches yesterday, you saw the one from CERN. Tim Bell mentioned he's using Tungsten Fabric for the cloud they're building. Right now they have a fairly small site, 3,500 servers with 35,000 virtual machines, and they plan on expanding it to 15,000 servers. Afterwards I'm going to chat with them, and you can chat with them too and see how hard or how painful it is to grow from a smaller deployment to a larger one. It scales really well, so it makes a lot of sense. So those were the good features. Now, by default Tungsten Fabric uses a monolithic plugin. This goes back to the time when Neutron started, when most plugins were monolithic. Tungsten Fabric started as a monolithic plugin, and it never bothered to move once ML2-based plugins came along, because it was already so widely used in production; a lot of people were running it in production networks, so nobody bothered to move forward. But here we have made the attempt. If you're not familiar with it, there is a project on OpenDev and GitHub called networking-opencontrail. Tungsten Fabric used to be called OpenContrail, and it's just laziness on Krzysztof's and his team's part that they have not changed the name. I've told them several times, hey, let's change the name, but it's still sitting there. This tells you how lazy some of us are, because things are just working, so why mess with them. So, networking-opencontrail: you can go see it, you can play with it, you can download it, you can deploy it. He's going to show you the whole thing running. With the previous monolithic plugin, anything else you wanted to bring in had to come through us; we would integrate it and give you one fully packaged solution. With ML2, you can run anything and everything you want.
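Concretely, running the Tungsten Fabric driver next to OVS and SR-IOV under ML2 comes down to listing the mechanism drivers together in Neutron's ML2 configuration. The following is only a hedged sketch: the exact driver alias and the type-driver choices depend on the networking-opencontrail version you install, so check the plugin's own documentation before copying it.

```ini
# /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative sketch only;
# driver aliases and options vary by plugin version)
[ml2]
type_drivers = vlan,vxlan
tenant_network_types = vlan,vxlan
# Three mechanism drivers side by side: OVS, SR-IOV, and the
# networking-opencontrail driver for Tungsten Fabric.
mechanism_drivers = openvswitch,sriovnicswitch,opencontrail
```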
So in this demo, which Krzysztof is going to show you, we're going to run the Open vSwitch and SR-IOV drivers from OpenStack, and alongside them networking-opencontrail managing Tungsten Fabric. We're going to show you three types of workloads, SR-IOV, OVS, and vRouter-based, communicating with each other. And then we'll go one step further: a live migration of workloads from OVS to vRouter. Standard Nova live migration normally goes from OVS to OVS, right? Here we can actually go from OVS-based workloads to vRouter-based workloads. So if you have legacy deployments running on OVS and you want to cut over to vRouter, this makes it seamless. Oops, I'm clicking too fast. Okay, I've sort of touched on this already. What bringing in ML2 facilitates is that you can run three workload types simultaneously. Another interesting thing you can do is the second bullet here. When I was describing the architecture, I mentioned that Tungsten Fabric does full fabric management. So, for example, if you don't want to use Tungsten Fabric to manage your virtualization, you can still use it to manage your fabric: run OVS-based workloads and SR-IOV-based workloads, and use Tungsten Fabric for fabric management only. This facilitates that as well. And of course, we're going to show you live migration. Here is the link I was talking about, if you want to pull the code. Whatever he's going to show you, whatever he's going to run, comes straight out of that GitHub repository. We never work with any forks; everything is straight from upstream. Having said that, I'm going to pass it on to Krzysztof. Step up, walk through the demo, and take over. Thank you, Sukhdev.
No problem. So this is a simplified overview of our setup. Sukhdev mentioned the steps we're going to perform. We have three nodes here, managed by OpenStack. The first node is the SR-IOV deployment, then we have OVS, and on the right-hand side is the vRouter. They are connected; we use a vMX as the router, but it could be any router. We just use it because we know it very well. I won't go into the details of the network configuration, but you can use any router you wish. They are all connected to an external network; we use a QFX for that. Besides showing the connectivity between all those VMs, I will show the migration, and the migration, as Sukhdev mentioned, will be from OVS to Tungsten Fabric. All the VMs will be in the same network, for simplicity. And I'm going to create a new VM to migrate. I wouldn't have to do that, but I want to show that I can move an OVS VM to Tungsten Fabric and then see whether the connectivity still works; that's why I'm creating a new VM here. So, the demo. Okay, L2 connectivity is, as I mentioned, done in the vMX. We use a routing instance of type EVPN, which is basically a switch. We connect SR-IOV and OVS using VLANs, and for the vRouter we use VXLAN. That's L2. For L3 it's a little more complicated. We use a routing instance of type virtual-router in the vMX and connect the SR-IOV and OVS VMs to it. From the vRouter side it's more complicated still, because we use MPLS over GRE to connect to the vMX, and then we interconnect everything together in the vMX. This part of the network is not important for the demo. Okay, so let's create a new instance. The first one will be the OVS instance. We select any source, it doesn't matter, and here we select the predefined network I was just speaking about. We call it SRIOV255; it's the 50.50.50.0 network. So the instance is being launched, and it has been given the address 50.50.50.231.
And now we are creating the vRouter instance. We choose the same network, and again the image source doesn't matter. So OpenStack is managing the vRouter, and it assigned the VM the address 50.50.50.76. The ML2 plugin works. Now let's create the SR-IOV VM. Here, instead of a network, we select the previously defined port; it's within the same network as the rest of the VMs. Okay, so all three are in the same network. Let's see how it looks in Tungsten Fabric. We can see it here in the networks; let's check whether the instance is really there. Yes, there it is: 50.50.50.76, the same IP as in OpenStack. So it really worked. Let's do the connectivity test. First we connect to the console of the OVS instance and try pinging the other instances. Let's try pinging the vRouter VM, 50.50.50.76. It works, so OVS-to-vRouter connectivity works. Now the SR-IOV ping; it's the .200 address. It works again. Let's check the internet, which we simulate with the 166.0.1 address. That also works. I won't show you all the other pings between the other VMs, it's too boring, but I assure you they work. So let's do the live migration scenario. As a reminder, we'll create a new VM called migrate on the OVS node and live-migrate it from the UI to Tungsten Fabric, to the vRouter. So again, create the VM. It gets the IP address 50.50.50.183. I can check in Tungsten Fabric that there is only the one previously created VM there, so we expect the newly created VM to appear. Right now we can double-check that it is really on OVS, here in the virtual interface type field. For example, here: it's OVS, right? And it's going to change soon. It's on node three; that's the host name. We click the Live Migrate button and choose the new node, node four, which is the vRouter node, and just click Submit. It's the standard OpenStack mechanism; we didn't change much there. It's now migrating. What we expect is that the IP won't change, but the node here will change to the vRouter one.
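The same migration can also be driven from the command line instead of the Horizon UI. The snippet below only builds the command rather than running it, so it is safe outside a real cloud; the VM name (migrate) and target host (node4) are the hypothetical names from this demo, and the exact flag spelling varies between OpenStack client releases, so treat it as a sketch.

```shell
# Hypothetical names from the demo: VM "migrate", target host "node4"
# (the vRouter compute node). Flag names vary by OpenStack client release.
VM=migrate
TARGET=node4

# Build the live-migration command instead of executing it, so the
# sketch runs without a cloud behind it.
CMD="openstack server migrate --live-migration --host $TARGET $VM"
echo "$CMD"
```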
Yeah, it did. So it's done; it moved. Let's check the interface type again in OpenStack: it's vRouter now. But let's verify in the vRouter itself; maybe OpenStack is wrong. Go to the network. Okay, here's the second instance, 50.50.50.183. Let's do the connectivity test. First let's try to ping OVS: it works. So the newly created VM, which is now on the vRouter, pings the OVS VM and pings the vRouter VM sitting beside it. SR-IOV also works; that's the .200 address. And now the internet test again: it works. So that pretty much concludes our presentation. At this point I'm going to open it up for questions and answers. But you get the gist of the idea: bringing Tungsten Fabric into OpenStack through ML2 opens up this whole world. All those features I was describing earlier, you have them all, and now you can mix and match. You can run multiple SDN controllers; if you want to run ODL alongside Tungsten Fabric, be my guest, try it. In fact, somebody in China was trying to do exactly that. So it's available to you. I showed you the code; it's up there. Play with it. Contribute. We're looking for people to come and contribute and help us make it even more feature-rich. If you can think of any use cases you would like to bring on board, welcome. This is all open source: Tungsten Fabric is open source, ML2 is open source, everything is open source, nothing is proprietary. The only proprietary things we used here were the switches, the Juniper switches, because I work for Juniper; those are free for me to use, otherwise I'd have to spend money. Other than that, everything was open source. So, is there anything we can answer for you at this point? Talking about SmartNICs, which cards do you support for vRouter offload? Currently we're working with multiple vendors. I don't know whether I should be throwing out names or not, but any big NIC vendor you can think of is involved.
As I mentioned earlier, on the Technical Steering Committee for Tungsten Fabric we have defined an API for any NIC vendor to become compatible with. There is a certification: any NIC vendor can come, comply with that API, and become compliant. But if you're talking about commercial licenses, the commercial version of Tungsten Fabric is Contrail. So if you come to Juniper and say you want to deploy in production with a commercial license, we have that. Netronome is one choice, Mellanox is another, and Intel; all of these are tested and verified with Tungsten Fabric. Let me pass it on to him; he had a question too. First of all, I can name a vendor for you later, face to face. You are interested? Really? I can tell you a vendor that works. But that's not my question. The driver name is opencontrail; you didn't mention that it was split into Tungsten Fabric and Contrail by Juniper. So what happened was, when we moved to open source, we wanted to keep the same name, OpenContrail. Juniper has a commercial product called Contrail, and when we moved the project under the Linux Foundation, Linux Foundation legal raised an issue: OpenContrail and Contrail sound very close, so either you give up the copyright to Contrail or we're not going to let you keep OpenContrail. So the community, the Tungsten Fabric community, got together and actually voted. A lot of people proposed different names, and for some odd reason Tungsten Fabric was chosen. That's how it got its name. That's not my question either: how do you see the future of Tungsten Fabric? Who will contribute to it? Sorry? How do you see the future of Tungsten Fabric development? The community is growing. In fact, there is a Tungsten Fabric event: we are formally kicking off Tungsten Fabric in China, in Qingdao, about an hour and a half flight from here. I will be there; it's on Thursday, right? Thursday.
So there are roughly 150 to 200 people planning on attending, and they want to become part of this, so apparently there is huge demand here in China; people keep pinging me about it. We are past our time, so I'll stick around and we can talk offline, to free up the stage for the next presenters. So there is an event; please do show up. If you go to the Linux Foundation main web page and look at the events, it's listed there as the Tungsten Fabric China event. You can go register; it has the full details.