Hey, good afternoon, everyone. Welcome to our session on using OpenStack with OpenDaylight. I'm up here with my friends and colleagues, Brent, Kyle, and Madhu. We're all going to tag-team this a little bit, and we hope to be a little more efficient with our time than we've been so far. So welcome. I'm going to do the introduction, talk a little bit about what we're trying to accomplish, and then these guys are going to do the heavy lifting. Oh, by the way, if people have questions, we can take them inline as long as it doesn't get too long, because we're already a little bit behind.

OK, so what we are hoping everybody walks away with is an overview of how OpenStack and OpenDaylight can integrate, and these guys have done fantastic work on that. A demo of bringing up a multi-node OpenStack environment, and also a demo of bringing up OpenDaylight with OpenStack Neutron for virtual networks. That's what we're hoping you'll get out of this. I'm going to try to keep this part short, because I think the demo is the real cool part. So, how many people know what OpenDaylight is? OK, that's really good to hear. This is sort of a mom-and-apple-pie kind of slide, but basically we're trying to create a platform, not a point solution: an open source platform for SDN applications, so that people can build a wide range of applications on top of it. We're trying to get broad industry acceptance and build a community of users, vendors, and developers, and have that community be thriving and growing. We've heard a lot about that this morning. It's pretty key; community is key. I talked about the platform aspects of it: there are common abstractions northbound, which we've been working on a lot, and there are implementation details about how those northbound abstractions are connected to, or mediated with, the southbound. Programmable network services, applications, and whatever else we need to make it work. OK, so on to Kyle.

Thanks, Dave. So Brent and I are both going to talk about this. I think I alluded to this earlier; there was an earlier discussion. This is a more detailed diagram of how the OpenDaylight integration with OpenStack looks. The key points on this diagram: if you look on the very north side up there, you can see that there's an ML2 plugin. The OpenDaylight integration on the OpenStack side is a mechanism driver for ML2. It essentially just passes the REST API calls, the Neutron API calls that people are making, down into OpenDaylight. Those API calls then show up in OpenDaylight in the northbound API layer, which is the OpenDaylight neutron service, I think is what it's called up there, inside OpenDaylight. That service essentially provides a mechanism for things southbound of it to register for those calls. And one of the things that registers is the OVSDB neutron application there. With that, I'll pass it over to Brent.

Hey guys. So like Kyle said, we've got an abstraction, basically, for Neutron API calls. There's a handful of southbound protocols that you can use to talk to the data path, so we start getting into the SDN world pretty heavily here. For what we're going to demo today, we're using two main protocols: OpenFlow 1.3 and OVSDB, the Open vSwitch Database protocol. Ben Pfaff developed that at Nicira; it's a core component of Open vSwitch.
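To make the relay pattern Kyle describes concrete, here is a minimal sketch (not the actual networking-odl driver code) of an ML2 mechanism driver that mirrors a committed Neutron object to the OpenDaylight northbound REST API. The URL, credentials, and class shape are illustrative assumptions.

```python
# A minimal sketch, assuming a reachable ODL controller; not the real
# networking-odl code. The driver simply relays Neutron API calls to the
# OpenDaylight neutron northbound service.
import json
import requests

ODL_NEUTRON_URL = "http://odl-controller:8080/controller/nb/v2/neutron"
AUTH = ("admin", "admin")  # ODL's default credentials; adjust for real setups


class OdlMechanismDriverSketch:
    """Relays ML2 postcommit events to the ODL neutron northbound service."""

    def create_network_postcommit(self, context):
        # Neutron has already committed the network to its own database;
        # we just mirror it to OpenDaylight, which notifies registered
        # southbound consumers such as the OVSDB neutron application.
        network = context["network"]
        resp = requests.post(
            f"{ODL_NEUTRON_URL}/networks",
            data=json.dumps({"network": network}),
            headers={"Content-Type": "application/json"},
            auth=AUTH,
        )
        resp.raise_for_status()


if __name__ == "__main__":
    driver = OdlMechanismDriverSketch()
    driver.create_network_postcommit(
        {"network": {"id": "net-1", "name": "demo", "admin_state_up": True}}
    )
```

The real driver covers the full create/update/delete surface, but the relay idea is the same: Neutron stays the source of truth, and things registered inside OpenDaylight react to what lands in the northbound service.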
So right now we have overlays implemented: if you have a handful of hypervisors, a full mesh of tunnels is set up. What's on our roadmap is that we're working pretty hard on services, because if you step back and look at SDN, the value proposition of SDN early on is doing middle-box functionality. So the services tie directly into network function virtualization. Carriers definitely have a need to simplify, both on OPEX and CAPEX. The first service we're working on is security: we're looking to integrate iptables functionality down into Open vSwitch, and we're almost done with that. Any feedback on services of interest from the community, we'd love to hear it. That's how we prioritize; it's totally community-driven. So this was our release, and Madhu is going to do a demo today showing flavors of OpenStack and the integration.

All right guys, I'm planning to do a simple demo. It's not that simple, actually; it's going to cover a few components. Of course, we have OpenStack with our source release, and we're going to use DevStack for the demonstration. OpenDaylight shipped the Hydrogen release this February, so this code is based on Hydrogen. We're going to use OpenFlow 1.3 and 1.0 in combination, because OpenDaylight supports multiple protocols, as we saw in the previous slide. We support 1.3 and 1.0, and in the future it will support the rest of OpenFlow as well. We talk OVSDB to the Open vSwitch, and of course we use OVS; everything is in the Hydrogen release now.

When we were thinking about putting a demo together for this audience, we were really thinking about how to differentiate from a typical OpenStack demonstration. In a typical OpenStack demo, we have agents running on the compute nodes, we form an L2 overlay network, we show things work, like pinging, and we're all happy with that. So when it comes to OpenDaylight integration, we thought we'd do a bit more, to highlight what OpenDaylight can do beyond a standard OpenStack overlay integration. To showcase that, we took a use case that's available today in OpenStack: when you want to launch a VM instance, there's a flavor where you can say, you know what, I want so much CPU, so much compute power, so much hard disk. You pick a flavor for the compute resources and you launch a VM, right? So we have nano, mini, micro, whatever we have. But we don't see the network as a resource in the OpenStack UI, right? So we thought, why not take that as a use case and do flavors for networking? Let's do a flavor for networking and see how OpenDaylight can assist in providing a network flavor for OpenStack. That's the overall objective of the demo.

In order to do that, we wrote a few apps, as you can see here. From an OpenDaylight perspective, OpenStack is an app, even though we're at an OpenStack conference here. That's because OpenDaylight is a generic SDN controller, and anything that talks to OpenDaylight via the northbound REST API is an app to it. Since, as you can see here, the OpenDaylight controller talks to OpenStack Neutron through the northbound API, we categorized it as an app, even though it is one of the most important apps for OpenDaylight, of course. Then we have the Flavors app to provide the network-flavor functionality for OpenStack. And we have the policy-based traffic steering.
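As a quick illustration of the full mesh of tunnels Brent mentioned: with N hypervisors there are N*(N-1)/2 tunnel links, one per pair of tunnel endpoints. A toy sketch, with invented endpoint IPs:

```python
# A toy sketch of the full-mesh overlay: every pair of hypervisor tunnel
# endpoints gets a tunnel. The endpoint IPs here are made up.
from itertools import combinations

hypervisor_teps = ["192.168.56.101", "192.168.56.102", "192.168.56.103"]


def full_mesh(teps):
    """Return every unordered pair of tunnel endpoints."""
    return list(combinations(teps, 2))


for local, remote in full_mesh(hypervisor_teps):
    # In the real integration, OpenDaylight programs these over OVSDB,
    # e.g. a vxlan/gre port on br-tun with option:remote_ip=<remote>.
    print(f"tunnel {local} <-> {remote}")
```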
The way we've achieved this is that we have an overlay network using VXLAN and GRE, and in the demo we have a simulated fabric, right? A spine-and-leaf architecture. We are using OpenDaylight and OpenFlow to do traffic steering on the fabric. So we are trying to tie the overlay tenant traffic to the underlay fabric traffic steering, to showcase that with OpenDaylight we can control the resource usage on the underlay, on the fabric, while we configure the overlay through OpenStack to state that a given tenant wants specific SLA guarantees, right? That's the idea; the traffic steering is going to do that for you. And we have HyperGlance and DLUX, a few apps on the controller which essentially provide network visibility for UI and management purposes. We'll go over all the details in this demo.

This is the topology which I will be demonstrating now. I have a very simple setup; everything is running on my laptop. I cannot trust the Wi-Fi, so everything is running on my laptop, including the underlay fabric. We use Mininet as an underlay simulation tool, and we are going to control that using OpenFlow. And we have OpenStack compute nodes using DevStack; compute node one is going to be the OpenStack controller. And we have a few networks here, as you can see. The purple one is the management network, which we use to connect to the controller. So the OpenStack controller node will talk to the OpenDaylight controller using REST APIs over the management network. Similarly, the OpenDaylight controller will talk to these OVS switches using OVSDB and OpenFlow, again through the management network. And the data network is used for actually forwarding the data; it carries the overlay traffic. The reason I'm showcasing this is that you will start seeing that we traffic-engineer the underlay network on the data path. That's the idea. We will have a silver flavor and a bronze flavor, and we'll show that silver-flavor traffic takes a different path based on cost. We'll configure it saying, hey, for the silver flavor we want the least-cost path, so you'll get the least-cost path, while for the bronze flavor you'll get a shortest path. Now, computing a shortest path is not a big deal for an OpenFlow controller like OpenDaylight, really. But we are talking about overlay traffic, traffic that is riding the overlay, and how to make use of the underlay's resources for it. That's the key benefit, right? A controller like OpenDaylight can deliver that for you, because it can control both the overlay and underlay networks at the same time. That's the overall objective of this demo. Let me switch over to the nerdy backend.

All right, so is it visible to all of you guys? At the back, is it okay, or should I increase the font? It's fine? I see one, two hands. That's all I need. So we have four windows here. We have DevStack running here; these are the DevStack control node and compute node. And by the way, guys, this is all a live demo. Those who do live demos know everything can go bad, right? So I hope things will work fine, because it's really live at this point. So we have the control node and compute node running up here. This is the Mininet console I have, and this is the OpenDaylight backend.
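A toy sketch of the two path policies just described, assuming an invented two-leaf, two-spine topology with made-up link costs: bronze picks the fewest-hop path, silver the least-cost path, and on this topology the two diverge.

```python
# A toy version of the two flavor policies: "bronze" = shortest path by
# hop count, "silver" = least-cost path. Topology and costs are invented.
import heapq

# (node, node): cost. Each leaf uplinks to both spines.
LINKS = {
    ("leaf1", "spine1"): 10, ("leaf1", "spine2"): 1,
    ("leaf2", "spine1"): 10, ("leaf2", "spine2"): 1,
}


def neighbors(node):
    for (a, b), cost in LINKS.items():
        if a == node:
            yield b, cost
        elif b == node:
            yield a, cost


def best_path(src, dst, use_cost):
    """Dijkstra; with use_cost=False every link weighs 1 (pure hop count)."""
    heap, seen = [(0, src, [src])], set()
    while heap:
        dist, node, path = heapq.heappop(heap)
        if node == dst:
            return dist, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in neighbors(node):
            if nxt not in seen:
                step = cost if use_cost else 1
                heapq.heappush(heap, (dist + step, nxt, path + [nxt]))
    return None


print("bronze:", best_path("leaf1", "leaf2", use_cost=False))  # via spine1
print("silver:", best_path("leaf1", "leaf2", use_cost=True))   # via spine2
```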
In the interest of time, I actually created these overlays already, so I'm going to showcase how it's been programmed and everything. If you look at the keystone tenant-list, you'll see the tenants that we are going to demonstrate here. For this purpose, we have created two tenants, a Pepsi tenant and a Coke tenant. The way we're going to showcase this is that the Coke tenant will have a GRE overlay network, while Pepsi will have a VXLAN overlay. All of this is actually orchestrated by the OpenDaylight controller, because, in the previous talk, if you guys were there, Kyle was talking about how, when we use OpenDaylight, we don't use any agents on the compute nodes, right? All we have is OVS, in its typical configuration. So OVS is running on the node, and that is it. On the control node, yes, we are using the DHCP agent and L3 agent, but on the compute nodes it's just pure OVS, and you'll see that since it's pure OVS, there's no other intelligence running on the compute nodes. The OpenDaylight controller takes care of programming all the overlay traffic for you. Since we support VXLAN and GRE, I took GRE as the use case for Coke and VXLAN for Pepsi. Those are the two tenants here.

And we all like a nice UI, and we have a nice UI here as well. OpenDaylight has a decent UI built into the controller itself. You can see here that OpenDaylight has actually learned the entire network, including the overlay and underlay. You can see the spines and leaves; that's the underlay that I created. So if you look at the Mininet that is running here... that's not Mininet; here, this is Mininet. Those who know Mininet will appreciate this console, where we have two leaves and two spines running, interconnected in a spine-leaf architecture. OpenDaylight has learned the fabric, and since it orchestrated the overlay network, you can see that it has also learned the br-tun of the control node and the br-tun of the compute node. Ah, the battery, man. Help me out. Thank you. All right, thanks. I'll be fast.

So as you can see here, the link that you see here is actually the tunnel, the GRE tunnel that we have established. The way to see it is: let's click on this link, and you'll see that, there you go, there's a GRE tunnel right here. It was established by the OVSDB neutron code running on OpenDaylight. If I refresh, I should see one more. There you go; the other one is VXLAN. I just created the VXLAN network. So you'll see one GRE tunnel for one tenant and a VXLAN tunnel for the other tenant. I have it all created right here. As you can see, OpenDaylight has learned these tunnels as well. So we give a unified view to whoever is managing the network, both overlay and underlay, and we can start playing with things, okay? As you can see, we see all the switches that are available there, the overlay and underlay switches and everything.

And of course, we have the Neutron UI here. The Neutron UI is simple, right? It looks at the network as a simple layer 2 bridge. It doesn't really care about, hey, whether we use an overlay tunnel or whatever, right? Because according to Neutron, it's just one VXLAN network with two VMs launched on it. And that's it; it doesn't care about how it is actually implemented on the network. And that's the power of Neutron as well, right?
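For a sense of what agentless programming looks like in practice: roughly, the tunnel ports OpenDaylight creates over OVSDB correspond to ovs-vsctl commands like the ones this sketch issues. Bridge names, port names, and IPs follow the demo topology but are assembled here for illustration, not captured from the demo.

```python
# A hedged sketch: the ovs-vsctl equivalent of the GRE and VXLAN tunnel
# ports ODL's OVSDB application programs on br-tun. Names and IPs are
# illustrative.
import subprocess


def add_tunnel_port(bridge, port, tunnel_type, local_ip, remote_ip):
    """Create a GRE or VXLAN tunnel port, as ODL's OVSDB app would."""
    subprocess.run(
        ["ovs-vsctl", "add-port", bridge, port, "--",
         "set", "interface", port, f"type={tunnel_type}",
         f"options:local_ip={local_ip}", f"options:remote_ip={remote_ip}",
         "options:key=flow"],  # tunnel key (VNI / GRE key) chosen per flow
        check=True,
    )


# Coke rides a GRE tunnel, Pepsi a VXLAN tunnel, between the control
# node (.101) and the compute node (.102):
add_tunnel_port("br-tun", "gre-102", "gre", "192.168.56.101", "192.168.56.102")
add_tunnel_port("br-tun", "vxlan-102", "vxlan", "192.168.56.101", "192.168.56.102")
```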
Keep it simple for the operators, while the plugins do the heavy lifting, right? OpenDaylight is one plugin which does that; similarly, there are the three or four plugins that Kyle was talking about in the previous talk, doing the actual heavy lifting of launching these two VMs. And one more thing: the VMs here, even though they look like they're next to each other, are actually launched on two different compute nodes. One compute node has one VM, and the other compute node has the other VM. They are separated across this fabric. In this case, let me go back to my diagram: one VM is launched on one node here, and the other VM is launched here. They are separated by the whole data center fabric, really, the spine-leaf architecture, and we have a tunnel across it which is forwarding the traffic. All of this is done by OpenDaylight.

Now let's get back to the context again. I promised the Flavors application. Doing what we've shown so far is very common, right? Having a VM launched and creating tunnels across multiple nodes is common to any plugin that we see today. The specialty of OpenDaylight we're showcasing here is that we have an app called the Flavors app, which, again, is built on top of the OpenStack UI framework; as you can see, it looks very similar. The Flavors app provides a way to configure flavors. In this case, we configured two flavors, Silver and Bronze. The Bronze flavor does a shortest-path-first algorithm, while Silver does a least-cost-path algorithm, applied on the underlay. And we apply the policy per tenant in the UI; you're supposed to put the tenant string here. So Silver is the Pepsi tenant, and the other string is the Bronze tenant. If you do the tenant-list, you'll see the same IDs here: as you can see, Coke is 89cc-something and Pepsi is d8-whatever, right? So this is what the Flavors app does. It configures this using the northbound API of the controller, and it configures the underlay.

And now, when we look at the actual flows here: we do a sudo ovs-ofctl dump-flows, and I think this is leaf1. These are the commands used with OVS to dump the OpenFlow rules on the underlay network fabric. You see there are a lot of flows here, but what will interest you are these flows here, right? We configure which port the packet is coming in from, what the match criteria are, and then how the packet should go out. This is the actual traffic steering I was talking about, where we can use, let's say, commodity hardware on the fabric side with OpenFlow support, and we can actually do the traffic engineering using OpenDaylight. Essentially, with the OpenFlow 1.3 support we have much more control; 1.0 gives a little bit of control, but 1.3 gives much more.

Now, for all of us it's very difficult to read these flows and understand them. For that, we have this nice UI. This one is a three-dimensional UI called HyperGlance. It's a commercially available product on top of OpenDaylight and OpenStack, really. HyperGlance talks the northbound API to the controllers, both OpenStack and OpenDaylight, and extracts the data. So here it has extracted the data, and it's showcasing only the underlay. We can configure whether to show the entire topology or just the underlay, but here it shows only the underlay, to make sure we are on track.
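The raw dump-flows output is terse, as Madhu says. This little helper pulls out the three pieces he calls out: ingress port, match criteria, and output action. The sample flow line is a fabricated example in standard ovs-ofctl format, not one captured from the demo.

```python
# A small sketch for reading ovs-ofctl dump-flows output. The sample line
# below is invented, in the standard format.
import re

sample = ("cookie=0x0, duration=42.1s, table=0, n_packets=10, n_bytes=840, "
          "priority=100,in_port=1,dl_type=0x0800,nw_src=192.168.56.101 "
          "actions=output:3")


def explain(flow_line):
    # The match fields are the last whitespace-separated token before
    # "actions="; everything after "actions=" is what to do with the packet.
    match_part = flow_line.split("actions=")[0].split()[-1]
    actions = flow_line.split("actions=")[1]
    in_port = re.search(r"in_port=(\d+)", match_part)
    print("ingress port :", in_port.group(1) if in_port else "(any)")
    print("match fields :", match_part)
    print("actions      :", actions)


explain(sample)
```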
And here, as you can see, we have the bronze and silver policies right here. So we click on bronze, and you see the traffic is actually taking the bronze path, the red highlighted one, saying that, hey, traffic for Coke, once you send it, will actually take this bronze path, where the SLA is just a shortest path; it doesn't care about cost, it's just going to keep forwarding it there. But if we do the same for silver, the silver traffic takes a different path altogether. The UI actually takes the flows from the actual running OpenFlow rules, stitches them together, and forms a nice picture out here. As you can see, it shows traffic from 192.168.56.101, which is not a host, actually; it is the tunnel endpoint on a compute node, to 192.168.56.102. From an underlay standpoint, a fabric standpoint, it doesn't care about the actual VM that is running inside the overlay, because it doesn't have visibility into it. That's why you see the tunnel endpoint as the endpoint here, while the overlay is running on top of that. Network operators who understand this overlay idea know it's a very common problem: the traffic is actually on the overlay network, while we have to traffic-engineer the underlay. This is exactly what we are showing here: that, hey, we can do that. It is just one idea; we are doing traffic steering here, but as you can imagine, we are going to work on QoS and much more interesting things from this overall idea of what we can do with OpenDaylight and OpenStack in tandem. And it's all open source, none of it is closed, so you can try it today, in fact. In my case, I spent at least half an hour putting the demo together just before the talk. I couldn't create these policies live because we don't have much time. So if you have a question, you can stop by the OpenDaylight booth, where we'll be running this demo starting tomorrow, and you can ask deeper questions at the demo. With that, I'm done. Yeah, we can take questions now.

So, I am Prakash and I'm from Huawei. The question for you is: what happens if your underlay flow stitching breaks and your overlay doesn't know about the break? Won't that make things horrible for a carrier, for an SDN operator? And how do you address that?

I'll take it, but I will leave it to the others to add to it. At least in OpenDaylight's case, that's the power of the demonstration we gave here, where we have visibility into underlay and overlay. The question that you asked is very generic about overlays themselves, not about OpenDaylight or OpenStack. But it's a good question.

It is specific to OpenDaylight. How do you address it? You say you automate everything from overlay to underlay. And in the underlay, you've got an OVS switch where you've got flows tied together. As of today, in Havana as far as I know, if you reboot, all of that numbering in the chain is broken, and then you have to re-tie it from top to bottom. And that stitching has to be the same; otherwise I'll have something flowing from VM1, which was intended for VM2, going to VM4. That's the situation I am trying to describe. How do you address this?

Sure. I can go deeper into this one if you want to discuss how we do the actual programming. We can definitely take it to the booth; I will give you a lot of information on that.
But the short answer I want to give is that the controller itself provides all the capabilities; it's an app that is actually doing the traffic stitching. The question that you asked is really about the policy app that we have and how we manage this. To give a short example on the port-number point: the OpenFlow port number does not persist across reboots, and that's why we don't use port numbers. We use the logical entity, which handles that for us. And I will let the others talk here as well. You guys?

Yeah, absolutely. I don't know if it's on. Practically speaking, though, in our Neutron integration we're not dealing with hardware. For the most part today, we just have a routed fabric with overlays on top of it. So, like Madhu said, this is just an app; this is showing some of the added value. Personally, everything we're developing is totally overlay-focused. At some point you have to tie in the underlay in some form or fashion, and this is just an example of that, of the power of having an app, having an abstraction in between the data path and the control. Yes?

I'm sorry, yeah. Hi, just about something you said earlier at the beginning of the talk. You said that you are just now implementing the OVS integration with iptables. So if my understanding is correct, you don't support security groups today?

Yeah, correct. And the main reason is that before, we only had OpenFlow 1.3, and with the OpenFlow spec you don't have any TCP flag awareness. It's really hard to do services without knowing any form or fashion of state. To address that, we're working with the NXM extensions, so we have TCP flags. With that, we can now look at SYNs, we can look at SYN-ACKs, and do policy that's proactively instantiated rather than reactively instantiated.

Okay, so I don't quite follow. You're relying on OpenFlow to implement security groups, is that it? Instead of having it sort of out of band, on the side?

Yeah, so it's being instantiated out of band over the OpenFlow channel, using OpenFlow 1.3 and the Nicira extensions. So instead of installing it in iptables, in a process that's kind of outside the traditional network pipeline, we're just integrating it. We think there's some value there.

Okay, thank you.

Sure, absolutely. And if you have any interest in the project, come work on it with us. Good stuff. Yes?

So, I'm Thomas, I'm the Debian maintainer for OpenStack. I've got two small questions. One, are you working on nftables support? And second, can you describe... the second one is very easy, I believe... can you describe what makes a flavor for a network? What kinds of options and parameters are there?

I'll take the second one; the first one I'll leave to them. The Flavors app is something we created in about a week's time for the demonstration, really to showcase what we can do. The way it is done is that we use the policy engine, the policy app that we implemented: the customer can provide any property he wants, the property can be applied on these underlay links, and the property can be mined via the northbound APIs. It can be anything: dollar cost, bandwidth, latency, or whatever property of a given link, and the flavors can be applied on that property. So that's an example app, right? So now we can start thinking about...
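To illustrate Brent's point about TCP-flag awareness: here is a hedged sketch of the kind of proactive rule the NXM/OpenFlow 1.3 tcp_flags match makes possible, rendered as ovs-ofctl add-flow commands. The bridge, port, priorities, and table numbers are illustrative, not the project's actual pipeline.

```python
# A sketch, assuming an invented br-int port layout: permit only packets
# of established TCP sessions inbound (drop bare SYNs), the way a
# security-group-like rule could be proactively installed with tcp_flags.
def allow_established_only(bridge, port):
    """Build ovs-ofctl commands that drop new inbound TCP connections."""
    rules = [
        # Drop new inbound connections: TCP SYN set, ACK clear.
        f"table=0,priority=200,tcp,in_port={port},"
        f"tcp_flags=+syn-ack,actions=drop",
        # Everything else on the port proceeds through normal forwarding.
        f"table=0,priority=100,in_port={port},actions=normal",
    ]
    return [["ovs-ofctl", "-O", "OpenFlow13", "add-flow", bridge, r]
            for r in rules]


for cmd in allow_established_only("br-int", 4):
    print(" ".join(cmd))
```

The point of doing it this way, per Brent, is that the policy lives in the same pipeline as the rest of the forwarding state rather than in an iptables process off to the side.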
So like, for example, for a flavor, you can define how much bandwidth you have in the LAN or outside?

Yeah, you can do that as well. Okay. Yeah. So my point is that the Flavors app makes use of the existing OpenDaylight infrastructure, and since OpenDaylight has full control of the underlying fabric, you can start manipulating and looking at the network's resources. Any network resource can potentially be part of the Flavors app: it could be latency, it could be, you know, time of day, it could be cost, QoS, whatever.

Yeah, the first one. About nftables support: is it planned, or have you started working on it?

Is that a NetFlow table, or...?

nftables. You know, we have iptables now, and the Linux kernel is bringing in a new API; it went from ipchains to iptables, and the next one is nftables.

Not that I'm aware of. Outside of just nftables, in the Open vSwitch project there's some work by Justin Pettit on using conntrack from the kernel to get state awareness, so we're kind of tracking that. We're trying to keep things native to Linux, so whatever makes sense we're open to. We like the idea of consolidating as much as we can into the data path, so definitely come talk to us about it. We'd love to talk about it. Thanks. We've got probably one more.

Hi, I'm Keshav from HP. I have three questions. You said you need to have a full mesh in the underlying network. When it is a full mesh, how do you guarantee that all the bandwidth and all the interfaces are utilized? That's the first question: whether it is based on SPF or least cost. Second question: ODL is a central entity, and whatever path calculation or graph calculation happens, happens in that central entity, whereas in the real world every node used to do the path calculation and computation. So being a central entity, how do you guarantee that whatever is calculated is what is actually reflected in the real network? In existing networks, the graph used to be calculated with each node as the root, the graph used to be given to CSPF, and based on that RSVP would be applied; that's how it used to work. But now, since there is no distributed routing and it's central routing, how do you make sure that all the paths are equally utilized?

OK, so the question that you asked is very generic about the underlay, right? Nothing to do with the overlay or Neutron. So let's take this question to the booth, if you don't mind, because we are running out of time and we have a few more questions. It being very generic about the underlay, we'll take it at the booth, if you don't mind. Yeah, we also don't have time. Right, sorry, we'll take it at the booth. But you are with HP, so do bring that to us. Yeah, we have two minutes, guys. One more. One more. One more, please.

This is Ajay from EMC. I'm wondering if you can shed some light on the performance aspects of the OpenDaylight controller. I take it that in a large-scale network you might need a distributed cluster, so what kind of performance can it handle?

Sure. For the Hydrogen release we didn't do much performance testing, really, but the OpenDaylight controller has clustering support already, where we can have a cluster of many controllers working together, and it supports active-active as well.
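Madhu mentioned that the flavor properties are mined via the controller's northbound APIs. As a hedged sketch: the Hydrogen-era northbound exposed the learned topology at /controller/nb/v2/topology/{container}; the property annotation below is our own invention layered on top, not the Flavors app's actual data model.

```python
# A sketch, assuming a reachable Hydrogen-era ODL controller: read the
# learned topology edges, then annotate each link with user-defined
# properties (cost, bandwidth, ...) for a flavors-style policy to consume.
import requests

ODL = "http://odl-controller:8080"
AUTH = ("admin", "admin")  # ODL's default credentials


def get_links():
    resp = requests.get(f"{ODL}/controller/nb/v2/topology/default", auth=AUTH)
    resp.raise_for_status()
    return resp.json().get("edgeProperties", [])


link_props = {}  # (head, tail) -> user-defined link properties
for edge in get_links():
    head = edge["edge"]["headNodeConnector"]["id"]
    tail = edge["edge"]["tailNodeConnector"]["id"]
    # Default annotations; a real app would mine or configure these.
    link_props[(head, tail)] = {"cost": 1, "bandwidth_mbps": 1000}

print(f"annotated {len(link_props)} links for a flavors-style policy")
```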
So we can actually split the network into multiple small pieces, and each controller can handle one piece, and they talk to each other using Infinispan and exchange node information, yeah. But we haven't done the performance evaluation yet to figure out how much a single controller can really handle, all right? All right, so last one, last question.

Thank you. Sorry, I think you might have answered part of my question, because his question is very close. On the one hand, I understand the attraction of OpenFlow and software-defined networking. On the other hand, I'm reading a lot of stuff saying there's no real industrial implementation of that yet, and that there will be huge problems when that takes place. So it's kind of a performance question. Has anybody installed this industrially? Do you envision problems when it happens?

I can give one example, and Dave can do the rest. For example, today, if you look at Neutron and OpenStack, it is using OpenFlow on the edges, right? Correct me if I'm wrong. OVS today uses OpenFlow to program the data path and everything, right? That's the simple answer I can give.

Well, that's not quite the whole story, but let me add to it. There are kind of two parts to your question, right? What happens in the data plane and what happens in the control plane. In the data plane, the theory is that you should be able to do something that looks like line-rate forwarding, as long as you have enough flow entries in whatever flow cache you have, right? And actually, that usually turns out to be the case; that's not where performance bottlenecks come from. On the other hand, if you're talking about the control plane, then there are all the issues that Madhu just mentioned with distributed controllers, and there's some physics too, because if you have a reactive model... you know the reactive model?

I'm not that familiar, no.

Okay, so a packet comes to the switch, and the switch doesn't have a flow entry that covers it. So it punts it to the controller. The controller figures out what to do and writes flow entries along the path at once. That's reactive, right? That, of course, has scaling properties that aren't the greatest. So in general, when people have production installations, they do proactive flow installation.

Okay, so that's it. Okay, okay, so that would be very helpful in that case. I guess I'm probably the last question; maybe I'll take the rest to the booth. I had a couple of other...

Yeah, that'd be good. Let's let it go. Thanks. Thanks everyone. Thank you. Thank you.
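To close, a toy contrast of the reactive and proactive models Dave described: reactive installs a rule only when the first packet of a flow misses and gets punted to the controller; proactive pushes rules up front so no packet ever waits on the controller. Everything here is invented for illustration.

```python
# A toy sketch of reactive vs. proactive flow installation. The flow table
# is a plain dict; controller_decides is a stand-in for real path logic.
flow_table = {}


def install(match, action):
    flow_table[match] = action


def controller_decides(match):
    return f"output:{hash(match) % 4 + 1}"  # stand-in for real path logic


def reactive_forward(packet_match):
    if packet_match not in flow_table:       # table miss -> punt (packet_in)
        install(packet_match, controller_decides(packet_match))
    return flow_table[packet_match]          # later packets hit the entry


def proactive_setup(all_expected_matches):
    for m in all_expected_matches:           # pushed before any traffic
        install(m, controller_decides(m))


proactive_setup(["10.0.0.1->10.0.0.2"])
print(reactive_forward("10.0.0.3->10.0.0.4"))  # one controller round trip
print(reactive_forward("10.0.0.1->10.0.0.2"))  # pre-installed, no miss
```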