Good morning everyone, welcome to this talk. My name is Kyle Mestery, I work at Cisco, and I'm the current Neutron PTL. And my name is Mark McClain, I work at Yahoo, and I was the PTL the previous two cycles, so Kyle is transitioning in to take over the PTL duties. I'm also a member of the Technical Committee as well.

So this talk is going to be about open source back ends for OpenStack Neutron. That's what we're going to cover here, and I think this is a topic of interest to a lot of different people. What are we going to cover in this talk? We're going to do a quick recap of the Neutron plugins, a little bit of background on plugins versus ML2, or core plugin versus ML2, things like that. We're also going to talk about the built-in Neutron plugin, the solution that it provides, and how it solves things like L3 routing and DHCP. Then we're going to jump over and look at the other open source alternatives that are out there for Neutron right now, and these include OpenDaylight, the Ryu plugin, and OpenContrail as well.

What this talk is not about: it is not a competitive analysis of open source back ends. We're not going to advocate one over the other. We're basically going to show you how to configure all of them, and show you a brief overview and an architecture diagram of how each of them works. We're not going to tell you which of them will be right for your deployment, so if you've come for that, you'll be sorely mistaken. We're also not going to tell you about the holy grail of the infinitely scalable open source back end.

What we are going to talk about, and I alluded to this before, is that we're going to enumerate the different open source alternatives for Neutron. These include both core plugins and ML2 mechanism drivers. We're also going to provide an overview of all of these plugins, so you'll get an idea of how they function architecturally and how they solve the similar problem of implementing the Neutron APIs. We're also going to show that Neutron's agents are not the only alternative to the commercial back ends.

So let's take a quick look at Neutron, just to recap plugins. The main open source plugin that we have in the Neutron project is the Modular Layer 2 plugin, commonly called ML2. Basically it's a common layer for database management and resource allocation, and it supports drivers, both proprietary and open source. One of the reasons we did that, long term, was to combine things: we had different open source plugins, and some people were asking, why do I deploy OVS, or why do I deploy Linux bridge? We combined them into the one plugin so that you can manage both types concurrently. It is a standalone plugin, and like I said it has multiple interfaces. One thing that's also important is that the drivers go through testing. That was one of the big initiatives we worked on as a community over the last six months: making sure that the vendors, and even the open source communities that were contributing, were testing their code. And so it's really cool to see that even open source projects have gotten sponsorships to do continuous integration testing, and the code comes out with higher quality and better stability.
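To make the earlier point about running both open source agent drivers side by side concrete, here is a minimal ml2_conf.ini sketch (values are illustrative only; the type driver sections you need depend on your release and chosen segment types):

    [ml2]
    type_drivers = flat,vlan,gre,vxlan
    tenant_network_types = vxlan
    # Both agent-based open source drivers can be loaded concurrently
    mechanism_drivers = openvswitch,linuxbridge

    [ml2_type_vxlan]
    vni_ranges = 1000:1999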
So for Neutron's built-in solution, it's really a hybrid L2/L3 solution. It's not exactly the cleanest separation in the world if you're a purist about the OSI model, but it works. One of the main components that you'll see, more at the L3 level, is IPAM: we manage address allocation for v4, it does some v6 work as well, and in Juno we're actually expanding the options for IPAM management for v6. It handles east-west routing within the deployment, as well as external gateways and floating IPs. If you're not familiar with OpenStack floating IPs, it's basically a way to map an external address into your private cloud address space, so if you have RFC 1918 space you can map it in real easy.

The solution is implemented with the ML2 plugin using the OVS mechanism driver. The mechanism driver is basically responsible for actually implementing the changes on the data plane, together with the OVS layer 2 agent, and that's the same OVS layer 2 agent that we've been running since really the beginning of the project. But one thing has changed, just to give you a tease for Icehouse: it's actually been improved, it's more stable, it scales a lot better, and it runs a lot faster. And then we also have a combination of layer 3 and DHCP agents. The layer 3 agent handles routing, not dynamic routing, just static routing, so it's a very simple forwarding device, typically for the open source solution based upon network namespaces in Linux. And then there's the DHCP agent; there are lots of different config options, and I think a couple of other talks this afternoon will touch on the config options within Neutron, but the DHCP agent handles DHCP services within the deployment.

And then lastly, in addition to the core plugin, which is say ML2, we have several advanced services plugins: load balancing as a service, VPN as a service, and firewall as a service. These are plugins that you can install alongside ML2, and you can make the decision independently about which ones you install.
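As a rough illustration of how the core plugin and those advanced services plugins are wired up independently, a neutron.conf sketch might look like this (assuming Icehouse-era class paths; the commented line is a placeholder rather than exact plugin paths):

    [DEFAULT]
    # The core plugin handles networks, subnets and ports
    core_plugin = neutron.plugins.ml2.plugin.Ml2Plugin
    # Advanced services load as separate service plugins, chosen independently
    service_plugins = neutron.services.l3_router.l3_router_plugin.L3RouterPlugin
    # LBaaS, FWaaS and VPNaaS plugins would be appended to the list above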
So when we take a look at the built-in solution, if you look up at the top you'll see that the interface of the Neutron server is a REST interface; it's over HTTP. Within the Neutron server we have several plugins, so you'll see the ML2 plugin, and you'll also see the L3 plugin, because the way it's designed, ML2 and layer 3 can actually run as two separate plugins. One piece of future work that some folks are looking at is alternative L3 plugins. Those plugins all connect to the same shared bus. If you notice, the guy who prepared this slide is obviously from the UK, because it's a double-decker bus; that's one little thing Salvatore wanted to leave in since he couldn't be here. So the AMQP bus is how the plugins and the agents talk to each other. The nice thing about that is that, other than knowing the host name where something is running, you don't have to know the specific address or the specific port; the bus takes care of the communication, and it's bidirectional as well. And within ML2 there's a driver, the OVS mechanism driver, that takes care of manipulating the logical data model underneath the hood to ensure that the actual agents have the appropriate fields. So for instance, when you provision a network, it's going to ensure, if you're using VLAN, that the proper VLAN ID is allocated from the pool.

So Neutron's built-in solution is really easy. We're going to talk about DevStack, mainly just because we're developers and that's the way we talk, but for operators the open source solutions are packaged in all the distros, so it's really easy: you don't have to do anything special other than install the appropriate packages. If you're doing it via DevStack, you're just going to enable the services for Neutron, and then you'll notice there are a couple of other services enabled: q-l3, q-agt, q-svc, q-dhcp and q-meta. If you're wondering where the q comes from, back in the day Neutron was called Quantum, so every now and then we actually slip up and still use the old name, but for historical reasons the q prefix still exists in DevStack. Then you call stack.sh and it's going to build the default open source setup. If you use the packages from any of the distros, they will install ML2 by default as well.
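For reference, a minimal localrc along those lines might look roughly like this (a sketch assuming an Icehouse-era DevStack; the q-* service names are the standard ones, everything else is illustrative):

    # Disable nova-network and enable the Neutron services
    disable_service n-net
    enable_service neutron q-svc q-agt q-dhcp q-l3 q-meta

    # ML2 is the default plugin in recent DevStack, but it can be set explicitly
    Q_PLUGIN=ml2

    # then run ./stack.sh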
There are a couple of things to say about the built-in solutions, and the reason we say careful messaging is that these solutions were originally designed as a reference architecture, for the Neutron team to build the server against, to talk about extensions, and to try out new features. We needed an architecture which worked, which demonstrated tenant isolation, and which demonstrated layer 3 routing. But since they were built as a reference, the solutions were never built to be highly scalable. Scalability has been one of the things that the community has brought to them over time, so these solutions keep getting better and better. For those of you who need to do small test lab deployments and want simplicity, the open source solution works perfectly in most cases; I'm sure other folks will disagree. The other thing is that there are some limits with some of the agents, especially with OVS and some of the back ends. So it's important that, while we have a reference architecture, from our standpoint as technical leaders within the project we want a viable ecosystem, and we want a viable ecosystem that's open and has competition, because writing an SDN controller, if you go that route, is really hard and it takes lots of dedicated effort; at the same time, if you're developing Neutron, we can't do both things well.

So that's why we're pretty excited to talk about the open source alternatives. While you can start with our built-ins, there are other free alternatives that do scale and do deploy well. The other thing is that the Neutron community has a really vast mix of people: folks with web development experience, distributed systems people, and network experts, and depending on where you're working within the software stack, you need people with all of those levels of expertise. But for some of the people in the stack, networking is not their focus, which is why the open source projects that have been spawned as open source alternatives are driven by hardcore, dedicated networking folks who want to write these services.

As I touched on a little bit earlier, the community started working on scaling out the reference architecture, and there are actually production deployments of public clouds that work using what's known as the L2 population driver. One of the original problems with the reference architecture is that we wanted to limit the use of flooding and broadcast traffic, because if you know the logical state of the network, you can pre-populate the forwarding tables for both OVS and Linux bridge, whether you're using VXLAN or GRE. One of the main things we did to implement this was a local ARP responder. If you're not familiar with the lower levels of the networking stack, ARP is the Address Resolution Protocol; it's how your computer figures out how to talk to another station on the network.

So let's take a real quick look graphically. Before the ML2 L2 population driver, if we had, say, VM A, which is at the top of the screen, wanting to talk to VM G, which is in the bottom left-hand corner, then in order to discover where VM G was running it would have to send broadcast traffic out to all the hosts. So if you have a very active network with lots of instances, and they're trying to resolve addresses or send traffic through, you're now flooding all the tunnels and creating a lot of unnecessary traffic on your network. With the L2 population driver, and this is where we started working on simple scaling tricks, we now install a proxy ARP on host one, so when VM A tries to resolve the address of VM G and figure out where to forward the traffic, the proxy ARP answers and the traffic goes directly to host four. You'll notice that hosts two and three no longer see that traffic.

So if we want to use L2 population with DevStack, before we install DevStack as before, we can set it up with ML2. In this case the agent has been set up with Linux bridge, and it also works with OVS. Then we want to add a couple of service plugin classes for routing, because routing is interesting, especially if we're doing isolated networks. For the mechanism drivers we install L2 population, and then we also set up our tenant network types. The type drivers are basically generic code that gets recycled, so that everybody who implements a driver doesn't have to go through the same algorithm for allocating a VXLAN or allocating a VLAN or whatever. And then all we do is go into the config file and add at the end that we want VXLAN enabled, what the local IP of the host is for the tunnel endpoints, and that we want to turn on L2 population.
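That description corresponds to roughly the following localrc sketch (assuming an Icehouse-era DevStack; the variable names are the common ones from that period, and the agent options are shown as comments because their exact file location depends on the agent you pick):

    Q_PLUGIN=ml2
    Q_AGENT=linuxbridge          # the OVS agent works here as well
    Q_ML2_PLUGIN_MECHANISM_DRIVERS=linuxbridge,l2population
    Q_ML2_TENANT_NETWORK_TYPE=vxlan
    Q_SERVICE_PLUGIN_CLASSES=neutron.services.l3_router.l3_router_plugin.L3RouterPlugin

    # and in the L2 agent's config file, per the talk:
    #   enable_vxlan = True
    #   local_ip = <this host's tunnel endpoint IP>
    #   l2_population = True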
So this is one of the ways we've started making the reference implementation smarter in what it does, but there's still lots of work to be done, which is what makes the other solutions interesting.

So, as Mark was saying, outside of what we've done with the internal Neutron reference implementation and the innovation we've done there, there are other open source implementations out there as well. The first one we'll talk about is OpenContrail. OpenContrail is designed to solve two primary use cases: the first one is cloud networking and the second one is network function virtualization. Both of those use cases fall within the domain of OpenStack, and specifically Neutron. OpenContrail itself, at a high level, consists of two primary components: the OpenContrail controller and the vRouter. And there's a web link here; you can get much more detailed information about OpenContrail if you go to this site.

So how does OpenContrail integrate with OpenStack Neutron? This slide specifically talks about booting up a VM and how this integration actually happens, because OpenContrail has the vRouter portion as well. So what happens when you're booting up a VM with OpenContrail and you're utilizing the Neutron plugin? Nova is going to instruct the Nova agent to create the VM on the compute host. At that point, network attributes are going to be acquired by the OpenContrail plugin. After the VM has been booted, that information will be passed down to the vRouter, and the vRouter will actually configure the networking for the VM at that point. This diagram gives you a high-level overview of how this looks in reality. In this particular configuration, the OpenContrail deployment is going to use a single node for the OpenStack services, where Nova is running and where Neutron is running with the Contrail plugin. There's going to be a configuration node, a Contrail node, that's running the OpenContrail configuration server. And then you have your compute node, which is where OpenStack Nova is going to run, along with the vRouter agent. So this is just a very simple high-level diagram of what OpenContrail looks like.

Now, if you wanted to actually try OpenContrail with DevStack, there's a website here on the bottom that shows you this at a high level. You can go ahead and pull the fork of the DevStack tree that has the OpenContrail support. Here's how you would enable it: you set the Q plugin to contrail right now, you need to set a physical interface, and then you can essentially run stack.sh, and this would give you a DevStack setup with Neutron and the OpenContrail plugin. The OpenContrail plugin is not currently upstream right now, although I expect it to go upstream in Juno; there was a blueprint for that, so hopefully that should go up there.
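The enablement just described maps to a couple of lines in the Contrail fork's localrc; a sketch is below (the variable names here, particularly the interface one, are assumptions about that fork rather than upstream DevStack settings):

    Q_PLUGIN=contrail            # use the Contrail plugin instead of ml2
    PHYSICAL_INTERFACE=eth0      # assumed name for the physical interface setting
    # then run ./stack.sh from the OpenContrail DevStack fork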
So the next open source plugin that we'll talk about is OpenDaylight. OpenDaylight is an open source software project run by the Linux Foundation, and the main goal of OpenDaylight is to further the adoption of SDN. OpenDaylight likes to talk about the three pillars inside the OpenDaylight organization: code, acceptance and community. OpenDaylight has modeled itself a little bit after what OpenStack has done, in fact, in how they both develop code and review code and things like that. OpenDaylight, much like Contrail, is building an evolvable SDN platform. The infrastructure-as-a-service network orchestration use case is an important one, and OpenDaylight likes to utilize it to show some of the functionality that OpenDaylight has. The first release of OpenDaylight was actually just this year, in February; it was called the Hydrogen release. The OpenDaylight group is working right now on a Helium release, tentatively set for the fall at this point.

What's interesting about OpenDaylight is that if you look at it on the website, and you've seen an architecture diagram, which I think I might have in here, it almost looks like a bag of parts; there are a lot of different projects inside OpenDaylight. So to simplify things, the OpenDaylight project came up with three different editions for the Hydrogen release: the Base, the Virtualization and the Service Provider edition, and each one of those includes the relevant bundles for the functionality at hand. The Base edition was meant as more of a research type platform. The Virtualization edition is what's utilized with OpenStack, so it includes things like the southbound OVSDB and OpenFlow work, as well as OpenDove and VTN and the Neutron API service inside OpenDaylight. And the Service Provider edition includes things that would be relevant to service providers, like the BGP work and things like that.

OpenDaylight itself is actually a part of OpenStack Neutron: there's an ML2 mechanism driver that was upstreamed in Icehouse, so you can download Icehouse and try out OpenDaylight with the released Hydrogen version of OpenDaylight. Effectively, it's a thin REST proxy that passes the API calls from Neutron over to OpenDaylight. On the OpenDaylight side there are multiple bundles that make use of this; OpenDove, VTN and the OVSDB plugin are the three main ones at this point. This solution still requires the Neutron DHCP and L3 agents, so that functionality is not implemented on the OpenDaylight side right now. That is something that is being planned for the Helium release, so at least on the L3 routing side, hopefully that will be solved there.

So this is an architecture diagram of what OpenDaylight looks like. It looks surprisingly similar to Contrail, I think; what you'll find is that a lot of the controller-based open source plugins do look very similar. With this, you can see there's the Neutron node, which is running the Neutron server with the ML2 plugin. The OpenDaylight node is where you have the OpenDaylight server running. The networking node is required in this case because that's where we're doing the DHCP and L3 routing functionality. One interesting thing you'll note here is that there's no agent running on the compute node at all, because OpenDaylight utilizes OpenFlow and OVSDB to talk from OpenDaylight down to the compute host.

So, real quick, if you want to try this with DevStack, this is actually already integrated into DevStack. You can check out DevStack, set up your mechanism driver to be OpenDaylight, and enable the ML2 plugin. You also want to enable the ODL service and ODL compute; that will actually launch OpenDaylight inside the DevStack instance. If you have additional compute nodes, you can just enable those with only ODL compute enabled in the services. There are actually more configuration options for this as well. So this would allow you to check out the OpenDaylight integration with Neutron.
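A localrc sketch for that setup might look like the following (assuming the Icehouse-era OpenDaylight support in DevStack; odl-server and odl-compute are how that support exposed the "ODL service" and "ODL compute" mentioned above, and should be treated as illustrative):

    # Control node: Neutron plus an embedded OpenDaylight instance
    enable_service q-svc q-dhcp q-l3 q-meta
    enable_service odl-server odl-compute
    Q_PLUGIN=ml2
    Q_ML2_PLUGIN_MECHANISM_DRIVERS=opendaylight

    # Additional compute nodes only enable the ODL compute piece:
    #   enable_service odl-compute

Note that, matching the talk, the DHCP and L3 agents are still enabled because OpenDaylight does not implement that functionality yet, while no q-agt runs on the compute nodes.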
The last open source plugin that we'll look at is the Ryu Network Operating System. Ryu is a component-based SDN framework. Ryu is written in Python, whereas OpenDaylight is written in Java. Ryu supports OpenFlow 1.0, 1.2, 1.3 and 1.4, as well as the Nicira NXM extensions, and it's licensed under the Apache 2.0 license. It supports a variety of protocols for managing devices, both physical and virtual, underneath it, including OpenFlow, NETCONF, OF-Config and SNMP. There's a website link here for a lot more information on Ryu.

Ryu is integrated with OpenStack Neutron in two distinct ways. There is an existing standalone plugin, where you run Ryu as a core plugin, and there's also an ML2 mechanism driver, called the OpenFlow agent (ofagent), which is integrated as well and works with ML2. The OF agent utilizes the Ryu library on the host to talk down to the Open vSwitch on the host for programming. This supports standard multi-tenant networks: you can do tenant network segregation utilizing MAC-address-based segregation with OpenFlow rules, and it will do VLANs as well, and GRE tunnels also. The Ryu agent also supports the port binding extension in ML2; basically any agent in ML2 that supports the virtual switch side will support port binding, so it can bind virtual ports there.

So this is a diagram of what Ryu looks like. This particular diagram is with the Ryu core plugin, but again it looks very similar to OpenContrail and OpenDaylight. You can see there's a Ryu node where we're running the Ryu server. The Neutron node has the Ryu plugin. On the compute node there is a Ryu agent that talks locally to the OVS on the host, and then this also makes use of a network node, where another Ryu agent runs to handle L3 and DHCP services for the Ryu plugin.

So this is also something you can try with DevStack. You can check out a recent DevStack, and this is the config you would want to set up: you enable the ofagent and L2 population mechanism drivers, and then you set the agent to be ofagent instead of Linux bridge or Open vSwitch, and then there's some additional config there as well. This would let you set up and run a DevStack instance with Neutron with the Ryu OpenFlow agent support in ML2. There's more information on how to set this up with Icehouse at the link below there as well.
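Again, a rough localrc sketch for what was just described (assuming an Icehouse-era DevStack; the tenant network type is just an example):

    Q_PLUGIN=ml2
    Q_AGENT=ofagent              # the Ryu-based OpenFlow agent instead of linuxbridge or openvswitch
    Q_ML2_PLUGIN_MECHANISM_DRIVERS=ofagent,l2population
    Q_ML2_TENANT_NETWORK_TYPE=gre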
So in summary, what we wanted to show in this talk was that there is a large number of open source options for Neutron plugins. There's the existing functionality that's implemented inside of Neutron, the built-in solution, but there are also three other open source plugins that implement the Neutron APIs themselves. So there's a lot of innovation on the open source side going on around Neutron, and certainly there's no one-size-fits-all: depending upon your deployment needs, what you're comfortable with, and what type of application scalability you want, you can utilize any of these different plugins to solve your needs.

So at this point we'd like to open it up to questions. I know in the previous panel there were questions around scalability and things like that, so if anyone has any specific questions around scalability of either the built-in solution or any of these other open source solutions that we've talked about... I think the microphone's around, would you mind going to... yeah, there are microphones right over here, so... okay, yeah, why don't we go over here.

Hi, I'm Joris, I work for Cisco, and I'm just wondering: with all these SDN controllers there's yet another layer of abstraction, right? They all have their northbound and southbound APIs, and they might use OpenFlow, so basically your networking is going to go through Neutron and then to OpenDaylight and then to OpenFlow. So I'm just wondering about scalability, really. How is that going to scale, and how is that going to perform?

I think it depends on what you're looking at. If we look at something like OpenDaylight, and probably most of these controllers end up being REST proxy pass-throughs, they're just going to pass the Neutron APIs over to their controller. In the case of OpenDaylight, it's a Java application, and scaling Java applications is something that the world knows how to do; you can scale those sorts of things. So in some sense you're moving the scalability problem from the built-in Neutron solution over to whatever controller-based solution you're utilizing, but you still have to solve the problem, I guess.

My name is Ishan, and since I was not in the previous session, maybe this question is duplicated. It is a real problem we are facing: we are supporting some eScience users, very heavy traffic users. We have the Neutron server, and our Neutron servers actually choke up the connections in and out, because sometimes they download terabytes of data. With nova-network, the traffic goes out from the compute nodes, and because you have lots of compute nodes, the network traffic actually gets distributed. But in a sense our Neutron server is like a single point of choking for the whole network, and I could not figure out the solution for that. Maybe this load balancer service solves it, or is there any way to have the nova-network style, totally distributed, because we need terabytes of data in and out all the time.

So you're basically running into bandwidth problems, saturating links. In the reference implementation, like I said, it's a reference, and so if you have a bunch of instances that generate a lot of traffic, it is really easy to saturate it. One of the things we're doing from a community perspective is working on distributed virtual routing, which will allow you to distribute the north-south traffic out over different agents. That's going to come most likely in Juno, and it's going to solve a lot of the use cases for nova-network multi-host. But in the interim, one of the things to take a look at is that even these open source implementations allow you different scale-outs and different ways to configure them, to basically route the traffic north-south. Also some of the proprietary options, and I'm sure you'll see vendors around, will have a different scale-out story as well that will allow you to get higher bandwidth to your instances without running into limitations.
And this Juno release is... So the current release was Icehouse, which is what we released in April, and the Juno release will come out sometime in October. Okay, good, thank you.

Hi, my name is Girish. If the Neutron APIs are just passed through into OpenDaylight, and OpenDaylight does catch up eventually and does everything that Neutron does today, all the agents, L3, DHCP, everything, then what is the role of Neutron in the future? Is Neutron just passing APIs to OpenDaylight?

Well, the thing to think about is that the Neutron API is really an agnostic API, so that the tenant, the cloud user, does not know what the actual implementation is under the hood. For instance, if you're using the Nova API, you don't necessarily know whether it's backed by KVM, or backed by Xen, or backed by VMware; you don't know. And so that's the role of the Neutron API: you have an agnostic API that presents a single unified experience for users.

So it would be just a shim layer then, right?

But that's basically the way Cinder works and the way Nova works; it's a very consistent design pattern with the rest of OpenStack. Thanks.

Alright, I have not evaluated SDN controllers at all, so I don't know the pros and cons and so forth. I was just wondering why Floodlight wasn't mentioned. From what little I know, it seems to fall into the same category: it's an open source SDN controller with a Neutron plugin. So what was different about it that you didn't mention it?

To this point, we forgot to put that in. Yeah, that was an oversight. That is an oversight. So the one kind of artifact of Floodlight is that Floodlight support inside of Neutron is there; even the Big Switch REST proxy actually works very well with the Floodlight controller. So our apologies, we did not mean to leave that out, but a lot of times we lump the Floodlight work in with the Big Switch work, since that team does a lot of the similar maintenance for it. Sorry.

Hi. So you mentioned three SDN controllers, which are Ryu, OpenContrail and OpenDaylight. As you mentioned, there is no one-size-fits-all solution for an OpenStack plugin, so at the end of the day users are going to pick one SDN controller to use, either Ryu or OpenContrail, et cetera. So can you comment on the advantages and disadvantages of these three, I guess the most prominent open source SDN controllers? Which one do you choose?
Well, first of all, I think it's worth noting that if you use ML2, currently you could actually try OpenDaylight and the Ryu OpenFlow agent driver at the same time. You could enable both of them, and you could use OpenDaylight for some things and Ryu for the others. Basically, you would run the OpenFlow agent on the hosts where you wanted to use Ryu, and not run it on the other hosts, where you would use OpenDaylight. So it's possible to do that, and if OpenContrail does an ML2 driver, you could do that as well. As far as use cases go, we're not really up here to say one is better than the other; they are all solving things in a different way. It's like Mark was saying: the Neutron API is abstracted, the tenants won't know what's implementing it underneath, so these different plugins are all implementing the same API but solving things in a slightly different way. The most obvious example is that the Ryu plugin still uses an agent on the host, whereas OpenDaylight doesn't use an agent on the host.

To add a little bit to that: a lot of your choice in technology also depends on what your existing infrastructure is in your data center. Sometimes you may have certain hardware that works better with one of the controllers versus the others, or you may already have existing contracts with integrators or distributors or service engineers who have a preference for using one or the other. So it's really hard for us to say, if your deployment is this size, use this one. And it's no different than the choice we have now when folks ask what their hypervisor should be in terms of Nova; again, it's really an operator choice. The biggest benefit to the ecosystem is that we do have a choice. It's not like we're all being force-fed one SDN controller that we all have to like no matter what. So that's the good thing. They're also open communities, and each community functions a little bit differently, so sometimes one of your choices is: do you like participating in that community? Do you like the documentation of that community? And it gives a real opportunity for merit to win out as well. Yeah, definitely.

I had a question: the Ryu agent is called the OF agent, the OpenFlow agent, but Ryu is not OpenFlow, it's just one of the OpenFlow implementations. So the naming suggests to me that you can use the OF agent with any OpenFlow controller, but that's not the case. So is that the plan going forward, or is it just a misnomer to call it the OF agent?

So that's actually an interesting point. The Ryu team, I believe, may have plans to extend it to control more than just the Open vSwitch on the host, to also control OpenFlow-enabled switches or other devices that use OpenFlow. So I think that's why they called it the OpenFlow agent, because I think they do have plans to extend its reach beyond just the hypervisor.

I had one more question. Sure. In the previous session you talked about collaboration between different OpenStack projects, and now we are talking about different open source components that we are using. So what about collaboration between Neutron and these open source components? Like I said, today Neutron has certain use case requirements which OpenDaylight needs to fill. So how are we going about it? Do we have some insight into their decisions, like their roadmaps, or do we look at what features they have and then go from there? So what's the plan on collaboration between OpenDaylight and Neutron?
So I think the best answer to that is that all of these open source plugins have representatives who work across both boundaries, and the healthiest way for this ecosystem to exist is to have people who are pushing patches on both sides of the fence. That's what you're seeing already with a lot of these: you'll see people who work on Ryu and also work on the Ryu OpenFlow agent, and the same thing with OpenDaylight and Contrail as well. I think that's the best way to drive that sort of collaboration and make sure that people are on the same page, to have that cross-collaboration. And it's frankly the same way that it works with the vendor plugins: the best way it works is when you have vendor participation upstream, not just in the plugin, and even on the proprietary side.

And to add a little bit: as members of the core team, that's one of the things we do, we help facilitate that information sharing. Also, like Kyle mentioned with the vendor plugins, you will actually see information shared from vendors even to the open source side, because one of the crazy things about the OpenStack community is that a lot of people interact with a lot of people, and sometimes they change employers as well, so we all have relationships that keep that dialogue open.

Yes, hi. I have a follow-up question to the comment that was made here earlier about the encapsulation of Neutron to SDN controllers. Can one assume that it doesn't have to be aware of any SDN-specific capabilities, and Neutron would encapsulate all the interactions? Yes. So if there are special capabilities of one particular controller, how would you resolve that?

So I think it's like we were saying: the Neutron API to some extent normalizes that, because it says here's a set of APIs you have to implement, create network for instance. If you look at create network as a very basic example, you want to create a network such that you can segregate multiple tenants, so each tenant maybe gets its own network. The controller could implement that any way it wants: it could implement it using a VLAN or a tunnel network type, or even, like the OF agent, utilizing just MAC-address-based segregation using OpenFlow.

How about more advanced stuff, such as if you want to do redirection to firewalls and IPS and whatnot, network introspection?

Yeah, and again, that would come in if the controller wanted to implement perhaps some of the extension APIs we have around VPN or firewall or load balancer or something like that; the controller could do that utilizing service redirection itself or something like that. Okay, thank you.

Hi. So in this model where we have Neutron as a pass-through, one of the things that was available in Grizzly but with ML2 kind of got restricted is the ability to add extensions that Neutron currently doesn't support. Say, for example, you want to pass some specific parameters to create router that are not there in the open source version: to be able to do that, you just pass an additional bunch of parameters that the plugin then knows how to parse, whereas the driver doesn't know that. Now, with this model where you have ODL or some other controller and you want to expose some other functionality up north, are you going to have a model to do that? Is that even under consideration?

Yes, there's a design summit session where we're talking about how we can expose extensions into ML2 drivers so that we can expose different capabilities. So we've got time for one last question.
Yeah, we are moving the core functionality to the controllers, so what is the roadmap for Neutron? Why do we need DVR and other such features in that case?

Well, again, there are always going to be use cases for deployments where people want a very simple install. They want to be able to deploy Neutron and run in a limited cluster size, and DVR gives you that advantage: maybe you don't have a whole lot of instances but you need a whole lot of throughput, and you can deploy DVR in one or two racks and get a deployment, versus having to configure another moving part.

Yes, all righty. Thank you all. Yeah, thank you everyone.
making sure that the vendors and even the open source communities that we're contributing Were testing their code and so it's really cool to see even open source projects Have gotten sponsorships to do continuous integration testing And so the code comes out with higher quality and better stability So for Neutron's built-in solution it's really a hybrid L2 L3 It's not exactly the most clean separation in the world If you're a purist for the OSI model but it works And one of the main components that you'll see more at the L3 level is the IPAM We manage address allocation for both V4 and it does some V6 work In June we'll actually be expanding the options for IPAM management for V6 It handles east-west routing within the deployment as well as external gateways and floating IPs If you're not familiar with open stack floating IPs It's basically a way to give you to map an external address into your private cloud address space So if you have RFC 1918 space you can map it in real easy The solution is implemented with ML2 plugin uses OVS mechanism driver The mechanism driver is just basically responsible for actually implementing the changes onto the data plane As well as the OVS layer 2 agent and that's the same OVS layer 2 agent that we've been running since really the beginning of the project But the one thing that's changed just to kind of give you a tease for Icehouse It's actually been improved and actually is more stable and also scales a lot better and runs a lot faster And then we also have combination of layer 3 and DHCP agents Layer 3 is typically it's going to handle routing, not dynamic routing, just static routing But it's a simple very simple forwarding device typically for the open source solution based upon network name spaces and Linux And then the DHCP agent for those of you there's lots of different config options I think a couple others will touch on it later this afternoon about the config options within Neutron But the DHCP agent handles DHCP services within the deployment And then lastly in addition to the core plugin which is say ML2 We have several advanced services plugins, load balancing as a service, VPN as a service, firewall as a service And these are plugins that you can install alongside of ML2 and that you can actually make the decisions independently which ones you install So when we take a look at the built-in solution if you look up at the top you'll see that the interface of the Neutron server is a REST interface It's via HDP within a Neutron server we have several plugins and so you'll see the ML2 plugin You'll also see the L3 plugin because the way it's designed ML2 and L3 can actually be run as two separate plugins And so one of the future work that some folks are working on is alternative L3 plugins Those plugins all connect to the same shared bus If you notice the guy who prepared this slide is obviously from the UK because it's double decker So Salvatore wanted to leave this one little thing since he couldn't be here So the AMQP bus which is how the plugins and the agents talk to each other The nice thing about that is that you don't, other than knowing the host where something is running The host name you don't actually have to know the specific address or the specific port the bus takes care of the communications It's both bi-directional as well And within ML2 the OVS mechanism driver takes care of manipulating the logical data model underneath the hood To ensure that the actual agents have the appropriate fields So for instance when you provision a 
network it's going to ensure if you're using VLAN that the proper VLAN IDs are allocated from the pool So the Neutron's built in solution is really easy We're going to talk about DevStack and it's mainly just because we're developers that's the way we talk But for those operators the open source solutions are packaged in all the distros So it's really easy you don't have to do anything special other than install the appropriate packages If you're doing it via DevStack you're just going to enable services for Neutron And then you'll notice there's a couple other services enabled QL3, QAgent, QService, QDHCP and QMETA If you're wondering where the Q comes from Back in the day Neutron was called Quantum so every now and then we actually slip up and still use the old name But for historical reasons in DevStack the Q prefix still exists And then you would call stack.sh and it's going to build the default open source If you use the packages from any of the distros they will by default install ML2 as well So there's a couple the built-in solutions and the reason why we say careful messaging is these solutions were originally designed as a reference architecture And for the Neutron team to build the server to talk about extensions to try out new features We need an architecture which worked, which demonstrated tenant isolation, which demonstrated layer 3 routing But the solutions were always since they were built as a reference were never built to be highly scalable The scalability has been one of the things that the community has brought to over time So these solutions keep getting bigger and bigger so for those of you who need to do small test lab deployments And want simplicity the open source solution works perfectly in most cases I'm sure other folks will disagree but the other thing is that there are some limits with some of the agents Especially with the OSS with some of the back ends so it's important that while we have a reference architecture From our standpoint as being technical leaders within the project is we want a viable ecosystem And we want a viable ecosystem that's open and has competition because writing an SDN controller if you go that route is really hard And it takes lots of effort, it takes lots of dedicated effort, the same time if you're developing Neutron we can't do both things well So that's why we're pretty excited to talk about the open source alternatives And so while you can start with our built-ins there are other free alternatives that do scale and do deploy well And so the other thing is this Neutron community has a really vast mix of people And so you have folks who have web development experience distributed systems and network experts And depending where you're working within the software stack you need people of all those levels of expertise But some of the people in the stack their focus is not networking which is why when you have the open source projects that have been spawned for the open source alternatives These are folks who are hardcore like dedicated networking folks who want to write these services And so when I touched on a little bit earlier where the community started working on scaling out the reference architecture There's actually production and deployment of public clouds that work using what's known as the L2 population, the L2 population driver And one of the original problems with the reference architecture is we wanted to limit the use of flooding and broadcast traffic Especially when because if you know the logical state of the network 
you can actually precede and pre-populate the tables For both OBS and Linux Bridge forwarding whether you're using both VXLAN or GRE And so one of the main things we did to implement this is we implemented a local ARP responder If you're not familiar with the lower levels of the networking stack ARP is basically a Dress Resolution Protocol It's how your computer figures out what how to talk to another station on the network So let's take a real quick look graphically So before the ML2 population driver if we were to have say VMA which is at the top of the screen Wanted to talk to VMG which is in the bottom left-hand corner In order to discover where VMG was running it would have to send out It would have to try it would have to send broadcast traffic out to all the host So one of the things you do is you have a very active network with lots of instances And they're trying to resolve addresses or send traffic through You're now flooding all the tunnels and you're creating a lot of unnecessary traffic on your network So with ML2 population driver and this is where we started working on simple scaling tricks is now For the host one we install proxy ARP So when VMA tries to resolve the address of and how where to forward traffic to for VMG The proxy ARP answers and routes the traffic directly to host four If you now notice that host that that host one and the other host two and three are no longer seeing traffic So if we want to use L2 population with DevStack before we install DevStack as before We can set it up with ML2 we can use in this case the agent's been set up with Linux bridge It also works with OBS and then we can we want to add a couple service plug-in classes for routing Because routing is interesting if we're doing especially isolated networks And then the mechanism driver we install L2 population and then we also set up our tenant network types The types are basically generic code that gets recycled so that everybody who implements a driver Doesn't have to go through the same algorithm for allocating a VXLAN or allocating a VLAN or somewhere And then all we do is we go in the config file and we add at the end We want to add VXLAN enable VXLAN true what the local IP of the host is for the tunnel endpoints and turn on L2 population So this is one of the ways where we've started making the reference implementation smarter than what we do But there's still lots of work to be done which is what makes the other sides interesting So so yeah so as Mark was saying outside of what we've done with the internal neutron reference implementations And the innovation we've done there there are other open source implementations out there as well So the first one we'll talk about is open-contrail as well So open-contrail is designed to solve two primary use cases right The first one is cloud networking and the second one is network function virtualization Now both of those use cases fall within the domain of open stack and specifically neutron as well Open-contrail itself at a high level consists of two primary components the open-contrail controller and the V-router as well And certainly like there's a web link here you can get much more detailed information about open-contrail if you go to this site as well So how does open-contrail integrate with open stack neutron? 
So this specifically talks about booting up a VM here and how this integration actually happens because open-contrail has the V-router portion as well So what happens when you're booting up a VM with open-contrail and you're utilizing the neutron plug-in information as well Nova is going to instruct the Nova agent to create the VM on the compute host At that point network attributes are going to be acquired by the open-contrail plug-in After the VM has been booted that information will be passed down to the V-router And the V-router will actually configure the networking for the VM at that point This diagram actually gives you kind of a high level overview of how this looks in reality here as well In this particular configuration the open-contrail deployment is going to use a single node for the open stack services Where Nova is running, where neutron is running with the contrail plug-in as well There's going to be a configuration node, a contrail node that's running the open-contrail configuration server And then you have your compute node which is where open stack Nova is going to run and then the V-router agent as well So this is just a very simple high level diagram of what open-contrail looks like Now if you wanted to actually try open-contrail with dev stack as well There's actually a website here on the bottom that shows you this at a high level You can actually go ahead and pull the dev stack tree, the fork of dev stack that has the open-contrail support Here's how you would actually enable it again So again you would set up the Q plug-in as contrail right now You need to set a physical interface And then you can essentially run stack.sh and this would give you a neutron setup, a dev stack setup With neutron with the open-contrail plug-in as well So the open-contrail plug-in is not currently upstream right now Although it should, I expected to go upstream in Juno as well There was a blueprint for that so hopefully that should go up there So the next open-source plug-in that we'll talk about is Open Daylight as well So Open Daylight is actually a software, an open-source software project That's actually run by the Linux Foundation And the main goal of Open Daylight is to further the adoption of SDN So Open Daylight likes to talk about how there's three pillars inside the Open Daylight organization Code acceptance and community Open Daylight has modeled itself a little bit after what OpenStack has done in fact with how they both develop code And review code and things like that as well So Open Daylight, much like contrail is building an evolvable SDN platform as well The infrastructure as a service network or orchestration is an important use case And Open Daylight likes to utilize that use case to show some of the functionality that Open Daylight has as well The first release of Open Daylight was actually just this year in February It was called the hydrogen release The Open Daylight group is actually working right now on a helium release tentatively set for the fall at this point What's interesting about Open Daylight is if you look at it on the website And you've seen an architecture diagram which I think I might have in here It almost looks like a bag of parts There's a lot of different projects inside Open Daylight So to simplify things, the Open Daylight project came up with three different releases for the hydrogen release They came up with the base, the virtualization and the service provider edition And each one of those includes relevant bundles for the functionality that's 
at hand The base edition was meant as more of a research type platform The virtualization edition is what's utilized with OpenStack So it includes things like the southbound OVSDB and open flow work As well as Open Dove and VTN and the Neutron API service as well inside Open Daylight And the service provider edition includes things that would be relevant on the service provider things Like the BGP work and things like that So Open Daylight itself is actually a part of OpenStack Neutron There's an ML2 mechanism driver that was upstreamed in Ice House So you can actually download Ice House and try out Open Daylight with the released hydrogen version of Open Daylight as well Effectively, it's a thin rest proxy that passes the API calls from Neutron over to Open Daylight as well On the Open Daylight side, there's actually multiple bundles over there that make use of this Open Dove, VTN and the OVSDB plugin are the three main ones that make use of this at this point This solution at this point still requires the Neutron DHCP and L3 agents So that functionality is not implemented on the Open Daylight side now That is something that is being planned for the helium release So at least on the L3 routing side, hopefully that will be solved there So this is an architecture diagram of what Open Daylight looks It looks surprisingly similar to Contrail I think What you'll find is a lot of the controller based open source plugins do look very similar So with this, you can see there's the Neutron node which is running the Neutron server with the ML2 plugin The Open Daylight node is where you have all of the Open Daylight server running The networking node is required in this case because that's where we're doing DHCP and L3 routing functionality as well One interesting thing you'll note here is that the Compute node doesn't have, there's no agent running on the Compute node at all And that's because Open Daylight utilizes OpenFlow and OVSDB to talk from Open Daylight down to the Compute host as well So again, this is a real quick, if you want to try this with DevStack, this is actually already integrated into DevStack as well So you can check out DevStack, set up your mechanism driver to be Open Daylight, enable the ML2 plugin You also will want to enable the ODL service and ODL Compute That will actually launch Open Daylight inside the DevStack instance as well If you have additional Compute nodes, you can just enable those with just ODL Compute enabled in the services as well There's actually more configuration options for this as well So this would allow you to check out Open Daylight integration with Neutron as well The last open source plugin that we'll look at is the Ryu Network Operating System as well So Ryu is a component-based SDN framework Ryu is written in Python, whereas Open Daylight is written in Java Ryu supports OpenFlow 1.0, 1.2, 1.3, 1.4, as well as the Nasira NXM extensions as well And it's licensed under the Apache 2.0 license It supports a variety of protocols for managing devices, both physical and virtual underneath it Including OpenFlow, NetConf, OF Config, SNMP There's a website link for a lot more information on Ryu as well So Ryu is actually integrated with OpenStack Neutron in two distinct ways There is an existing standalone plugin, which you can run the Ryu as a core plugin And there's also an ML2 mechanism driver, which is called the OpenFlow agent Which is integrated as well and will actually work with ML2 And the OF agent utilizes the Ryu library on the host to talk down 
The last open source plugin that we'll look at is the Ryu network operating system. Ryu is a component-based SDN framework. Ryu is written in Python, whereas OpenDaylight is written in Java. Ryu supports OpenFlow 1.0, 1.2, 1.3, and 1.4, as well as the Nicira NXM extensions, and it's licensed under the Apache 2.0 license. It supports a variety of protocols for managing devices, both physical and virtual, underneath it, including OpenFlow, NETCONF, OF-Config, and SNMP. There's a website link with a lot more information on Ryu as well.

So Ryu is actually integrated with OpenStack Neutron in two distinct ways. There is an existing standalone plugin, where you can run Ryu as a core plugin, and there's also an ML2 mechanism driver, which is called the OpenFlow agent, which is integrated and will actually work with ML2. The OF agent utilizes the Ryu library on the host to talk down to the Open vSwitch on the host for programming. So this supports standard multi-tenant networks: you can actually do tenant network segregation utilizing MAC address-based segregation with OpenFlow rules, and it will do VLAN as well as GRE tunnels also. The Ryu agent also supports the port binding extension in ML2; basically any agent in ML2 that's going to support the virtual switch side will support port binding, so it can bind virtual ports there.

So this is a diagram of what Ryu looks like. Again, it's very similar; this particular diagram is with the Ryu core plugin, but again it looks very similar to OpenContrail and OpenDaylight. You can see there's a Ryu node where we're running the Ryu server. The Neutron node has the Ryu plugin. On the compute node there is a Ryu agent that will talk locally to the OVS on the host. And then this also makes use of a network node, where there's another Ryu agent running to handle L3 and DHCP services for the Ryu plugin.

So this is also something you can try with DevStack. You could check out a recent DevStack, and this is the config you would want to set up: you would want to enable OF agent and L2 population, and then you would set the agent to be OF agent instead of Linux bridge or Open vSwitch, and then there's some additional config there as well. This would let you set up and run a DevStack instance with Neutron, with the Ryu OpenFlow agent support in ML2. There's more information on how to set this up with Icehouse at the link below there as well.
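Again purely as an illustration of the config described above, a DevStack localrc for the OF agent setup might look roughly like this; the values (ofagent and l2population as ML2 mechanism drivers, Q_AGENT=ofagent) reflect the Icehouse-era naming and should be verified against the link mentioned in the talk:

```
# localrc sketch for Neutron with the Ryu-based OpenFlow agent (illustrative)
disable_service n-net
enable_service q-svc q-dhcp q-l3 q-agt neutron
Q_PLUGIN=ml2
Q_ML2_PLUGIN_MECHANISM_DRIVERS=ofagent,l2population  # enable OF agent and L2 population
Q_AGENT=ofagent                                      # use the OF agent instead of linuxbridge/openvswitch
```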
So in summary, what we wanted to show in this talk was that there is a large number of open source options for Neutron plugins. There's the existing functionality that's implemented inside of Neutron, the built-in solution, but there are also three other open source plugins that implement the Neutron APIs themselves. So there's a lot of innovation on the open source side going on around Neutron. And certainly there's no one size fits all, so depending upon your deployment needs, what you're comfortable with, and what type of application scalability you want, you can utilize any of these different plugins to solve your needs.

So at this point we'd like to open it up to questions. I know in the previous panel there were questions around scalability and things like that, so if anyone has any specific questions around scalability of either the built-in solution or any of these other open source solutions that we've talked about... I think the microphones are around, would you mind going to... Yeah, there's microphones right over here. Okay, yeah, why don't we go over here?

Hi, I'm Yoris, I work for Cisco, and I'm just wondering: with all these SDN controllers there's yet another layer of abstraction, right? They both have their northbound and southbound APIs and they might use OpenFlow, so basically your networking is going to go through Neutron and then to OpenDaylight and then to OpenFlow. So I'm just wondering about scalability, really. How is that going to scale and how is that going to perform?

I think it depends on what you're looking at. If we look at something like OpenDaylight, I mean, probably most of these controllers end up being REST proxy pass-throughs, so they're just going to pass the Neutron APIs over to their controller. And I think in the case of OpenDaylight it's a Java application, so scaling Java applications is something that the world knows how to do; you can scale those sorts of things. So in some sense you're moving the scalability problem from the built-in Neutron solution over to whatever controller-based solution you're utilizing. But you still have to solve the problem, I guess.

Hi, my name is Yishan, and since I was not in the previous session, maybe this question is duplicated. It is a real problem we are facing: we are supporting some eScience users, very heavy traffic users. We have the Neutron server, and our Neutron servers actually choke up the connections in and out, because sometimes they download terabytes of data. In a normal network the traffic goes out from the compute nodes, and because you have lots of compute nodes the network traffic is actually distributed. But in our case the Neutron server is like a single point choking the whole network, and I could not figure out the solution for that. Maybe the load balancer service, maybe that solves it? Or is there any way to have the normal network style, totally distributed? Because we need at least a terabyte of data in and out all the time.

So for one thing, you're basically running into bandwidth problems; you're saturating links. In the reference implementation, like I said, it's a reference, and so it is really easy, if you have a bunch of instances that generate a lot of traffic, to saturate it. One of the things we're doing from a community perspective is we are working on distributed virtual routing, which will allow you to distribute the north-south traffic out to the different agents. That's going to come most likely in Juno, and so that's going to solve a lot of the use cases for Nova multi-host. But in the interim, one of the things to take a look at is that even these open source implementations allow you different scale-outs and different ways to configure them to basically route the traffic north-south. Also some of the proprietary options, and I'm sure you'll see vendors around, will have a different scale-out story as well that will allow you to get higher bandwidth to your instances without running into limitations.

And this Juno release is... So the current release was Icehouse, which is what we released in April, and the Juno release will come out sometime in October. Okay, thank you.
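As an editorial aside on the distributed virtual routing (DVR) work mentioned in that answer: the talk predates the Juno release, so the sketch below is an after-the-fact illustration of the configuration that work eventually exposed, not something covered in the session. Treat the option names (router_distributed in neutron.conf, agent_mode in the L3 agent config) as an assumption to verify against the Juno documentation:

```
# neutron.conf on the server (illustrative DVR settings from the Juno-era work)
[DEFAULT]
router_distributed = True    # new routers are created as distributed routers

# l3_agent.ini on each compute node
[DEFAULT]
agent_mode = dvr             # handle east-west and floating-IP traffic locally

# l3_agent.ini on the network node
[DEFAULT]
agent_mode = dvr_snat        # centralized SNAT for default north-south traffic
```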
Hi, my name is Girish. The Neutron APIs are effectively passed through into OpenDaylight, and if OpenDaylight does catch up eventually and does everything in terms of what Neutron does today, then what is the role of Neutron in the future? Right? OpenDaylight does all the agents, L3, DHCP, everything; then what is Neutron doing, just passing APIs to OpenDaylight?

Well, the thing to think about is, when you think about the Neutron API, it's really an agnostic API, in that the tenant or the cloud user does not know what the actual implementation is under the hood. So, for instance, if you're using the Nova API, you don't necessarily know that maybe it's backed by KVM, maybe it's backed by Xen, maybe it's backed by, you know, VMware; you don't know. And so that's the role of the Neutron API: you have an agnostic API that presents a single unified experience for users.

So it would be just a shim layer then, right?

But that's basically the way Cinder works, the way Nova works. It's a very consistent design pattern with the rest of OpenStack. Thanks.

All right. I have not evaluated SDN controllers at all, so I don't know the pros and cons and so forth, but I was just wondering why Floodlight wasn't mentioned. From what little I know, it seems to fall into the same category: it's been an open source SDN controller with a Neutron plug-in. So what was different about it that you didn't mention that one?

We forgot to put that in. Yeah, that was an oversight; that is an oversight. So the one kind of artifact of Floodlight is that Floodlight support inside of Neutron comes in through the Big Switch REST proxy, which actually works very well with the Floodlight controller. So our apologies, we did not mean to leave that out, but a lot of times we lump the Floodlight work in with the Big Switch work, since that team does a lot of the same maintenance for it. All right, sorry.

Hi. So you mentioned three SDN controllers, which are Ryu, OpenContrail, and OpenDaylight. As you mentioned, there is no one-size-fits-all solution for OpenStack as a plug-in, I guess. So at the end of the day, users are going to pick one SDN controller to use, either Ryu or OpenContrail or OpenDaylight. So can you comment on the advantages and disadvantages of these three, I guess most prominent, open source SDN controllers? Which one would you choose?
Well, first of all, I think it's worth noting that if you use ML2, currently you could actually try OpenDaylight and the Ryu OpenFlow agent plug-in at the same time. You could actually enable both of them, and you could use OpenDaylight for some things and Ryu for the others; basically you would run the OpenFlow agent on hosts where you wanted to use Ryu, and you would not run it on the other hosts, where you would use OpenDaylight. So it's possible to do that, and if OpenContrail does an ML2 plug-in, you could do that as well.

As far as use cases, really, we're not up here to say one is better than the other. They are all solving things in a different way. Mark was saying the Neutron API is abstracted, so the tenants won't know what's underneath implementing it. These different plugins are all implementing the same API, but solving things in a slightly different way. The most obvious example is that the Ryu plug-in still uses an agent on the host, while OpenDaylight doesn't use an agent on the host.

To add a little bit to that: a lot of your choice in technology also depends on what your existing infrastructure is in your data center. So sometimes you may have certain hardware that works better with one of the controllers versus the others, or you may already have existing contracts with integrators or distributors or service engineers who actually have a preference for using one or the other. So it's really hard for us to say, if your deployment is this size use this one, if your deployment is that size use that one. And it's no different than the choice we have now when folks ask what their hypervisor should be in terms of Nova. Again, it's really an operator choice. The biggest benefit to the ecosystem is that we do have a choice; it's not like we're all being force-fed one SDN controller that we all have to like no matter what. So that's the good thing. They're also open communities, and each community functions a little bit differently, so sometimes one of your choices is: do you like participating in that community? Do you like the documentation of that community? And it gives you a real opportunity for merit to win out as well. So... Yeah, definitely.
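To make the point in that answer about enabling several back ends concurrently a bit more concrete, the ML2 configuration simply lists multiple mechanism drivers side by side, roughly as in the sketch below. The driver entry-point names (opendaylight, ofagent, l2population) match the Icehouse-era plugins discussed in the talk, but confirm them against your installed packages:

```
# /etc/neutron/plugins/ml2/ml2_conf.ini (illustrative)
[ml2]
type_drivers = vlan,gre,vxlan
tenant_network_types = vxlan
# Both controller integrations loaded at once; port binding decides which one
# handles a given host (hosts running the OF agent bind via ofagent, the rest via opendaylight).
mechanism_drivers = opendaylight,ofagent,l2population
```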
Yes, I had a question. The Ryu agent is called the OF agent, the OpenFlow agent. Ryu is not OpenFlow; it's just one of the OpenFlow implementations. So the naming to me suggests that the OF agent can be used with any OpenFlow controller, but that's not the case. So is that the plan going forward, or is it just a misnomer to call it the OF agent?

So that's actually an interesting point. The Ryu team, I believe, may have plans to extend it to control more than just the Open vSwitch on the host, to also control OpenFlow-enabled switches or other devices that use OpenFlow. So I think that's why they called it the OpenFlow agent, because I think they do have plans to extend its reach beyond just the hypervisor.

I had one more question. In the previous session you talked about collaboration between different OpenStack projects, and now we are talking about different open source components that we are using. So what about collaboration between Neutron and these open source components? Like, say today Neutron has a certain use case requirement which OpenDaylight needs to fill; how are we going about that? Do we have some look into their decisions, like their roadmaps, or do we look at what features they have and then go about it? So what's the plan for collaboration between, say, OpenDaylight and Neutron?

So I think the best answer to that is that all of these open source plugins have representatives who kind of work across both boundaries, and the healthiest way to have this ecosystem exist is to have people who are pushing patches to both sides of the fence, I think. And that's what you're seeing already with a lot of these: you'll see people that work in both Ryu and also on the Ryu OpenFlow agent, and the same thing with OpenDaylight and Contrail as well. I think that's the best way to drive that sort of collaboration and make sure that people are on the same page, is to have that cross-collaboration. And it's frankly the same way that it works with the vendor plugins as well; the best way that that works is when you have vendor participation upstream as well as just in the plugin, even on the proprietary side.

And to add a little bit: as members of the core team, that's one of the things we do, is we help facilitate that information sharing. And also, like Kyle mentioned with the vendor plugins, you will actually see information shared from vendors even to the open source side, because the crazy thing about the OpenStack community is that a lot of people interact with a lot of people, and sometimes they change employers as well, and so we all have relationships that keep that dialogue open.

Yes, hi, I have a follow-up question to the comment that was made here earlier about the encapsulation of Neutron to SDN controllers. Can one assume that it doesn't have to be aware of any SDN-specific capabilities, and Neutron would encapsulate all the interactions? Yes. So if there are special capabilities of one particular controller, how would you resolve that?

So I think it's like we were saying: the Neutron API to some extent normalizes that, because it says, here's a set of APIs, you have to do create network. If you look at create network as a very basic example, you want to create a network that you can segregate multiple tenants across, so each tenant maybe gets its own network. The controller could implement that any way it wants; it could implement it using a VLAN or a tunnel network type, or even, like the OF agent, utilizing just, you know, MAC address-based segregation using OpenFlow.
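To illustrate how implementation-agnostic that basic call is: the tenant-facing request to create a network and subnet carries no hint of how segmentation will be realized; the plugin underneath decides whether that becomes a VLAN, a tunnel, or OpenFlow rules. A minimal Icehouse-era CLI example, with hypothetical resource names, would be:

```
# The request says nothing about VLANs, tunnels, or OpenFlow; the deployed plugin decides.
neutron net-create demo-net
neutron subnet-create demo-net 10.0.0.0/24 --name demo-subnet
```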
How about more advanced stuff, such as if you want to do redirection to firewalls and IPS and whatnot, network introspection?

Yeah, and again, that would come if the controller wanted to implement perhaps some of the extension APIs we have around VPN or firewall or load balancer or something like that. The controller could do that utilizing service redirection itself, or something like that. Okay, thank you.

Yeah. Hi. So in this model, when we have Neutron as a pass-through, right, one of the things that was available in Grizzly but with ML2 kind of got restricted is the ability to add extensions that Neutron currently doesn't support. Say, for example, you want to pass some specific parameters to create router that are not there in the open source version. To be able to do that, you just, you know, get an additional bunch of parameters that the plug-in part knows how to parse, whereas the driver doesn't know about them. Now with this model, where you have ODL or some other controller and you want to expose functionality up north, are you going to have a model to do that? Is that even under consideration, or...?

Yes. So there's a design summit session where we're talking about how we can expose extensions into ML2 drivers, so that we can expose different capabilities.

So we've got time for one last question. Yeah. We are moving the core functionalities to the controller, so what is the roadmap for Neutron? Why do we need DVR and any other features in that case?

Well, again, there are always going to be use cases for deployments where people want a very simple install. They want to be able to deploy Neutron and run in a limited cluster size, and so DVR gives you that advantage. Maybe you don't have a whole lot of instances, but you need a whole lot of throughput; you can deploy DVR in one or two racks and get a deployment, versus having to configure another moving part. Exactly. Yep.

Alrighty, thank you all. Yeah, thank you everyone.