All right. Well, thanks for coming at this late hour of the afternoon. My name is Armando Migliaccio. I'm here speaking as the Neutron PTL, and I'm handing it over to either of these guys.

All right. We're going alphabetically. So my name is Christopher Price. I've been working in the OPNFV community, and I've been a participant in the OpenStack and OpenDaylight communities for some years, and I'm here to talk about OPNFV networking and the role of Neutron.

Hi everybody. I'm Rossella Sblendido. I am a Neutron core and I recently joined the Technical Steering Committee of OPNFV. I'm here to give this talk about the relationship between the two of them. Thanks.

Very good. So, more or less, we're going to do this in three parts. I'm going to talk a little bit about OPNFV, then we'll talk about the OPNFV-related features that are coming out through Neutron, and then we'll talk about the history and the future of where Neutron is going. That's a rough outline.

So I can start very briefly by talking about OPNFV. I guess the first question is: how many people actually know what OPNFV does and where it sits? Maybe 20% of the room, if we're lucky. Okay, so I will give a brief introduction, and I won't go too far into it.

OPNFV has a few key goals. First and primarily, it's to develop an integrated platform for carrier networks; a virtualization layer, if you like, for carrier networks. We don't want to build our own, which is why we come to communities like OpenStack that provide the foundation parts for carriers and for how they're going to build these networks. So we work a lot with other communities, and we spend our time trying to help communities collaborate and find good solutions. We also strive to include the participation of end users, so we have a number of key operators on our governing board: AT&T, China Mobile, Telecom Italia and others. And we try to contribute. If we have an idea, something we see as a need, we come to OpenStack and we say: hey, we need to solve this problem, can we do this together? And then, of course, we try to contribute to these communities: OpenDaylight, OpenStack, Linux, KVM. At the end of the day, we're trying to build an ecosystem for developing things. There are some statistics I won't go into, because statistics are boring.

But just to give an overview of OPNFV: we build a cloud, right? That's what we do. We build a cloud, and we build it for telecom operators. So we look at the types of use cases telecom operators have. They have big clouds in the middle, medium-sized clouds in different cities, and then they may have smaller clouds at the edge. All of these serve different purposes and require different capabilities. So we look at all of those clouds. In our latest release, Colorado, we produced 47 different types of cloud, just because there are that many different needs that we haven't been able to converge into a common platform yet. Of course, the purpose is to do that eventually. And we do this through integration, primarily, and a lot of testing; we do a lot of testing, including of things that have already been tested, because that's just what we do. We bring all of the Tempest tests in, and then we write our own end-to-end tests. And we try to develop new features.
One of the key areas for us is the continuous integration, continuous deployment pipeline. We work very closely with the OpenStack team on this, and what we're trying to do is make it as easy as possible to build a cloud. We deploy, I think, over 10,000 clouds a year in our labs and infrastructure, all based on those 47 different flavors that we have, and that's all through aggressive automation, I would say. So this is a map, which is kind of hard to read, but at the framework level we have the CI/CD pipeline. That's all built around a set of infrastructures: components, hardware types, different lab configurations and so on and so forth. On top of that we build in our infrastructure and tooling: analytics, orchestration, virtual infrastructure management, networking control, different types of networking approaches that we try out and integrate into these platforms. And on top of it all you have the application layer, things you may want to run, like platform-as-a-service execution environments, and all of this sits on top of what we provide as an infrastructure-as-a-service layer.

As I mentioned, we recently did a release, Colorado. Some key areas that came out in Colorado: we now have full IPv6 underlay and overlay solutions in our platforms; we support multiple types of service function chaining; we can do VPNs with BGP peering end to end; and we support different hardware architectures. So we're not only deploying on x86; of our scenarios, I think we have a dozen that have been ported to run on both x86 and ARM, because we see that in smaller data centers at the edge of the network we may have ARM-type architectures. So supporting multiple hardware is extremely important for us. And of course the DevOps needed to make all this happen is extremely important, so we did a lot of work on that. And I think that's all I have as far as the introduction is concerned, so I will pass it over.

Thanks, Chris. So now let's move on and look a little bit at the relationship between Neutron and OPNFV. Of course, to understand that, first we need to understand what NFV is. I don't know how many of you in the audience are familiar with the concept; I'm sure you've heard the word, but I'm not sure you know exactly what it means. The idea is very simple: it's to virtualize the physical appliances that telcos use to make traffic flow, like radio equipment, routers, firewalls. It's more or less what happened at the beginning of server virtualization: we had lots of physical servers that were difficult to maintain and not scalable, and this is the same idea applied to network appliances. And you can imagine the benefits: you save money, the appliances are easier to maintain, and it's easier to get orchestration.

In order to do that, we need a reference architecture, which I'm taking from ETSI. Let's try to understand these blocks; they can be a bit scary, which is why I slightly modified the diagram to make it a bit more human-friendly. We have the virtualized network function (VNF): this is basically the virtualization of a network function. What's a network function? It could be anything: firewalling, DHCP, you name it. These virtual network functions, of course, need an infrastructure, and we need to configure and manage that infrastructure. This is done by the management and orchestration layer; maybe you've heard the word MANO. That's what it does.
So, with an analogy, we can think of this management and orchestration level as some kind of god; the virtual network functions are the animals, plants and human beings; and the infrastructure is the Earth, or any other planet you like where you can have life. I don't know if there are many.

Let's now have a look at the OPNFV architecture. You see it mirrors the diagram we've seen before, just with some more information. They decided to use OpenStack as the VIM, the Virtualized Infrastructure Manager. You can see there are three fields: the virtualization, the storage, and then the networking. So guess where Neutron is? Of course, it's the component taking care of the networking; you can see that in the diagram. And as you can see, the most common configurations for OPNFV use an SDN backend like OpenDaylight, ONOS or OpenContrail.

So, OPNFV and Neutron. Neutron provides the API that the VNF manager can use to create and configure the network resources needed to deliver the virtual network function. Of course, OPNFV and Neutron are separate projects and communities, and we started developing Neutron before OPNFV existed, so there can be some friction, as always happens in good relationships. You can have two kinds of friction. One is tied to the models: in Neutron we have abstractions, models that identify network resources like port, network and so on, and these models might not fit completely with the OPNFV use cases. For example, in Neutron a network is tied to a layer 2 domain, and this doesn't always work well for OPNFV use cases; for example, it doesn't work very well with BGP VPN. The other kind of friction is that maybe the API is missing some piece and you want to extend it. This is already happening: for example, there is the networking-sfc project inside the Neutron stadium, and what it basically does is provide the API to configure service function chaining.

And we try to collaborate. One good example of this collaboration is a new feature that just landed in Neutron. It was really wanted by the NFV people, and it took quite some time to deliver, but we made it. I just want to explain a little bit what this feature is about. The idea is to be able to get tagged traffic to the VM, and also for the VM to be able to send tagged traffic. This is very important for NFV because, as I was saying before, the goal is to virtualize appliances, and sometimes you might have a legacy application that uses tagged traffic to ensure isolation, so you really need to get tagged traffic to the VM. Another good use case is that you might need to connect a VM to several networks, and it's not very scalable to create one new interface for every network when you could use a VLAN subinterface instead. This feature is also useful for containers, because you can then use VLANs to isolate the traffic inside the VM and handle several containers. For this feature we added two new entities. One is the trunk port which, as the name says, embodies the concept of a trunk: it's a port that can receive tagged traffic. And then we have the subport, which is associated with a trunk and is identified by a segmentation ID, so the traffic that flows through a subport carries only a specific VLAN ID.
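To make the two entities concrete, here is a minimal sketch of how a trunk with one subport could be created against the Neutron REST API that ships with this feature. The payload shape follows the documented /v2.0/trunks resource; the endpoint URL, token and port UUIDs below are placeholders, not values from the talk.

```python
import requests

# Hypothetical values: a real deployment supplies its own Neutron
# endpoint, Keystone token, and two pre-created Neutron ports.
NEUTRON = "http://controller:9696/v2.0"
HEADERS = {"X-Auth-Token": "TOKEN", "Content-Type": "application/json"}
parent_port = "11111111-1111-1111-1111-111111111111"  # port the VM boots with
child_port = "22222222-2222-2222-2222-222222222222"   # port on the second network

# Create the trunk on the parent port, with one subport carrying VLAN 10.
body = {
    "trunk": {
        "name": "my-trunk",
        "port_id": parent_port,
        "sub_ports": [
            {
                "port_id": child_port,
                "segmentation_type": "vlan",
                "segmentation_id": 10,
            }
        ],
    }
}
resp = requests.post(f"{NEUTRON}/trunks", json=body, headers=HEADERS)
resp.raise_for_status()
print(resp.json()["trunk"]["id"])
```

Untagged traffic keeps flowing through the parent port as usual; anything the VM tags with VLAN 10 is treated as belonging to the subport's network.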
So I just wanted to quickly show you how this works in the OVS implementation. Here in the graph we have a VM with a trunk port and a subport with segmentation ID 10. As you can see, to be able to handle this tagged traffic we introduced a new bridge, the trunk bridge, which is the one in charge of the tagged traffic. Now let's see the path of a packet when the VM is sending untagged traffic. The untagged traffic will go through the trunk port; you see the square there, the white square. The trunk port is on network two, so that's the red line. The packet will go to the trunk bridge, and on the trunk bridge we have a patch port, tpt, whose peer, tpi, is on br-int. So the untagged traffic flows through this patch port and then to the peer, tpi. On tpi I put a red circle because that's a tagged port: it's tagged with segmentation ID 5, because I'm assuming that on this compute host 5 is the VLAN ID used internally by the OVS agent to separate the traffic between the networks. Now let's see what happens when the VM sends traffic tagged with segmentation ID 10. It goes through the subport, which is the blue diamond. Then it gets to the trunk bridge, where we have the patch port spt, which is tagged with segmentation ID 10, so the traffic flows there. Then it gets to the peer, spi. As you see, it's tagged too, but with a different tag (it's a triangle): segmentation ID 7, because 7 is the local VLAN ID that the OVS agent uses for network one. And with that, I can hand it over to Armando.

Right. So I won't go into a deep dive on VLAN-aware VMs, but what I will do is double-click on some of the things that Rossella and Chris have touched on when it comes to Neutron and OPNFV. Some of you may actually wonder what Neutron is all about anyway; so far we've seen this black box without really drilling down into the internals. To start off: Neutron, besides being a code base, is primarily a community of people who gather around a virtual round table called OpenStack and decide to collaborate the OpenStack way. When the project was started, its major objective was to devise abstractions for networking constructs in a self-provisioned manner. And even though at the very beginning one of the main tenets was providing overlay networking in the form of a logical layer 2 broadcast domain, the API, and the abstractions underpinning it, were devised in such a way that we could extend certain concepts, alter certain concepts, and make sure that these concepts are technology-agnostic, so that they can map onto physical implementations that may vary. That means the API can, to some degree, be considered somewhat polymorphic, and underneath the API you can plug different components. And I'm actually borrowing a picture; I couldn't even be bothered to change the name. Quantum is how Neutron was formerly known; we had to change the name for legal reasons. I'm borrowing this picture from an old slide deck, and it exemplifies a little bit of what I just described.
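To give a feel for what "plugging different components underneath the API" looks like in code, here is a hedged sketch of the shape of an ML2 mechanism driver, the usual integration point for an external SDN backend. The class and the backend client are hypothetical stand-ins; the method names follow ML2's MechanismDriver interface (found in neutron_lib.plugins.ml2.api in current releases).

```python
from neutron_lib.plugins.ml2 import api


class ExampleBackendClient:
    """Hypothetical stand-in for a real SDN controller's client library."""

    def create_network(self, net):
        print("backend: create network", net["id"])

    def create_port(self, port):
        print("backend: create port", port["id"])

    def delete_network(self, net_id):
        print("backend: delete network", net_id)


class ExampleMechanismDriver(api.MechanismDriver):
    def initialize(self):
        # Called once when the Neutron server loads the driver.
        self.backend = ExampleBackendClient()

    def create_network_postcommit(self, context):
        # The logical network is already committed to the Neutron DB;
        # mirror it into the backend. context.current is the resource dict.
        self.backend.create_network(context.current)

    def create_port_postcommit(self, context):
        self.backend.create_port(context.current)

    def delete_network_postcommit(self, context):
        self.backend.delete_network(context.current["id"])
```

A driver like this would be registered as a setuptools entry point and enabled through the mechanism_drivers option in the ML2 configuration; the logical API objects stay the same no matter which driver translates them onto the wire.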
And these compositions of services and components are something we tend to refer to as the stadium, which is a fancy name for a list of projects that relate to one another to deliver networking services on top of the core backbone that makes up Neutron as a whole. Features and collaborations end up being managed and driven by what we call the Neutron drivers team, a set of people who have been around the Neutron community and project for long enough to be blamed for all the mistakes and all the good things that have happened in the project.

This picture gives you one view of what the Neutron architecture is like, and what this modularization and composition look like. The Neutron server, which is one key component of a Neutron deployment, makes up the bulk of the system. Besides the things that are not very sexy, like state notifications, quota management, policy enforcement, scheduling and so on and so forth, the key piece here is the API abstraction. The API abstraction is where the magic is, and it is fulfilled by what we call plugins. We have core plugins, which implement a core subset of networking abstractions used by other projects in OpenStack, like Nova, Heat, Ceilometer, Magnum; the more you can come up with, the better. And there are other services that may be considered somewhat optional, or suitable for addressing more niche needs: things like load balancing or firewalling and so on. All these components can collaborate in a loosely coupled fashion in order to deliver networking services.

The fact that this is a very composable architecture means that two Neutron deployments may not necessarily look alike. You may have a cloud that is meant to do something like AWS EC2, powered by a Neutron deployment that has just a core plugin and an L3 plugin. And you can have another cloud in a telco environment that's been configured to do something completely different, something very telco-driven, where you may have something like BGP VPN and service function chaining. And, as I briefly skipped over, these plugins themselves do provide a pluggable framework for tapping into SDN controllers or built-in agents, and then they map the logical API artifacts onto physical constructs.

So obviously I would be naive to think that the platform would be ready and functionally complete from day one. There are gaps. The project has grown organically over time, but obviously there are gaps, right? There are only so many people involved at any given time, and the project is still relatively young. So the question we sometimes get asked is: is this feature ready, or is this a gap? I think those questions are valid, but the more crucial question is: can these gaps be overcome? Is the platform, and the project, arranged in such a way that these gaps can be filled over time?
That is: (a) is the platform extensible and modular enough that things can be composed in a way that many people can understand what they're dealing with, without concentrating the knowledge of the whole thing in one god-type person? And (b), is the project adopting a set of processes and procedures that enable collaboration at scale? Looking back at the various releases we've gone through, and having been involved as a core and as PTL, I would say we've established a set of procedures and architectural guidelines where this gap-filling exercise is somewhat of a success. Obviously we make mistakes, and we look back, try to iterate on our mistakes, and fix them.

The reason I bring this up is that I want to touch on something known as NetReady in the OPNFV project which, as far as I understand (and Chris, keep me honest here), is a project aimed at identifying the gaps Neutron as a platform has when it comes to fulfilling OPNFV requirements. You can find the whole set of requirements and identified gaps at the link at the very bottom of the slide deck. Here I've summarized a number of them in this bullet list. The ones marked with crosses are the ones I somewhat disagree with, and the ones with ticks are the ones where I acknowledge it's something the Neutron community has to work on. I would be happy to drill down into those, but maybe we can do that in a question-and-answer fashion. I have no more slides, so I'll open it up to questions. A round of applause, no? That's fine.

So, just to paraphrase, and coming back to the point of this topic: Neutron provides us with a network control function that allows us to move natively from OpenStack into the network and start to manage that network. We've done studies in OPNFV which show that Neutron doesn't necessarily solve every problem in the world, and that's kind of okay; Neutron doesn't need to solve every problem in the world. There are other architectures, other approaches; we know there's a bunch of SDN controllers out there that want to integrate, and do integrate today. As we move forward, we explore how to solve some of these problems, some of which are relevant to the Neutron community and some of which may not be, and we try to find ways of realizing them going forward, essentially coming to an understanding of how to make progress as a community. From an OPNFV perspective, we see Neutron as a very stable and constantly improving network control function. For us, as you saw on the slides, it's one of many that we work with, so it's part of the ecosystem; from an OpenStack perspective, it's the primary networking-focused ecosystem that we want to work with. And certainly we want to find ways of making sure we can integrate those other solutions and architectures into an OpenStack platform.

I would like to add that Neutron, as I mentioned earlier, is a community, and a system that you can deploy and play with and blame or hate or love. But it is also the means, the vehicle, for getting access to your workloads. Ultimately, OpenStack is about VMs, containers, bare metal: things that can crunch data.
If you think about Nova, or about Ironic, Neutron is the key vessel used by these projects to provide networking services. So again, it's the interface that allows workloads to get access to the pipes. That is another thing I would like to stress and highlight. And it is important to realize, as Chris mentioned, that we don't necessarily have to bloat the plumbing layer. We can figure out ways to carve it up, to slice it in such a way that it doesn't collapse under its own weight. But that has to be done in coordination and cooperation with the entire OpenStack ecosystem of projects, not just by looking at and dealing with Neutron alone. For instance, there have been initiatives, which some of you may be familiar with, like Gluon, that look at how Nova and Neutron interface with one another; in order to address certain issues there, we will have to look at both projects in conjunction.

Yeah. Yes. If you can go to the mic, that would be great; otherwise you can speak very loudly and we'll try to replay the question. Thank you.

Okay, so given that there are gaps in Neutron, as you mentioned, and Neutron has plugins for various SDN controllers: when does it make sense to fill in the gaps in Neutron versus in the SDN controllers themselves?

Yeah, that's a good question. There is no hard and fast rule, really; you make a judgment call on a case-by-case basis, and ultimately this question gets settled by the needs of the people involved in a given initiative. We tend to privilege access to open source technologies as far as Neutron goes. So, as long as there are SDN controller technologies we can consider open and accessible, and as long as we agree on a common abstraction that can be easily implemented and mapped onto different implementations, you can chase an SDN controller-based implementation first and then tap into other possible venues. Maybe you go down a proprietary controller path, or you take a path where you end up building the entire stack yourself, as we've done for some of the services and some of the plugins. So the answer is: it varies. I've seen examples in the past where we chose one or the other, depending on how fast you want to get to market, so to speak.

Yeah, just to add on to that: I don't think we can give one answer, it's not that easy. Can I fit ICN into Neutron? That's something I probably wouldn't try to do, because it's just going to cause more problems than it's worth; I'd find another way of integrating that into the solution if I wanted an ICN-based data center. We have to be practical about how we do things. Neutron is an attractive venue to get work done where the work fits into what Neutron is doing, because Neutron is native to OpenStack, right? If you want your technology to be immediately accessible to everyone in OpenStack, it's a great place to get stuff done. If you want to address things in a different way, or take a completely different approach, then maybe there are other venues worth pursuing.

Yeah, I'd also like to add that it depends a lot on the kind of gap.
If the gap is some new functionality that you want to add, then it might make sense to go to an SDN controller. But if the model doesn't really fit well, then I think the right solution is to discuss it with the Neutron community, see what you can do, and try to find a common point. It doesn't really work to simply skip that step and just use some custom SDN solution.

Yeah, from a practical standpoint, asking "can you do it in Neutron?" ultimately boils down to choosing whether you want to write Python and whether you want to expose a REST API that looks like the Neutron API, the API layer that the Neutron server provides to the other OpenStack services. If you're uncomfortable with either of those options, because you want to use a different language or you want to expose the REST API in a different way, then obviously that effort cannot happen in the context of Neutron as a software system. But, again, the Neutron community is still pretty inclusive and open to any type of innovation required to address these needs. So if someone wants to come along and do that work in Neutron, and they have spare time on their hands, then by all means: it's an open source community, which means anyone is welcome to step up, roll up their sleeves, and do the work. And it is a lot of work. So, yeah.

Good. Any other questions? Any hard questions? Well, it seems like we've either done a good job or a terrible job. Thank you for joining us. Thank you very much.