Morning everyone, thanks for coming. We'll be talking about OpenStack Networking; we're going to give a project update, so let's dive in. Let's introduce ourselves. My name is Armando Migliaccio, for those who don't know me; I've been the Neutron PTL for the Mitaka, Newton, and Ocata cycles. And I'm Kevin Benton, the new PTL for Pike, and maybe more later. All right, so this is the agenda we're going to go through today. We'll have a quick overview of the project, then we'll dive into how we govern the project, we'll try to touch on a few technical points, and then Kevin will go over the roadmap. If we have time, we'll obviously open up to questions. All right. So, I couldn't have a presentation without this slide. As you can see, the networking piece is a key component of the OpenStack ecosystem, and we see it right there in the middle. So if things go bad, that's usually our fault. So, what about the project? The project was started back in 2011, in the Diablo timeframe. We've grown tremendously over the years, to the point where we have hundreds of contributors across many releases. Nowadays Neutron is deployed virtually everywhere in OpenStack environments: more than nine out of ten deployments have Neutron in their deployment architecture. So, from an abstraction point of view, what is Neutron?
It's a self-service API that allows cloud users to create networking artifacts that may map onto different implementation backends. The reason we came up with that abstraction was that, at the very beginning of the OpenStack project, some networking details were baked into OpenStack itself. Some of you, if not all of you, may know about Nova Networking: that was an inflexible model where you couldn't control what the networking topology for interconnecting workloads was going to look like, and it gave you very limited ability to provide isolation and to scale tenant environments adequately over time. Those are some of the questions that Neutron tries to answer. From a governance standpoint, the project has evolved radically. We were obviously a very small project at the beginning, but then we grew in scope to embrace more and more networking use cases. From a project management standpoint, some of you may have heard of the Neutron Stadium, a concept that was brought in by a PTL before me, Kyle Mestery, at the time of the Big Tent. From a governance standpoint, the Stadium is effectively a list of networking-related projects that are overseen by the Neutron core team. As core team members, we are asked to provide architectural guidance, we review API proposals, we take care of release management for the projects, and we address gate and infrastructure issues. In return, since each subproject has its own core team, they pledge to provide good documentation across the spectrum: user, admin, and developer. We obviously care about testing, we do things like stable branch management, and so on and so forth. But, more importantly, we aim at being a fully open-source stack from the
ground up. So, again, Neutron projects are typically projects that address technologies that are openly accessible to everybody; in a nutshell, these subprojects follow the practices adopted by the Neutron core, that is, by what was started as the Neutron project and the Neutron code repository. So let's have a glance at the Neutron subprojects that we currently manage as part of the Neutron Stadium. There are backend projects and there are API projects. The difference is that backend projects typically end up implementing the APIs that are available in the Neutron core, and that were introduced back in the early days of the project, whereas the API definitions, and their respective implementations, have been brought into the larger Neutron ecosystem over time. So we have backends like MidoNet, OpenDaylight, OVN, and BaGPipe, versus APIs like BGP VPN, dynamic routing, Firewall-as-a-Service, and service function chaining. So let's go through some of these. MidoNet is the Midokura solution. It is a very feature-rich backend; it addresses many use cases, from gateways to firewalling, QoS, load balancing, and Tap-as-a-Service. I've put a reference to the release notes in the slides, so you can use it as a pointer to learn more about these projects. OpenDaylight was possibly the first SDN controller implementation for Neutron, and it has already gone through a couple of iterations to make it production grade. It features things like L2 gateways, firewalling, QoS, load balancing, and so on. OVN is something that started not too long ago.
This is effectively an expansion in scope of the Open vSwitch community. We, as the Neutron project, provided OVS as one of the backends for the API, alongside Linux bridge, and we did so using a Python-based agent architecture where we effectively talked to the Open vSwitch daemons in order to provide virtual networking. Over time, the Open vSwitch community came to the realization that Open vSwitch could actually grow in scope and provide some of those abstractions and building blocks within the Open vSwitch community itself; that's what OVN is about. From the standpoint of integration into the Neutron system, the integration is somewhat similar to what OpenDaylight has done, in the form of an ML2 mechanism driver. One of the tenets at the inception of the project was: as an OVN community, we want to make sure that we may be able to scale and perform much better than the existing agent-based OVS implementation. Whether that's going to end up being true over time, we'll have to see, because the project is still relatively new, but the community has shown a promising start.
There's still a gap between the agent-based OVS implementation and OVN in terms of features, but they do provide L2 and L3 services as well as DHCP, they provide an implementation of the trunking API and things like QoS, and they also integrate with various container orchestrators like Kubernetes and Docker. BaGPipe was an initiative started by Orange, and it is effectively a way to provide tenant isolation by means of BGP VPNs. It is also, as you'll see in a second, the building block for providing an implementation of the BGP VPN API. In fact, the BGP VPN project uses BaGPipe to interconnect Neutron networks via BGP-based VPNs, but the project itself also implements a plugin-based model where different backends can come in and implement the API, and we have implementations available for OpenDaylight, OpenContrail, and Nuage. Dynamic routing came about as a way to overcome the limitations of static routes. It's all good and well that you can come up with your private tenant networks, but then what if you want to integrate with the rest of your data center? Dynamic routing came about in order to enable advertisement of these private networks by means of BGP. At this point in time we have Ryu as one of the implementations; you'll see in a little bit what we're working on right now and in the future.
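To make the dynamic routing idea a little more concrete, here is a minimal sketch of the bookkeeping involved: given private tenant subnets behind a router, pair each prefix with the router's external address as the BGP next hop, the way a speaker would advertise them upstream. The data model and names here are illustrative only, not the actual neutron-dynamic-routing API.

```python
import ipaddress

def build_announcements(tenant_subnets, router_gateway_ip):
    """Pair each tenant prefix with the router's external gateway
    address as next hop, as a BGP speaker would advertise them."""
    announcements = []
    for cidr in tenant_subnets:
        prefix = ipaddress.ip_network(cidr)  # validates the CIDR
        announcements.append((str(prefix), router_gateway_ip))
    return announcements

routes = build_announcements(
    ["10.0.1.0/24", "10.0.2.0/24"],  # private tenant subnets
    "203.0.113.5",                   # router's address on the external net
)
print(routes)
```

The point is simply that, once the fabric learns these routes dynamically, the private networks become reachable from the rest of the data center without anyone maintaining static routes by hand.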
So, Firewall-as-a-Service was conceived as a way to provide a zero-trust security model, as opposed to security groups, where the user comes in and says, okay, I want to open up these ports. Firewall-as-a-Service takes the administrator and operator standpoint and says, okay, I actually want to force these ports closed. It's basically a mechanism to override user will, if you allow me the expression. It started with an initial iteration looking at how these rules could be applied at the router level, but it evolved to be more port-oriented with the v2 version. The v2 version actually seems more promising, in the sense that it allows the semantics of Firewall-as-a-Service to be applied more consistently throughout the port topologies that are available in Neutron. We also, over time, worked on service function chaining. It is an API that allows the definition of port chains: basically, we take Neutron ports, we can model a chain, and traffic that matches the chain's classifier effectively gets redirected through the chain. It has implementations for OVS, OpenDaylight, and ONOS. All right, I'm going to hand over to Kevin now, who's going to walk us through the roadmap.
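The port-chain model just described can be sketched in simplified form: a flow classifier matches traffic, and matching packets are steered through an ordered list of Neutron ports before reaching their destination. The names and dictionary layout below are invented for illustration; they are not the actual networking-sfc API.

```python
def matches(classifier, packet):
    """Return True if the packet matches the chain's flow classifier."""
    return (packet["protocol"] == classifier["protocol"]
            and packet["dest_port"] == classifier["dest_port"])

def steer(packet, chain):
    """Return the ordered list of Neutron port IDs the packet traverses:
    each service-function hop first, then the original destination."""
    if matches(chain["classifier"], packet):
        return chain["port_hops"] + [packet["dest_neutron_port"]]
    return [packet["dest_neutron_port"]]

chain = {
    "classifier": {"protocol": "tcp", "dest_port": 80},
    "port_hops": ["firewall-port", "ids-port"],  # the service functions
}

web = {"protocol": "tcp", "dest_port": 80, "dest_neutron_port": "vm-port"}
ssh = {"protocol": "tcp", "dest_port": 22, "dest_neutron_port": "vm-port"}
print(steer(web, chain))  # matched traffic visits the firewall and IDS first
print(steer(ssh, chain))  # unmatched traffic goes straight to the VM port
```

The real API models this with port pairs, port pair groups, and flow classifiers, but the steering idea is the same.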
Okay. So I'm going to go over a few of the bigger features that we have targeted for Pike that will be visible to operators and users, and then some of the community-wide goals that will impact operators but won't really be as visible to users. The first big one, which has been going on for a couple of cycles now, is adopting Oslo Versioned Objects inside of Neutron. This is a lot of internal work to refactor how we pass around data structures and how we communicate with agents, and it will enable us to do rolling upgrades as well as push notifications, because it provides a mechanism by which one version of neutron-server can communicate with agents from the version behind it, and you can have multiple versions of neutron-server running at the same time. So, starting with the Ocata release, we now support upgrading the database schema while there are Newton servers running: you can bring Ocata servers online and then slowly phase out the Newton servers, so you can have zero-downtime upgrades of the neutron-server API. Multiple port bindings is something that won't really be visible directly to users, but it affects the communication between the Neutron and Nova components. When Nova goes to migrate a VM from one host to another, right now it has a limitation where it has to land on a host that has the same type of networking topology: both sides have to be using OVS, and the interface type, if you will, can't change at all. That means you can't do things like migrate from OVS to an SR-IOV-based system, or from OVS to OVN, so it blocks a lot of upgrade options for providers that want to change from one networking technology to another. Another example where this is really going to help: we recently added a pure OVS-based firewall that has much better performance than the Linux iptables hybrid-based one for the OVS backend. But until
we have these multiple port bindings, there won't be a way for an operator to adopt the new firewall type and migrate VMs to it. Security group logging is, for now, starting out as a purely operator-facing API. It will let operators define criteria to match, to say: I want to log all packets, or log all instances where a packet has reached a security group rule and has been dropped or accepted. This is very experimental early on; the initial implementation will only target the OVS reference implementation that we have now, and it will be operator-facing, so it won't be available to users, but it will allow operators to log security events for sensitive devices. Quota usage API: right now the quota API that we have is very limited. It just lets operators set quotas for tenants, and then tenants can view their own quota limits, but they can't see how close they are to hitting the limit on those quotas. So there's going to be a new details endpoint on the quota API where either users can look at their own usage, or operators can go in and ask how many ports or how many networks a tenant is using, without having to do full port listings and that kind of thing, so they can see how close users are getting to their quota limits. RBAC rules for domains: right now we have an RBAC API inside of Neutron that allows you to share networks with other tenants or projects, and allow them to access them as external networks, or as shared networks directly with their VMs. It can also be used to share QoS policies between projects, and this is just going to be expanded to also support Keystone domains, so you'll be able to share networks or QoS objects with entire Keystone domains. Diagnostics is another brand-new API that we're trying to get a very early version of in this cycle. Do you know if it's operator-only, or will users be able to use it?
We'll try to address the end-user use case. Yeah, so the basic idea is that you can query this diagnostics API and it returns a bunch of information, using a plugin architecture in the backend that will examine the state of agents, if it's an agent-based system, and let you know: hey, the reason your instance isn't getting an IP address is because maybe the DHCP agent is dead, or none of your agents are responding. So it'll help with troubleshooting when you have difficulty figuring out why your floating IP isn't working, or why you can't reach your instance. You'd be able to use the diagnostics API and it'd return some error code to say, hey, there's something wrong with the infrastructure, or maybe, hey, you just don't have any security group rules that are allowing traffic, so you need to go in and add some if you expect things to work. QoS ingress rule support: this one's pretty straightforward. The initial version of QoS, released a couple of cycles ago now, only had limits on egress bandwidth, so you could only put bandwidth limits on the traffic coming out of instances. This is just adding ingress support as well, so you can limit how much traffic can go into instances and do bidirectional limits. The community goals: these are community-wide goals set by the Technical Committee. Python 3 support: Neutron, I think, already has Python 3 support; we have one job that's running all the unit tests with it, and we're working on getting a Tempest job and maybe one of our functional-test or fullstack-test jobs set up. So this one should be effectively done; there could be some edge cases we're missing right now that we'll find during the cycle as we enable more testing of it. WSGI support is a little further behind, but it's still on target.
I think it will be released this cycle. WSGI support is just to allow the Neutron API server to be run in Apache, or another WSGI-capable web server, rather than as an individual process with its own socket handling. If you want more information on the goals and the reasoning for them, you can go to the URL that's here. So, neutron-lib is a library that we started two cycles ago, or maybe a little longer than that. We've been trying to provide a stable interface: we have the Neutron core repo, and then we have all these drivers, service plugins, and subprojects that need constants and all kinds of things that are specific to Neutron. Right now they're importing from Neutron core directly, and they end up breaking whenever we make some kind of refactor. So we came up with neutron-lib to make a stable interface that's safe to import, with a really long deprecation cycle if we need to get rid of something. That way drivers don't have to constantly watch what's going on in master and worry about things breaking, as long as they're importing from neutron-lib. The goal for Pike is to pick a few of the subprojects that we have inside the Neutron Stadium and make sure they're not importing anything from Neutron at all, and make sure we provide everything necessary for them to function. Yeah, this is pretty much a housekeeping task: make sure that we provide a clean, stable interface for the various projects that can integrate into the core Neutron platform. So if you're a driver developer.
This is more relevant to you. So the broad themes for what we're focusing on in Pike: scalability and manageability, with the versioned-object upgrades and the operator-focused features; modularity, the big thing there being neutron-lib, to make sure we're not constantly breaking subprojects and putting out fires; and interoperability, which we touched on a bit with the multiple port binding work that's going on between Nova and Neutron. Security is kind of minor: there's the logging API, but there isn't much changing along those lines; there's Firewall-as-a-Service inside that project, but nothing in core Neutron changes there. And user experience: the troubleshooting API will help with that a little bit, too. Yeah, and also, we keep on investing time and energy in the OpenStack client. Right now you can use the OpenStack client for pretty much any core command that targets L2 and L3 operations. We've deprecated the neutron client as of Ocata, I think, and we obviously won't be getting rid of it anytime soon, but if you are users out there, you should start considering using the OpenStack client and giving us feedback, because there may still be gaps that we may have overlooked. Then, within the subprojects, there's quite a bit of activity going on that's fairly independent from what's happening in the Neutron core projects. Inside of MidoNet, they're working on Ironic
integration, IPv6, and container integration. OpenDaylight: scalability, any other big features? No, I think they've gotten to a decently mature point and they're going through incremental iterations; I think the biggest thing they wanted to work on was L3 enhancements. And then OVN: they're working on the ML2/OVS migration. Their big goal is to get to parity with the current OVS reference implementation that we have, so they can start suggesting people move from our in-tree OVS implementation over to OVN. One of their dependencies there is having a migration path, like the port binding work in the Neutron core, for seamlessly moving a workload that's running on the existing data plane, as implemented by the agent-based OVS architecture, over to OVN. BaGPipe and BGP VPN: just more features for fine-grained control over routing. Do you have any more details? Yeah, if you guys have more questions, I think we have a few experts in the room that can help us out; I see Paul over there. Firewall-as-a-Service: the big thing happening inside Firewall-as-a-Service is the development of the v2 implementation, the port-based one. Right now, the router-based Firewall-as-a-Service v1 isn't going to receive any major updates or fixes, so all the effort is being focused on the v2 API and the implementation for OVS. Service function chaining: do you want to give a quick update? Yeah.
Well, also, let's not forget dynamic routing. We've been working on providing different implementation backends: there was a team at some point working on the Quagga driver, and there was also an initiative to refactor the code base so that it would be a little more agentless-friendly; right now there are some baked-in concepts that are tied to the agent-based framework used to provide dynamic routing services. Service function chaining: the team was focused on NSH and on providing higher-level abstractions to build service chains, and also on addressing more use cases that would potentially even target bare-metal deployments. Truth be told, during the past few months we've experienced a bit of a setback, with a number of contributors moving on to greener pastures, let's say, so we're trying to huddle up and figure out how we can keep pushing the roadmap forward with more limited resources. Some of these goals may end up being dropped from the map; we're still trying to figure that out. Yeah, VPN-as-a-Service: during the last cycle it fell out of the Neutron Stadium, and not because there is anything particularly wrong with the project; it's not critically broken or flawed or anything like that, and there are actually quite a few operators that I know are using it. We just need basic maintainers to volunteer to do bug triage and review the simple patches, you know, changing import paths, that kind of thing. We have a session dedicated to this, actually right after this one, in MR 104.
So if you're interested in VPN-as-a-Service, come down to that one; we're going to try to get some volunteers from operators. It won't take too much time, because I don't think there are any big feature requests right now; it's a product that just kind of works. The reason it ended up being dropped from the Neutron Stadium is, again, that there wasn't an active maintainer. When it comes to OpenStack development, projects need to be continuously nurtured: there are bugs that must be addressed, gate issues that must be addressed, documentation that must be provided, and so on and so forth, and without an active maintainer we can't vouch for the quality of the project any longer. That's why, a while ago, we signaled this change in status by taking the project out of the Neutron governance. But again, if there are folks out there who are genuinely interested in helping the project going forward, especially, for instance, with addressing OpenStack community goals like Python 3 compliance, then we'd be more than happy to work with them to bring the project back within the Neutron management ecosystem. It's too early to have any kind of picture of what's going to be happening in Queens, so the pipeline is still open for proposals for Queens, definitely, and even for Pike for very small changes. So file a request for enhancement if you have any features that you want to see in Queens, or even in Pike; we discuss them in the drivers meeting to prioritize things. I don't know if you want to briefly mention what the process looks like there? Yeah: you go to Launchpad, file a bug against Neutron, add the RFE tag, put RFE in the title of your bug, and put in just a very basic use case.
Don't talk about how we should implement it; don't focus on implementation details. Just one paragraph: I want this feature, here's how a user is going to use it, here's how it's going to help an operator, that type of thing. Submit it, and then we discuss these RFEs weekly in the Neutron drivers meeting; if we need more details, we'll ask for them in the bug, and if you're willing to help with the implementation, even better. So, some things that have come up just in the recent meetings: cells support, so we'd have a scaling solution similar to what Nova has; distributed SNAT support for more network types; and multiple IPAM drivers loaded at the same time, kind of like an ML2-type thing, where you can have multiple IPAM drivers, and for one network you get IP addresses from one system while another network talks to a different one. Another one that's come up pretty recently is SSH host key retrieval, so Neutron can ask an instance directly, over a trusted network path, what its SSH host key is, and this would be retrievable via an API. So when people are initially connecting to an instance, they have a secure path to get the host key. Does anybody have any questions? If you do, come up to the microphone. We've got plenty of time, I think we've got like 30 minutes to open up to questions. So please. Thank you for your work.
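The host-key retrieval idea mentioned a moment ago amounts to a second, trusted channel for the same key. A hedged sketch of the client side: compare the fingerprint of the key fetched over the trusted API path against the key the SSH server actually presented at connect time. The retrieval API itself is hypothetical; only the fingerprint comparison is shown, in the OpenSSH SHA256 style.

```python
import base64
import hashlib

def fingerprint(host_key_b64):
    """OpenSSH-style SHA256 fingerprint of a base64-encoded host key."""
    raw = base64.b64decode(host_key_b64)
    digest = hashlib.sha256(raw).digest()
    # OpenSSH prints the digest base64-encoded, without '=' padding.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

def verify(key_from_api, key_from_ssh_server):
    """Trust the connection only if both paths report the same key."""
    return fingerprint(key_from_api) == fingerprint(key_from_ssh_server)

trusted = base64.b64encode(b"example-host-key-material").decode()
print(verify(trusted, trusted))  # same key on both paths: True
print(verify(trusted, base64.b64encode(b"attacker-key").decode()))  # False
```

Without such a channel, the first connection to a new instance is a blind trust-on-first-use decision; with it, a mismatch immediately signals a man-in-the-middle.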
There is a very long-standing request for Neutron to influence Nova scheduling decisions. For example, if you have two physical networks available on different hosts, and a tenant wants to use a specific network, we should be able to say: Nova, use this host, not any other. Right now every operator does this with some crazy magic around flavors, aggregates, and so on; there's no simple way for Neutron to say, no, this node can't serve this networking, please do not schedule here. As far as I remember, this has been an outstanding request since Havana, maybe even earlier, and I see no traces of Neutron-aware Nova scheduling. Routed networks was very similar to this, where we have a single network that's made up of multiple segments, and they're all on different physical networks. Miguel, actually sitting right next to you there, worked on some scheduling changes inside Nova. Do you want to give a quick update? So, for routed networks, we are right now able to send Nova the information it needs to take the scheduling decision, based on the segments that are part of a routed network; what the Nova team is working on this cycle is actually using that information in the scheduling process. To piggyback on Miguel's response there: there's a mechanism available in Nova, and it has been available for a long time, which is scheduling filters. You must realize that, from a Neutron standpoint, the project does know about the hosts: if you look at the compute nodes, there is some awareness of what you're running on a host in terms of agents and what the networking wiring looks like. You can then use a scheduling filter, which is a Nova concept, to influence the scheduling decision at the time you're trying to place a VM on a compute node.
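A scheduling filter of the kind just mentioned follows a simple contract: for each candidate host, decide pass or fail. Below is a self-contained sketch of that pattern; a real Nova filter subclasses `BaseHostFilter` and reads a host state object and request spec, and the physnet lookup here is invented for illustration.

```python
class PhysnetFilter:
    """Pass only hosts whose agents can wire the requested physical
    network, mimicking the host_passes() contract of Nova filters."""

    def host_passes(self, host_state, request_spec):
        wanted = request_spec.get("requested_physnet")
        if wanted is None:
            return True  # nothing network-specific was requested
        return wanted in host_state.get("reachable_physnets", set())

hosts = [
    {"name": "compute-1", "reachable_physnets": {"physnet1"}},
    {"name": "compute-2", "reachable_physnets": {"physnet2"}},
]
spec = {"requested_physnet": "physnet2"}

f = PhysnetFilter()
candidates = [h["name"] for h in hosts if f.host_passes(h, spec)]
print(candidates)  # only the host attached to physnet2 survives
```

The missing piece the questioner is asking about is the data flow: something has to populate the host-to-physnet mapping, today typically by querying the Neutron API from the filter or from out-of-band tooling.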
I don't know if that is something that you're looking at or had considered. Yes, and I have another question: if I want to write my own scheduler, where can I get Neutron information when I'm writing a Nova filter? You can hit the Neutron API to do that, for better or worse. AT&T Labs Research started a separate project called Valet, which includes a Nova filter-scheduler plugin and a Heat lifecycle plugin. What the Valet project does is parse the entire Heat template before any other system sees it, make decisions, and then, through the Nova filter-scheduler plugin, it is able to do very much what you're saying. Some of the features that may or may not exist at this point in time are things like bandwidth guarantees: one of the original objectives of the Valet project was to be able to support bandwidth reservation and to say, this particular hypervisor has only two 10-gig NICs in it, therefore it cannot support a collection of VMs that need an aggregate of 30 gigabits of committed throughput now.
It's not part of Neutron; it was a separate project, but that's something you might want to look into, because it was designed for that. And to answer your question with two points: I apologize if your request got abandoned without any type of feedback, but you should have at least seen feedback along these lines, namely that there are building blocks you can leverage in order to integrate this business logic into your system, and that if there are gaps you still perceive from the Neutron standpoint, there would be a point in addressing them. In fact, there was a request a while ago to expose an API that gives hints about IP consumption: some folks were writing a filter that looked at how many IPs were consumed in a subnet, and then influenced VM placement based on how many IPs were available. We basically ended up providing this API endpoint in Neutron, and it was queried by the scheduler filter at the time the VM placement was happening. So again, if you still have gaps of those sorts, we'd be happy to look at them, but as far as the building blocks are concerned, you can potentially do that today. We can provide information that can be used by the scheduler, but we can't actually influence the scheduler directly, and the scheduler itself has access to the Neutron client in order to go and poke at the Neutron API. I just wanted to add something on networking-odl. I'm not actually that involved with the networking-odl subproject, but there's a lot going on here at the summit, especially tomorrow, on something called the Nirvana stack, which is ODL plus VPP. VPP is the Vector Packet Processor; it's part of the FD.io organization, and what we hopefully will see is
It's part of the FD IO organization and what we hopefully will see is All of the all of the various neutron APIs that currently work with ODL programming open v-switch Hopefully we'll see moving towards parent feature parody with VPP as an alternate data plane and VPP is I mean competition is good I think there there are claims that VPP is much faster than open v-switch I have yet to see I have yet to see confirmed lab evidence that I believe on on on that But hypothetically, it's supposed to be much faster and more Adaptable towards programming into various advanced smartness. So let me stop you there Are there any questions about this because this is a time about questions, right and advertising I'm just saying it's good point should be a major Progress. Yeah in the ODL. Absolutely. Good point. And that obviously that's key opportunity to the audience to come up to the mic and ask a Little bit more about, you know, what things like over in the open. All right. There you go Sorry, there is another operational problem Which is IPv4 only when we have few external networks in our installation. We have no floating IPs no rotors External networks that shared with tenants and the problem is that we can't have too large networks because it's too much junk floating around But there is no way for all existing applications to give some non Simple way to say any network. Give me any network Nova can do this when there is one network, right, right. Okay That so that might be like get me a network. Yes. 
So, yeah, I don't know if you've heard of something known as get-me-a-network; it's a feature that was introduced. It sounds like something interesting, right? So yeah, you would want to check out a video that was put up online yesterday, because we had a presentation about this. It's a feature that was introduced in Mitaka for Neutron and completed on the Nova side in Newton, so today, if you get your hands on a Newton code base, you can give it a try. It's a feature that allows you to provision and assign networks to tenants in a seamless fashion: when you, as a user, are spinning up a VM, you don't have to specify what network you want; the system is smart enough to figure that out. But, as it stands right now, it creates a router, I was going to say, so it has certain limitations. At this summit we presented the feature, and we're discussing it this afternoon at 5:30. I know it's late, but we welcome you to come and bring us feedback as to how we can expand this feature. Yeah, because one of the alternate implementations for get-me-a-network could be to select from a pool of networks. Yeah, I have a patch I worked on last night. Okay. I'd say that, based on your description, get-me-a-network might be one of the solutions, but another thing that you might consider is routed networks. What I understood is that you have a huge network, and I'd say, okay, it looks like we have what you need, so just come and talk to us. We have this network available to our users, but nobody uses it, because there's no Horizon integration; I don't know if that's on the roadmap. Oh, okay. First of all, kudos to you for running Newton. But no, we don't have anything in the works. It should be just a matter of providing a checkbox in Horizon, so the implementation should be relatively trivial.
We should try and get Akihiro to work on it. Yes, but yeah, please do provide us that type of feedback on the Etherpad that targets that forum discussion I was mentioning earlier, this afternoon at 5:30. But yeah, at the moment, to answer your question, the use case that's targeted does kind of make sense. Good point. Any more questions? We've got two more minutes. If not, thanks for coming this early in the morning; go get your caffeine.