All righty, good morning. Welcome to the update on OpenStack Networking, formerly Quantum. So, first thing I wanna do, how many members of the core development team are actually here? Stand up, raise your hand. Well, Yong and Dan are hiding on the floor. Anybody else here? You know, they're all coding? Good. My name is Mark McClain. I work for DreamHost, and I'm the incoming project technical lead. I wanna give a huge thanks to Dan, who's the outgoing PTL. Dan was the original PTL for OpenStack Networking, and has left some extremely large shoes to fill. So, thanks again, Dan.

Just real quick, if you're not really familiar with Quantum or OpenStack Networking: Quantum was really designed to solve several challenges in the cloud, including high-density multi-tenancy; VLANs can have trouble scaling, depending on your deployment. On-demand provisioning: traditional networking solutions have interfaces designed for manual configuration, but when you're doing on-demand, they don't work so well. You also need a way to move workloads around to where capacity exists, and the IP state is tied to a particular location. Tackling these challenges, the vendors have come up with several different solutions: network virtualization, pick your tunneling technology, other types of technologies, and question marks, because vendors are always coming up with new and innovative ways to solve these challenges.

So, what is OpenStack Networking, really? It's just a collection of APIs that allow you to access that networking technology, similar to how the Compute API allows you to interact with the underlying hypervisor, be it libvirt, KVM, or Xen. OpenStack Networking is the same thing. We have an Open vSwitch plugin, we have a plugin for Linux bridge, and we also have several vendor plugins. So, a real quick brief history of OpenStack Networking.
It was incubated during the Essex timeframe, and during Folsom it was integrated into the core, where we added resources for L2 networks, IP address management, and DHCP. In Grizzly, we got a lot accomplished: we closed 44 blueprints and fixed 386 bugs; some were bugs that existed before in Folsom, and some were bugs we created while we were building Grizzly. The team was really busy and accomplished a lot.

So, let's talk a little bit about what's new. One of the biggest things, and one of the biggest pain points in Folsom, was metadata. If you deployed metadata with overlapping IP address ranges, and a lot of folks will deploy using RFC 1918 space, you couldn't originally overlap them, or you had to go through a lot of hoops. So one of the things we did is simplify the configuration with overlapping IPs. You start up a metadata agent and it will work for you with very little configuration other than pointing it at Nova. It supports overlapping IPs, and it also supports non-routed networks in certain architectures; maybe your database tier still needs metadata services, but you don't want it routable to the outside internet.

We added security groups. The concept existed before in Nova, but we brought them into Quantum. They support overlapping IPs, which Nova did not before. They also handle VMs with multiple NICs, so you can apply security groups to different interfaces for different requirements. We added both ingress and egress rules; previously in Nova you only had ingress rules, and now you have egress as well. Also, IPv6 matching: one of the things we've been doing as we add features into Quantum is improving the IPv6 support. One of the benefits of adding security groups in OpenStack Networking is that the plugins can offload the processing. Previously, Nova did this via iptables; now, where the infrastructure exists, the plugin can apply the rules much higher up. Load balancing was another one of the big features that we added in Grizzly.
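To make the security group rules just described a bit more concrete, here's a rough sketch of the JSON body the Quantum v2.0 API takes for creating a rule (`POST /v2.0/security-group-rules`). The field names follow the v2.0 security group extension; the `build_rule` helper and the IDs are purely illustrative, not part of any client library.

```python
# Sketch: building the JSON body for POST /v2.0/security-group-rules.
# build_rule is a hypothetical helper; field names follow the v2.0 API.
def build_rule(security_group_id, direction, protocol=None,
               port=None, remote_ip_prefix=None, ethertype="IPv4"):
    rule = {
        "security_group_id": security_group_id,
        "direction": direction,    # "ingress" or "egress"
        "ethertype": ethertype,    # "IPv4" or "IPv6" matching
    }
    if protocol:
        rule["protocol"] = protocol
    if port:
        # single port: min and max of the range are the same
        rule["port_range_min"] = rule["port_range_max"] = port
    if remote_ip_prefix:
        rule["remote_ip_prefix"] = remote_ip_prefix
    return {"security_group_rule": rule}

# Example: allow inbound SSH from anywhere on a placeholder group ID.
ssh_in = build_rule("SG-UUID", "ingress", protocol="tcp",
                    port=22, remote_ip_prefix="0.0.0.0/0")
```

A second call with `direction="egress"` would express the outbound side, which is the part Nova's security groups couldn't do.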
What we developed is a load-balancing API model. A consortium of vendors and community members worked really hard during the cycle to develop this model. It's a pluggable framework: you have the API on the front end, but on the back end you have the ability for multiple vendors to support that API, and deployers can choose the solution which is appropriate for their deployment. Also, one of the things we did to make sure the API was usable, and to give the community a sense of how the service works, is we developed a reference implementation with HAProxy. One of the things you'll see, and we'll talk a little bit about it in a minute, is that we're gonna continue working on this.

We also added five new plugins during Grizzly: Big Switch, Brocade, Hyper-V, Midokura, and PLUMgrid. The slide says Plumgrid; it should say PLUMgrid, sorry. I apologize to Edgar, wherever he is. One of the cool things about adding five new plugins is that it really speaks to the vendor support of OpenStack Networking and the excitement around it, to have so many vendors wanting to support the framework. It's really cool to see that. So with the five new plugins, Quantum now has eleven plugins, which gives you a rich set of choices.

Another thing we worked on during Grizzly is improvements in Horizon. There's a team member who's core in both Horizon and Quantum who has worked to bridge and improve these coverage gaps. Some of the things we improved are the ability to manage routers within the Horizon interface, a graphical view of the topology, specifying multiple NICs when booting a VM, and load balancer control. I think other folks may have shown this, and you may have seen it elsewhere because it was in the keynote the other day, but it's the ability to visualize the network topology.
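To give a feel for the load-balancing model, here's a hedged sketch of the two request bodies behind the basic workflow: create a pool, then a VIP fronting it. Field names follow the Grizzly-era LBaaS extension (`POST /v2.0/lb/pools`, then `POST /v2.0/lb/vips`); the names and UUIDs are placeholders.

```python
# Illustrative JSON bodies for the Grizzly load-balancing extension.
# First request: create a pool of back-end members.
pool_body = {"pool": {
    "name": "web-pool",
    "protocol": "HTTP",
    "lb_method": "ROUND_ROBIN",   # scheduling algorithm
    "subnet_id": "SUBNET-UUID",   # subnet where the members live
}}

# Second request: create a VIP that front-ends the pool.
# pool_id comes back from the first call.
vip_body = {"vip": {
    "name": "web-vip",
    "protocol": "HTTP",
    "protocol_port": 80,          # port the VIP listens on
    "pool_id": "POOL-UUID",
    "subnet_id": "SUBNET-UUID",
}}
```

With the reference implementation, a request like this ends up driving an HAProxy process; a vendor plugin would program its own device or service instead, behind the same API.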
Sometimes when you're just creating things, pointing and clicking, or using the CLI, it's kind of hard to see how the networks are interconnected without building that mental model. This topology map really gives you a clear view, in one shot, to understand what's going on in a tenant.

The other thing that's new in Grizzly is you can select which NICs you're attaching to an instance. Previously in Folsom, if you booted an instance, it got every network you owned, in some order. Now you can actually choose which NICs get attached at boot, and you don't have to do discovery to figure out which networks are attached where.

Another feature we added is multiple network node support. In Folsom, you could have DHCP services and L3 services, but they all ran on one host for all tenants: not very scalable and not very fault tolerant. One of the things we've been working on is spreading that load out so you can have multiple network nodes. You can have DHCP services for tenants run on different nodes, so that you can limit faults and failures. The XML API: previously the Quantum API was JSON-only. We did a lot of work on XML support, so now we have XML and JSON with full parity. We've also added pagination support, so if you're pulling down large data sets, you don't have to get everything at once; you can actually page through it. We've also worked on a seamless upgrade path on the database from Folsom to Grizzly. Previously, from Essex to Folsom, because of the number of changes, there wasn't an upgrade path without doing some manual work, but one of the community projects was making sure that this was seamless.

So the real question is: what's gonna be in Havana? The Quantum team has been meeting for the last three days, and I'm gonna do a little bit of fortune telling, future prediction.
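Pagination in the v2.0 API works with `limit` and `marker` query parameters, where the marker is the ID of the last item on the previous page. As a small sketch of how a client might walk through networks page by page; `next_page_url` is a hypothetical helper, not a library function, and the endpoint and UUID are placeholders.

```python
# Sketch of marker-based pagination against GET /v2.0/networks.
# next_page_url is a hypothetical helper, not part of any client.
def next_page_url(base_url, limit, marker=None):
    url = "%s/v2.0/networks?limit=%d" % (base_url, limit)
    if marker:
        url += "&marker=%s" % marker  # ID of the last item already seen
    return url

# First page: no marker yet.
first = next_page_url("http://quantum:9696", 50)
# Next page: pass the ID of the last network from the previous page.
second = next_page_url("http://quantum:9696", 50, marker="NET-UUID")
```

The client keeps requesting pages, feeding each page's last ID back in as the marker, until a page comes back with fewer than `limit` items.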
The team's still working out some of the blueprints, and over the next two weeks or so we'll still be hammering out exactly what the roadmap's gonna look like, but there are a couple of themes if you looked over our project track for what's going to be in Havana. Services was a really big theme, and probably the biggest three were firewalling, load balancing, and VPN. Security groups give you one level of protection that allows you to protect the host, but firewalls really give you the ability to protect the network and apply security and filtering rules to the network traffic. So a group of members of the Quantum community came together, spent a lot of time before the summit hammering out what the model and APIs would look like, and has really gotten a jump start on getting this feature implemented in Havana.

Another one of the services, as I mentioned, is load balancing, which we worked on in Grizzly. During Havana we're going to keep working on it. There are a lot of different features that we can support and expose, so we're gonna continue working with the load-balancing vendors and the community to expand it and to integrate more plugins from the vendors, because as I said earlier, we only had a reference implementation with HAProxy. Speaking with several of the vendors, they've already been working on implementations to include for the OpenStack Networking load-balancing service.

And the last one is VPN. VPN is probably one of the trickier ones because there are so many deployment use cases, but as a community there needs to be a VPN story for OpenStack Networking, and so our team's working on starting that progress. So at the end of Havana, will we have a full-featured enterprise VPN? Probably not, but we'll have an implementation and API that the community can then begin to build around. Other features coming in Havana include improved IPv6 support.
Several companies in the community want to make IPv6 in Quantum a better experience, because they have very large IPv6 deployments and need Quantum to work seamlessly for their business cases and use cases; both private and public clouds have these. So one of the things you'll see, and we didn't have any sessions on it, is a community of vendors and folks working on the mailing list to make IPv6 a very pleasant experience after Havana.

Another one is improved bare metal support. You have folks who are doing OpenStack on OpenStack and using Quantum to help facilitate the networking, so you'll also see improvements for SR-IOV. There will be an updated client library, to make writing applications and tooling that interact with Quantum easier, and more vendor plugins. I've been in discussions with three or four vendors who've mentioned and asked how they can get their plugins in for Havana, so expect to see some announcements over the next couple of weeks and months.

A couple of community initiatives we're working on: database profiling at scale. Different deployments have different scale characteristics, so there's a focus within the community on making sure that the database queries are as efficient and run as fast as possible. Also improving testing, which at the end of the day results in a better release and makes our product better. And exploring Nova network migration paths. Right now there's no story if you deploy Nova network and want to upgrade to Quantum, so the community's looking into what those paths look like and how we can facilitate that, whether it's automated or whether there's a different story; we had discussions about that. Tied in with that, the Nova and Quantum teams are working together to improve the integration experience between the two. So lastly, I actually skipped that.
So lastly, I want to say that the Quantum community, like I said, has been growing, with vendors and plugins; we added five new plugins, but the developer team has also grown. At the end of the Folsom release, Quantum had about 100 developers who had contributed code. In the last six months, Quantum has grown to 150 developers, which means 50 new developers in six months. So we've grown our contributor base by half in the last project cycle, which is really exciting as a team lead: to see all the vendors who are contributing, the community members who are contributing, and the community growing. From that standpoint, it's exciting going into Havana, and I'm pretty excited about the things that are coming up. If you want more information on what's in Quantum, the manuals are up to date, and the teams will be working really hard on making sure the docs are there. You can go to the docs project and go to the OpenStack Networking section. Any questions? Yes.

Have you looked at load balancing deployment over hybrid clouds and those types of use cases? The question is about load balancing over hybrid clouds. Right now I don't believe the team has looked at that; that's something that we'll probably look at in a later release. What we're concentrating on now is making sure that we have a good load-balancing story in a single-cloud deployment, and part of that is exploring the space and making sure that the community and the vendors understand how it should interact. Thanks.

Will Quantum be the default network type in Havana, and will Nova network be supported? So the question is: will Quantum be the default network type in Havana, and will Nova network be supported? To answer your second question, Nova network will be supported for a few more releases at least. The teams met this morning to figure out how we can make it possible for OpenStack Networking to be the default network implementation in OpenStack.
And both teams are committed to making it happen; it's just a matter of getting the timelines worked out and making sure that the documentation and the supporting infrastructure around it are there. I mean, if you want to deploy Quantum today, it works. It's not to say don't. It's just that right now most of the community has been used to deploying Nova networking, so that's what we're working to update. Yes.

So this is just an observation on the way the Quantum API is going. The original API was really conceptually very simple, and it seems to be getting more and more complicated. An example of this is flow steering. Many of the things involved in flow steering seem like they would be more appropriate at the orchestration layer. I think some of this is made worse by the fact that people are using names for physical objects for the virtual ones. Think about it: in the early '90s with object-oriented programming, if people had said "structs with function pointers", there probably wouldn't have been much interest in object-oriented programming, but it caused a revolution in the way programming was done. And I think there's a potential for that to happen with networking, but if we call something a router or a firewall, I think maybe there's a problem. Anyway, I think there needs to be some thinking about how to simplify things, especially with things like flow steering and services, because if some of the functions from orchestration start creeping into the networking layer, I think it's gonna make it too complicated.

I would agree. One of the challenges of any API is trying to balance simplicity and complexity.
The other issue, and if you sat in any of the Quantum design summit sessions you'd have seen this, is there's always some difficulty in coming up with the appropriate terminology to represent what a logical device should be called, what it does, and then what the physical implementation of that device actually is, whether it's one device, two devices, or whatever happens on the back end. Some of those discussions really take a bit of hammering out. And I would agree that making networking simple and easy to deploy is one of the goals of the team. We actually held a session on that, trying to figure out ways we could make it easier to understand both from a tenant perspective and from a deployer perspective. Yes.

A sane default configuration. Yeah. So if you didn't hear what Dan said, one of the main outcomes of making Quantum simpler would be a sane default configuration: if you installed it and hit go, it would do the right thing. There are always gonna be a few parameters you have to tweak; you just can't get around it. You have to tell it where the database lives, for instance. Yes.

So what is Quantum gonna be called in the future? Its official name will always be OpenStack Networking. There's actually a session this afternoon to discuss how code names in OpenStack in general work, so I don't know that answer. Currently for Havana, there are no plans to support dynamic routing protocols; none of the community really made a big push for it. I do know there are folks who have privately been experimenting with ways to add it in, but nothing has bubbled up to the community level for a greater push. Yes. The goal with the services is kind of two-pronged.
One, we need to have an API that works and that the community understands, but we also need to have a reference implementation, using open source software, that the community can then use to test the API and discover how the service works. It also provides a test bed for the vendors, to make sure that their implementations are compliant with the API. That will apply to load balancing, firewalls, and VPNs; there needs to be an open source story for each of those. Kind of similarly, if you look at the core plugins of Quantum, we have a Linux bridge plugin and we have the Open vSwitch plugin, so there's an open source implementation of our API that you can test and try. All right, thanks. Oh, what?

So currently, if you define a network in a flat network deployment and it's shared, it's accessible to all tenants; if you spin up an instance, it should get a port on that network. Yeah, currently if you call Nova boot without any NIC association, you get a NIC on every network available to the tenant. Anything else? Thank you. Oh wait. The lights are blinding. Yes.

Is service chaining going to be in Havana? That's one of the items we're going to discuss as a core team: what that roadmap looks like. So when I said predicting the future is a little tricky, that's one of those items. I don't know whether it's going to be in Havana or something that's a longer-term initiative that maybe we iterate on over more than a single cycle. Now I think we're really done. Thank you.