All right, I'd like to welcome everybody to the session. We're going to talk about subnet pools and pluggable IPAM. Not the most exciting topic, but some of you are definitely interested in what we're trying to do in terms of enhancing Neutron's capability to allocate subnets and IP addresses. My name is John Voss. I'm a product manager at Infoblox. I'm going to give you a quick overview of what we're doing, what has been added to Neutron in terms of subnet pools and pluggable IPAM, and what we plan for the Liberty cycle. Then I'll pass it over to Carl Baldwin, who'll talk specifically about subnet pools, and John Belamaric will end with details around pluggable IPAM. So just to get a show of hands, how many here are developers? A few developers. How many are operators? OK, about a 50-50 mix. So I think this topic touches both ends of the spectrum. So why is IP address management important for Neutron? Really, you can't connect anything to any network unless you have an IP address. This goes for virtual machines; it goes for any port that is created or managed through OpenStack. So if you have shared networks, or you have routable networks with provider networks between tenants, it's really critical that you manage your IP space effectively so that you don't have IP address collisions. And this becomes especially critical as your cloud scales. You can imagine how painful it would be to have to go and re-IP hundreds if not thousands of VMs because you've run out of subnets and have to completely change your IP space. In addition, as you start to scale clouds across multiple different platforms, being able to allocate subnets automatically for tenants is really something that a lot of the users we've talked to have been asking for. So rather than letting the tenant guess what network they can use, you allocate a set of pools that they can pick from, and then automate that entire process.
And then lastly, when we're talking about heterogeneous environments where you have, say, physical infrastructure, virtual infrastructure, and more complex networks, then you really have to have a good view of what IP addresses have been allocated, what subnets belong to both the physical and virtual environments, and then have the ability to reclaim IP addresses so that you don't chew up your IP address space too quickly. For a lot of users that we've talked to, what will happen is that they'll start allocating subnets to particular tenants or particular environments before they realize that they're completely out of the set of subnets they have for the routable IP space. So to drill down into the enhancements for Neutron in terms of IP address management: in Kilo, there's a new feature around dynamic subnet allocation. Carl's gonna go into this in more detail, but it basically allows you to have a pool of subnets that can be allocated on a per-tenant basis. And then we've been working on pluggable IPAM, which is the ability to have either user-developed or third-party external IP address management systems integrated into Neutron, delegating the assignment of addresses to that external system. This really opens up external centralized management of IP addresses as well as subnets. So if you have a large-scale environment across your enterprise, you can use your own custom logic to allocate IP addresses or subnets and manage this centrally. We were hoping to get this into Kilo, which didn't quite happen, but it is on track for delivery in Liberty. So with that, I'll go ahead and turn it over to Carl to talk to you a little bit more about subnet pools. Okay. So I'm Carl Baldwin, I work for HP. I've been playing around with Neutron for a couple of years now. And since I started, when you go to create a subnet, you've got to give it the details.
And I sort of imagine a tenant going, okay, I need to create a subnet, going off to his pad and paper where he's recorded all the subnets he's used, and trying to calculate one that's available and big enough. It's just an extra step. It creates some complication for users to have to provide those details. Also, I've taken an interest in IPv6 in Neutron during the last cycle. It's really gotten serious; IPv6 is working pretty well with the Kilo release. And with IPv6, address management is a little different than with IPv4 in Neutron. With IPv4, you kind of make up your own addressing. You use whatever you want. You grab some RFC 1918 space, or really whatever you want to bring to the cloud. And we make that okay, because when we route externally, we NAT it all. As long as you pick something that won't create a black hole for you on the internet, you're okay. But with IPv6, we don't have that. Really what we want is addresses that are globally unique, and we want them to be routable straight out to the external network. And that means that bring-your-own-addressing is a real problem now. So the first step to addressing this is to create an address pool. Either an admin or a tenant creates this pool of addresses that can be carved up, and gives it to Neutron to manage, so that now you don't have to go to your pad and paper and do your calculations. You just tell it how big you want the subnet. Also, in working with external IPAM systems, I noticed that they don't just give out IP addresses. They also manage subnets. And in order to integrate that with Neutron, we need something more. We need a way to request the subnet from something that's externally managed. So we designed this in a way that we have a reference implementation that you can use today in Kilo. That reference implementation was developed by Ryan Tidwell, the tests were developed by Zhang Fagiao, and it was delivered in Kilo, so you can use it.
But we also developed the internal interfaces necessary to be able to delegate this to an external system through the public, pluggable IPAM interface. Let's see if I missed anything here. So essentially, to use subnet pools, you have to first set one up, and I'll run through a demo. I'm too chicken to run through a live demo in a venue like this, so I ran through the demo on my laptop using DevStack and took screenshots. The first thing is that you need someone to create a pool. There are two ways you can do this. A tenant can create a private pool. This may be useful for current IPv4 deployments: if you wanna quit managing your IP addresses on your pad and paper, you can use these as a tenant. The second way is that an admin can create a shared pool. I purposely followed the model of the external network in Neutron. For years, any network in Neutron could be marked shared or not. If it's shared, the owner is still the owner, but a shared network can be reached by any other tenant in the cloud. I use the same model, and in Liberty there are plans to add more fine-grained role-based access control, and I plan to leverage that work to allow some finer-grained control over subnet pools. But for now, what we have is shared. So the admin creates this shared subnet pool, and this works for IPv4 as well as IPv6. If you were in my talk yesterday, you saw a condensed version of the same demo but with IPv6 addressing. So we create a simple little pool here. The important parts are the pool prefix and the name, demo subnet pool, that tenants are gonna use to refer to this pool. Up here on the left, we have admin creating the pool, and down here we have the DevStack demo user listing the pools, and now you can see one for IPv6 and one for IPv4. Moving on, now that the pool's created, we wanna use it, and that's simple.
We've added a subnet pool attribute to the subnet-create API, and currently the Python neutron client is able to use this. Normally, at the end of a subnet-create, you would put the CIDR for the subnet you want. We take that off and we say --subnet-pool, the name, and the prefix length we want. And when the subnet is created, it's automatically allocated an address. Down here, I was just playing around with trying to fool it into giving me something I shouldn't be able to get, and I get the appropriate errors. You can also use a subnet pool and pick the addresses you want. It allows that, as long as it's available and as long as it's in the pool. It's a little bit different than plain Neutron, where you can request any IP that's outside of the allocation ranges and it'll give it to you as long as it hasn't been assigned to something else. Subnet pools are a little different: when you request something specific, you have to request something in the pool, and it has to be available. And if it's available, it'll give you what you request. Now, if you noticed, at the beginning I started with a pretty small pool, a /24, and I allocated two /25s from it, so we filled it up. So what do you do then? We also implemented the ability to update the pool. So here we've exhausted the pool, and as admin right here I'm gonna run a pool update. I did it wrong the first time; the second time, I included every prefix in the pool, and you'll notice that the prefixes are disjoint. There's no reason they have to be contiguous, and with IPv4 that's important. It gets harder and harder to find contiguous space. Let's see, another thing I wanted to show: the allocation strategy for subnet allocation in the reference implementation tries to compact the space and use it as efficiently as possible. So when you request a particular prefix size, it's gonna look at the entire pool.
It's gonna try to find you the first subnet, numerically, that fits the size you're requesting. This helps keep things from getting more fragmented than they need to be. And that's it for the demo. It's a pretty cool feature. It's a first step in getting the address space under control so that we can do more with it, and we'll talk about what the future holds later on in the talk. I'll pass it on to you, John, for pluggable IPAM. All right, hello, I'm John Belamaric from Infoblox. As John and Carl have mentioned, as part of the IPAM enhancements we added the subnet pools feature, which Carl and his team did, and the other aspect that we really wanted to tackle is how to integrate with external or third-party IPAM systems. In order to achieve that, essentially, we have to go in and refactor the way that the management layer within Neutron allocates IP addresses and subnets. Today, in Kilo and earlier, it's embedded within what's called the DB plugin, and all the logic for that IP address management is directly in there. In Liberty, what we've been working on is extracting that logic and creating a new driver-based architecture for IPAM. That means there'll be a reference driver; Salvatore has actually been the lead on building that reference driver, and it will be functionally equivalent to what's currently there. But with the IPAM abstraction, or interface, that Carl mentioned earlier, the DB plugin will now call into that abstraction in order to allocate subnets, delete subnets, allocate IP addresses, et cetera. This abstraction enables us, of course, to swap out the reference driver for other drivers, and those could be drivers that still work locally and maintain things in the local database, or they can be drivers that make calls to external systems. So one of the questions we've gotten about this a lot is: how does this fit in with the agents?
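Before moving on, here is a rough sketch of the two subnet-pool behaviors Carl described, using only the Python standard library. The class and method names are illustrative, not Neutron's actual code, and the real reference implementation tracks all of this in the Neutron database rather than an in-memory free list:

```python
import ipaddress

class SubnetPool:
    """Toy model of the subnet-pool behavior described in the talk:
    a specific request must be inside the pool and still available,
    and a prefix-length request is served from the smallest free
    block that fits (lowest-numbered among equals), to keep the
    space compact."""

    def __init__(self, prefixes):
        self.free = [ipaddress.ip_network(p) for p in prefixes]

    def request_specific(self, cidr):
        req = ipaddress.ip_network(cidr)
        for block in self.free:
            if req.subnet_of(block):
                self._carve(block, req)
                return req
        raise ValueError("requested subnet not in pool or not available")

    def request_prefixlen(self, prefixlen):
        # Smallest free block first, then lowest numerically.
        for block in sorted(self.free,
                            key=lambda b: (-b.prefixlen,
                                           int(b.network_address))):
            if block.prefixlen <= prefixlen:
                chosen = (block if block.prefixlen == prefixlen else
                          next(block.subnets(new_prefix=prefixlen)))
                self._carve(block, chosen)
                return chosen
        raise ValueError("pool exhausted for /%d" % prefixlen)

    def _carve(self, block, chosen):
        # Remove the chosen subnet; keep the remainder as free space.
        self.free.remove(block)
        if chosen != block:
            self.free.extend(block.address_exclude(chosen))

pool = SubnetPool(["10.0.0.0/24", "10.1.0.0/16"])
print(pool.request_prefixlen(25))   # carved from the /24, not the /16
```

Note the sort key in `request_prefixlen`: preferring the smallest free block that fits is what keeps small requests from fragmenting the larger contiguous blocks.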
What happens to the DHCP agent in this case? What I'm trying to show here in this diagram is the flow of what happens when a user makes a request for a particular subnet creation or IP address. The API will call into the plugin, which in turn calls into the DB plugin. Previously, in Kilo and earlier, the DB plugin would just do that logic, store the data in the Neutron database, and pass it back. The pieces in yellow are, of course, the new pieces in Liberty that abstract that out, so that the DB plugin calls out to the driver, which may in turn optionally call an external IPAM system. It returns the IP address, and from there the flow is essentially exactly what it was in prior releases. So from the point of view of the agents, nothing has really changed. It's just that the decision is made by a different part of the management layer. One other thing I wanted to mention: the way the IPAM driver is set up today, there's one for the entire installation in Liberty, but the driver configuration is intended to eventually be on a per-subnet-pool basis. So to integrate with subnet pools, you could in that case have a different allocation strategy for different subnet pools. That gets us a little bit into some of the future ideas we've been talking about, and these are really, I don't know if Carl wants to talk about some of these, these are really just things we've been discussing, and we are looking for feedback from the operator community in particular on what you need, what use cases make sense, and how we wanna bring this stuff in. But if you wanna... Yeah, sure, I'll talk about address scopes. We had a lot of discussions about the abstraction that we wanted to create for pluggable IPAM, and also about adding subnet pools and the ability to delegate these subnet pools to an external system.
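As a sketch of what that driver abstraction looks like in spirit (the method names here are hypothetical, not the exact Liberty interface), a driver only has to answer allocation calls from the DB plugin; whether it consults a local table or an external IPAM system is its own business:

```python
import abc
import ipaddress

class IPAMDriver(abc.ABC):
    """Illustrative shape of a pluggable IPAM driver; method names
    are hypothetical. The DB plugin calls this abstraction instead
    of embedding the allocation logic itself."""

    @abc.abstractmethod
    def allocate_ip(self, subnet): ...

    @abc.abstractmethod
    def deallocate_ip(self, subnet, ip): ...

class InMemoryDriver(IPAMDriver):
    """Stand-in for a local (reference-style) driver: hands out the
    lowest free host address in the subnet, tracked in a local set.
    An external driver would make the same decision via REST calls
    to an IPAM server instead."""

    def __init__(self):
        self.allocated = set()

    def allocate_ip(self, subnet):
        for host in ipaddress.ip_network(subnet).hosts():
            if host not in self.allocated:
                self.allocated.add(host)
                return str(host)
        raise RuntimeError("subnet exhausted")

    def deallocate_ip(self, subnet, ip):
        # Reclaiming addresses keeps the space from being chewed up.
        self.allocated.discard(ipaddress.ip_address(ip))

driver = InMemoryDriver()
print(driver.allocate_ip("192.0.2.0/29"))   # 192.0.2.1
print(driver.allocate_ip("192.0.2.0/29"))   # 192.0.2.2
```

Swapping in a driver that calls out to an external IPAM system leaves the DB plugin, and everything downstream of it such as the DHCP agent, completely unchanged.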
And the idea of address scopes came up as part of that discussion. The question came up: well, is a subnet pool an address scope? What's an address scope good for, and what's a subnet pool good for? We finally decided that subnet pools and address scopes are slightly different ideas. They're not quite the same, but they're related. An address scope is a new thing that I would like to develop for Liberty, and it will relate to subnet pools. In fact, you could have any number of subnet pools under an address scope, but an address scope goes a little further toward addressing the problem of getting addresses under control in Neutron. What I mean by that is, up until now, everything's been bring-your-own-address. You've always got to know your addressing in order to create any subnet in Neutron. That kind of worked with the IPv4 external network, as I've mentioned, but with IPv6 coming up, and also a little bit with IPv4 to allow a slightly different usage model, bring-your-own-addressing doesn't really work. Right now with Neutron, you can do things like take a router and plug it into two subnets that have the same addresses, and obviously that's not gonna work. It'll let you do it; there's no error, nothing there prevents you from doing it, but it doesn't work. And then there's another problem when you look at IPv6 and the external network model, and I talked about this yesterday a little bit.
If you look at that, there is no NAT. There's nothing providing a hard boundary between your internal addressing, which tenants have provided on their own when they created their subnets, and the external addressing. What happens is everything just gets routed out to the default gateway, whether or not we know it can come back, and whether we know it's a valid, globally routed subnet that should belong to that tenant, or just something they made up. This is a step toward giving Neutron the model it needs to distinguish between different kinds of addresses and where they come from. So the IPv6 external network model will use a subnet pool, so that we know where an address came from, we know that it's unique, we know that it's actually intended to be routed externally, and we can actually route it back and complete the routing circuit. And then, looking forward toward things like bringing L3 VPNs and BGP VPNs into the Neutron cloud, we need to be able to understand that there are different routing domains that need to be kept separate. So address scopes will not just be a group of subnet pools; Neutron routers will understand address scopes and be able to distinguish between different routing domains. I think that's essentially the idea: to build into the Neutron model some concept of which of these subnets actually should be able to be routed together. So essentially, it's an address space where you're guaranteed that any given IP address is unique, any given subnet is unique. It's a sort of generalization of the model that's there today, such that the Neutron router, the Neutron management layer, can say: okay, you're trying to connect these two address scopes, therefore we need to do some kind of address translation between the two.
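Since address scopes were still future work at the time of this talk, the following is only a toy model of the guarantee being described, with purely illustrative names: allocations within one scope never overlap, so subnets in the same scope can be routed together, while a router joining subnets from two different scopes would need some form of translation:

```python
import ipaddress

class AddressScope:
    """Toy model of the proposed address scope: a scope can contain
    any number of subnet pools, and every allocation made inside one
    scope is guaranteed non-overlapping, so those subnets can be
    routed together without NAT."""

    def __init__(self, name):
        self.name = name
        self.allocations = []

    def add_allocation(self, cidr):
        net = ipaddress.ip_network(cidr)
        if any(net.overlaps(existing) for existing in self.allocations):
            raise ValueError("overlap within scope %s" % self.name)
        self.allocations.append(net)
        return net

def needs_translation(scope_a, scope_b):
    # A router joining subnets from two different scopes cannot
    # assume addresses are unique across them, so some kind of
    # address translation would be required.
    return scope_a is not scope_b

internal = AddressScope("tenant-internal")
public = AddressScope("provider-public")
internal.add_allocation("10.0.0.0/24")
print(needs_translation(internal, public))   # True
```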
It also gives you a place to tie in your route targets, essentially. It may be a mechanism we use to decide when you're gonna advertise routes. It's clear that we're not gonna advertise routes from Neutron in anything but BGP, because we need all the extended communities and everything. So this gives us a way to define which of those subnets would get tagged with which community. Did you want to talk to the second bullet here? The second bullet is something I mentioned before: in the current implementation, a single driver will be used, but there can be cases where a given subnet pool may need a different allocation strategy, or, if we don't do address scopes, it could be tied in the backend system into a different address scope, and you could add that to the pool configuration as you attach to the external system. So I think that's it. Yeah, so maybe we can just open it up for questions. We have a microphone if you wanna step over; otherwise, I can repeat the question for you. So the question is: with subnet pools, are you able to restrict the prefix length? Yes, there is a minimum and a maximum, and there's also, I forgot to mention it, but it was buried in one of the slides, a quota mechanism for subnet pools. It works a little differently than a normal quota mechanism in Neutron, because it's hard to count addresses, especially in IPv6. With IPv4, the quota system counts /32s. So if you request a /26, that counts as 64 toward your quota. In IPv6, we count /64s, because it's just not practical to count individual addresses. So you're not allowed to exceed your quota, basically, yes. And yes, prefix lengths can be different for different subnet pools. In fact, there's a default prefix length for each subnet pool, so you don't even need to specify a prefix length.
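The quota accounting just described is simple arithmetic; here is a minimal sketch (the function name is ours, not Neutron's): an IPv4 allocation is counted in /32 units and an IPv6 allocation in /64 units, since counting individual v6 addresses is impractical.

```python
def quota_units(prefixlen, version):
    """Quota units consumed by one subnet allocation, as described
    in the talk: IPv4 pools count /32s, IPv6 pools count /64s.
    (Illustrative helper, not Neutron's actual quota code.)"""
    base = 32 if version == 4 else 64
    if prefixlen > base:
        raise ValueError("prefix length /%d too long for counting" % prefixlen)
    return 2 ** (base - prefixlen)

print(quota_units(26, 4))   # a /26 counts as 64 toward an IPv4 quota
print(quota_units(64, 6))   # a /64 counts as 1 toward an IPv6 quota
```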
It'll just give you a subnet of the default size; that's per pool. That's very useful with IPv6, because really, with a /64, you never need anything bigger, and Neutron doesn't allow anything smaller because of stateless address autoconfiguration. Looks like we've got a line up here. Hi, so I'm excited to see this feature, but I haven't used any of the Infoblox products, and in my experience, IPAM systems don't appear to be written with very, very large scale in mind. So I guess my question is: do you expose enough information that, say, if I provision 6,000 instances, I can keep it from knocking over my IPAM? Is there any way to batch those requests? That's a good question. I was just thinking about that last night. I don't think we offered, in the abstracted interface, a way to request multiple addresses at once, but I think that's probably something we should look at doing, because of exactly this kind of situation. I believe our system does allow you to request an arbitrary number of addresses at once, and I'm sure that some of our competitors do too, for that reason, so thank you. So I will first start with a comment: I would like to say thank you for advancing this topic. It's really useful. My question is about the first-fit part. In fact, it splits into v4 and v6. I'll give you a practical example. You're an operator, you have a /23 that you give to your customers, and then afterwards you say, okay, I don't have enough IPs, I'll just add a /24. And one of your large customers wants a /25. So basically you have one /24, one /23, and you allocate the first /25 for your customer. And another one asks for a /25. So would first fit be filling the /24, or continuing on the /23, et cetera? That's my first question, and I will directly ask a question for v6.
So for example, you have a /48 that you give to your whole set of customers, and you have subnet pools; a customer asks for a /64. Afterwards, another customer asks for a /64. How will the /64s be arranged in the /48 space? What my goal is here: if at some point the same customer, customer A, asks for another /64, it would be nice to have the two /64s for the same customer adjacent to each other, and the other customer's allocations more spread out. So there are algorithms for that, and I would like to know if the first fit is that kind of algorithm. So it's essentially first fit. Right now, in the current reference implementation, I can't speak to Infoblox, but there's no regard to tenant at the moment. So it's an interesting use case, and I would like to give it some thought, but currently it's first fit, and you get the lowest subnet numerically that will fit. In your first example, with /23s and /24s and /25s: you've added that /23 and the /24, which are probably disjoint, and when your customer asks for the /25, if there is a contiguous /25 available within the /23, it will allocate that, as long as it's the lowest numerically. Smaller subnets will be allocated from the smallest bit of space they can be allocated from, so as to not prevent allocating larger subnets later, if at all possible. Okay, thank you. Yeah, I think what you illustrate, though, is one of the reasons for pluggable IPAM, right? It's not just for third-party systems; it's also for, well, look, I've got some allocation logic that I want to include that's not in the reference. I can take the reference, fork it, make a few tweaks, and do whatever I want. Yeah, I mean, it sounds like you kind of want to introduce an affinity concept for IP address or subnet allocation.
So one of the things we haven't talked about here is that there are also use cases for using metadata as one of the decision points for how to allocate subnets or IP addresses. So you might want to think about that as a way to introduce that kind of allocation logic. For the IPv6 use case, as you were describing yesterday: between multiple tenants with different routers, each using one Neutron router, where do you expect the routing logic to live? How does one tenant's router know where the address block is? How is the address block for the other tenant actually reachable? Right, that was a big topic that I discussed yesterday with Sean Collins in the L3 IPv6 talk. And it's a hole right now in Kilo. We did talk about using prefix delegation, so delegating completely to an external system to allocate the subnets, and then it's up to that system to set up routing hops to actually complete that. So do you actually expect that to program a Neutron router, to tell it where the other subnet is? No, no, it would be hierarchical. One tenant's router is an edge router, right? It just says: I have my subnets, everything else is upstream. And then the system doing the prefix delegation has the bigger view of routing, and it can route back down. Okay, thank you. Yeah. I had a question about provider routers. In the examples you showed, I didn't see a way where the pool is tied to an external network. Would it be possible for somebody to create a pool and attach it to a network where it can't get out, basically? So the upstream router is set up a certain way, but then the tenant creates a pool that's on the wrong physical external network. Right, that's a good observation. There is no explicit link between subnet pools and external networks.
But with address scopes, we may actually add that explicit link, so that when tenants request a particular external network and want addresses that work on that external network, they automatically get the right subnet pool for it. If they choose the wrong one today, it just won't work; it won't route correctly. It'll try, but you won't get the complete routing. With address scopes added in Liberty, it will actually prevent those from routing across those domains. Right, and I think there's also a more general issue here: there's no route advertisement at all from Neutron right now. So even as you allocate these subnets, if it's on a provider network, unless you've set your router up appropriately, you're never gonna get anything to them. Right. All right, any other questions or comments? Yeah, without having all the details fleshed out, that's kind of the direction this is going, yeah. Yes. And again, that's future work that we would like to tackle, to go in a direction like that. Any other questions? Oh, go ahead. I have one question, a clarification on first fit. In a highly dynamic environment where I'm allocating /24s and /27s, it would seem like I could run out of my larger blocks quickly if you're not finding the best fit first. So is there any consideration of a multi-pass algorithm where you find the best fit first and then gradually increase the size, so you're looking for a /27 first, then a /26, then a /25? Yeah, I didn't give enough detail earlier. It does actually first sort the free blocks by size and then numerically, so it does do a best fit. Okay, thanks. All right, anyone else? So, we have some examples and demos of pluggable IPAM at the Infoblox booth, and I invite you to come by if you're interested in finding out more and seeing a demo. Thank you. Thank you.