I can't see him to know when we're on. Okay. Sorry for the delay, we had a cabling problem. So welcome to Load Balancing as a Service, Mitaka and Beyond. This is kind of an ongoing series we have, talking about what we've done in load balancing in the previous cycle and what we're going to be doing going forward into Newton. I'll introduce myself. I'm Michael Johnson. I'm with Hewlett Packard Enterprise. I'm the current PTL for the Octavia project. I'm Stephen Balukoff. I'm with Blue Box, an IBM Company, and I am one of the cores on the Octavia project. I'm Doug Fish with IBM. I am a core on the Horizon project. Great. So, quick agenda: we're just going to talk about who's involved and the recent OpenStack user survey. Then Doug's going to give us an overview of the Horizon dashboard work that we've done for LBaaS v2. Then Stephen will talk about the layer 7 rules and pool sharing that we've added. Then I'll come back and talk a little bit about what's new in Octavia and the roadmap there. And we'll open it up for questions and talk about the other summit sessions. So, who's involved? We're really lucky on our project. We have a lot of companies that are involved and participate actively in Load Balancing as a Service and Octavia. This is just the group of people I had from previous slides and know have been active and contributing, but there are many more. And we want to thank these companies for supporting the project. I also want to thank all of you that filled out the user survey. This is the slide that shows the most actively used, interested in, or planned-to-use features in Neutron, and as you can see, at the top of the list is software-based load balancing. So thank you for contributing to that survey. That helps us justify our activities on this project with our employers and continue to do good work. So with that, I'll pass it off to Doug. Thanks, Michael. Good afternoon, everyone.
So my team and I implemented the Neutron LBaaS dashboard for the Mitaka release. It's a plugin into Horizon. And the focus of our effort was to let you create a new load balancer, including the associated objects to actually do load balancing. So that would include associating a listener, creating a default pool, adding a health monitor to make sure that the pool members are still working, and then populating the pool with members. In addition to that initial creation, you can do some basic update and delete type operations. So you can go ahead and add or remove additional listeners. You can add and remove members from the default pool. You can remove or recreate the default pool, and you can make updates to the health monitoring strategy. There are a few things you might want to do with the UI that you can't yet. You can't yet do L7 load balancing, and the UI is really not suitable for monitoring the state of your pool members to make sure that they're responding to health checks appropriately. So with that, I'd like to go ahead and begin the demo. I've prepared this system by launching three instances to represent the workload that I want to balance, and I also created a floating IP because I'm eventually going to pretend to put my load balancer on a public network. So you can see here I'm logging in as a demo user. This is not something you have to be an administrator to do. And here I have my three instances I'm going to load balance, as you can see at the bottom. I'm going to navigate to the Project, Network, Load Balancer menu item that's available. I don't have any load balancers yet, so I'm going to go ahead and create one by clicking the create load balancer icon. Here you can see that I've got some basic details I need to provide. I'm just going to provide a subnet and place my load balancer on the private subnet. If I wanted to, I could specify a specific IP address for it or change the name. I'm going to go ahead and click next.
I'm going to choose the protocol for my listener. This is a simple HTTP application that's being load balanced, so I'll select that protocol. The port will be defaulted appropriately. I'll continue. Here I can specify the algorithm that's going to be used for the load balancing. For my application, round robin is suitable. I'll select that here and continue. And so here you can see I've got the three instances available from Nova. I'm going to go ahead and add the three of them to my pool. As you can see, the port is defaulted. The weight is defaulted. You might notice that the instances are still available for selection at the bottom of the screen. If I wanted to, I could add the same instance multiple times with different ports, if that was the strategy I was using for managing this, but I'm not. Also you can see that I can add an external IP address for balancing. This could be some server that doesn't necessarily belong to Nova. I could add that by specifying the IP address, its subnet, and a port. That would be easy, but I'm not going to do that as part of this demo. It's really not my scenario here. So you can see I'm back to my three instances that I want to load balance. I'll go ahead and click next. I'll specify the characteristics of my health monitor. I'm just going to select the monitor type of HTTP and maybe adjust the health check interval slightly. And that's it. I'm going to go ahead and create my load balancer at this point. So at this point, you can see some details of the load balancer that's been created. It would be reasonable at this point to maybe go ahead and do some testing and make sure that my load balancer is working, make sure all of my instances are participating in the load balancing. I am a developer, so I'm not going to do that kind of thing. I'm just going to go ahead and assign an IP address to my load balancer so it's available on my public subnet.
So here I'm going to go ahead and choose the associate floating IP action. I can specify either a specific address or just choose one from the pool. I'm going to pick a particular address in this case. And with that, my application would have a load balancer in front of it and an associated floating IP address. I'm going to go through here and click the load balancers object, and you can see what I've created here. It's online and active. We can click and see some of the details. It has a floating IP address assigned. We can link to some of the other panels and find out information about the subnet or the port that it's been created on. I can step in and take a look at the listeners that are specified on my load balancer. Here's the one that I created by default, and I can go out and find more detail about the listener. I can see the protocol that's being used, dig into the default pool, find the balancing algorithm to review it. Finally, I can take a look at the members from here and see that my members are being load balanced. So if you'd like to try this out yourself, we have a plugin into DevStack. So in your local.conf file, just go ahead and add either of the lines that I've specified here. It's worth noting that the level of the plugin needs to match the level of Horizon. Don't try to use the master level of neutron-lbaas-dashboard with a stable Mitaka Horizon. I don't think you'll get good results in the long term. With that, I'll turn it over to Stephen to talk about L7. Thanks, Doug. I'm going to try this thing. OK, so one of the new features that was released with the Mitaka release of OpenStack, in Neutron Load Balancing as a Service and in Octavia, is L7 content switching. And this particular diagram just shows sort of how things fit together. And I know that it's rather complicated, so we're going to get into it a little bit more here. But along with the L7 content switching, there's a new API that's associated with it.
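[Editor's note: the local.conf lines Doug refers to follow the standard DevStack `enable_plugin` syntax. A sketch (repository URL and branch name assumed, not shown on the slide):]

```shell
# In DevStack's local.conf -- pick the branch matching your Horizon level:
# master plugin against master Horizon:
enable_plugin neutron-lbaas-dashboard \
    https://git.openstack.org/openstack/neutron-lbaas-dashboard
# or pin to stable/mitaka to match a stable Mitaka Horizon:
enable_plugin neutron-lbaas-dashboard \
    https://git.openstack.org/openstack/neutron-lbaas-dashboard stable/mitaka
```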
And so the way that that works, let's see this. Go ahead. So why would you want to actually use L7 content switching? By the way, when I say L7, it's Layer 7. So when you think of the OSI model, Layer 7 is typically the application layer. So what does that allow you to do? Without Layer 7, by default, all the requests get routed to the listener's default pool. But sometimes, as your client's or the tenant's application grows, that's not always the right behavior. For example, you might have certain back end servers that are optimized for serving dynamic requests, like application server type stuff, and other ones that are optimized for serving static requests, that static information. And so in those particular cases, as your application grows, you don't necessarily want to have to deploy multiple load balancers and update all your URLs. Especially if you have a bunch of people linking from external sources and whatnot, you have basically a URL structure on your site that you don't really want to change. But it's not going to scale well without doing some routing based on information that gets embedded within the actual client request. And that's exactly what this new L7 functionality does. It allows the load balancer to look within the HTTP request, pick out certain things in it, and make a routing decision based on what it finds. This works for the HTTP and terminated HTTPS protocols only right now, because that's mostly what people are using load balancers for. We shot for solving the 90% use case with most of this stuff. And right now, again, we don't have a Horizon UI for this stuff, but we're hoping to be able to get that landed in Newton. So with the Layer 7 content stuff, you have a problem there: we need to define how you instruct the load balancer to make those routing decisions. Well, we have Layer 7 rules and Layer 7 policies. A Layer 7 rule is just a single statement of logic that gets matched against the client request.
And L7 rules evaluate to true or false. So examples of those might be: the request URL starts with /api; or the request has a cookie in it called client_group, and it is equal to the string group1; or the request header X-My-Header, which your application has instructed the client to send, matches some regular expression that you've decided on. All of that stuff is, again, dependent on the application. And we tried to make it flexible and hit, again, the 90% use case for what we see most people doing in other environments where they actually use Layer 7 content switching. And so anyway, there's quite a few different permutations that you can do on different rules. And those are documented. Right now, it's documented on the wiki. You'll see in the next few weeks that that documentation will end up within the Octavia and Neutron LBaaS projects. So that's Layer 7 rules. Now, there's also Layer 7 policies. And a Layer 7 policy is just a collection of Layer 7 rules. L7 policies get assigned to a listener, so that's sort of their parent object. And all Layer 7 rules on a given policy are logically ANDed together. So what does that mean? If you, for example, have a Layer 7 rule that says, I want the host name that's requested to match www.example.com, and that's one rule, and you have another rule that says the URI must start with /api, well, both of those rules must evaluate to true in order for that Layer 7 policy to get executed. And what I mean by executed is, the Layer 7 policy defines an action that will be taken if all of its rules evaluate to true. So most commonly, that's going to be a redirect to a back-end pool. So instead of going to the default pool for the listener, you're going to go to some other pool that you define. And again, you could do that in the simple case of /api goes to a pool of API servers, whereas everything else goes to static content servers.
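[Editor's note: the evaluation semantics Stephen describes — all rules on a policy must match, otherwise traffic falls through to the listener's default pool — can be sketched as a toy shell function. This is purely illustrative, not Octavia code; the host names, paths, and pool names mirror the examples in the talk.]

```shell
#!/bin/sh
# Toy sketch of L7 policy evaluation: rules within a policy are ANDed.
# policy1 on listener1 redirects to pool2 only when BOTH rules match:
#   rule A: requested host name equals www.example.com
#   rule B: request path starts with /api
route() {
  host="$1"; path="$2"
  if [ "$host" = "www.example.com" ] && [ "${path#/api}" != "$path" ]; then
    echo pool2      # all rules true -> the policy's action fires
  else
    echo pool1      # otherwise fall through to the listener's default pool
  fi
}

route www.example.com /api/v1    # -> pool2
route www.example.com /index     # -> pool1 (path rule fails)
route other.example.com /api     # -> pool1 (host rule fails, so the AND fails)
```

A logical OR, as described next in the talk, would just be a second policy with the same action (another `if` branch echoing pool2).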
If you need a logical OR to define your particular policy, the way we decided to do that was you just create multiple policies that have the same action. So for example, if I have a situation where I want www.example.com and www2.example.com to get routed to a specific back end pool, you can do that by defining two different policies, with rules for each one of those, which will then go to the same back end pool. Or you could just create a regular expression that matches both host names, which is a little bit more efficient for the back end software to do. It doesn't really matter; there's multiple ways to skin this cat. OK, so let's go ahead and do a demo. I was, oh, sorry, I was right, I need to talk about pool sharing. So pool sharing is relatively simple to understand. Here is a model of the hierarchy of a load balancer with a couple of listeners on it where pool sharing is not used. And here is one where pool sharing is used. What that means is two listeners are able to use the same back end pool. This is really handy, for example, if you have a TLS-terminated HTTPS listener on port 443 and a regular HTTP listener on port 80, and you want them to go to the same back end application pool. The reason why we concentrated on adding this feature in Mitaka is because this is also really handy for layer 7 policies that need to have the same action, because then you can point them all at the same pool, which makes it really handy if you need to add or remove members from that pool. It makes it much easier for the tenants to do that. OK, so now on to the demo. So in this demo, I'm going to create an HTTP listener, listener1, with a default pool, pool1, that contains server1 as its only member. I'm going to add an L7 policy and a rule which sends all requests which start with /api to pool2, which contains server2.
And then also, to demonstrate the shared pools function, I'm going to create listener2, which just uses pool2 as its default pool. So the setup I've done here, again, this is just using Mitaka DevStack, what's in there right now, using Neutron LBaaS with the Octavia driver. And before these slides, I launched two application servers on the private subnet with simple web servers that just respond with "you're on server one" or "you're on server two," so you can tell which pool you're reaching. And the security groups in this particular demo are set to be pretty open and liberal. Obviously, in production, you would do it slightly differently. OK, so here's the demo setup again. I've got two web servers. Server one is on 10.0.0.5 at port 80, and server two is on 10.0.0.6 at port 80. And you can see there they're server one and server two. And the first step, of course, is I need to create the load balancer. So here's the Neutron LBaaS command to do that. And it stays in pending create for a few moments while Octavia goes ahead and launches the amphora in Nova. You'll notice that the IP address for the load balancer is set to 10.0.0.7. That's going to become important as I show you how things work later on. And then from there, we're going to go ahead and create a listener on top of that. And let's see, we just put it on top of port 80 on load balancer lb1. Then here you can see I've gone ahead and curled the URL for the listener, and you get a 503 Service Unavailable, because it has no back-end pool and therefore no back-end members. So there are no servers in the back end. And this is an expected response when you have a load balancer listener that does not have a back-end pool. The next thing is we're going to go ahead and create pool1 and make it listener1's default pool. And you can see here I went ahead and did that. You go ahead and curl it again, but there's no back-end servers yet.
So you're still going to get an error message that says there's no service available yet. I'm going to go ahead and add member one. And lo and behold, look, we're now talking to server one. OK, so we've got a standard load balancer set up. Let's go ahead and do the next one. We're going to go ahead and create pool2 on load balancer one, but we're not going to associate it with any listener. And that's a new change here with Mitaka; you can go ahead and do that. So we create this pool. It only exists as a logical object in the database until you actually associate it with some listener. And you can see here on pool2, I'm going to go ahead and create member two. And the curl is kind of cheating here. I'm actually just hitting member two directly. There is no way to actually access it right now through the load balancer, because it's not associated with any listener. There's no way to get to pool2 yet. And so let's go ahead and create an L7 policy called policy1, and we're going to create it on listener1. And you can see here the action is to redirect to a pool, and the redirect pool I want to go to is pool2. And we're going to associate it with listener1, and we're naming it policy1. Then we're going to go ahead and add a rule onto that policy. And the rule is going to be of type PATH; we're looking at the actual path of the URI. The compare type is STARTS_WITH. And the value that we're going to compare against is /api. So the rule is: we want to evaluate any incoming request's path and make sure that it starts with /api. And if so, then this rule evaluates to true. OK. So let's go ahead and, oh yeah, and then for our shared pools demo, we're going to create a second listener, and we're going to set pool2 as its default pool. Pretty easy to do, and you can see here I've gone ahead and curled it. So there we go.
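[Editor's note: collected in one place, the demo steps map roughly to the following Mitaka-era neutron CLI calls. This is a sketch from memory, not the slides' exact text — option spellings, the listener2 port (8080 here), and the `--default-pool` flag should be checked against your client version.]

```shell
# Create the load balancer, listener1, pool1, and member1 (10.0.0.5)
neutron lbaas-loadbalancer-create --name lb1 private-subnet
neutron lbaas-listener-create --name listener1 --loadbalancer lb1 \
    --protocol HTTP --protocol-port 80
neutron lbaas-pool-create --name pool1 --listener listener1 \
    --protocol HTTP --lb-algorithm ROUND_ROBIN
neutron lbaas-member-create --subnet private-subnet \
    --address 10.0.0.5 --protocol-port 80 pool1

# pool2 is created against the load balancer, not a listener (new in Mitaka)
neutron lbaas-pool-create --name pool2 --loadbalancer lb1 \
    --protocol HTTP --lb-algorithm ROUND_ROBIN
neutron lbaas-member-create --subnet private-subnet \
    --address 10.0.0.6 --protocol-port 80 pool2

# L7 policy + rule: any path starting with /api goes to pool2
neutron lbaas-l7policy-create --name policy1 --listener listener1 \
    --action REDIRECT_TO_POOL --redirect-pool pool2
neutron lbaas-l7rule-create --type PATH --compare-type STARTS_WITH \
    --value /api policy1

# Shared pools: listener2 reuses pool2 as its default pool
neutron lbaas-listener-create --name listener2 --loadbalancer lb1 \
    --protocol HTTP --protocol-port 8080 --default-pool pool2
```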
We've set up listener1 with an L7 policy and a rule which should send any URI that starts with /api to the second pool, and anything else should go to the first pool. So let's see if it works. Look, it works. So I'll just run through the curl commands here real quick. The first one just hits the default URI. You're on server one, therefore pool1. I go ahead and hit the second listener, and you're on server two, so pool2. The second listener is going to pool2. Then we go ahead and hit the first listener again on /api, and look, that's going to pool2 as well. And again, it evaluates /api with other stuff in the path. Well, it started with /api, so the rule is true; it went ahead to pool2. But the other one, without that /api, doesn't match the rule, so we're going back to pool1. And then let's see. Yeah, so that's essentially how it works. Pretty simple there. I know that L7 sounds pretty daunting, but when you get right down to it, this hits about 95% of the use cases we typically see in production. And there you go. I'm going to go ahead and hand it back to Michael now. Continue. Thank you. So what you've seen are new features that are available in Neutron LBaaS. And that also means that they're available in Octavia, the reference driver for Neutron LBaaS. So now I'm going to shift gears a little bit and talk about where we are in Octavia. So just a quick review: Octavia is a load balancing driver that plugs into the Neutron LBaaS version 2 API. Just like other hardware drivers that are present in Neutron LBaaS, Octavia is just a software implementation that uses service VMs. So we have four main processes: our API, which the Neutron LBaaS driver plug-in talks to to interact with Octavia, and our worker, which does the provisioning and the workflow of managing the amphorae.
We use the term amphora instead of service VM because we expect there will be a container implementation of an amphora, or even a bare metal implementation. All of these components you see at the bottom in blue are driver-based, so we can plug alternate technologies into Octavia. For example, with containers, we can swap out that driver and not use service VMs, but use a container implementation. Same thing on networking. The health manager component is tasked with collecting status and statistics, and also monitoring and managing the health of those amphorae. So should an amphora fail for some reason, it will spawn another one. Or, as we're about to see, we have added active standby. I'll talk about that in a minute. And then housekeeping, this is a periodic job process. It does things like clean up the database. If you have the spares pool enabled, which means you have amphorae pre-booted, just waiting to be configured, it'll maintain that spares pool. And another new feature we have is it will manage and rotate the security certificates we use to manage the amphorae. I'll talk about that shortly. So what's new in Octavia? One of the biggest features is we have active standby now. I demoed this in Tokyo, but it is now fully merged and fully functional. So this allows us to spin up two amphorae for each load balancer, a primary and a secondary. And should the primary fail, it will move that IP and transition to the secondary in seconds. It's configurable. It uses a VRRP implementation. Again, I demoed that in Tokyo. You can go look at the YouTube video if you want to see that in action. Since Tokyo, we've added anti-affinity. So we're leveraging Nova's filter capabilities to make sure that those two amphorae are not on the same host, which would defeat the purpose of having your active and standby. So if you have that available in your environment and you have enough compute hosts, turn on the anti-affinity capability.
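[Editor's note: the anti-affinity capability Michael mentions is a configuration switch. In octavia.conf it looks roughly like the following — option name from memory, so verify against your release; it also relies on Nova's server-group anti-affinity filter being enabled.]

```ini
[nova]
# Ask Nova to schedule the two amphorae of an active/standby pair
# onto different compute hosts (requires the ServerGroup
# anti-affinity filter in the Nova scheduler).
enable_anti_affinity = True
```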
And that will guarantee that those amphora instances are on different hosts, or however Nova defines affinity. We've also added the capability to do failover for those active and standby amphorae. So let's say your primary fails completely out. You've migrated all your traffic to your secondary, and we maintain session persistence between those, so your customer will see a very short interruption. But now you've got that dead amphora sitting there. Well, we also leverage that housekeeping component to rebuild that, fail over that amphora, and bring back in a replacement. So you go back into an active standby situation. That's really nice for resiliency. And the last bullet: when we do that, we don't preempt the new master. We don't want to interrupt the customer flows and give them a short blip in their traffic. So the other node that became the master will remain the master until it fails, and then it would switch back to that new backup. So pretty good stuff with active standby for availability. As I mentioned, the housekeeping component: when we interact with our amphorae, we have an agent that sits inside those service VMs, and we use a REST API to control them. That is all encrypted using TLS. What we put in is a component that lets you set an expiration period. Each amphora gets a unique certificate issued to it, and when one is coming up on its expiration, we'll go ahead and automatically reissue and update that certificate on that amphora for you. So you can meet your compliance requirements for having expiring certificates on all your components. Layer 7 rules, Stephen just went through that. Great stuff. I put in the different policy types, so reject, redirect to pool, redirect to URL, and then the different rule types that we can do there. That was another great addition. It's in Neutron LBaaS, but it's also implemented in the Octavia driver now. We had aspirations to do single-call actions.
So being able to create a whole load balancer top to bottom, the load balancer, the listener, the pool, the members, everything in one API call. And along those lines, we wanted to have cascading delete, so you could say, I want to delete this load balancer, cascade, and delete all of the resources that are associated with it. This was something that the UI folks were really looking for. We got really far along on that, but we ended up running right up against the Mitaka deadline. So those are still works in progress. The code is up there for review and should merge in Newton-1. Another pretty cool feature: the amphora image is a service VM image, and we store that image in Glance today. If you needed to do an update to that image, a patch or a new version, you could modify that in the config file, and then you'd need to restart your controllers, and from then on, the amphorae would pick that up. If you needed to update in-use amphorae, you could do a failover procedure, and then you would have updated load balancer images. Well, we had a contribution in Mitaka that allows us to use Glance tags. So instead of having to restart the controller and go and edit your config file, you can now load a new image into Glance and move the tag that says amphora, or whatever you put in your config file, from your old image to your new image. From that point on, Octavia will automatically start using your new image when it boots amphorae. So a much simpler maintenance process is available for you. We've also started doing more hardening and making sure that we have our security story up to snuff. So we've added Bandit as a check gate. All of our check-ins go through the Bandit static security analysis tool. You'll probably see a couple other talks about that tool at the summit. But it's a great thing.
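[Editor's note: going back to the Glance-tag image rotation Michael described, the maintenance step amounts to moving a tag between images. A sketch with the glance v2 CLI — image names here are made up, and the tag value must match the tag configured in your octavia.conf:]

```shell
# Upload the replacement amphora image
glance image-create --name amphora-haproxy-new --disk-format qcow2 \
    --container-format bare --file amphora-haproxy-new.qcow2
# Move the tag: add it to the new image, remove it from the old one
glance image-tag-update NEW_IMAGE_ID amphora
glance image-tag-delete OLD_IMAGE_ID amphora
# From this point on, Octavia boots new amphorae from the newly tagged
# image -- no controller restart or config edit required.
```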
Every check-in will go through static security analysis, and we can block or evaluate the risk of that change and make sure that we're not introducing new security vulnerabilities, SQL injection or whatever. And coming really soon, this was another thing that just didn't quite make it into Mitaka: the HAProxy that we're running inside the amphora will be running inside a network namespace. So we're going to completely isolate it from the other networks that are associated with that amphora. That code has merged on master for Newton, and we'll be backporting that and releasing a new version of Octavia in the next couple of weeks. That'll be 0.8.1. So that's all the new stuff in Octavia. Let's talk about the roadmap. So this is the roadmap I shared in Tokyo. We had aspirations, again, for active standby, high availability control plane, layer 7 rules, container support, and flavor framework support for the Mitaka cycle. We didn't quite make it all. So I'll go ahead and move forward. This is what we accomplished. We got active standby in, complete, and layer 7. But for the other components, there are some pieces that are not quite ready yet. So active-active is still actually in spec. We're still working on that and evolving it. There's actually a talk tomorrow where they're going to go in depth on the current proposal and where we're at on that. The horizontal scale is another piece of active-active. That's where you elastically grow and shrink the number of amphorae that are serving a given load balancer. The container support, we did get a good start on. There are some patches up, particularly around the networking components and the compute components, to work with containers. But there were a number of challenges that came into that. Particularly, Octavia does a lot of hot plugging of our network ports. So when you add a member from a tenant network, we typically hot plug that network into the amphora.
Well, with containers, that hot plugging is not quite there yet. So there's a number of challenges; that is a work in progress. Flavor framework: we did add the capability to do Neutron LBaaS flavors. But right now, it's pretty much tied to the provider, so it's very similar to the current provider functionality. And that's where you can say, I want my load balancer created with the Octavia driver, or maybe the legacy namespace HAProxy driver. So flavors have gotten that far. Unfortunately, in Octavia, we've not pulled the metadata down in, where you could say flavor gold is active standby and flavor bronze is a standalone load balancer. But that's something we want to do. The high availability control plane: you can run all of those Octavia component processes on multiple instances today. What we're talking about here when we say high availability control plane is once an action starts, so let's say a create load balancer call comes in. Right now, if that controller completely fails, somebody powers off the server or something while it's in the middle of creating that load balancer, it's just going to stop and be stuck in a pending-create or error state. What we want to do is pull in some of the jobboard components from TaskFlow and be able to have an alternate controller pick up that creation process that's in flight and continue that work on another controller, should a controller go down. So that's the aspirational piece that we didn't get done. And then, of course, I mentioned the single-call actions. We're really close on those. So again, you're welcome to try Octavia yourself. It is integrated with DevStack. Pretty easy to spin up. Just enable the services and update your localrc. So we are looking for contributors, of course. We're a semi-small team, but we do have a good core group of people that contribute. But we can always use more, particularly in testing and documentation.
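[Editor's note: the DevStack setup Michael mentions amounts to a couple of local.conf lines. A sketch — the plugin URL and service names follow the Octavia DevStack plugin conventions, so check them against the version you deploy:]

```shell
# In DevStack's local.conf:
enable_plugin octavia https://git.openstack.org/openstack/octavia
# Neutron LBaaS v2 plus the four Octavia processes:
# API (o-api), controller worker (o-cw), health manager (o-hm),
# and housekeeping (o-hk)
enable_service q-lbaasv2 octavia o-api o-cw o-hm o-hk
```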
And if any of these features that are aspirational sound exciting to you, please come see us. Join our IRC meetings. Join our channel. There are people on all the time. So, other related sessions. Tomorrow, there's a session on Heat. Heat has added templates for LBaaS v2, so you can now create Heat templates to deploy a Neutron LBaaS v2 load balancer. And the deep dive into elastic load balancing, that's what I was mentioning earlier with the active-active. They're giving a talk about where they're at on that and kind of the strategy. There's two hands-on labs. One is writing AngularJS plug-ins for Horizon with our friend Doug here. And there's also going to be an install-and-configure Octavia session. Both of those are RSVP; you do need to sign up for those. You can't just show up and join, because they are hands-on. There will be programming involved and some prep. And then finally, the design summit. We actually just had that this afternoon, where we were talking about the future of the advanced services in Neutron. Any questions from the audience? So, we don't distribute binary images. The DevStack component will build an image, and there are scripts included in the Octavia repo that use the TripleO diskimage-builder to create an image for you. But we don't distribute a cooked image anywhere. Right, you would need to build new images as the updates come out, based on which OS you're using for your amphora image. The default is Ubuntu. There's also Fedora support in that script. There's a couple of options. Any other questions? Yes. In Tokyo, we were able to post the slides on the summit schedule. I think they'll open that up after the summit's over. So you'll be able to go back to the OpenStack website, find the session on the schedule, and you will see the slides available to download there. Sir? Hi there.
In the physical server load balancer environment, there's typically the concept of having SSL traffic come into the VIP and then go through a decryption appliance before it goes to the servers, to take the burden off the CPU of the servers. Is there any thought of creating such a function within this framework? We do support TLS offload. So SSL offload is supported on Neutron LBaaS and Octavia both. We use Barbican to store the certificates and the secure content, if you will. And in fact, that exact concern is exactly why Octavia takes the architecture that it does, where the idea is to launch many different amphorae so you can horizontally scale, managing that CPU load, because TLS offload is the thing that hits the CPU the hardest. Yes. Hello. Any plans to support the UDP protocol? At this time, we don't really have plans for it. If you have a need and have some developer resources to throw at it, we can certainly take a look. But right now, it doesn't support UDP. Right, we have a plug-in model for the amphora and for the load balancing component in it. So if you have a use case and you're motivated, it's pretty straightforward to add an alternate technology from HAProxy to be your primary load balancing component inside the amphora. So that's certainly a capability if you have the need. We just don't have it today. Hello. Is there anything in IPv6 that is not supported, or is IPv6 supported at all? We have some bugs. The framework is there to do IPv6 all the way through. And in fact, you could do translation. So you could have v4 on the front of your load balancer and v6 on the back, or vice versa, because we are a full proxy. But recently, I started looking into that and playing around with it, and I found some bugs. So we will be fixing those in Newton. But the framework's there; all of our technology does support it. We have a couple of gotchas.
We tried to architect it so that it was completely IPv6 compatible. We'll just have some bugs. Yeah. Other questions? All right, thank you.