We're going to talk about Load Balancing as a Service (LBaaS), which is now part of the Neutron project. I'd like to start with what the service looked like in Grizzly, for those who are not very familiar with it. Since it was introduced in the Grizzly release, I'd like to show those nice elephants, and the list of features that we had in Grizzly. We got a couple of things that help the user play with the project: a REST API and the ability to operate it from the CLI. We had Horizon integration. The backend implementation was an HAProxy-based solution, which spawns HAProxy software load balancers on the network controllers. We also had DevStack support out of the box, so you could install it in a single-node installation.

That's how it looked from an architecture perspective. Users work with Horizon or with the CLI; the REST call goes into the LBaaS API extension and is processed by the load balancer plugin. The configuration is stored in the database, then the plugin communicates over RPC with the agent that resides on the network controller, and the agent manages the HAProxy processes.

That is how the data model looked in Grizzly. We have a few major objects, VIP, pool, member, and health monitor, and that's how they relate. One of the major limitations of this model is that you can have only one VIP per pool and only one pool per VIP.

That picture represents the wiring for the HAProxy implementation. The process listens on an IP address that belongs to the same subnet in which the members reside. So in order to make the VIP reachable from outside, you need to associate a floating IP with the port of the VIP. That is the additional step you need to perform.

The workflow is quite simple: you start by creating a pool, adding members to it, associating a VIP with the pool, and attaching additional objects such as health monitors and session persistence settings for the VIP. I'll show a short sketch of this workflow in a moment.

Then I'd like to talk about the major changes that we made in Havana. In the Grizzly release the whole thing was somewhat experimental; everything was hard-coded around the existing HAProxy implementation. The first thing we did was add multi-vendor support, which is the ability for the user to choose the actual implementation of the service they create. It's an additional extension that allows you to specify the provider at the point when you create the pool.

The second major change is that we created the driver API for vendor plugin drivers. A plugin driver is the piece of code that resides on the server side; it is responsible for storing additional data in the database and for communicating with the backend devices in its own vendor-specific way. For the existing implementation, the plugin driver communicates with the agent via RPC. Another vendor may have their own backend solution that manages VMs or hardware appliances, in which case an agent may not be needed. All of that creates a framework that helps adopt different architectures for interacting with the backend devices and implementations. I'll sketch the shape of that driver API below as well.

We also made a major improvement to the reference implementation, which I now prefer to call the HAProxy provider. Initially, in Grizzly, there was a big limitation: you could have only one HAProxy agent and one node running all the HAProxy processes. That was obviously not scalable.
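To make the workflow I described earlier concrete, here is a minimal sketch using python-neutronclient, as I recall the LBaaS v1 client calls; the credentials, subnet ID, and member addresses are placeholders, so treat it as illustrative rather than authoritative.

```python
# Minimal sketch of the LBaaS v1 workflow via python-neutronclient.
# SUBNET_ID, credentials, and member addresses are placeholders.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

SUBNET_ID = 'REPLACE-WITH-SUBNET-UUID'

# 1. Create a pool (with the Havana provider extension you can also
#    pass 'provider': '<vendor>' here to choose an implementation).
pool = neutron.create_pool({'pool': {
    'name': 'web-pool', 'protocol': 'HTTP',
    'lb_method': 'ROUND_ROBIN', 'subnet_id': SUBNET_ID}})['pool']

# 2. Add members to the pool.
for addr in ('10.0.0.4', '10.0.0.5'):
    neutron.create_member({'member': {
        'pool_id': pool['id'], 'address': addr, 'protocol_port': 80}})

# 3. Associate a VIP with the pool (one VIP per pool in this model).
vip = neutron.create_vip({'vip': {
    'name': 'web-vip', 'pool_id': pool['id'], 'protocol': 'HTTP',
    'protocol_port': 80, 'subnet_id': SUBNET_ID}})['vip']

# 4. Create a health monitor and attach it to the pool.
hm = neutron.create_health_monitor({'health_monitor': {
    'type': 'HTTP', 'delay': 5, 'timeout': 3,
    'max_retries': 3}})['health_monitor']
neutron.associate_health_monitor(pool['id'],
                                 {'health_monitor': {'id': hm['id']}})
```

Notice how the one-to-one limitation shows up here: create_vip takes exactly one pool_id, so a second VIP on the same pool is not possible in this model.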
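And here is roughly the shape of the plugin driver API I mentioned. The real abstract class lives in the Neutron tree; this is a simplified sketch of the idea, with illustrative method names, not the exact interface.

```python
# Simplified sketch of a vendor plugin driver for LBaaS. The methods
# mirror the CRUD operations of the API; how each one reaches the
# backend (RPC to an agent, REST to a management platform, or a
# direct device call) is entirely up to the vendor.
import abc


class LoadBalancerDriver(abc.ABC):
    """Server-side piece that talks to a vendor backend."""

    def __init__(self, plugin):
        # The plugin is used to update DB state, e.g. flipping a
        # resource's status to ACTIVE or ERROR after deployment.
        self.plugin = plugin

    @abc.abstractmethod
    def create_vip(self, context, vip):
        """Push the VIP configuration to the backend."""

    @abc.abstractmethod
    def create_pool(self, context, pool):
        """Push the pool configuration to the backend."""

    @abc.abstractmethod
    def create_member(self, context, member):
        """Add a member on the backend."""

    @abc.abstractmethod
    def stats(self, context, pool_id):
        """Return traffic statistics for a pool."""
```

The HAProxy provider implements these calls by sending RPC messages to its agents, while a driver for an external management platform would make REST calls instead; the plugin itself stays vendor-neutral.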
Back to the HAProxy provider: now we have the agent scheduling mechanism, which allows you to have any number of agents and any number of nodes running HAProxy. That solves the scalability issue. We also improved statistics retrieval: you can use it from the CLI, and we're able to use it from Horizon. And we changed how health monitors are handled, making it more natural.

That picture just shows that you can have several network controllers, or even dedicated HAProxy nodes, each running an LBaaS agent that manages HAProxy processes. When you create a pool, it is bound to some node in a random fashion, so you get a distribution of HAProxy balancers across the nodes.

This is how the overall picture looks right now. We added the service provider extension, which allows you to specify the provider at pool creation. And you have drivers inside the LBaaS plugin that use different communication patterns with the backend devices: some use an agent, some don't and instead use some external entity to reach the load balancers, and some reach the devices directly.

And probably the most interesting part: what do we plan for Icehouse? Late in the Havana cycle we experienced growing interest both from users and from vendors, asking for certain features that were not there. One of the most desired features was the ability to have multiple VIPs per pool, and also multiple pools per VIP. That is related to the notion of Layer 7 rules: configurable rules that let you analyze L7 traffic and direct it to the right group of nodes, the right pool. So the first two bullets represent the major data model change, which gets rid of the one-to-one mapping between VIP and pool and changes it to many-to-many: you can have multiple VIPs per pool and multiple pools per VIP.

Another thing, which is partly a consequence of the first two but also has its own reasons to be implemented, is the load balancer instance notion, which wasn't there and which we need. The load balancer instance is a new entity that will be the root object and the starting point of your workflow: instead of starting with pool creation, you start by creating a load balancer. There are a number of consequences. One is that you bind the load balancer instance, instead of the pools, to various objects: currently we bind pools to agents, to providers, to routers; in the new model you'll be working with the load balancer instance. From the user's perspective, by the way, it's just one additional step that creates this object, and all the binding happens behind the curtain. So you don't need to worry about all that; you can just think of a load balancer instance as a container for VIPs and pools. A sketch of this model follows below.

Another major change that we plan to implement is the vendor API extension framework. The existing extension framework is focused only on extending the core API, and in a fashion that requires you to put all of your vendor-specific code into a common space. That limitation slows down the development of vendor-specific features. We need vendors to be able to expose their specific features without reworking them to be generic, because when you make something generic, it's very hard to get it right, and it's also very hard to get consensus within the community on how it should be implemented.
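Here is a rough sketch of the proposed instance-rooted model. The class and field names are my own illustration, not the final Neutron schema.

```python
# Illustrative sketch of the Icehouse-era model: a load balancer
# instance as the root container, with a many-to-many VIP <-> pool
# relationship. Names are hypothetical, not the actual schema.
from dataclasses import dataclass, field


@dataclass
class Pool:
    name: str
    members: list = field(default_factory=list)


@dataclass
class Vip:
    address: str
    protocol_port: int
    # Many-to-many: one VIP can route to several pools (e.g. chosen
    # by L7 rules), and one pool can sit behind several VIPs.
    pools: list = field(default_factory=list)


@dataclass
class LoadBalancerInstance:
    """Root object: the thing bound to an agent, provider, or router."""
    name: str
    vips: list = field(default_factory=list)
    pools: list = field(default_factory=list)


# The workflow starts from the instance rather than from a pool:
lb = LoadBalancerInstance(name='web-lb')
static_pool = Pool(name='static')
api_pool = Pool(name='api')
vip = Vip(address='203.0.113.10', protocol_port=80,
          pools=[static_pool, api_pool])  # L7 rules pick between them
lb.vips.append(vip)
lb.pools.extend([static_pool, api_pool])
```

Without the instance as the root, the VIP-pool relationship forms a graph with no single anchor point, which is exactly the "configuration graph instead of configuration tree" problem I'll come back to.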
Back to the vendor extension framework: by creating it, vendors will be able to expose their specific features through their drivers, surfaced at the API layer. We also plan to implement particular features using that framework, such as device inventory: some vendors work with virtualized and hardware appliances, and they prefer to have an additional API where users or cloud operators can register new devices, maybe delete them, and have those devices participate in the distribution of the load balancer resources created by users.

Another very desired feature is SSL termination and SSL offloading. I know a few vendors are working on this part, and probably in Icehouse we'll see some solutions that actually implement it. Another important thing is routed load balancer insertion, where the VIP and the pool reside on different subnets and the VIP subnet can be an external network. You plug your VIP directly into the external network, and you don't need to assign a floating IP; it's already there.

Other features include Heat and Ceilometer integration. We're thinking about having more support on the Heat side; it has some basic support right now and we plan to improve it. We also have plans to integrate Ceilometer with LBaaS to measure and monitor things. Another important thing we would like to do in Icehouse, and which will probably become a requirement, is an integration testing suite that covers the basic user scenarios, against which all submitted code should be tested. That is a requirement both for existing code and for new code that is going to be submitted by vendors. It could probably also be used to verify that you have created a correct installation of the load balancer and set up your cloud right in the sense of load balancing.

This picture represents how the model change will look. If you take away the load balancer instance, you have a configuration graph instead of a configuration tree, which is harder to maintain in the code. The instance is something the end user is barely aware of, but it simplifies lots of things, and it probably also simplifies how you can think of your configuration: it's an instance, you can bind it to a device, you can bind it to a router; it's not just a bunch of, you know, VIPs and pools floating around.

And that is how the architecture will look if we introduce the vendor extension framework. Instead of a monolithic LBaaS extension, we'll have vendor extensions as well: additional calls that are implemented within the plugin, and if the plugin doesn't implement them, it finds the implementation within a driver. So there will be some kind of dispatching mechanism that goes through the drivers and checks whether they support the call; I'll sketch that dispatch right after this. In every other sense the architecture remains the same: we can have various architectures for communicating with the devices, so nothing new here.

I would also like to say which vendors we support right now and which vendors are actively working on introducing their drivers. We have the HAProxy provider, which is an open source implementation of the API that we have. Late in Havana, Radware committed their driver, which works with virtualized load balancers; it communicates with their external management platform from within the driver. We also have the Nicira implementation of LBaaS, which relies on the NSX platform. Those three implementations are already available.
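Before moving on to driver status, here is roughly what I mean by that dispatch, as a hypothetical sketch; none of these names come from the actual Neutron code.

```python
# Hypothetical sketch of vendor-extension dispatch: the plugin handles
# core API calls itself and routes unknown (vendor-specific) calls to
# whichever driver advertises them. All names are illustrative.
class LoadBalancerPlugin:
    def __init__(self, drivers):
        self.drivers = drivers  # maps provider name -> driver object

    def create_pool(self, context, pool):
        # Core call: hand off to the pool's chosen provider.
        return self.drivers[pool['provider']].create_pool(context, pool)

    def dispatch(self, method, context, **kwargs):
        if hasattr(self, method):             # core API call
            return getattr(self, method)(context, **kwargs)
        for driver in self.drivers.values():  # vendor extension call
            if hasattr(driver, method):
                return getattr(driver, method)(context, **kwargs)
        raise NotImplementedError("No driver implements %s" % method)


class AcmeDriver:
    """Imaginary vendor driver exposing an extension call."""
    def create_pool(self, context, pool):
        print("pool %s created on Acme backend" % pool['name'])

    def register_device(self, context, address):
        # Vendor-specific extension, e.g. device inventory.
        print("device %s registered" % address)


plugin = LoadBalancerPlugin({'acme': AcmeDriver()})
plugin.dispatch('create_pool', None,
                pool={'name': 'p1', 'provider': 'acme'})
plugin.dispatch('register_device', None, address='192.0.2.7')
```

The point is that a vendor-specific call travels through the plugin to exactly one driver, so the core API stays clean while each vendor can still surface its own features.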
As for maturity, the Radware and Nicira drivers are probably in some kind of experimental status right now, but both vendors are actively working on them. We also have three more companies that have already published their code for review and are actively working on getting their drivers included upstream. And we have a few more companies planning to do the same in the Icehouse cycle. So that, I would say, is a good variety of solutions that will be available to users.

That's pretty much it for the current status of LBaaS, and I would like to hear your feedback on it. Maybe you have some questions, maybe you have feedback from your experience with LBaaS; that would be good to know.

So, for the concept of the load balancer instance, how do you deal with the number of actual instances? A load balancer as a concept has to be multiple instances, because you want it to be highly available. How do you define how many actual instances are running a load balancer?

What do you mean by "how do I define"? You as a user create an instance, and you may create as many as you are allowed to, which could be limited by a quota. But what do you mean by "how do I define"?

You've introduced the concept of an instance to which all of the information is attached. But there's also the concept of an instance that's actually executing the load balancing, and I have to have multiple of those.

You mean the process itself? Yes. So if we're talking about processes, there will be a direct mapping between a load balancer instance and the process. Currently we have one process per pool, because the pool is the root object of the whole configuration, and that role moves to the instance object. So we'll have one process per instance, and one process will serve several VIPs and several pools, if we're speaking about HAProxy. If we're speaking about a virtualized or hardware implementation, the actual appliance may serve several instances, or it may serve just one; that is up to the vendor to decide.

Two questions. The first one: in the current HAProxy implementation, you're saying that the pool maps to an HAProxy process. Does that mean that the HAProxy process in itself is not highly available, and what are your thoughts on the future of high availability for the standard HAProxy implementation?

Yes, that's a good question. Currently there is no HA for HAProxy, and I know folks who are willing to help us with it. The current model makes it somewhat simple to implement, because you have HAProxy within the same subnet as the members and you attach a floating IP to it; if you have many of them, you can detect when one fails and reattach the floating IP. I'll sketch that idea in a moment. So with the existing implementation it is not too complex to implement, and probably, with some help from the community, we'll implement it in Icehouse. And the second question?

Any plans on multi-tier load balancing? Sorry, could you repeat? I believe we support load balancers running in front of real instances. So is there a limitation on the number of tiers of load balancers? For example, can we run the first tier with one load balancer and the second tier with two or more load balancers?

Yeah, why not? So there's no limitation on the number of tiers? I don't think we should have such a limitation, and why would we? It's just about how you wire the different networks together.
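To illustrate the floating IP reattachment idea from that answer: a minimal sketch, assuming python-neutronclient and a health check of your own. This is not a shipped Neutron feature, just hypothetical glue around the existing floating IP API.

```python
# Hypothetical active/standby failover for HAProxy: when the active
# process stops answering, repoint the floating IP at the standby
# HAProxy's VIP port. check_health() is assumed to exist elsewhere.
import time

from neutronclient.v2_0 import client


def failover_loop(neutron, fip_id, active_port, standby_port,
                  check_health, interval=5):
    while True:
        if not check_health(active_port):
            # Reassociate the floating IP with the standby's port.
            neutron.update_floatingip(
                fip_id, {'floatingip': {'port_id': standby_port}})
            active_port, standby_port = standby_port, active_port
        time.sleep(interval)


# Usage (credentials and IDs are placeholders):
# neutron = client.Client(username='admin', password='secret',
#                         tenant_name='demo',
#                         auth_url='http://controller:5000/v2.0')
# failover_loop(neutron, FIP_ID, PORT_A, PORT_B, my_health_check)
```

Since the VIP already lives on the member subnet in the HAProxy model, the only externally visible piece is the floating IP, which is why reattaching it is enough to fail over.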
On the tiers question: I doubt there's a reason to go beyond two layers of this, but I also don't see a reason to impose a limit. Any more questions? Or may I ask questions to you? Oh, okay, you got one. (Question inaudible.)

No, because that is something where I would expect help from the community. The Heat and Ceilometer integrations are great things, but the main focus will be on reaching feature parity, bringing LBaaS at least up to the basic feature set that vendors support; right now LBaaS supports less than that basic set, and we need to get to the same level. That's where our focus will be. If we get help with integrating Heat and Ceilometer, that will be great; otherwise I'll be saying the same thing at the next presentation.

Okay, so any questions left? Yeah, please. (Question inaudible.)

I think the primary contact point is the openstack-dev mailing list. I usually look through it and answer most of the emails regarding LBaaS, and I'm also coordinating the LBaaS subteam within the Neutron team. So if you have development questions, you can reach me directly or through the mailing list. Other questions?

Okay, then I'll probably ask a question of the audience. Maybe someone has a strong opinion on why the load balancer sucks in Neutron? Maybe you tried it and hated it? Anyone? You know, receiving criticism is a good thing, because we haven't been receiving much feedback and we would really like to. Yeah, that's good to know. Okay, okay, that's good to know. So I suppose that's okay.

Just going back to the original question: most people using Load Balancing as a Service expect it to provide high availability for a given application, and if the load balancer itself is not highly available, especially the reference implementation, then you're not highly available. I understand that some vendors may implement that, but I would expect the reference HAProxy implementation to be highly available in itself.

Yeah, I think we'll move this feature up the priority list. That's also good to know. Maybe a success story then, anyone? No? Not this release. Okay, maybe someone is expecting some particular vendor implementation of the load balancer? That would be interesting to hear. Okay then. We can spend another ten minutes being silent, or we can just go out, or we can continue with questions. What do you prefer? Thank you. Thank you.