Okay, everybody, welcome to the next talk in the technical deep dive track. As you can see, this is one of the more popular talks today; it was actually the top-voted talk in the entire tech deep dive track, so this is undoubtedly a very interesting topic for everyone. Please give it up for Ilya and Eugene, on load balancing as a service.

Okay, so, today. My name is Ilya, as I was introduced, and I am one of the developers who worked on load balancing as a service in the Grizzly release. I work at Mirantis, and on our team is also Eugene Nikanorov, from Mirantis, who is one of the main contributors from our company to this part of Quantum.

Before we start, I would like to thank the whole community, all the people who worked on load balancing as a service and who made a large effort to make it happen in the Grizzly release.

Today we will briefly recall what existed before load balancing in Grizzly. Then we will take a look at what we developed in Grizzly: what features, what architecture, and how to use all of this. Then we will take a look at the future, because, as you all know, there will be more and more services in Quantum; load balancing is the first one, and with it we are essentially proving some of the concepts behind services.

Before Grizzly there were actually two standalone services that could work as load balancers. The first was Atlas. It was stable, with a long history, and it actually worked in some real deployments. The reason it did not become part of OpenStack is that it is written in Java. The second one was the Equilibrium project, which was originally written in Python, but it did not work with Quantum networking. When we started working on Grizzly, we took the Equilibrium project as a base, rewrote much of the code, and made it part of Quantum.

Okay, so now let's take a look at what load balancing as a service actually is. First of all, it has its own REST API, and the REST API allows you to manipulate all four models that exist in the load balancing extension: pools, VIPs, health monitors, and members. I will describe what they mean a bit later; for now this is just an overview.

It also has a CLI, and the CLI is part of the Quantum CLI, the command that runs when you type quantum; it works the same as the other command-line commands. It also has a nice UI in Horizon, and everything that can be done through the REST API or through the CLI can be done through Horizon as well.

And the last line on the slide: it is supported in DevStack out of the box. When you install a new version of DevStack, it has everything set up to help you try load balancing as a service right away. You actually need to modify just one line in the localrc, which is what you would usually do for any modification to the default configuration: just enable Quantum LBaaS, and it will be installed, and Horizon and all the other components will be ready to be started and running.
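As a concrete sketch of that DevStack step (this assumes the Grizzly-era service name q-lbaas; check the DevStack documentation for your release):

    # localrc for DevStack (Grizzly era), minimal sketch:
    # enable the Quantum LBaaS service alongside the usual Quantum services
    ENABLED_SERVICES+=,q-lbaas

    # then run DevStack as usual
    ./stack.sh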
Okay, and so all of these things are, in a way, an envelope, and behind them, under the engine, there is an HAProxy running as a host process on your network controller. Our implementation, which is available in trunk and in DevStack, is actually a reference implementation; it is the first plugin for load balancing, and this plugin uses HAProxy. HAProxy is a very well known, free, open source software load balancer.

From the load balancing perspective, there are the following features. These features are part of the model, part of the API, and they are implemented in our default plugin. So, certainly, you can balance load between services, where the services are virtual machines. There are configurable load balancing methods, such as round robin, which means that all machines, one by one, in order, get the same number of requests. There is source IP persistence, which means that all requests coming from the same IP go to the same destination. There is also session persistence, which allows you to hold a session: in HTTP all requests are independent, but if two similar requests come in the same session from the same origin, or with the same cookie, they will go to the same back-end server. We have health monitoring, which allows the load balancer to check whether the back-end services are alive or not; by default we have TCP, which just checks for a connection, and HTTP, which sends a GET request. And we have a connection limit: it is a built-in feature of HAProxy, and we also introduced it into the API, so it allows you to limit bandwidth and limit the number of connections.

Okay, from the architecture point of view, here is how it looks. Before Grizzly, Quantum was essentially a set of APIs. It has one part called the core plugin, which implements all the level 2 networking, some plugins for level 3, and some separate extensions. The core plugin is how your network is implemented; it is the implementation of what is now called the core API. In Grizzly we introduced service plugins. For the core API it is possible to load only one core plugin at a time, but for service plugins it is possible to load one service plugin per service type. So we introduced the service type called load balancing, and we have our load balancing plugin, which implements the API. In the future, when we have more services and more service types, there will be more service plugins.

Okay, that is the architecture, and here is how it is implemented in the reference implementation of the load balancing plugin. The blue part is Quantum. On top is the extension, which listens to REST requests and validates them. Under that layer is the plugin, or rather the DB plugin, which does the processing of data and the persistence layer. On the bottom we have the agent; it polls the plugin at some interval, gets the whole configuration, and reconfigures the HAProxy processes.
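To make the service plugin part concrete, enabling the load balancing service plugin in Grizzly amounts to one line in quantum.conf. The class path below is a sketch of how the Grizzly reference tree was laid out, so verify it against your actual release:

    # /etc/quantum/quantum.conf, minimal sketch (Grizzly era)
    [DEFAULT]
    # load the LBaaS service plugin alongside the core plugin
    service_plugins = quantum.plugins.services.agent_loadbalancer.plugin.LoadBalancerPlugin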
Maybe I will give a bit more detail here. The agent that manages the HAProxy processes is really similar to what the DHCP agent does. Probably some of you know that Grizzly introduced a new feature, agent scheduling, where you can actually schedule your router and network requests to different hosts. That is not yet the case for the load balancing agent, so all HAProxy processes are still running on the same host; it is, in effect, a dedicated load balancing host.

Yeah, and one more word about how it handles networking. We use network namespaces, and that is why it is possible to have overlapping networks. For each VIP there is a separate HAProxy process running in its own namespace, which is connected to the pool's subnet. That means the HAProxy process actually gets its own IP address within the tenant network; it is L2-adjacent to the members.

And a word should be said about how you actually access the VIP; we will show the model in a moment. The VIP has an IP address on the tenant network, so you would ask how to access it. Currently you need to associate a floating IP with the port that is occupied by this HAProxy process. So it is a separate step you need to take in order to actually get the load balancing functionality from outside.
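As an illustration of that separate step, here is a sketch using the Grizzly quantum CLI (the IDs are placeholders):

    # find the port that the VIP occupies: note the port_id field
    quantum lb-vip-show <vip-id>

    # allocate a floating IP on the external network and attach it to that port
    quantum floatingip-create <external-network-id>
    quantum floatingip-associate <floatingip-id> <vip-port-id>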
Okay, and now a bit more detail about the object model and about the wiring that Eugene just mentioned. We actually have four models in our load balancing object model. The first one is the VIP, the virtual IP, and this is the front-end part of the load balancer: it has an IP address, it has a port, and it is linked to some subnet. The second part is the pool. The pool is just a placeholder, but it allows you to specify the location of the members, so the pool may be on a different subnet. In our current reference implementation both subnets must be equal, so it is not possible to have the VIP on a different subnet than the pool, but in the future this will certainly be possible. The member is just a representation of the interface of an application running on a virtual machine, so it has an IP address and a port. And we have health monitors. They are standalone objects, so they can be created and shared between several pools, and one pool can be associated with many health monitors. For example, it is possible to configure one health monitor for TCP, which makes a quick check of the health of the service, and a second health monitor for HTTP, so that if the first check succeeds, the second check verifies that the application is actually up and running.

Okay, and from the point of view of how it is wired and how the networking works, here is a schema. Since our load balancer runs on the host, and since we have one load balancer per VIP, it essentially stands inside the tenant network, and to make it accessible from outside we need a floating IP. So, for example, in this case the dashed line shows how the traffic goes from outside, from the provider network, to the load balancer, and then to virtual machine B, in the case where virtual machine B is selected by the load balancer. It is the floating IP that is associated with this port of the load balancer. Is that right, Eugene? Yes, yes, that's right.

Okay, now the workflow. The workflow has four steps. First, create a pool. We need to start with this step because the pool is the placeholder for members; it is the root object of the load balancer object model. At this point we need to provide the load balancing method and the subnet for the pool. Then we create the members, one member per virtual machine, or per service. Then we create the VIP, and for the VIP we can specify session persistence and, at the moment, a fixed IP address. If you are not planning for external traffic to reach the load balancer, it can be enough to have just the fixed IP, so it would be balancing within the network. And the optional step is to create health monitors and associate them. From the API point of view, since a health monitor is a separate object, shared between different pools, we need a separate step to create the health monitor and then to associate it.
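Here is that four-step workflow as a sketch in the Grizzly quantum CLI (names, IDs, and addresses are placeholders; check quantum help lb-pool-create and friends for the exact options in your release):

    # 1. create the pool: the root object, with a balancing method and a subnet
    quantum lb-pool-create --name web-pool --lb-method ROUND_ROBIN \
        --protocol HTTP --subnet-id <subnet-id>

    # 2. create one member per virtual machine; members can also be
    #    added or removed later, while the load balancer is running
    quantum lb-member-create --address 10.0.0.11 --protocol-port 80 web-pool
    quantum lb-member-create --address 10.0.0.12 --protocol-port 80 web-pool

    # 3. create the VIP, the front end, on the same subnet as the pool
    quantum lb-vip-create --name web-vip --protocol HTTP --protocol-port 80 \
        --subnet-id <subnet-id> web-pool

    # 4. (optional) create a health monitor and associate it with the pool
    quantum lb-healthmonitor-create --type HTTP --delay 5 --timeout 3 --max-retries 3
    quantum lb-healthmonitor-associate <healthmonitor-id> web-pool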
From the UI it is the same steps. In Horizon we have the load balancing features in the menu, at the very bottom, and we have similar forms for the pool, the member, the VIP, and the health monitor. I will not go into detail, because it is pretty easy, and you can try it yourself in DevStack.

Okay, so if there are any questions regarding the current model, now is the time to ask them, before we proceed with what comes next. Can you use the microphone right next to you?

[Audience] I'm trying to understand: in the case where tunneling is used on the network, where does the encap and decap happen? On the load balancer, or on the VMs behind it in the pool?

Yeah, so that is applicable to this as well. Well, yes, that's correct. Okay.

[Audience question about performance and HA.] The question was about performance and HA for large deployments. No, we don't support HA right now, and I am not sure that we will in Havana, because it is a really advanced feature, and we actually need to support the basic features in Havana first. As for performance, you may use HAProxy for load balancing not only HTTP; it can load balance TCP as well.

[Inaudible audience exchange.] Yes, yes. Yeah. Okay, let's do one question at a time. I see, and yes, you're right. Okay, so let's proceed with the next part, and we will certainly have a Q&A section at the end.

Okay, so for the next version we have the following features, which I think are simply mandatory to have in production. The first is support for different types of load balancers: it could be hardware load balancers or software load balancers, and this will be done via a driver API. The second is a device inventory, a place where we can store information about these appliances. And the last one is different modes for inserting the load balancer into the network, because the way it is done now is a very simple way.

Okay, a little more detail, from the architecture standpoint. This part was actually discussed quite recently, at the summit. Currently we have some kind of driver support within the agent, but it provides a quite simple API, which just gives the driver the whole configuration, and the driver can do whatever it wants. We have a different thing in mind, where the driver API is a kind of reflection of the tenant API, so we can manipulate every object: the call will first be reflected in the database, and then it will go to the driver, and the driver will apply that configuration step on the device. Some of the hardware appliances have an API similar to what load balancing as a service exposes; that is why we need a closer mapping between the REST API of LBaaS and the driver API. It is not the case at the moment, but it will be implemented in Havana.

Also, there are different ways to choose which hardware or virtual appliance to use as the back-end implementation of the service. If you have several appliances, you need to somehow place your service requests and your VIPs onto the different appliances. In that case you need to use the device inventory, probably monitor the load of each device in your inventory, and choose the least loaded one, for instance. Some vendors have this kind of management as an external component, so we would need to proxy the REST call directly to their management component, and that is a different case. So the current proposal is to have the drivers within the plugin, and the drivers will decide what to do with the REST call: whether we need to choose a hardware appliance and redirect the call to the agent, which will then redirect the call to the hardware appliance, or whether we just proxy the initial REST call to the external, vendor-specific management component. And that will be it.

Yes, and actually the driver API will be available, I believe, pretty soon. I hope so. This will allow vendors to start writing drivers. Also, we will have a separate device inventory. The name may change, because we decided only a couple of days ago what to call it. It will be a separate plugin and extension, and it will allow storing management information about how to manage different devices, not only load balancer devices, but also the physical and virtual devices that will be used by other services, like firewall as a service and VPN as a service. So this may still change, but right now it looks like this picture.

Okay, and a couple of words about the different modes. One of the most commonly used insertion modes for a load balancer is routed insertion, where the load balancer actually works as a router for the tenant network. As far as I know, this insertion mode is not yet implemented in Quantum, so we probably need to add such support pretty soon, because it is one of the most popular usage models for load balancers. It is also part of the so-called service insertion framework, which is under discussion and will probably be under development soon in Quantum. In that case you will give the VIP an actual external address, and you will probably get one less step in your workflow.
Yeah, so there will be no need to create and associate a floating IP; that will probably be delegated to the underlying functionality, something like that. And actually, the whole model and the whole set of insertion types will, I believe, be part of some library that will also be shared between the different services. By the way, that is mostly the model for the hardware appliances, which we don't support at the moment.

Okay, so that is pretty much all for today. We are ready for questions.

[Audience question about drivers in the agent versus the plugin.] Yes, that's correct. Our initial proposal was to make the agent load the different drivers, so the agent would always receive the REST calls and then forward them to the devices via the drivers. But it turned out that some layers, like choosing the hardware appliance, could be done by an external component. Some vendors don't need such functionality within Quantum, because they have a separate management component that actually does device inventory and scheduling. That is why we need a bit more flexibility: we need to have drivers on the plugin side, and we will probably also have drivers on the agent side, because some other vendors stick with a model where they have different devices but no management component for device inventory and selection. So the agent will most probably load drivers as well. A driver is just a vendor-specific piece of code.

So the question was whether we have drivers on both sides, on the plugin side and on the agent side, and I am saying yes, that is the case, because a driver is nothing more than a vendor-specific piece of code; there is no issue with using them on both sides.

[Audience] You're asking about the HAProxy model, where the HAProxy processes run on a host. So the question was whether the current implementation could be extended or improved in the same way as the DHCP agent scheduling was done in Grizzly. Well, I don't know, because the features that are planned are proposed mostly by hardware vendors, and maybe they are not that interested in improving the current HAProxy-on-host model. But, you know, if the author of the HAProxy-on-host model, Mark McClain, wants to improve the current model, of course it can be done, pretty much the same way as it was done for the DHCP and router scheduling. So I don't have an answer for this question. If you want it, you can just file a blueprint and implement it; it's quite easy.

[Audience question about host selection.] The question was how the host is selected. The host is not selected: you must have just one agent running on one host. The load balancing agent runs on just one host, and you can't run the agent on two hosts, because they would do the same thing: they would create the same processes on different hosts, and those would conflict. In Grizzly, load balancing is mostly a reference implementation. I would not say it is a proof of concept, but it is there to show that services are possible on Quantum, that there is an API, that vendors can start thinking about how to implement drivers for this API, and to show the community that there is interest in this area and that everyone can get involved in the development.
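For reference, that single agent is an ordinary Quantum agent process. Here is a sketch of how it would be launched on the designated load balancing host; the binary and config file names are Grizzly-era assumptions, so check your distribution's packaging:

    # run the one LBaaS agent on the load balancing host
    quantum-lbaas-agent \
        --config-file /etc/quantum/quantum.conf \
        --config-file /etc/quantum/lbaas_agent.ini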
So yeah, maybe in production it is not acceptable to have just one load balancer running on one host, but it is just the first step.

[Audience] My two cents of feedback: naming is very important. If you call the service load balancer provisioning as a service, it will clear up a lot of confusion, because what you are doing, at the end of the day, is providing a provisioning API, and the load balancer itself could be somewhere else. Right, and I think that is where the confusion is coming from: how you map the traffic paths and how you get the traffic to the instance that is actually going to do the load balancing. Whereas the API, the way I have heard it so far, is: you create a VIP, do this, do that, add pools. It is provisioning, as I understood it. I could be wrong, but that is my question, and maybe a comment. But, as he keeps clarifying, that is a reference implementation; you don't have to spin up things on the same host that could be storing the metadata.

You're right about the name, but, as far as we see, kind of no one is interested in that, you know. I have a comment about the name: first of all, I see that other services employ the same naming pattern as well, and also I think that maybe we need to think about the last three letters of the name rather than about introducing the... Next question.

[Audience] Sorry, a quick clarification; I might have missed it, I came in a little late. When you talk about the LB down there and the agent, could that be any load balancing product, like from F5 or Citrix, or is that a server there?

It's an actual product, yes.

[Audience] Okay, so just to clarify again: you're talking about being able to provision load balancing as a service through any vendor, and your current implementation is HAProxy?

Yes. So, this is the slide about the next architecture, and if we take a look at how it is implemented now... where is it? Yeah, so it is an agent specific to HAProxy. So actually we just move this part, the agent and HAProxy, from the current implementation down under here, so it becomes the driver for HAProxy. For example, if some vendor provides some other way to communicate with its load balancer, then it does not need the agent. If they have an existing asynchronous API, they can just call the load balancer directly from their driver, and that will work; no agent will be required.

[Audience] Once your load balancer is created, can you add or remove members from the pool?

Yes, we can do that right now, and the driver API will allow you to do it in the future as well.

[Audience] Are there any elastic scaling capabilities built in yet?

Not yet. Elasticity is probably another layer on top of what we will have. Yeah, I agree; it will most probably be a task for Heat to monitor the load and to spin up new instances of a virtual appliance.

[Audience question about session persistence.] Yes, it is possible to configure it to use a cookie, by cookie name, or an HTTP header; HAProxy supports that out of the box.

Okay, so thank you. Thank you.