Hello, everyone. Can you hear me? Yeah? Okay. Thank you all for coming to this session about load balancing as a service. What we'll do today is talk about a brief history, what's delivered in Kilo, what's coming in Liberty, and the new reference implementation, which is going to be based on the Octavia project. I'm Samuel, and the spacebar will switch to the next slide. Okay.

This has been a journey. We started, I think, around the Folsom version with the planning of v1. A lot of people were involved, and I apologize if we forgot someone. We are now approaching the v2 version.

For a brief review of what a load balancer does and why it's important to the cloud: when you look at load balancing functionality, you're looking at three key things. The first is what we call local scale-out: being able to take a set of physical machines or VMs and put them behind a virtual IP so that you can scale them out or in. The second is, obviously, address failover: if one of the machines fails, being able to recover in a seamless way. The last is geographic load balancing, which is not touched in LBaaS, but it may be one of the key features we'll be looking at delivering in the future. So that's the very basics of load balancing as a function in application delivery environments.

So what was done in LBaaS v1? v1 was kind of basic. It allowed an IP with a specific TCP port to be specified as a frontend, which could load-balance HTTP, HTTPS, or simple TCP as a pass-through technology. There was no HTTPS termination, for example. There was persistency based on cookies and source IPs, plus cookie insertion. The nice thing about LBaaS v1 is that it was actually a good demonstration of how, when you specify an API, you can get a lot of adoption by multiple vendors and multiple solutions. There are, I believe, around ten different backend implementations for v1, so in that regard it was very successful.

About two years ago, when v1 was done, we started to hear requests for additional capabilities. The key use cases that were not addressed in v1 were: first, we actually want to put more than a single TCP port behind an IP, since that wasn't possible in v1; and second, we actually want to do TLS termination, because there's a lot of smartness in looking at the content and doing load balancing on it, and you can't really do that if you don't terminate. There's also the notion of managing your keys in a maybe slightly more efficient way.

The result was what we are calling the v2 model. On the left you see v1. In v1 we had what was called a VIP, and a VIP already includes a single TCP port. Behind the VIP there was a single pool with a health monitor, which is the health check, the way you verify your servers are alive (and obviously, if they are not, the load balancer stops distributing traffic to them), and the members. v2 did something interesting: it first created what is called a load balancer object, which defines the virtual IP; it then adds listeners, which are the TCP ports being listened on; and behind that a very similar structure: the pool, the health monitor, and the members. One interesting addition besides that is that we're actually looking at doing object sharing, which we'll talk about later.
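To make the v2 object tree concrete, here is a hedged sketch of the create calls using the Kilo-era neutron CLI. The subnet name, addresses, and health-monitor values are illustrative, and flag spellings can differ slightly between client versions:

```
# Build the v2 tree: load balancer -> listener -> pool -> members,
# plus a health monitor attached to the pool (all values are examples).
neutron lbaas-loadbalancer-create --name lb1 private-subnet
neutron lbaas-listener-create --name listener1 --loadbalancer lb1 \
    --protocol HTTP --protocol-port 80
neutron lbaas-pool-create --name pool1 --listener listener1 \
    --protocol HTTP --lb-algorithm ROUND_ROBIN
neutron lbaas-member-create --subnet private-subnet \
    --address 10.0.0.11 --protocol-port 80 pool1
neutron lbaas-healthmonitor-create --type HTTP --delay 5 \
    --timeout 2 --max-retries 3 --pool pool1
```

A second listener on another port would hang off the same lb1 object, which is exactly the "one IP, multiple TCP ports" change described above.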
v2 also introduced the notion of a slightly different status object. At the beginning, every object, like a VIP or a pool, had its own status; in v2 the status is actually something you pull from the load balancer object, and then you can see the hierarchical status of the items inside it. This will help us introduce sharing, probably in Liberty.

So with the previous slide we've seen that the model was changed to address one IP with multiple TCP ports. The other addition we're making in v2 is TLS termination. There were two things we decided to support. One is the default policy, which means simple termination when you have a single TCP port on the IP. And the guy here nodding is to blame for the other part, which is the SNI support. In short, SNI is when you actually want to do virtual hosting behind the load balancer: all your virtual hosts share the same IP but have different SSL certificates, and the certificate delivery is based on the host name. So SNI is also part of v2 TLS.

The other thing that was kind of a blocker to introducing this previously was that the Neutron team was not willing, for security reasons, to allow storing the certificates in the Neutron database. So a key requirement was to have a secret store, which is now the Barbican project. Barbican obviously does more than just securing TLS certificates for LBaaS, but it does store the TLS certificates for LBaaS, and there is linkage between LBaaS and that secret store. On the LBaaS API you now reference certificate IDs that are stored in Barbican; on the Barbican side you store the public key, the private key, and, if needed, the intermediate CAs. Those two things allow us to have one IP with different TCP ports and to do TLS termination, including with virtual hosting.
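As a hedged sketch of how that Barbican linkage looks in practice (client syntax varies by release, so treat the exact flags as approximate): the certificate and key are stored as Barbican secrets, grouped into a certificate container, and the listener references only the container, never the key material itself.

```
# Store key material in Barbican; each store call returns a secret ref URL.
barbican secret store --name tls-cert --payload "$(cat server.crt)"
barbican secret store --name tls-key  --payload "$(cat server.key)"
barbican secret container create --name tls1 --type certificate \
    --secret "certificate=$CERT_REF" --secret "private_key=$KEY_REF"

# The listener carries only the container reference; SNI adds more
# container refs, one per virtual-host certificate.
neutron lbaas-listener-create --name listener-tls --loadbalancer lb1 \
    --protocol TERMINATED_HTTPS --protocol-port 443 \
    --default-tls-container-ref "$CONTAINER_REF" \
    --sni-container-refs "$SNI_REF_1" "$SNI_REF_2"
```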
We already see a lot of community drivers. What's nice about this cycle is that it was part of spinning out the advanced services as their own project, and the end result, I believe, is that we are seeing a much faster pace of delivery. If you look at the gate for Kilo, or at the reviews, you'll see a lot of community drivers contributed by different vendors, and that's very encouraging.

The last thing I'd like to do is show demos. We actually asked vendors to send us demos; we got three for the moment, but I'm sure there are more than that. So over the next five minutes we'll show LBaaS v2 demos from three vendors, and then the Octavia one, which is the open source demo. The nice thing about these demos is that they show the same service being delivered in a consistent way. You'll see exactly the same type of commands: basically creating the load balancer object, after that a listener being created, one or two, after that a pool being created and attached to the listener, and members. Some of the demos will also show traffic. The nice thing is that you get a very consistent set of APIs, with disregard to what the backend is.

So yeah, this is the A10 one. We can see that the configuration from the model was deployed on the appliance. Okay.

The next demo shows a two-leg topology, where the VIP is on one network and the real servers are on another network; you can see the virtual machines there. Again, almost the same type of CLI commands: first the load balancer is created, then a listener is created (you can see it's an HTTP listener), then the pool is created and members are attached. Now we're going to create a second listener behind the load balancer, which is the HTTPS one, again under the same IP. As you can see, there is a TLS reference in there: the default TLS container ID, which is the Barbican certificate. Then members and pools (pools and members, sorry), and the end result is that the status of the load balancer is ACTIVE, via the status object we discussed. Based on that, we can now send HTTP traffic: we can see that it goes to port 80 and round-robins between those machines. Then obviously we send HTTPS traffic. What we can see here is that we use HTTPS, but the server is actually sitting on port 80, which is HTTP. So this is HTTPS being terminated, right? Okay.

And the last demo. Come on... yes, okay. So again, the same type of commands, using a script that just does all of this in one go: you're going to see a load balancer, a listener, a pool, servers. And obviously we expect to see that from all other implementations. We see the application being load balanced over HTTP, and then a second iteration where it's load balanced behind a terminated HTTPS listener. Okay, so we can see that it's now behind HTTPS, being terminated to port 80. Again, what we get is a unified API that can express the capabilities of load balancers implemented by different vendors. We'll be talking now about what's expected in Liberty.

Okay, yeah. I'm Brandon Logan from Rackspace. There are some features we're planning for Liberty. The first one is L7 content switching. This was actually planned for Kilo, but due to the services split and getting TLS in, it didn't make it; it's pretty close to being merged upstream right now. L7 basically allows a user to direct traffic to a different pool based on L7 rules. One example here is the URL: you can go to a different pool based on the URL the client sends. Another case is using the HTTP headers. People want to do this to, you know, have a high-performance pool versus another pool that is less performant. It hangs off the listener, so this basically gives a listener multiple pools.
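The L7 work was still in review at this point; the CLI that eventually merged looks roughly like the following sketch, where requests whose path starts with /static are redirected to a second pool (names here are illustrative):

```
# The policy hangs off the listener; rules decide which traffic matches it.
neutron lbaas-l7policy-create --name static-policy --listener listener1 \
    --action REDIRECT_TO_POOL --redirect-pool static-pool
neutron lbaas-l7rule-create --type PATH --compare-type STARTS_WITH \
    --value /static static-policy
```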
Another thing that goes with L7 is pool sharing. Currently, a pool cannot live on two separate listeners, so in the case where you want to load-balance on port 80 and on a TLS-terminated 443, you have to create a duplicate pool, which means duplicate members and a duplicate health monitor. If you ever have to update the pool, the health monitor, or a member, you have to do it twice, which is not a very good user experience. With pool sharing you can attach a pool to many different listeners, so whenever you update a pool, or add or delete a member, it trickles down to all the other listeners. It makes it easier on the user.

Next one. If you looked at the demos, there were a lot of create requests for each entity. At a minimum, to get a fully functioning load balancer you need four API requests: one for the load balancer, one for the listener, one for a pool, one for a member. If you wanted many listeners and many pool members, you would have a lot more API requests. So one of the things we want to do is add the ability to provide the entire load balancer configuration in one request. This provides the entire configuration to the driver up front, so it knows exactly what kind of networking it needs to set up first and what resources it needs to set aside; and in the case of Horizon, it makes it easier to send one command instead of having to send all these different commands.

Speaking of Horizon: right now we don't have Horizon integration, but that's planned for Liberty. We also plan on the flavor framework, which essentially gives you an extended feature set: if some driver supports certain features that the core API doesn't, a flavor can expose that. For example, you can have software versus hardware flavors, or HA versus non-HA flavors. We also plan on coordinating on Heat integration, and we plan on having Octavia replace the current namespace driver as the reference implementation.
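Going back to the single-request create: here is a hypothetical sketch of what a "fully populated" create could look like, with the whole graph (load balancer, listener, pool, members) nested in one POST body. The field names follow the v2 model, but the actual endpoint and format were still being designed at the time, so treat everything here as an assumption:

```
curl -s -X POST "http://$NEUTRON_HOST:9696/v2.0/lbaas/loadbalancers" \
  -H "X-Auth-Token: $TOKEN" -H "Content-Type: application/json" -d '{
    "loadbalancer": {
      "name": "lb1",
      "vip_subnet_id": "'"$SUBNET_ID"'",
      "listeners": [{
        "protocol": "HTTP", "protocol_port": 80,
        "default_pool": {
          "protocol": "HTTP", "lb_algorithm": "ROUND_ROBIN",
          "members": [
            {"address": "10.0.0.11", "protocol_port": 80},
            {"address": "10.0.0.13", "protocol_port": 80}
          ]
        }
      }]
    }
  }'
```

The point of the nesting is the one named in the talk: the driver sees the whole tree up front instead of reacting to four separate calls.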
For more about Octavia, here's Michael. I'm Michael Johnson from HP. As you saw on the previous slides, we have a number of drivers from different vendors, and one of those drivers was the HAProxy reference implementation that currently ships with LBaaS v1 and v2. Octavia is yet another driver, intended to replace that reference HAProxy driver with a more operator-grade reference implementation based on an open source load balancing solution.

As you can see here, Octavia is based around a controller, which can live on your compute nodes, on a separate node, or on any of your service nodes. On the left, under Neutron, you'll see it's just a driver plug-in, much like the others. (Let's see if the mouse works here. No.) Essentially we have a database that stores all of the configuration information for the controller, and one of the nice things about Octavia is that it's driver-based. As you'll see here, we have an amphora driver (I'll tell you a little more about that in a minute), a compute driver, and a networking driver. These are all replaceable components in Octavia, so if you have custom solutions, you can drop in replacements. In fact, even today we have two different amphora drivers available: one based on an SSH model and one based around a REST API.

Speaking of amphorae: in the current implementation the amphora is a service VM, but we gave it an abstracted name because we also want to support containers, and technologies like Docker, as a possibility for hosting your amphora content. So HAProxy in this implementation actually lives inside that amphora, and the controller interacts with it, again through the driver; as I mentioned, we have an SSH and a REST API driver. When a customer comes in and makes a request, we either pull from a spares pool of amphorae that are already built, and issue and configure them on the fly, or, if you have a zero-size spares pool, we boot one up using whatever compute technology you have plugged in as your driver.

In addition, the controller is responsible for two other capabilities here: the health manager and the housekeeping manager. These are actually separate processes, so they can be distributed as you see fit. The health manager, of course, monitors the amphorae, makes sure they're healthy, and does failover activities should there be a problem. Housekeeping is there to manage your spares pool and do deletions, cleanups, and so on, asynchronously to customer interaction.

The other neat thing about amphorae is that we plug them into tenant networks. We have a load balancer network, which is kind of our management network for talking to the amphorae, but as we spin these up we plug them into the customer network, or even a VIP network if you have a separate VIP network for your incoming connections.

A quick roadmap, and what's highlighted there will change, particularly throughout this week; we're going to have a lot of summit meetings to talk about this roadmap and what we intend to get done. Our 0.5 release, which is the work in progress right now, intends to have feature parity with the current reference implementation (maybe not complete parity from a control plane performance perspective), using the service VMs and just basic spares-pool failover; in other words, if an amphora goes down, we rebuild it from a spare. In the 1.0 timeframe, which may or may not land in Liberty, we'll go to active/standby on the amphorae, set up with VRRP, so they'll be able to fail over between themselves, much faster than a spares-pool failover, plus high availability on the control plane, so you'll have multiple controllers for your deployment. And then 2.0, which is further out: we want to do active/active and horizontal scale-out, with a number of amphorae making up one load balancing component.
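The pluggable pieces and the spares pool described here surface as configuration on the controller. A hedged sketch of the relevant octavia.conf sections, with option names taken from later Octavia releases (the 0.5-era tree may differ):

```
cat >> /etc/octavia/octavia.conf <<'EOF'
[controller_worker]
# Any of these can be swapped for custom implementations.
amphora_driver = amphora_haproxy_rest_driver   # or the SSH-based driver
compute_driver = compute_nova_driver
network_driver = allowed_address_pairs_driver
# SINGLE in 0.5; ACTIVE_STANDBY (VRRP) is the 1.0 goal.
loadbalancer_topology = SINGLE

[house_keeping]
# Pre-booted amphorae kept ready for fast allocation and failover.
spare_amphora_pool_size = 2
EOF
```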
So, let's see how we're doing on time. Thirty minutes. I'm going to go ahead and be bold here and try a live demo; you get better swag. Actually, let me change the font first, that sizing didn't work very well... it's not full screen on that screen, let me scale it down a little. All right, let's do this a little differently: you get to see what I get to see. I'll do it in a fairly small window, sorry about that.

What this is is a devstack VM running on VMware Workstation on this system, and I've done some basic setup. I'm going to run off a script here; I'm not that crazy. I've got two nodes; these are my backend web serving nodes, with a very simple web server on them. You'll see they're .11 and .13 on the network. I've set up three networks: the LB management network, which we talked about, which is how Octavia talks to its amphorae; a load balancer VIP network, so this is a separate VIP network for the incoming connections; and a tenant network, which is where these two nodes live. Just to show you I have no load balancers up my sleeve: we don't have any booted up yet.

Okay, so I just fired up a load balancer. I'm going to do a quick check here and look at the different bridge networks that are set up in Nova. Since I'm on devstack, I'm going to do a little hack here so that I have connectivity onto that VIP network; I just want to see which bridge is coming up here. So if we look at nova list now, you see it: we have our amphora, with our management network plugged in, and our VIP network plugged in now as well. So I'm going to go ahead and make sure... is 61 the one we need? Yep, 61 is the one we need to put the IP on. All right, so now devstack has access to that VIP network.

So I'm going to go ahead and continue just like we did in the other demos. This is going to create the listener on that load balancer. Next I'm going to create the pool and the first member, then the second member. All right, so there's .11 and .13. If we now look at our load balancer, I've got the IP here. Everything worked. Okay, there's member one, and member two. Yay, live demo. If we go back to nova list, we can also see that after I added those members, we've plugged that tenant network into the amphora as well. Not all deployments will have a separate VIP and tenant network; you may lay out your network very differently, you may use floating IPs, but that's an option. Okay, don't need that demo anymore.

So you can try Octavia yourself on devstack; this is all committed. You can go and update your local.rc, add in the plug-in, and pull it down. The operator API is there if you want to go direct with REST; otherwise you can use the neutron lbaas command set just like I did. We also have a sample Vagrant file and a sample local.conf in there, if you want to use those.
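A hedged sketch of the devstack setup he's describing; the plug-in line follows the usual devstack convention, with the repository URL as of the StackForge-to-OpenStack move and the service names from the Octavia devstack plug-in:

```
# In your devstack local.conf:
cat >> local.conf <<'EOF'
enable_plugin octavia https://git.openstack.org/openstack/octavia
# API, controller worker, housekeeping, and health manager processes.
enable_service octavia o-api o-cw o-hk o-hm
EOF
./stack.sh
```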
One other thing to note, in case you guys didn't see it: Octavia is now an OpenStack project. That's very recent, so these URLs, which currently point at StackForge, are going to change very shortly; we didn't want to do it right before the summit. The other thing to note: Octavia is definitely looking for more contributors, getting more people and more companies involved. There's lots of work to do, so please feel free to join us on IRC. We have a number of websites with information: our main wiki page on OpenStack, octavia.io for our documentation, and GitHub. We also have a vBrownBag on Thursday at 2:45; German will be doing a hands-on with Octavia. He'll be demonstrating the REST driver (I used the SSH driver today), plus current status and design. Okay.

Unfortunately, Doug was going to join us, but he's under the weather, so he doesn't want to use up his voice. As I mentioned, we have the vBrownBag going on, and we also have two design summit sessions: the Octavia design session is Wednesday at 1:50, and there's also the Neutron LBaaS use cases session at 9:00 on Thursday. That would be a great time to come in if you're an operator and help us with your use cases: what do you want to see, what do you want us to work on?

Any questions?

That's correct: in 1.0 it is a single controller for your amphorae, and for failover we'll be pulling from a spares pool, configuring a new one, and putting it in place. It's 1.0 that will get the HA and the failover. The amphora is statically connected at creation time; not dynamic yet. Version 2 of Octavia is still pretty up in the air, so that may be a statically sized horizontal scale implementation. I don't know yet; that's way out on the horizon for us at this point.

I'm sorry, I couldn't quite hear the question. The first question was that Octavia seems to really be a separate project, rather than a driver in Neutron. Yeah, basically it's like an appliance: there's a Neutron driver that makes REST calls to Octavia itself, and so that's the driver; Octavia is just another backend. The Octavia driver plugs into the Neutron networking services framework; it's on the far side of the diagram at the bottom, and it uses a messaging queue to interact with the controllers.

The next question was: Octavia needs to boot a service VM to set up the load balancing service, so what software is installed in that service VM? Right now it's HAProxy, but that's going to be configurable because of the amphora driver. Are there any other free software choices, like LVS or something else? I mean, there's certainly the opportunity for other drivers underneath this controller. Even hardware components could implement an amphora driver, a networking driver, and a compute driver to put a hardware implementation underneath it. There's certainly opportunity for other configurations. Okay, this seems interesting, thank you. Yeah, thank you, that's a good question.

Probably Brandon is the better person to answer that one. The question was: are we going to have a bulk API for creating large numbers, like a batch update or a batch create? I think that would go into updating a pool, where you'd just specify a lot of members. That's definitely something we've thought about; it's just that we haven't listed it. It goes along with the single create call I mentioned, and along with updates too: you kind of want to update the entire graph, or the tree, at the same time. Good question.

We were at that presentation as well. The question was: how do service chaining and load balancing work together? We're still coming up to speed on service chaining. The service chaining discussion that was done here before is fairly forward-looking; there are more down-to-earth discussions on how you can actually chain logical services in Neutron. I recall there were discussions in previous summits; I'm not sure what the status of that is, but there are discussions on service chaining in terms of firewall, VPN, load balancer, any other virtual service, and how you can cascade them. But at the moment there's no such thing.
We have no good answer right yet.

I just wanted to ask a question about load balancer HA. I want to know the community's approach for load balancer HA: is there any plan, or something planned, for this topic? I noticed there was a blueprint in the community, but there's been no big progress on it. In Neutron there is internal HA support for L3 agents, at the L3 layer; I just want to know, is there any plan for HA support for the load balancer?

Yes, the answer is yes. The current reference implementation runs on the network node and is not part of the HA work that is being done there; the switch to Octavia will actually enable the load balancing service to be highly available. And of course, if you're using a hardware driver, you can use the hardware implementation of HA to achieve something similar.

So there will be no code or blueprint planned for this HA topic; it will just adopt the hardware support for HA? No. Again, the current implementation is based on HAProxy, and the plan for Octavia is to provision HAProxy instances that are highly available. This is not there at the moment; it will be in version 1.0, right? Correct.

Okay, thank you. I think we're out of time. Thank you.