It's not time yet. Okay. But we are live. Good afternoon. I hope everybody is still awake after lunch. Anyway, let's start. Our topic today is migrating from Neutron LBaaS load balancing to the Octavia load balancing system. People are still walking in, but let's start by introducing ourselves. My name is German Eichberger. I'm a senior software engineer with Rackspace. I'm a core reviewer on Octavia, OpenStack-Ansible Octavia, and Firewall as a Service, and you might notice a theme in the stuff I'm reviewing. Currently I am with the Rackspace Managed Kubernetes team, helping them with load balancing and architecting how to put Kubernetes on top of OpenStack or AWS or whatever that is.

Hi, my name is Carlos Goncalves. I'm a senior software engineer at Red Hat. I work on OpenStack Octavia. I'm a core in Octavia and Neutron LBaaS, and I also contribute to TripleO so that we enable Octavia integration in TripleO. Prior to that, I was a software specialist at NEC, and I was also doing work in OPNFV, specifically in the Doctor project.

Okay, so what is Octavia? Let's start at the beginning. Octavia is the load balancing project for OpenStack. We provide scalable, on-demand, self-service load balancers which run on virtual machines or whatever else; the reference load balancing driver is the one with the virtual machines, if you have a compute environment. The project was founded during the Juno cycle of OpenStack, and we have 90 contributors from over 30 companies, becoming more and more every cycle. And that's the interesting part: we used to be a Neutron sub-project, which is why there is a Neutron LBaaS, and then we moved out of that and became our own top-level project. That move basically meant we had to think about how to eventually migrate people from Neutron into Octavia itself. We have also always been the number one Neutron feature, and I think we have now asked for the user survey to be updated to move us out of Neutron and reflect our new structure.

So basically, as I said, we started out as part of Neutron, and so we had Neutron LBaaS, which then used the Octavia driver to drive Octavia load balancers. Now we have moved everything out into Octavia, so basically we want to deprecate Neutron LBaaS, and we declared that in the Queens cycle. No new features have been merged since then, and it hasn't been very active anyway, so it's not like there's a huge loss or lots of stuff going on there. We then made the plan: at the last PTG we decided to retire Neutron LBaaS, and the Neutron LBaaS dashboard which goes with it, in September 2019, which is around the "U" OpenStack cycle, whichever comes first. We weren't sure if the cycles were changing, so we set September 2019 as our goal. We have a deprecation FAQ on the wiki, so you can read up on what it all means if you're running Neutron LBaaS.

Just one note: there is the project Neutron LBaaS, there is the LBaaS API, and there is Octavia. Whenever we mention Neutron LBaaS, we are talking about the project that we are deprecating and retiring soon. If we say LBaaS, we are most likely talking about the API, which is currently at v2, and as we will see, we also support that API in Octavia. That's why it is important to clarify what Neutron LBaaS the project is versus LBaaS as in the API.
So because we are deprecating and retiring Neutron LBaaS soon, we wanted to give people the means to transition from one project to the other, from Neutron LBaaS to Octavia. For that, we have a few options that people can start using as a migration path, or even just to migrate from one to the other. The first one is the Octavia provider driver, which is a provider driver in Neutron LBaaS; we will see more details for each of these on the next slides. The second one is the pass-through proxy plug-in in Neutron LBaaS. We also have the option to use layer 7 policies, so you put a proxy server in front of the Neutron server API. We also validated that the v2 API in Octavia is a superset of the LBaaS v2 API that is implemented in Neutron LBaaS. And lastly, we now have a new tool in Rocky that migrates your load balancers from Neutron LBaaS to Octavia, and in many or at least some cases you will not even experience any downtime.

Oh, we should go back, because there are some caveats: the migration tool only works for load balancers which were created in Neutron LBaaS, and it only works for providers which support the migration. So if you're running a hardware appliance, like an F5 or a NetScaler or a Radware or whatever, then those vendors would have to support the migration. As one of the providers, VMware already has a provider driver for Octavia, and they have successfully used the migration — I have to be careful with the legalese there. So if you are running VMware load balancers on the back end, most likely this migration tool will work for you for those load balancers.

So the first option we have is the Octavia provider driver. Neutron LBaaS still has provider support, and you can enable multiple providers; one of them can be Octavia. Neutron LBaaS talks to the Octavia driver, and this driver then talks to Octavia. To enable it in the configuration, you just need to add it to the list of service providers, or if you are using DevStack, there is also on the screen the option that you need to set. We test this in our upstream CI; we have a couple of jobs there, so we test the API with Python 2, Python 3, and also the scenario tests. So, for you to have an idea of what I just mentioned: if you have Neutron, you can enable the Octavia provider driver, and it will talk to Octavia, the project. And simultaneously you can still have other load balancer providers, for instance the old reference implementation with HAProxy in a namespace, the namespace driver — Octavia has been the reference since then — and you can also have VMware, F5, all those load balancers.

Okay. So one of the migration tools: if you are still running Neutron LBaaS, one of the easy ways to basically switch over, once you have all your load balancers in Octavia, is to just install the Neutron proxy plug-in, which replaces the lbaasv2 plug-in you might already be running. The proxy plug-in basically takes the Neutron request as it gets bubbled up from Neutron and sends it over to Octavia; when the response comes back, it packages it in Neutron form and sends it back. It's very simple to switch on: just search in your Neutron configuration file for lbaasv2, which is our normal plug-in, put -proxy behind it, and then it will use the proxy plug-in.
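As a rough sketch of those two configuration changes — double-check the driver class path and plug-in aliases against the neutron-lbaas release you have installed — the relevant neutron.conf pieces look something like this:

```ini
# /etc/neutron/neutron.conf (illustrative sketch)

[DEFAULT]
# Normal LBaaS v2 plug-in; append "-proxy" to switch to the pass-through proxy
# that forwards all LBaaS calls straight to Octavia:
# service_plugins = lbaasv2-proxy
service_plugins = lbaasv2

[service_providers]
# Enable the Octavia provider driver alongside (or instead of) other providers
service_provider = LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default
```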
And so basically, all applications which go to Neutron will just work as they did before, and instead of going to Neutron LBaaS, they will be proxied over to Octavia. We also test this in our CI. Here's a picture of how it works: you talk to the LBaaS endpoint — the LBaaS extension on Neutron at /v2.0/lbaas — it goes into Neutron, the plug-in ships it over to Octavia, and then it goes back. There is one caveat: Octavia and Neutron both manage quotas, so you need to be careful to set them similarly. If, for instance, you allow ten load balancers in Neutron and five in Octavia, and the user wants to create a sixth load balancer, he can't, because Octavia will block that, and vice versa. So be careful with that, because they both do their own quota checks.

The third option is the direct layer 7 redirect. In this one, a proxy server redirects the traffic that targets the LBaaS API to Octavia, while the rest of the normal Neutron calls, to create ports and so forth, will continue going to Neutron. To enable it in DevStack, we just set the proxy-to-Octavia option to true, and you will have it. Again, we also test this in our upstream CI. We should probably say why this works: most people who run OpenStack have a load balancer in front of their API servers — in front of the Neutron API, the Nova API, the Octavia API — so it's very easy to add a layer 7 rule there and redirect things. Again, just a picture to illustrate: Neutron LBaaS API traffic will now be forwarded to Octavia, whereas the rest of the traffic to Neutron will continue going to Neutron.

So we implemented the Octavia v2 API to be compatible with the LBaaS v2 API, and it's a superset, so there is more functionality in Octavia and there will be more functionality coming. We are also versioning that now, so you have a version string and can see what the changes are. And because it's a superset, all applications which used the Neutron API endpoint will continue to run like nothing changed when you run them against Octavia. For instance, I think Heat used to run against us, and they just flipped a switch and it worked. Again, there are CI jobs for this.

Then the fifth option, which should be your final one if you want to actually migrate all the load balancers from one project to the other, is the migration tool. You can find it in the Neutron LBaaS repo under the tools directory, and it's a very simple tool; we'll explain it to you. With this tool, you can migrate from the Octavia provider driver to the Octavia project, from the HAProxy/namespace driver to Octavia, from VMware to VMware, and so forth for other providers. We also have an upstream job that tests this: it creates all the load balancers, then migrates from one to the other, and runs a couple of Ansible tasks to verify the connectivity and so forth.

Okay, so here's the tool we have written. It's called nlbaas2octavia, so it's a very descriptive name, and these are all the options you can give it. The most important ones I highlighted in yellow. For instance, the all option means you want to migrate all load balancers in your Neutron LBaaS database.
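To illustrate that layer 7 redirect — the backend names and addresses below are invented, and your API-fronting proxy may not even be HAProxy — the rule is simply "anything under /v2.0/lbaas goes to the Octavia API instead of Neutron":

```
# Sketch of an L7 rule on the proxy in front of the Neutron API (HAProxy syntax)
frontend neutron_public_api
    bind *:9696
    # LBaaS v2 calls are answered by Octavia, which exposes a compatible API
    acl is_lbaas path_beg /v2.0/lbaas
    use_backend octavia_api if is_lbaas
    default_backend neutron_api

backend neutron_api
    server neutron-1 192.0.2.10:9696 check

backend octavia_api
    server octavia-1 192.0.2.20:9876 check
```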
The config file: you might want to give it one. It uses the normal oslo.config format, so if you happen to put it into /etc somewhere it might find it, but I find it more convenient to just specify the config file on the command line. You can also give it a load balancer ID if you only want to migrate one load balancer, and you can also migrate by project ID if you only want to migrate a certain project, for testing or because it makes more sense. So, as I said, those are the important command line settings: all, the config file (the path to the configuration), and then the load balancer ID of a load balancer, if you want to do it that way, or the project ID.

When we look at the configuration file: in the directory where you find the tool there is also a sample configuration file, and I took that and highlighted what you would need to add to it. You have to put in the user ID for the Octavia account — the user ID of the Octavia service account, if you gave Octavia a service account — and then you have to put in the database connection strings for Neutron and Octavia. You can find both of them: the Neutron one in /etc/neutron/neutron.conf and the Octavia one in /etc/octavia/octavia.conf. Then you just copy them and put them in there. In my case they are both the same because we use the same database user, but in a lot of installations there will be a specific database user per service, so you would have to copy the different ones. I think we want to do the demo a bit more at the end, so let's leave the demo. In the demo we will show this migration tool, but we will leave it to the end so that we can manage the time better.

So we talked a little bit about provider support. Another thing we had to get right before we could deprecate Neutron LBaaS was that we now need to support providers in Octavia. Basically, this is our structure: we have an Octavia API server, and it has a provider driver in it, and this can be anything. And here's the whole architecture when you use what we used to call the Octavia driver; to make things clearer we renamed it the amphora driver in Octavia Stein. So Octavia the project, Octavia the service, and Octavia the driver — we wanted those to be distinct, so where it says Octavia driver it should now say amphora driver. Basically we wrote our own amphora driver against that provider interface, and that goes in, and then we have the Octavia worker — we should probably rename that to the amphora worker too, but that's all still a little bit in flux. The Octavia worker basically creates, updates, and deletes load balancers; we have the health manager, which monitors the health of load balancers; and we have the housekeeping manager, which manages our spare pool. You can have a spare pool of load balancers to speed things up.

The thing with drivers in Octavia is that we can abstract away from what we're using. There is a network driver — right now it supports Neutron, but if you are so inclined to write your own network driver, it can support other networks. There is a compute driver, right now for Nova, but people are working on adapting that so that soon we can have containers.
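As a hedged sketch of that tool configuration and invocation — the option names here follow the sample config shipped in the tools directory as best I recall, so verify them against the sample file rather than trusting this snippet — it looks roughly like:

```ini
# nlbaas2octavia.conf (illustrative only; check the sample in the tools directory)
[migration]
# User ID of the Octavia service account (needed when migrating off the
# HAProxy/namespace driver)
octavia_account_id = <octavia-service-user-uuid>

# Copied from the [database] connection in /etc/neutron/neutron.conf
neutron_db_connection = mysql+pymysql://neutron:secret@203.0.113.5/neutron

# Copied from the [database] connection in /etc/octavia/octavia.conf
octavia_db_connection = mysql+pymysql://octavia:secret@203.0.113.5/octavia
```

```bash
# Migrate everything, a single load balancer, or a single project (illustrative)
python nlbaas2octavia.py --config-file nlbaas2octavia.conf --all
python nlbaas2octavia.py --config-file nlbaas2octavia.conf --lb_id <loadbalancer-uuid>
python nlbaas2octavia.py --config-file nlbaas2octavia.conf --project_id <project-uuid>
```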
There is a certificate driver which talks to Barbican — if you have something else, you can replace that driver — and we have the amphora driver, which talks to the amphorae. It carries the name amphora because when we started the project we weren't sure which form factor load balancers would come in — a VM, a container, or maybe even bare metal — so we used a very generic name, and we stuck with it.

In this slide we are showing DevStack configuration options in case you want to run DevStack with the amphora driver enabled, or with OVN, which is a new provider driver, or with the one from VMware. These are the three drivers that we know exist out there and that people can already start using. OVN is coming with Stein — I think some of the patches we wanted didn't land in Rocky, so a lot of those things are pretty new; probably it's Stein when they are usable. Oops. Okay, I'll just keep those open, sorry.

So as I said, we have three providers right now that we know of. Two are open source. One is our reference implementation, the amphora driver; that's feature rich — we test TCP, UDP, layer 7, and TLS termination on the listeners — and that's what we mostly test in our project. Then recently, around Rocky and Stein, we also got the OVN driver, which is open source. It supports layer 4, so TCP and recently also UDP; that's in Stein for sure. It's very lightweight, so there are no VMs, whereas with the amphora driver you have service VMs, and because of that it's very fast to provision — it can be a matter of three to five seconds, I was told. It uses the OVN you might have with Neutron, which sits on top of OVS, so you would have to run a cloud with OVN networking. If you have, say, Kubernetes on top of OpenStack with Kuryr enabled, this is very useful, because you don't need to create a couple of dozen amphora VMs — you can just use OVN and that should be enough. But it doesn't have layer 7 or member health checks, at least right now, which is okay for Kubernetes. You can think of the OVN option as what Amazon now calls a network load balancer, and the amphora driver would be more like an application load balancer. The rationale for having it is that Kubernetes often comes with its own built-in load balancer, so you only want to do L4 load balancing in front. And as we mentioned already, VMware also has support for Octavia in their upstream repository — there is a link in case you want to check the code; it was already merged. And that's the slide I previously presented.

Okay. So before we get to the demo, let's summarize. Octavia is more robust and resilient than Neutron LBaaS. When we had Neutron LBaaS with an Octavia driver, the syncing between the two databases was very cumbersome, so we decided we need just one source of truth. When you're running only Octavia and make that the source of truth, you get a much more robust installation. Also, the locking in Neutron LBaaS doesn't work very well, so if you hammer it with a lot of concurrent requests, you will definitely run into errors. The Octavia API, as we said, is a superset of the Neutron LBaaS v2 API, so it's compatible, but you have more features. As we said earlier, the plan is to retire Neutron LBaaS in September 2019, or the "U" cycle, whichever comes first, and we want you to migrate from Neutron LBaaS to Octavia soon, so you're ready when that happens.
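For reference, enabling additional providers on the Octavia side is a config change along these lines — a minimal sketch; the driver names and descriptions must match whatever each provider package actually registers on your installation:

```ini
# /etc/octavia/octavia.conf (illustrative sketch)
[api_settings]
# Providers a user may pick with --provider when creating a load balancer;
# each entry is "<driver name>:<description>".
enabled_provider_drivers = amphora:Reference amphora driver.,ovn:OVN provider driver.
default_provider_driver = amphora
```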
Then, in case you are not using one of our open source drivers or VMware, we are trying to get more vendors to develop their drivers for Octavia. We know that Radware has started work on it, and we heard that people want to do an F5 driver, so things are coming. And also RV — they're making a driver too? Okay, which is good. Which is good, yeah. And the installers: OpenStack-Ansible, TripleO, Kolla, I think OpenStack-Helm has it — so by now I think everybody except Charms, and I think Charms is getting Octavia support, but everybody else is supporting Octavia, so you don't have to install it by hand. You can use one of the installers. It has been GA in OSP, Red Hat's product, since the Queens cycle, OSP 13, and GA in the Rackspace product, so it's, yeah, everywhere.

Okay. We're always looking for more people to help us, so we always need developers, we need code reviewers, and we have lots of work available — bug fixing, or if you want to do Tempest testing, or documentation; if you don't want to code, you can help write documentation, so everything helps. If you're a load balancing vendor, write a driver for us. We also made that easier — we will talk about it tomorrow in the project update, but we now have a driver library and a driver developer guide and so on, and the session tomorrow is where we talk about the project update. So now let's check the time to see if we can run the demo, or whether there are questions — demo or questions, we have to balance.

Let me just prepare the demo. Do you want to do it? Yeah, you can talk about it, just fire it off. Just come here, because you know where it is. Just make it go — not this, maybe just click on the movie. Or just the slides, okay, yeah, still trying to figure out the software. Okay, now I have to play it here. Can I just move this from here? I don't know. Okay, you want to move it to this part, to the middle? Yeah, just move it to the middle. Okay.

So here we are. The whole demo starts with creating security groups and stuff for the web servers, and creating the web servers, and here we're getting to where we created the health monitor — the load balancer is already created in Neutron LBaaS — and now we're creating members and everything else, then I'm curling, and then I will be surprised. Okay. So I see the load balancer is working, but I'm not really happy, because I have two web servers and it's only hitting one of them, and while I'm trying to figure out why I only get one, I realize I forgot to add the other member.

So if you have Octavia as a provider driver in Neutron LBaaS, this migration will be basically instant, because we just update the ownership of some parts and move data from one to the other — or not even that, in the case of the Octavia provider driver. So the migration takes about a second; it depends on how many load balancers you have, but it's pretty quick. Here I'm setting up the configuration file, adding the Neutron DB connection, which, as I said, I get from /etc/neutron/neutron.conf; you take the connection string and put it in this config file. Then I'm going over to get the Octavia one, which I already had open, so it will be really quick. If you are migrating from the namespace driver, the HAProxy driver, to Octavia, then you also need to set the Octavia account ID there.
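As a hedged sketch of what the demo is doing — the names, subnet, and member addresses are invented for illustration — the Neutron LBaaS side looks roughly like this:

```bash
# Illustrative only: create a load balancer with the Neutron LBaaS v2 CLI
neutron lbaas-loadbalancer-create --name demo-lb private-subnet
neutron lbaas-listener-create --name demo-listener \
    --loadbalancer demo-lb --protocol HTTP --protocol-port 80
neutron lbaas-pool-create --name demo-pool --listener demo-listener \
    --protocol HTTP --lb-algorithm ROUND_ROBIN
neutron lbaas-healthmonitor-create --type HTTP --delay 5 --timeout 3 \
    --max-retries 3 --pool demo-pool
# Add both web servers as members (the demo forgot the second one at first)
neutron lbaas-member-create --subnet private-subnet \
    --address 10.0.0.10 --protocol-port 80 demo-pool
neutron lbaas-member-create --subnet private-subnet \
    --address 10.0.0.14 --protocol-port 80 demo-pool
# Then hit the VIP repeatedly to see the round-robin
curl http://<vip-address>/
```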
If you have the Octavia provider driver, like in this case, you don't even need to set it. Now that it's set up, I'm running the migration. That was it? Like that — it's only one load balancer, so it's fast. So we're testing: are the load balancers still working? Yes, they're still load balancing. Then we do a listing — the Neutron LBaaS listing — and you don't see any load balancers anymore, because they've been migrated, so they are gone. But when you do the OpenStack command, which queries the Octavia API, then you see the load balancer there, with the Octavia provider.

Now we are configuring the proxy plug-in, so I'm going in there, looking for lbaas — found it, lbaasv2 — and I'm adding -proxy. The migration is already completed; what we are doing here is just enabling the proxy so that your current users can continue using the Neutron command, the CLI, and they will see the load balancers as they were before. Then Neutron gets restarted so it loads the plug-in, which takes a little bit. Now, when you do another neutron lbaas-loadbalancer-list, it will show all the load balancers, just like it would if you did an openstack loadbalancer list, because the proxy will just proxy it through. That's the end of our little demo; okay, now questions.

So there are different ways you can migrate, right? You can migrate first and then enable the proxy if you want, or you can enable the proxy first and then migrate — it's up to you and your cloud to decide, and your users too, because they may be impacted by some potential downtime.

Any questions? Oh, yeah, we got one. So if I understand correctly, the whole Octavia implementation revolves around starting VMs, which then run HAProxy, right? Yes. So that is a lot more heavyweight than just running an HAProxy inside of a namespace. Is there any minimal implementation? Because, I mean, after migrating, I will have to run a lot more VMs than before, and I don't know how big these VMs should be. What are your thoughts on that?

Yeah, so the problem with the namespace driver, I think, is scalability. If you have a lot of load, it will bog down your network nodes. Moving it into VMs isolates you from that and allows you to have more separation between different tenants and between the loads, so that helps you — that's why we are doing VMs. Of course, it's a little bit more heavyweight, but as we have said, we are working on containers. People also do a lot of over-committing, so you have ways in Nova to make the impact less drastic: if you know it's a development environment, you might over-commit a lot, and then you can put a lot of load balancers there without having to give it a lot of hardware. But as I said, we are working on adding new things like a container driver and so on to make it easier. There are also container options we don't have today but could have, so contributions would be welcome. And if you don't want VMs, you could also use OVN, if what they support right now is good enough for you. The other good thing about having VMs, though, we should say, is that you get high availability with the active-passive scheme. If you use the namespace driver and one of the networking nodes goes down, it will try to reschedule onto other networking nodes, but there will be downtime, whereas the Octavia system is a zero-downtime system.
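A quick sketch of that before-and-after check, plus the service restart for the proxy plug-in — the systemd unit name varies by distribution, so treat it as illustrative:

```bash
# After running the migration tool, the Neutron LBaaS listing comes back empty
neutron lbaas-loadbalancer-list

# ...but the same load balancer is now visible through the Octavia API
openstack loadbalancer list

# Switch service_plugins from "lbaasv2" to "lbaasv2-proxy" in neutron.conf,
# then restart neutron-server so the proxy plug-in is loaded
sudo systemctl restart neutron-server

# Now the old CLI works again, proxied through to Octavia
neutron lbaas-loadbalancer-list
```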
So you can have two VMs: one is the active, the other one is the standby, and you can make sure that they land on different compute nodes. So that's it.

So how is that done — how is the distribution of these VMs done? Are there any options to configure, like, scheduling three VMs to three different availability zones, or is that done automatically? Is it a server group? Okay, the way we do it today is we use a server group, and you can specify an anti-affinity policy. We allow you to do soft anti-affinity if you want to, on DevStack or something, but with anti-affinity, Nova will then pick where to put it. You can also specify a flavor, which can tell Nova where to go. If you really want to go with availability zones, there's a patch up. The big issue with availability zones is that we then become a scheduler, because Nova is really poor at scheduling across them — it doesn't really have that availability zone construct — but we have a patch up where we will pick availability zones and make sure that we pair the two load balancer VMs across different availability zones, and there's active work on that. So that's the other advantage of using VMs: we leave that scheduling to Nova. Thank you.

Hi, my question is related to the other one. Is it possible to have multiple load balancers in the same amphora pair? No, not today. We only support one load balancing construct per amphora in our system. In reality, we will run one HAProxy process per listener on the amphora, but yeah, it's only one load balancer. Thank you.

So for instance, one thing we plan to implement in Stein is flavors. With flavor support, operators can create flavors and give them a name, and in the flavor they will be able to say which image from Glance they want to use to create the service VMs, the amphorae, so they can use a smaller VM. Also for the Nova flavor, they can choose a different one instead of the one that is applied to all the amphorae that Octavia creates, so you can pick a smaller one and use fewer resources for one load balancer. Right now, in DevStack, we use a flavor which takes one vCPU, one gig of RAM, and two to three gigs of disk — that's our standard default configuration.

Hi, that raises another question. If you start a load balancer with a small VM, do you have the capability to upgrade the VM later — maybe spawn a bigger one because you have bigger needs, and migrate between the two? Yes. Basically, the way it works today is that you specify the flavor for your installation, and then, when you decide you need bigger load balancers, you change the flavor for the installation. For the existing ones, we have something called the failover API, where you can just initiate a failover, and the system will fail over from the small VM to the big VM. The only downside today, because we don't have flavors yet, is that all new load balancers will then be created on the bigger VM. Okay, thank you.

No more questions? Okay, thank you so much. Thank you. Come on in.
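For reference, the active-standby topology, the anti-affinity behavior, and the amphora sizing discussed in the questions above map to Octavia configuration roughly along these lines — a minimal sketch; option names and defaults may differ between releases, so check the release notes for the version you run:

```ini
# /etc/octavia/octavia.conf (illustrative sketch)
[controller_worker]
# SINGLE or ACTIVE_STANDBY; active-standby creates an amphora pair per load balancer
loadbalancer_topology = ACTIVE_STANDBY
# Nova flavor used for the amphora VMs (e.g. 1 vCPU / 1 GB RAM / 2-3 GB disk)
amp_flavor_id = <nova-flavor-uuid>

[nova]
# Use a Nova server group so the two amphorae land on different compute nodes
enable_anti_affinity = True
# "anti-affinity" or "soft-anti-affinity" (handy for small DevStack environments)
anti_affinity_policy = anti-affinity
```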