It's very small. OK, so let's get started. My name is German Eichberger; I work for Rackspace. And my name is Carlos Goncalves; I work for Red Hat. And then we have our other colleague, Michael Johnson, who's our fearless PTL, on the slides, because we made them before we knew better. Anyway, let's get started for people who are new to Octavia. What are we? We are load balancing as a service for OpenStack. Nowadays I try to avoid saying "network load balancer," because nowadays that implies L4 only, and we do L7 with some of our drivers. Octavia provides scalable, on-demand, self-service access to load balancers, in a technology-agnostic manner, for OpenStack. Our reference driver, which we now call the amphora driver, is highly available, because you can run active-standby load balancers, and it scales with your compute: you can make as many load balancers as you have servers in your data center. Octavia was founded during the Juno release of OpenStack, and we have plenty of contributors: 78 from 29 companies. We started out as a sub-project of Neutron, which is why you sometimes see "neutron-lbaas." Then recently, well, actually a year ago, not that recent, we became a top-level project, so Octavia is now completely independent of Neutron. That's one of the reasons we are deprecating neutron-lbaas. And while we were under Neutron, and after that, because the foundation never updated the survey, software load balancing has always been the number one feature people wanted in OpenStack. So, for Rocky, which was released a couple of weeks ago, we have a few features that you can start using now. The first one we highlight is provider driver support. Before, we only had one driver, which was "the Octavia driver"; now it's called the amphora driver.
It's the same implementation, we just renamed it to make it easier for people to understand: we have the Octavia project and then we had "the Octavia driver," which was a little bit confusing, so we finally renamed it to the amphora driver. And the provider driver support allows third-party providers to plug in their own drivers. So you could have, say, VMware, who already wrote their driver, and there is also the OVN driver, an open source implementation you can find in the OpenStack project repos. Then we have UDP support. Before, we only supported TCP; now we do TCP and UDP, so you can enable other use cases like IoT, since many IoT applications run on UDP. Another one: listener timeouts are now exposed via the API. Before, there was a default that users could not change, or only admins could change in the Octavia conf file. Now users can define these timeouts when they create listeners: TCP timeouts, member connection timeouts; there are four or five timeouts you can now specify in the API. Then there is also support for backup pool members. When you have a load balancer with a couple of members, and all the regular members are down, the backup members take over. For instance, one use case German was telling me about, actually today during lunch, is that you can have a backup member that just shows a page saying "sorry, we're down for business, come back later." We also extensively extended our Tempest plugin repo, octavia-tempest-plugin, with way more API and scenario tests. Then we have layer 7 and header insertion support in the Octavia dashboard. That was already available through the Octavia API and client; now you can also configure those using our dashboard, which is a Horizon dashboard. Also in the dashboard, you can see live status updates.
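For reference, here is roughly what those Rocky features look like from the CLI. This is a sketch, assuming the python-octaviaclient plugin is installed; the names lb1 and pool1, the ports, and the member address are placeholders:

```shell
# Assumes python-octaviaclient is installed and a load balancer "lb1"
# with a pool "pool1" already exists; names and addresses are placeholders.

# UDP listener (new in Rocky), e.g. for an IoT workload:
openstack loadbalancer listener create --name udp_listener \
    --protocol UDP --protocol-port 5683 lb1

# Listener timeouts are now settable per listener (values in milliseconds):
openstack loadbalancer listener create --name tcp_listener \
    --protocol TCP --protocol-port 80 \
    --timeout-client-data 50000 --timeout-member-connect 5000 \
    --timeout-member-data 50000 --timeout-tcp-inspect 0 \
    lb1

# Backup member that only receives traffic once the regular members are down
# (e.g. a "sorry, we are down" page):
openstack loadbalancer member create --name sorry_page \
    --address 192.0.2.50 --protocol-port 80 --backup pool1
```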
It used to be that when you created a load balancer it would sit in PENDING_CREATE and never update, but now it will update automatically, so you don't have to hit refresh any longer. It was even shown at the summit, in a keynote: they already had that patch there, so you do not need to hit F5. F5 the keyboard shortcut, not the vendor. OK, the next one is the migration tool. We presented this yesterday; there is a video recording in case you missed it. It's a migration tool from neutron-lbaas to Octavia: you can migrate your load balancers, and you can even migrate from the namespace (haproxy) driver to the amphora driver, so you can even do a cross-provider migration, at least between those two; the other ones we did not test. OK, we also have a new grenade job, which tests upgrades, so we can assert the OpenStack upgrade support tags. We also reduced the amphora image size for the Ubuntu-based amphora image. And finally, Barbican ACLs are now set automatically. If you wanted TLS-terminated listeners before, you would upload the certificates to Barbican and then add the Octavia account ID to the ACL in Barbican. That meant the user needed to know the user ID of the Octavia service account, and would have to ask the admin for that ID. Now it's just transparent: Octavia will do that for you. You just upload the data to Barbican, create the listener, and point it at the Barbican container ref, and Octavia, behind the scenes, will do the ACL part for you. OK. For those who don't know, there's this beer vessel we call a Stein in Germany; that's why it's on the slide. So what are we planning for OpenStack Stein? We want to do Octavia flavor support, which is a big one for us, since we get a lot of asks for it.
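The Barbican flow described above looks roughly like this from the CLI. A sketch, assuming the barbican and octavia client plugins are installed and that server.p12 is your PKCS12 certificate bundle; all names are placeholders:

```shell
# Assumes the barbican and octavia CLI plugins are installed and server.p12
# is a PKCS12 bundle containing your certificate, key, and chain.

# 1. Upload the certificate bundle to Barbican:
openstack secret store --name tls_secret1 \
    --payload-content-type 'application/octet-stream' \
    --payload-content-encoding base64 \
    --payload "$(base64 < server.p12)"

# 2. Create the TLS-terminated listener, pointing at the secret ref
#    returned by step 1. In Rocky, Octavia sets the Barbican ACL for its
#    service account automatically -- no more asking the admin for
#    Octavia's user ID:
openstack loadbalancer listener create --name https_listener \
    --protocol TERMINATED_HTTPS --protocol-port 443 \
    --default-tls-container-ref <secret_ref_from_step_1> lb1
```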
People ask how you can scale Octavia, and it would be better if you had multiple flavors, bigger VMs and things like that. So that's high on Michael's agenda, and he's working on it. We also want to implement the Cloud Auditing Data Federation standard, CADF; you can Google that. Basically it means we report the GETs and PUTs and so on against the API server in an exchangeable format. We're also going to add a redirect-prefix L7 policy. What this means: often people build load balancers with an HTTP endpoint that redirects to the HTTPS endpoint, so the user can go either way but always ends up on HTTPS. The redirect-prefix action allows us to send back a redirect instead, so the browser goes there itself rather than us just forwarding the request. And client certificate authentication. We want to do a lot of stuff with certificates. Right now, with our TLS offloading support, we don't really look at the client certificate, so anybody can reach those sites securely. We want to add client certificate checks, so we can cross-reference against a certificate, and only users with the right client certificate can get to the TLS-protected pages. That's coming. We want improved TLS protocol support, so we can offer newer, more secure protocol versions. And another big thing is backend TLS re-encryption. Basically, for those who are familiar with it: you come in with TLS, we terminate it at the load balancer, look at the headers, and then we re-encrypt, potentially with a different certificate, because sometimes people have different certificates, and send it on as TLS to the backends. Why would people do that instead of a simple TCP pass-through?
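As a point of comparison, the existing API can already do a fixed-URL HTTP-to-HTTPS redirect with the REDIRECT_TO_URL action; the planned REDIRECT_PREFIX action differs in that it would preserve the original request path. A sketch, with http_listener as a placeholder name:

```shell
# Today's L7 policy API: redirect everything on the HTTP listener to a
# fixed HTTPS URL. "http_listener" is a placeholder for your listener.
openstack loadbalancer l7policy create --name redirect_to_https \
    --action REDIRECT_TO_URL \
    --redirect-url https://www.example.com/ \
    http_listener
```

The limitation is that REDIRECT_TO_URL sends every client to the same URL, whereas a prefix redirect would keep `/some/path` intact and only swap the scheme and host, which is what the HTTP-to-HTTPS use case really wants.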
Well, a common problem, which we addressed, is that when you just do TCP pass-through of an HTTPS connection, we don't know which timeouts are requested. So a common problem is that we shut down the connection before it's done, because we can't read the timeouts the client sends us. So either you accept that, or the other naive way is that you just set the timeout to 10 minutes and hope you don't get DDoSed. So that's why. Metadata tags: we will allow inserting metadata from the load balancer into the request. And we also want to increase our test coverage for IPv6, UDP, and TLS offloading. Right now we don't have a lot of IPv6 tests, and we want to add some, because maybe IPv6 is arriving now. There's always a joke that IPv6 is just around the corner, but maybe now is the time, so we want to be ready for it. What's going on beyond Stein, in the far future? We are still committed to doing active-active with auto-scaling, hopefully one day. We have patches up for the OVS-based pieces and so on, but that's probably beyond Stein, unless somebody wants to do it. So if somebody wants to work on it... We also need to do some log offloading. Right now the logs just pile up inside the amphora. We improved on that, because in Europe you now have GDPR, so you can switch off the logs: no logs, no problems. But we also want to be able to ship them somewhere, so you can handle them, remove whatever you want, and look at them. Health monitor content checking: right now we do GETs and look at the 200s, but we also want to do somewhat smarter things with that. We want additional health monitor protocols too, which would be good. And then what always comes up is ACL firewall support.
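The health monitoring described as available today, GETs checked against response codes, looks roughly like this. A sketch; pool1 and the /healthz path are placeholders:

```shell
# An HTTP health monitor for the members of "pool1" (placeholder name):
# GET /healthz every 5 seconds, 3 second timeout, mark a member down
# after 3 failures, and treat only a 200 response as healthy.
openstack loadbalancer healthmonitor create --name web_hm \
    --delay 5 --timeout 3 --max-retries 3 \
    --type HTTP --http-method GET --url-path /healthz \
    --expected-codes 200 pool1
```

The planned "content checking" would go a step further than matching status codes, e.g. inspecting the response body, which today's health monitors do not do.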
The underlying problem there is that you have two modes: you can put the VIP on your tenant network, and then all your tenants can get to it, or you can put it on the internet, and then everybody can get to it. That's kind of too limited, so we want to put an ACL on it, so you can restrict a little bit further who can access the load balancer. The other thing we have is neutron-lbaas end of life. As I said earlier, we moved out to being a top-level project. Then we tried to support both, and we noticed it was very difficult to keep both databases in sync. We rewrote our API to be compatible with the LBaaS v2 API, and by comparing the two we also learned that the locking on the neutron-lbaas side is very poor: if you have a lot of concurrent requests to the LBaaS API, you will definitely run into errors and problems. So we decided it's not worth fixing all of that, and we have to deprecate it. OK. So, just to make sure you got the message: neutron-lbaas is deprecated. It got deprecated in the Queens release cycle, we do not accept any new features, only bug fixes, and we plan to retire it, as well as the neutron-lbaas dashboard, in September 2019 or the "U" cycle, whichever comes first. We put together a page with an FAQ, where you can see how to migrate; it covers the questions people have already asked us, and you can probably find yours on that page too. We also gave a whole talk on that, so we will refer you to it. Let's quickly go into the cross-project updates. That's a picture Carlos shared with me from the Tokyo Summit, for those who were there. We still want to do the container thing for Octavia, so we are trying to get the Zun project to help us.
One of the big problems we have is that our members can't restrict who's accessing them, because our IPs always change. So we want to do some firewall integration so we can lock that down a little bit more. We also went ahead with the RBAC rules: we implemented them the way Nova did, so we have admin-or-owner based access if you look at it. But then nobody else did it that way, so we're working with the Keystone team to convince them that the way we did it is right, or something along those lines, so that we don't have to rewrite a lot. And then there's Ryu. Basically, the maintainer of Ryu, which is a library to talk to OpenFlow, doesn't want to do it anymore, so Neutron is taking it over. We want to use Ryu for our active-active patches, since that's OVS-based: we want to talk to OVS or OVN via Ryu. So we are trying to collaborate with Neutron on that. You can always give us feedback; there are lots of ways. We are on the IRC channel #openstack-lbaas, with a weekly meeting, and we have a mailing list, which Michael watches closely. We have migrated to Storyboard, so if you find any bugs, please submit them to Storyboard. One more thing: we need your help. We have a lot of work and a very small team. Mostly me, you, and Michael. Yeah, we have five cores. Five cores, and a bunch of them do this in their spare time; they're not even paid by their companies to be cores. So we have work available here: bug fixing, OpenFlow, Tempest. And if you're a load balancing vendor and want to have a great provider driver, you can also approach us; we will help you with that. So we made it to the end, and we have three minutes for questions. Yeah, Q&A. OK, so: multiple availability zone support. One of our philosophical problems with that is that the term "availability zone" is not really well defined inside OpenStack; it's a pretty big hammer.
That said, we have a patch up which adds multiple availability zone support to Octavia, and this patch has been used by two companies already. One thing we don't like about it is that it turns us into a scheduler. We don't really want to be a scheduler, picking availability zones and so on, because we would really like Nova to do that for us. But, yeah, anyway, we probably should merge it, because we get asked a lot, and maybe that will help. The other problem, as I said, is that availability zones are not really well defined, so if we merge this thing, it might not work for everybody, since it assumes a certain setup of your data center. So yes, there is a patch upstream, it's open, and I can point you to it if you want. No, we definitely would love to have that, but we also would like OpenStack to do a better job with availability zones before we become a scheduler. OK, more questions. Oh yes, a question was asked about log offloading. The current thinking we have is that we might add, when you create a load balancer, an IP or something that you want to send the logs to. Then a user, or an operator, could put an IP in the tenant network and just fire up a VM which aggregates the logs for them; that would be one option. Because we talked to some providers, we talked to Mohammed, and our original plan was to just ship the logs to the operator, and Mohammed said no, he doesn't really want to get into the business of writing software which splits them by tenant and so on. So we might go that way. OK, thank you. Oh, one more question, OK, go ahead. "How should we update the currently running amphorae? For example, there is some security issue, or an upgrade. Sometimes we have to upgrade running amphorae, right?"
"So is there a proper way to do that?" Oh yes, totally. We have failover APIs; there are two ways. There's load balancer failover: you basically upload a new image, then you go through each of your load balancers and issue a failover command, and it will fail them all over. Or you can be a little more selective: the amphora API also has a failover command. There is a documentation page, I think, that talks about this. Then there are different ways to minimize downtime. You can have spare pools, where you already have standby amphorae built, and when you fail over... "With active-standby?" You can't use the spare pool with active-standby. But if you have a single topology, yeah. "So we cannot do that without any downtime?" No, no. If you have active-standby, there is no downtime: it will fail over the active one and then fail over the standby one, so there won't be downtime. And if you have a single topology, there will be limited downtime if you make a ginormous spare pool, because then all we have to do is reconfigure, and that only takes a few seconds. Yeah, thank you. OK. OK, thank you. Thank you, guys.
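The two failover paths mentioned in the answer look roughly like this from the CLI. A sketch; the IDs are placeholders, and the per-amphora failover subcommand may require a newer client than what shipped around Rocky:

```shell
# After uploading a new amphora image to Glance (tagged so Octavia picks
# it up for new amphorae), rotate the running ones. IDs are placeholders.

# Fail over one load balancer; with an active-standby topology the
# amphorae are replaced one at a time, so there is no downtime:
openstack loadbalancer failover <lb_id>

# Or be more selective and fail over a single amphora:
openstack loadbalancer amphora failover <amphora_id>
```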