So, hello. Good afternoon. We are here to give you an update on the Octavia project. This is the last talk in the big Octavia series at this summit. My name is German Eichberger. I'm with Rackspace. And I'm Adam Harwell. I'm with GoDaddy. And our fearless leader Michael Johnson, our PTL, did most of the slides. Unfortunately, he is unable to make it due to budgetary issues. So, hopefully we'll see him in the future, but it's just us today. Okay. And one more thing: they are recording this and putting it up for generations to come to view. So, we will take questions at the end; please don't ask them in between.

Okay. Let's get started. So, the first thing is: what does Octavia actually do? Octavia is basically the project for network load balancing; it brings load balancing to OpenStack. We provide scalable, on-demand, self-service access to a network load balancing service in a technology-agnostic manner for OpenStack. What we mean by that is we provide an API which is the same regardless of what's in the back end. The name Octavia is kind of overloaded: the project is called Octavia, and so is the reference load balancer it ships. But whether you use that reference load balancer or an A10, VMware, or F5 back end, the API is basically the same.

And Octavia, as I said, is the reference load balancing provider. It is highly available, and it scales with your computing environment. Basically, it fires up VMs or similar, and depending on how big you make them, the more you can load balance. This project has a somewhat long history. Not as long as some of the others around here, but we were founded during Juno. We've got 65 contributors from 28 companies for the latest release, which is awesome. So, thank you, everybody, for contributing. We may have some contributors here, so thank you a ton. We started out as a scalable load balancer driver for Neutron LBaaS.
But now we do all of network load balancing for OpenStack. So, we used to be a driver; Neutron LBaaS is now kind of merging into Octavia. We used to be a sub-project of Neutron, and now we are a top-level OpenStack project as of Ocata. So, that's awesome, too. We're really excited about that. And for two user surveys in a row, we were the number one networking feature that people are actively using, interested in using, or looking forward to using. So, I think we're hearing your feedback, and definitely go fill out those surveys when you get them. It's really useful for us to know that we're actually something you want to use.

Okay. Then some key features of the load balancing project in OpenStack and Octavia. We have flexible network topologies. That means we can plug into a public network or a private network, work with floating IPs, flat subnets, whatever you need. Our reference driver, which we also call Octavia, provides a highly available load balancer and uses VMs for that right now. And we have a highly available and scalable control plane. It's shared-nothing, so you can scale it however you want and add more instances. We support Layer 7 load balancing, we support session persistence, and we also support TLS offloading. For TLS offloading, we require you to use the Barbican project to store the certificates, because we really didn't feel that we could do it in a secure way ourselves. Barbican is the secure secret store in OpenStack, and we're leveraging it for our certificate management and storage.

All right. So, for Pike, our new features and enhancements: we are wrapping up right now merging the Neutron LBaaS v2 API into our standalone Octavia v2 endpoint. So, at this point, the LBaaS v2 API that hopefully you're already using through Neutron LBaaS will be available as part of Octavia, and you can use Octavia directly.
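To make the Barbican-backed TLS offload concrete, here is a hedged sketch of what the flow can look like from the command line. All names, file paths, and the secret format are placeholders, and the exact flags may differ by release:

```shell
# Illustrative only: store a certificate bundle in Barbican, then
# reference it from a TERMINATED_HTTPS listener.
openstack secret store --name my-tls-cert \
  --payload-content-type application/octet-stream \
  --payload-content-encoding base64 \
  --payload "$(base64 < server.p12)"

# Use the secret href returned above as the TLS container reference.
openstack loadbalancer listener create \
  --name tls-listener \
  --protocol TERMINATED_HTTPS \
  --protocol-port 443 \
  --default-tls-container-ref <secret-href-from-above> \
  my-lb
```

The point of the split is that Octavia never stores key material itself; it only holds a reference into Barbican and fetches the secret when it configures the listener.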
And we have a Keystone service type for that now, called load-balancer. We also have a new client that will go directly to that load-balancer service type. And it's part of OpenStack Client, finally. So, no more deprecated neutron client. You can see the docs for both of those things right there. We also have support for Octavia deployment with OpenStack-Ansible, which is cool. We know a lot of people were asking us how to actually deploy this thing; they don't want to just look at the DevStack plugin. So, there are roles for OpenStack-Ansible now that will allow you to deploy this. And I think Kolla and Kolla-Ansible have roles for building the Docker stuff as well. And we also have support in TripleO coming in Pike. So, that's awesome. There are links to docs for all of those; we'll make these slides available after the session.

Okay. So, let's talk about the release themes. The major focus in Pike was to increase modularity and improve user experience. We feel user experience goes up with the API migration because now we don't have two databases anymore; we only have one database as the source of truth. There have been problems with syncing them: the LBaaS database on the Neutron end sometimes got out of sync with the Octavia database. That's something that is improved in Pike, and it will improve user experience. The other thing: right now you have to use the neutron client, which is deprecated, so we are moving everything into the Octavia client, and we moved the Octavia client into the OpenStack client. That also makes the user experience better, because you can now use the same OpenStack client people use for everything else. And modularity: we are basically not in Neutron anymore, so it's easier to install us in a system. We don't need to be in lockstep with Neutron anymore; before, you had to install the same version of load balancing as you had of Neutron.
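To give a feel for the new OpenStack Client plugin, a minimal end-to-end session might look like the following. Resource names and the subnet are placeholders, and flags may vary slightly by client version:

```shell
# Create a load balancer on a tenant subnet, then wire up a listener,
# a pool, and one back-end member. All names are illustrative.
openstack loadbalancer create --name lb1 --vip-subnet-id private-subnet

openstack loadbalancer listener create --name listener1 \
  --protocol HTTP --protocol-port 80 lb1

openstack loadbalancer pool create --name pool1 \
  --lb-algorithm ROUND_ROBIN --listener listener1 --protocol HTTP

openstack loadbalancer member create \
  --subnet-id private-subnet --address 192.0.2.10 --protocol-port 80 pool1
```

Note that these commands go straight to the Octavia v2 endpoint via the load-balancer service type, rather than through Neutron.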
And now you can keep those out of sync, which makes your deployment more modular. So, you don't have to upgrade Neutron to get the latest load balancing anymore.

So, our release themes for Queens are a little bit similar. We are still focusing on user experience a lot, and also on interoperability and scalability. One of our focuses for Queens, again, is that we want to be able to scale a lot better, so we're hoping we can get our active-active stuff going. And we're really focusing on user experience this time, I think on the UI as well. Our UI has been kind of stale for a few releases, but we're hoping we can get it beefed up a little bit, a little more featureful, and really be able to expose all of what we can do. So, that's kind of our focus for Queens.

Okay. Then, looking further into the future, one of the biggest things we'd like to have in Octavia is active-active load balancers. We have patches submitted right now which would bring this capability to us, but the people who wrote them have left, so we have to kind of pick that back up. And we're working with other partners to help us realize our vision for active-active load balancing, which will allow you to scale the load balancing system horizontally. We also want to add support for vendor drivers on the new Octavia v2 API. Right now, the vendor drivers are in LBaaS v2, and we haven't started on porting things over, which is a big goal for Queens, but we definitely want to do that. We also want to add load balancing flavors. There are basically two things you would do with that. One: for the Octavia load balancer, right now you have to pick one Nova flavor for all the load balancer VMs you are creating, so you have to do one-size-fits-all.
When flavors come, you can basically have different-sized VMs based on the load balancing function you're looking for. For instance, if you want to do TLS offloading, you might want a flavor which has more CPU or something like that. For the hardware vendors and the third-party vendors, flavors will allow them to unlock specific features on their load balancers which we don't support in our API, because the API basically only supports things which everybody can do, and some vendors have specific things which are unique to them. So, flavors will allow them to unlock that, but if you use that, you run the risk that if you switch vendors, it won't port over. The last thing we want to do, and Adam talked about that, is to improve our Horizon dashboard. Right now, it doesn't let you define the Layer 7 load balancing things, but we want to add that in Queens and bring it up to parity with everything else.

Yes, and looking forward, we still want to focus a lot on scalability, and I think the active-active stuff is going to roll into that as well. And also resiliency: I don't know if we have a slide on it, but we've been looking at jobboard for a while. We use TaskFlow, and being able to pick up jobs with jobboard would increase our resiliency a lot. Right now, if a worker dies while it's in progress on something, things can get a little strange, so jobboard should let us pick stuff up a lot better. And then manageability: we have designs for an operator-style API, allowing operators to do things like manually fail over load balancers. Actually, there's a patch for that already in Pike that hasn't merged yet, but we're getting there. And by the R release, we want to have something a little more fully fledged out. And again, as always, user experience, because for us, the user experience is the most important thing there is.
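For the operator-style API mentioned here, a manual failover might eventually look something like this. This is a sketch assuming the proposed patch lands roughly as designed; the client verb and endpoint path are not final:

```shell
# Hypothetical operator action: rebuild (fail over) the amphorae behind
# a load balancer, e.g. to move it off a compute host being drained.
openstack loadbalancer failover lb1

# Or directly against the Octavia v2 API (endpoint and ID are placeholders):
curl -X PUT -H "X-Auth-Token: $TOKEN" \
  https://octavia.example.com/v2.0/lbaas/loadbalancers/<lb-id>/failover
```

Combined with the jobboard work, the idea is that an operator can trigger a rebuild and trust the control plane to pick the work back up even if a worker dies mid-flow.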
We want people to be able to use this easily and actually feel like it's not a roadblock for them to create a load balancer. It should be fairly simple.

Okay, then basically, we have a lot of questions. We need your help, and we really need you to help us. We want to know what you're using in production so we can serve you better. We want to know which load balancing features you would like to see us implement so we can prioritize our work. We also have requests for developers: you can never have enough people working on something. We could really use some help on the QA end and on the UI. And we also need help with core reviews. If somebody wants to become a core reviewer, we definitely have open positions there as well. Yeah, if you're looking to be a core reviewer for LBaaS, we will definitely help you get there. Just start reviewing, come by, talk to us. We definitely have slots open for cores. We have, especially Michael, spent a lot of time mentoring people when they approach us and meeting with them.

So now we would like to open it up for your questions. If you have any questions for us, please step to the microphones. Since it's recorded, it's important that we do it all over the microphone. Again, anyone who wants to tell us: if you have a deployment of this, what are you using exactly? Are you on Neutron LBaaS v2 at least, hopefully? Do you have specific concerns about the upgrade path to Octavia once Neutron LBaaS is deprecated? What vendor drivers are you using that you want to make sure we support properly? And like I said, what features do you think we're missing that you'd really like to see? Because I know we've heard a lot about re-encryption on the back end when we do TLS termination, so that's one of the things we're looking at.
And I know there's some other stuff, but what would you like to see? Anybody? We can wrap this up now and head to lunch if no one's got anything. Or we can ask people questions: are you using LBaaS v2 now?

Hey, so I think we're using LBaaS v2. We have a hardware vendor plugin we're using, which is A10. Some of the things we are interested in are X-Forwarded-For, so header insertion in general. And what was the other thing there?

I thought we had X-Forwarded-For. Yeah, so at least in Pike, maybe Ocata, but definitely Pike, we added more customizable header insertion stuff. It's possible to add even more, but I know X-Forwarded-For is one of those. What cloud are you running, or what release are you on? We are on Mitaka right now. Okay, yeah. So probably by Newton or Ocata, you should be able to see that.

And in the general use case, is the flavor support supposed to be helping us pass vendor-specific features in? Yeah, so basically with flavors, one of the use cases that German mentioned was: when you do TLS offloading, you might need a little bit beefier box, or you might need to deploy to a different cell or something, depending on what hardware you have. And that'll allow us to say, I need the flavor TLS, maybe. The operator defines what those flavors are, but it would allow you to do that. Or if you're using a hardware vendor and we don't expose something, you could use a flavor that enables advanced features.

Is it on the load balancer level or the listener level? I believe it's on the load balancer level, yeah. The flavor will be there. You would create a load balancer type, and you could even go so far as to say, I would like a software load balancer versus a hardware load balancer, if that's how you want to set things up.
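The header insertion that came up in this question is enabled per listener; a rough sketch of the client syntax follows (names are placeholders, and the exact flag spelling may differ between releases):

```shell
# Enable X-Forwarded-For insertion on an HTTP listener so back-end
# servers see the original client IP. lb1 is an existing load balancer.
openstack loadbalancer listener create --name web-listener \
  --protocol HTTP --protocol-port 80 \
  --insert-headers X-Forwarded-For=true \
  lb1
```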
But if you're really interested in flavors, there's a spec currently up, and you can comment on that. I know the spec needs a little rework right now, so the state it's in is not the final state it will be in. But that means right now is a great time to participate in that discussion. Yeah, we're really interested in supporting features that might not even be in the API. Yeah, that's why we want to do flavors. All right, cool. Thank you. Anything else?

Now, we can specify only one image for the amphora, right? Yes, currently. So are there any plans to support multiple images? Oh, different images. That's an interesting question. If we do that, it would probably be part of the flavor support. We hadn't thought about that, but I think we probably need to, since the images are tied to the size of your VM. Yeah, now that you mention it, at least I have that in my head now. We should do that as part of flavors, so I think you will probably see it as part of flavors. Okay, thank you. Yeah, thank you. That's an exact, perfect example of what we'd like to hear, because now we can make sure that that's in there.

Anything else? Any other questions? You want to see me do a horrible dance up here? Because I will. I'll do this until you ask me a question. Oh, down the back. Very good, very good.

It was regarding deleting load balancers through Horizon, maybe not Octavia, but the previous version. Would it be possible, using Octavia, to do it through Horizon rather than, say, the CLI? Yeah, so we do have the Horizon plugin for Neutron LBaaS. It's the neutron-lbaas Horizon dashboard.
Yeah, it's a Horizon dashboard for Neutron LBaaS, and it works. We had stopped doing releases because nothing changed, and we recently picked that back up because it confused people. So we now release the dashboard, even if nothing changes, alongside our other releases. We also tested it recently, and it's working. It just doesn't have all the features the command line client has, but you can create load balancers and you can create listeners; the biggest omission is L7. If you didn't see it, it's possible you're on one of the releases where we didn't cut a release of that, because I know there was one where we didn't, and like you said, that caused a lot of confusion. So we've been cutting the releases now, but yeah, there is a neutron-lbaas dashboard that you should be able to use in Horizon.

But is the dashboard carried on over to... Yeah, what we did is we cloned the neutron-lbaas dashboard project into octavia-dashboard, because it's the same API, and we're just going to be maintaining that one moving forward. Okay, thank you. Yep.

If nobody has anything, I know I will be around, obviously, for the rest of the conference. So if you see me around, feel free to ask questions. You can reach us on IRC practically all the time. And the other thing we might want to add: we see a lot of our vendor friends here, so if you have questions for specific vendors, it's a good chance right now too.

Oh, there's another question. Oh, yes. My second question is: are there any plans to boot the amphorae, for instance, by using Cinder volumes? That is an interesting one. I don't know if we've considered that. No, we haven't considered Cinder volumes. I think it's useful for migration, for instance, when we have to update compute hosts. Yeah, so that's an interesting one. Generally, our strategy on amphorae, and our recommendation, is that we treat them kind of like disposable entities.
So if you do a migration, we generally say: set things up so that all the provisioning will happen on the new cells or whatever, and just manually fail over the old ones. And hopefully, as part of the operator API, we'll have a more automation-friendly way to do this, but basically go through and shoot the old amphorae, and they'll be recreated automatically on the new stuff. If you're running in single topology, that's a little scary, obviously, because you'll have some downtime. Well, there is some; it exists. If you're on active-passive, that's a lot less scary, because that way, at least, it'll swing over very quickly to the new one. If you do want to live-migrate, it is an interesting point. It's not really what we recommend as a strategy, but people want to do all kinds of things; it's probably a valid use case. What would be best is if you write us an RFE, a request for enhancement, and then we can discuss it in our meetings. We have a Launchpad; we didn't put that slide in this deck, but drop by the Octavia Launchpad and put in a bug or an RFE, whatever, and we'll definitely take a look at whether it would be really hard to do. Absolutely. Thank you.

Just a comment on the gentleman's question: we do block live migration, so live migration without shared storage, and that actually works really well. And the other thing is about booting instances from Cinder. There's a lot of talk about that right now; I think it actually already works, like having the Glance image be directly a Cinder volume. There was a talk, just the previous session, from one of the Cinder devs about that. I didn't know, but I think it's actually possible already. I'm pretty sure it's possible to do the Cinder things. On the other hand, there's always the question in OpenStack of how much of OpenStack you are using, or whether you want to go without OpenStack.
We also get requests from people who say, hey, can we use Octavia without OpenStack? Can we use it with VMware? Can we use it this way and that way? So we get both sides, and it's kind of tough for us to strike a balance. Fortunately, the way we're architected, everything is a driver, so you can do pretty much anything you want with most of the stuff. It's just that we really only have the reference driver for each thing to make the system work the way it works now. But if anyone has development time and is interested in any of that kind of stuff, it shouldn't be that hard to do, and we're definitely willing to help out. So again, come talk to us. We are really friendly, seriously. Come by and we will help you get things working.

Is UDP support high on your list of things to accomplish? It is not. UDP support is a tricky one. Right now our default back end is HAProxy, and it doesn't have UDP load balancing support built in. We could add it as an optional protocol; there's some interesting stuff around that, like how do we add things that may or may not be supported by back ends. The other thing we found is, we get asked for UDP support a lot, and then we go and ask people what they actually want to do, and often they don't need a load balancer for that. We have only come across very, very rare use cases where they would have absolutely needed it. So that's why it's not high on our agenda. Yeah, I mean, there are valid use cases. There are valid use cases, but as I said, people come to us and say, oh, we need a UDP load balancer, and we look into it, and then they can do it a different way as well. Yeah, I believe that. I know that there are use cases in the world. But as far as our roadmap goes, it has to be pretty low priority, just because it's like one in a hundred or less.
That's how few people actually ask for it. And we would love to get more back ends besides just HAProxy. So if you know of a good back end that we could implement that supports the rest of this stuff, but also UDP, we'd love to see that integrated. Or one that supports only UDP. Yeah, and I know some of our vendors may support UDP on their end, so being able to just expose it as a protocol is something we could look at doing. We just have to figure out how to do that.

The UDP thing: it's quite simple to add another protocol. We know that it's simple. The key question was how we can support back ends that do not support UDP, in the sense that you could create a non-valid configuration, and what happens then. I mean, this is quite simple in a sense; it shouldn't be rocket science. Yeah, so we can look at that. I don't know if that should be something we look at with flavors, or whether we can handle it in a different way. Doing that with flavors wouldn't make much sense. No, it has to be a protocol. The reason we haven't done it in the past, even though we know most vendors support it... I guess it would be up to the implementation to reject whatever it doesn't support. Yeah, and honestly, thinking about it now, it's probably pretty simple, so we could probably expose it. The big problem in the past has been that we could only expose things where we have a reference implementation. Since we don't have a reference implementation for UDP, we couldn't expose it, but now that it's our own project and not in Neutron anymore, we might have that flexibility. And the API also supports stuff that isn't in the reference implementation. Yeah, but seriously, I think it wouldn't be that hard. No, it's not that hard. Absolutely correct. It's not that hard. In the past it was completely political.
And, you know, I did get at least two of the three patches that I promised you in review now, rebased and kind of ready to go. I still have to do providers, but the other two are there. And I can look at that one if I have time. Yeah, I probably need to check our priorities. I mean, I can propose the code either way; we'll see what happens.

Hi, I don't know if it already exists or if there are still plans: what about active-active? Oh, yeah, there are huge plans for that. Basically we have patches from IBM, but those people kind of dropped away, so we have to figure out how to rescue that. Basically one of us would have to take it over, or several of us; that's one way of doing it. There's also another company which would really like to have active-active, and they are talking about contributing code to do it. The IBM solution uses OVS on an extra VM to fan the traffic out. The other solution, from this company, wants to use their network fabric to do the active-active. And we would love to have both. But we've got to see how the chips fall. There are a couple of patches that are kind of the base for that framework, and there's a spec; did we actually merge that? It could probably use a little review at this point, maybe some updates. But yeah, there are a few patches that are sort of the base of that whole framework for active-active. If we can get those in, it should be easier for people to add more methods, because active-active is one of those things that honestly tends to rely really heavily on an individual operator's network fabric and the way they're doing things, because everybody does things differently. So we can kind of get you halfway there, but it's hard to do something that's generic enough to work anywhere.
I know I'm working on something for that that involves using floating IPs. Well, the biggest problem in doing active-active with the network fabric is that it's hard to replicate in DevStack, so you run into those problems. It makes it a little more complex, because stuff that you can't test is difficult to maintain. But we are working on that, and we could use help with it. Yeah, it was on one of our slides; I think it was one of our goals for Queens, actually. We definitely want to have something to show for that.

Hi. So from your comments now, I understand that what you want to do is distribute the load towards a listener across several VMs. Is that correct? Yeah, the idea is you have one IP address, a DNS name, and then it gets fanned out to as many virtual machines as are necessary to handle your load. There are a few wrinkles on that, but that's basically the basic idea. Yeah, that was the IBM approach. No, that's also the networking approach. Whether you use a VM to do it or you use the network, the result is the same. Using a VM to do that fan-out with OVS was the IBM approach; the other approach is, I think, using something like BGP, using the network fabric to fan it out. But the end result is you will have N VMs handling your load.

When you were showing it in the slides, I was assuming that your idea was to distribute the listeners. Let's say you have a load balancer, a VM load balancer with 10 listeners; then distribute the listeners. No, we didn't plan that. We planned to have every listener that's on a load balancer available concurrently, actively processing traffic. Yes, but scaled. We could, though; if that's something you want, you can definitely propose it, and we can think about whether it makes sense to split it up.
That said, if what you want to do is distribute your listeners horizontally, you can do that today just by creating one load balancer per listener, right? But then you don't have the same IP. Right, yeah. And you don't have high availability. Well, you have high availability in that each load balancer is still highly available; it's just a little different. But yeah, splitting up the listeners is one approach, though it doesn't answer the major use case, I think. Yeah, we definitely need to think about splitting up listeners; that's a good idea. I think for us, the number one use case is: you have a load balancer, it has one listener, and that listener needs to scale dramatically. We need to figure out how to do that. So, we're definitely looking at it. Thank you.

We may actually be running close on time anyway, so if that's it, we can let everybody go to lunch. And when I said I'll be around: German will be around too. You can always find us on IRC. And also, as I said, the vendors are here. I won't make them stand up, but anyway. Sic everyone on them. Those poor souls. Thank you.