We can start. OK. So originally when we proposed this, we said it's going to be load balancing as a service version 2 for Juno and beyond. We'll talk about what was done and what the actual delivery date is. And I would like to start with a question for everyone in the crowd: how many people have load balancing functions in their environment? OK. How many people would like to get load balancing APIs as standard APIs in their OpenStack? OK. Good. So you are in the right session.

OK. So we'll talk a little bit about what led us to do the V2 API, a little bit about the process, and the key features that we put into version 2. We'll dive a little bit into the core capabilities, the core API, and then the key requirements, which were TLS termination and layer 7, so I'll talk a little bit about the solution. Then we'll switch to Phil and Brandon, who will talk about the reference implementation. And Doug will summarize the current status. OK?

OK. So the road to LBaaS V2. Let's say about seven or eight months ago, we started to see a lot of traction around the LBaaS API. Service providers such as Bluebox, Rackspace, and HP Cloud, some users, and obviously the ADC vendors were discussing together how the V1 API, which I'll cover very briefly, could be evolved to actually meet what we were seeing as customer requirements. As you can see, all those people were involved. And the process was that we were asking how the V1, which is basically very basic load balancing doing HTTP, HTTPS pass-through, and TCP with some basic features, could be extended to address user demands.

So the process we followed, and I have the links underneath if anyone is interested in looking at the raw material, was to first do a user survey. We got people voting on features and requirements and we summarized all of this. The result was that the first requirement was that people actually wanted to have multiple TCP ports served under the same IP, which kind of seems trivial, but the V1 didn't support that, and there were some critical issues with the way it was defined that prevented it. Then TLS termination, meaning the load balancer would terminate the TLS connection. By the way, who knows why TLS termination is such a coveted feature? Anyone? Stephen? OK. And why do they want that? Right. Sure. And this is important not just because terminating on the load balancer is simpler to manage, a single point of management; a lot of the more advanced capabilities of a load balancer are only available if the load balancer can actually look at the content. So this is one of the key requirements. SNI, TLS SNI, is the capability to serve multiple different certificates out of the load balancer based on the host name. And then content switching is the capability to use payload information, such as URI, cookies, et cetera, to define how traffic is handled: whether it needs to be routed to different server groups, whether it needs to be stopped, or whether it needs to be redirected to another URI. Those were defined as the phase one features that we wanted to do. But on top of that, there are additional features that were requested: things like client certificates, backend encryption, content modification, and UDP support.

So in order to do all that, the first thing that we needed to do was to fix the core model. The original V1 core model has a few shortcomings.
The first of them was that the pool was the root object. Now, obviously, if you want to do things like layer 7 content switching, which will have multiple pools, you cannot have the pool as the root object. So the first thing that we went and fixed is we created an object called load balancer as the root object, which has the IP, and then you can attach multiple listeners underneath it. Hence, this addresses the requirement to do multiple TCP ports under the same IP. And then the pool and the health monitor, et cetera, become simple logical objects. Another thing that we fixed, because we'd seen that the V1 was kind of problematic in that way, is that the health monitor and pool had a many-to-many relationship, and we thought it made sense to make the health monitor and pool a one-to-one relationship. So that's the first API change.

The other thing is that we really wanted to add the TLS termination and the L7. The key objection to doing TLS termination originally was a requirement not to store the certificates as part of the Neutron database. And out of nowhere came the Barbican project, which will be used to store the certificates of the load balancer. Those certificates are then referenced in the load balancer APIs just by their TLS container ID. So simple TLS termination is done by specifying on the listener that it does TLS termination and attaching the ID of a certificate container stored in Barbican. The other feature is attaching a set of certificates as an SNI list, which addresses the SNI requirement. And the last feature: since we made the pool a logical object, we can now reuse the pool as part of the content switching. Content switching, in essence, is an ordered list of policies. The engines are expected to evaluate them in order, and the first one that is met is the one that gets executed. Under a policy, we get a set of rules that are, in essence, an AND condition: all rules need to be met for a policy to become true. And rules could be anything that compares the content, things like URI matching, cookie matching, and so on. The action, in essence, is either redirect to a pool, reject the traffic, or redirect to a URL. All of this is what we were completing as an API, with a basic reference implementation, intended to get into Juno. Phil?

Yeah, green is go. All right, folks. Thank you, Sam. My name's Phil Toohill, developer on the LBaaS team. I'm going to briefly discuss some of the ideas for the reference implementations that we have and some ideas going forward. We have three concrete implementation options. First, the non-agent-based implementation, a development version, if you want to try that out now. We have ideas or plans for an agent-based version, which will provide some scalability, allowing you to deploy out to multiple nodes. And Octavia, our operator-grade version, may have a little bit longer initial delivery cycle.

So first, I'm going to discuss the non-agent version. Currently, that's available in feature branches, so you can actually go ahead and pull that down. It's in review, of course. So you can pull that down, test it out, and experiment with the LBaaS V2 API. It's not scalable, nor is it highly available, and I don't plan on updating it to do any of those things. This is strictly a proof of concept. It's a development version just to get you comfortable with the V2 APIs.
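To get a feel for those V2 APIs and the object model Sam described, here is a rough sketch of what a call sequence could look like: a load balancer as the root object holding the VIP, a TLS-terminating listener that only carries references to Barbican-stored certificates plus an SNI list, pools as plain logical objects, and an ordered L7 policy whose rules are ANDed. The endpoint paths, field names, and Barbican container references below are illustrative assumptions for this sketch, not the final API.

```python
import requests

NEUTRON = "http://controller:9696/v2.0/lbaas"   # illustrative endpoint
HEADERS = {"X-Auth-Token": "<token>", "Content-Type": "application/json"}

def post(path, body):
    """Helper: POST a resource and return the created object."""
    resp = requests.post(f"{NEUTRON}/{path}", json=body, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()

# 1. Root object: the load balancer owns the VIP. Multiple listeners
#    (TCP ports) can hang off this one IP, which V1 could not express.
lb = post("loadbalancers", {"loadbalancer": {
    "name": "web-lb", "vip_subnet_id": "<subnet-uuid>"}})

# 2. A TLS-terminating listener. The certificates live in Barbican; the
#    listener only carries container references, so nothing secret is
#    stored in the Neutron database. Extra SNI certs are a list of
#    additional container references.
listener = post("listeners", {"listener": {
    "loadbalancer_id": lb["loadbalancer"]["id"],
    "protocol": "TERMINATED_HTTPS", "protocol_port": 443,
    "default_tls_container_id": "<barbican-container-uuid>",
    "sni_container_ids": ["<barbican-container-uuid-2>"]}})

# 3. Pools are now plain logical objects (one health monitor each),
#    so the same construct can be reused as an L7 policy target.
default_pool = post("pools", {"pool": {
    "listener_id": listener["listener"]["id"],
    "protocol": "HTTP", "lb_algorithm": "ROUND_ROBIN"}})
api_pool = post("pools", {"pool": {
    "listener_id": listener["listener"]["id"],
    "protocol": "HTTP", "lb_algorithm": "ROUND_ROBIN"}})

# 4. L7 policies are evaluated in order; the first policy whose rules
#    all match (rules are ANDed) determines the action.
policy = post("l7policies", {"l7policy": {
    "listener_id": listener["listener"]["id"],
    "action": "REDIRECT_TO_POOL",
    "redirect_pool_id": api_pool["pool"]["id"],
    "position": 1}})
post(f"l7policies/{policy['l7policy']['id']}/rules", {"rule": {
    "type": "PATH", "compare_type": "STARTS_WITH", "value": "/api"}})
```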
And as you can see here with my amazing diagram, everything is deployed directly on the Neutron API node. You call the V2 API, and it will deploy an HAProxy process directly on the API node. So it's really, strictly, for development purposes, a proof of concept.

Next, we have our agent-based version, which is something we plan on doing. It's currently not available. It will provide some scalability. It's based off of the V1 agent, so besides adhering to the V2 data models, there's really no difference there. It provides some scalability: you'll be able to scale out with the agents on the back ends. The HA capabilities would require some more updates or mechanisms to actually make it highly available, so that's a thing to be discussed if we go the agent-based route. This is not really operator grade, but it's definitely primed for feature branch acceptance. That way, we can get things moving with Neutron LBaaS. And to visualize that a little bit: as you can see here, the agent driver is going to communicate with the agents on the nodes, and they'll actually communicate back and forth. So you can deploy this out over multiple nodes to make a relatively scalable reference implementation. Next, we have our operator-grade version, and Brandon Logan will discuss that a little bit further.

Hi, I'm Brandon Logan, developer on LBaaS. So Octavia is meant to solve the issues with the agent reference implementation. The agent implementation, like Phil said, is not HA, and Octavia is going to provide some more scalability. A lot of vendors and operators in the LBaaS community collaborated on this, and it's currently in design. This is going to be a really high-level view of it; Octavia really deserves its own talk, but we only have so much time. So Octavia is going to use Nova instances with HAProxy on them. It's going to be scalable, it's going to be highly available, and it's going to have the monitoring in place to detect when a VM goes down or HAProxy is not responding. It's also going to support all the LBaaS V2 features out of the box, because the data models are essentially the same. The API is a little bit different, but the data models are the same, so the translation layer between the two should be relatively simple.

So here's a very simple Octavia diagram. A user sends a request to create a load balancer to Neutron LBaaS. Neutron LBaaS passes it to the Octavia driver. The driver, like I said, is essentially a pass-through: it's just going to send it off to the Octavia API, same data model and everything. Octavia will then do the Nova lifecycle management. Probably in this case it's going to spin up two VMs, probably in an active-passive topology. It'll also do the monitoring, so if one VM goes down, it'll automatically fail over to the passive one. Like I said, this is a simple diagram. It doesn't show the Neutron integration, because Octavia is also going to talk to Neutron to plumb the network, to create the network resources. And it's also going to talk to Barbican, which is kind of a moving target right now, but I think we've got it in place as the secure data store. Now, for Octavia, I think we're targeting a 0.5 release for Kilo. Hopefully we can get that, and we can get an Octavia driver in for Kilo. Speaking of the current status and the future, here's Doug.

Hello. OK, it works. They took away my boy-band mic, I think, because I couldn't master the dance moves.
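To make the pass-through-plus-failover flow Brandon describes more concrete, here is a minimal sketch, assuming hypothetical names throughout: the driver class, method signature, Octavia endpoint, health port, and amphora bookkeeping are all illustrative, not actual Octavia or Neutron code. It only shows the two ideas, forwarding V2 calls essentially unchanged, and replacing a dead HAProxy VM through Nova.

```python
import time
import requests

OCTAVIA_API = "http://octavia-api:9876/v1"   # hypothetical endpoint


class OctaviaDriver:
    """Hypothetical Neutron LBaaS v2 driver, essentially a pass-through:
    the V2 and Octavia data models line up, so 'translation' is little
    more than forwarding the request to the Octavia API."""

    def create_loadbalancer(self, context, loadbalancer):
        # Forward the already-validated V2 object to Octavia as-is.
        resp = requests.post(f"{OCTAVIA_API}/loadbalancers",
                             json={"loadbalancer": loadbalancer})
        resp.raise_for_status()
        return resp.json()


def health_loop(nova, amphorae, poll_interval=10):
    """Hypothetical monitor loop: if an HAProxy VM ("amphora") stops
    answering its health check, boot a replacement through Nova."""
    while True:
        for amp in amphorae:
            try:
                requests.get(f"http://{amp['ip']}:9443/health", timeout=3)
            except requests.RequestException:
                # VM or HAProxy is down: spin up a replacement instance.
                # A real implementation would also track the new instance,
                # plumb the VIP onto it, and stop respawning this entry.
                nova.servers.create(name=f"amphora-{amp['id']}-respawn",
                                    image=amp["image"], flavor=amp["flavor"])
        time.sleep(poll_interval)
```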
Come on, wake up. That's the best it's going to get. All right, so all the code and features we've been talking about, the initial code is available now. It's on a feature branch right there. You're all going to type it in while I talk, right? I told you it wasn't getting any better. The core object model and CLI and API we've been talking about, that's in there. The reference driver that Phil talked about is in a Gerrit review. There's an etherpad there that has everything else. So if you want to merge together TLS and L7 and play with it as a proof of concept, it's there now.

So the code was supposed to be in Juno, and it's not. I don't know if any of you are familiar with the Neutron incubator and the group-based policy stuff, but they pretty much ran out of review cycles, and they decided that large projects weren't working so well as, like, 12-deep Gerrit review patch chains. So they needed somewhere else to put them. The infrastructure team suggested that we just use a branch in Git. So that's where LBaaS is trying things out. We're trying to get all of our code in there and matured until it's fully baked, and then hopefully we'll merge into Neutron in the Kilo timeframe. Octavia is in Stackforge; I think the URL is on there somewhere. And Neutron LBaaS will have a driver that talks to Octavia, but if you want to run Octavia, you'll have to go get it out of Stackforge. This is just showing you what community drivers we had before and what community drivers we're going to have now. We're adding a couple, we're subtracting one.

Features for Kilo and beyond. So we talked about merging the feature branch into Kilo, and an Octavia driver. Our friends at eBay demoed a Horizon UI that we loved. Right now, to spawn a load balancer, you have to go in and create a pool, then add some members, then add a VIP, then tie them all together and add a health monitor. And unless you're in the load balancing industry, I don't think that makes any sense at all. The demo he made looked like launching a Nova instance: you click that you want a load balancer, here's its name, here are its members, here's the health monitor, go. And then you have a fully functioning load balancer. So that's something else we're gonna try to get in in the Kilo timeframe, talking to the V2 subsystem. If you want to contribute to Octavia, there's a URL. Other things we're planning for Kilo: integration with Heat, integration with Ceilometer. Neutron flavors were supposed to be in in Juno and those got pushed as well; hopefully we're gonna get those in for Kilo. Stay tuned. I don't know if LBaaS is gonna stay in Neutron, I don't know if it's gonna go to Stackforge, I don't know if it's gonna go to the incubator; there's a lot going on at this summit to try to figure that sort of thing out and solve this code review cycle bottleneck that we've all been having.

So that's the rest of our content. We wanted to open the floor now for questions and answers, and then we'll let all of you get out of here for the free beer that is right after us. So come on up, or we'll get out of here early. Nobody wants to yell at us for being late yet again.

Yeah, so if we... So it's a question about Octavia: are you talking directly to Nova to spin up instances, and if so, have you investigated using Heat to do that? Come on over, Brandon. I don't think we've actually investigated using Heat, but there is a Neutron service, or an advanced service, that's a... what is it called?
StackTach... no, not that... Tacker. Tacker does the lifecycle management, and we're looking into that, but I don't think we've investigated Heat yet. Okay, I mean, projects like Trove, I mean projects that spin up instances to deploy, have been using Heat, and I think if we could get your feedback on what's wrong or what's not wrong with it, that'd be awesome. Okay, yeah, let us know. Thank you. Any other questions? Anybody interested in seeing a Horizon demo of the feature I talked about? eBay, come on up here. Wait, wait, did you get a question?

Is there a compatibility list of load balancers that are compatible with the solution? Yes. Let me go back a couple slides. For V1, if you go into the OpenStack Marketplace, you can actually find all the vendors that have a V1 implementation which is tested and certified, et cetera. And obviously when we get V2 into the trunk, then we'll have the similar thing there. Does that answer your question? Yeah, there are also a couple of private drivers beyond the community drivers. I know there are Burkhead drivers and F5 drivers. Okay, I was gonna ask about F5, they're kind of... Yes, contact them directly on that. Okay, thanks.

So if I heard that right, you want us to explain the HA model in Octavia and why we're not using HAProxy for that? Okay, I'm gonna let one of the... Well, that would not have happened. Yeah, you need to answer this question. Okay, we're gonna have the Octavia PTL answer that one for you. I don't know if that's on. Go, Stephen, is this working? That's sweet.

So there are actually a couple of different models that are possible to deliver HA in Octavia. One of them actually does use something like VRRP; that's our active-standby model. But a lot of the people who have contributed ideas in terms of what they want out of Octavia are operators, and we've had some operators say that they wanted the option to do a single instance as well, which wouldn't have any kind of HA in it. In that case, a failed VM or a failed container would basically get replaced from a spares pool of already-spun-up, ready-to-go containers. Also, in Octavia version two, we are planning on delivering an active-active model, which will require some interesting things when it comes to the actual routing, and in that case, VRRP is not necessarily appropriate. But if you wanna see specifically what our front-end and back-end topologies look like, there's information committed to the documentation section within the Octavia repository, as well as some Gerrit reviews for the version one and version two stuff, which I haven't updated in a couple of months, but they're still relatively accurate. So does that answer your question? I didn't even see who asked it. Is it good? Okay.

Any other questions before we do this UI demo? There's supposed to be one while they finish setting up. Here's one. I was a bit confused by the discussion of how this is part of Neutron now, but possibly not later. So the fastest way to move the project forward is to be a part of Neutron. Why is there a thought to move it out later? And if the fastest way to move it forward is out of Neutron, then why isn't it moving out now?
So if we move it out of Neutron, we're at least a year-plus away, because we have to go through TC incubation and all that stuff. So the theory is Neutron should be faster than that, except in the past it hasn't been. So do we keep waiting and hoping that things will get better? Everybody in Neutron is great, I love those guys, and they say the right things, so we think good things are gonna happen. And then, I've only been on this for six months, but my understanding is load balancing has, for the last two years, been kind of stalled. So do we bite the bullet and take a year-plus hit, or do we hope Neutron gets fixed? It's not completely clear which of those is the right answer.

Has there been any thought about having a distributed load balancing function, like the distributed router function that we were just looking at before? So like a global load balancer in different regions? No, like having on each compute node a managed agent which sort of knows the state of the nodes that are on there and can direct traffic via a bridge that's on that compute node. I can answer that. So if you think about it, if your system is well balanced, then there's a good chance that your VMs are going to be spread all over the place, right? And a load balancer, in essence, is centralized logic that gets traffic in one place and, based on some logic, distributes it. So realistically speaking, how would you distribute that? It's quite different from security solutions such as firewalls and IPS and things like this, which actually make sense to distribute; a load balancer, in essence, is centralized. So we were thinking about this, but it's not clear that you actually get any benefit by distributing the load balancer onto the different compute nodes, because eventually, what will then distribute the traffic to the distributed system? Does that answer your question? Doug?

Hope you don't mind if I just take a moment to reply to the previous question about F5. Go for it. Yeah, so we do have a package that you can download and get the V1 support, as Doug alluded to. It's been out for six months. We are tracking the V2 and intend to support that as well. Awesome, thank you. Thank you.

Now, oh, the slides. You saved the day, thank you. I think they'll turn on. Hello? Yeah, okay. So, as Doug spoke about, the UI which comes by default with Horizon really sucks. So we decided to implement our own UI, and this is how it looks. On the landing page, we see all the LB instances created, with their configuration details. And then if you want to create a new load balancer, you just launch it as if you're launching an instance. You can pick an existing IP which is allocated to you, or you can just choose to create a new IP. You enter your name, and this UI will also create a DNS record for you. So I can do a demo, you know. And some description. You can choose the algorithm here. The instance port where your service will be running. Your protocol; we have HTTPS even, we have also implemented SSL support on it. Here you can specify your SSL certificate. You can choose to use a common certificate, which is a wildcard certificate based on your VPC, or you can provide your own certificate, key, and chain. Then you specify your monitor. It can be a simple TCP or HTTP monitor, or ECV checks, which are more enhanced validation with your send string, receive string, and frequency. And then in the end you select which instances you want to make a part of the load balancer, so you can select all the...
So it will only show you the members which are part of your tenant. Right now I have two, so I can pick one, or do all of them. And then finally I will say launch. It will do some validation. Since I selected HTTPS, it will give me an error because I did not provide an SSL certificate. So likewise, I can do HTTPS as TCP pass-through; should be fine. Yeah, and anyways, yeah. So that will launch a load balancer. It wants me to fill in all the details; I don't want to. Vivek didn't know he was speaking or demoing today. Yeah, it was... Thank you very much. It's a bonus. And then you can edit your LB. You can edit all the values you have selected. You can change the instance port or the web port. You can remove or enable and disable members, like this. This is what we have today. Yeah. Any questions?

So the functionality that you just showed us there. You started with a question about an API being available for this. Is any of this stuff available in the API, or is that what you were asking about? No, this UI is entirely running on APIs. Okay, so if we want to script creating load balancers and whatnot, like you did here through the GUI, it's available today. Yes. So what was the API question that you were asking earlier, saying how many of you would like to see an API for this? The existing API is V1, right? So it doesn't support all of this functionality. We are talking about V2, because the V1 doesn't have SSL certificates, doesn't have the listener ports, and doesn't have the layer 7. So the question was, since we have V1 and we really want to push V2, I was kind of interested to know how many people are actually using it and how many people are actually interested in seeing it as a standard API. Because we are gonna go into the design summit and have a discussion on when and how, which is what we were touching on. And I really believe that this is something that is crucial and has waited long enough. Okay, so this demo, this was V2 functionality, correct? Part of it, yeah. Oh, okay, so most of the things in here are available in the V1 API? Without the SSL termination. With it, just not the SSL termination. Right. Okay, thank you. Correct.

All right, yeah, so this Horizon functionality, is this just internal to eBay right now? And if so, are you intending to push it upstream? Yeah, this is internal to eBay. We are working with the Horizon team to make it part of OpenStack, open source. Is there a blueprint up that we can take a look at? Not right now, yeah. But if you know someone in Horizon, we would like to work closely with the Horizon team. We actually slowed down the pace because we learned that a new API version was coming, so we wanted to move the UI over to use the new APIs and then launch it. Okay, cool, thank you. Any final questions? All right, thank you, all of you. If you have any more questions, come talk to any of us.