Hey, looks like we're live. Hey, my friends from AT&T. In Austin, back in April, we did a dog and pony show with a bunch of orchestration: we showed fully deploying apps with our VEs, our licensing orchestration, and a bunch of other things. Today we're going to repeat a message we pushed about three years ago in a brown bag, and it's really about what you get by doing things the OpenStack way and what that means to vendors like us.

So let's start here. When we put the development team together, there was a whole pile of things we knew we wanted to get on top of, and a whole pile of things our customers expected us to deliver as part of the OpenStack community. But once we got going, we started seeing all sorts of other requests; you'll notice how diverse the topics are that we could talk about here. Bad ideas, great ideas, really bad ideas. If you've been around OpenStack at all, you know this can be a little chaotic, right? It can be hard for a vendor to respond, to figure out where everything is, how to test everything, and how to put all these pieces together.

But the fundamental reason a vendor still wants this is that OpenStack has a standard model. Look at physics: why did the standard model let physics keep moving forward? Because everyone could build on the same theories, the same pieces. Did it constrain them in some ways? Sure, but it gave everyone a common foundation, and OpenStack has a standard model, too.

Now, what does that mean for a vendor like us? Well, let me ask you: when you deploy OpenStack, Neutron has to go along with it now, right? So what do you get on day one when you install it? This is what you get. Whether you want it or not, it's there. As a vendor, there are a thousand different ways for us to deploy our services and do network virtualization, but this is the model that's there. So what should you see from a vendor? If I have all of this, you really want to see this, don't you? You want to see us follow the same orchestration model, the same deployment model, the same pieces. We're using the same controllers, the same messaging, the same everything else you're seeing in the rest of your infrastructure. And let me ask you: as a good vendor, did we touch the data model? Did you see us touching the schema? No, because there's a standard model. These are the hallmarks we're trying to make sure we stand behind for our customers.

The good news is that just staying with the standard model happens to be incredibly productive. For one, you'll notice there's a lot of operational data you can surface by keeping things in the standard model. For instance, you can use the standard Neutron API, and you should be able to tell the state of all the orchestration and all the agents by using that API in your NOC, not by calling a separate controller, because you should be able to see all of the pieces as part of your infrastructure. And it turns out the standard model has a lot of flexibility to carry interesting CMDB-type data as well. What we can report back here, for instance, is just neutron agent-show; you can get all of this through the standard APIs, and we can push through all that support data. So if you ever have to open a ticket, should you have to use a proprietary API or open up one of our interfaces? No. You should be able to get all of these things from the OpenStack standard model.
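As a concrete illustration, here is a minimal sketch, assuming a python-neutronclient session against a hypothetical Keystone endpoint, of pulling exactly that kind of agent state and support data out of the standard agent API; the credentials and endpoint are made up, and any vendor-specific keys simply show up in the free-form configurations dict.

```python
# Minimal sketch: poll agent health and vendor "support data" through the
# standard Neutron agent API instead of a separate controller.
from keystoneauth1 import identity, session
from neutronclient.v2_0 import client

auth = identity.Password(auth_url='http://controller:5000/v3',   # hypothetical endpoint
                         username='admin', password='secret',
                         project_name='admin',
                         user_domain_id='default', project_domain_id='default')
neutron = client.Client(session=session.Session(auth=auth))

# Same call your NOC makes for L3/DHCP agents -- vendor agents show up here too.
for agent in neutron.list_agents()['agents']:
    print(agent['agent_type'], agent['host'],
          'alive' if agent['alive'] else 'DOWN',
          agent['heartbeat_timestamp'])

    # 'configurations' is a free-form dict reported by each agent; this is where
    # CMDB-type support data (versions, device inventory, and so on) can live.
    for key, value in sorted(agent['configurations'].items()):
        print('   ', key, '=', value)
```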
And the standard model isn't limiting in other ways, either. For instance, notice that in this case we have the same service plugin. If you look at our LBaaS plugin, you'll notice we can set a provider. A lot of people think provider means you have to have different vendors. No; we can use that same piece of the model to give the tenant the ability to pick which environment they want to deploy their services in, prod, test, or dev, and have different types of installations pushed through, again, the standard model. It's incredibly liberating.

Another example: we're going to take everything on the left-hand side and move it over, because it's all the same, and we're going to ask, is this agent model limiting? No, it actually supports a lot of different functionality. For instance, it's using a distributed queue. Should you be able to do agent resiliency, or HA for your agents, with a distributed queue? Of course you can. Should you be able to do things like scale out? Yes, because jobs are scheduled through the queue. Now, with any scale-out environment, you'd expect to have a reason to scale: you have to reach some capacity before jobs start getting spread out, or before some form of load balancing of job scheduling kicks in across the different parts of your infrastructure, right? So where should you look to find the capacity of a given element inside your OpenStack infrastructure? Again, we can push this all through our agent updates. See the little thing there that says environment capacity score? 0.4. That gives you a weight you can immediately pull from any product, using the REST API and your monitors, to know how hot a given installation is running. And what should that look like on the agent if we're sticking to the OpenStack model? It should probably look the same as any of the policies you'd use for things like your quotas, wouldn't you think? And that's the way it can look. Here are all of the things we can base that capacity score on for a given installation.

It helps us out on the network side, too. ML2's standard network types let us know exactly what we're dealing with. It's nice to have these, isn't it? Because as a vendor, we look at that and go: great, I can make those network types work in software, on any one of our appliances, on big clusters. I've got this; I know exactly what this looks like. Again, standards are good things. Now, it turns out a lot of our customers just want to stick to VLANs. How happy are you to get VLAN-aware VMs? Cheer, please. It gives us a lot of flexibility in a very standard way.

Now, what if someone comes to you and says, well, is there a limitation to VLANs in the standard model? What if I don't want my VLAN IDs to be global? Do you think there's a way to have more than 4,000 VLANs in a given installation and not deviate from the standard model for VLANs in Neutron? There is, and as a community we've come up with one: hierarchical port binding. Now, I'll be honest, full disclosure: I sat there in the community when they first introduced hierarchical port binding and went, that's a terrible idea. I was wrong. It enables a lot of flexibility, again without deviating from the standard model, so you can get all the vendors to play the same way. Hierarchical port binding is a very simple concept: we can take a physical network string and tie it, in our case, to one of our clusters.
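In ML2 terms, that concept looks roughly like the sketch below: a mechanism driver allocates a dynamic, cluster-local VLAN segment under the network's top-level segment and hands binding on to the next level. This is only an illustration, not the actual driver we ship; the cluster-to-physical-network lookup is invented, but allocate_dynamic_segment and continue_binding are the ML2 port-context calls that hierarchical port binding is built on.

```python
# Illustrative sketch of hierarchical port binding in an ML2 mechanism driver.
# The top-level segment stays whatever the operator defined; a per-cluster
# dynamic VLAN segment is allocated underneath it, so VLAN IDs are local to
# the cluster rather than global.
from neutron_lib.plugins.ml2 import api


class ClusterVlanMechanismDriver(api.MechanismDriver):

    def initialize(self):
        pass

    def bind_port(self, context):
        for segment in context.segments_to_bind:
            # Hypothetical lookup: which cluster-local physical network string
            # does this host belong to?
            physnet = self._physnet_for_host(context.host)

            # Ask ML2 for a dynamic VLAN segment scoped to that physnet.
            dynamic_segment = context.allocate_dynamic_segment(
                {'network_type': 'vlan', 'physical_network': physnet})

            # Continue binding at the next level with the dynamic segment,
            # instead of completing the binding here.
            context.continue_binding(segment['id'], [dynamic_segment])
            return

    def _physnet_for_host(self, host):
        # Purely illustrative: map cluster membership to a physical network name.
        return 'cluster1-physnet'
```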
And for each one of those clusters, whenever a Neutron network is defined, you'll notice it gets a different segmentation ID, giving you 4,000 L2 segments per cluster. Do we have to deviate from Neutron? Do we have to add anything you don't already get when you install Neutron? No. It's all part of the standard model.

So let's say we did something whiz-bang down in the network and wanted something a little different. And you have an SDN vendor, and the SDN vendor says, hey, L4-7 vendor, when you go to provision anything at L2 or L3, I need a notification that I'm not getting out of Neutron. Do you think we can do that in an OpenStack-standard-model way? Turns out you can. How many of you know that OpenStack has plug-in support and extension support? They're all done in very specific ways, and we can do the same thing inside all of our agents. So, for instance, you come to me with XYZ SDN and ask, do you work with this? We can look at them and say: if we're using VLANs, we have a VLAN binding driver they're more than welcome to implement, and we'll call it to allow or prune VLANs. And any time we go to add an IP address to one of our appliances, and if you think about it, a proxy burns through IP addresses pretty quickly, we pull those from Neutron, and we'll call bind-address or unbind-address on what we call an L3 binding driver. It could be implemented for any SDN stack. Again, because we're following the standard models, because we have all of this code base to look at, we can make this work in a way that we can test and still stay decoupled from your SDN vendors, so that they can test, so that you know we're not getting into versioning difficulties, right? We're always testing things the same way. Pretty simple interfaces.

We've got LBaaS v2, we've got TLS offload, and that came with Barbican. How many of you love Barbican? How many have implemented it? Okay, all the hands stayed down, right? Do you think e-commerce liked Barbican coming out of the gate? What do you do when Barbican is not production ready according to you, or it's not available in your distribution? You'd want a vendor to do LBaaS v2 in an OpenStack way. Why? Because that means we're going to make that pluggable, too. So if you have some better way to do certificate management, or some homegrown certificate management that meets your security compliance and your regulations, you can still do LBaaS v2 and have your own certificate manager, because we follow the standard model for using plugins. Is this starting to make sense? Starting to make sense why you really want all your vendors to view it this way? Wouldn't it be great if everybody's interfaces and integrations were all pluggable? We can do that. OpenStack's teaching us how.
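To make the certificate-manager point concrete, here is a purely hypothetical sketch of what such a pluggable interface could look like; the class and method names are invented for illustration and are not the actual neutron-lbaas or F5 interfaces, but the idea is the same: TLS offload resolves the listener's container reference through whichever backend, Barbican or homegrown, your compliance rules allow.

```python
# Hypothetical sketch of a pluggable certificate manager for TLS offload.
# Names are illustrative; the point is that Barbican and a homegrown secret
# store can sit behind the same small interface, chosen by configuration.
import abc


class CertManagerBase(abc.ABC):
    """Resolve a TLS container reference into certificate material."""

    @abc.abstractmethod
    def get_certificate(self, container_ref):
        """Return (certificate_pem, private_key_pem, intermediates_pem)."""


class BarbicanCertManager(CertManagerBase):
    def get_certificate(self, container_ref):
        # Would fetch the container referenced by the LBaaS v2 listener's
        # default_tls_container_ref from the Barbican API.
        raise NotImplementedError("illustrative only")


class InHouseCertManager(CertManagerBase):
    def get_certificate(self, container_ref):
        # Would call whatever internal PKI or secret store meets your security
        # and regulatory requirements, keyed by the same reference string.
        raise NotImplementedError("illustrative only")
```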
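And going back to the VLAN and L3 binding drivers mentioned a moment ago: the talk doesn't spell out the exact interface, so the sketch below only suggests its shape, hooks an SDN partner could implement so they're notified whenever the L4-7 agent allows or prunes a VLAN, or binds and unbinds an address it pulled from Neutron.

```python
# Hypothetical shape of the binding-driver hooks described above; method and
# class names are illustrative, not the shipped F5 interfaces.
import abc


class VlanBindingDriverBase(abc.ABC):
    @abc.abstractmethod
    def allow_vlan(self, device, interface, vlan_id):
        """Called before traffic for vlan_id is expected on this interface."""

    @abc.abstractmethod
    def prune_vlan(self, device, interface, vlan_id):
        """Called when vlan_id is no longer needed on this interface."""


class L3BindingDriverBase(abc.ABC):
    @abc.abstractmethod
    def bind_address(self, subnet_id, ip_address):
        """Called when the agent adds an address it pulled from Neutron."""

    @abc.abstractmethod
    def unbind_address(self, subnet_id, ip_address):
        """Called when that address is released back to Neutron."""
```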
Now, what do we do when the OpenStack way isn't quite up to where we need it to be, and hasn't progressed to what our customers demand? Take LBaaS v2. We know the LBaaS v2 spec is missing several features that our customers in the enterprise ADC market have been using for a long, long time for their edge load balancing use cases. Do you think we can enhance it and still stay inside the OpenStack model until the community gets there? Sure. What's the big deal? Well, in order to stay inside the community model, you still have to live with the service lifecycle. Does that make sense? So that when you delete a load balancer, we get rid of all the resources on the devices. It still has to live with the lifecycle. So what are we doing here? We're adding service tags. Is there tagging in Neutron? There is. Are we pushing for tagging in Neutron LBaaS? Yes. Why? Because then we can go through and add specific things. By the way, all of these were added in conjunction with one big customer, and we got three more from a different customer at breakfast this morning that we can add to these implementations simply by adding tags. We can do SSL re-encryption. We can allow you to have your own cert management on the box. We can do TCP optimizations at L4 on both the client and server side. We can do business logic in iRules. We can have custom persistence and custom pool behavior, simply by adding tags to the existing specification. We've found that by doing it this way, we can cover over 80% of the enterprise load balancing use cases while still maintaining the LBaaS API, so it's still done inside of OpenStack. Does that make sense?

So we walk into a customer, and they say, I use BIG-IPs; can we do this with OpenStack? We go through and map all their use cases, we pull their config, and we can typically get them into a couple of buckets. Then we can do a gap analysis on LBaaS and say, well, this one over here is going to break the LBaaS lifecycle model; you should do that as a single-tenant policy, and that's when they launch our Virtual Editions and do some ADC functions there. But 80% of everything we need to do to move an enterprise workload using F5 BIG-IPs can be handled inside the LBaaS v2 lifecycle, which means it can be done in the OpenStack model. Isn't that great? If we can nail 80% of your enterprise use cases and not have to deviate from OpenStack, that's the point of doing it the OpenStack way.

Now, there are a couple of other things we're progressing toward. For instance, how many of you have messed with multiqueue virtio for high-speed VMs? We're looking at things like, and this is all software, by the way, this isn't SR-IOV access; we've got 40 gig, and we showed off 100-gig access with SR-IOV, but this is multiqueue virtio in software. We're looking at 20-gig-type VMs in pure software on top of commodity hardware, using something like OVS on top of DPDK. But let me ask you: in that implementation, were there things you had to give up when you went to multiqueue virtio? How about your security groups? They went away. Did you know that? They went away because the code path where security groups are enforced is not in the high-speed path for multiqueue virtio. Uh-oh, that's not good. Or what do you do if you want to move to a hardware appliance that can go into the hundreds of gigs per second? What if you still want the protection of your security groups, or, moving forward, of the firewall-as-a-service spec? Well, let me ask you: security groups are attached to what? Ports. Firewall as a service is now attached to what? Ports. Do you think that, inside the OpenStack model, we can take the interfaces and MAC addresses of an appliance, take that port, and represent it in Neutron, such that if you put a security group on a port, we can actually implement the security up inside the high-speed VE? Inside the firewall code in the VE itself, not down in the network fabric, but up in the VE. So you can get your 20-gig VE, you can keep your security groups, and we'll push the implementation up into the VE and do the orchestration there. Does that make sense? That's where we're going.
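A minimal sketch of that direction, assuming the appliance's data-plane interface is represented as a Neutron port: the agent reads the port's security groups and rules through the standard API and then programs equivalent policy up in the VE. The neutron client is the same as in the earlier example, and apply_rules_to_ve is a hypothetical vendor-side call.

```python
# Sketch: resolve a Neutron port's security groups into rules the VE could
# enforce itself, since the hypervisor fast path no longer applies them.
def collect_port_rules(neutron, port_id):
    port = neutron.show_port(port_id)['port']
    rules = []
    for sg_id in port['security_groups']:
        sg = neutron.show_security_group(sg_id)['security_group']
        for rule in sg['security_group_rules']:
            rules.append({
                'direction': rule['direction'],        # ingress / egress
                'protocol': rule['protocol'],          # tcp / udp / icmp / None
                'port_min': rule['port_range_min'],
                'port_max': rule['port_range_max'],
                'remote_cidr': rule['remote_ip_prefix'],
            })
    return rules


# Hypothetical usage: push the same policy up into the high-speed VE.
# apply_rules_to_ve(ve_device, collect_port_rules(neutron, data_port_id))
```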
The community was good enough to tell us that security needs to be attached to ports. Thank you; that's really all we needed to know. From there, we're moving to a port-based model. We're not there yet, this is a big winter project, but what will you see? When we're ready to start delivering 20-gig all-software VEs, and again, we can do better with SR-IOV, but when we go to do these things as a vendor in a pure software solution, you won't have to get rid of your security groups. Isn't that what you wanted? Because you had an API for security. When we do it in our appliances, it's the same direction.

So this is the slide we concluded with in Austin: whether you're asking us to do multi-tenant infrastructure services, or whether we're orchestrating a bunch of single-tenant ADCs as middleware for you inside your app deployments, it should all be done in an OpenStack way. Don't you agree? So that you can stick with the same tenets, the same models, the same thinking, the same kinds of designs. We don't need to deviate, we don't need to walk out to external controllers to do things, and it doesn't have to be turtles all the way down; we can stick to the things we know and love as we deploy them and get them to work. Here are a couple of URLs you can take a look at, ranging from our own product documentation on F5.com, to all of our orchestration code on GitHub, to our own social media, DevCentral, with all the media pieces there, our articles and things to talk about. But no matter what you expect from F5, you should want us to do it the OpenStack way.

Any questions? I'll tell you a couple of things. The community does push us. You push us. For instance, what is the LBaaS v2 interface going to do over the next two releases? It's going to get rolled up and go away, and they're going to push us into Octavia, and we're going to have to do it inside that model. Okay, that's fine. Why? Because that's the OpenStack way. You're going to kick us into that, and we'll go, thank you, that's great, and we'll do it. So there's a lot of work that has to go on. There are a lot of interesting things we've seen implemented at customer sites; you can come talk to us at the booth, and we can have a nice heart-to-heart about some really great forays into creative thinking on the part of our customers. And remember what we showed on the first slide: we have some colossally bad ideas we've tried, too. That's kind of the adventure of OpenStack, isn't it? We're trying these things and moving them forward. But no matter what, don't accept slow, don't accept insecure, don't accept these things. We can do this as a community, in the way OpenStack does things: the OpenStack way. That's it.