All right, I think they're getting us going and getting us started. I'm John Gruber with F5 Networks; I'm in our product management group. This is Rick Masters, from our architecture group; he's a senior director of product development.

So when we talk about OpenStack, obviously the networking component of OpenStack was very important to us, but you also saw things that involved compute — you saw things that ranged across the whole catalog. The first time F5 published solutions with OpenStack was for Neutron LBaaS, and it was last year that we came out and said: look, an LBaaS solution should be flexible no matter what your deployment is, because we saw a bunch of LBaaS solutions that could do very well if they were running in compute, but could not do very well if they were grafted into OpenStack as a network device. So we talked a lot about that, and we'll continue to talk about it, because one of the functions that is a necessity is to run LBaaS in a multi-tenant way.

We all know that OpenStack supports multi-tenancy, right? We all know that the tenant model is very well defined. If you think about the resource consumption of a load balancer, there are many, many use cases where it makes no sense to burn all those resources to launch one VIP. There are also use cases where, for us, it made sense to concentrate a bunch of time on being able to give you load balancing across our catalog. Because think about OpenStack itself: it was built as a framework that could be deployed in many different ways to support many different workloads. Sometimes launching a multi-tenant virtual edition of our appliance and doing tunneling from it for multi-tenancy is what makes sense. Sometimes running clusters of our appliances makes sense. Sometimes we have customers whose operations have had active-standby BIG-IPs forever. And sometimes you need scale-up. If you think about load balancing for a second, the scale-out of the pool makes sense to everybody — you connect to a VIP and you scale out to a bunch of pool members — but what happens when you need to scale up the VIP? What happens when you have use cases that legitimately call for a VIP at 40 Gig? What could you do?

That's why it was very important for F5 to say that our LBaaS solutions for OpenStack support our entire product line, including our virtualized appliances and our vCMP instances. We spent a bunch of time and a bunch of work to make sure that when you got LBaaS from F5, you didn't get a VE LBaaS — you got an F5 LBaaS, across our product lines and across our product offerings. Because your workloads may not make sense to run in a VE; your workloads may make sense on something where the hundredth connection connects with the same reliability and the same timing as the millionth. OpenStack should support those ecosystems, network setups, and workloads, and so should we.

Now, that didn't mean we were stuck with multi-tenancy. We can also do single-tenant LBaaS. For the use case where you just want to launch a VE and hand it off to that tenant, can the LBaaS solution work with it? Yes. Do we have solutions that are smart enough to know the difference?
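To make that scale-out/scale-up distinction concrete, here is a minimal sketch of the LBaaS v1 flow just described — one VIP fronting a pool that you scale out by adding members — using python-neutronclient. The auth values and subnet ID are placeholders, and exact client arguments vary by release; treat this as an illustration, not F5-specific code.

```python
# Minimal LBaaS v1 sketch: one VIP in front of a pool, scaled out by
# adding members. Auth values and the subnet ID are placeholders.
from neutronclient.v2_0 import client

SUBNET_ID = 'REPLACE-WITH-SUBNET-UUID'

neutron = client.Client(username='admin', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

# The pool is the scale-out side: members can be added at any time.
pool = neutron.create_pool({'pool': {
    'name': 'web-pool',
    'protocol': 'HTTP',
    'lb_method': 'ROUND_ROBIN',
    'subnet_id': SUBNET_ID,
}})['pool']

# The VIP is the scale-up side: one listener address for the service.
neutron.create_vip({'vip': {
    'name': 'web-vip',
    'protocol': 'HTTP',
    'protocol_port': 80,
    'pool_id': pool['id'],
    'subnet_id': SUBNET_ID,
}})

# Scaling out is just more members behind the same VIP.
for addr in ('10.0.0.11', '10.0.0.12'):
    neutron.create_member({'member': {
        'pool_id': pool['id'],
        'address': addr,
        'protocol_port': 80,
    }})
```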
Yes. So our LBaaS at this point, if you launch it and a tenant makes a request for a load balancing service, can be intelligent enough to ask: does that tenant have F5 load balancers launched as VEs in their tenant? If they do, make the load balancing service provision their single-tenant load balancers. If they don't, go to the multi-tenant solutions — go to the things that are registered for multi-tenancy. Again: your workloads, your way, because OpenStack was about being flexible, about being deployable in multiple cloud environments.

One of the other things that we highlighted last year is: guess what, SDN is a critical component in OpenStack, right? So your BIG-IPs are VTEPs. Now what does that mean? It means it's not just a load balancer; it is now a full-fledged VTEP that is part of the Neutron network. Which means we have to update FDB tables when you add your guests over here. We have this beautiful service called l2population — well, now we have to pay attention to it as if we were an L2 solution, and be able to graft those things in as a VTEP. Now get this — this is something that Rick and I spent a lot of time on with some of the core vendors, saying: no, no, our VE sits on a VTEP. Which was a little unusual, but we said: you need to support that. Can we launch some VEs up there in a multi-tenant fashion, go right through the security groups, because the VTEP is the address it's routing to, and still support that device having a bunch of multi-tenant tunnels on top of it? Yes. Can you do multi-tenancy with us with just VEs? Yes. It is a VTEP that joins the network alongside your compute nodes. That was the heavy lifting, because again, the point was an ADC the way you wanted to see it in your specific OpenStack cloud, to support your workloads. Does that make sense to everybody? You see why it's a little larger definition than saying "I can launch a VE and put a VIP on it"? There's a lot involved.

Now, we know that VEs are very important, specifically to NFV. So on our software-based virtual editions, we knew there were a lot of things we had to do to take advantage of the ecosystem that is OpenStack. OpenStack provides a tremendous amount of information about those Nova instances, right? I can tag in with metadata; I can do all sorts of good things. The first thing we had to do: like a lot of network vendors, we have a virtual appliance. Does that make sense? It's a virtual appliance, which means there's some black-box nature to it, because you ship it as an appliance. It's not an Ubuntu or CentOS image that you add packages to. There's a reason for that: we have security certifications, we've done all sorts of hardening. But when you go to downloads.f5.com, you see "virtual edition, this speed" — our marketing terms and our licensing terms and all these things. So one of the first things we had to do was translate — translate into OpenStack speak. Because to use our virtual edition, should you have to be an expert in F5 marketing speak and licensing speak? No. You should be an OpenStack person. So one of the things we did was translate those into appropriate Glance images and appropriate flavors. Do you know why you need unique flavors for ADCs? Because they're unique workloads, right? Could you deploy an ADC and, because you picked the standard flavor, be very inefficient in your cloud?
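As a rough illustration of that onboarding step, here is what registering a VE image in Glance and defining a dedicated ADC flavor might look like with the Python clients. The image file, sizing numbers, endpoints, and credentials are all placeholders, not F5's published values.

```python
# Sketch: onboard a vendor appliance image and give it a purpose-built
# flavor. All names, sizes, and auth values below are illustrative.
import glanceclient
from novaclient import client as nova_client

AUTH_TOKEN = 'REPLACE-WITH-KEYSTONE-TOKEN'

glance = glanceclient.Client('2', endpoint='http://controller:9292',
                             token=AUTH_TOKEN)
nova = nova_client.Client('2', 'admin', 'secret', 'demo',
                          'http://controller:5000/v2.0')

# Register the virtual-edition qcow2 as a Glance image.
image = glance.images.create(name='f5-ve-appliance',
                             disk_format='qcow2',
                             container_format='bare')
glance.images.upload(image.id, open('ve-appliance.qcow2', 'rb'))

# A flavor tuned for an ADC workload instead of the generic defaults.
nova.flavors.create(name='adc.medium',
                    ram=8192,   # MB; ADC data planes are memory-hungry
                    vcpus=4,
                    disk=40)    # GB
```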
Yes. So do you need flavors that match the ADC workloads? Yes — so we did that work.

There's also this notion of high availability in your load balancer. How many of you want a single-point-of-failure load balancer? Right. F5 platforms support clustering. Now, putting that cluster in is normally an exercise where we shake hands with a lovely certified F5 engineer, and the engineer knows how to set up our clusters and do all that work, and says, "Yes, I can get that done" — and there are a thousand knobs you can tune a thousand ways to get it to work great. In OpenStack, because we know so much about the environment, we can put it all together for you and build those clusters on the fly. That's the work we've done.

Now, what kind of workloads? The 25 Meg, 200 Meg, 1 Gig, and 3 Gig instances can all pretty much work sitting on top of most software networking solutions. But I'm going to ask: how well are the vSwitches working at higher speeds, at 5 Gig and 10 Gig? How are we doing? Rick, what did Neutron add in Juno? SR-IOV. So you think we should do that? Of course we should. So, do you have SR-IOV support in your software load balancer? You should — and we have SR-IOV support to get you to the 5 Gig and 10 Gig instances of our software appliances. Does that make sense? All stuff the community supports; all stuff that we should track.

Now, there's also this notion that not all network implementations of Neutron look the same — do you appreciate that? We go with the stock reference implementation, with OVS or Linux Bridge, and we've got all this data we can collect and act on. Or we work with a particular SDN controller, and maybe those Neutron extensions aren't supported. Nod if you understand what I'm talking about. There can be a lot of source-of-truth issues about where things are.

For instance, I want you to think about a complex cluster setup of virtual editions across three different compute nodes. This is just the network part — all I did was say, "Hey, I'd like a load balancing VIP." Great, what does that mean? Well, it means we have a lot of dynamic, on-the-fly, multi-tenant L2 binding to every device. Depending on how you set it up, I could have two legs, I could have five legs, I could have a hundred legs. Now, is your load balancer an L2 device? No — it's actually higher than that; it goes up the stack. So do I need to graft L3 onto that L2? And by the way, does Neutron give me subnets and all the IPAM data? Sure it does — so should I be able to do that? Yes. So there are L3 bindings. What does that mean? It means I have to give things fixed addresses for the cluster, and I have to work through things. How many people know that SNAT pools should travel with cluster addresses for high availability, so you don't have connection failures when you fail over a device? Or how many people know that you'd like to have more than 64,000 connections, so you don't run into port exhaustion? So should you have to dynamically build your SNAT pools, and should you dynamically place your services so that they fail over appropriately across clusters? Yes. And by the way, failover in an L2 cluster requires allowed address pairs in the reference implementation, right?
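Both of those pieces surface as ordinary Neutron port attributes. Here is a hedged sketch, again with python-neutronclient and placeholder IDs: one port requested with binding:vnic_type='direct' for SR-IOV, and a shared cluster address advertised through allowed_address_pairs so a failover peer is permitted to claim it.

```python
# Sketch of the two Neutron port features mentioned above.
# NETWORK_ID and the addresses are placeholders.
from neutronclient.v2_0 import client

NETWORK_ID = 'REPLACE-WITH-NETWORK-UUID'

neutron = client.Client(username='admin', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

# 1. SR-IOV: request a direct (VF passthrough) port instead of a
#    vSwitch port; this vnic_type landed in Neutron in Juno.
sriov_port = neutron.create_port({'port': {
    'network_id': NETWORK_ID,
    'binding:vnic_type': 'direct',
    'name': 've-dataplane-0',
}})['port']

# 2. HA: allow a floating cluster address on the port, so the standby
#    unit can answer for the VIP after a failover.
neutron.update_port(sriov_port['id'], {'port': {
    'allowed_address_pairs': [{'ip_address': '10.0.0.100'}],
}})
```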
Think about all the complexities of binding L2 and L3 in a fully clustered solution. We've orchestrated all of that. It works great with the reference implementation; it works great with people who are ML2 compliant. If you're not, our agent has very distinct L2 and L3 bindings broken out. Why do you think that is? So you can tell us who you're working with, and we can shake their hand and get it to work. Does that make sense? How many of you have OpenStack clouds that are not the reference implementation for your vSwitches? Oh, we love you guys — a little work to do. Tony, put your hand up; I know you do. So does this make sense to everybody? There are a lot of devils in the details — you always appreciate that the details are important — and we've worked very hard to make sure those details were covered.

Now, what do we do beyond LBaaS? How many people want more features than what's in LBaaS v1 or v2? How many people know that we support a lot more features than are in LBaaS v1 or v2? Our devices do a lot more than that. However, let's ask: are the underlying network primitives supported by LBaaS v1 and v2 good enough for 80-plus percent of your use cases? Are they good enough to do the L2 binding and the L3 binding into Neutron? Yes. Now, what do you want us to do beyond that? Well, it would make sense that you'd want us to bring the full weight of F5's application proxies to bear on OpenStack.

So LBaaS is one service definition. At F5 we have a templating language that we've spent a lot of time, money, and energy on to do application service templates — it's called iApps. Enterprise vendors have been working with us for a long time, saying, "I have this templated deployment that simplifies how I do a complex application, in a templated fashion, on BIG-IP." Now I'm going to ask: if we already have all the network primitives and all the L2 and L3 plumbing done for OpenStack, do you think we should be planning to bring our iApp templates as service definitions down onto that plumbing, so that we can give you advanced services inside of OpenStack? That's the way your LBaaS is running now — it's already a service definition. We prepped for that from the beginning: we built service definitions inside of Neutron, we pushed them out, the agent played them, and the next step was to move them to an iApp. Now that we have it in an iApp, do you think we can start templating other services for you? We can.

So, in the little bit of time we had with you today, what we wanted you to appreciate is: one, grafting network services into OpenStack, and into the use cases and workloads that OpenStack can support, isn't trivial. There's work to be done, and we spent a lot of time doing that work across the F5 products. Two, on our virtual editions, for NFV and for other things, there's a lot of work to be done to onboard them correctly and make them look like proper OpenStack resources, not just vendor resources. We've done that work. Where the community supports higher-speed options, we've done that work. And when you talk about other service definitions beyond LBaaS that we can support with our application proxies: we have already architected our LBaaS plugin to take the top end of that service, replace it with another iApp, and fit on top of all the work we've done to cluster and get the plumbing right. Does that make sense, everybody? Did we do the right thing for you?
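To give a feel for what the "top end" of such a service definition looks like, here is a rough sketch of deploying an iApp template through BIG-IP's iControl REST interface with Python requests. The host, credentials, and variable values are hypothetical, and the payload shape is our approximation — the actual payloads the LBaaS agent sends aren't shown in this talk.

```python
# Hypothetical sketch: deploying an iApp service via iControl REST.
# Host, credentials, and variable values are illustrative only.
import requests

BIGIP = 'https://bigip.example.com'   # placeholder management address
AUTH = ('admin', 'secret')            # placeholder credentials

service = {
    'name': 'web-app-service',
    'template': '/Common/f5.http',    # a stock HTTP iApp template
    'variables': [
        # iApp variables parametrize the otherwise-fixed deployment.
        {'name': 'pool__addr', 'value': '10.0.0.100'},
        {'name': 'pool__port', 'value': '80'},
    ],
}

resp = requests.post(BIGIP + '/mgmt/tm/sys/application/service',
                     json=service, auth=AUTH, verify=False)
resp.raise_for_status()
```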
We're trying — we're trying very hard to make sure that the full weight of advanced network services comes as a tested, supportable-across-multiple-platforms, your-workload-your-way kind of service definition and deployment. That was our LBaaS. It's a lot of work. Absolutely.

Do you guys have any questions at this point? If you have any questions about the specific stuff that's inside the goodness, please come see us. Specifically, any questions you have about the SDN grafting, or BIG-IPs acting as full VTEPs inside of your clouds; specifically, questions about "what do I do when my company has the following twelve services that I want to build into iApps and graft into OpenStack" — because we can tell you how we're working through that, specifically in private cloud. Okay.

[Audience question]

We're taking the Red Hat model to that — you know how Red Hat has the 90-day delay to watch the blood spill on it. So the answer to that is: no, we haven't released it yet. Oh yes, absolutely. And LBaaS v2, for us, gives us more flexibility — and even the next stuff that's coming for Liberty, to be able to say that we've completely decoupled the pools and the VIPs (there's a sketch of that object model below). Again, for us, that's just another reorganization of the service definition. Yeah, the box does it, right? Right. And we'll continue to keep up with what's released, to be able to say that all of the standard services, of course, will work. Because if you just want to do LBaaS, it's simple — we'd love to do that. If you want to do an advanced service, our statement to you guys is: let's templatize it as an iApp and then graft it into the existing network architecture with Neutron that we've put together.

Now, are we going to have to do proprietary solutions with proprietary SDN vendors that don't put all of the data model in Neutron? We're going to do it. As customers come to us and say, "Hey, I've got this proprietary SDN vendor, I need you to go do this," we're going to work it into the model. And I'm going to ask: once that SDN has been worked into the model, does that change the iApp at all? No. So are all the services deployable across different SDN vendors? Sure — we just have to figure out how to do the L2 and L3 plumbing your way. The reference implementations are done; the ML2 implementations are done.

And it's worth mentioning that we have a partnership with Nuage, and we have a reference architecture — we're included in their reference architecture, and there's documentation at our booth about our Nuage integration. Right, and I'll speak to that a little bit: our friends at Nuage, too, were kind enough to say, "Look, there are some of these API calls that we can simply support using the Neutron APIs," and not make you do proprietary things. They've been very good to us in saying those should be supported — for instance, allowed address pairs. Allowed address pairs should be a Neutron call; it doesn't have to be a proprietary call. So we're working with them, and we're always going to try to push toward a community solution, so that the API calls look the same whether you're using the reference implementation or something else. We will always try to promote that. Right — and it's good to see things like the L2 gateway that we can take advantage of, and VLAN-aware VMs; things like that will allow us to expand our solution.
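On the LBaaS v2 decoupling mentioned a moment ago: under v2, the load balancer, listener, and pool become separate objects that reference one another, instead of one VIP-plus-pool pair. Here is a rough sketch in the same python-neutronclient style as before; the IDs are placeholders and method names may vary by client release.

```python
# Sketch of the decoupled LBaaS v2 model: loadbalancer -> listener -> pool.
from neutronclient.v2_0 import client

SUBNET_ID = 'REPLACE-WITH-SUBNET-UUID'

neutron = client.Client(username='admin', password='secret',
                        tenant_name='demo',
                        auth_url='http://controller:5000/v2.0')

# The load balancer owns the VIP address and nothing else.
lb = neutron.create_loadbalancer({'loadbalancer': {
    'name': 'web-lb',
    'vip_subnet_id': SUBNET_ID,
}})['loadbalancer']

# Listeners attach protocols and ports to the load balancer independently.
listener = neutron.create_listener({'listener': {
    'name': 'web-http',
    'loadbalancer_id': lb['id'],
    'protocol': 'HTTP',
    'protocol_port': 80,
}})['listener']

# Pools are now their own objects, bound to a listener rather than a VIP.
neutron.create_lbaas_pool({'pool': {
    'name': 'web-pool',
    'listener_id': listener['id'],
    'protocol': 'HTTP',
    'lb_algorithm': 'ROUND_ROBIN',
}})
```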
And we're talking with the folks that did the hierarchical port binding demonstration — you saw that presentation — so that if you didn't want us to be the VTEP, we could simply do VLAN tagging and let somebody else do that work. So we're going to try to be the community member, and try to promote the open way of doing this, but still bring the richness of templated services to what you can do with our product. Make sense? Good. Thank you for your time. We appreciate it.