Hello, good afternoon everybody. My name is David Peret and I'm a solutions architect working for F5 Networks. I mainly cover the service provider market, so I don't know if there are any people in the audience working in service providers, like mobile or fixed networks, but if you are, feel free to ask me questions at the end of the presentation or at the networking event. The agenda is pretty straightforward: I will discuss the integrations that F5 is doing for OpenStack, and the way F5 can provide network services in your OpenStack data center with the same quality as you would get in a traditional physical data center. First of all, regarding OpenStack, this year we can finally say that we are all in. That's a sentence I have stolen from my colleague John Glover, who was speaking at the OpenStack Summit in Austin. I don't know if you were there, but it was a really great presentation where he stressed the fact that F5 has been working with OpenStack for quite a few years, but during the last year we have made big investments and dedicated a facility in Boulder, Colorado, where we have product development, testing, and dedicated data centers that use Heat to orchestrate the different procedures. We also have dedicated business development teams working with the commercial distributions of OpenStack; I will talk about that in the next slide. Of course we are a member of the OpenStack Foundation, since we believe this is the footprint of the future, and of today. Also important is the ecosystem: when we visit our customers and talk about F5 running in OpenStack, they don't quite believe it until they see it on the vendor's or the distribution's page. This is one reason we have evaluated and tested our solutions with all the main commercial distributions of OpenStack. I would say they are not all equally important for us, because they are not equally important for our customers either, right?
We have put the most emphasis on those in the top right: Mirantis, Red Hat, and HP are the most well tested and the most integrated, but of course any of them is possible for us. In this section I will talk a little bit about the integrations we have built between OpenStack and F5. We are known as a load balancing company, but we do much more than that, with network security services, DDoS protection, VPNs, and many other things. First I want to stress the difference between the two technologies we are offering to our customers: LBaaS is used to deploy the load balancing network service, as you heard from the previous presenter as well, and Heat is used to orchestrate an application infrastructure. Some more details that might help differentiate the two: LBaaS is an OpenStack Neutron project. It collects a minimal common set of load balancing functionality that has been agreed in the OpenStack community, and it gives you the advantage that if you ever get tired of using F5, you can replace it with a different vendor, and vice versa: we can replace a different vendor without changes to your configuration workflow. But it also has limitations. It's multi-tenant, but limited. To overcome those limitations, we have Heat. With Heat you can deploy not only services but also infrastructure: you can deploy infrastructure from OpenStack, and you can also push configuration into the virtual machines you deploy, all using Heat. So it's a tool that covers all possible options for you, and F5 acknowledges this: we have Heat templates for you. An advantage of Heat as well is that it's flexible and composable, meaning you can run Heat templates inside other Heat templates, and it is multi-tenant aware, so it has all the features you get from LBaaS.
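To make that "minimal common set of functionality" concrete, here is a small sketch of roughly what an LBaaS configuration carries: a VIP, a pool of members, and one of a handful of agreed load balancing algorithms. The field names are illustrative, not the exact Neutron LBaaS API schema.

```python
# Sketch of the vendor-neutral data model behind an LBaaS configuration.
# Field names are illustrative; the real Neutron API schema differs in detail.
lbaas_config = {
    "vip": {"address": "192.0.2.10", "protocol": "HTTP", "port": 80},
    "pool": {
        # Only a few algorithms are part of the community-agreed common set
        "lb_algorithm": "ROUND_ROBIN",
        "members": [
            {"address": "10.0.0.11", "port": 8080},
            {"address": "10.0.0.12", "port": 8080},
        ],
    },
}
```

Everything beyond this minimal model (advanced persistence, security profiles, and so on) is vendor-specific and sits outside what LBaaS standardizes, which is exactly the gap Heat fills.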
So, to explain a little more: when would you use LBaaS, and when would you use Heat, with F5 solutions? When you do something like simple TCP application load balancing, you could do that with LBaaS. When you want to load balance a web portal, or a secure web portal using HTTPS, you could also do that with LBaaS. But if you want to deploy advanced DDoS protection for your data center with an F5 firewall, or you want to do VPN based on SSL, IPsec, or something else, or maybe you want to deploy a Microsoft application in your IT data center, such as SharePoint or Exchange — as you know, those are very well integrated with F5 load balancers and recommended by Microsoft, but the configurations are really complicated — then it's good to have Heat to automate them. To take it a little further: in order to simplify the Heat templates for our administrators, we have developed a plugin, a piece of software that Heat supports, which lets Heat handle F5 configurations in a similar way to the rest of the infrastructure you have in OpenStack. Let me go over that in a little more detail in the next slide. This is how a Neutron floating IP resource looks in a typical Heat template, a very simple template. With the F5 Heat templates and plugin, you get new types of objects, or resources, which instead of using the OS:: prefix use the F5:: prefix, and they map to our object tree. So, for example, in this case you have an object called iAppFullTemplate, which is a configuration template that we'll talk about shortly. You can also see that it depends on another object, also an F5 object, called a partition. A partition is an administrative division of our configuration, so that you can share F5 virtual machines or physical devices across different tenants in a multi-tenancy scheme.
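As a rough illustration of the F5:: namespace idea described above, here is a sketch of such a Heat template, assembled as a Python dict for readability. The resource type and property names are illustrative and may not match the shipped plugin exactly.

```python
# Sketch of a Heat template mixing a standard Neutron resource (OS:: prefix)
# with F5-namespaced resources (F5:: prefix). Type and property names are
# illustrative, not a guaranteed match for the actual F5 Heat plugin.
heat_template = {
    "heat_template_version": "2015-04-30",
    "resources": {
        "vip_floating_ip": {
            # Standard OpenStack resource, handled natively by Heat
            "type": "OS::Neutron::FloatingIP",
            "properties": {"floating_network": "public"},
        },
        "tenant_partition": {
            # F5 plugin resource: an administrative partition on the device,
            # giving per-tenant separation of configuration
            "type": "F5::Sys::Partition",
            "properties": {"name": "tenant_a"},
        },
        "app_service": {
            # iApp-based application template; note the dependency on the
            # partition, just as described on the slide
            "type": "F5::Sys::iAppFullTemplate",
            "depends_on": ["tenant_partition"],
            "properties": {"partition": {"get_resource": "tenant_partition"}},
        },
    },
}
```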
Now, the iApp object that we saw in the previous template: iApps are a cornerstone of F5 application configuration and lifecycle management. What I mean by this is that with iApps you get an easy button, a simple wizard you can use to deploy F5 configurations that cover all the necessary aspects of an application such as Microsoft Exchange, and you don't really have to know F5 to deploy it; you just have to know Exchange, you just have to know your application. You go through the wizard, answer the questions, and the configuration gets deployed on an F5 device, right? This is a very useful tool for orchestration, because the same mechanism can be driven by an API: you can supply all the answers to the questions through an API, and that allows you to automate. It can be basically any configuration from F5, whether it's a firewall, a load balancer, a VPN, anything. So managing orchestration with F5 is very simple: we have iApps, we have Heat, and we have our Heat plugin. With those together, it becomes really easy to deploy and maintain layer 4 to layer 7 network services for your OpenStack infrastructure and tenants. One thing I forgot to mention about iApps is that they allow re-entry. This means that once you have deployed your application configuration on an F5 device — it could be a virtual load balancer — and you decide to change the configuration, you can run through the iApp template again, and while your traffic is live it will change the configuration to the different IPs, different settings, or whatever you need. This is a very important feature, and it also gives you configuration consistency, because iApps are templatized configurations, repeatable and tested by F5, by our professional services, or even by your own DevOps teams. Moving on, let's talk about the tenancy model that F5 supports for OpenStack.
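The "wizard answers over an API" idea above can be sketched as follows. The endpoint path follows F5's iControl REST conventions for iApp services, but treat both it and the variable names (a hypothetical Exchange-style template) as illustrative rather than an exact recipe.

```python
# Sketch of supplying iApp wizard answers programmatically. The template
# name and variable names below are hypothetical; the REST path follows
# iControl conventions but should be checked against your BIG-IP version.
def build_iapp_payload(name, template, answers):
    """Turn a dict of wizard answers into an iApp service payload."""
    return {
        "name": name,
        "template": template,
        "variables": [{"name": k, "value": v} for k, v in sorted(answers.items())],
    }

payload = build_iapp_payload(
    "exchange_2013",
    "/Common/f5.microsoft_exchange_2013",   # illustrative template name
    {"pool__addr": "10.0.0.100", "pool__port": "443"},
)
# The payload would then be POSTed to something like:
#   https://<bigip>/mgmt/tm/sys/application/service
```

Re-running the same payload with changed answers is what the re-entry property mentioned below relies on: the same template applied again converges the device to the new settings.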
F5 acknowledges that OpenStack has a very well-defined tenancy model, and we have to support it if we want to play in OpenStack. As you can see in this picture, we support both multi-tenant and single-tenant deployments. What is the difference? In multi-tenant deployments our devices are shared: whether they are physical or virtual, they can be shared, and this is interesting because you can use a virtual machine, like here, that is shared by multiple tenants. These are virtual machines, but still shared, so you don't necessarily have to buy F5 hardware — and of course you can dedicate the hardware or the software to a single tenant if you so wish. In multi-tenant deployments we connect to the provider networks, and it's important to say here that we support all the overlay mechanisms that are standard in OpenStack — VXLAN, GRE, and others to come in the future, and extensions to VXLAN will be supported as well. This makes it possible to extend layer 2 from the tenants directly into our devices and to share them across all those tenants, and everything is managed by Neutron, so we are not doing anything different from the other vendors here. Then the other option is, of course, a dedicated virtual machine, letting the tenant deploy this virtual machine in their own networks, which are also controlled by Neutron. But in this case the F5 is not aware of the underlying overlay structure, especially when deployed as a virtual machine. You can still share the device in this case, and the way you share it is by using iApps again: a single tenant can deploy any number of iApps on the same device or cluster of devices, and those iApps can belong to the same application or to different applications, so it gives you the possibility to get the most out of your F5 investment. Then we do something even more OpenStack-related than the iApps themselves: let's talk about the onboarding process.
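Extending tenant layer 2 into the device, as described above, means provisioning an overlay termination point per tenant network. This is a purely hypothetical helper sketching the bookkeeping involved: the Neutron segmentation ID of the tenant network becomes the VXLAN VNI terminated on the F5.

```python
# Hypothetical sketch of the per-tenant overlay bookkeeping: map a tenant
# network's Neutron segmentation ID to a VXLAN termination point on the
# device. All names and the structure here are illustrative.
def overlay_endpoint(tenant, segmentation_id):
    """Describe the VXLAN tunnel that would be provisioned for a tenant network."""
    return {
        "tenant": tenant,
        "type": "vxlan",
        "vni": segmentation_id,              # Neutron segmentation ID becomes the VNI
        "name": f"vxlan-{segmentation_id}",  # illustrative device-side object name
    }

ep = overlay_endpoint("tenant_a", 5001)
```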
Onboarding is what happens when you deploy an F5 virtual machine from Horizon or the CLI into your OpenStack cloud. We have created a script, included in the F5 Virtual Edition (or injected by another script, as I will explain soon), that allows you to harden the security of your load balancer for OpenStack. There we do things such as securing the initial passwords of the virtual machine, because as you know, when you download something from the F5 support site — or from other vendors — it comes with default passwords, and those are all over the support manuals, so you don't want anybody hacking into your virtual machine just by using the well-known passwords that are available to everyone. This script makes sure you get a random password that is only visible through the OpenStack console: the administrator or the tenant can look at the console, check the password, and change it to whatever they want, but it is secure and cannot be stolen. You can also get SSH keys injected for passwordless secure access, maybe for orchestration purposes. We can adjust MTU settings, and we can auto-configure the data plane and management plane interfaces from Neutron. All of this happens thanks to this script that we have injected, and of course the licensing is automated as well. The requirement is that you must use version 11.5 or higher — already a pretty old version from F5 — and it requires the use of the Nova metadata service, because what the script is basically doing is sending queries to the Nova metadata service, which is where the actual data is stored, such as the SSH keys and all those things the tenant is using. Moving on: before you can do what you saw in the previous slide, you need to import your F5 images into Glance.
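The metadata queries the onboarding script performs can be sketched like this. The metadata URL is the standard link-local address OpenStack guests use; the parsing shown is a simplified illustration of pulling the tenant's injected SSH keys out of the Nova metadata document.

```python
import json

# The standard link-local metadata endpoint available inside OpenStack guests.
METADATA_URL = "http://169.254.169.254/openstack/latest/meta_data.json"

def extract_public_keys(metadata_json):
    """Pull the injected SSH public keys out of a Nova metadata document."""
    meta = json.loads(metadata_json)
    return list(meta.get("public_keys", {}).values())

# In the real script the document would come from
# urllib.request.urlopen(METADATA_URL); this sample shows the typical shape:
sample = '{"uuid": "abc", "public_keys": {"mykey": "ssh-rsa AAAA... user@host"}}'
keys = extract_public_keys(sample)
```

The same pattern applies to the other onboarding steps the speaker lists: the script pulls what it needs (keys, network settings) from the metadata document rather than baking anything into the image.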
There are quite a few F5 Virtual Edition images for the different versions of the software, and there is also a virtual image for centralized management, and all of them have to be imported into Glance. If you want to do that manually, it is a tedious process, especially if you also want to include the hardening script I mentioned earlier, which has to be injected into some of those versions. So in order to automate that, we created a template that has to be run from an administrator account, because it is really plugging into privileged places of your OpenStack cloud. It will download those images — which you have previously downloaded from our support site and placed on a local web server on some provider network of your data center — modify the image to inject the initial configuration script I explained in the previous slide, and create the imported images in Glance. After that, any tenant or administrator can deploy F5 images into the cloud.

Another thing we have taken care of is the high availability of our services, and here we haven't invented anything new either: we are using the same high availability mechanism as in our physical or virtual instances of F5. They use a technology called Device Service Clustering, which allows clusters of up to 8 machines, virtual or physical, and these are stateful clusters, which means that when one of them goes down, all the services on that machine can go to any of the others. You can control the order of the failover, you can even do load-based failover, and other smart ways to transfer the failed services to other instances. What happens in OpenStack, of course, is that it is a software-defined network, so Neutron has to know about the IPs needed for this clustering mechanism, and it has to know that these IPs can move from one Virtual Edition to the other. Doing that by hand means entering Neutron commands, which can be a bit slow and tedious, so again we created a Heat template that does all of this: it deploys the virtual machines and puts them in a cluster, for whatever services F5 is providing in your cloud.

I'm going a little fast because I am getting close to the end; I'll try to slow down. This part is about LBaaS, the standardized load balancing as a service that we saw in the previous presentation. F5 has one of these plugins as well. The first version was only supported by the community, which meant that if there was any problem with the plugin you couldn't call F5 support, but now you can: anything after that version is supported. This is the overall architecture, and it's pretty much the standard setup: you have the standard interfaces from OpenStack, Horizon and the CLI, and through those you enter the standard parameters of a load balancing configuration. Someone who wants to load balance web servers, or whatever the application is, has to enter a VIP and a list of pool members — which are your instances — and can configure some basic things such as a couple of load balancing algorithms, whatever the community has decided to include. The rest are F5 proprietary features, not visible to the tenants but configurable by the administrator, and they have the power to improve the scalability of the solution. As you can see, you can attach the LBaaS service to physical or virtual deployments, and we have an interesting feature that I'll discuss in the next slides.

The first thing to discuss is the topology for this LBaaS service: where am I going to place my load balancing service? This matters especially if it's physical, but the same questions arise for virtual. We have mainly two modes. One of them is the simplest from the routing point of view — or let's say the simplest from the F5 configuration point of view — which is global routed mode, in which all the routing is taken care of by an SDN controller, or Neutron, or something like that, and F5 only has to deploy the layer 4 to layer 7 details: the VIP listening for the load-balanced traffic and the IPs of the members in your pools. This is the case, maybe, when you have the F5 close to the edge, behind some router, virtual or physical, and you have configured all the routing so that the nodes can reach the F5, the F5 can reach the nodes, and the outside world if necessary. But the most common setup is this one, in which the F5 reaches the layer 2 of all the subnets, all the layer 2 domains created by the tenants. Every tenant can create their own subnets, using Neutron as the engine, and when they specify those subnets in the LBaaS configuration, the LBaaS agent creates the necessary overlay termination points on the F5. So, for example, this is a VNI for VXLAN for tenant A that will be created on the F5 so that it can reach the virtual machines of that tenant. And this, for example, is a two-arm deployment in which there are some clients and some servers, and both of them reach the F5 — a chassis or appliance platform here, but the same would apply if this were a virtual machine, so it doesn't have to be a physical appliance or chassis.

An F5-specific feature is the simultaneous support for multiple LBaaS services. What this means is that you can deploy parallel LBaaS services, each associated with a different cluster of F5 VEs and each associated with different tenants, where the tenants could be classified as belonging to different environments — development and production, for example, a common division in many companies. You don't want your development people accessing the production machines, and vice versa, you don't want production modifying the LBaaS configurations of your testing environment. So this allows us, with a single LBaaS installation — let's say three parallel deployments, each dedicated to one of these environments, distinguished by a configuration parameter — to have dedicated configurations depending on which tenant you are using.

Then, regarding scalability, this is something pretty good from F5: we continuously measure the capacity of our clusters. The clusters you deployed prior to configuring the LBaaS plugin are measured constantly by the plugin, and it reports this information to Neutron so that it can decide effectively where to send the next LBaaS configuration, based on how loaded each cluster of LBaaS machines is. You can have multiple clusters and multiple machines within a cluster, so where the next configuration goes depends on how the clusters are loaded and how each of the machines in a cluster is loaded. The figures used to decide how loaded an instance is can be throughput, connections, the number of tenants that the cluster is taking care of, or route domains. Route domains are kind of parallel forwarding planes, like the VRFs you find in Cisco; they are necessary when you have to share the data plane across tenants, and tenants can have overlapping IP space thanks to these route domains. But this is all transparent to you — the agent is doing it, you never see it — although in the configuration file you can of course adjust the algorithm so that it better suits the needs of your customers.

We are reaching the end, so this is about the F5 roadmap, and I'm not going to go into detail on everything we have, because it's quite a lot, as you can see. This slide is the same one that was presented at the OpenStack Summit in Texas. We have already gone through everything in this part, but everything on this side is still under development, and I would say the priorities for F5 now are developing additional Heat templates that cover the F5 configurations most of our customers want automated. We will look at our customers, at which areas they want to work on with us regarding Heat, and at how they want to use the LBaaS plugin. But LBaaS is something defined by the community, and while we are part of the community, we have to wait for the community to decide. All of these X-as-a-service implementations bind us to Neutron, and the community, together with F5, will decide whether these functionalities ever get implemented. It's quite likely we will get, for example, HTTPS offload or SSL termination through LBaaS in the future, because it's interesting, and now we have the Barbican repository that can store the certificates for us in the OpenStack cloud — it's one of the things we need. In the future we would also like to provide firewall as a service. We cannot do that today, because in order to do that we have to become a layer 3 agent plugin to OpenStack Neutron; that implementation is under way, but it also has to be agreed with the community. So if you cannot remember anything else, I think this would be the slide to remember from my presentation: this is the place where you can find all of our driver code, all of our open source pieces like the LBaaS driver, and you are free to modify them and contribute back to the GitHub repository. If you search for "F5 GitHub" you will get to these pages, very simple. And this is just one proof that there is a kind of paradigm change in the philosophy of F5 in the way we approach the public cloud: we believe in open source, for one reason, that it is the most neutral, lock-in-free approach for us, and we have seen that it is very successful today all over the world, in service providers but also in enterprises. So that's all for me. If you have any questions, feel free. That's it.