 Good morning, everybody, and welcome to the presentation. My name is Shridhar Devrapalli, and I'm here to talk about how NetScaler powers production-grade OpenStack load balancing as a service, guaranteeing availability, performance, scalability, and service-level assurances. I'm part of the product management team for NetScaler. Can I see a show of hands? How many of you are NetScaler customers, existing or prospective, or perhaps looking at deploying load balancing as a service in your OpenStack environments? All right, a couple of hands. How many of you are familiar with what NetScaler is? So, a quick introduction. NetScaler is a market leader in the application delivery controller space. What is an application delivery controller? At a minimum it does load balancing, but it's not limited to just load balancing. "Application delivery controller" may sound like a fancy term, but it also consists of a set of sophisticated functions that provide layer 4 through 7 acceleration for the different applications in your data center. The functions it provides fall into a few categories. One is guaranteeing availability. Another is performance: a network element like NetScaler guarantees the availability and performance of your applications through very specific technologies that we build, including compression, caching, offload, URL switching, and several others. Offload is a big part of what we do: TCP offload, SSL offload, and many other acceleration technologies. And the last is security. We have an end-to-end web application firewall that protects against web application attacks, and a whole slew of security protection mechanisms that span all layers of the network. So essentially, NetScaler comprises a broad feature set, and it's a market leader in the application delivery controller space, as I said. 
Today, we're going to talk specifically about NetScaler's integrations with OpenStack. But before I get into the details and specifics of what we do with OpenStack, I want to talk a little bit about our objectives behind orchestration and automation, our strategy, and what we think about as we develop our integrations. We have two strategic objectives in mind for orchestration. The first is that irrespective of which orchestration environment you eventually adopt, or whether you're on OpenStack today and transition over to, say, VMware later on, from a NetScaler automation standpoint you leverage the exact same set of capabilities, irrespective of which northbound orchestration system you've deployed in your environment. We wanted to build a common architecture that guarantees that for our customers. The other thing we wanted to ensure is that irrespective of which NetScaler platforms you've deployed in your environment, be they standard physical appliances with great capacity, which are MPX appliances, virtual appliances, which are VPX appliances, or SDXs, which are multi-tenant appliances purpose-built for consolidation, you get the same level of automation and orchestration capabilities from NetScaler. The first step towards offering infrastructure as a service is pooling capacity; that's what gives you the economic advantages of running infrastructure as a service. So we give you the ability to pool together a mix of NetScaler appliances and use that as an aggregate pool of capacity to offer infrastructure as a service to your tenants. 
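The pooling idea described above can be sketched in a few lines of Python. This is purely illustrative, not the Control Center API; the class and field names are hypothetical, chosen only to show how a mix of MPX, VPX, and SDX platforms can be treated as one aggregate pool of capacity.

```python
# Illustrative sketch only (hypothetical names, not the Control Center API):
# pooling heterogeneous NetScaler platforms into one aggregate capacity pool.
from dataclasses import dataclass


@dataclass
class Appliance:
    name: str
    kind: str              # "MPX" (physical), "VPX" (virtual), or "SDX" (multi-tenant)
    throughput_gbps: float


class CapacityPool:
    """Aggregates a mix of appliance types into a single pool of capacity."""

    def __init__(self):
        self.appliances = []

    def add(self, appliance):
        self.appliances.append(appliance)

    def total_throughput_gbps(self):
        # The provider sees one aggregate number, not individual boxes.
        return sum(a.throughput_gbps for a in self.appliances)


pool = CapacityPool()
pool.add(Appliance("mpx-1", "MPX", 20.0))
pool.add(Appliance("sdx-1", "SDX", 40.0))
pool.add(Appliance("vpx-1", "VPX", 3.0))
print(pool.total_throughput_gbps())  # 63.0
```

The point of the sketch is simply that capacity from very different platform types is fungible from the provider's perspective once it has been registered into the pool.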
So with these objectives in mind, we built what we call NetScaler Control Center, which acts as a single point of control and visibility for your load balancing as a service offering across various orchestration systems, including OpenStack, which is what we'll be talking about today. A little bit more about our OpenStack integration. As I said, the integration is designed with Control Center in mind. We have a driver that's been upstreamed into the OpenStack source code; it was upstreamed in Icehouse, and it's part of the OpenStack source code starting with that release. The driver is designed to interface with NetScaler Control Center, and it's Control Center that offers the core automation capabilities we'll be going through in the rest of the presentation. So let's talk a little more about the core capabilities Control Center provides, and why it is more than just an API translation layer. You may think of it as simply translating Neutron LBaaS APIs into NetScaler appliance APIs, but it does a lot more than that. That extra work is what constitutes the critical functions you need to run production-grade load balancing as a service: guaranteeing service-level assurances for the different tenants in your environment, and guaranteeing resource hard-walling between them. The first and most basic aspect, which is often understated, is the complexity involved in automated end-to-end provisioning. Provisioning includes creating new instances on the fly, as and when required. It includes applying licenses to the newly created instances to unlock a certain amount of capacity and a feature set. It includes connecting the instances to the right set of networks without any manual intervention. 
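For reference, wiring the upstream driver into Neutron in the Icehouse timeframe looked roughly like the fragment below. This is a sketch from memory, not copied from the talk: the driver class path and the `netscaler_ncc_*` option names should be verified against the documentation for your OpenStack release before use.

```ini
; Illustrative Icehouse-era neutron.conf fragment; verify names for your release.
[service_providers]
service_provider = LOADBALANCER:NetScaler:neutron.services.loadbalancer.drivers.netscaler.netscaler_driver.NetScalerPluginDriver

[netscaler_driver]
; The driver talks to NetScaler Control Center, not to individual appliances.
netscaler_ncc_uri = https://ncc.example.com
netscaler_ncc_username = admin
netscaler_ncc_password = secret
```

Note that the driver points Neutron at Control Center rather than at any single appliance, which is what makes the pooling and SLA machinery described in this talk possible.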
It also includes deploying a specific version of software on those instances, and deploying specific VIPs and policies after the instances have been launched. In the infrastructure as a service model, the users themselves, the tenants, are consuming load balancing as a service. The user is consuming a service, and the user does not care about, and should not have to care about, the complexities involved in managing the infrastructure behind the scenes. And as far as the provider is concerned, we wanted to keep offering infrastructure as a service simple, even in a complex environment consisting of many NetScaler appliances, with workflows that are as simple as possible. So let's talk a little more about how policies are triggered and what happens behind the scenes. The user creates new load balancing policies; that's his way of consuming load balancing as a service. He creates a Neutron LBaaS policy, and after that, behind the scenes, Control Center does a series of things. It will create a new instance if necessary. It will attach the instance to the right set of networks, on the data plane and management networks. It applies a license to unlock a certain amount of capacity and a feature set for that user. It will allocate resources to that instance as specified by the provider, to guarantee a certain set of service-level assurances. And it will apply the policies that the user is creating on these newly launched instances. This entire workflow is completely automated end to end. It's done on demand, only when the user starts to use load balancing as a service, and there is no manual intervention. And more importantly, this level of automation is not restricted just to our virtual appliances, which would be the intuitive case if you think about it. 
There's a VM that needs to be launched as a Nova instance, and Control Center can launch that VM, connect it to the right networks, and then it's up and running. That is one way of thinking about how to offer load balancing as a service. But as I said before, we don't want to restrict our customers' choice of appliance type. You choose an appliance type based on the performance and scale that you need, but as far as automation is concerned, we provide the same level of automation irrespective of the appliance type. So with this automation workflow I'm describing, you can avail yourself of the same level of automation even on an SDX appliance or an MPX appliance. We spoke a lot about automation, but what about control, and what about flexibility? You can think of automation and control as opposing forces: the more you automate, the more you tend to relinquish control, because it's easier to automate one linear path without the ability to control things and the flexibility to guarantee policies. So as we designed this product, we treated the need for control and flexibility as being as important as the need for automation. By control and flexibility, what do I mean? How do you control the type of resources that are allocated for specific dedicated instances for every tenant? How do you control the isolation policies guaranteed for every tenant? Does a tenant get a shared instance, or is a tenant allowed to get a dedicated instance for himself? Questions like that. We give you that kind of control through our SLA policies. As an administrator, as part of the onboarding steps, you can define service-level assurances by creating what we call service packages. 
So you can say things like: I want all my gold tenants to get a dedicated SDX instance, with a certain amount of CPU capacity, a certain amount of memory, a certain amount of throughput, and a certain amount of SSL capacity guaranteed for each gold tenant. Every time a gold tenant starts using load balancing as a service, that amount of capacity is guaranteed for that tenant. You can create, say, a silver package; for the silver package, you want to give tenants just VPX instances, virtual instances. And for, say, a bronze class of tenants, you want them to share the same MPX appliance, but you still want each of the bronze tenants to have, let's say, 2 gigabits per second of throughput. These are the kinds of service-level assurances you can create using Control Center, and they're always guaranteed, even after automating end to end. So onboarding is three very simple steps as far as the provider is concerned. The first is you add your NetScaler platforms to NetScaler Control Center to form that aggregate pool of capacity that Control Center will then manage. Then you define your SLAs to guarantee service levels for different tenants. The third thing you do is assign those SLAs to your OpenStack tenants, and then they can start using LBaaS. So let's go ahead and do a demo of what I just spoke about. What are we demoing today? We'll show you two tenants, Coke and Pepsi. Coke has been assigned the gold service package, and Pepsi the silver service package. The gold service package consists of a dedicated SDX instance, and the silver service package consists of a dedicated VPX instance. And we'll see how this entire process is automated behind the scenes. All right, so over to the demo. Here I have three screens. This is NetScaler Control Center, the automation product. It's starting off with a clean slate. 
Nothing's been configured. The second screen is the Horizon UI; this is the admin view, and two tenants have been created there: one is Coke, the other is Pepsi. And finally, this is the NetScaler SDX appliance, which forms the platform on which the tenant instances are created. We can see that nothing is running on the SDX appliance yet. From a configuration standpoint, the first thing we'll do is form that pool of capacity I was talking about. We're adding devices to Control Center that Control Center can then manage; in this case, we're registering the SDX appliance with Control Center, entering its credentials, with the product name SDX. There you go. So we added the SDX appliance, and as you can see on the SDX screen, nothing's been created there yet. The second thing we'll do is create the service packages, which guarantee SLAs for different tenants. In this case, we're creating a gold service package, and you can see that I specified the isolation policy as dedicated, which means a dedicated SDX instance is allocated for every gold tenant. You can also see the resources that you can specify; this gives you granular control over how much capacity is allocated for every gold tenant. In this case we said two CPU cores, four gigs of memory, one SSL chip, and 200 megabits per second of throughput. You can also see that I'm able to specify a specific software version, which means that every gold tenant instance is created running NetScaler 10.1 software. So different tenants can have version independence, and as a provider, you can choose which version of NetScaler software runs on each tenant's instance. There you go: that created the gold service package. 
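Before moving on to the silver package, it may help to see the shape of the data the demo is entering. The sketch below models a service package the way the talk describes one: an isolation policy, a platform type, guaranteed resources, and a software version, with packages assigned per tenant. All the names here are hypothetical, not the actual Control Center schema; the resource values mirror the ones entered in the demo.

```python
# Illustrative model (hypothetical names, not the Control Center schema) of
# the service packages created in the demo, with per-tenant assignment.
from dataclasses import dataclass


@dataclass
class ServicePackage:
    name: str
    platform: str          # "SDX", "VPX", or "MPX"
    isolation: str         # "dedicated" or "shared"
    cpu_cores: int
    memory_gb: int
    ssl_chips: int
    throughput_mbps: int
    software_version: str


# Resource figures mirror the demo: a dedicated SDX for gold, a dedicated
# VPX 3000 (3 Gbps) for silver. ssl_chips=0 for VPX is an assumption.
gold = ServicePackage("gold", "SDX", "dedicated", 2, 4, 1, 200, "10.1")
silver = ServicePackage("silver", "VPX", "dedicated", 2, 4, 0, 3000, "10.1")

assignments = {"Coke": gold, "Pepsi": silver}


def package_for(tenant):
    """Look up the SLA package a tenant was onboarded with."""
    return assignments[tenant]


print(package_for("Coke").platform)   # SDX
print(package_for("Pepsi").throughput_mbps)   # 3000
```

The per-tenant lookup is the key idea: when a tenant later triggers provisioning, everything, from platform choice to throughput cap, is driven by this one record.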
The next thing we'll do is create the silver service package. For the silver service package, we want to use VPX instances: a dedicated VPX instance for each of the silver tenants. If you notice, we're actually using the flavors concept from OpenStack. Control Center has a native integration with OpenStack, so it can retrieve the flavors defined in OpenStack, and you can choose one of them. In this case, we chose the medium flavor, containing two vCPUs and four gigs of RAM. And we said VPX 3000, which means 3 gigabits per second of throughput for every dedicated VPX instance for all silver tenants. So we've created the service packages. The next thing we'll do is add the device to the gold service package; because it's an SDX type of service package, SDX appliances need to be added to it. That's the final step before the last thing we'll do, which is assign tenants to the service packages. We said gold should be assigned to Coke, and silver should be assigned to Pepsi, so let's go through those two steps. We just added Coke to gold, and now we'll add Pepsi to silver. And that completes our onboarding. Like I said, it was three simple steps: you create your pool of capacity, you create your service packages, and you assign those service packages to your tenants. We just went through those steps as an administrator would, as part of onboarding load balancing as a service. The next thing we'll do is go to OpenStack. We log out as the administrator and log back in as Coke. This time, it's Coke who's logging in, and as an OpenStack tenant he's going to create load balancing as a service policies. We'll see what happens behind the scenes on NetScaler. This is the standard workflow for creating load balancing policies in OpenStack: he's creating a Coke pool, selecting a subnet, and so on and so forth. 
He selects HTTPS as the protocol, and the load balancing method happens to be round robin. So that created the pool. The next thing we'll do is go ahead and create the VIP. Creating the VIP and binding it to the pool completes the workflow for deploying load balancing policies in OpenStack. So there you go: we've launched the creation of a VIP from OpenStack. Now there's a series of steps occurring behind the scenes after this, and this is exactly what I was talking to you about earlier. Once a VIP is launched, Control Center gets that call. It checks to see whether an instance has been created for Coke, and it sees that Coke belongs to the gold tier, and every gold tenant needs a dedicated SDX instance. So it checks whether an SDX instance has been created for Coke. In this case, it was not, so it goes ahead and launches the creation of an SDX instance behind the scenes, and then applies the policy on the newly created SDX instance. We'll see how that works in just a second. So here we're on the SDX platform. The instance was not there before, but you can see now that one has been created. And if you look at the resources allocated for that instance, you can see these are the resources that were specified during service package creation: total memory was four gigs, the number of SSL chips was one, and you see the CPU cores and so on. The instance is coming up; it takes a couple of minutes for the instance to come up for the first time. After the instance is up and running, we can see that the new policy Coke has created has been installed on it. And if you look at the Coke VIP in the Horizon UI, it takes a few seconds to become active. The other thing we're doing now is that the Coke tenant is accessing Control Center using his OpenStack credentials. And why is he doing that? 
Because he can see a richer, deeper set of operational data, like status, statistics, and health information, that you will not necessarily be able to see from the OpenStack UI. So it's the Coke tenant interacting with Control Center using his OpenStack credentials, and we're able to do this because of our integration with Keystone. That completes the workflow for Coke; essentially, we've seen how the SLA policies for Coke have been guaranteed by Control Center. Now we'll repeat the same process for Pepsi, so we log in as Pepsi. This is the Pepsi tenant creating load balancing as a service policies in OpenStack. We go through the same series of steps: Pepsi creating a Pepsi pool, selecting a subnet, and so on and so forth, standard OpenStack LBaaS workflows. Again, the same steps. Once he adds a VIP, that VIP creation again launches a series of steps on Control Center: it checks to see that the silver package was assigned to Pepsi, and it creates a VPX instance corresponding to that silver package. As you can see, this is now the Horizon view for Pepsi, and because the VPX was created as a Nova instance in OpenStack, you see the VPX listed in Pepsi's list of Nova instances. It's the NetScaler VPX that's been launched, and the flavor you see here is the flavor we specified during service package creation. It takes a couple of minutes initially for the instance to be launched, and just like we saw before, after the instance is launched, the load balancing policy is installed on it, and you see the VIP become active as the next step. So that pretty much completes the crux of the demo. It's a simple demo. 
But as you can see, it's a very powerful demo, because it illustrates how you can guarantee automated end-to-end provisioning and deployment while not relinquishing any control or flexibility over either the resources you allocate for tenants or the isolation model you define for different classes of tenants. That gives you the best of both worlds: the operational efficiencies of end-to-end automation, plus the capacity efficiencies of pooling infrastructure and slicing capacity in a very granular fashion for the different tenants in your OpenStack environment. We also support HA: we can automate the deployment of NetScaler HA pairs in OpenStack, both for our VPX virtual appliances and for SDX instances, which can be launched on separate SDX appliances. So thanks for joining the presentation. I hope this was useful. Please hang back and feel free to ask me any questions that might have come up during the presentation. I'll be around for some time. Thank you.
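The on-demand provisioning flow narrated in the two demo runs, which is: receive the VIP-creation call, look up the tenant's package, provision an instance only if one does not already exist, then apply the policy, can be summarized in a short sketch. This is not Control Center code; every name is illustrative, and the instance record is a stand-in for the real launch, license, and network-attach steps.

```python
# Illustrative sketch of the on-demand flow triggered by a VIP creation call.
# All names are hypothetical; provision_instance() stands in for the real
# launch / license / network-attach work described in the talk.

instances = {}   # tenant -> provisioned instance record
policies = {}    # tenant -> list of applied load balancing policies


def provision_instance(tenant, package):
    """Create, license, and connect an instance per the tenant's SLA package."""
    return {"tenant": tenant,
            "platform": package["platform"],       # e.g. SDX for gold, VPX for silver
            "licensed": True,
            "networks_attached": True}


def on_vip_created(tenant, package, vip_policy):
    # Instances are created only on first use: provisioning is on demand,
    # with no manual intervention.
    if tenant not in instances:
        instances[tenant] = provision_instance(tenant, package)
    # Apply the tenant's new policy on his (possibly just-created) instance.
    policies.setdefault(tenant, []).append(vip_policy)
    return instances[tenant]


gold = {"platform": "SDX", "isolation": "dedicated"}
on_vip_created("Coke", gold, {"vip": "coke-vip", "lb_method": "ROUND_ROBIN"})
on_vip_created("Coke", gold, {"vip": "coke-vip-2", "lb_method": "ROUND_ROBIN"})
print(len(instances))  # 1  (the second VIP reuses the existing instance)
```

The `tenant not in instances` check captures the behavior shown in the demo: Coke's first VIP triggers an SDX instance launch, while any later VIP lands on the instance that already exists.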