Good afternoon, everyone. My name is Jeevan Sharma, and for the next 20 minutes I'll be talking about secure application delivery services in a private hosted cloud such as OpenStack.

Application delivery has evolved considerably over the years. Applications have moved out of the traditional data center, and more and more of them are moving toward private and public clouds as IT builds out these new environments. So what is the implication for application networking services, or application delivery controllers (ADCs), as applications go cloud native? These services need to integrate tightly with the data center infrastructure. They have to offer the same agility the applications themselves have, so that you can ensure compliance, meet your service-level agreements, and deliver services to your users.

Many organizations today still have applications hosted in their traditional data centers, which they will probably migrate over the next few quarters or years. At the same time, they are moving new applications to the cloud, containerizing them, and using microservices to spin applications up in their cloud environments. So when you ask what the application delivery or load-balancing needs of these applications are, there isn't one single requirement. Some customers need higher performance and higher throughput, with services that work well in their traditional data center environment.
At the same time, other customers need very agile ADC services that follow the applications. At A10, we provide the most flexible options, so we can offer solutions for both of these use cases: ADCs for applications hosted in the traditional data center, as well as solutions for cloud-hosted applications.

So what are the benefits of the A10 ADC? First, it is a best-in-class solution: we have DDoS protection capabilities, we offer SSL Insight, we offer firewalls, with different products that cater to each of these problems. Second, we are agnostic from an infrastructure point of view. We don't require your ADC to live in a particular environment: we support traditional data centers, virtualized data centers, and public and private clouds, so no matter what your infrastructure is, we have a solution for it. We support multi-cloud management, which means that from a single controller, a single pane of glass, you can manage your ADC services across multiple clouds. On top of that, we provide auto-scaling, application analytics, and visibility, along with very agile and responsive customer support.

What is our product vision, the A10 Harmony vision? Whether you have physical, traditional data centers, a virtualized data center, applications hosted in a private cloud such as OpenStack, or a public cloud such as AWS, we provide secure application delivery services in all of these environments, with a choice of consumption models and form factors.
We offer the ADC as a hardware appliance, or as software you can run on bare metal if you need higher performance. We have virtual appliances that you can run in your virtualized environment, or a cloud-native ADC that you can run as a cloud-only service. Our ADC is also 100% REST API compliant and fully programmable: if you are a DevOps shop using REST APIs, you can do pretty much everything through our aXAPI. We have cloud delivery services and cloud-native, lightweight ADCs that you can elastically scale in and out.

I'm going to present two demos today. One is about our A10 Thunder product line, which targets more traditional deployments and uses the LBaaS option. The other is the A10 Lightning solution, which is brand new and uses containerized, lightweight ADCs.

When it comes to OpenStack, we have two choices. What customers want is to offer AWS-like services to their end users, and there are two ways to do that. One is to use the LBaaS driver that is native to OpenStack; we provide an A10 LBaaS driver for it. The A10 LBaaS driver sits at one end of the spectrum: it integrates with physical A10 appliances, or with bare-metal and virtual appliances, so you can use it for applications that need higher performance and higher throughput, delivering application services in a more vertically scaled manner. With our A10 LBaaS driver we also support an SDN overlay on our ADC platform. What that means is that if you are using SDN, you can use our A10 Thunder devices, and your application workloads can talk to the ADC over a VXLAN tunnel.
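To make the "everything through REST" point concrete, here is a minimal sketch of the request bodies a script would send to an A10 device's aXAPI: one to authenticate, one to create an HTTP virtual server. The endpoint paths and field names follow the aXAPI v3 pattern as I understand it, and the device address is a placeholder; check the aXAPI reference for your ACOS release before relying on them.

```python
import json

AXAPI_BASE = "https://adc.example.com/axapi/v3"  # hypothetical device address


def auth_payload(username: str, password: str) -> dict:
    """Body for POST {AXAPI_BASE}/auth; the response carries a signature
    that goes into an 'Authorization: A10 <signature>' header on later calls."""
    return {"credentials": {"username": username, "password": password}}


def virtual_server_payload(name: str, ip: str, port: int = 80) -> dict:
    """Body for POST {AXAPI_BASE}/slb/virtual-server to create an HTTP VIP."""
    return {
        "virtual-server": {
            "name": name,
            "ip-address": ip,
            "port-list": [{"port-number": port, "protocol": "http"}],
        }
    }


# An HTTP client such as `requests` would send these, e.g.:
#   requests.post(f"{AXAPI_BASE}/auth", json=auth_payload("admin", "secret"))
print(json.dumps(virtual_server_payload("web-vip", "10.0.0.100")))
```

The same two-step pattern (authenticate, then configure objects under `/slb/...`) covers pools, members, and health monitors as well, which is what lets a DevOps pipeline drive the whole configuration without touching the GUI.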
Then we have the A10 Lightning controller, which gives you the ability to spin up ADCs right next to where your applications are. It is a centralized controller with load-balancing configuration through a single pane of glass. It also provides per-application analytics: for every app for which you spin up an ADC, or application delivery proxies, it gives you analytics for that app. It provides performance monitoring, and you can use it for auto-scaling, scaling instances up and down based on load.

Our A10 LBaaS driver provides a lot of capabilities: layer 4 through layer 7 load balancing, layer 7 content switching, and health monitors (not just the standard ones the LBaaS API defines, but a larger set). We also have SSL offload capabilities on the device, through a custom driver that lets you configure SSL certificates and keys from the LBaaS GUI. We have multi-tenancy support, as I mentioned, and vThunder orchestration through our LBaaS v2 driver.

So let me jump into the demo. First I'm going to show you how you can orchestrate a vThunder, our virtual A10 ADC, using Heat or any other orchestration mechanism. What I'm showing here is a vThunder being orchestrated through the LBaaS driver. In the interest of time, I'll move ahead and show you the next thing: how to configure an LBaaS service. Here I have an application with a set of servers, and I'm configuring an LBaaS service through the LBaaS Horizon dashboard. First, I create a pool.
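Behind the Horizon dashboard, this pool-then-VIP-then-members flow maps to Neutron LBaaS API calls. A rough sketch of the request bodies follows; the field names match the LBaaS v1 model (which is the pool-first flow shown in this demo) as I remember it, and every ID is a placeholder, so verify the details against your OpenStack release.

```python
import json

# Placeholder request-body builders for the Neutron LBaaS v1 endpoints.


def pool_payload(name: str, subnet_id: str) -> dict:
    """Body for POST /v2.0/lb/pools: the group of back-end servers."""
    return {
        "pool": {
            "name": name,
            "subnet_id": subnet_id,
            "protocol": "HTTP",
            "lb_method": "ROUND_ROBIN",
        }
    }


def vip_payload(name: str, pool_id: str, subnet_id: str, port: int = 80) -> dict:
    """Body for POST /v2.0/lb/vips: the front-end address clients hit."""
    return {
        "vip": {
            "name": name,
            "pool_id": pool_id,
            "subnet_id": subnet_id,
            "protocol": "HTTP",
            "protocol_port": port,
        }
    }


def member_payload(address: str, pool_id: str, port: int = 80) -> dict:
    """Body for POST /v2.0/lb/members: one per back-end server."""
    return {
        "member": {
            "address": address,
            "pool_id": pool_id,
            "protocol_port": port,
        }
    }


print(json.dumps(pool_payload("web-pool", "subnet-id-placeholder")))
```

When the A10 driver is installed, Neutron hands these objects to it, and the driver translates them into the corresponding aXAPI configuration on the Thunder device.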
Next, I create a VIP, then I add some members to the pool, and the service is up and running. Now I'm going to go to the A10 device, a vThunder running ACOS, and on this device we can see that the service has been configured. By the way, this happens asynchronously: the LBaaS driver goes and configures the device using the REST API. And if you want to make modifications, if you want to add more services or more advanced features, you can go ahead and do that in our GUI or through the CLI. Here is the VXLAN configuration: the device uses a VXLAN overlay to talk to the servers, and it sits next to the compute nodes in the OpenStack environment. So here the services are coming up, and here's the sample website that I created, which is now up.

Next, let me quickly show you how the licensing works. We have a very flexible licensing policy. We support perpetual licenses, which you can purchase and apply to the vThunders you spin up, as well as flexible pay-as-you-go licensing, where the vThunder contacts the licensing server and obtains a license automatically.

Now I'm going to move to our cloud controller and show you a demo of our Lightning application delivery service, the Lightning controller. This is a centralized controller that provides three things: load-balancing policy management from a single pane of glass, management of the proxies, and analytics on a per-application basis. The ADC proxies can be lightweight NGINX-based proxies, or they can be high-performance vThunder or bare-metal ACOS instances. The controller itself can be cloud-based, consumed as SaaS.
Or you can have an on-premises installation within your OpenStack or other virtualized environment. It is a self-service portal, it is highly multi-tenant, and it supports all of the cloud environments I show here, OpenStack being one of them.

So let's move into the Lightning demo and watch how the controller works. You have two options: you can go to the Horizon LBaaS interface and configure your load-balancing service there, or you can use the controller. In this case, what I'm showing is my OpenStack environment, where I have a portal running on five different servers, and I want to put a load balancer in front of them. This time I'm going to do it through the controller. Right now, as you can see, there is no LBaaS pool configured for this portal. So I log in to the controller, and from the controller I create an application. For that application I create an endpoint, which is the URL on which the service will be hosted: point your browser to that URL and you get the application service. The controller supports multiple clouds; I'm using OpenStack, so I tell it my application is hosted in OpenStack, supply my OpenStack user credentials, and move on. Next, I add my application services: I tell the controller where my app servers are and what their IP addresses are. If you were doing this in an AWS environment, you could simply say your application servers are behind an ELB and it would pick them up from there; in this case, I add them here in the UI itself. So I've added the five servers backing the web portal.
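The controller workflow just described (create an application, give it an endpoint URL, register the back-end servers, then launch proxies) can be sketched as a tiny in-memory model. This mirrors the demo steps only conceptually; the Lightning controller's actual API is not shown in the talk, so every name below is hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class Application:
    """Hypothetical model of one application managed by the controller."""
    name: str
    cloud: str                                   # e.g. "openstack" or "aws"
    endpoint: str = ""                           # URL the service is hosted on
    servers: list = field(default_factory=list)  # back-end server IP addresses

    def set_endpoint(self, url: str) -> None:
        self.endpoint = url

    def add_server(self, ip: str) -> None:
        self.servers.append(ip)

    def launch_proxies(self, count: int = 2) -> list:
        """Stand-in for asking the cloud to spin up ADC proxies next to the
        app servers (in the real flow, the controller calls the OpenStack
        controller to boot them)."""
        return [f"proxy-{self.name}-{i}" for i in range(count)]


# The demo's steps, in order: application, endpoint, five servers, proxies.
app = Application(name="web-portal", cloud="openstack")
app.set_endpoint("http://portal.example.com")
for ip in ["10.0.0.11", "10.0.0.12", "10.0.0.13", "10.0.0.14", "10.0.0.15"]:
    app.add_server(ip)
print(app.launch_proxies())  # ['proxy-web-portal-0', 'proxy-web-portal-1']
```

The point of the model is the ordering: the proxies are created last, once the controller knows which cloud they belong in and which servers they front.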
Next, I tell the controller to create the application proxies for me, that is, to create a load-balancing service. It starts the load-balancer configuration: it contacts the OpenStack controller and initiates the proxies. This usually takes a minute or two before the proxies are up, have IP addresses, and are configured. Once they are, the controller reports that the application proxies are up. The proxies can be spun up in the same tenant space, next to your app servers, so that you can scale them up or down based on load: you can specify that if the proxies are running at a certain load, additional proxies should be spun up, and if the load recedes, they can be spun down. The controller also collects usage statistics from the proxies in real time and populates them onto the controller, so from the controller side you get analytics on application usage, which I'll show you shortly.

The proxy creation is still running, so let me go over and show you. Here you see the two proxies being spun up, from the controller's view: the Lightning controller talks to the OpenStack controller and tells it to spin up the proxies. Now the proxies are running, and you can see it took about a minute or two, which is typical. Back in the controller, it reports that the proxies are spun up and the service is up, and asks what you want to do next. Now let's open a browser and go to the LBaaS configuration. Under Network, LBaaS, you can see that an LBaaS service has been created.
This was created by the controller itself. So in this case, if you don't want to use the LBaaS UI, you can just use the controller, and it takes care of creating the LBaaS service for you. It has the five members, as you can see, and it is monitoring the health of the application servers, the web portal servers. This is my application domain, with its configuration; I'm not going through those details today. Let me quickly show you the website: it is hosted by the application running on OpenStack and powered by the proxies load balancing it.

Moving on, I go back and ask for the dashboard, the statistics: how many requests were made. You can zoom into each of these and get more detail. I generated some more traffic, and you can see the dashboard updating in real time. It gives you information like response times and popular sites, and if there are any errors, it points those out as well.

And with that, I end this session. Thank you very much for joining.