Hi, everybody. All ready? Good afternoon. My name is Ashish, and I'm here from Avi Networks with my colleague Praveen. I'm the OpenStack architect at Avi Networks. Welcome to the session. We're going to talk about elastic LBaaS, real-time visibility and analytics, show some cool demos, and we have a drawing going on at our booth right after this presentation that we'll talk about. So what is Avi Networks? We'll spend a couple of minutes on that and then go right into the demos. Avi Networks is a next-gen ADC, a load balancer, with integrated analytics. Let me tell you a little more about that. It's a full-featured ADC. Unlike the reference implementation, which is missing many of the features, it's a full ADC: L4, L7, high-performance SSL with perfect forward secrecy, and acceleration. Good. You know what an ADC is; it's a fancy name for a load balancer. So what's next-gen about our solution? First of all, it's 100% software, no appliance. It's the industry's first distributed architecture with automatic scale-out, HA, and a single point of control and management, with full REST-API-based integration. And then we go beyond the ADC. We call it BADaaS: Beyond Application Delivery as a Service. We have some stickers, by the way, if you want them. That's because we do visibility and analytics: no agents, no monitoring fabrics, all inline, with end-to-end visibility from users to applications and everything in between. All right, let's step into how it works. What is the distributed architecture? The brain of the system is the Avi Controller. It runs as a clustered set of VMs or containers. It runs as Nova instances inside OpenStack, not outside; it's fully integrated with OpenStack. It has a centralized policy engine and a single point of configuration.
The controller is what integrates with the OpenStack components: Nova, Neutron, LBaaS, Keystone, and Glance. And it automatically manages the lifecycle of micro-ADCs, or service engines, which it spins up as VMs or containers, again with application affinity in mind. So it spins up these service engines on demand in the right tenant, and they provide load balancing and data collection services. On top of the Avi Controller is our Avi UI, built on the REST API; or you can configure and manage everything through Horizon. All right, so without further ado: Praveen, can we show how an OpenStack admin would use the Avi solution? Thanks, Ashish. I'm going to show how a user would use Avi load balancing as a service. From the user's perspective, they just need to invoke the LBaaS APIs. Our controller exposes REST APIs, and, as Ashish showed, we have an LBaaS plugin through which the LBaaS API calls are proxied to the Avi Controller. So let me switch to the regular Horizon UI. I'm a user in this tenant, and I have these two servers, server one and server two, that I want to load balance traffic to. Like with any other load balancer, I would go to the Load Balancers tab and create a pool. So let's create a pool, a demo pool. In the provider list, we'll pick Avi as the provider for the load balancer instance that we want to create. Then we do the usual: choose the subnet and a protocol. I want an HTTPS application, so I'll go with that, and round robin as the way to load balance. I've created the pool, and now we'll add the members to the pool; I have these two backend servers. And again, I'd like to use SSL/HTTPS for my application to make it secure. So while that's in progress: I noticed that you configured SSL. Now, Horizon itself doesn't support SSL certificates. Are we going to show some SSL certificate handling as well? Yes. So let me, as usual... OK. Good. There.
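The steps Praveen just walked through in Horizon map onto the Neutron LBaaS v1 API objects: a pool, its members, and then a VIP. A rough sketch of the request bodies that workflow generates might look like this (IDs, names, and addresses are placeholders, not values from the demo):

```python
# Sketch of Neutron LBaaS v1 request bodies for the Horizon workflow above:
# create a pool with Avi as the provider, add two backend members, then a VIP.
# All IDs and IP addresses here are illustrative placeholders.

def make_pool(subnet_id):
    """Pool body: HTTPS protocol, round-robin method, Avi as provider."""
    return {"pool": {
        "name": "demo-pool",
        "subnet_id": subnet_id,
        "protocol": "HTTPS",
        "lb_method": "ROUND_ROBIN",
        "provider": "avi",   # selects the Avi LBaaS driver from the provider list
    }}

def make_member(pool_id, address):
    """Member body: one backend server joining the pool."""
    return {"member": {"pool_id": pool_id,
                       "address": address,
                       "protocol_port": 443}}

def make_vip(pool_id, subnet_id):
    """VIP body: the virtual IP fronting the pool."""
    return {"vip": {"name": "demo-vip",
                    "pool_id": pool_id,
                    "subnet_id": subnet_id,
                    "protocol": "HTTPS",
                    "protocol_port": 443}}

pool = make_pool("subnet-1234")
members = [make_member("pool-1", ip) for ip in ("10.0.0.11", "10.0.0.12")]
vip = make_vip("pool-1", "subnet-1234")
```

In the demo these bodies are produced by Horizon and proxied by the LBaaS plugin to the Avi Controller rather than typed by hand.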
So, as you mentioned, the stock reference implementation of Horizon doesn't have SSL certificate support, and we chose to do an HTTPS application. So let me finish configuring the VIP, and then I'll show how we enable certificates for this application. Sounds good. OK, I've got everything. Right. OK. So, as you were asking about SSL: the LBaaS v1 API didn't have any support for certificates. LBaaS v2 has it, but the current Horizon reference implementation doesn't support it. So we implemented a patch for Horizon so that we can support certificates. All this Certificates tab does is let you upload certificates via Horizon. As a user, I don't need to go to any other UI or do anything else; I can go to this Certificates tab and add my own certificate, so that it gets used by the load balancer to terminate the SSL handshakes. Sounds good. Now, by the way, this extension to Horizon is something we've made available in our public repo, and we'll upstream it as well. We believe it's very important to have end-to-end SSL with LBaaS, and that's what we've done. So Praveen is uploading the key and certificate for the load balancer. OK, so there, I've added the certificate. And one other thing this extra package does is add one more link in the Load Balancers tab so that we can associate a certificate with the... With the VIP. The VIP that we just created. Sounds good. OK. So we have a full end-to-end SSL application, all created through the Horizon UI, with a custom certificate that I want to use for my application. And this is possible with the Avi solution. OK, so now this has the... OK, let's see. Looks like we've hit a problem. That's fine, let's continue anyway; it's most likely a Horizon error. Let's see. So we created the VIP through the Horizon dashboard.
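The two extra operations the Horizon patch adds, uploading a certificate and associating it with a VIP, could be sketched like this. The endpoint shapes and field names below are hypothetical illustrations; the talk doesn't specify the extension's actual API:

```python
# Hypothetical sketch of the two calls the Horizon certificate extension makes:
# (1) upload a PEM key/certificate pair from the Certificates tab,
# (2) associate that certificate with an existing VIP.
# Field and endpoint names are illustrative, not the real extension's API.

def upload_certificate(name, cert_pem, key_pem):
    """Body for the certificate-upload call from the Certificates tab."""
    return {"certificate": {"name": name,
                            "certificate": cert_pem,
                            "private_key": key_pem}}

def associate_certificate(vip_id, cert_name):
    """Body for the extra 'associate certificate with VIP' link."""
    return {"vip_id": vip_id, "certificate": cert_name}

cert = upload_certificate("demo-cert",
                          "-----BEGIN CERTIFICATE-----\n...",
                          "-----BEGIN PRIVATE KEY-----\n...")
assoc = associate_certificate("vip-1", "demo-cert")
```

The point of the patch is exactly this: the user never leaves Horizon to get a custom certificate terminating SSL on the load balancer.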
Can we actually see in action what the Avi Controller did at this point? So at this point, the Avi Controller has created the virtual IP; all the LBaaS API calls have been sent to it. Let me go back to the picture. As you see on the left-hand side, all the LBaaS API calls that are made, the LBaaS configuration, get sent to the Avi Controller. The Avi Controller then talks Nova and Neutron APIs to bring up new service engine instances to implement the load balancer instance. Got it. That shows the service engines and the network topology. Yes, you can see it in this view: we automatically spin up new service engines, right there, and that's the one that's going to implement the load balancing service. Got it. And we create these service engines automatically, on demand, as needed, based on how much throughput you have or how much CPU is being utilized. And if you're creating a lot of virtual services, then you will get a lot more service engines. Sounds good. And these are all configurable. You can place limits in Avi; you can set these limits per tenant. And even inside a tenant, you can create multiple categories of service engine groups and say, these are my gold applications, these are platinum, and set different levels of limits for them. Can we run some traffic and see what's going on? And maybe we can also show the Avi UI, for people who aren't familiar with it, to see what it looks like. Okay, let me switch to the Avi UI. It's over here. Do I have to switch to a tenant? Oh, yes, you need to go to the right tenant, because Avi inherits its multi-tenancy from Keystone. Whichever tenant you created the service engines and VIP in, that's the tenant where this VIP will be available. So we're in the right tenant, and here we have the VIP created. Yes.
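The on-demand scale-out decision described above can be approximated with a toy policy. This is a minimal sketch, not Avi's actual algorithm: add a service engine when CPU crosses a threshold, size the group to carry the offered throughput, and never exceed the per-tenant (or per-SE-group) limit. The thresholds are assumptions for illustration:

```python
import math

# Minimal sketch (NOT the controller's real algorithm) of the scale-out
# decision: grow the service-engine group on high CPU or high throughput,
# but always respect the configured per-tenant / per-group limit.

def service_engines_needed(current, cpu_pct, gbps, max_se,
                           cpu_limit=80.0, gbps_per_se=1.0):
    """Return how many service engines the group should be running."""
    wanted = current
    if cpu_pct > cpu_limit:
        wanted = current + 1                       # CPU-driven scale-out
    # Throughput-driven: enough SEs to carry the offered load.
    wanted = max(wanted, math.ceil(gbps / gbps_per_se))
    # Clamp to [1, max_se], the admin-configured limit for this group.
    return min(max(wanted, 1), max_se)

# One overloaded SE triggers a second one; the tenant limit caps growth.
print(service_engines_needed(current=1, cpu_pct=92.0, gbps=0.4, max_se=4))
print(service_engines_needed(current=2, cpu_pct=30.0, gbps=3.5, max_se=4))
```

The "gold" vs. "platinum" SE groups mentioned above would simply be different `max_se` (and threshold) settings per group.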
So we have the VIP created, and it shows all the status information about the VIP, telling us that everything is healthy. Let's run some traffic. Okay. And then we'll go a little deeper into the Avi UI, into what this end-to-end timing diagram is, in just a minute. Okay. So I'm going to open a client and log into it. All right, something is getting cut off on the screen, but that's okay; let me pull it into the middle. Don't worry about that. Okay, let's continue with the traffic: curl -k, https, 192.168.100... Sorry. Probably I have... All right, how about we run some ongoing traffic? Because in addition to SSL, what we also want to show you is the auto-scaling capability, right? As the traffic goes up, we automatically spin up new service engines on demand so that we can handle more traffic. So let's show some ongoing traffic that triggers the creation of the next service engine. So you're looking for the VIP? Yeah. All right, let's see. We're going to create a new VIP. Okay, I see; I think the load balancer doesn't have it in the database. All right. Okay, so while we do that, a couple of key features of the Avi solution. One is auto-scale, which we're going to talk about. We saw that we did SSL. We also have full HA, active-active HA, which we're going to show: we kill one instance of the load balancer, and it's non-disruptive, the traffic continues. All right, let's go back while my Horizon comes back up. Yeah. How about we show some visibility while that comes up? A key feature of the Avi solution is end-to-end visibility. So while we bring Horizon back up, let me show you the end-to-end visibility. Let me go to the analytics page. All right, and let's make sure that the data loads. There you go. So as you can see here on the screen, what you see at the top is an end-to-end timing diagram.
What that tells you is the latency breakdown between the end clients, like your mobile phone or your laptop, and the load balancer; between the load balancer and the backend pool server that's processing the request; and the application processing time itself. So without any agents, without any SPAN ports, we're able to give you the end-to-end breakdown of latency instantly, in real time, as well as on an average basis. So for example, in this case, the client latency is 34 milliseconds round trip. The server RTT is minimal; let me get to another data point... here, right here. The server latency in this case, since it's on the same host, is two milliseconds, and often even less. The application processing time is about 40 milliseconds. So we differentiate between what the network takes versus what the application takes, plus the data transfer time, for a total round-trip latency of 80 milliseconds. This is something unique in the industry that only we provide, and as you migrate applications to OpenStack, this is a key feature you need. Additionally, we have a big data analytics engine that captures all the HTTP logs. So for example, I'm going to configure not just significant logs, but what we call non-significant logs as well. As you can see here, color-coded, we have a bunch of HTTP transactions, and I can break one of them open and show you that this particular transaction came from, in this case, an iPad running Mobile Safari on iOS, from the US, with a particular IP address. It took about 40 milliseconds to hit the load balancer. Then it hit the backend server, at a particular IP address, in only one millisecond. It was looking for a PNG file; this is the URL it was requesting. The response was probably cached, so it came back quickly, and the data transfer time was less than a millisecond.
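Since the load balancer sits inline on every request, it can timestamp each segment and present the breakdown directly. A toy version of that arithmetic, using numbers roughly like the ones in the demo, might look like this (the segment names are my labels, not the UI's exact terms):

```python
# Toy sketch of the end-to-end timing diagram: total user-perceived latency
# is the sum of segments an inline proxy can measure per request.
# Segment names and sample values are illustrative.

def e2e_breakdown(client_rtt_ms, server_rtt_ms, app_ms, transfer_ms):
    """Return the per-segment latency breakdown plus the end-to-end total."""
    total = client_rtt_ms + server_rtt_ms + app_ms + transfer_ms
    return {"client_rtt": client_rtt_ms,     # client <-> load balancer
            "server_rtt": server_rtt_ms,     # load balancer <-> backend
            "app_response": app_ms,          # backend processing time
            "data_transfer": transfer_ms,    # response body transfer
            "total": total}

# Roughly the demo's data point: 34 ms client RTT, ~2 ms server RTT,
# ~40 ms application processing, ~1 ms data transfer.
timing = e2e_breakdown(client_rtt_ms=34, server_rtt_ms=2, app_ms=40, transfer_ms=1)
```

The value of the breakdown is diagnostic: it separates "the network is slow" (client or server RTT) from "the application is slow" (app response time) without touching the backend.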
So I can do needle-in-the-haystack analytics just by being the load balancer. I can do other cool things. For instance, I can see a bunch of 404s here. So I click on a 404... let me click again properly. And in this case, let me just search for 404. It's going to show me anything that matches the 404 error code. So I have a bunch of these 404 error codes, and I can ask things like: where are these 404s coming from? What locations are these 404 transactions coming from? And I can see which client locations are generating these requests. I can ask things like, who are my iOS or iPhone clients? I can just search for iPhone, and it shows me only the iPhone transactions as they come in. So for example, if I open this one up, this particular person came from Mexico on an iPhone. So just by being an inline load balancer, we're able to give you this type of analytics. Furthermore, we use these analytics to drive load balancing. For example, we measure CPU utilization, latency, throughput, and so on, and based on that, we can automatically scale out. So let me show those autoscale capabilities in action. Let me go to a dashboard and open up a VIP. In this case, for the purposes of the demo, so that I can do it easily, I'm going to show this on a manual basis. If I open up my dashboard, I can see that I have one VM up there providing my load balancing capability. So there's only a single VM, running about 380 megabits per second of traffic. I'm going to scale out; I just click the button called Scale Out. At this point, we're creating a new service engine, a new VM, putting it in the right network, and load-balancing the load balancer itself. And almost instantly you'll see that the second VM has come up as part of the load balancing infrastructure, and the traffic starts shooting up. For the purposes of the demo, I showed this manually.
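The log searches in the demo (all 404s, only iPhone clients) are essentially filters over captured per-transaction records. A toy version, with made-up records standing in for the real log store:

```python
# Toy version of the demo's log search: filter captured HTTP transaction
# records by status code or client device. Records below are invented
# stand-ins for the analytics engine's real log store.

logs = [
    {"status": 200, "device": "iPad",   "country": "US",     "uri": "/img/logo.png"},
    {"status": 404, "device": "iPhone", "country": "Mexico", "uri": "/old/page"},
    {"status": 404, "device": "Chrome", "country": "US",     "uri": "/favicon.ico"},
]

def search(records, **criteria):
    """Return records matching every given field, like the UI's search box."""
    return [r for r in records
            if all(r.get(k) == v for k, v in criteria.items())]

not_found = search(logs, status=404)        # every 404 transaction
iphones   = search(logs, device="iPhone")   # only iPhone clients
```

Grouping the 404 results by `country` would give the "what locations are these coming from" view shown in the UI.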
You can tie this to CPU utilization, the number of connections or requests, throughput, whatever you want. Additionally, you can scale out not just the load balancer itself, but the backend application as well. We have retail customers who deal with Black Friday or Cyber Monday kinds of environments. They can trigger an OpenStack Heat resource request, or any other mechanism in another environment, to scale out the application VMs. And when the traffic goes down, you can scale your infrastructure back in just as quickly. All right, any questions so far? We have about three minutes to go. Any questions on what you've seen, or what you'd like to see? We have more demos at our booth. And what we didn't talk about much is multi-tenancy, but here is a slide we can show you. So Praveen, in this diagram, what is the purpose of showing the controller in one tenant and the service engines in another tenant? Yes. So this is a typical deployment scenario for enterprises and cloud service providers. As Ashish mentioned, in our system everything runs as software, as virtual machines. Our controller is a virtual machine that you, as an admin, typically deploy as a regular VM in one of the service tenants. And whenever a user comes along and instantiates a load balancer, then the service engines, the SEs as we showed in the Horizon picture, get spun up in the user tenant's context. Again, this is one deployment model, because these are all regular VMs. We have several deployment models; an admin can choose to have all the service engines spun up in a separate service tenant so that nothing is visible to the users in the regular tenants. Now, here we're showing that the SEs get spun up with each tenant's SEs in their own context. So we provide very strong tenant isolation: all the traffic belonging to one tenant is not shared with another tenant's traffic, if you want it that way.
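The backend-application scaling described above (grow for Black Friday traffic, shrink when it subsides) can be sketched as a simple policy with a dead band so the pool doesn't flap. The thresholds here are assumptions for illustration; in practice the resulting count would feed something like a Heat scaling request:

```python
# Sketch of backend auto-scaling with simple hysteresis (illustrative
# thresholds, not product defaults): scale the application pool out when
# requests/sec per backend is high, back in when traffic drops off.

def desired_backends(current, rps, high=1000, low=300, min_n=2, max_n=16):
    """Requests/sec per backend drives pool size, with a dead band between
    `low` and `high` so small fluctuations don't cause flapping."""
    per_server = rps / current
    if per_server > high:
        current += 1            # Black Friday: add a backend VM
    elif per_server < low and current > min_n:
        current -= 1            # traffic fell off: scale back in
    return max(min_n, min(current, max_n))

# 2400 rps on 2 backends = 1200 rps each -> grow to 3.
# 800 rps on 4 backends = 200 rps each -> shrink to 3.
print(desired_backends(current=2, rps=2400))
print(desired_backends(current=4, rps=800))
```

The dead band (300 to 1000 requests/sec per backend here) is the design choice worth noting: without it, a load hovering near a single threshold would scale out and back in on every evaluation.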
We have other modes where you can allow that sharing, too. And the other important aspect here is automation. A common pain point we hear from our customers is that the app admin doesn't want to file a ticket just to create a VIP. So in this case, the app admin has access control on the controller to spin up their own service engines or create their own VIPs, which get created in their own tenant. So the network admin is happy, the app admin is happy, and we have a fully automated next-gen load balancer. Any questions from the audience? We have some cool giveaways; if you want to move to the next slide, Praveen. We're giving away drones at 2 o'clock, so stop by. We have some BADaaS stickers as well. But any more questions at this point? No? Either it was very clear, which is great, or you're in awe of the solution, or you're still thinking. If you have any questions, again, come by booth T9; we're right behind there. And we have some interesting talks with our partners today and tomorrow, and some cool giveaways. More information at avinetworks.com. We have a free trial, by the way, before I forget. You can download our software from our website and try it out, full featured, in your own environment, or you can run it in a sandbox. So try it out at avinetworks.com. Thank you.