So, everyone, we're going to get started now. We have Sorna and Bhushan from Avi Networks presenting, so let's give them a round of applause. Take it away. Thank you very much. First of all, thank you to the Contrail team for inviting us here. I'm Sorna Padila. I head the product marketing team at Avi Networks, and Bhushan here is part of the technical marketing team at Avi Networks. We'll quickly walk through how we deliver web-scale elasticity and how you can accomplish automation in an OpenStack deployment with our OpenContrail integration. I also have Ankit from the Contrail team here if there are any more questions. First, a quick check: how many of you know what Avi Networks does? Wow, I'm impressed. Thank you. That's more than half of the audience. Awesome. So I'm good to go, right? You all know about Avi Networks and I can leave. I'm kidding. Your familiarity with Avi does make this easier: I can move quickly through my slides and hand it over to Bhushan for the detailed demo. For those few of you not familiar with Avi Networks: we are a software load balancing company, and the reason we start with software load balancing is that web-scale has already arrived for compute. When you look at the storage and compute layers, they're elastic. You deploy the bits on any off-the-shelf servers, and you can reallocate and readjust capacity and utilization based on demand, on the fly. You use commodity servers, which brings your infrastructure investment down. You manage it as one, because a centralized controller takes care of all of these resources, and it's highly automated. You get the telemetry, the visibility into what's going on, so you can automate and readjust your resources. It's elastic.
However, when it comes to networking, there are single-purpose appliances where the control plane and the data plane are fused together. It's not elastic; it's rigid. You configure and provision, you rack an appliance, and it does load balancing and nothing beyond that. And it's kind of a black box: data goes in, data comes out, but you have no visibility into what goes on underneath. So instead of having HA pairs in each environment, whether private cloud, public cloud, or a container environment, and instead of going with the fused approach, we separated the management plane from the data plane. We went with the airport analogy, where the airplanes become the data plane. They come in, they go out; that's all they do. The pilot just follows the instructions from the air traffic controller, lands, takes off, done. The control plane is centralized, like the air traffic controller that sits off path, away from the runways, and tells the pilots exactly at what height and speed to start descending, which gate to arrive at, which runway to take. And if there is already a plane on the runway, the controller knows that and diverts the incoming plane to a different gate. That's the intelligence ATC gives the pilot, so the pilot can focus only on bringing people in, like bringing packets in and out. So the controller sits off path and has the cycles to munch through the telemetry it gets from the load balancers, and then deploys these service engines on bare metal, in private clouds like OpenStack or VMware environments, in container environments like OpenShift, Kubernetes, Docker, or Mesos clusters, or in public clouds like AWS, GCP, and Azure.
And because we have visibility into what's going on, we can also integrate with the rest of the management and orchestration layer to make it even more extensible and programmable, so you can deploy applications and resources automatically. And because we have that visibility, you get insights: if an application is slowing down, if there is high latency, what exactly is going wrong. When there's high latency, I get a notification; when a web server goes down, your admin gets an alert, so they can take quick, immediate remediation steps rather than waiting for someone to point out that something isn't working. And as opposed to physical appliances that can only scale up, which means buying more appliances, we are software: you can scale up to a bigger service engine, which is our load balancer, scale out by adding more service engines, or do a combination of both as your traffic increases on demand. And when your traffic recedes, you can scale back in and free up those resources for other purposes. That's Avi's architecture. Now, how we integrate with OpenContrail: if you have an OpenStack deployment, at the configuration stage the Avi Controller works with the Neutron server. It takes the credentials, like the Keystone credentials, and passes them on through the Neutron server. The Contrail client works with the Neutron server and receives every update or change that passes through it. And once you define the provider to be Avi Networks, the service monitor has the intelligence to direct load balancing to Avi Networks as your load balancer provider.
And how do we automate it? We integrate with multiple projects; I know this is an eye chart: Horizon, Nova, Keystone, Neutron, and LBaaS. We have an LBaaS plugin, and we support v1 and v2. You provide the Keystone credentials and the location of the Contrail server, and you can use either LBaaS or the Avi API to set up the configuration. After that, the Avi Controller takes care of the automated deployment. We use Keystone to set up the user profiles. We spin up the service engine VMs, which are the load-balancing VMs, as Nova instances. At creation time, we know where to place the load-balancing VM, and we place it in the appropriate network by talking to Neutron and getting the network information from it. We also allocate the VIP using the Neutron API, bind the VIP as a secondary IP, and associate the floating IP with the VIP. All of these processes, which could otherwise be manual, including auto-scaling the load-balancing fabric, like spinning up additional service engines as traffic grows, are completely automated. None of these steps has to be done by hand. And because we also get the analytics, as I mentioned earlier, we have integrated that into the Horizon UI. You can either add it as a tab in your Horizon dashboard or pull the entire Avi dashboard into Horizon itself. These are the supported configurations. I know the last line is probably not visible at the back, but it says we support OpenContrail and Juniper Contrail 3.x and above, and pretty much everything from an OpenStack perspective. Just to give you a quick sense of how easy it is: this is a snapshot of the Avi dashboard. There is just one checkbox there that says integration with Contrail.
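The provisioning sequence just described (Keystone auth, Nova service engine, Neutron VIP and floating IP, pool membership) can be sketched as plain code. This is a minimal illustration of the order of operations the Avi Controller automates; none of the function or field names here are Avi's real API.

```python
# Hypothetical sketch of the automated provisioning sequence. The step
# names are illustrative only, not Avi's or OpenStack's actual APIs.

def provision_virtual_service(keystone_creds, vip_network, pool_members):
    steps = []

    # 1. Authenticate against Keystone with the tenant credentials.
    steps.append(("keystone", "authenticate", keystone_creds["tenant"]))

    # 2. Spin up a service engine (the load-balancing VM) as a Nova instance.
    steps.append(("nova", "boot_service_engine", vip_network))

    # 3. Ask Neutron for the VIP's network and allocate the VIP.
    steps.append(("neutron", "allocate_vip", vip_network))

    # 4. Bind the VIP as a secondary IP on the service engine interface.
    steps.append(("neutron", "bind_secondary_ip", vip_network))

    # 5. Associate a floating IP with the VIP for external reachability.
    steps.append(("neutron", "associate_floating_ip", vip_network))

    # 6. Add the back-end servers to the load-balancing pool.
    for member in pool_members:
        steps.append(("avi", "add_pool_member", member))

    return steps

plan = provision_virtual_service(
    {"tenant": "demo", "user": "admin", "password": "secret"},
    "client-net",
    ["10.0.0.11", "10.0.0.12"],
)
```

The point of the sketch is only the ordering: the controller drives all six steps itself, so none of them is a manual ticket to the networking team.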
Checking that box takes you down the path of setting up the Contrail server as well. You just give the location of the Contrail server, and Avi talks to it on the back end. Here are a couple of screenshots, though I think Bhushan will show this in his demo too. You can bring just the analytics into your Horizon dashboard. This is the Avi analytics view, which shows the per-hop latency: when your client hits the load balancer, when the load balancer sends the request to the web server, the web server to the app server, and then to the database. How fast is the data being fetched? It gives you the per-hop latency, and at the bottom it gives granular, per-transaction details for each and every log we collect, whether significant or not. And this is the other option, where you have the entire Avi dashboard inside Horizon itself, as opposed to having just the analytics as a tab. So there are two ways of deploying Avi. And with that, I will stop and have Bhushan show the demo. Any questions? Sorry, before I hand over to Bhushan. Yes? That will be Bhushan. Can you repeat the question? I'm not sure about the scale numbers offhand; I'm sure we have them in our knowledge base. Let me get back to you with that answer. OK, moving on to the demo. So as we saw in the presentation, you can either run directly with our LBaaS plugin and configure load balancing using normal LBaaS, or you can have the whole Avi dashboard right in the Horizon UI, like you see here, so that people can use Avi's API through the UI. Everything we do in the UI and CLI is backed by REST APIs, so users can use the Avi API directly to configure load balancing with much richer features than they get with the normal LBaaS APIs. And what you see here is the same either way.
Even if you configure something using the LBaaS APIs, we map it to Avi objects. So let's go ahead and see how easy it is to create a new virtual service, or VIP. Give the virtual service a name: Test-VS. We can auto-allocate IP addresses from the networks you have, so I'm placing the VIP in the client network. I can also assign a floating IP really easily. For the servers, we can enter IP addresses directly, or select them from a given network and add them to the load-balancing pool, then choose what type of application you want (HTTP or HTTPS, and what kind of certificate) and just save. Within seconds, the new application is up. Avi is split into two parts: the Avi Controller and the service engine. The service engine does the actual load balancing; it handles the data path. If a service engine is already available, the only thing the Avi Controller has to do is push the configuration and connect the service engine to the correct networks based on the VIP and the back-end pool members, and the service is up. If not, it will deploy a new service engine, connect it, and bring the application up. That was a brand-new application, but let me move to another application I already have running here, which also has a lot of traffic on it. As you can see, we can show the complete end-to-end time, not just in real time but also over past historical data. How much history you can show depends on the size of the disk you allocate to the service engines. We give a summary of what the client latency looks like, what the server RTT is, what kind of app response you're getting, and what the data transfer is, not just as a summary view but at each and every point in time on the graph. If you want to dig deeper into how your application is performing, you can switch to logs. We classify the logs into two parts, significant and non-significant.
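As an aside, the create-virtual-service flow just demoed (name, VIP network, pool servers, application type, optional floating IP) could translate to a request body along these lines. The field names are invented for illustration and are not Avi's actual REST schema.

```python
# Illustrative sketch of a "create virtual service" request body.
# Field names are made up; Avi's real API schema differs.

def build_virtual_service(name, vip_network, servers, app_type="HTTP",
                          floating_ip=False):
    if app_type not in ("HTTP", "HTTPS"):
        raise ValueError("unsupported application type")
    return {
        "name": name,
        "vip": {
            "network": vip_network,      # VIP auto-allocated from this network
            "auto_allocate": True,
            "floating_ip": floating_ip,  # optional external reachability
        },
        "pool": {"members": [{"ip": ip, "port": 80} for ip in servers]},
        "application_profile": app_type,
    }

vs = build_virtual_service("Test-VS", "client-net",
                           ["10.0.0.11", "10.0.0.12"], floating_ip=True)
```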
The significant ones are those that end in 404s or other errors, or take much longer than expected to complete. The non-significant ones are those that end in 200s, without any errors. You can explore even further. For example, if I expand one of these logs, we see that the request is coming in from Mexico, what OS and browser the client is using, the certificate type, the TLS version, which load balancer in your setup the traffic hit, which back-end server pool it went to, and what the URL was. There's a lot of information here. Everything you see in blue can be filtered, in a Google-search kind of way. For example, if you want to find all the 404s on your application, with just one click you get all the transactions that ended in a 404. Beyond that, we can do SSL termination and SSL re-encryption, so you have end-to-end SSL. We're really feature-rich when it comes to layer 7 features like content switching and redirection rules. And on the analytics side, we can show you a really good overview of what kinds of certificates and TLS versions your clients are using, how well your application is performing, whether your application is facing attacks, and what kind of attacks they are. For example, here my application is seeing some attacks, and using the layer 3 policies, we can block any of those attacks with just one click, right then and there. That was all I had for the demo. If you have any questions about Avi, feel free to ask. All right. So we talked a little bit about spinning up the Avi instances, whether the control plane or the data plane, and it sounds like they're virtual machines, right? Is that correct? Are there any plans for containerization in the future? We already support containerization.
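The significant/non-significant split and the one-click 404 filter described above amount to a simple classification rule. This sketch assumes each log entry is a dict with a status code and a duration; the field names and the slow-request threshold are made up for illustration.

```python
# Sketch of the log classification described in the demo: errors (4xx/5xx)
# or unusually slow requests are "significant"; clean 200s are not.
# Entry field names and the threshold are illustrative assumptions.

def is_significant(entry, slow_threshold_ms=2000):
    return entry["status"] >= 400 or entry["duration_ms"] > slow_threshold_ms

def filter_logs(entries, status=None):
    # "One click" style filter, e.g. all transactions that ended in a 404.
    result = [e for e in entries if is_significant(e)]
    if status is not None:
        result = [e for e in result if e["status"] == status]
    return result

logs = [
    {"status": 200, "duration_ms": 40, "uri": "/"},
    {"status": 404, "duration_ms": 12, "uri": "/old-page"},
    {"status": 200, "duration_ms": 5000, "uri": "/slow-report"},
]
not_found = filter_logs(logs, status=404)
```

Note that a slow 200 still counts as significant, which matches the talk's "take longer than expected" criterion.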
We run as a Docker container in OpenShift, Kubernetes, or Mesos environments. We do have that support. Did you want to show the scale-out demo? Scaling up, yes, how to set up autoscale and those kinds of things. OK, sure. So we also have the intelligence to autoscale an application based on CPU utilization. Let me move to the tenant view. Here in the tenant view, you can see the list of tenants to switch between, which shows that you can provide self-service provisioning for your application teams. When your application team is ready to move from local development to testing or production, they can provision their own load balancer instead of waiting on the networking team. You have these agile methodologies for app development, but once you get to app deployment, you slow down; it's like rushing fast only to be slowed down by the networking team. Instead, you can provision self-service load-balancing slivers so teams can provision their own load balancing by the time they're ready to deploy their application. And this is the autoscale? Yes. In Avi we have a concept called a service engine group, which is a logical grouping of service engines based on common properties. In it, you set the HA mode you want. We support a normal active/standby mode. Then there is an active/active mode, in which you can have two or more service engines running active; it need not be just two. Then there is N+M buffer mode, which I would say is the most preferable, because in that mode we have N service engines running active and M buffered. The buffered service engines constantly sync the connection states.
But they don't take part in the traffic. As soon as one of the active service engines goes down, a buffer service engine automatically takes over the traffic. While scaling out in active/active or N+M buffer mode, we can use a feature called auto-rebalance, in which we program a CPU threshold that the Avi Controller watches. If the CPU of a service engine goes above that threshold, it scales the application out to more service engines. If the traffic goes down and CPU utilization decreases, it scales back in. And if it sees that some service engines are just lying around handling no traffic, it will delete those VMs so you save on infrastructure resources. Apart from automatic scale-out and scale-in, you can also manage things per application: you can scale out manually, and if you see that a current service engine has a problem, you can migrate the application to another service engine. Pretty easy; everything in the UI is a few clicks away. Any more questions? Yes. If you have a question, can you say it into the mic? Is there anything special here with OpenContrail versus a standard OpenStack Neutron integration? The part where we integrate with OpenContrail is in providing automation in OpenStack: whenever you configure a new virtual service, we have the responsibility of configuring the service engine and connecting it to the proper NICs. That's where the OpenContrail integration comes in. We read data from OpenContrail so we know which overlay networks to connect to, and how to program the floating IPs to the VIPs. That's where we talk to OpenContrail. And yes, I would say it's an added benefit: you get extra load-balancing features from Avi with OpenContrail that you won't get if you go with normal LBaaS running on OpenContrail.
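The auto-rebalance policy described above (scale out above a CPU threshold, scale back in when traffic recedes, reclaim idle VMs) can be sketched as a pure decision function. The thresholds and return values here are illustrative assumptions, not Avi's real defaults or API.

```python
# Sketch of the CPU-threshold auto-rebalance logic described in the talk.
# Thresholds, names, and return shape are invented for illustration.

def rebalance(cpu_by_se, scale_out_at=80, scale_in_at=20, min_active=1):
    """Return ('scale_out' | 'scale_in' | 'steady', list of idle SEs)."""
    idle = [se for se, cpu in cpu_by_se.items() if cpu == 0]
    busy = {se: cpu for se, cpu in cpu_by_se.items() if cpu > 0}

    # Any hot service engine pushes us to scale the application out.
    if busy and max(busy.values()) > scale_out_at:
        return "scale_out", idle

    # Traffic receded everywhere: scale back in, keeping a minimum active.
    if len(busy) > min_active and all(c < scale_in_at for c in busy.values()):
        return "scale_in", idle

    return "steady", idle

# se-1 is hot, so we scale out; se-3 is idle and could be reclaimed.
action, reclaimable = rebalance({"se-1": 92, "se-2": 35, "se-3": 0})
```

The idle list models the "lying around handling no traffic" case: those VMs are candidates for deletion to free infrastructure resources.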
So the integration has to be supported through OpenStack Neutron, right? Say you have OpenStack automating LBaaS using Neutron, and you decide to use OpenContrail as the SDN under Neutron, and you use Avi over HAProxy because of the obvious differences; you're still sitting on OpenStack Neutron. I think the question is: can we pick and choose? Absolutely. You can do one without the other and vice versa. Some of the things you get when you integrate them: your IP address management, for example, is simplified. You get unified visibility into all your flows, because Avi provides that at layer 4 and above, and Contrail gives it to you from below. It's kind of a split world, but when you put them together, you can look to one central source for all your traffic flows and management from a network perspective, and hand the application concerns off to the application team. So there's a nice separation of concerns that does help. There's also ECMP on top, which is nice if you want to scale out one IP address across many instances; that is something pretty unique to Contrail. The ECMP layer basically works with whatever your gateways are configured for, so you're probably looking at a standard five-tuple hash there. But where you want something like an Avi controller or HAProxy, depending on your level of sophistication, is when you want a graceful drain of a node, for example, or a specific health check that says this endpoint is returning a 200 and I will take action if I get something else. ECMP is blind; it just fire-hoses traffic at you, whereas a load balancer gives you much finer-grained control and application understanding to help you manage that. Especially for things like blue-green deploys, that's really valuable.
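The "blind" five-tuple hashing mentioned a moment ago can be sketched generically: the next hop is chosen purely by a hash of the flow's five-tuple, with no application awareness. Real gateways use vendor-specific hash functions; this is only a stand-in to show why ECMP can't do health checks or graceful drains by itself.

```python
# Generic sketch of ECMP next-hop selection by five-tuple hash.
# The hash function and field encoding are illustrative, not any
# particular gateway's implementation.
import hashlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, next_hops):
    five_tuple = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(five_tuple).digest()
    # Same flow always hashes to the same next hop; no health awareness.
    index = int.from_bytes(digest[:4], "big") % len(next_hops)
    return next_hops[index]

hops = ["se-1", "se-2", "se-3"]
choice = ecmp_next_hop("10.0.0.5", "203.0.113.10", 51515, 443, "tcp", hops)
```

Because the selection is deterministic per flow and ignores back-end health, draining a node or reacting to non-200 responses needs the load balancer layer on top.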
So for any application deployment cycle, you do local development, then test and production setup, blue-green deployment, canary testing, and then move to production slowly and gracefully. And that can apply to applications, to load balancers, or to web servers. If you're trying to move from one web server to another, you can do a graceful migration, doing it slowly and testing it out, and because you have the visibility from the Avi load balancers, you know exactly where it's going wrong. For example, a living example of my own: I'm in product marketing, not an engineer who can go into the weeds. When we were migrating the Avi Networks website from one web server to another, we had to make sure it didn't go down, because we did this last week during Red Hat Summit when we had a press release going out. So we had to make sure the migration happened gracefully, and when there were 404 errors from random URLs I didn't even know existed, because those landing pages were set up before I joined Avi more than a year and a half ago, I saw from the dashboard that there were 404s. I looked them up: these are all the 404 errors, coming from these clients, these random requests coming in. I could immediately set up a redirect from the Avi console: if a request comes in for this URL, redirect the traffic to that URL. It was quick and easy, even for someone in product marketing. Yes, Contrail. Right, so if you go with Contrail for your layer 2 and 3 network automation, we extend that network and application automation through L4 to L7, so you get complete network automation from layer 2 through layer 7. Yes, we support DPDK. Any other questions? Anything else you wanted to add? OK. So we are at the marketplace, at Booth 821.
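The 404-redirect fix described in that migration story boils down to a small rewrite policy: known-dead URLs get a redirect before the request ever reaches the pool. The rule table and return shape here are invented for illustration, not Avi's actual policy syntax.

```python
# Sketch of a "redirect the 404s" policy like the one set up from the
# console in the story above. Paths and rule format are hypothetical.

REDIRECTS = {
    "/old-landing-page": "/products/overview",
    "/2016-webinar": "/resources/webinars",
}

def apply_redirect(path):
    # Return (status, location): a 302 redirect if a rule matches,
    # otherwise pass the request through unchanged.
    if path in REDIRECTS:
        return 302, REDIRECTS[path]
    return None, path

status, location = apply_redirect("/old-landing-page")
```

Each stale URL spotted in the 404 logs becomes one entry in the table, which is why the fix could be applied "quick and easy" as the errors appeared.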
If you're coming out from the Sheraton, take the other exit; we are on the left, so walk by the booth. If you have any detailed questions on how Avi works, or on further extending the integration with Contrail, we can show you, and not just with OpenStack: even outside of OpenStack, if you have an on-prem deployment, we can show you how Avi works there. And we have cool t-shirts. We also have an acronym that goes something like "beyond application delivery as a service." I will let you figure out how to say it. Load balancers are traditionally called application delivery controllers, but we go beyond application delivery, so we came up with that acronym. We have the t-shirts, and we also have fidget spinners if you're interested in fidget spinners for the kids. Come over to the booth and we can show you more demos if you want. All right, thank you, everyone.