All right, everybody. We're going to be making a presentation on Avi Networks. It starts in about two minutes; it's a demo and architectural overview. All right, let's get started.

So intent-based application services, what are they? It's load balancing. It's application delivery. It's what you're doing with your F5s, your NetScalers today, maybe even your HAProxy. My name is Josh Gross. I work for Avi Networks. I work with extremely large companies in the Bay Area, helping them get off hardware-based appliances and deliver application services faster via APIs and analytics. Today we're going to go a little bit into the architecture and the problems we solve, but more importantly, we're going to jump into a demo.

All right, so what is it like being an operator today? Well, you're all doing it. It's really damn hard. You've got to do architecture. You've got to make things like OpenStack actually work. And you've got to deliver value to your customers by streamlining the application release process. When you think about all the bottlenecks in front of you to accomplish that, it's pretty hard. Think about what you're doing today with your OpenStack environment: you may have an F5 appliance sitting in front, or HAProxy behind your LBaaS driver. To get an application out, you can spin up a VM in no time, and you've got all the networking, the overlay, set up. But then you have to put in a ticket to actually get a VIP created. And once that VIP is created, you have no feedback on how your application is actually doing. That is a problem. Or when there's a CVE, a security vulnerability, how do you patch it without being disruptive? It's really hard when you're stuck with hardware appliances or config files, like with HAProxy.
Additionally, right now when you create your OpenStack environment, you build it and you say, I'm going to build this, and customers are just going to onboard onto it; these application teams are going to love this beautiful thing I built. The problem is, one, it's not as easy as you think, because there are all these bottlenecks in the way. Two, you're not providing them any visibility into how their applications are performing. And three, you're going to management asking for a budget tied to a hardware appliance that's supposed to satisfy you for five years. It just doesn't make sense anymore.

So how does Avi solve that? Well, businesses today are all centered around applications: being able to deploy applications very quickly, being able to modify them, being able to give feedback to your application teams. The problem is you've got the business side talking about things like digital transformation: how do we turn all the data we have about our customers, everything we understand about the market, into applications that deliver value? Agility: how can we be more competitive because we can get applications out faster, so we can pivot, so we can change our business model? And how do we do that in a predictable cost model?

Now on the right side, we've got the problems all of your application teams are facing. They're coming to you and saying, hey, I want to be able to deploy things faster; I'm just going to go to the public cloud. I don't have time for tickets. I don't care where you put this. I don't care about the rack. I don't care about the time it takes to ship these hardware appliances. They want to be able to just put their code into a repo and see it in production minutes later. And of course, every application they develop is going to be container-centric. They're always asking, why wouldn't I be developing with containers?
Then I don't have to worry about the underlying OS or the underlying hardware it sits on. Now as an operator, you're still asked to deliver the same availability, security, and visibility you've always been asked to deliver, whether it was a mainframe, a three-tier app, or now these cloud-native apps running on Kubernetes. That's really damn hard, because what you're thinking is: okay, I've got these things that are very predictable. I understand all the protocols. They're very mature. I know how they function and I know what to do when something goes wrong. And now you have app teams asking you to deal with ephemeral workloads, where you're inundated with billions of logs, and how do I make sense of what's going wrong?

So what Avi provides is a single, intent-driven application delivery services platform. What does that actually mean? Well, let's start with the operator. You yourself are declaring: what policies do I want across all my applications, regardless of where they sit? What kind of role-based access do my customers have? What kind of self-service do I want to provide? What APIs do I want to expose? What dashboards do I want to create? And your users just put their code in and are able to deploy their applications wherever there are available resources. So now it doesn't matter if you're in OpenStack, in VMware, on bare metal, or even in the public cloud.

That is hugely advantageous, because when you think about the operational model of building an OpenStack cloud, the first thing you do is ask: what do I want to bring with me from my existing infrastructure? What's working today? So that I don't have to change policy, I don't have to learn anything new.
The problem is it never fits with what you're trying to build today, whether that's in the public cloud or in a new virtualized environment. So then you go the open-source route. Great, we understand what it is, we can read the code, but now you have to do all the modifications to make it actually work. So then you go for something like a third-party commercial offering. That's where Avi comes in. For an operator, we deliver a consistent experience across all these platforms.

So how are we really different? Well, you're probably familiar with load balancing or proxy services. Essentially, it sits in between your clients and the applications they want to access, and it ensures security, performance, availability, and reliability. Now, in a traditional model, with your F5s or your NetScalers, even your HAProxy today, you've got a high-availability pair. So you're already paying double for the capacity you think you might need, because you have one active, one standby. Then each of these HA pairs is its own snowflake; you're managing them each separately. Plus they're confined to hardware. You have six cores in a box: how much compute can you get? How much storage? How many logs? How much analytics? And how do you aggregate all that? It just doesn't work, especially as you think about software-defined and how that mobilizes your infrastructure to wherever it needs to be.

So what we've done is break apart what you think of as these monolithic appliances, like an F5, the same way you've seen it happen everywhere else in the stack. Around 2000, you had virtualization, separating the OS from the compute layer. Then around 2009, you've got Nicira coming up with software-defined at the networking layer, overlay networks: how do we separate that out so we can make it scalable and distributed? Then you've got software-defined storage coming up over the last five years.
But for some reason, no one had thought to do it at the ADC layer, layers 4 through 7. So that's what we've done. Now all of a sudden, you have an extremely scalable, highly resilient control plane that handles all the analytics. It's a big database that also handles all the API calls, and everything is based off a REST API; we're not just gluing on APIs like traditional vendors have to do. And then you've got centralized policy regardless of where these load balancers are deployed. So now you've got load balancers deployed on x86 cores, and they can be in the public cloud, on bare metal, virtualized, or in your Kubernetes clusters.

The biggest difference, though, because every vendor is going to say, hey, we can go faster, or we can work in all these environments, is that if you don't have an architecture built on an API, you can't listen to message buses. You can't see what's going on in those environments. And that's why companies like F5 are failing at moving into containers.

Now that you can deploy in an integrated fashion into your AWS, your Azure, your GCP, you can also turn a bare-metal server into a high-performance load balancer that would traditionally cost you 400K. Now you're spending $6,000 on a server and turning it into a box that can produce 80,000 TPS, in a true active-active fashion across the fabric. That makes it extremely resilient, mobile, and flexible enough to be deployed anywhere.

What this also allows us to do: I mentioned that big database component. You've got your controller cluster running a database that takes all the analytics, all the metrics produced from all the SSL transactions, and now you can actually automate off those. Without metrics, you cannot do automation. And that's the problem traditional vendors face.
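Since everything on the controller is an object behind that REST API, a virtual service is just a document you build and POST. Here's a minimal sketch of what that looks like; the field names and the endpoint in the comment are simplified illustrations of the object-model idea, not the controller's exact verified schema.

```python
# Sketch: build a virtual-service object for a controller-style REST API.
# Field names here are illustrative assumptions, not an exact vendor schema.

def build_virtualservice(name, vip, port, pool_servers):
    """Return a JSON-serializable virtual-service definition."""
    return {
        "name": name,
        "vip": [{"ip_address": {"addr": vip, "type": "V4"}}],
        "services": [{"port": port}],
        "pool": {
            "name": f"{name}-pool",
            # One entry per backend; the load balancer health-checks each.
            "servers": [{"ip": {"addr": s, "type": "V4"}} for s in pool_servers],
        },
    }

payload = build_virtualservice("web-vs", "10.0.0.100", 443,
                               ["10.0.1.10", "10.0.1.11"])

# A client would then POST it to the controller, e.g. (hypothetical path):
#   requests.post("https://controller/api/virtualservice",
#                 json=payload, headers=auth_headers)
```

The point is that the VIP, the listener, and the pool are one declarative object, so there's no ticket queue between "VM exists" and "VIP exists".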
Even with HAProxy today, yes, you can get the logs, but then you have to parse those logs. You have to build all the dashboards yourself. You're still managing the upgrade cycle. It's a full-time job to do it all.

Now, since we're based on an API-driven architecture, it is very, very simple for us to integrate into existing or even new orchestrators. This is also big; it's not a heavy lift. We build something called a cloud connector, and essentially what it does is say: oh, you're in AWS. Okay, let's use your DNS, Route 53. Let's get into your auto-scaling groups. Let's grab your Elastic IPs, your VPCs. We're aware of everything going on in that environment. So as you spin up new application servers, we automatically spin up VIPs and load balancers to support them. If we start to see more and more traffic, great, we'll scale out the back end, even your pool servers, or we'll spin up more load balancers. It's that simple, and that's because of this controller-based model, something that, again, none of the other vendors can provide. With Ansible, we've got about 100 playbooks written. Again, when you have a consistent object model and a consistent API, it's really darn easy to automate. And we do have a Cisco appliance as well that we can be deployed on, so if you have a siloed operational model and you need a network appliance, we can deliver that too. But primarily, we're consumed as 100% software.

And finally, when you think about performance, this is the thing legacy vendors have always talked about. They say, hey, you need a custom ASIC. But the challenge is, all your customers are moving to newer ciphers like elliptic curve, along with perfect forward secrecy, and the ASICs are all optimized for RSA. So now you're left with the four general-purpose cores in the box to actually do all the SSL offloading for elliptic curve.
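That "more traffic means more load balancers" behavior only works because the controller is sitting on metrics. As a toy sketch of the idea, not the actual scaling algorithm, here is the kind of decision a controller loop can make once it has per-service-engine utilization; the thresholds and metric shape are assumptions for illustration.

```python
# Sketch: the scale-out decision a metrics-driven controller can make.
# Thresholds (high/low watermarks) are illustrative assumptions.

def plan_scaling(se_cpu_percents, high=80.0, low=20.0, min_ses=2):
    """Given per-service-engine CPU %, return 'scale_out', 'scale_in', or 'hold'."""
    avg = sum(se_cpu_percents) / len(se_cpu_percents)
    if avg > high:
        return "scale_out"  # spin up another service engine on spare x86 cores
    if avg < low and len(se_cpu_percents) > min_ses:
        return "scale_in"   # reclaim an idle engine, keeping a minimum for HA
    return "hold"

print(plan_scaling([85, 90, 88]))    # heavily loaded fabric -> scale_out
print(plan_scaling([10, 12, 8, 9]))  # mostly idle, above minimum -> scale_in
```

Without the metrics feed, there's nothing to evaluate, which is the point being made about automation above.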
So now you've just got a really big piece of junk that you're paying for, and it's not delivering the performance your application teams need today. With Avi, you can literally just treat these cores as building blocks, with consistent linear performance. One of our customers, a huge financial services company, was looking at F5 in GCP. They couldn't get the performance they needed because they had to build multiple layers of load balancing. With us, they just spun up 400 clients going across 40 load balancers, which are essentially x86 cores, so instances in GCP, and they got over a million TPS running within eight minutes. So when you think about retail, when you think about companies with variable workloads, you need this elasticity, because otherwise you're paying through the nose for two months out of the year.

And then finally, the use cases: load balancing, public cloud, security, so web application firewall, microservices, containers. And the benefits, of course: enterprise-class features. We're doing layer 4 through 7 load balancing with ELB-like automation. If you're familiar with AWS, it's all about automation, and that's what your customers want today. We deliver that from an infrastructure standpoint while delivering the same feature set you'd expect from your F5, the one your applications are already using. So it does make lift-and-shift possible. And then multi-cloud: we also provide global load balancing across your clouds, across your Kubernetes clusters, et cetera.

So with that said, let's jump into a demo. What you're looking at right here, you're probably not used to, because if you were to log in to a load balancer, what would you actually see? These are applications. This is a multi-tenant environment, so you can break it out by business unit, by application, so you don't have to deal with noisy neighbors all residing on the same physical hardware.
You can actually carve out cores just for specific application teams. Each of these has a health score, so immediately you know what's going on. Based on baseline performance and machine learning, we can tell you: is it performing optimally compared to normal conditions? What's the resource penalty of those load balancers; how full are they, how much of their resources are being used? Anomalies: are we seeing spikes in traffic we should be aware of? And then security: are the ciphers outdated?

If we jump into one of these, and actually let's go into the tree view quickly, this is something the community had to come up with themselves for a company like F5, just to see what your application topology actually looked like. Now you know exactly where the load balancer is, what its health is, what the pools look like, and each of the individual backend servers. You'll notice these are the load balancers; we call them service engines. We don't even care about them, because they're ephemeral; we'll just spin up more of them and route traffic. But you'll notice this glaring exclamation point. What's wrong? Oh, a health check failed. Immediately you know exactly what the problem was. Today, to troubleshoot, you're doing packet captures and TCP dumps; that's all you have at your disposal. And if it's across a large environment where you have multiple appliances, it gets even more challenging.

Now, when you go into an individual application, this is that 10,000-foot view where, based on our custom TCP stack, you can immediately see end-to-end timing for how your application is performing. So when you get that ticket saying, hey, we think it's the network, you immediately have a place to look. But the best part is that all of this is available via API. We've containerized Grafana images so you can create dashboards not only for your NOC but also for your application teams, all of it role-based access, of course.
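Coming back to that health score for a second: the shape of it is a performance score with the penalties subtracted from it. Here's a toy sketch of that composition; the 0-100 range and the flat subtraction are simplifying assumptions, not the product's actual weighting.

```python
# Sketch of the health-score idea: a performance score minus penalties for
# resource usage, anomalies, and security posture. The flat subtraction and
# 0-100 clamp are simplified assumptions for illustration.

def health_score(performance, resource_penalty, anomaly_penalty, security_penalty):
    """Combine a 0-100 performance score and penalties into one health score."""
    score = performance - resource_penalty - anomaly_penalty - security_penalty
    return max(0, min(100, score))  # clamp into the displayed 0-100 range

# A well-behaved app: fast, headroom on the service engines, no anomalies.
print(health_score(95, 5, 0, 0))    # 90
# Same app with a flagged traffic spike and an outdated cipher suite.
print(health_score(95, 5, 10, 15))  # 65
```

One number per application, so the dashboard can tell you at a glance which tile deserves a click.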
You can give read-only access, and we even have a Splunk app. So not only can you see how your infrastructure is performing, but how your applications are performing. You'll notice very quickly that something happened at this point in time: it looks like the application response time almost doubled. So what does that mean? It's not a network issue. Great. We've already narrowed down where to look, so let's actually look there.

Here we're providing every SSL transaction; this is every request that's gone across your load balancer. Right now we're only looking at significant logs, so these are things that yielded, say, a 404 response, or maybe high latency compared to the baselines. If you go into one of these, you're getting all the information you would otherwise have to capture through, again, a TCP dump or a packet capture. This is extremely valuable for troubleshooting, of course. So you know, if someone put in this URI or URL, what error code did we get? And you can actually find that request; you don't have to say, oh, it was slow three hours ago, what are you going to do? You can look at it over time here and pinpoint it. You can even get the headers out of here. So although we offer packet captures and TCP dumps, if you're using them, there's something wrong.

Now, we noticed on that last page that it was app response time giving us issues. So let's just search by app response time greater than 200 milliseconds. Now you're going to see all those logs again, but we can go through sortable fields to figure out: okay, what are we seeing? What behavior should we be looking at? Well, this is the client side, so we can see all the browsers everyone is accessing from. That's great feedback, probably for your application teams, or maybe if you're narrowing something down across the WAN. But here it's app response time, so we really should be looking at the server infrastructure.
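Since these logs all come back over the API too, you could do the same slice-and-group step yourself in a few lines. This is a client-side sketch; each record is a dict, and the field names (`app_response_time_ms`, `server_ip`) are assumptions mirroring what the UI shows, not a documented log schema.

```python
# Sketch: filter log records above a latency threshold and group them by
# backend server, the same pivot the UI does. Field names are assumptions.
from collections import defaultdict

def slow_requests_by_server(logs, threshold_ms=200):
    """Group requests slower than threshold_ms by backend server IP."""
    by_server = defaultdict(list)
    for rec in logs:
        if rec["app_response_time_ms"] > threshold_ms:
            by_server[rec["server_ip"]].append(rec["app_response_time_ms"])
    return dict(by_server)

logs = [
    {"server_ip": "10.0.1.10", "app_response_time_ms": 45},
    {"server_ip": "10.0.1.11", "app_response_time_ms": 480},
    {"server_ip": "10.0.1.10", "app_response_time_ms": 60},
    {"server_ip": "10.0.1.11", "app_response_time_ms": 510},
]
print(slow_requests_by_server(logs))  # only .11 shows up as slow
```

The same pivot by server IP is what the demo does next in the UI.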
So we can look at the virtual service IP addresses. If we take a look at the server IP address, which also correlates back to that health check we saw, which is also interesting, now we can see that every high-latency transaction was coming from this one server. Okay, so now we just go to the compute team, or we rotate that server out, and we go to the application team and say, hey, there's something wrong with that one; why don't you debug it? It's that simple. This is the bread and butter, because you get this across all of your environments.

Additionally, when you go into something like a Kubernetes environment, we also provide visibility into how all those microservices are interacting: not only the security posture, but also the health checks, L4 through 7 load balancing and proxy services, even whitelisting and blacklisting. So it's extremely powerful. You can run this on bare metal, in virtualized environments, in the public cloud. And of course, everything is automatable because it's all a consistent object model, REST API based, so we have a Swagger API. And in our GitHub repo, you can actually deploy this same demo; it's all containerized. You just pull it down, get a trial license, and you can recreate this entire environment. If you want to deploy web application firewalls, that's policy-based as well: boom, you turn it on. There's a detection mode and an enforcement mode, and we've got a learning mode and a positive security model as well.

So that's what I wanted to show you today. There's a lot more we can go into, specifically OpenStack, at the booth, but I appreciate the time, and thanks for getting educated on Avi.