Well, hello everybody and welcome again to another OpenShift Commons briefing. This time we have Avi Networks and Ashish. They're new members of the OpenShift Commons, and I ran into them at KubeCon a while back, and they had a different approach to some of the application network services and working with OpenShift. So I thought it would be a good way to introduce them to the community by having them come on and talk a little bit about what they're doing, and I'm not going to do too much more of an intro because I'm pretty new to it. So, if you could introduce yourself and your colleague who's going to be doing the demos, we'll get started. There'll be Q&A in the chat, and after the demonstration is done, there'll be time to ask questions again too. So please take it away.

Thank you, Diane. Thank you very much for giving us the opportunity to present at the OpenShift Commons webinar. My name is Ashish; I'm a Senior Director of Product Management at Avi Networks. With me is my colleague, Bhushan Pai, who's a Senior Solutions Architect, and he will be driving the demo about halfway through the presentation. In terms of the agenda today, we'll do a brief introduction to Avi Networks, including the product architecture, and then we will dive deep into how the Avi solution works with OpenShift. We'll do a deep technical dive there, and then we'll do a live demo where we will show you how Avi works in an OpenShift environment from beginning to end, and all the services it provides. If you have any questions, please feel free to ask in the chat window, and we will address them as we go, as well as toward the end. So let's start with a brief introduction to Avi Networks.
Avi Networks is about a four-and-a-half to five-year-old company headquartered in Santa Clara, California, but with offices worldwide, including the continental US; Europe, including the UK and mainland Europe with Germany, France, and the Netherlands; and Asia, including India and Southeast Asia. So we have a global footprint with R&D and customer engineering. As a company, what we do is software-defined application services: layer 4 to layer 7 services that start with load balancing, or service proxy as we call it in the container world, but also application security, including SSL termination, DDoS detection and mitigation, and web application firewall, as well as application visibility and monitoring. All of these layer 4 to layer 7 application services are built into the same product. And unlike other solutions out in the market, this is a software-only solution that works across all the use cases: traditional load balancing use cases, private cloud with OpenStack, public cloud, and container use cases, as we'll see with OpenShift as well as Kubernetes. We have customers worldwide, including some joint customers in common with Red Hat. We have customers that are using us in OpenStack environments with Red Hat. We have customers using us in OpenShift, customers such as Deutsche Bank and Danske Bank, as well as others, running in production today with Avi and Red Hat OpenShift. And we also have customers piloting us, such as Intuit and others, in OpenShift environments. All of these customers also use Ansible as the automation tool. So as far as the ecosystem is concerned, we have customers with OpenStack with Red Hat, with Ansible automation, and with OpenShift for their container fabrics. We'll talk about more of the use cases as we go along. But let me give you a little bit more background on the problem that Avi is trying to solve in general.
What our customers are trying to achieve, across all the use cases they're running on-prem or in private cloud or public cloud or containers, is going from the left side of this picture, a manual, ticket-driven operation, to a self-service, fully automated experience with CI/CD, with network automation, with application deployment automation, with OpenShift, with Ansible, and so on. The use cases might be different, but the underlying driving factor for these customers is automation. And that's why automation is built fundamentally into Avi's solution. We'll talk about that in the next couple of slides. So what is the problem today? Why does Avi Networks exist? Why did we get started? The problem that exists today is that if you look at any modern data center, you have racks and racks of servers. These are standard x86 servers on which you deploy applications. There is no snowflake here; all servers are x86 servers. You can deploy your applications on any of them. You can start with, let's say, 10 servers and scale out to 15 or 20, or scale back to five or six as the load increases or decreases. You manage them as a pool of capacity, a fluid capacity pool. You don't have any specialized hardware for applications. And that allows you to do automation. That allows you to have telemetry built in that drives that automation. And it makes the entire management of the fabric much simpler. That's how all modern data centers are built. But if you look at any of the layer 4 to layer 7 services, and especially the traditional hardware load balancers, whether they're F5s or Citrix NetScalers or A10s, they're built on proprietary hardware. They're an appliance-centric model. Even if you look at the virtual appliances, they are appliances, which means you're going to manage them as individual devices. There is a lack of automation. APIs are an afterthought.
And you basically can't automate them as an elastic pool of capacity. That is the fundamental problem that Avi solves. So let's see how it solves it, and then we'll talk about how it works in an OpenShift environment. If you look under the hood of a traditional appliance-based load balancer, it's a monolithic piece of software running on proprietary hardware. In that software, there are two components: the management/control plane, which handles the configuration, the policy, et cetera, and the data plane, which is actually doing the load balancing or other security services. The first innovation and disruption that Avi Networks did was to apply software-defined principles, where we separated the control plane from the data plane. So we have the concept of the Avi controller, which is the centralized control plane that manages a distributed set of data plane entities; we call them service engines. In OpenShift parlance, you can think of this as your OpenShift master and your individual OpenShift nodes; it's the same concept as far as the separation of control and data plane goes. That's the first thing that we did. What this allows you to do is manage your entire service proxy or load balancing fabric as one, because the controller is your REST API endpoint, and the controller completely automates the underlying service engine fabric. The second problem we wanted to solve was the heterogeneous compute environment. You might have applications running on bare metal, in a virtualized environment, let's say with KVM on OpenStack, in containers with Docker containers running in Kubernetes or an OpenShift environment, or in a public cloud environment, whether it's Google Cloud, AWS, Azure, or anybody else. How do we have a single solution with operational consistency and feature consistency that works across all of the above environments? That's a problem Avi has solved as well. How?
Because Avi is software. It can take the form factor of a server by running Avi as software on a bare metal server. It can be a VM running with KVM or ESXi. It can be a Docker container running with the Kubernetes or OpenShift integration, as we'll show in the demo, or it can run as a VM or a container in a public cloud instance. It's the same software that runs across the board. And the beauty of these two pieces of the architecture is that this is all managed from a centralized Avi controller. The way the Avi controller achieves that automation is through integration with the ecosystem. Avi talks to the OpenShift master, or talks to the Kube master, or talks to Amazon APIs or OpenStack APIs, to spin up these Docker instances or VM instances automatically based on demand. So the operational flow is that you communicate with the Avi controller through the REST API, or an Ansible playbook that's built on the REST API, and say: I want load balancing services for these applications, or I'm deploying these applications, do everything. From there on, the Avi controller talks to the appropriate orchestration engines, spins up the appropriate proxy instances, scales them out as the traffic grows or shrinks them as the traffic shrinks, manages high availability, manages the placement of all that: fully automated layer 4 to layer 7 services. The last secret ingredient of the architecture is the built-in visibility and analytics. These service engines, which are your proxies, are sitting inline with your traffic, right? They're intercepting the TCP and HTTP traffic. And as part of the proxy functionality, each one is collecting hundreds of logs and metrics every minute, pushing them up to the Avi controller, and the Avi controller runs a big data analytics engine.
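The operational flow described above, asking the controller for services through its REST API, might be sketched as follows. This is a minimal illustration only: the field names and structure are assumptions for the sketch, not the controller's actual API schema.

```python
import json

def build_virtualservice_request(app_name, port, pool_members):
    """Build a hypothetical JSON body asking the controller for L4-L7
    services for an application. From a request like this, the controller
    handles SE placement, scaling, and high availability on its own."""
    return json.dumps({
        "name": f"vs-{app_name}",
        "services": [{"port": port}],
        "pool": {
            "name": f"pool-{app_name}",
            # Backend instances to load balance across.
            "servers": [{"ip": ip} for ip in pool_members],
        },
    })

body = build_virtualservice_request("app1", 443, ["10.0.0.11", "10.0.0.12"])
print(body)
```

In practice this body would be POSTed to the controller (or generated by an Ansible playbook wrapping the same REST API); the point is that the operator describes intent once, and the controller drives the service engine fabric from it.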
And not only does it present a nice dashboard for you to see what's going on from a performance and visibility point of view, but the controller uses that information in a closed-loop fashion to make automated, policy-based changes on the underlying infrastructure. So for example, if it sees the traffic growing, it can talk to the OpenShift APIs and auto-scale your infrastructure, scale out or scale in, depending on the policies. So it's a fully elastic, automated services fabric. And it's a full-featured load balancer, a full-featured fabric. If the question that comes to your mind is, well, does it have the features that my F5s or Citrix have? Yes. It's not an open-source HAProxy or kube-proxy-like basic load balancer. It's full enterprise grade, with L4, L7, caching, compression, content switching, auto-scaling, global server load balancing, SSL, DDoS, WAF. You name it, it's an enterprise-grade solution. Think of Avi as the best of an enterprise-grade proxy or load balancing solution, married with the elasticity of your kube-proxy or HAProxy or NGINX, built in with integration into your OpenShift environment. Plus, the application visibility, the log analytics, the security analytics, the built-in multi-tenancy, built-in service discovery, and centralized management are in addition to the features we talked about. We'll talk more about this as we go through the demo. But a common question that gets asked is: well, this is software, what about performance? Surely hardware is required to get high performance. Well, that's a myth the hardware vendors have propagated; that's absolutely not the case. Here are two data points. First, you can have one gig of encrypted sustained throughput, or five gig of total unencrypted throughput, and 2,500 SSL transactions per second with perfect forward secrecy, in a single vCPU core as a proxy.
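As a back-of-the-envelope check on what that per-core figure implies for scale-out, the arithmetic is simple. The 2,500 TPS number is from the talk; the target below is just an example workload.

```python
# Quoted figure from the presentation: roughly 2,500 SSL transactions
# per second with perfect forward secrecy on a single vCPU core.
SSL_TPS_PER_VCPU = 2_500

# Example target: one million SSL TPS across the service engine fleet.
target_tps = 1_000_000

# Minimum number of vCPU cores needed, ignoring overhead, spread across
# however many commodity x86 instances the controller places SEs on.
cores_needed = target_tps // SSL_TPS_PER_VCPU
print(cores_needed)  # 400
```

So a million-TPS virtual service is on the order of a few hundred commodity cores, which is exactly the kind of horizontal scale-out the architecture is built around.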
And then this Lego block of your proxy, the service engine, can be one core, two cores, or a full bare metal server, and that gets you more than what your hardware appliances give you in that form factor. The second part is horizontal scale-out. With these Lego blocks, you can build lots of service engines in a scale-out fashion, centrally managed by the Avi controller. It can reach millions of SSL TPS and terabits of throughput on commodity x86 hardware. As a proof point, one of our customers, one of the largest e-commerce and payment companies, did an experiment where, in the Google public cloud, they scaled out with Avi a single application, a single VIP, to one million SSL TPS; zero to one million in eight minutes, running on commodity Google x86 instances. You cannot even do that with a hardware appliance. So performance is a non-issue. It actually outperforms any hardware load balancer. All right, last slide on the use cases, and then we switch over to the OpenShift-specific integration. Avi works across your traditional load balancer refresh use cases, where you run on bare metal or a VM, replacing hardware load balancers and saving over half the cost for our customers; our private cloud use case, which is OpenStack with Ansible automation; public cloud or an SDN environment; and the case in point today, container and PaaS solutions. And by the way, we have common customers with Red Hat on all of the above use cases. Okay, so let's switch gears into how this works in OpenShift. How does the architecture that we just talked about for the last 10 minutes fit into an OpenShift architecture? Let's start with the problem statement first, right? As application architectures have evolved from monolithic applications to distributed, microservices-based applications, you have a problem. You can use your traditional appliances for north-south, or as OpenShift calls it, the routes layer, right?
Where you can deploy your SSL offload and load balancing for your north-south application traffic. But what about east-west? Because with a microservices-based architecture, there's a lot of east-west traffic going on. And your traditional appliances don't work there, because an appliance is still an appliance, even if it's a virtual appliance. So the common solution is kube-proxy, for example, in an OpenShift environment. There are challenges with that, and we'll talk about them in a minute. What Avi has done: if you look at the infrastructure stack of a container services fabric, you have your server layer, physical or virtual, it doesn't matter; you have your network layer, with or without SDN; and then you have your L4 to L7 layer. On top of that, you have Kubernetes and OpenShift for scheduling and resource management. Avi Networks fills in the gap in the middle, whether you are running on-prem or in public cloud: from service discovery, to service proxy, to application visibility and performance monitoring, to micro-segmentation at the application level and security at L7, to DDoS, to SLA-based auto-scaling by integrating with Kubernetes or OpenShift. That's the gap where Avi provides an enterprise-grade L4 to L7 services solution. For the next 10 minutes before the demo, we'll go into the detailed architecture. So what is the before and after? If you look at the before, whether it's a traditional hardware-based solution or a combination with open-source solutions, what are the challenges? First of all, you need multiple tools for your routes, for the north-south load balancing; you typically use an HAProxy fronted by a hardware load balancer for SSL offload. That's a common example. There are challenges with that. HAProxy itself is not HA; you need a separate daemon to monitor its heartbeats. It doesn't have enterprise-grade features; it's a very basic load balancer.
And when you combine that with a hardware load balancer, it's too expensive and complex. Then you have separate tools for your east-west; you typically use kube-proxy. Kube-proxy itself is a probabilistic load balancer; it doesn't do L7, it doesn't even do east-west SSL. And if you combine that with a separate service discovery solution for DNS, centralized management, and a hardware load balancer for global server load balancing, these multiple tools add operational complexity and higher cost. How does Avi solve this problem? Avi has one solution that does all of the above. It does service discovery; service proxy, so local and global load balancing; application maps; application performance monitoring; security; micro-segmentation; and auto-scaling. So architecture-wise, how does it work? We saw earlier in the architecture slide that we have a centralized controller and a distributed data plane in the service engines. The way it looks is that the Avi controller is deployed as a container, typically on the OpenShift master, but it can be outside, and Avi service engines run one pod per OpenShift node. So an Avi service engine runs as a container in a pod on each of your OpenShift nodes, okay? So let's go through the day in the life, from day zero to day one to ongoing, and then we'll go into the demo to show you how this works. I'm going to walk you through the steps of how the integration works and how the flow works, and then show you exactly that in the live demo. On day zero, the way it works is, step one: on the Avi controller, you configure the OpenShift credentials. So you have deployed the Avi controller as a container, and you give it access to the OpenShift master. That's it. That's the only config you ever have to do on Avi. From then on, the Avi controller talks to the OpenShift master, figures out how many OpenShift nodes are running in the environment, and automatically brings up Avi service engines as Docker containers, or pods, on each of the nodes.
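The day-zero step above, pointing the controller at the OpenShift master, amounts to a single piece of configuration. A sketch of roughly what that configuration carries is below; the field names are illustrative assumptions, not the controller's exact cloud-connector schema.

```python
import json

# Day-zero sketch: the only thing the operator configures on the Avi
# controller is how to reach the OpenShift master. From this, the
# controller discovers the nodes and deploys one service engine pod per
# schedulable node; there is no per-node configuration.
cloud_config = {
    "name": "openshift-cloud",
    "master_url": "https://openshift-master.example.com:8443",
    # Placeholder credential; in reality a service account token with
    # access to the master API.
    "service_account_token": "<token>",
}
print(json.dumps(cloud_config, indent=2))
```

Everything after this point (SE deployment, virtual service creation, DNS records) is driven by the controller watching the master, which is what the demo walks through next.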
Then what we're going to show you is that when you create a deployment, let's say app1, in OpenShift, Avi automatically listens to the notifications and pulls the configuration from the OpenShift master, automatically creates the corresponding services, creates a VIP, creates a DNS record, fully automated. Okay, so on day zero, you configure the controller with the master credentials, and from then on, as you do deployments, Avi takes care of everything. For GSLB, it's the same thing: you do the deployment in each of the data centers and then you do the GSLB configuration. So again, you deploy app1 in data center one and data center two. Avi automatically creates the corresponding local VIPs. Avi automatically creates the global service for GSLB, covering both VIPs. Avi automatically syncs that to all the followers and starts health monitoring the local VIPs. So even GSLB is zero touch. So how does the traffic flow work? Now that the deployment has been done and your services have been created, let's look at the traffic flow. How does Avi provide the service discovery and service proxy capabilities? Let's talk about north-south traffic management first; this is the routes part of it. If a user comes in from outside and looks for app1.os.acme.com, recursively through DNS it comes to the DNS that Avi is running, and Avi first does the GSLB resolution, because you're running in two data centers. The GSLB returns a local VIP, and then the user performs the GET on that local VIP address, and Avi automatically handles that with a service engine as a proxy: a TCP, HTTP, SSL proxy. It figures out where the right pod is that should handle this specific request, forwards it, and basically does the load balancing. It's a fully stateful load balancer. Now let's talk about east-west traffic management. What happens when an internal service is trying to reach another service through DNS?
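The GSLB resolution flow just described can be sketched in a few lines. All names, addresses, and the site-preference logic here are made up for illustration; the real GSLB supports richer algorithms and health monitoring.

```python
# Toy model of GSLB: one global service covering a local VIP in each of
# two data centers, with per-VIP health state.
GLOBAL_SERVICE = {
    "app1.os.acme.com": [
        {"vip": "192.0.2.10", "site": "dc1", "healthy": True},
        {"vip": "198.51.100.10", "site": "dc2", "healthy": True},
    ],
}

def resolve(fqdn, prefer_site="dc1"):
    """Return a healthy VIP for the FQDN, preferring the local site,
    and failing over to another site when the local VIP is unhealthy."""
    healthy = [e for e in GLOBAL_SERVICE[fqdn] if e["healthy"]]
    for entry in healthy:
        if entry["site"] == prefer_site:
            return entry["vip"]
    return healthy[0]["vip"] if healthy else None

print(resolve("app1.os.acme.com"))   # local DC answers: 192.0.2.10
GLOBAL_SERVICE["app1.os.acme.com"][0]["healthy"] = False  # dc1 VIP fails
print(resolve("app1.os.acme.com"))   # failover answer: 198.51.100.10
```

After the DNS answer, the client performs its GET against the returned VIP, and the service engine holding that VIP proxies the request to the right backend pod.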
Similarly, the service (not Avi, the service itself) is going to perform a DNS query for the east-west application; it recursively hits the GSLB, which returns a VIP, and east-west traffic is managed the same way. So basically it's a client-side proxy. The way it works locally is that the traffic from the pod making the request is intercepted by the local proxy running in the Avi service engine, and the service engine then decides which pod is the best one to respond. Because we have a service engine running on every node, this is a client-side proxy, just like kube-proxy; the Avi service engine basically replaces kube-proxy. Every time a request is sent out from the requesting pod, it's intercepted by the local service engine proxy and then load balanced in a stateful fashion. So unlike kube-proxy, which is a probabilistic packet distributor, Avi is a fully stateful proxy: it does the health checks and it makes sure that the load is evenly distributed. It can handle SSL offload, and it can do all the L7 policies and rules, like URL rewriting, rate limiting, and URL-based redirects; everything that you expect in an enterprise-grade load balancer is also available for your east-west traffic. This also allows you to go beyond basic proxy services and integrate with your CI/CD and your blue-green deployments. Let's say you're deploying version two of an application and you want to do a policy-based switchover from the blue version to the green version, or version one to version two. All you have to do is create policies on Avi through annotations, and what Avi can do is, you configure it to send, say, 10% of the traffic first to the new version, and if everything looks good, automatically start sending everything to the new version. If it doesn't look good, it falls back to the blue version. So you can do that programmatically, based on policies, through this solution, and non-destructively.
The existing connections are allowed to drain, and new connections switch over to the new version. Another common use case we have with this is a security capability. Let's say you have an external service, whether it's a database service or an Active Directory service, that your internal services want to access. However, you want only certain internal applications to be able to access that external service, and you want to enforce that policy. And let's say you have a firewall in the middle that's also controlling access to that shared resource. What you can do with Avi is create this external access point, an Avi pod, and through policies it says only the blue service is allowed to access the external resource; we also do NAT, source NATing, to a single IP address, so on the firewall you only have to open up a single IP. You don't have to open up multiple IP addresses. So it does two things: it allows you to simplify the firewall configuration, and at the same time it allows you to control which service, in this case the blue service, can access the external resource, while the red service cannot. All right, with that, let me pause here and hand control over to Bhushan, who will do a detailed demo of everything we just talked about, step by step, starting from scratch: you have an OpenShift cluster, the Avi controller is deployed, we'll spin up Avi service engines, create deployments, and run some traffic. Bhushan, over to you.

Thank you, Ashish. Let me start sharing out here. Okay, so what we aim to do in this demo is create a virtual service like you see. You see it now? Okay, yes. Give it a second; still trying to share the screen. I can see that you're sharing the screen, but you need to click into your browser. Okay, so this is what we aim to do in this demo today.
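Before the demo starts, the egress control described above can be sketched as a whitelist plus a single source-NAT address. The service names and IPs below are illustrative; the real policy lives in Avi configuration, not application code.

```python
# Only whitelisted internal services may reach the shared external
# resource; allowed traffic is source-NATted to one IP, so the firewall
# in front of the resource needs exactly one rule.
EGRESS_WHITELIST = {"blue-service"}
SNAT_IP = "203.0.113.5"

def egress(source_service):
    """Return the NATted source IP if the service is allowed out,
    or None if the traffic is blocked."""
    if source_service in EGRESS_WHITELIST:
        return SNAT_IP   # all allowed egress shares one source IP
    return None          # e.g. red-service is blocked

print(egress("blue-service"))  # 203.0.113.5
print(egress("red-service"))   # None
```

This captures both halves of the use case: access control per internal service, and firewall simplification through a single egress IP.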
We are going to deploy services on OpenShift, and this is what it's going to look like. We have a virtual service, which corresponds to a service in OpenShift, which has a VIP on it, which is placed on one of the service engines running on one of the cluster nodes. This virtual service can have one or multiple pool members, and each pool can have multiple backend pods. So let's start with our clean setup. As you see out here, we have an OpenShift system with multiple projects on it, and an Avi controller which right now has nothing on it. It's a completely clean setup: no applications, no virtual services, nothing. I also have a dashboard to run some REST APIs against either the OpenShift master or the Avi controller, to provision some services quickly. So let's start with the first day-zero configuration, which is setting up the OpenShift cloud, and then let's go back and check out our infrastructure. As you saw earlier, this was set to none, but now we have configured the OpenShift cloud out here. Behind the scenes, what this does is configure the master node IP and authentication token and set the SE deployment policies, telling it from where to get the image, how to deploy, and where to deploy. On the application side, it also configures the IPAM and DNS profiles. For this demo, we are using our internal IPAM and DNS for both east-west and north-south virtual services. In addition to this, Avi also maps all the projects that exist on OpenShift as tenants, which you can see out here in the tenant view. So for each project on OpenShift, we map it to a tenant on Avi. Let's stay on the admin tenant for now; the next step is to configure the DNS virtual service, which will be responding to all the DNS queries for all the applications that Avi will handle. So let's go back to applications and see that the virtual service is already up and running. If you go to infrastructure, we see that the service engines are also up.
The OpenShift cluster has four nodes: one master and three minion nodes. The Avi controller goes and deploys a service engine on each node. The Avi controller also has the intelligence to disable the service engine on nodes which are not scheduled for pods; for example, the master node out here has the service engine disabled. And if you go back to the DNS virtual service, you see that we are listening on one of the IPs in the north-south network, and it will start serving all the FQDNs of the different services on OpenShift, right? So let's go to the admin project right now. As you see, there are no applications running. So let's go and start some applications on the OpenShift side. When I clicked that button on the demo controller page, what it did was run a REST API call to the OpenShift master to deploy the applications and deployment configs, which in turn deployed the application pods, as you see. As these applications come up, we automatically sync them, provision the corresponding virtual services automatically, and place them on all of those service engines you saw earlier. So let's wait for the applications to come up. All right, looks like all of them are already up and green. As you see out here, all of the applications have an IP in the 172 subnet except for photo, which is our north-south-facing service; it has an IP in the north-south subnet. All this happens automatically using our internal IPAM. The way we provision these virtual services is through annotations on the service, which you configure on OpenShift. If you go to one of the services and look at its annotations, the only thing the app developer has to do is provide an annotation called avi_proxy, and in the value of that annotation, specify what type of load balancing it requires. Is it regular HTTP or HTTPS or an L4 service? What kind of load balancing algorithm does it require: round robin, least connections, or any other option we have?
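To make the annotation flow concrete, here is a sketch of what an avi_proxy annotation on an OpenShift service might carry. The annotation name comes from the demo; the JSON fields inside the value are illustrative assumptions about the schema, so treat them as a shape, not an exact reference.

```python
import json

# Hypothetical avi_proxy annotation value: the developer declares the kind
# of load balancing they want, and Avi provisions the virtual service from
# it; zero touch on the Avi side.
annotation_value = json.dumps({
    "virtualservice": {
        # e.g. pick an HTTP application profile for an L7 service
        "application_profile": "System-HTTP",
    },
    "pool": {
        # e.g. least-connections instead of the default round robin
        "lb_algorithm": "LB_ALGORITHM_LEAST_CONNECTIONS",
    },
})

# The annotation rides on the OpenShift service's metadata.
service_metadata = {
    "name": "shopping-cart",
    "annotations": {"avi_proxy": annotation_value},
}
print(service_metadata["annotations"]["avi_proxy"])
```

The key point is that the intent lives with the service definition in OpenShift, and the load balancing layer reads it rather than being configured separately.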
Avi automatically reads those labels or annotations and provisions accordingly. So it's zero touch on the Avi side. Another view we have out here is called the map view. So let's go to it, focus on the photo.com app, which is our north-south-facing app, and start some traffic to it. Since we sit inline with the traffic, we know what application is talking to what other application, and as you see out here in a while, we monitor the traffic coming into your north-south application, and we also trace it back through all the other applications as it flows through your application stack. For example, the client is accessing photo, and photo in turn is accessing other east-west applications like shopping cart, inventory, and checkout. What this enables us to do: let's dig into one of the applications and see how it can help with the security of the virtual services. In this case, we see that this application is getting traffic from three other virtual services, and we automatically populate those virtual services into the white list. From this point onwards, if you just save this configuration and secure this VS, no other application will be able to talk to it. To demonstrate this in real time, let's just move one of the applications out of the white list, and in some time we will see that the arrow of the application we removed will turn red, indicating that all the traffic from that application is being blocked; okay, as you see right now. In addition to this, along the edges connecting the different applications, we can also see various metrics like throughput, number of completed requests, total request errors, latency, et cetera. The user can also see the same things in a graphical manner on the analytics page for each and every virtual service. Give it some time to load.
So you can see end-to-end timing, throughput, open connections, number of connections, requests, and all the other metrics out here. Also, under logs, you can see each and every request coming into the application. For example, out here, it says what client app is trying to reach this app and what URL it is hitting. And if you want to dig even deeper, you can expand one of those requests and see in detail what pod IP sent the request, which load balancer handled it, and which backend pool member it went to, along with hundreds of other metrics out here. This one is just a simple HTTP VS, but if we move to our photo.com, which is north-south, I have configured it as a secure HTTP, or HTTPS, VS. On this, we can also see the security analytics for the virtual service, where we can see what percentage of traffic is RSA or EC. You can also see that right now we don't have any attacks happening on this service, but if any attacks happen, we can see that out here as well. Apart from that, we have other graphs showing the percentage of traffic using different TLS versions: 1.0, 1.1, 1.2, et cetera. Again, transactions per second, the number of key exchanges happening, the different types of certificates the clients are using, et cetera, too. I guess from the demo point of view, that was all; I'll hand back to Ashish.

Yeah, Bhushan, one more thing: if we can show, on the other demo VIP that we have, a north-south VIP, the end-to-end timing, the geography and client information, the top URIs, 404s, et cetera; maybe we can spend five minutes or so on some of the other things that we can do.

Sure. So, moving back to the VIP we saw initially at the beginning of the demo; for this demo, we have much richer traffic to show.
As you can see on the end-to-end timing side, we can show you a summary of what the client response time looked like, how the server response time looks, what the data transfer time was, how fast the application is responding. And it's not just for the current time; you can view it across multiple time windows. Like you see out here, we are viewing the past six hours, and you see that between 8 a.m. and 9 a.m. we saw a certain surge in the end-to-end timing. Some of our clients must have seen the application slow down. In this case, usually the blame goes to the networking team, but using the Avi UI, you can clearly see that there is relatively no change on the client RTT side, while the app response time changed drastically. So it's likely the application that is behaving a bit weirdly, causing the client experience to slow down. In addition to this, like I said, we can see each and every request coming in and out of the application. We classify those requests into two types: significant and non-significant logs. The significant ones are the ones which either end up in a 400 or some kind of error, or take a really long time; basically something which can help debug problems. The non-significant ones are the ones which breeze through easily with no errors at all, like regular 200 requests. If we open one of these requests, we can see a lot more data about where the actual client is located, what kind of OS it is using, what kind of device it is using, and other L7 security metrics like what TLS version and what certificate type it is using. And again, we can see from which client IP, to which service engine handled the traffic on which port, and also on the backend side, which web server handled the request. If you want, you can also dig down even deeper and see each and every header on the request.
All these logs are captured in real time but can be enabled or disabled whenever required. Here's an example of how this helps: if you go to end-to-end timing, you see that most of the clients got served with relatively low latency, but some of them are seeing about a second of delay in the response. If we just click on this, we automatically filter out all the requests which ended up taking more than 800 milliseconds of delay. We see that most of these have relatively high client RTT times. So if we sort these by location, we see that 99% of this traffic is coming from India. So maybe the WAN link is the culprit here. Within seconds, we are able to quickly pinpoint where the problem is. That was from the networking point of view, but it works for the application administrator too: if he sees a lot of 404 errors on an application, he can just click on 404, see all the logs that ended up in a 404, and then filter that information by server IP address, and he knows which server the application is erroring out on. So even debugging application problems is really easy when you use Avi. On the security side, like I was saying earlier, we can show the percentages of clients using the different TLS versions and the different types of certificates. We also give you a real-time score of how your application is doing on security. For example, out here we are using self-signed certificates, so that is dragging down the health score of the virtual service. I guess that was all; back to you, Ashish. Thank you very much for the detailed demo. I hope that gave you a glimpse of what Avi can do today. And there are more things that we can show you if you are interested in a one-on-one session, including capabilities like auto-scaling, et cetera. So let us wrap up, and then we can take Q&A. But first, let me summarize what we saw earlier today in the presentation. 
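[Editor's note] The click-to-filter workflow just described, first isolating the slower-than-800-ms requests and then breaking them down by client location, can be sketched as a two-step aggregation. The log fields (`total_time_ms`, `country`) are illustrative placeholders, not Avi's real schema:

```python
from collections import Counter

def slow_request_origins(logs, threshold_ms=800.0):
    """Filter out requests slower than threshold_ms, then report what
    percentage of that slow set came from each client country. A
    heavily skewed result (e.g. 99% from one country) points at a
    WAN-link problem rather than the application."""
    slow = [log for log in logs if log["total_time_ms"] > threshold_ms]
    if not slow:
        return {}
    counts = Counter(log["country"] for log in slow)
    return {country: 100.0 * n / len(slow) for country, n in counts.items()}
```

The same pattern generalizes to the 404 example: filter on `status == 404`, then group by server IP to find the misbehaving backend.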
So what you saw was a fully distributed, enterprise-grade load balancer that takes care of both your routes for North-South traffic and your services for East-West traffic, as well as GSLB. It's a single solution that takes care of all of the above, including integrated IPAM and DNS for service discovery, a fully automated service proxy with service creation, and the application map, analytics, and troubleshooting capabilities that you saw. And what this allows you to do is to have a much simpler experience with very few moving parts: the same Avi solution does all of the above. You don't need a separate hardware load balancer for GSLB or for SSL plus a software HAProxy or kube-proxy for East-West, which, again, doesn't give you anywhere close to what Avi offers. And that results in significant CAPEX and OPEX reduction, given the simplification and the elimination of hardware load balancers, plus lower resource consumption. There was a question in the chat about the footprint. The footprint is as small as one vCPU and one to two gigs of memory. And the beauty is that it works in any environment: on-prem, in public cloud, with OpenShift, with Kubernetes, in AWS, Azure, GCP, et cetera. And finally, the thing that we didn't have time to cover today, but I'll just plug here, is our Ansible automation. If you go to our Ansible repo on GitHub, you'll notice that we have full Ansible modules, as you can see here, for day-zero deployment: for Avi controller deployment, for Avi service engine deployment, and for virtual service provisioning in non-OpenShift environments as well. Because in OpenShift, as you saw, everything was automated, but for any other environment, we have a full Ansible playbook. And in fact, just today, Red Hat and Avi did an AnsibleFest in London, where one of our Ansible architects, along with a Red Hat engineer, presented the automation that Avi provides. 
So check out our GitHub repo as well. And if you're interested beyond OpenShift, even in our OpenStack solution, reach out to us, because we have joint customers using Avi in OpenStack with Red Hat. So with that, I'll leave you here with these resources and we'll take questions. If you have any questions, reach out to me at ashish@avinetworks.com, or to Edward Sharp, who's in our business development department. And then let's take some questions. So the question is: is it possible to test Avi on OpenShift Origin? The answer is yes. Bhushan, do you want to add any color on that? Yeah, it works exactly the same both on the Enterprise version and on the open source OpenShift Origin version. Thank you. There's very little difference between OCP and Origin in terms of the deployment, pretty much no difference at all here. So this has been pretty... I think my mind has been blown, because I've been looking at kube-proxy, and this just really blows kube-proxy out of the water. I'm very appreciative of you guys spending the time giving us this demo and the insights into what we can do with Avi Networks' offering. And I can see now why you have so many joint customers with us, because this is something... I'm actually curious to know whether the OpenShift Online or OpenShift Dedicated folks have been using this in the background unbeknownst to me. So I'm going to have to reach out to that team, because this just seems like something that's, I wouldn't say completely lightweight, but the benefits you get from running it are just mind-blowing compared to the simple load balancing that I've played with a little bit. Yeah, this has been a really great presentation. So folks, if you have questions, please do reach out to the team here. I'm looking to see if there are any other questions online; I think everything got answered in the chat. 
From a performance standpoint, it looks like this really doesn't come with any serious overhead. So I'd definitely suggest you give it a try and give them a call. And I would love to talk further, maybe with some of your customers, to get them to tell their stories as well, because the demos are great, but seeing something at scale would be really wonderful. We can get to... Absolutely, Diane. Yeah, absolutely, yes. Thank you. Thank you for the opportunity. This was great. So everybody, we'll be back next week. Again, we have a couple of events coming up soon. The first big event will be that 2.1.7 is coming out, so we'll have a number of talks coming up on that. We're going to continue our theme this month with monitoring of all ilks, so there'll be more monitoring talks. And we'll post this and the slide deck: if you guys can send me a PDF version of the slide deck, we'll post this video and the slides on the OpenShift blog; within two days it should be up there. So if you're listening to this later on, please do reach out and join the mailing list or join our Slack channel. Find us on commons.openshift.org. Really a great presentation from Avi today; my mind is a little blown. There's a lot of detail there, and this is exactly the kind of thing we look to partners to provide. Just really outstanding work. So thanks, guys, and we'll talk to you all soon. Thank you, Diane. Thank you, everyone, for joining. Appreciate it.