Good afternoon, everyone. My name is Steve Ali and I'm a senior director of systems engineering for Avi Networks, and I brought along with me my good friend Bhushan Pai: systems engineer, solutions architect, TME, everything. I'm going to talk to you a little bit about Avi Networks and how we integrate with OpenShift specifically. As a little bit of background, Avi Networks has been around for over five years now. We have a global footprint, and what we do is in the space of the next-generation software-defined application delivery controller. That's a lot of words, but we're next-gen, 100% software, and we provide ADC services such as load balancing, SSL offloading, caching, compression, DDoS mitigation, and web application firewall services all together. I'm going to go through a little bit of the architecture to help you understand exactly what this looks like.

So let's take a step back for a second and talk about the last decade and a half of where the ADC space has come from, or developed out of: the Stone Age, as we like to refer to it. There are a lot of things that were characteristic of ADC vendors and ADC services that nowadays present some pretty significant challenges for customers as they roll out new application workloads. One is that it's done on proprietary hardware. It's very monolithic, and you're tied to that physical environment. It doesn't just have to be on-prem; it could also be in the cloud, but it's still just a snapshot of that physical hardware. So that's one limitation. The second is that you have to manage each individual device separately, which can become problematic for large deployments. If you have to log into each HA pair, active-standby, and configure them separately, that is a significant issue. The third is very low automation. A lot of those legacy platforms were not built on open REST-based API environments. They're not object-oriented.
You can't simply plug in other open API tools and get them up and running right away. A fourth one is very, very limited telemetry, meaning your visibility is pretty much confined to the HA pair that you're logged into at that particular time. So when you talk about an entire distributed fabric, it doesn't really exist when you look at the legacy solutions out on the market: very limited analytics and insights, and it doesn't really help with development priorities either. And lastly, it's static capacity. You put a physical box in place, you have X amount of capacity for that physical unit, and that's it. Once you hit that limit, you have to upgrade that physical device or put in another one. Moving applications around is not based on application need or end-client need; it's based on the physical limitations of the appliance that you have those services running on.

So what does Avi do differently? What do we bring to market now that differentiates us? The first thing is that we've completely decoupled the management plane from the data plane. We have created what's called the Avi Controller, which handles all management, or control plane, function. And we've also created the Avi Service Engines, which are the worker nodes. Those are the physical ADC devices. Now, I say physical; those are the software ADC services that are sitting out there, deployed in any environment that you want them to be in. But the controller is now in charge of that entire distributed data plane architecture, or fabric. Those service engines can be deployed in any ecosystem: a traditional bare-metal on-prem environment, a virtualized environment, or a containerized environment, which is what we'll talk about.
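The control-plane/data-plane split described above can be sketched in a few lines of Python. This is a minimal illustrative model, not Avi's actual API; the class and method names are made up for the example:

```python
# Minimal sketch of the decoupled architecture: one controller (control
# plane) managing a fleet of service engines (data plane) deployed across
# different ecosystems. Names are illustrative, not Avi's actual objects.

class ServiceEngine:
    """A software data-plane node deployed in some ecosystem."""
    def __init__(self, name, ecosystem):
        self.name = name
        self.ecosystem = ecosystem   # e.g. "bare-metal", "vmware", "openshift"
        self.config = {}

class Controller:
    """Single control plane: all config and telemetry flow through here."""
    def __init__(self):
        self.engines = []

    def register(self, engine):
        self.engines.append(engine)

    def push_config(self, config):
        # One config push fans out to the whole distributed fabric,
        # instead of logging into each HA pair separately.
        for se in self.engines:
            se.config.update(config)

ctrl = Controller()
for name, eco in [("se-1", "bare-metal"), ("se-2", "vmware"), ("se-3", "openshift")]:
    ctrl.register(ServiceEngine(name, eco))

ctrl.push_config({"ssl_offload": True})
print(all(se.config.get("ssl_offload") for se in ctrl.engines))  # prints True
```

The point the model captures is the single configuration touchpoint: one push from the controller reaches every service engine in the fabric, regardless of where each one is deployed.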
One of the things that you get from this architecture, obviously, is the built-in visibility, because you log into one place, the Avi Controller, and you have visibility, insights, analytics, and telemetry data points into this entire distributed fabric. The other is automation. We're built on a 100% open API, and you can automate based on what we've already done, the work that we've already put into other third-party integration tools. And lastly is the capacity. The performance of this fabric can expand. We call this elastic HA, and we can show a little bit of this, but we can expand and contract this fabric based on real-time traffic needs. Sometimes it's predictable: you know that you have a registration event or a particular seasonal traffic event coming up, and you can pre-plan for that. Others, such as a volumetric attack, a DDoS attack for instance, you can't plan for. So we can expand and contract on demand, as the traffic dictates.

All right, so let's focus on the containers piece, and specifically container networking challenges. We talked a little bit about the legacy component, right? The traditional load balancers or ADCs that are out there: proprietary hardware, very inflexible, static, and sometimes very difficult to move into a multi-cloud deployment model. That's one side of the spectrum. On the other side are the lightweight proxies, traditionally deployed in an open-source format. The downfall with those is that they're not fully featured, and that's a huge issue. They might be providing services for that one particular application, but that's it. You don't necessarily have the wide range of application services that you're used to in a traditional ADC. There's a lot of do-it-yourself; it can take quite a bit of operational effort to get these things up and running or working with other third-party tools.
And lastly, and probably the most important for this conversation, is that you oftentimes will have different solutions for north-south and east-west services. I'm going to touch on that a little bit more.

So, container networking requirements, specifically for OpenShift. These are the things that we're here to address when we deploy in an OpenShift environment. Service discovery is a big one: we can automatically go out and discover all the services that are already running in the OpenShift master and pull them into the Avi configuration. We also have global and local application services, so as you look to deploy between different data centers, different cloud environments, different theaters or regions, we can do all of that in one seamless data flow. Application maps: there's no way for me to explain this, so we're going to show it to you in the demo. It's really slick and cool, and it will allow you to manipulate the entire OpenShift environment. Application performance monitoring: again, it's that insight into both the north-south, ingress traffic, as well as the east-west, and we're going to show you that as well. And then security micro-segmentation: when you're talking about true application service discovery and segmentation for security services, that's all built in and available through Avi.

All right, so the first very common deployment for any type of OpenShift or container environment is going to be that ingress, north-south traffic. You have an end-user client coming in. They issue a DNS query, we respond with the appropriate address, and we get them to whatever node or pod that service is living in. We'll load balance, we can do some SSL offload, and we'll apply those application services that you're traditionally used to, all from an ingress, north-south standpoint. I think that's a pretty common, pretty well-known use case.
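The north-south flow just described, answering a DNS query with a virtual IP and then load balancing across the healthy pods behind it, can be sketched as follows. All names and addresses here are made up for the example; this models the idea, not Avi's implementation:

```python
# Illustrative sketch of the ingress (north-south) flow: resolve the service
# name to a virtual IP, then round-robin new connections across the pods
# that passed their health checks. Addresses are made up for the example.
import itertools

SERVICES = {
    # service FQDN -> (virtual IP, pod endpoints)
    "shop.example.com": ("10.0.0.100", ["10.128.0.5", "10.128.1.7", "10.128.2.9"]),
}

def dns_answer(fqdn):
    """Resolve a service FQDN to the virtual IP the load balancer listens on."""
    return SERVICES[fqdn][0]

def make_balancer(fqdn, healthy):
    """Round-robin over only the pods that passed their health checks."""
    pods = [p for p in SERVICES[fqdn][1] if p in healthy]
    cycle = itertools.cycle(pods)
    return lambda: next(cycle)

pick = make_balancer("shop.example.com", healthy={"10.128.0.5", "10.128.2.9"})
print(dns_answer("shop.example.com"))   # prints 10.0.0.100
print([pick() for _ in range(4)])       # alternates between the two healthy pods
```

Note that the unhealthy pod never receives a connection; that is the health-check step the demo later waits on before sending traffic.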
The second, then, which I touched on earlier, is the east-west part. How do we do the exact same thing as far as delivering those application services, only within the OpenShift cluster? Oftentimes it's not just an end user coming into the environment from the outside, right? It's one service connecting to another service. We're going to provide the exact same level of ADC services: the load balancing, the caching, compression, SSL, whatever you need from an ADC standpoint, we're now going to provide that between services inside the cluster. Even if it's a service in one cluster going across to a different cloud and a different cluster, we can provide that same level of security, visibility, service, et cetera.

Okay, I just want to touch on two quick use cases and then we're going to turn it over to the demo. A very common deployment scenario is testing with blue-green app deployments. What we can do is create two strands, two flavors, of the same application. If we're running everything on version one, the blue deployment, we can take all those sessions and gracefully bleed them off. We're not going to kill them; we'll bleed them off, so the connections into the blue deployment remain until they gracefully die. In the meantime, we can stand up the green app and start sending net-new connections over to that application, testing new features, new services, new configurations, whatever you might want to introduce into that application along the way. We'll do that gracefully, and we can flip back and forth the entire time, over that whole life cycle.

The second use case, which is another pretty common one but can often be dismissed or forgotten about, is the fact that a lot of times the traffic will initiate from inside of the cluster and try to reach a service that is outside of the cluster. In this case, we have a database that's living outside of the cluster, and the traffic is initiated inside.
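Returning for a moment to the blue-green pattern above: the key behavior is that existing sessions stay pinned to the version they started on and bleed off naturally, while net-new connections land on whichever version is currently active. A minimal sketch of that behavior, with illustrative names rather than Avi configuration objects:

```python
# Minimal model of a graceful blue-green cutover: flipping the active
# version only affects net-new connections; existing sessions stay pinned
# to their original version until they expire. Names are illustrative.

class BlueGreenRouter:
    def __init__(self):
        self.active = "blue"    # where net-new connections land
        self.sessions = {}      # session id -> pinned version

    def flip(self, version):
        """Cut net-new traffic over; existing sessions are not killed."""
        self.active = version

    def route(self, session_id):
        # Existing sessions keep their original version; new ones get active.
        return self.sessions.setdefault(session_id, self.active)

    def end_session(self, session_id):
        """A session gracefully dying, i.e. bleeding off."""
        self.sessions.pop(session_id, None)

r = BlueGreenRouter()
r.route("alice")          # alice lands on blue (version one)
r.flip("green")           # cut net-new traffic over to green
print(r.route("alice"))   # prints blue  (existing session bleeds off on v1)
print(r.route("bob"))     # prints green (net-new connection hits v2)
```

Flipping back and forth over the life cycle, as described above, is just repeated calls to `flip`; sessions in flight are never disturbed.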
We need to be able to protect that database with a firewall of some kind. Firewalls traditionally are looking for one source IP address, and we can do what is essentially a reverse proxy, or source NAT, through Avi: we present the firewall with a single source address, get the response from the database, and provide it back to whatever service inside of the cluster called for it. Again, that's just another common use case of Avi living inside of that OpenShift cluster.

Okay, so we're going to step over to the demo now, and I just want to highlight the four steps that we're going to go through. Number one is that we configure an OpenShift cloud, and to us that's just an integration connection point to the OpenShift master, the control bus in this case. This step has already been done; it takes roughly 30 seconds to set up from the beginning, and I didn't want to have to do it live here, so that piece is already done. What we are going to show you is how we bring up the Avi Controller, how it identifies all of the different nodes inside of the OpenShift cluster, and how we deploy a service engine per node that's already in existence there. Step three is that we'll deploy an application, and step four is that we'll start sending traffic to that application, bring up the application map, and show you how cool that is.

So let's see if this works here. All right, on the left-hand side we have a simple automated script that's being delivered via a web portal, and that first step there, setting up the OpenShift cloud, is what I mentioned was already done. That part has been configured already, as you can see, and we now have the service engines up and running. What that means is that our controller has created this cloud, gone off, and talked to the OpenShift bus.
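The egress case above, where the firewall in front of the database should only ever see a single source address, comes down to source NAT with a mapping table for the return traffic. A minimal sketch with made-up addresses:

```python
# Minimal sketch of the source-NAT (reverse proxy) egress case: many pod
# IPs are rewritten to one SNAT address so the external firewall only has
# to allow a single source. All addresses here are made up.
import itertools

SNAT_IP = "192.168.50.10"

class SourceNat:
    def __init__(self):
        self._ports = itertools.count(10000)   # next free SNAT source port
        self._table = {}                       # (snat ip, port) -> (pod ip, port)

    def outbound(self, pod_ip, pod_port):
        """Rewrite an outbound flow; the firewall sees only SNAT_IP."""
        snat_port = next(self._ports)
        self._table[(SNAT_IP, snat_port)] = (pod_ip, pod_port)
        return SNAT_IP, snat_port

    def inbound(self, snat_ip, snat_port):
        """Map the database's reply back to the originating pod."""
        return self._table[(snat_ip, snat_port)]

nat = SourceNat()
src = nat.outbound("10.128.1.7", 43512)   # a pod calls the external database
print(src[0])                              # prints 192.168.50.10
print(nat.inbound(*src))                   # the reply maps back to the pod
```

The firewall rule then only needs to allow `192.168.50.10`, no matter how many pods, or which pods, are calling out to the database.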
It has decided that there are three OpenShift nodes out there in existence, and we've deployed a service engine, that software-defined load balancer, inside of each of those nodes. So this is living natively inside of the OpenShift cluster right now. Along with that, we are taking all of the projects that you can see listed here inside of OpenShift and associating each of those projects with a tenant inside of Avi. Tenants, to us, are just a way to create a multi-tenant environment, so you can create projects inside of OpenShift that equate to tenants inside of Avi.

Okay, we don't have any virtual services running yet. We're going to click on configure DNS VS and it's going to create the first one right now. It'll take a few seconds, but again, by running a simple automated script, we're initiating an action on the OpenShift side that's going to automatically pull into the Avi configuration. And there it is: it's up and running and live. Now we're going to start our applications. This is going to take all the OpenShift apps that are out there and deploy them into the Avi configuration, and we'll start to see these get populated now. They're getting created on the OpenShift side, they're automatically pulled into the Avi configuration as well, and we'll start to see them populate here. There they go; they're starting to fill in. We're waiting for the health checks to complete, because we want to make sure they're up and running successfully, and as soon as they are, we'll generate some traffic and start putting load on them.

Now, what Bhushan just did is change from a list view to a map view, and this is going to show the application map that I referred to in the slides. Once we start generating traffic, you'll start to see this map build up here in the UI.
So this is now taking into consideration all of the segmented traffic communication that's happening inside of the cluster. We can map out exactly which service is talking to which service, and when we dig into it a little bit, we can show exactly which metric we want to look at. Right now we're looking at latency; we can also take a look at throughput, or at errors. This is the visibility that we were talking about: we know exactly what's happening between each of these services, whether already deployed or created live, and we can now map out exactly which service should be talking to which service and secure that. So before we throw this into a production environment, we can lock down the communication paths that we know should be allowed to talk to each other, throw it into production, and not have to worry about security breaches within the cluster itself. We can do that by whitelisting. Right now we've pulled up a whitelist and we've secured catalog so that only marketing and inventory can talk to it; those two services are what can talk to catalog in this particular case. Once we deploy that into production, that's it: it's locked down, it's secure.

Now we're going to go into the analytics screen, and I'm just going to do a quick time check. So, the analytics of this application. This is the exact same level of visibility that you get for every single virtual service that you configure inside of Avi. It could be a bare-metal server running in a data center, an OpenShift node, an AWS instance, a GCP instance; it doesn't matter. We give incredible insight and data analytics into every virtual service that's running. It's incredibly easy to troubleshoot because of that visibility, and it helps you set development priorities as well, because you can now trend the usage of any application that you have deployed.
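The catalog whitelist shown in the demo boils down to a default-deny policy with an explicit allow list of (source, destination) pairs. A minimal sketch using the service names from the demo:

```python
# Minimal sketch of the micro-segmentation whitelist from the demo:
# default-deny, with an explicit allow list of directed service pairs.
# Only marketing and inventory may talk to catalog, matching the demo.

ALLOWED = {
    ("marketing", "catalog"),
    ("inventory", "catalog"),
}

def allowed(src, dst):
    """Return True only for explicitly whitelisted service-to-service flows."""
    return (src, dst) in ALLOWED

print(allowed("marketing", "catalog"))   # prints True
print(allowed("billing", "catalog"))     # prints False (not whitelisted)
print(allowed("catalog", "marketing"))   # prints False (direction matters)
```

Because the application map shows exactly which services actually talk to each other, the allow list can be derived from observed traffic before the policy is enforced in production.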
So currently we're looking at the application analytics dashboard, at the last 15 minutes of traffic, but we're going to switch over to logs. We have two ways to look at logs: we have both significant and non-significant logging capabilities, and here you start to see the examples. If we click on one of these, what you see is really cool, because we can show you exactly the communication path, the data path, between the end user and the back-end server. We know exactly how long each segment takes: the WAN connection, perhaps; the local network, which is the server RTT; the app response, so we know exactly what the communication and latency look like inside of the application; and then the data transfer. If that application has to go back and call a database, we know how long that takes. Each of those segments is stitched together, and we present it to you end to end as a total round-trip time. In this case it was extremely quick, two milliseconds for the entire transaction. But there will be others where we can pinpoint exactly where there may have been a delay.

So that's really it. I appreciate everybody listening in, and have a great afternoon. Thank you.
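The per-segment timing breakdown shown in the log view can be modeled simply: the total round-trip time is the stitched-together sum of each segment along the data path. The segment names and millisecond values below are made up for the example:

```python
# Minimal model of the per-segment timing from the log view: total
# round-trip time is the sum of each segment along the data path.
# Segment values (in milliseconds) are made up for the example.

def total_rtt_ms(segments):
    """Stitch the individual path segments into one end-to-end figure."""
    return sum(segments.values())

txn = {
    "client_wan_rtt": 0.9,   # end user across the WAN
    "server_rtt": 0.3,       # local network to the back-end server
    "app_response": 0.6,     # time spent inside the application itself
    "data_transfer": 0.2,    # moving the response payload
}

print(round(total_rtt_ms(txn), 1))   # prints 2.0 (a quick 2 ms transaction)
```

Breaking the total out this way is what lets you pinpoint which segment, WAN, local network, application, or data transfer, is responsible when a transaction is slow.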