Hello. Yes. Good afternoon, everyone. Welcome to Avi Networks' demo theater presentation on an OpenStack private cloud case study with Time Warner Cable. This is Jason Rawl from Time Warner. Thanks, Jason, for joining us. Thanks for having me. My name is Ashisha, and what we'll do during the presentation is talk about how Time Warner has deployed their OpenStack private cloud, talk about how they're using Avi, and then we'll do a live demo. Shall we get started, Jason? Let's get started. All right. So, Jason, you have a big OpenStack private cloud deployment, multi-data center. Can you spend a little bit of time talking about what it is? Sure, absolutely. So it is a private cloud. We're deployed in two data centers, and they're set up as regions. We are, of course, using OpenStack, but we're not using any particular distribution. We do our own CI/CD, and we're on Kilo for some services. Some services are on Liberty, and a couple are actually on trunk. So we have the luxury of being able to do that. Besides our deployment topology, we have a centralized DevOps team, many of whom are here today. They do engineering on OpenStack, as well as operate and support our cloud. And then lastly, we have our customers, who are internal businesses developing applications that are either internally focused or externally focused toward our subscribers. And maybe I should back up a little bit. Time Warner Cable is a cable company: we provide video services, broadband, telephone, that type of thing in the U.S. So a number of types of workloads run in this environment, everything from web properties to video platform back-end systems for providing video over IP, and those types of things. Sounds good, Jason. Thank you. So we've been working together for almost a year or more now. As you went live with your OpenStack deployment, what were some of the load balancing challenges? First of all, what were some of the requirements that you had for your OpenStack cloud? Sure.
Well, first off, we definitely needed everything to be self-service. We're trying to enable our internal application development teams. They needed the ability to move very quickly. They needed flexibility. And so whatever we could do to provide self-service capabilities was very important. Additionally, we needed some enterprise features. We needed load balancing to be highly available. We needed to do things like SSL offloading and provide multi-tenancy. We needed integration with Keystone; we needed Keystone roles and users to apply to load balancing just like they do to the rest of the OpenStack services. Makes sense. We were also very interested in tenant isolation. We wanted load balancers to reside within our tenants' projects for a number of reasons: there were some security concerns, and we wanted the ability to meter the usage and tie it to that particular project. And lastly, as far as deploying and managing the load balancing environment itself, from an operator perspective, we needed the ability to automate that. We have a lot of CI/CD tooling in place for the rest of the OpenStack services, and we needed that to carry forward to our load balancing service. Fair enough. That makes sense. So as you evaluated different solutions, what were some of the challenges that you faced, whether with hardware load balancers, open source, or software load balancers? So we did look at a number of solutions. From a hardware load balancing perspective, we found those to be a little too inflexible and a little difficult to integrate with. And in many cases, you're still working in a world where people are opening tickets to get VIPs created, and we wanted self-service capabilities. From an open source perspective, there are some good things out there, but when it comes to providing the HA capabilities, those enterprise features, they're a little bit lacking.
On the virtual load balancing side, it got to be very complex when you're trying to provision the VIPs and the monitors and things like that. And then, looking across all the solutions, really, our customers lacked visibility into what was going on. It's very important that the users of our cloud have visibility into how their application is running and how it's configured, and we didn't want load balancing to be kind of a black box. Makes sense. So let me spend a minute or so on Avi's load balancing solution for this audience. Avi has built a next-gen software load balancer with built-in analytics. It's a 100% software solution: a distributed load balancer with a centralized control plane, so a single point of control, management, and automation, and it's 100% REST API. Every feature, every object in Avi is available through the REST API, and that was one of the biggest requirements that Jason had, because it allows self-service provisioning for the app teams. It's fully integrated with the OpenStack services. It talks to Nova to spin up these micro load balancers, or service engines, on demand in the tenant context. It talks to Keystone for tenancy and for user roles. It spins them up on demand, scales them on demand, and talks to Neutron for plumbing them into the right network. And then, working with Time Warner Cable, we added a couple of capabilities that were missing in Horizon. Avi supported SSL and analytics from day one, but working together, what you'll see is that we've built into the Horizon dashboard the ability to manage SSL certificates and application performance monitoring and visibility. So again, you've been a great partner in terms of enhancing the solutions within Horizon. The other key feature of Avi is elastic scale. As you add more tenants or applications, or as the traffic to an individual VIP grows, Avi can automatically scale out the load balancer and the backend applications.
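To make the self-service, API-driven provisioning idea concrete, here is a rough Python sketch of the kind of payload an app team's tooling might assemble and POST to a controller's REST API to create a VIP with SSL offload. The field names, structure, and addresses below are illustrative assumptions for this write-up, not Avi's documented API schema.

```python
import json

def build_virtual_service(name, tenant, vip_address, pool_servers):
    """Assemble a hypothetical 'create virtual service' payload.

    Every field name here is illustrative; a real controller API
    defines its own schema.
    """
    return {
        "name": name,
        "tenant": tenant,  # Keystone project the VIP lives in
        "vip": {"address": vip_address},
        "services": [{"port": 443, "enable_ssl": True}],  # SSL offload at the VIP
        "pool": {
            "name": name + "-pool",
            "servers": [{"ip": ip, "port": 80} for ip in pool_servers],
        },
    }

payload = build_virtual_service(
    "web-vs", "colo-demo", "10.10.0.50", ["10.10.1.11", "10.10.1.12"]
)
body = json.dumps(payload)  # what the tooling would POST to the controller
```

Because every object is reachable through the API this way, the same call can be scripted from CI/CD pipelines rather than filed as a ticket.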
So think of it as an Amazon ELB-like operational experience with a full-featured load balancer, and a drop-in replacement for HAProxy. That's what Avi is. Now, talking about application monitoring, I think you said it's very important because your app developers cannot be in a black box. As they make a self-service change to a load balancing policy, they should be able to see what's happening right there. Can you talk a little bit about how we've integrated application monitoring with Horizon? Sure. So here you see an example of the load balancing panel in Horizon, and it's been augmented with a couple of things. One is there's a new tab up there called Security, and that's where you would import your certificates for SSL offloading. But most important is this screen we're seeing here, where you have visibility into the analytics and the logging that are happening in the load balancer, as well as any potential latency that might be occurring in the system. Why this is important is because, if an application is having problems, the application owner has a hard time understanding whether it is something in the network, the load balancer, their application, or something on the backend. And this gives them the immediate ability to go look and see where a problem might be and rule out, for example, the load balancer, without having to call an IT team to come in and help out. Makes sense. Perfect self-service operations. So we'll do a live demo in a minute, but before we do that, can you talk about the benefits that you've derived, for both your CloudOps team as well as the individual application teams? Sure, sure. So for my side, the DevOps team that we have here at Time Warner, we've got the self-service capabilities. So really, you don't have tickets that you need to respond to just to set up VIPs and things like that. The elastic scale is pretty important.
So not only can the customers easily provision their own load balancers, but the system is smart enough to scale in and out as demand requires. And then the integration of the analytics that we showed you earlier provides that visibility into the system. And really, this is all about accelerating our customers' ability to build applications, and their adoption of OpenStack is really based upon those self-service capabilities. From the application teams' perspective, they can get up and running really quickly, so obviously that's important to them. They used to wait sometimes weeks: they'd put in a request for a VIP, and they might wait weeks. So that's gone. And now troubleshooting is very easy to do with the visibility that Avi provides. Sounds good. Thank you, Jason. So let's go into the demo. By the way, the solution is available for free download, so you can go to our website and download it for free at avinetworks.com. So let me move to the demo, and I hope the Wi-Fi gods are with me, because as you all know, we have had problems with Wi-Fi. So first of all, this is the dashboard that you saw, where I am in a specific tenant called Colo demo. In this case, I have three VIPs configured. And as you can see here, in addition to the pool, the members, and the monitors, I have two extra tabs. The first is the certificate tab. This is the tab that you can use to upload a new certificate, so you can add a certificate, for example. And then once you do that, you can go ahead and associate it; in this case, the certificate is already associated, so you can disassociate your certificate. So basically we have enhanced Horizon to be able to manage your certificates centrally from the dashboard. The other tab that we have added is the load balancer tab. Let me see if my other browser works well. Let me just reload that. So while this is coming up, let me familiarize you with our dashboard to begin with.
So you can manage it through Horizon, by the way, or you can manage it through the Avi UI. Let me log into the Avi UI right there. And this is still coming up, so I'm going to just use this. Now, as you said, it's fully multi-tenant. So whatever tenants are available in Horizon are also available here on the Avi controller. We saw there were three VIPs configured there; those are the three VIPs that are visible here, right? And the first thing you see is a color-coded health score. So unlike some of the other load balancers, where all you know is whether it's up or down, we have a full color-coded health score, which consists of application performance, including end user experience. It accounts for resource utilization: if my resources are running hot, whether it's the load balancer's resources or the backend application's resources, it's going to ding the health score. We have a big data analytics engine, so it's constantly baselining the performance, and if you see a sudden spike in latency or a sudden dip in throughput, we're going to call that out in the anomaly penalty. And finally, a security score. We have a full-fledged SSL offload solution with insights into security. So if you have any misconfiguration, for example a self-signed certificate or a certificate that's expiring soon, or if there's a DDoS attack going on, we'll ding the score. So let me dig into a specific application. The first thing you see is the end-to-end latency diagram. Now, this is very important. Without having any agents, without a monitoring fabric, we'll tell you what the end-to-end latency is. For example, over the last six hours, as you can see here, the latency from your clients to the load balancer, which sits in the data center, is just about 57 milliseconds. The latency from my load balancer to the backend server, again, this is the network latency, is 10 milliseconds.
The application is taking under half a millisecond to respond, and the data transfer time is 28 milliseconds. As Jason said earlier, from an application developer's point of view, this is an important tool, because otherwise, what happens today is that if there is a problem, there are five different teams, from server to network to infrastructure to applications, who get on a call and finger-point at each other: no, it's not my problem; no, it's not my problem. This is a clear troubleshooting tool that tells you, if there is a problem, where it is, whether it's the app or the network or the server or anywhere else. Now, beyond that, we've also added capabilities to do real-time log searches. In this case, for example, I'm showing you all the logs that Avi is capturing. The first thing you see is a Google-like search engine. I can basically search for whatever I want. For example, I can say, show me only the transactions whose location is the US. So I do a quick search, and it's only going to show that location, for example, right? And then I can dig into an individual transaction and say, in this case, this particular transaction came from a computer running Internet Explorer and took under a millisecond, which means it must be a local client. It took about 11 milliseconds to hit the backend server, the application didn't take any time, and so on. So individual transactions are logged, and you can search on whatever you want. For example, you can get a feel for which browsers from the US are accessing this application; in this case, I see four of them. Or you can look into the end-to-end latency diagram and figure out the overall histogram for the US clients, the breakdown of the latency. So Jason, before we go further, how are your application teams today using these analytics that Avi's built in? Well, as I said earlier, it provides the insight into where a problem might reside, which is particularly important.
It also helps the teams tune their application, right? There may be latency that's perceived, and they think it's coming from the network, for example, and actually they find out that it's their application and the backend members, so they can focus on performance tuning from that perspective. Makes sense. So in addition to some of the logs that we saw, let's talk about security. As Apple has mandated, perfect forward secrecy is mandatory for all iOS applications. So if you're running an application behind a load balancer, it's important to know what type of client profiles you have. In this case, for example: are your clients using perfect forward secrecy or not? Are they using an RSA certificate or an EC certificate? What is my TLS and SSL version breakdown, whether it's TLS 1.0, 1.1, and so on? We also give you direct feedback on your security configuration, whether it's ideal or not. For example, if you have a weak SSL cipher configured, or if you're using a self-signed certificate, we appropriately adjust the score. Or if you have a certificate that's expiring anytime soon, we appropriately adjust the score. We give you as much visibility and analytics into application performance as we can, in addition to doing load balancing. Any questions for either Jason or me from the audience? I can show more unless there are more questions. No? All right. So some of the other things, actually not on this one, let me show you another application where... So this is another view, right? Today, when you look at a load balancer, you don't get this kind of visibility into which are the pool members and what their health is. In this case, for example, I have one particular load balancer which has one pool member down. Not only do we indicate that by color, we also tell you why it's down. In this case, it's down per the HTTP monitor, and the ARP is not resolved.
So it's very easy to, again, troubleshoot your application performance in a self-service fashion, because all of this is fully synchronized, in terms of tenancy, with your Keystone. So you don't have to have a separate system for monitoring, nor a separate system for identity management. All kinds of... Question, DP? Yes. So let me repeat the question. The question is, if there is a failure, are there any alert systems or notification systems? The answer is absolutely. All of that is available here. You can configure alerts and notifications. You can do things like auto-scale the infrastructure. So, for example, you know that your backend resources are running hot. You can create custom rules that say: if my CPU load goes above 80%, spin up an extra load balancer instance, or call out to a backend orchestration system, like a Heat resource, and scale out the application VMs. Absolutely. Any more questions? All right. So thank you, Jason, for talking to us and being a great partner. And we'll look forward to more and more deployments. All right, thank you. Thank you, everyone.
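As a closing footnote on the auto-scale discussion above: stripped to its essence, such a custom rule is just a metric threshold mapped to an orchestration action. The function and action names in this Python sketch are invented for illustration and are not Avi's actual alert or rule configuration.

```python
def evaluate_scale_rule(cpu_load_percent, threshold=80.0):
    """Map a back-end CPU reading to an orchestration action.

    Mirrors the rule described in the talk: above 80% CPU, scale out
    (e.g. hand off to a Heat resource to add a VM); otherwise do nothing.
    """
    if cpu_load_percent > threshold:
        return "scale_out"  # spin up an extra load balancer instance / app VM
    return "no_action"

print(evaluate_scale_rule(85.0))  # scale_out
print(evaluate_scale_rule(40.0))  # no_action
```

In a real deployment the reading would come from the controller's metrics pipeline and the action would be delivered as a notification or webhook rather than a return value.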