All right, thank you for coming. I hope you can hear me loud and clear. Today's talk is about Kubernetes in hybrid environments, and we have quite an ambitious plan, because we want to walk you through a few open source projects that we are working on for Kubernetes and Istio. And I feel very brave today, so I'm also planning to do a live demo. We'll see how that goes.

Before we start, can we have a quick show of hands to see how many of you are familiar with F5 Networks and F5 BIG-IP? Just a quick show of hands. Wow, OK. Pretty much everybody, that's good.

So, multi-cloud. People mean different things when they talk about multi-cloud solutions, and we have a huge percentage of customers interested in that and developing multi-cloud strategies. Most of the time, people refer to multiple public clouds, but many times people also include private cloud environments and SaaS solutions. For today's purposes, we'll go with that second definition. There are many different reasons you would choose each kind of environment. For example, with private cloud, on-prem or in a colocation facility, typical reasons would be enhanced security, or a customer's need for layer 2 and layer 3 networking controls, as an example. And as I mentioned, a huge percentage of companies, 85%, have a multi-cloud strategy. Containers, of course, help a lot with that, because they are portable across these kinds of environments, both public and private cloud. You also have quite a large variety of development environments and platform-as-a-service environments, which we are briefly going to cover, including OpenShift and Cloud Foundry. But today is mostly about Kubernetes and some of our projects in that space.

So what are the drivers for multi-cloud adoption? It can be buyer leverage for different departments and teams. It can be increased availability and disaster recovery. It can be many things. And it turns out that it varies quite a bit depending on company size. As you can see from this graph, looking at the red parts, optimizing SLAs is a big thing for smaller companies, under 100 employees, but they are a bit less concerned with buyer leverage. It's quite the opposite for large companies, 10,000 employees or more: they are very concerned about buyer leverage, and proportionally a bit less about optimizing SLAs.

Now, the challenges. We are going to show some of the common deployments that we see with customers today when they move workloads to various clouds and run Kubernetes in offerings such as Google Container Engine and Azure Container Service, or in Amazon. One of the common problems is that the public cloud providers each came with different feature sets and different services, so the investments you make into that kind of integration are not really portable across environments. You also very often end up with inconsistent policies between these different environments, and that, of course, can create security vulnerabilities.

So one of the typical deployments that we see is F5 devices deployed in a colocation facility, because there you have good latency and access to all the different clouds: to Google, to Azure, to Amazon, and to Oracle as well. That's something we see more and more, and this kind of deployment also allows you to centralize your security policies.
It also allows you to replicate your app services across environments, so you make your workloads really portable and ensure that you apply the same security policies to them. Some of the benefits I just mentioned are right there on the slide as well. Again, what it looks like is an F5 sitting in a colocation or interconnection facility, which can be Equinix or something similar, or in a data center. This kind of deployment allows you to centralize your traffic management, but also your security policies.

Specifically related to Kubernetes, one of the open source projects we are going to talk about today is called the F5 Container Connector. Regardless of whether you deploy your Kubernetes workloads on-prem, or in Amazon, or in Google Cloud, or in Azure Container Service, the F5 Container Connector runs natively in Kubernetes, actively monitors the different services that are running, and automatically creates the necessary configuration on the F5 devices sitting, for example, in a colocation facility or in your data center. It dynamically populates everything on the F5 devices, so even though you have Kubernetes workloads deployed in different clouds, you can make sure that the exact same traffic management, security, or DDoS policy is always applied to them. In more technical terms, the F5 Container Connector runs as an ingress controller, and the F5 BIG-IP in this deployment is the Kubernetes ingress point.

So this solution addresses ingress into the Kubernetes environment. However, we also want to address traffic between the different microservices themselves. That's another project I'm going to mention now, and I'm going to do a quick demo as well. It's called Aspen Mesh. It's basically an F5-supported dashboard with Istio at its core. On top of everything Istio offers, such as Grafana and Jaeger to look at your application traces, which we are going to see in the demo, we have another component that is part of our Aspen Mesh solution called istio-vet. It's an open source project, and it basically allows you to identify misconfigurations between user deployments in Istio and the Istio components themselves. You can identify all kinds of version mismatches, and it can also do predictive analysis based on what happens with other deployments and other customers. Basically, it helps you avoid making mistakes that can prove costly.

So, are you ready for a quick demo of Aspen Mesh? Great. My Kubernetes clusters are deployed in Google Container Engine, but as we said, it shouldn't really matter for the solution. I'm going to switch now; hopefully, you can see my screen. This is what it looks like. I have deployed a very basic app, the Bookinfo application, in Istio. What you see right now is my Aspen Mesh dashboard. I can see the various components of my app, but I do not have any requests yet. So I'm going to go ahead and generate some traffic to that Bookinfo application, and we will see in a second... there, you can see I already start to get some requests. If I go now and look at Grafana, for example... sorry... all right. You can see a global success rate of 100% and no 500 errors, and you get all kinds of indicators about how the deployment is doing. I'm sending traffic specifically to the product page, so I can see exactly what is happening there.
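In case you want to reproduce this at home, the traffic generation is nothing fancy. Here is a minimal sketch in Python, assuming Bookinfo is exposed through the Istio ingress gateway; the GATEWAY address is a placeholder you would replace with your own:

```python
# Minimal Bookinfo traffic generator -- an illustrative sketch, not part of
# Aspen Mesh. GATEWAY is a placeholder for your own ingress gateway address.
import time

import requests

GATEWAY = "http://<ingress-gateway-ip>"  # hypothetical; substitute yours

while True:
    resp = requests.get(f"{GATEWAY}/productpage", timeout=5)
    print(resp.status_code)  # the 200s here drive the success rate in Grafana
    time.sleep(0.5)          # ~2 requests/second is plenty for the dashboards
```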
I only get 200 responses, so that's all nice and good. And I can do more than that, of course. Also from the Aspen Mesh dashboard, I can go to Jaeger and look at the actual traces from my app... hopefully the internet is working... it is. The service I selected here is the product page, because that's what I'm sending traffic to. You can see that it's actually working, and I get all kinds of useful information, so I can go and explore exactly what happens in my environment. Again, this is the app we just sent traffic to. A very basic app, but hopefully it helps make the point.

We have our entire product development team here, so I think we can open for questions a bit early, and we'll be happy to talk to you afterwards if these projects are of interest. I also included, at the end of the slides, links to all of the projects I mentioned. For istio-vet, we would very much welcome contributions. Right now there are a number of what we call vetters. For example, mesh-version is one that can detect mismatches between the versions of your user deployments and the Istio components. There are quite a few other interesting ones, and of course you can contribute yours as well. So I think we can open for questions, if you'd like.

Yeah, yeah. And again, we have the entire team here; they can add more to it. So basically, it works by monitoring the Kubernetes services and creating the entire configuration of virtual servers and pools on the F5 device. Correct, correct. The Container Connector talks using REST APIs to BIG-IPs sitting somewhere. Again, it can be Equinix, it can be your data center, or it can be F5 BIG-IPs sitting somewhere in the cloud, provided that the Container Connector can reach the management interface using REST APIs. It creates all of this dynamically, and it can work in two ways. Very often, we end up with the pool on the BIG-IP being created with NodePort. But you can also, and we have done a lot of testing with Calico, for example, set up BGP peering between the F5 BIG-IP devices and the Kubernetes nodes. In that scenario, the pool on the F5 device will actually contain the actual pod IP addresses, so you get more visibility there. Exactly, exactly. And we rely on the Kubernetes health monitors, so you are never in a situation where a pod that is not actually healthy and ready to receive traffic is considered good by the F5. You don't have that problem. Did I answer your question?
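To make that answer concrete, here is a rough sketch of the watch-and-configure pattern for the NodePort case. This is illustrative only, not the Container Connector's actual code; the BIG-IP management address and credentials are hypothetical placeholders, and a real deployment also handles updates, deletions, partitions, and the virtual server configuration.

```python
# Sketch of a watch-and-configure loop, NOT the Container Connector's code.
# Assumes the NodePort mode discussed above; BIGIP_MGMT and AUTH are
# hypothetical placeholders.
import requests
from kubernetes import client, config, watch

BIGIP_MGMT = "https://192.0.2.10"   # placeholder BIG-IP management address
AUTH = ("admin", "admin")           # placeholder credentials

config.load_incluster_config()      # the connector runs as a pod in-cluster
v1 = client.CoreV1Api()

for event in watch.Watch().stream(v1.list_service_for_all_namespaces):
    svc = event["object"]
    if event["type"] != "ADDED" or svc.spec.type != "NodePort":
        continue
    node_port = svc.spec.ports[0].node_port
    # In NodePort mode every node proxies the service, so each node's
    # internal IP becomes a pool member on the BIG-IP.
    node_ips = [addr.address
                for node in v1.list_node().items
                for addr in node.status.addresses
                if addr.type == "InternalIP"]
    pool = {
        "name": f"{svc.metadata.namespace}_{svc.metadata.name}",
        "monitor": "http",
        "members": [{"name": f"{ip}:{node_port}"} for ip in node_ips],
    }
    # iControl REST call that creates the LTM pool; a matching virtual
    # server would be created the same way under /mgmt/tm/ltm/virtual.
    resp = requests.post(f"{BIGIP_MGMT}/mgmt/tm/ltm/pool",
                         json=pool, auth=AUTH, verify=False)
    resp.raise_for_status()
```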
Yeah, I mean, we can talk. So you mean connectivity for the Container Connector? Yeah, if you go with Equinix, let's say, you have two options. You can have direct connectivity to the different clouds, and we tested that; it works great. You have Google, you have Oracle, you have Azure, you have Amazon. You can use that, but you don't really have to. For example, for your development or test workloads, even if you have your BIG-IP in Equinix, nothing stops you from establishing an IPsec VPN and routing some of your traffic over it to Google Cloud or to Azure or Amazon, to save some money. There is a lot of automation to establish VPN tunnels, and there are quite a few good projects showing how to do that on GitHub. Which public cloud provider are you interested in? Yeah, OK. I'm going to show you right away; it's very easy to actually Google for it. If you search for F5 IPsec, there is quite a bit of automation available. This one is a pretty good example, and it really does everything for you, including availability for the BIG-IPs. So I would encourage you to use that rather than doing it manually, although of course you can. Does that answer your question?

So the role is this: the BIG-IP will be the ingress point, regardless of whether your apps are deployed on-prem, or in Kubernetes running in a managed Kubernetes environment, or anywhere else. Traffic to these environments goes through the BIG-IPs, so you can apply all kinds of traffic management and optimization, DDoS protection, or web application firewall protection. You can use any F5 features you are interested in, regardless of whether your workload is running in a public cloud, in Kubernetes on-prem, or in Kubernetes in a managed environment such as Google Container Engine. It allows you to consolidate your policies in this way and use whichever F5 features are of interest to you. Again, it can be web application firewall, DDoS protection, or any kind of feature you are interested in.

No, so it depends. If you are using direct connectivity, for example from Equinix, you will have layer 2 connectivity directly to the cloud. If not, you can use an IPsec VPN to establish connectivity between the BIG-IP and, say, a Google VPN gateway, something like that.

Any other questions? OK, I think that's it. Thank you very much. And don't forget to talk to us. Thank you.