All right. Sorry for the delay. We'll get started here. This is a demonstration of Istio and the Open Service Broker API. I may have said open source on the website, and that's because I had a typo. Some things we're going to talk about: a bit generally about us, a few Istio basics, as well as the Open Service Broker basics. We have a live demo, we'll talk about some of the ways we want to enhance this demo and collaborate with the Istio community, as well as Cloud Foundry, to open up the scope of it, and we'll take questions. So this is me. I just got married a few weeks ago in Vegas. If you could please tweet at my wife on Twitter and thank her for allowing me to come here, that'd be great. That's at be willies. I work full time on Kubernetes mostly, though I also balance my time with Istio and other Cloud Native Computing Foundation projects. Don't forget the underscore on there. Oh, yes, underscore. cmluciano, without the underscore, is a Canadian resident that owns a junk removal company, a lot more successful than I am. So remember the underscore. Hi, I'm Morgan. I'm the horse guy on GitHub. And I work on Kubernetes now, but I was working on Docker before, always at IBM: Docker maintainer and Kubernetes member. My main work nowadays is the service catalog in Kubernetes, which is an implementation of the Open Service Broker on the platform side. I also have a Twitter, and it's there. And cool, move on. OK, so a little bit about Istio. You may have heard mesh networks thrown around; this is essentially an implementation of a mesh network. There have been a few talks that went over some of the basics of this. The goal is really to be some sort of a drop-in replacement that gains you functionality you would otherwise have to build into client-side libraries. So Istio handles this with a sidecar-based approach.
In the Kubernetes sense, it gets injected into your pod, either through the command line utility or, how we're going to do it today, with the Open Service Broker API, which deploys an initializer. The initializer essentially just munges whatever you deployed normally and inserts the necessary Istio bits to make your application part of the mesh. A few of the features that it supports: metrics collection and tracing are the things that most people talk about. It's able to gather these things more easily than you instrumenting them yourself, because it's capturing both the entry of the traffic and the exit of the traffic, so it sees the whole picture. The protocol-level metrics that it collects are basically HTTP metrics. It's able to capture some of your application-specific metrics if you have them, but you would have to specify those to Istio. Mutual TLS authentication as well. Like IPv6, that's one of those things where it's like, yeah, well, we're going to do that next year; it's priority number one next year. Mutual TLS authentication is one of the big features within Istio, and you're getting that for free. And then I'm going to talk a little more in detail about some of my favorite features, which are sometimes a bit more difficult to build in without client libraries already in place. Thanks to Jason McGee for a few of these slides that I grabbed from him, because they were great. So traffic splitting is one of the things that I really like about Istio. It allows you to gradually move traffic over between different services, either percentage-based, as this example is showing, or using header matching. So inspect for user X and direct to this instance, or enable this feature for user X. Circuit breaking is another big one. This is for anyone who's ever had a dependency or has been a dependency of someone else.
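(For reference, a percentage-based split in the Istio route rule format of roughly this era looked something like the sketch below. The service and version names here are illustrative, not from the demo.)

```yaml
# Hypothetical example: send 90% of traffic to version v1 and 10% to v2
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: reviews-split
spec:
  destination:
    name: reviews        # the service whose traffic is being split
  route:
  - labels:
      version: v1        # pods labeled version=v1
    weight: 90
  - labels:
      version: v2        # pods labeled version=v2
    weight: 10
```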
You'll be very familiar with the stampede that happens if another service starts acting inappropriately and making a lot of calls to you: you're down, your service crashes, it's just a chain effect. Normally you'd build some level of retry logic within your application; here, it gets fed in using a configuration file similar to what we have here. And then failure injection. This is also a very neat feature. You can visualize what would happen to your application by just moving some sliders around and controlling the flow of the traffic, in order to detect what happens if one of your dependencies is going down. How exactly does that affect your service? What happens if there's a delayed response from that service? What's happening to your service? All right. So the part of this that I'm bringing to the table is the Open Service Broker API. This was a piece of Cloud Foundry, the service broker / service catalog piece, that has been split off into its own specification, where we are defining both the server side and, sort of, the client-side interaction. And what it is is five APIs, five resource endpoints that are defined: get a catalog, create an instance of the service that you're defining, bind the service to an application (that's the Cloud Foundry model, but we're expanding it to support sort of the Kubernetes way as well), and then unbind and delete the service. We recently released the 2.13 spec, but right here is a link to the 2.12 spec. As it is right now, there are two sort of well-known platforms, one of which is obviously the Cloud Foundry Cloud Controller implementation of it. And at least I would like to think that the Kubernetes platform is the other one. I have heard of other platforms coming up and being created just at this conference. And what I have been working on for the last year or so is the service catalog in Kubernetes, which is the platform side of the service broker in Kubernetes, sort of the native Kubernetes-style API.
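(Going back to failure injection for a moment: in the route rule format of this era, a delay fault could be expressed along these lines. The service name and numbers are illustrative.)

```yaml
# Hypothetical example: add a 5-second delay to 10% of requests to "ratings"
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: ratings-delay
spec:
  destination:
    name: ratings
  route:
  - labels:
      version: v1
  httpFault:
    delay:
      percent: 10        # fraction of requests to affect
      fixedDelay: 5s     # how long to hold each affected request
```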
Using Kubernetes objects, you can use kubectl as if this were a built-in thing like pods or deployments once it's installed. API aggregation means that you talk to the same API server; when you list the API groups with kubectl, it shows up in the list like every other object. And you can use kubectl natively. There's no friction, other than the fact that the objects kind of have weird names. But we're improving that. In the last six days, we've significantly changed the API, so some of the objects you'll see here probably don't exist anymore. But it's alpha state, and we're hopefully about to release the beta very soon. And then the last part of this is that we have taken the platform, the service catalog, and we have created a broker, a service broker, behind a standard interface. This broker is, unfortunately, right now specific to the Kubernetes platform. But in the future, hopefully (and I hear a lot of work is going on to somehow support Istio in Cloud Foundry), it would be great to include the Cloud Foundry platform as well; that's sort of a future goal. In this case, the broker that we've written is going to be running in Kubernetes, and it will be serving the standard service broker API, which our service catalog will be the platform for. And we will create Kubernetes objects to basically run through the API flow: deploy Istio, deploy the broker, and then create a binding between Istio and namespaces in Kubernetes. That way, when you deploy apps into specific namespaces, you Istio-ize the applications, versus having to do the istioctl kube-inject. Instead of manually having to do it, it automatically does it for you in the background using an initializer. And I think that is time for the demo. So what we're going to show is the service broker that we specifically created for this demo.
And as Morgan was saying, there are a few supported ways of getting Istio to work inside of your cluster. Normally, people will go with the istioctl inject method: the documentation basically shows you a kubectl command, and then you pipe it through another shell, which basically munges your YAML and adds on things. And you have to remember to do this every time; otherwise, the sidecar isn't going to be injected in there. There's also the initializer, which is published by the Istio team, that you could take and modify yourself. However, then people have to have some inkling of how your deployment file is set up, a little more documentation. So in this sense, we decided to try out the service broker. People that are familiar with implementations of service brokers, this is going to feel very familiar to them, and there are a lot fewer steps for you to do. We're going to do this within a Kubernetes namespace, which represents some degree of tenancy within Kubernetes. It's not strictly hardlined; there are even ideas of combining namespaces together into another tenant concept. But it's the closest we can come right now to having basically a group of developers in this namespace and a group of developers in that namespace. So here, I'm just showing the Kubernetes dashboard for the demo. I have already brought up Kubernetes and just seeded the cluster with a few Istio components. So at this point, the operator would then make sure that they are deploying both of the brokers so that individual developers could utilize them. And we package these up as Helm packages. It's not really important what they do, or the format that they're deployed in. But what is important to note is that we are first deploying the Kubernetes catalog.
So this will provide basically the APIs used to hook in other brokers, like our custom broker, providing almost like a menu system for: here are all of the applications that we're exposing within here. So that's going off, and it's creating two components right here. And I'm just going to wait on those to come up, side by side. So this is our specific broker, which is going to register itself and expose the individual bits that we're controlling. So at this point, the Istio initializer has not been provisioned; it's just allowing you to provision one. At this point, the operator is basically hands off, and now an individual developer that wants to consume Istio inside of their namespace will create the broker link. Yeah, this is just a simple YAML that says: add the broker to the catalog, here's the URL that it's at, and here's the name that we're going to use in the future when we refer to it. Once the broker is added, there will be the classes, the service classes, and the service plans. For our broker, there's one service class and one plan on that service class, because it does one thing; there are no levels to it. And then we will create an instance of it, and then we will create a binding to a specific namespace. And you can see the parameter there: the namespace is going to be istio-testme. So we're registering the broker, and we're allowing someone to create the initializer. But the initializer has not been created at this point; you can see that here, nothing has been created yet. But we have gone through the Open Service Broker API: the catalog has been grabbed from the broker and registered, all of that has gone into the controller that we have, and we have Kubernetes objects that represent all of those things. So now we have an initializer. What this means now is that one developer just said, hey, let's start using Istio within our namespace.
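(As a rough sketch of that flow: the service catalog API was in alpha at the time and the kinds and fields were actively changing, so treat every name and field below as illustrative rather than exact.)

```yaml
# 1. Register the broker with the catalog
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Broker
metadata:
  name: istio-broker
spec:
  url: http://istio-broker.default.svc.cluster.local   # wherever the broker serves
---
# 2. Provision an instance of the broker's single class/plan
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Instance
metadata:
  name: istio-instance
spec:
  serviceClassName: istio
  planName: default
---
# 3. Bind it, targeting a namespace via parameters
apiVersion: servicecatalog.k8s.io/v1alpha1
kind: Binding
metadata:
  name: istio-binding
spec:
  instanceRef:
    name: istio-instance
  parameters:
    namespace: istio-testme    # the namespace the initializer will watch
```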
Now, anything that gets created in this namespace is going to automatically be Istio-enabled, without anyone having to modify their deployment or remember those commands or anything. So to demonstrate this, I have a simple example, which is going to demonstrate some of the traffic shaping. I created a very special application, which I effectively named Morgan Web. And this consists of two containers: Morgan Web for Android and Morgan Web for iOS. Nothing specific to Istio within here; I'm not specifying anything that resembles Istio. It's just a normal Kubernetes deployment and a normal Kubernetes service. You can see only one container is defined in each deployment. Oh, yes, very important: not more than one. There should only be one container when we deploy this normally. So I'm specifying the istio-testme namespace that I created specifically for my team, and I'm creating that deployment in there. So now I've got two pods coming up. Can everyone see the text of this? I didn't increase the text size on the web page all that much. Is that a little better? Can anyone not see it, if you raise your hand? OK, great. So now, if we go into one of these applications, we can see that the necessary annotations have been set, and now we have two containers within here, the second of which is the proxy sidecar. Now this application is registered, so these can start talking to each other; they can start using the retry logic. But I know you're not impressed. This is just showing that two containers could go in there; I'm just demonstrating basic Kubernetes features at this point. So the real meat of this is: I deployed one application for iOS and one for Android. So now, when coming from an Android device, I want to direct to the Android service, and when coming from an iOS device, I want to direct to the iOS service. I'm not doing anything super special here, but this is a feature of Istio called a route rule.
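(A minimal sketch of what such manifests might look like. The image names, port, and labels are hypothetical; the point is that nothing Istio-specific appears anywhere in them.)

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: morgan-web-android
  namespace: istio-testme
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: morgan-web
        version: android       # the label a route rule can match on
    spec:
      containers:
      - name: morgan-web       # one container; the initializer adds the sidecar
        image: example/morgan-web:android   # hypothetical image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: morgan-web
  namespace: istio-testme
spec:
  selector:
    app: morgan-web            # selects both the android and ios deployments
  ports:
  - port: 8080
```

A second deployment, morgan-web-ios, would look identical apart from the version label and image tag.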
So I've got a bit of regex in there to search in the user-agent header, which can sometimes be an unreliable way of doing this, but it's been working for me lately. And I'm targeting the specific application that I want these rules to affect: Android and iOS, that's basically it there. So let me just create those. I'm creating these not with the kubectl command, because I haven't tried that before, but with the istioctl command. So those route rules are created. Now what I'm going to do is open up the Android emulator from the Android SDK, and I'm going to actually show you that I'm just not making stuff up. So this is pointed at Minikube running on my machine, and this is the port that I have exposed for public access of the service. I hardcoded in the web page that you should be reaching the Android page. So if we don't see that, then something broke. And it did break. Yeah, somehow it ended up backwards. Close enough. Yeah. But for this, I already recorded the demo in case you wanted to see it actually working. OK, so here we go. It's going to be a little smaller; I discovered that it was probably going to be smaller, and I ended up increasing the text size. This is some riveting footage of opening up the simulator and remembering where it is. So this is even starting from scratch, essentially; it's booting up a fresh iOS. You stopped it? It's the wrong video, then. Oh, yeah, I did. Sorry about that. OK. So we're starting with essentially exactly what we did before. We have those route rules created. Now I stumble to open up one of the emulators, after first showing it on my laptop, I suppose. If it doesn't find a header, which one does it go to? So that might be where I messed up: the default rule. Normally, you would have a default instance that you would just send things to if you screwed up a bit. But I assure you this was working a little over an hour ago when I tested it. So here we go: we have Morgan Web for iOS.
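(To make the user-agent matching described above concrete: in the route rule format of this era, the Android-side rule might look roughly like the following. The regex, names, and labels are illustrative, not the exact rules from the demo.)

```yaml
# Hypothetical example: route requests with an Android user-agent to the
# pods labeled version=android; a mirror-image rule would cover iOS
apiVersion: config.istio.io/v1alpha2
kind: RouteRule
metadata:
  name: morgan-web-android
spec:
  destination:
    name: morgan-web
  match:
    request:
      headers:
        user-agent:
          regex: ".*Android.*"
  route:
  - labels:
      version: android
```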
I even refreshed it to prove that I wasn't kidding and going to some static website. It's beautiful. Then I struggled to open up the Android emulator. I go to the exact same URL. Then I realized you probably can't see that, so I start enlarging the screen. It says Android. Bam, Android. So this is a simple example of some of the traffic-shifting abilities that you can use Istio for, when your demo works. But I assure you it does work; I think I may have fat-fingered something, which one of our colleagues said I might. So, future work: where are we going with these types of things? The initializer is something that some individuals on the Istio team, who are working on the official Istio service broker, are interested in adding. They want to be able to maybe have this on individual deployments instead of per namespace. There's also a subtle bug at the moment where, in other namespaces, some things may not be working, because it's trying to go to the initializer, but the initializer is just watching one namespace. There is a fix for this; just know that if you try this at home, it's going to break at that moment when you try that. And I will have all the code for this available on GitHub along with the slides later on today, as soon as I slap on some licenses and whatnot. We're also interested in working with the Cloud Foundry team. I know that they have some PRs open and some stories open for the transparent proxy, which will be bringing Envoy in along with your individual service. As Morgan mentioned before, Istio largely only runs on Kubernetes right now, though in the latest release it does allow you to bring VMs from the outside into your existing Istio cluster, assuming your original Istio cluster started on Kubernetes. Really, the goal is just having a common service discovery mechanism that people can swap in with their cloud provider. If you want to catch me after this, these are technical topics that I like to talk about.
And if you don't want to talk about those, these are some other things that I know some stuff about. And just talk to Morgan in general. Yeah. I didn't create a big question slide, but ready for questions? Go ahead. The Istio one? It's created. OK, so when we do the instance, it doesn't really do much. It creates the initializer registration in Kubernetes, which is just a thing that says: in the future, everything that comes in needs to go through this initializer. It doesn't actually create the initializer, which means until you actually create the initializer, everything is broken. So that's not great, but we wanted to try to get binding and instances in. I fixed that in the demo, actually. But then when you do the binding, that is where it actually sets up the initializer for the specific namespace in the parameters of the binding. The initializer is the thing that, when a deployment comes in, munges the deployment somehow; in our case, it adds the Istio sidecar to it. Yeah, so normally when you're deploying with Istio, you're using the istioctl command line utility, similar to what I did for creating the rules. And you tack this on to the kubectl commands which you're using to create your applications within Kubernetes. The istioctl command essentially just intercepts exactly what you're trying to send to Kubernetes real quick, munges your YAML, reconciles all of it, and then submits that one big YAML with the sidecar tacked on. And like I said, if you're trying to update that application now and you didn't remember to also tack on that magical istioctl thing at the end, you're just going to get your application, and it can't talk to anything, because the sidecar didn't get injected, because you forgot to tack that on. So a lot of the service brokers that people have made for Kubernetes have been more of the stateful-service type of thing, like: I want a database.
So when you're creating an instance of the database and you're binding an application to the database, the creation of the instance and the binding of your application are pretty obvious. But this is basically just a service, similar to a load balancer that you would create. Now, in some cases there may be additional steps you'll do to register yourself with the load balancer, to say: I'm registering from here. But in this example, we really just set up the necessary credentials at that time; it allows you to create new initializers. And then the binding step is when we actually create the whole deployment. But there's nothing to say that the create-instance step couldn't just contain both tasks. Yeah, we need to think about the correct semantics for each particular piece, but for the purposes of a demo, that's what we did. Because it's going to intercept it every time; it's basically like having someone run istioctl for you every time, without you having to worry about it. So yeah, this is one thing that I was experimenting with today to try to make the demo work a little more reliably: actually creating an ingress. At the moment, I'm using a NodePort, which makes me think that in some cases it's getting trapped in the connection tracker and just routing to the Android service. In the demo example before, I set it up properly with some ingress and everything, but for whatever reason, the ingress was just finicky on the Wi-Fi. So yeah, because I'm running it on Minikube, I just use a NodePort for now. Normally, you would set up an ingress controller, and Istio even provides an ingress itself that you can use. But like ingress in the Kubernetes sense, some of these things rely on you either having pre-set-up the Nginx ingress controller or utilizing your cloud-du-jour load balancer that they're providing.
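(For reference, the Istio-provided ingress of this era was driven by a standard Kubernetes Ingress resource with the ingress class annotation set to istio. A minimal sketch, with hypothetical names and port:)

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: morgan-web-ingress
  namespace: istio-testme
  annotations:
    kubernetes.io/ingress.class: istio   # handled by the Istio ingress controller
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: morgan-web        # the service exposed to the outside
          servicePort: 8080
```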
But having said that, with all those things opened up, the routing between the emulators does work a lot more reliably with the ingress controller, when you have a real IP address and something like a real DNS entry mapping and everything. It would depend on what the client's doing with it. So normally, you are opening things that you want to be able to access from outside with an ingress controller. And if your clients inside of the cluster are also using the ingress controller, or using Istio and using the Istio endpoints that are returned when you're communicating with Istio, then they'll get whatever Envoy is directing them to. And that's one of the important parts of Istio: all traffic is being redirected to Envoy once that sidecar is injected. There are iptables rules that are set up beforehand which do that. But someone outside of the cluster that has no idea about it could technically call out to Kubernetes in the way that you would normally call out to Kubernetes, without going through these ingress controllers. And that's really where that external load balancer that is ingress-aware is going to come in handy, to make sure all these things are programmed correctly. Other questions? So, talk to me around; I look like this. And thank you for coming.