Welcome. Thanks for joining me this afternoon. My name is Danian Hansen. I'm a software engineer with Cisco, a contributor to Kubernetes, and recently I've dived into Istio. I've learned a few things and thought it would be a good idea to share them with everyone. So let's get this party started.

Just a quick level set for everyone: this is a deep dive, so I'm not going to be covering the high-level components and what Istio is all about. If this is your first Istio session, which would surprise me, this is probably not the best introduction to Istio for you. But I want to make a quick reference to the sample application that I have running in my Istio mesh. If you've messed around with Istio or looked at the docs, you've probably seen this application, called Bookinfo. Quickly: it's made up of several microservices: a product page front end, a details service, a ratings service, and a reviews service that consists of multiple versions, with each version running in a separate pod. V1 shows no stars, v2 shows black stars, and v3 shows red stars.

On to the Istio architecture. Istio is made up of a control plane and a data plane. The control plane consists of components called Mixer, Pilot, and the CA (also referred to as Istio Security or Istio Auth). The data plane consists of an enhanced version of the Envoy proxy. If you've been to any of the Istio sessions, you know that this Envoy proxy gets deployed with every application pod within your cluster. To do that, you can do a manual injection of that proxy; that's an approach that creates the necessary resources within your manifest on the fly. Another option is to use an initializer. Starting with Kubernetes 1.7, there's this concept of an initializer. If you're unfamiliar with what an initializer is, it really consists of two components. One is a set of pre-initialization tasks that need to be completed before actually running the containers that make up your pod.
The other is an initialization controller that's responsible for actually implementing those tasks. There are a couple of flags that you need to enable on your Kubernetes API server to run the initialization controller, and the initialization controller is a type of dynamic admission controller. If you think about an admission controller, it sits between the point where an API request comes into the Kube API server and gets authenticated and the point where the object is actually persisted. So if I say create a deployment, after that create request is authenticated, the initialization controller gets called.

So let's take a look at the initializer pod and config. You can see my cluster; it's a simple development cluster with a master and a node. You can see the pods that are running in the istio-system namespace: I have the control plane components, the CA, Mixer, Pilot, a few additional optional control plane components, the initializer which we've been talking about, as well as an ingress, and we'll talk more about that. But let's take a look at the initializer really quick. We do a kubectl get po on the initializer pod in the istio-system namespace and output it to YAML. One of the things to point out, where is it at here? I'm sorry, is that the initializer is essentially a proxy; it's simply a web server that's listening on port 8083. There are a couple of other things I want to point out as well. There's a configuration that gets mounted at the Istio config path, and that configuration actually comes from a ConfigMap. You see the ConfigMap here, right? So let's look at the ConfigMap. This ConfigMap tells the initializer a few things, such as what image to use for the init and proxy containers. Another important piece I want to point out is that it daisy-chains additional configuration for the mesh-wide configuration using a second ConfigMap.
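Both injection paths I just mentioned can be sketched as commands; this is an illustrative sketch assuming 0.2-era istioctl syntax and the alpha initializer flag names from Kubernetes 1.7:

```shell
# Manual injection: rewrite the manifest on the fly to add the istio-proxy
# sidecar and istio-init containers, then apply the result.
istioctl kube-inject -f bookinfo.yaml | kubectl apply -f -

# Automatic injection via the alpha initializer instead requires the
# kube-apiserver to be started with:
#   --admission-control=...,Initializers
#   --runtime-config=admissionregistration.k8s.io/v1alpha1
```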
So let's look at this ConfigMap really quick as well. Again, this is the mesh-wide configuration. Do we want to enable mutual TLS? This one is for the control plane; there's also a mutual TLS option for the data plane, so that the proxies use TLS as they communicate with one another. Do I want to enable tracing? Again, you can use the links that I provided to go through this in more detail. It provides things like how to reach the discovery address, what the address for Zipkin is if I'm using distributed tracing, and so on and so forth.

Let's look at the initializer logs. So it's basically an HTTP server listening on port 8083. And here's its configuration. You see some of those configuration components that I pointed out, telling the initializer, when it initializes these proxies: here's the init image and the proxy image that I want you to use. One thing that I neglected to point out when we looked at the pod, let me just jump back there really quick: here is one of the application pods, which is made up of multiple containers. We've got our app container; this one's details. You see that there's a proxy container running in the details pod, but there's also this init container in the pod. When you look at kubectl get pods, you only see two containers running. If we weren't running in an Istio mesh, we would only see one container within the pod running, but we actually see two containers within each of these pods. That's because not only do we have the app container, in this example the details container, we also have the proxy. And there's one other container that actually gets started: this init container. But if you notice, it actually gets terminated. This init container is part of that initialization; there are some things that we need to do before actually running the proxy and the app containers.
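The mesh-wide ConfigMap I described looks roughly like this. This is an illustrative sketch: field names of this shape appeared in the 0.2-era mesh config, but the exact addresses and values here are placeholders:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: istio
  namespace: istio-system
data:
  mesh: |-
    authPolicy: NONE                            # MUTUAL_TLS enables control plane mTLS
    mixerAddress: istio-mixer.istio-system:9091
    discoveryAddress: istio-pilot.istio-system:8080
    zipkinAddress: zipkin.istio-system:9411     # only used if tracing is enabled
```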
What this init container does is go into the iptables rules within the pod, not the actual node or host, and say: redirect all inbound and outbound traffic to the proxy container. That's exactly how the Envoy proxy is able to intercept all traffic between your app container and the rest of your service mesh, and vice versa.

I mentioned that we can use this initializer to automatically inject these sidecar proxies, but you can be granular about how you want to do that. There's actually an InitializerConfiguration where you say: here are the API groups I want to use, and here are the kinds of objects I want to use for doing this initialization. And if you see here, I point out deployments. If you look at any of the Bookinfo examples, Deployments are used for deploying the applications. So we see the supported kinds; we use an InitializerConfiguration to say, here are the types of resources I want to use for this pre-initialization. And take a look here: there's a kind Deployment, but it says false. Within the deployment manifest, we can use annotations to say, hey, I don't want this deployment to be initialized with a sidecar proxy. But in this example, with my Bookinfo services, you see that the sidecars are actually getting injected.

Let's see, pod details for the Envoy sidecar. Let's go back here again and look at this, and we'll use the details pod as the example. I showed you that there are three containers, two of which keep running in the pod: the proxy as well as the app container. We actually pass arguments to the proxy. We're basically saying: hey, proxy, what mode do I want you to act in? I want you to be in sidecar mode; we'll talk a little bit about another mode later. And we're passing in the bootstrap configuration of the proxy.
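The per-deployment opt-out mentioned above is just an annotation on the pod template. A minimal sketch, assuming the `sidecar.istio.io/inject` annotation recognized by the initializer of this era; the deployment name and image are hypothetical:

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: legacy-app                         # hypothetical workload
spec:
  template:
    metadata:
      annotations:
        sidecar.istio.io/inject: "false"   # tell the initializer to skip this one
    spec:
      containers:
      - name: legacy-app
        image: example/legacy-app:1.0      # placeholder image
```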
So the proxy knows: where's the path to my configuration file, where can I find the binary, where do I go for discovery (the Pilot address), and so on and so forth. Then let's look at the runtime configuration of the proxy. Here I'm going to exec into the details pod and get into the istio-proxy container. I showed you the path to the configuration file; you're going to see this revision number here. After that bootstrap occurs, the Envoy proxy can speak to Pilot, get its configuration, and maintain that configuration, so that as the mesh configuration changes, we don't have to go out and reprogram these proxies. We push changes to Pilot, and Pilot pushes any changes out to the proxies. And the Envoy proxy supports hot restart, so it's able to change its configuration without having to be fully reloaded. You're going to see a lot of those configuration parameters here that were passed in via the bootstrap configuration.

One thing I want to point out here is this local admin port and address. Now that I'm actually in the proxy, I can curl this endpoint, and I can see that it has several different endpoints. For example, listeners: here are all the listeners that this proxy is listening on. We're going to pay close attention to 9080, because that's the port that this sample Bookinfo application uses for all those different services that I showed in the diagram. And since it has a listener for 9080 and all these other listeners, the proxy also has routes. These routes basically say: for this particular listener, like 9080, here's my route table. And what you're going to see is that there are tons of routes on 9080, again because of the sample Bookinfo application; all those different components are listening on 9080.
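The admin-port walk-through above looks roughly like this from inside the sidecar. This is a sketch assuming the Envoy admin server listening on port 15000, the default in this era's sidecars; the pod name is a placeholder:

```shell
# Exec into the proxy container of the details pod (name is illustrative).
kubectl exec -it details-v1-<pod-id> -c istio-proxy -- sh

# From inside the proxy container:
curl localhost:15000/            # index of available admin endpoints
curl localhost:15000/listeners   # every port the proxy listens on, incl. 9080
curl localhost:15000/routes      # the route tables attached to those listeners
```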
So details is listening on 9080, as are the product page, reviews, ratings, and so on. What we can do now is look at one of these routes. Here's a route: we're saying match everything from the root path and send it out this particular cluster; this is the out details default service cluster. So now I'm going to copy that, and I can talk to the clusters endpoint on this admin port. I'm going to output it to a file so we can grep, and then we can grep against clusters. If we put ourselves in the mind of the proxy: we're listening on those ports, a request comes in on 9080, we look up our route, and we say, oh, you're supposed to go to this cluster. Well, here's the cluster that we're seeing for details. It gives us maximum connections, all sorts of settings, whether it's a canary (no), and you're seeing an endpoint here too. This is how the proxy knows where to forward the traffic: this IP address, the .2.69, is the IP address of the details pod. Now, this proxy can do load balancing. It's not going to do load balancing here, because we only have a single endpoint. The proxy can also do health checking, and it's doing active health checking here. If you've got multiple endpoints within a cluster, it can incorporate any policies we create on how we want to load balance across those endpoints. But again, in this example, we only have a single endpoint. So why don't we look at a different cluster? We said it's the reviews service that has multiple pods, and we now see that we've got multiple endpoints: .71, .72, .73. If we have any type of policy set up saying I want to load balance based on certain HTTP headers, or based on certain labels from Kubernetes, and so on, it can apply that across those endpoints. Okay, so that was the data plane.
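The cluster lookup I just walked through can be sketched as follows, still from inside the istio-proxy container and still assuming the admin server on port 15000; the pod IPs shown in comments are the ones from my cluster:

```shell
# Dump the proxy's view of every cluster and its endpoints, then grep it.
curl -s localhost:15000/clusters > /tmp/clusters
grep details /tmp/clusters   # a single endpoint, the details pod (.2.69)
grep reviews /tmp/clusters   # three endpoints (.71, .72, .73), one per version
```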
This enhanced version of the Envoy proxy is the key piece of the Istio data plane. But now let's talk about the control plane. The first component is Pilot. Pilot is responsible for maintaining the canonical representation of the service model, and it's the responsibility of adapters, such as the Kubernetes adapter, to populate that model accordingly. With the Kubernetes adapter, for example, the adapter implements a controller that watches the Kubernetes API server for certain resource registrations, like deployments and pods. It takes those resources and populates the model that Pilot maintains accordingly. What that does is allow Pilot to provide an interface that's neutral to any of these adapters, not really locking the mesh down to a specific implementation. And when that model is populated by the adapter, it allows us to take that information and push it out to the Envoy proxies.

One of the key features of Pilot is service discovery. Pilot expects that a service registry exists, like kube-dns within Kubernetes. The expectation is that when services are created, the registry is updated, and when services are removed, those services are removed from the registry as well. This allows Envoy to dynamically find out which services exist within the mesh. And what's neat about Envoy is that it doesn't just rely on what it learns from this service discovery. It can populate all these different endpoints that we just saw based on that service discovery, but it's also doing active health checking against each one of those endpoints to ensure that they're healthy. If they're not, it's not going to load balance traffic to that endpoint.
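You can also poke at Pilot's view of the registry directly. This is a sketch assuming the Envoy v1-style REST discovery API that Pilot served in this release on port 8080; the node ID is a placeholder:

```shell
# Service discovery (SDS): which endpoints back a given service?
curl istio-pilot.istio-system:8080/v1/registration/details.default.svc.cluster.local

# Cluster discovery (CDS): a particular proxy's view of the mesh's clusters.
curl istio-pilot.istio-system:8080/v1/clusters/istio-proxy/<node-id>
```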
So we can take a quick look at the Pilot configuration. The first thing that jumped out at me is, well, we've got a pilot container, but we also have a proxy container. So not only does the Envoy proxy get deployed with our application pods, like reviews or ratings, it actually gets deployed with the Pilot control plane component as well. You'll see here that Pilot mounts the service account secret so that it can communicate with the Kubernetes API server; it's in-cluster authentication that occurs, since Pilot does need to speak to the Kubernetes API server. You'll see that /etc/certs gets mounted so that the proxy deployed with Pilot is able to do mutual TLS for the control plane traffic, between the Envoy proxies and Pilot, or between Pilot and Mixer.

You see here the logs of Pilot when it starts up. I mentioned the mesh-wide configuration that the initializer uses. Well, that ConfigMap is not only used by the initializer; it's also used by each of the control plane components. The control plane components don't get initialized by the initializer themselves; the only things the initializer injects are those sidecar proxies. So we still tell each of the control plane components: use this ConfigMap for your mesh config. That way the proxies that sit with the control plane components can communicate with the sidecar proxies as well. And since we are using the Kubernetes adapter, you see that that adapter gets registered.

Let's jump over to Mixer. Mixer is essentially an attribute processing engine. Each of the Envoy sidecar proxies produces attributes, and which attributes those proxies produce is dependent on the user or the operator.
You create these things called attribute manifests that say: hey, Envoy, here are the important attributes that I care about for the traffic that comes through you; send this information to Mixer. Mixer, just like what we showed with Pilot, has pluggable back-end infrastructure components. Pluggable back ends could be for logging, telemetry, authentication, and so on. Mixer is responsible for taking those attributes from your attribute producers, again, those producers being your Envoy proxies, and routing them to the appropriate back-end component. That component makes its decision, if it's a policy decision, and provides it back to Mixer, which then funnels it back down to your Envoy.

Let's look at the Mixer config. The Mixer configuration is going to look very similar to the Pilot configuration. Mixer also has the proxy deployed with it, and as with Pilot, we mount the service account secret so that Mixer can communicate with the Kubernetes API server securely. We also mount /etc/certs; these are the certs that are used so that the proxy for Mixer can communicate with the sidecar Envoy proxies. And you see here this custom config file; I want to take a look at it really quick. I didn't show you this for Pilot, but Pilot also has a custom configuration file. A few minutes earlier in the presentation, I showed you the configuration file of a sidecar proxy and how that gets managed dynamically: it gets bootstrapped, but then gets managed by Pilot. Well, these control plane components have static configuration files. So let's exec into the Mixer pod and get into the istio-proxy container. Hey, it worked. And what did we say this config file was again? I'll find it: the Envoy config for Mixer. Here it is. This is the static configuration file that we pass into Mixer's proxy.
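For reference, a static Envoy v1 bootstrap like the one passed to Mixer's proxy is shaped roughly like this. This is an illustrative sketch in the v1 JSON config schema, not the exact file; the listener port and cluster address are placeholders:

```json
{
  "admin": {
    "access_log_path": "/dev/stdout",
    "address": "tcp://0.0.0.0:15000"
  },
  "listeners": [
    { "address": "tcp://0.0.0.0:15004", "filters": [] }
  ],
  "cluster_manager": {
    "clusters": [
      { "name": "mixer", "connect_timeout_ms": 1000,
        "type": "strict_dns", "lb_type": "round_robin",
        "hosts": [ { "url": "tcp://127.0.0.1:9091" } ] }
    ]
  }
}
```

Notice how small it is compared to a sidecar's runtime config: it defines a fixed set of listeners and clusters up front instead of discovering them from Pilot.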
And it looks a little bit different from the sidecar proxies. We don't have tons of different clusters or tons of different services here. Each of the sidecar Envoy proxies runs the cluster discovery service, the service discovery service, all these different discovery services that help construct the chain between the listeners, the routes, and the clusters, with Envoy essentially doing the proxying based on that chain of configuration. That was the Mixer proxy config we just looked at.

On to the Mixer logs. I'm going to speed things up a little bit, just because we're down to our final 10 minutes. To point out a few things from the logs: Mixer is listening on multiple ports. Something to notice here: if the control plane is doing mutual TLS, why does the mixer container have no certificates or keys or anything like that in its configuration? That's because, again, the sidecar proxy gets deployed with Mixer, and it's responsible for doing the mutual TLS. Between Mixer and the rest of the service mesh, Mixer is not actually doing any type of TLS termination; that's happening at the proxy that sits right next to it. You see that this is an empty config store; this basically says to use the in-cluster configuration. I showed you how that secret gets mounted into the pod, and this tells Mixer to use that in-cluster configuration to authenticate to the Kubernetes API server. And here's just a capture that I did: what we're seeing here is an Envoy proxy sending some traffic to Mixer, saying, hey, take a look at this. We can see all these attributes that we configured for Mixer that got pushed down to the proxies. Just a few things here: when we create a policy within Istio, we can say I want to create some kind of policy based on source service and destination service, or any of these other attributes that I don't have highlighted.
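A policy over those attributes looks something like this. This is a hedged sketch of the `config.istio.io/v1alpha2` rule shape from roughly this era; the rule name, handler, and instance names are all hypothetical:

```yaml
apiVersion: config.istio.io/v1alpha2
kind: rule
metadata:
  name: deny-details-to-reviews   # hypothetical rule name
spec:
  # Match on the same source/destination attributes the proxies report to Mixer.
  match: source.service == "details.default.svc.cluster.local" && destination.service == "reviews.default.svc.cluster.local"
  actions:
  - handler: denyall.denier                 # hypothetical denier handler
    instances: [ denyrequest.checknothing ] # hypothetical instance
```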
And because my environment has no route rules, nothing configured, you see that there are zero actions taking place, because there are no rules.

So, Istio security. One of the pods is the Istio CA. It's probably the one area I'm not going to do a deep dive into, but at a high level, Istio security is responsible for delivering those TLS assets that we saw in /etc/certs. So if we exec into the details pod and curl the admin port, instead of looking at a cluster, let's look at the certs. This information here was actually delivered by the Istio CA. With Istio security, the key component is, firstly, authentication, or, you know, identity. Without Istio security, we don't have any identity information. We see that here's a source service and here's a destination service, but actually attaching an X.509 certificate to each of these services gives us strong identity. We're able to authenticate based on that identity, and we're able to do things like mutual TLS, so that for all the communication between the different services, those services themselves do not have to be configured for TLS. If we configure the mesh for TLS, then the Envoy proxies, as they intercept that traffic, will go ahead and create a mutual TLS connection with any of the other proxies they need to forward traffic to.

There's also the ingress component. How do I have services outside of my Kubernetes cluster communicate with the services running in my mesh? Istio leverages the Ingress resource within Kubernetes, and if you're familiar with Ingress, it needs an ingress controller. So when you do a kubectl get pods in the istio-system namespace, you're going to see istio-ingress. That's the ingress controller that's responsible for watching the Ingress resource and then, based on the rules that you define in the Ingress resource, permitting that traffic into the mesh.
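The Ingress resource I set up for Bookinfo is roughly the one from the Istio samples. A hedged sketch: the `kubernetes.io/ingress.class: istio` annotation is what hands the resource to the istio-ingress controller rather than a stock ingress controller, and the exact path list may differ from my cluster:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gateway
  annotations:
    kubernetes.io/ingress.class: istio   # handled by istio-ingress
spec:
  rules:
  - http:
      paths:
      - path: /productpage
        backend: { serviceName: productpage, servicePort: 9080 }
      - path: /login
        backend: { serviceName: productpage, servicePort: 9080 }
      - path: /logout
        backend: { serviceName: productpage, servicePort: 9080 }
```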
And the ingress controller also runs as an Envoy proxy, but instead of running in sidecar mode (the sidecar proxies need the iptables rules set by that init container to intercept all the traffic that comes through), the ingress runs in ingress mode, and it's only going to capture traffic based on the rules that you put into that Ingress resource. The Ingress resource that I set up basically says: all subpaths from root. So for the product page example, whether it's /login, /logout, or /productpage, any of those endpoints, it's going to allow into the cluster.

Now, egress. By default, your services cannot communicate outside of the mesh. Why is that? It's because those sidecar proxies, again, are configured via iptables to intercept all the traffic, and those sidecar proxies know nothing about any type of networking outside of the mesh. So we can create egress rules. These egress rules support HTTP and HTTPS, and they allow you to selectively allow outbound traffic. If you don't want to use an egress rule, let's say it's non-HTTP traffic, you can set up include IP ranges. That includeIPRanges setting is a configuration parameter that you add to that ConfigMap for the initializer. So when the initializer injects the sidecar proxies, it's basically saying: hey, sidecar proxies, only intercept this traffic. The way it does that, again, is through iptables: it tells that init container, instead of redirecting all traffic, only redirect traffic, in this example, for the 10.0.0.0/16 network.

Thank you. Thank you.