Hey everybody, welcome back to the Deploy Istio for Production lab, part two. This morning, most of you were here for Lin Sun's lab, where she talked about how to get started with Istio and how to use it. Then we switched gears a little bit to talk about how to deploy Istio for production. In part one of our lab, and I'm just doing a recap for people who might be just joining, we used the Instruqt platform to launch our lab. Then we deployed the sleep service and the httpbin service, just two apps with no Istio. And we deployed Envoy, not as a sidecar, but as its own pod running alongside sleep and httpbin, and we showed how you can use Envoy as an independent gateway, basically. Then we started adding services to the mesh: we added the httpbin application to the mesh and pointed it at only the 1-8-3 revision of istiod. So we talked about how to install the Istio control plane and how to set the revision tag so that it's a particular version, which sets you up to run multiple versions of Istio. We talked about using the minimal profile, and then breaking the IstioOperator apart into multiple files so that you have more control over your control plane versions, your gateway versions, where they get deployed to, what namespace, etc. Then we touched a little bit on observability. We deployed Prometheus and Kiali using their official operators, and showed a bunch of configuration for how to get Prometheus to scrape the right control plane and workload metrics. But we didn't go too much into it, because honestly, getting into Prometheus is probably a whole conference on its own. We did have a lot of examples in there to give you the gist of it, so now if you read the Prometheus documentation, it will make more sense what information you need to set it up against your Istio system. So that gets us to here.
So if you're just joining us, you can skip all of the previous labs we worked on: go to this URL, add the Deploy Istio for Production track, start the lab, and then jump straight to lab 4, which will take a couple of minutes to run a bunch of scripts, but it should get you to the point where we are. Any questions? Cool. So the next thing we'll do is use Envoy at the edge: we're going to install the Istio ingress gateway. So far we don't have an ingress gateway; we've only deployed istiod in the istio-system namespace. We have Envoy running independently, but we don't have an official Istio ingress gateway yet. So that's what we're going to do: create a new namespace for the ingress gateway, deploy the Istio ingress gateway to that namespace, and then point it at our revisioned istiod service, okay? Once that's installed, we'll talk about how to create the Istio Gateway resource to handle edge traffic, how to do TLS termination, and how to wire up your certificates. First we'll just give the ingress gateway the certs by creating a secret. Then we'll deploy cert-manager and have cert-manager be the one responsible for creating those certificates, creating that secret, and giving it to the ingress gateway. We'll also touch on how you would integrate that with an external PKI like Vault to actually store your root CA. And then finally, we're going to talk about access logging, so that you can get information from the gateway on who's calling it and what type of traffic it is; when things go wrong, you can get more and more stats. So everyone should be on this screen on the Instruqt platform, "Creating an Ingress Gateway for Istio". All right, cool, let's get started. First thing, let's take a look at the IstioOperator file.
This is the IstioOperator resource that we used previously to install just istiod, the control plane. Keep in mind we used the minimal profile last time. The minimal profile is something that comes out of the box with Istio: when you did that curl command to download Istio, if you open your Istio root folder, I think it's under manifests, you'll see a minimal.yaml. That's basically all the minimal profile is. So now, to deploy the ingress gateway, we're actually going to set the profile to empty. We're starting with a fresh, clean slate: deploy no components except the components I have listed below. First, we've specified that we want to disable autoscaling. We need to disable autoscaling because we don't want it to scale up and down during our lab, which would make debugging a little bit harder; this is more just for lab purposes. And then, under components, we're deploying the ingress gateway, and this is where you can specify any customizations you might have for your ingress gateway. There was a question on Slack this morning about whether you can run Istio components with an affinity to particular nodes. If you want your ingress gateways tied to a specific set of nodes, let's say you have edge nodes that have internet access and you only want to run your gateway components on those nodes, then this is the place where you declare that node affinity, whether that's preferred or required, etc. The customization we're doing in this lab is this preStop lifecycle hook. It basically tells Envoy to sleep for five seconds before completely terminating, so that if you have any active connections, they get time to drain. This is a setting that a lot of our production users use, because they noticed that when Envoy is scaling up and down, probably down, some of their connections were getting dropped because of how quickly it terminates.
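The IstioOperator described above might look roughly like the following. This is a sketch, not the lab's exact file: the resource name and the overlay patch path are assumptions, though `profile: empty`, the disabled autoscaling, and the five-second preStop sleep come straight from the discussion.

```yaml
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
metadata:
  name: ingress-gateway            # hypothetical name
spec:
  profile: empty                   # clean slate: deploy no default components
  values:
    gateways:
      istio-ingressgateway:
        autoscaleEnabled: false    # keep a fixed replica count for easier lab debugging
  components:
    ingressGateways:
    - name: istio-ingressgateway
      namespace: istio-ingress
      enabled: true
      k8s:
        overlays:
        - kind: Deployment
          name: istio-ingressgateway
          patches:
          - path: spec.template.spec.containers.[name:istio-proxy].lifecycle
            value:
              preStop:
                exec:
                  # give Envoy five seconds to drain active connections before terminating
                  command: ["sh", "-c", "sleep 5"]
```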
So this tells it to sleep for five seconds, which gives Envoy a chance to finish handling all the connections before terminating. Now let's create a new namespace for the ingress gateway. The common practice up until now has been to deploy the Istio ingress gateway right next to istiod, in the istio-system namespace. But now the best practice is to split that up. So we're creating a new namespace called istio-ingress, and then we're using the istioctl install command to deploy the ingress gateway to this new namespace. If I list the pods in my istio-ingress namespace, you can see that the ingress gateway came up. Then we'll set an environment variable so that we can get the IP address of the ingress gateway more easily. I'm also just going to echo the environment variable I just set, so you can see that, one, it got set properly, and two, it's a normal IP address you'd be able to use to call your ingress gateway. Okay, so the Istio ingress gateway is installed. It's running, which means it's talking to istiod; all is well. Now let's configure this ingress gateway to send traffic to my application, and as you know from the lab this morning, you do that by creating the Istio Gateway and VirtualService objects. You use the Gateway to declare basically what host and port you want to listen on, and the VirtualService to route that traffic to one of the services in your mesh. We're not going to walk through that VirtualService and Gateway; they're already there in this ingress folder. So we're going to apply those two, the VirtualService and the Gateway. And then if you do your curl, specify the host istioinaction.io, and go to the gateway IP address, you should get your response back from the microservice sample that we deployed earlier this morning. All good so far? Great. This session's all about getting information from the various Envoy components, right?
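The Gateway and VirtualService pair applied here look something like this sketch. The resource names and the backend service name/port are hypothetical stand-ins for the lab's microservice sample; the host and gateway selector follow the lab's setup.

```yaml
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web-api-gateway            # hypothetical name
spec:
  selector:
    istio: ingressgateway          # selects the gateway deployment running in istio-ingress
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - istioinaction.io             # the Host header we curl with
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web-api
spec:
  hosts:
  - istioinaction.io
  gateways:
  - web-api-gateway
  http:
  - route:
    - destination:
        host: web-api              # hypothetical in-mesh service
        port:
          number: 8080
```

With those applied, something like `curl -H "Host: istioinaction.io" http://$GATEWAY_IP` should return the application's response.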
So traffic is flowing, everything is going well. But let's say we want more information. We learned a little bit this morning about how Envoy is configured; let's say we want to go take a look at that. There's the istioctl proxy-config command, and this command is extremely powerful. You can use istioctl proxy-config to get the Envoy configuration of any Envoy pod, right? Whether it's the Envoy sidecar of one of your pods or the ingress gateway, which is also Envoy. You can point it at any one of these and get the configuration out of it. You can do a full config dump, which will be thousands and thousands of lines, but you can also specify exactly what you want to see. In this case, we only want to look at the routes that the ingress gateway pod is configured with. So: istioctl proxy-config routes, pointed at our ingress gateway deployment. You can see that it's configured to handle this hostname, it's going to listen on this port, and this is the domain. Now, if you're good with Envoy configuration and you actually want to see more details, not just this simplified table form, you can also specify the route that you want and add -o json to see the full Envoy configuration. This is going to look very familiar to you, because this morning in the lab this is where we set the retries as well as the timeouts; that's this configuration. This is what Istio generated for you, okay? I could also point istioctl proxy-config at my httpbin service and get a similar output. Next, let's secure this traffic. The gateway we've created for this example is just listening on port 80 and not doing any type of TLS. So in order to do TLS, you need to provide the certificate that the gateway should present to the clients calling it.
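The istioctl invocations just described look roughly like the ones below. These are illustrative; the deployment names are the usual defaults and the route name `http.80` is an assumption based on the port-80 listener.

```shell
# Summarized route table for the ingress gateway pod
istioctl proxy-config routes deploy/istio-ingressgateway -n istio-ingress

# Full Envoy JSON for one route (the port-80 route), including retries and timeouts
istioctl proxy-config routes deploy/istio-ingressgateway -n istio-ingress \
    --name http.80 -o json

# The same command works against a sidecar, e.g. the httpbin deployment
istioctl proxy-config routes deploy/httpbin
```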
The first thing you need, if you want to do TLS at the edge, is to create a secret, and that secret is going to have your certificate and key. That's what this command is doing: in the istio-ingress namespace, the same namespace where your gateway deployment is running, you create the secret, because that Istio gateway Envoy is the one doing the edge termination, so it needs access to the secret. Regardless of where you created your VirtualService and Gateway resources, those could be in other namespaces, the secret that you want the gateway to use for edge termination needs to reside in the same namespace where your gateway is actually running. So that's what this is doing: it's creating that secret, and once that secret is there, we're going to update the Gateway. This time we're going to have both: port 80 with no TLS termination, and port 443 with simple TLS termination that uses the secret we just created. Now, in production, would you do this? No. What change would you make? You'd remove the port 80 entry. So let's apply this new Gateway. And now, in order to call it, even though I'm using curl, I need to provide that root CA to the curl command, so that it can check that the certificate the Istio gateway presented is valid against the CA. Okay, so what we've done is generate these certs and then create that secret manually. But in production, you're not creating your secrets with your TLS certificates manually like this; you'd use some sort of external tool. Users might not even be able to write anything to the istio-ingress namespace, and users don't want to manage their own certificates, so they usually integrate with external PKI and certificate management tools like cert-manager. So let's go through that flow. Let's delete the secret we created manually.
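Concretely, the secret creation and the updated Gateway might look like this sketch. The credential name, certificate file paths, and `$GATEWAY_IP` variable are assumptions; the two-server layout (plain port 80 plus SIMPLE TLS on 443) follows the discussion above.

```shell
# Create the TLS secret next to the running gateway, in istio-ingress
kubectl create -n istio-ingress secret tls webapi-credential \
    --cert=certs/istioinaction.io.crt --key=certs/istioinaction.io.key

# Update the Gateway: keep port 80 for now, add HTTPS with SIMPLE termination
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web-api-gateway                   # hypothetical name
spec:
  selector:
    istio: ingressgateway
  servers:
  - port: {number: 80, name: http, protocol: HTTP}
    hosts: ["istioinaction.io"]
  - port: {number: 443, name: https, protocol: HTTPS}
    tls:
      mode: SIMPLE
      credentialName: webapi-credential   # the secret created above
    hosts: ["istioinaction.io"]
EOF

# Call through HTTPS, validating the presented cert against our root CA
curl https://istioinaction.io --cacert certs/ca.crt \
    --resolve istioinaction.io:443:$GATEWAY_IP
```

In production you would drop the port-80 server (or redirect it to HTTPS) before applying.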
So once that secret is deleted, if I do that curl command, it shouldn't work anymore. But I think it's still picking up that secret. So, with the secret deleted, let's recreate it using cert-manager. For that, let's install cert-manager. We'll create the cert-manager namespace and then download the Helm charts for cert-manager. Once you've downloaded the Helm charts, run the helm install command and then verify that cert-manager is up and running. We'll give that maybe 10 more seconds to come up. Right now, cert-manager is not integrated with any type of PKI, so it has no way to get a root CA. So we're going to create a root CA and give it to cert-manager, and cert-manager will use that to generate whatever certificates we ask for. That's what the next section is doing: it's creating a secret for cert-manager to use as a CA. Then, with cert-manager, you create this ClusterIssuer object, and you tell it which secret holds the CA it should issue from; we're pointing it at the secret called cert-manager-ca-certs. That's the CA issuer. Once you've created the issuer, let's ask it to create a certificate. You create a Certificate resource, and this is where you tell it what type of certificate you want: when you want it to expire, the duration of it, your organization. At the very bottom is the issuer you want it to use, and the secret name is the secret it's going to generate in the istio-ingress namespace. So this is the one you generated manually in the previous section; now we're asking cert-manager to do it for you. Oops, I copy-pasted the wrong thing. Then verify that it got picked up. We'll wait a couple of minutes for people to catch up. This next command will download the secret that cert-manager created and do a base64 decode.
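The ClusterIssuer and Certificate described here would look roughly like the following, using the cert-manager.io/v1 API. The resource names, duration, and organization are illustrative stand-ins, not the lab's exact values.

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: ca-issuer                       # hypothetical name
spec:
  ca:
    secretName: cert-manager-ca-certs   # secret holding the root CA cert and key
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: webapi-certificate              # hypothetical name
  namespace: istio-ingress              # same namespace as the running gateway
spec:
  secretName: webapi-credential         # replaces the manually created secret
  duration: 2160h                       # 90 days
  renewBefore: 360h
  subject:
    organizations: ["istioinaction"]
  dnsNames:
  - istioinaction.io                    # must end up in the cert's SAN
  issuerRef:
    name: ca-issuer
    kind: ClusterIssuer
```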
And then we use step to inspect the certificate. What we're looking for is to make sure that the SAN matches; right here, this X509v3 Subject Alternative Name has to match the hostname we're going to use to call the Istio ingress gateway. Now that this is in place, if we call the Istio ingress gateway again and pass it that root CA, we should get a valid response back from our gateway. Everyone with me? Cool. How would you change this in production? You'd probably have something like Vault, with cert-manager connected to Vault, and Vault would hold your root CA. Okay, in the next section we're going to start working toward reducing the amount of configuration that gets pushed around everywhere. If you're running a small cluster, and let me define what a small environment would be, something like less than 20 namespaces. Really I should be talking about the number of workloads in the mesh, the number of services: if you're running below 50 or so services in the mesh, then you probably won't run into any of these performance problems related to scope and configuration being pushed from one namespace to another. Now, that's extremely simplified; it really comes down to how many namespaces you're using. The more namespaces the better, in terms of how you delineate all your workloads. If you have one namespace with all your workloads deployed to it, then Istio doesn't really know which workloads should be able to talk to which other workloads. So by default, it's going to assume everyone needs to talk to everyone, and every Envoy will have to get the full configuration of everything. That's your workloads, and then there's also the gateway component. The gateway needs to know which workloads it needs to talk to, and by default the gateway gets configuration to talk to all the workloads in the mesh.
And that starts becoming a problem for some of our larger enterprise deployments of Istio. So let's take a look at that. Use the istioctl proxy-config command again, but this time, instead of routes, let's use clusters. Clusters are all your endpoints, the destinations you can talk to. Point that at the istio-ingressgateway component. Even in our small environment, where we only have one service in the mesh, look at all the Envoy clusters the Istio gateway knows about. You can imagine that in a larger-scale environment this quickly gets into hundreds of entries, and you saw how much JSON goes into defining every one of these clusters. So it's a lot of config that gets pushed around. So, yep, eight-minute warning, and then that's the end, okay, wow. Okay, one of the best practices: istiod has this PILOT_FILTER_GATEWAY_CLUSTER_CONFIG environment variable that you can define in your IstioOperator file, and it tells the gateway to only get the information for destinations that have routing rules defined. So if you have a VirtualService that says the gateway needs to talk to product page, then only the product page information is sent to the gateway; it doesn't need to know about any other services in the mesh. Once that's deployed, check the proxy-config clusters again; we'll have to give it a couple of minutes to actually pick up the change. There, I ran the same proxy-config clusters command as before, but this time I only have three or four entries. Much simpler. To get access logs for the gateway, you can use the EnvoyFilter API, which allows you to be very specific in defining the format you want your log output to be. Here we're saying we want to capture the server name, the response flags, the response codes, etc. If you want to change this, maybe you want JSON format, this EnvoyFilter is how you would do it, okay?
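The two pieces just described, the istiod filter flag and the gateway access-log EnvoyFilter, might look roughly like the sketches below. Resource names and the exact format string are assumptions; the environment variable name and the general EnvoyFilter patch shape follow the Istio APIs discussed above.

```yaml
# Applied via istioctl install -f: tell istiod to push only the clusters
# that a gateway's routing rules actually reference
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: minimal
  components:
    pilot:
      k8s:
        env:
        - name: PILOT_FILTER_GATEWAY_CLUSTER_CONFIG
          value: "true"
---
# Applied via kubectl apply -f: custom access-log format, scoped to the gateway
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: ingress-access-log            # hypothetical name
  namespace: istio-ingress
spec:
  workloadSelector:
    labels:
      istio: ingressgateway           # remove to apply mesh-wide
  configPatches:
  - applyTo: NETWORK_FILTER
    match:
      context: GATEWAY
      listener:
        filterChain:
          filter:
            name: envoy.filters.network.http_connection_manager
    patch:
      operation: MERGE
      value:
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          access_log:
          - name: envoy.access_loggers.file
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.access_loggers.file.v3.FileAccessLog
              path: /dev/stdout
              # server name, response code/flags, and upstream cluster, per the discussion
              log_format:
                text_format: "[%START_TIME%] %REQ(:METHOD)% %REQ(:PATH)% %RESPONSE_CODE% %RESPONSE_FLAGS% %REQUESTED_SERVER_NAME% -> %UPSTREAM_CLUSTER%\n"
```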
And then let's apply this access log to the cluster. This EnvoyFilter is actually scoped to the ingress gateway workload, but you can scope EnvoyFilters to your own workloads as well. So if you just wanted access logs for your product page application, this is how you'd go about doing it: you'd set the workloadSelector field. Once that's done, if I send some traffic to my application again and check the logs of the istio-proxy container in the ingress gateway deployment, you can see that I now have these new access log entries. I think enabling access logging is the number one thing you should do to determine whether there is a problem in your network. If you know it's not, I have three minutes, if you know it's not a configuration problem, but things are still not working well, enable access logging. In a small environment, you can just remove the workloadSelector so that it gets applied everywhere, and then check the logs of all the proxies, and you'll be able to see exactly where your request is being routed. In this case, this is the Envoy cluster it's being routed to, this is my response code, etc. Okay, so we're done with that lab. We finished lab four, but we spent all of our time on lab four. It's a good lab, okay? In the next labs, we'll talk about rolling out one service at a time. We did a little bit of this in lab two; you follow the same principles in lab four, where you onboard one service at a time, either by doing istioctl kube-inject or by deploying a canary version of each of these services next to your real version. The canary version has the sidecar; your real version doesn't. Once the one with the sidecar comes up, you send some traffic to it, and if all is well, then you actually delete the canary. Then you do a rolling restart of your main services after you enable the namespace for injection, and the new pods that come up will get the right sidecar.
And you know it's going to work well. Then there's the mTLS rollout, and we'll also dive a little more into configuration scoping. Like I was saying before, we will do this lab again. If you go to solo.io and look for the events page, you'll see the various workshops we run; find the Deploy Istio for Production workshop. I actually think we're doing one next week, so sign up for that, and we'll have much more time to go into this in more detail. With that, I want to say thank you. And just like before, there are badges and surveys as well; we'll share these two links in the Slack. Cool? All right, well, thanks for coming, everybody.