As you're adding services to the mesh, one important thing is to make sure your services continue to function, right? So we want to make sure we can still hit the Istio ingress gateway on port 443 and the traffic continues to flow. Let's spend a minute understanding what actually happens.

If you take a look at the pod information for the web-api pod in the istioinaction namespace, it's a little long, but you can see it has one init container. Let me see if I can find it. So it has one init container that uses this proxyv2 image, and it runs the istio-iptables command to config the iptables rules for inbound and outbound traffic capture. That's the purpose of the init container: it sets up the iptables configuration, and it finishes. After that, the istio-proxy sidecar and the web-api container run in parallel.

This is the istio-proxy container. It uses the same proxyv2 image as istio-init. It has a bunch of environment variables and a little more internal Istio configuration, but you can see it runs the proxy in sidecar mode with the cluster name and the domain, and you can config tracing, concurrency, all that information. It's configurable. And this is the web-api container, defined by the web-api deployment YAML we looked at earlier.

So we talked about the istio-init container, and we talked about the arguments for the init container. One thing I want to show you, and this is getting a little bit advanced: if you're interested in the istio-iptables command from the pilot-agent, you can actually understand some of the configuration I showed you early on, like what -d, -b, -i, and -u mean. The tooling actually has really good explanations. For example, let's figure out what -d means: it means exclude inbound ports. So why are we excluding these three ports? Because there are some ports we don't want Envoy to capture, for example health check ports. We don't want health checks to have to go through mutual TLS. So these are the ports we purposefully config the sidecar proxy to exclude from capture. Feel free to poke around if you're really curious about how this command works and what the flags mean. They're all there for you.

The next thing we want to talk about is the istio-proxy container. As we said, the proxy container uses the proxyv2 image, same as the init container. And it configs those same three ports in the exclude list for inbound capture: 15090 is the port that emits telemetry to Prometheus, 15020 serves the merged Prometheus metrics, and I believe 15021 is the health check port. So these metrics and health ports are excluded from capture.

If you go back to the istio-proxy configuration, notice that we also mounted a couple of volumes. For instance, the istiod CA cert, the root cert, is one of the mounts, and there's also an istio-token mounted into the sidecar proxy. Why do we do that? Because the proxy needs these to bootstrap: the sidecar uses them to send a certificate signing request to the Istio control plane, so it can get the necessary key and certs signed for the sidecar proxy to communicate with the rest of the services in the mesh.
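To make that concrete, here is a trimmed sketch of what the injected pod spec looks like. The image tag and exact flag values below are typical for this generation of Istio, not copied from the lab cluster, so treat them as assumptions:

```yaml
# Trimmed sketch of `kubectl get pod <web-api-pod> -n istioinaction -o yaml`
# after sidecar injection. Tags and flag values are representative, not verbatim.
initContainers:
- name: istio-init
  image: docker.io/istio/proxyv2:1.11.0   # assumed tag; same image as the sidecar
  args:
  - istio-iptables
  - -p
  - "15001"              # redirect outbound app traffic to this Envoy port
  - -z
  - "15006"              # redirect inbound traffic to this Envoy port
  - -u
  - "1337"               # the proxy's own UID is excluded, so Envoy isn't looped
  - -m
  - REDIRECT
  - -i
  - '*'                  # capture outbound traffic to all IP ranges
  - -b
  - '*'                  # capture all inbound ports ...
  - -d
  - 15090,15021,15020    # ... except these (Prometheus, health check, merged metrics)
containers:
- name: istio-proxy
  image: docker.io/istio/proxyv2:1.11.0
  args: ["proxy", "sidecar", "--domain", "$(POD_NAMESPACE).svc.cluster.local"]
  volumeMounts:
  - name: istiod-ca-cert   # root certificate used to verify istiod
    mountPath: /var/run/secrets/istio
  - name: istio-token      # projected SA token, sent with the CSR to istiod
    mountPath: /var/run/secrets/tokens
- name: web-api            # the application container, unchanged by injection
  image: nicholasjackson/fake-service:v0.7.8   # assumed tag
```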
With that, we're going to add more services to the mesh: recommendation, purchase-history, and sleep, by rolling-restarting their deployments. Now, if you kubectl get pods for istioinaction, you can see new pods coming up with the sidecar proxy and the older pods being terminated. We're going to keep curling web-api through the Istio ingress gateway, just to make sure that as we add services to the mesh, we're not breaking any of our existing traffic.

So what have you gained? You added the sidecar proxy, and it does come at a cost: those sidecars take a little bit of memory, a little bit of CPU, and they require some effort on your part. So what have you gained? To answer that, notice we have two terminals here now, and we're going to work with both of them. First, we'll put that web-api curl command in a loop to generate some traffic. Then, on the second terminal, we'll open up the Kiali dashboard. Once it opens, click on the Kiali UI tab here and you'll see the Kiali graph. Click on the graph and select istioinaction. You can choose to display different data; I like traffic animation, and I also like security. So it shows you the live traffic coming through, and it also shows you whether the traffic is mutual TLS. That's one benefit the sidecar brings you: you immediately gain access to all this data from the Kiali dashboard.

The other thing we want to talk about is distributed tracing. So we go back to terminal two and Ctrl-C to get out of here. Let me make sure I Ctrl-C correctly. And we copy this command over to start the Jaeger dashboard. Click on the Jaeger UI here. On the service dropdown, click on istio-ingressgateway, which would capture all the traffic, or you can click on web-api. Either way, it's going to capture the traffic coming through the web-api service. If you click on Find Traces, it shows you the last 20 traces. For instance, you can click on one of these and drill into a little more detail about that particular trace: where the time is spent, the duration. And you can expand the tags to find a little more information about the request size, the response, the upstream cluster. Pretty much everything you want to see is there. When things work, this dashboard can be a little boring. But when things break, when you have a response code other than 200, it can be really, really helpful for troubleshooting issues. The x-request-id header is how we know these spans belong to a single request. So it's very important to propagate these headers throughout your services, which may require a change to your services if they don't already propagate these B3 trace headers.

The next dashboard we're going to look at is Grafana. Click on terminal two here, Ctrl-C to get out, and use the istioctl dashboard grafana command to view the Grafana dashboard. Now we select, let's see, go back to the dashboard folder, click on Home. Let me refresh here; it seems a little slow to me, too. OK, so if you click on the Istio folder here, you can see the Istio service dashboard. It shows all the services that we have. All the data generated for web-api, purchase-history, and recommendation is available to you: client and server request volume, success rate, whether the traffic is mutual TLS. And it's application-level data, so there's a lot of interesting data, every bit of it available for you.
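To recap the two-terminal setup from this section: one loop generating traffic, and one istioctl dashboard session at a time. The gateway address lookup and URL below are my assumptions about how the lab exposes the ingress gateway; the istioctl dashboard subcommands themselves are standard:

```bash
# Terminal 1: a steady trickle of traffic through the ingress gateway.
GATEWAY_IP=$(kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
while true; do
  curl -sk -o /dev/null -w "%{http_code}\n" \
    --resolve istioinaction.io:443:"${GATEWAY_IP}" https://istioinaction.io/
  sleep 1
done

# Terminal 2: port-forward each UI as you need it (Ctrl-C between them).
istioctl dashboard kiali
istioctl dashboard jaeger
istioctl dashboard grafana
```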
So congratulations, you have added the sample application to Istio and observed the metrics, tracing, and dashboards. These are the benefits you get by injecting that sidecar into your services. If you click on the check, you should get a good result and progress to the next lab. I would say let's take a few minutes' break here before we continue. Ram, how is everybody doing? Yeah, I think that was much better; people were able to follow along. So let's keep that same pace for the next lab. But like I said, we'll do a few minutes' break. Maybe you could pull up a blank screen or something with a message saying we'll be back in five minutes, so that people who are streaming in can stay up to date. Yeah, sounds good. Let me create a new slide.

All right, congratulations, you all made it this far. Now we're going to talk about how to secure the services in your mesh. Let me share my screen. This is lab four; we've done two labs together. We're going to talk about how you incrementally add services to the mesh, which you actually already did in the previous lab. But you might remember you saw the mutual TLS security badge in your Kiali dashboard, right? That's because Istio automatically configs mutual TLS for you. However, that's best effort: we don't guarantee it's mutual TLS. If mutual TLS works, the traffic goes through; if mutual TLS doesn't work, we send plaintext. So in order to force the traffic to be strict mutual TLS, you have to apply a strict mutual TLS policy, either for your target service, for your namespace, or for the entire mesh. That's what we're going to do in this lab. We're going to enable mutual TLS, we're going to talk about how the workload keys and certificates are distributed in Istio, and we're going to inspect the key and certificate for each of the services, so that you can understand how mutual TLS is enforced by the Istio sidecar proxy.

With that, we're going to jump into a demo for this lab. You should see this; click on the Start button, which starts this particular lab, securing communication with Istio. You should already be in the Istio basics directory, so you can skip the prerequisites. The first thing we do is check what PeerAuthentication policies exist in my Kubernetes cluster. The PeerAuthentication policy is the one that specifies the strict mutual TLS policies in my Istio cluster. As you can see, I have nothing specified. By default, Istio allows permissive mutual TLS, which is what I was mentioning earlier: the sidecar proxy does a best effort to upgrade your connection to mutual TLS, but if that doesn't work, we downgrade to plaintext and still allow the communication to go through. That's what permissive means.

So we're going to apply strict mutual TLS for the entire mesh. The reason I say it's the entire mesh in this example is that I'm applying it to the istio-system namespace, which is the root configuration namespace for my cluster. I'm naming this PeerAuthentication policy default, and I'm applying mutual TLS mode STRICT. So by deploying this default policy in my root namespace, it applies to the entire Istio mesh. As you can see, it's applied. You can also run kubectl get peerauthentication, and you should see the policy we just applied.
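Spelled out, that policy is just this:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default            # the conventional name for a catch-all policy
  namespace: istio-system  # the root config namespace, so this applies mesh-wide
spec:
  mtls:
    mode: STRICT           # reject plaintext rather than permissively allowing it
```

Scoping the same resource to a regular namespace, or adding a workload selector, gives you the per-namespace and per-service variants mentioned above.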
Now let's see mutual TLS in action. To do that, we're going to deploy a sleep service in the default namespace. The reason we do that: in the default namespace, the sleep pod is 1/1, so it's just one sleep container without the sidecar. Now, if you look at the pods in the istioinaction namespace, you'll recall we also have a sleep pod in there, but it's 2/2, meaning it has the Istio sidecar proxy injected.

So we run a curl command to access web-api from the sleep pod in the default namespace. What do you think is going to happen? It's actually not allowed. Why? Because we applied the strict mutual TLS policy for the entire mesh, which includes every single service in the mesh, including the web-api service. When the sleep pod in the default namespace tries to reach the web-api service in the mesh, so we're talking about a client outside of the mesh trying to reach a service inside the mesh, it doesn't have the key and certificate to make a mutual TLS connection, which is why the connection is rejected. Now, what if you run the same command from a different namespace, the sleep pod in the istioinaction namespace? This time, it succeeds, right? Because that sleep pod has the Istio proxy, it has the keys and certificates to establish the mutual TLS connection and do mutual TLS communication with the web-api service.

If you go back to the Kiali dashboard, so let's generate some traffic, you should see something a little different. I want to point out that difference quickly, because you're probably wondering: I was already seeing my security badge, right? So what's the difference? If you navigate to the Kiali graph we mentioned, you can still display the security badge and traffic animation, same as before. But one key difference here: if you click on the security icon, it now shows that mesh-wide mutual TLS is enabled. So Kiali knows we have mutual TLS enabled for every service within the mesh. By the way, this enforcement happens on the target service side.

So let's dive a little deeper to understand how this works. With that, I'm going to exit the Kiali dashboard command so I have my terminal back. Now I can run istioctl proxy-config. By the way, it's a super helpful debugging command that lets you look at the Envoy configuration, much of the sidecar proxy configuration, for each of your services. If you check the web-api secrets, you can see we have the default secret and the root CA secret for web-api. The default secret contains the public certificate for the web-api service, and you can analyze the default secret a little more by examining the issuer: it's issued by cluster.local, which is your local Kubernetes cluster. You can also check whether the public certificate is valid for the web-api container. You can see it was just generated, and it's actually only valid for a day, by the way. Don't you find that interesting? After that day, how does it stay valid? Istio actually handles certificate rotation for you automatically before it expires: it constantly checks when the certificate is expiring, stages the rotation for you, and renews the certificate the same way the initial key and certs were signed.
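Concretely, the inspection we just walked through looks roughly like this. I'm assuming the pod carries an app=web-api label, and the jq path follows the JSON shape of recent istioctl releases, so verify both on your version:

```bash
# Find the web-api pod (assuming it's labeled app=web-api).
WEB_API_POD=$(kubectl -n istioinaction get pod -l app=web-api \
  -o jsonpath='{.items[0].metadata.name}')

# List the secrets the sidecar holds: "default" (workload cert) and the root CA.
istioctl proxy-config secret "$WEB_API_POD" -n istioinaction

# Decode the workload certificate and inspect its issuer and validity window;
# the notAfter date should land roughly 24 hours after notBefore.
istioctl proxy-config secret "$WEB_API_POD" -n istioinaction -o json \
  | jq -r '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' \
  | base64 --decode \
  | openssl x509 -noout -issuer -startdate -enddate
```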
Now let's validate the identity in the certificate and check that it's valid. You can grab the subject alternative name, where you'll see the SPIFFE ID for this particular service. Let's spend a little time understanding that SPIFFE ID. Where does cluster.local come from? Then there's the namespace, istioinaction, which is your namespace, and the service account, web-api. If you go to the istio configmap: during installation of Istio, by default, we config a trust domain for you, which is part of your SPIFFE ID. That trust domain defaults to cluster.local with the default profile. You can choose to override it if you want, in your IstioOperator YAML. So that's how your SPIFFE ID is built. The other thing I want you to check is the service account. We talked about this earlier, but in case you don't remember, the web-api deployment uses this particular service account we created, and it's also what appears in the SPIFFE ID for this particular service.

Now, the next question I want you to think through is: how does the web-api service obtain the necessary keys and certificates? Remember, earlier we reviewed the injected istio-proxy container for the web-api pod, which you can always do by finding the pod and running kubectl get pod on web-api with -o yaml. We did that together when we examined the init container and the sidecar proxy container. There were a bunch of volumes mounted into the istio-proxy container, including the istio-token we reviewed together; it's on the right side here, too. So what are these things, the CA cert and the istio-token, for? They are mounted into the Istio sidecar container so that when it starts, the Istio agent, which is also called the pilot-agent (remember, you ran the pilot-agent command to check how it configs the iptables), can create the private key for the web-api service and then send a certificate signing request to the Istio certificate authority, which by the way is the istiod control plane, installed in our cluster in the istio-system namespace. That's istiod, where the d stands for daemon. It sends the certificate signing request using the istio-token, the web-api service account token, as the credential to talk to istiod. The Istio agent gets the signed certificate back from istiod and then delivers the key and certificate to the Envoy proxy through the Envoy SDS API. All this magic happens in the sidecar without you needing to do anything; the only thing you need is to get the sidecar injected. And we talked about the 24-hour expiration.
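To tie the SPIFFE ID discussion back to the certificate itself, here's how you can pull the SAN out and check where the trust domain is configured. Same caveats as before on the jq path, and the grep at the end may come up empty if the trust domain is simply left at its default:

```bash
# Print the SAN of web-api's workload cert; expect something like
#   URI:spiffe://cluster.local/ns/istioinaction/sa/web-api
# i.e. spiffe://<trust domain>/ns/<namespace>/sa/<service account>.
WEB_API_POD=$(kubectl -n istioinaction get pod -l app=web-api \
  -o jsonpath='{.items[0].metadata.name}')
istioctl proxy-config secret "$WEB_API_POD" -n istioinaction -o json \
  | jq -r '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' \
  | base64 --decode \
  | openssl x509 -noout -text \
  | grep -A1 'Subject Alternative Name'

# The trust domain half comes from mesh config (default: cluster.local).
kubectl -n istio-system get configmap istio -o jsonpath='{.data.mesh}' \
  | grep trustDomain
```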
The next question you may be wondering about is: how is mutual TLS strictly enforced? This is something I actually find really interesting. When you run istioctl proxy-config against your web-api deployment, you get to see all the Envoy configuration. One interesting thing: before you have strict mutual TLS enabled, you see that raw buffer, plaintext, is allowed to your services in the mesh in a lot of places. Once mutual TLS is enforced, the raw-buffer allowance gets reduced. We still allow raw buffer for services not in the mesh, but for services in the mesh, raw buffer is no longer allowed. So we no longer allow plaintext in that sense, and that's how we make sure the traffic is mutual TLS and only mutual TLS is allowed.

The other question I want you to start thinking about: there's so much data. Every time I run istioctl proxy-config, there's just so much Envoy configuration; it's hard to read and interpret. How can I config Istio so each proxy only listens for the stuff it cares about? How can I selectively config what my Istio sidecar proxy can see? In fact, in 30 minutes or so, Ram is going to talk about best practices for deploying Istio in production, and he's going to cover the exportTo field and the Sidecar resource. We also plan to cover discovery selectors in one of our future workshops, so stay tuned. These are available in Istio, but we're not going to cover them in this lab.

So congratulations, everyone: you have enforced a strict mutual TLS policy for all the services in your entire mesh. Next, we're going to explore controlling traffic to and within your services. Click on the check button, which loads the next challenge for you, and you can click Start to get to the next lab. How's everybody doing? We have a couple of thumbs up, right? Yeah, I think you can proceed. That's awesome. I think we only have 17, 20 minutes, and this lab is a little bit long, I have to say, the control-traffic lab. Essentially, we're going to introduce a new version of one of our services, and we're going to teach you: how do you dark launch the new version? How do you do canary-based routing? How do you add resilience to your service? How do you interact with external services? We may not be able to get through everything, but the lab environment will be available for you throughout the day, so feel free to play with it.

In this lab, in addition to gateway and virtual service, we're going to talk about virtual services that do not bind to a gateway. This is interesting: you can config route rules without a gateway, right? Because not every single one of your services connects to the ingress gateway directly. So we're going to talk about that. We're also going to config subsets and policies for your destinations. Destination rules are essential when you have multiple versions of your services: you're going to need to config how you shift your traffic and what the subsets of your services are. And service entries allow you to access external services: by registering an entry in the Istio service registry, Istio knows about the external service and how to let you connect to it. So in the first portion of lab five, we're going to do a dark launch of purchase-history version two. Then we're going to do a canary test, routing 20% of the traffic to version two of purchase-history. Then we're going to control outbound traffic, configuring Istio to only allow registered external services to be reached. With that, we're going to jump into lab five. We do have a bonus lab, but it's unlikely we'll have time to cover it. You can do it on your own; it's not required for the certification.
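To make that plan concrete before we start, here is a sketch of the dark-launch pair we apply first. The hostnames and version labels are my assumptions based on the lab's naming:

```yaml
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: purchase-history
  namespace: istioinaction
spec:
  hosts:
  - purchase-history.istioinaction.svc.cluster.local
  http:
  - route:
    - destination:
        host: purchase-history.istioinaction.svc.cluster.local
        subset: v1
      weight: 100          # dark launch: v2 can be deployed but gets no traffic
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: purchase-history
  namespace: istioinaction
spec:
  host: purchase-history.istioinaction.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1          # matches the version label on the v1 deployment's pods
  - name: v2
    labels:
      version: v2
```

Later steps in the lab swap the single route for a header match (user: Jason goes to v2) and then for weighted 80/20, 50/50, and 0/100 splits.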
So make sure you're in the Istio basics directory, which I am, and let's check out purchase-history version two. As you can see, I actually built this service myself; it's a fork based on Nic Jackson's fake service. The only reason I built version two is that I wanted to introduce an external service into my purchase-history service. It calls this external service and gets a response that I can't really read, but it's essentially a different response every time. And it's just a deployment here.

So we talked about destination rules. The first thing we're going to do is config a dark launch by routing all the traffic to version one with weight 100. We also have a destination rule, because in our virtual service we specify subset v1. In the destination rule, we define what the v1 and v2 subsets are: it's a label selector matching the labels on the version one and version two deployments. If you remember our deployment labels here, we did label them so the selector match labels line up. Let's go ahead and apply this virtual service and destination rule to our environment. Now we have configured Istio to send 100% of traffic to version one, and we can introduce version two. By the way, you really want to apply this before you introduce version two, because the moment you apply version two, Kubernetes by default will round-robin the traffic between version one and version two, before you've actually had a chance to test version two.

So now we deploy version two into our environment. If you kubectl get pods in the istioinaction namespace, you can see both version one and version two running. Let's take a look at the logs of purchase-history version two. There's actually an error here: unable to connect to the external service, connection refused. So this is interesting. Why do we get this error? OK, let's generate some load first, just to make sure our traffic is still good. As you can see, we config 100% of the traffic to version one, so even though version two isn't healthy, we're still fine; we're not breaking anything.

Now let's think through why we're seeing this behavior. Why does this pod get connection refused trying to reach the external service? Remember, I mentioned early on that after the init container runs, the sidecar proxy and the application container, web-api or purchase-history, start in parallel. This poses a problem: the init container has already set up the iptables rules to redirect all traffic through the sidecar, so what if the purchase-history container needs to reach the external jsonplaceholder service before the Envoy sidecar is ready? It can't. That's essentially the problem we're running into. Luckily, there is a configuration that holds the application container until the proxy starts. Let's make sure we include it. To do that, you add an annotation to your deployment, like this, and let's go ahead and deploy it. Fingers crossed this fixes it for us. OK, let's check the logs for this pod. It actually takes a little bit of time; sometimes the logs do take a while, because it takes time to start the container. Let's try again. Yeah, now you can see the logs from the new pod, which started just 18 seconds ago, and we're able to connect. That's all because we config Istio to hold the application container until the proxy starts. I highly recommend this: if your container, your service, requires connectivity outside your Kubernetes cluster at startup, make sure you config this. I actually stumbled on this myself, and it took me a while to solve the problem.
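The annotation in question is this one; it goes on the deployment's pod template. (In recent Istio versions you can also turn the same behavior on mesh-wide through meshConfig.defaultConfig.holdApplicationUntilProxyStarts.)

```yaml
# Deployment snippet: don't start the app container until the sidecar is ready.
spec:
  template:
    metadata:
      annotations:
        proxy.istio.io/config: |
          holdApplicationUntilProxyStarts: true
```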
All right, so let's test version two of purchase-history. As you can see, we can test it successfully by exec-ing into the container and curling localhost. That's the simplest possible test. Once we've tested a little, we want to do header-based routing, because we don't want to test only from inside the container; we want to reach it through the other services. To do header-based routing, remember, we talked about route rules: you config a header match in the virtual service. When the user header is Jason (capital J, by the way), you route the destination to version two, and any other traffic continues to version one. So let's go ahead, apply this, and do some testing.

So we pass user Jason. Why is it still going to version one? All right: in the route rule we had an exact match on Jason, and we were using a lowercase j in the curl. So let's fix that. If you use user Jason with an uppercase J, fingers crossed, ta-da, it does return purchase-history version two, exactly as we told Istio to do. So this is great; we were able to test it.

Now let's do a canary and put 20% of traffic on version two. Go back to our virtual service and change version one to weight 80 and version two to weight 20, so we can achieve 20% traffic. Let's go ahead and apply this virtual service. Now, if we run a loop accessing our web-api service and grep for purchase-history, you can see that out of 20 requests, we got exactly four from version two. That's 20%. Let's continue the trend and shift more to version two: 50% this time. Apply the configuration and run the loop, and as you can see, it's about 50%. I'm not going to count, but it's nice that with Istio, you can config a simple virtual service and, without needing to restart anything, shift traffic to exactly what you want it to be. That's the power Istio brings to controlling traffic. The next thing we do is shift all the traffic to version two. Apply that and run the loop again; as you can see, every single request goes to version two now. Very nice.

Now, we're going to teach you how to control outbound traffic. By default, Istio allows all outbound traffic to go through. I'm just double-checking the outbound policy here: we don't have any outbound policy set, so the default is allow-any. To config Istio to allow registered services only, let's go ahead and run this install command, setting the outbound traffic policy to registry-only. As you can see, the configuration is updated. By the way, this configuration does take a little while for Istio to pick up. The main reason is that it's updated in a config map, and it takes a while for Istio to recognize the change in that config map and push it down to the Envoy sidecar proxies. So now if we send traffic to the web-api service, let's see how it goes. It still goes through, because like I said, it's going to take a while. And yeah, now you can see it fails. The Envoy proxy has been refreshed with registry-only, so it's not going to let purchase-history through, because it reaches out to an external service it's not registered to access. And if you run kubectl logs on the pod, you can actually see that we retry for you. Remember what we talked about? In Istio, you automatically get two retries, which is exactly why you see an attempt count of three in the logs.
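That failure is the registry-only policy doing its job, and the fix applied next is a ServiceEntry. As a sketch, assuming the external API that purchase-history v2 calls is jsonplaceholder.typicode.com over plain HTTP (the host and port here are assumptions):

```yaml
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: jsonplaceholder
  namespace: istioinaction
spec:
  hosts:
  - jsonplaceholder.typicode.com   # the external host to admit into the registry
  location: MESH_EXTERNAL          # outside the mesh: no sidecar, no mTLS expected
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS                  # let the proxy resolve the hostname via DNS
```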
So let's go ahead and fix this. We have about four minutes left, so I'll run through it. The fix is to apply a ServiceEntry resource that imports this external service into the mesh. Now once we import it, fingers crossed, we should have access. Yeah, it did work. So that's the nice thing, right? You have precise control over which external services are allowed. And that's all for the control-traffic section.

I'm going to go through the rest of my slides, because some of you might be interested in the badge program. We do have this foundation badge for Istio that I talked you through. We'd really appreciate you giving us feedback through the survey and taking the test; Ram, maybe one of us can send that out. By completing the test and passing at 80%, you'll get this foundation badge for Istio, certified by Solo. And please give us feedback through the survey. We also have the rest of the talks from the Solo team: make sure you attend Ram's workshop if you liked this one. We also have a talk about designing service management at global scale at SAP, which is really interesting, plus a multi-cluster workshop, and Christian is also talking about zero downtime. So don't miss any of our talks, and visit us at the booth too; we have in-person booths and virtual booths, so please engage with the Solo team. That's all I have.

All right. Thank you, Lynn. Thank you so much, Ram, Will, and Eric; I appreciate your support. And thanks, everyone, for attending and joining us for the workshop. We do have a lot of future workshops at solo.io, and if you go to our events page, we run these types of workshops very frequently, on a monthly basis, so feel free to join us there too. Yeah, just to elaborate on that: we covered a lot of content in a very short amount of time. If you're interested in this but want to go at it at a much slower pace, go to solo.io and look for the events section. You can see where we run this workshop, plus the workshop I'm going to run next, which is the slightly more advanced one on deploying Istio to production, and we also have workshops around multi-cluster meshes, API gateways, Envoy, et cetera. OK, so I think at 11:10 is the next workshop, which I will be running. Give me a couple of minutes to swap computers, and then we'll get started. OK, great. Thanks, everyone. Have fun learning.