So hello everyone, the next workshop is going to be the Istio Service Mesh Getting Started Workshop by Lin Sun. Thank you so much for joining, and take it away. Okay, thank you. Hey, everyone. Sorry, I hit the wrong button there — I apologize. Okay. So first, I want to introduce myself. Thank you for coming to my workshop. I am the Director of Open Source at Solo.io. I've been working on Istio for four-plus years now. I wrote a book to help users quickly get started with Istio, and now I also provide workshops using similar content from my book, just a bit more updated. So real quickly, you might be wondering what Solo is, right? Why is Solo doing an Istio workshop? Solo provides the whole Gloo API infrastructure. Fundamentally, we are solving the challenge of service connectivity. It's an application networking company, where we connect your services at the edge or inside of the service mesh. We have a developer portal built on top of Istio. We have WebAssembly Hub, which allows users to extend Envoy as needed. Gloo Mesh is the mesh product we provide to help users simplify Istio. And we are hiring — there are actually a lot of open roles at our company, so if you're looking for a job, check it out. Now, what I'm offering today as a workshop: I don't think you can all see the slides in presentation mode, so if you have trouble seeing them, I'm just going to share them this way, without maximizing — would that work? Yeah, I think that would be fine too. I don't know what's going on, but I sometimes hit this with conference tools, not always. So what we are covering today is the Istio workshop. It's a shortened version, because I normally run this workshop for two hours and thirty minutes, but given the limited time we have.
So we're going to run a shortened version of this workshop to fit the 60-minute time frame we have here. It's part of a badge we offer, so if you're interested in learning more about this workshop and getting the badge, you can always register for a workshop at solo.io — we offer these workshops on a monthly basis. So let's talk about common Istio adoption patterns, because those are the things we're going to cover in the labs. Due to the limited time, we're not going to be able to do every single lab. So if you already know a little bit about Istio, I'm going to let you pick which lab you do, because I want to do whatever benefits you the most. If you already know how to install Istio, skip the installation lab and jump to the next one — I know many platforms provide that capability for you. We're going to talk about how you expose your services through the Istio ingress gateway, and how to do that securely — that's the second lab. In the third lab, we're going to talk about how you observe the services within the service mesh: how you get the Istio dashboard running with Kiali, which is a project sponsored primarily by Red Hat; how you view distributed tracing for your microservices; and how you view the Istio mesh graph from that dashboard. So we'll cover all of that, and it's all related to observability. The fourth lab is about how you incrementally adopt mutual TLS for your mesh. The fifth one is about controlling your traffic: what if you have multiple versions of a particular service? How do you dark launch a service? How do you gradually shift traffic to the newer version? How do you add resilience to your services? So those are all the labs. Let me talk about the lab environment in the meanwhile. Actually, before I talk about the lab environment, let me send you a link.
So what we have right now is a platform provided by Instruqt. I'm going to paste this link into our chat — you should see it now. Go ahead and visit that URL. It's going to ask you to log in, which you can do with either your GitHub ID or your Google ID; I think they also support Twitter or Facebook, I can't remember exactly. I would assume that if you're in the United States, you probably have one of those IDs. So go ahead, just log in, and you should be able to launch a window like the one I'm showing now, which I'm going to close. You should be able to see this workshop like what I'm seeing now — the invite I just provided gives you access to it. We have five tracks. The last one is a bonus, which you probably won't have time for, and I'm going to let you pick which one to do because of the timing we have — like I mentioned, this is normally two hours and thirty minutes. If you already know how to install Istio, just skip to the Istio Ingress Gateway track. For me, because the install is straightforward, I'm going to skip to Istio Ingress Gateway. And given the time we have, I'm going to try to teach you lab two, lab three, and lab four, and then if you have time after the session today, feel free to continue the labs at your own pace — I think the link is valid for a few more hours today. Any questions? Once you decide which challenge you're going with, click on "skip to", like what I'm doing now. It's going to prep your environment — as you can see, it takes about two minutes for my environment to come up. So go ahead and click that button for whichever lab you want. If you don't have a preference, just skip right to the second lab and follow what I'm doing. Now let's talk about the environment.
So the environment you're deploying right now essentially provisions a Kubernetes cluster in one of three clouds — it tries to provision one close to you. Istio is installed in lab one: either you do it yourself, or if you skip to lab two, Istio is installed for you automatically. The environment provides a terminal and connects you automatically to the Kubernetes cluster, where you can run kubectl commands. That's essentially the environment. The first lab just installs Istio, so nothing too interesting there. In the second lab, which we're showing, we're going to expose services on the ingress gateway — we're not going to do anything related to the egress gateway. We're going to touch on the Istio networking resources, Gateway and VirtualService. The Gateway is really the edge load-balancer configuration, and a VirtualService is a list of routing rules. So essentially, in lab two we're going to deploy three services if they haven't been deployed already, and then wire up the web-api service and expose it on the Istio ingress gateway — and do that securely. My environment is still coming up; let me know if you have any questions in the chat. While we're waiting for the environments, since that's going to take a little while, I'm going to walk through the next lab with you. Let me double-check the environment — yeah. The next lab we're going to talk about is mesh observability. What we're going to do is incrementally add services to the mesh, and you're going to see that you automatically gain visibility into how the services interact with each other, simply by adding them to the mesh. This is the powerful thing provided by a service mesh through the Envoy proxy. We're going to talk about deploying pods and services to the mesh.
So a few important things here: make sure you name the service port for each service port that you want Envoy to capture. The pod must have a service associated with it. We want you to label deployments with app and version — the reason is so that our telemetry and observability system can pick them up. We don't want you to run your application with UID 1337, mainly because the Envoy proxy is using it. And we want you to check whether you have the NET_ADMIN and NET_RAW privileges, because if you don't, we recommend you look at Istio CNI, which is intended to solve this problem. So in lab three, we're going to reuse the same three services, add a sidecar proxy to each of them, and enjoy the observability data. Now, if I go back — how many of you have the environment up and running? Let me do a poll here; let me see if I can create one. Actually, I don't think I can create a poll, I don't have permission. — I can create it for you. — Great, create one just so I have an idea of where people are. — What do you want to ask? — Just whether the environment is ready to go: is your environment up and running? Okay, if your environment is up and running, go ahead and follow along with me. If it's not, just watch me, and when your environment is running you can go ahead and do it. So what we're going to do next is click on the start button. And can you guys see my screen okay, with the right font? — I can see your screen with the terminal and the editor. — The text size is okay? I don't need to change anything? — Yeah, I think it's fine. — Perfect, okay. So the first thing we're going to do is check which directory we're in. We're going to cd into the istio-basics directory, which is our lab. Then we're going to deploy the sample applications we talked about: web-api, recommendation, and purchase-history.
By the way, these samples are built using the Fake Service from Nic Jackson at HashiCorp. So the first thing we'll do is create the namespace, called istioinaction, and then deploy the sample application into that namespace. As you can see, we deployed web-api, recommendation, purchase-history version one, and sleep. Notice we're not injecting any sidecars — this has nothing to do with Istio yet; this is just plain Kubernetes. Okay, all my pods are running. The next thing we're going to do — remember, we're going to wire up web-api to the Istio ingress gateway. So we're going to get the gateway IP of the istio-ingressgateway service, and we're going to export the port and the secure port. You can see the gateway IP here, which we captured. Now what we're going to do is expose our application. Let's review what you need to do to expose the application. The first resource we'll review is the Gateway resource. This is an Istio resource, as you can tell from the networking.istio.io API group, and its kind is Gateway. The important things here are the name of the gateway and the selector that says which deployment this Gateway resource is for — the selector selects the Istio ingress gateway we saw earlier. Then: on which port are you exposing your service? We start with 80, which is not secure at the moment, but we want to start there. And what hostnames are you going to expose on that gateway? Next, let's review our VirtualService configuration. The right way to think about a VirtualService is as a set of route rules. Essentially you tell Istio: for this web-api gateway I just created, if the host is istioinaction.io, go ahead and route the traffic to web-api in the istioinaction namespace — and by the way, route it to port 8080, even though the port on the gateway was 80. That's essentially what it means.
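The pair of resources being reviewed can be sketched roughly like this — the resource names are placeholders for illustration; the hostname, selector, and ports are the ones from the walkthrough, and the exact manifests in the lab may differ slightly:

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: web-api-gateway          # name assumed for illustration
  namespace: istioinaction
spec:
  selector:
    istio: ingressgateway        # selects the default Istio ingress gateway
  servers:
  - port:
      number: 80                 # start with plain HTTP
      name: http
      protocol: HTTP
    hosts:
    - "istioinaction.io"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: web-api-gw-vs            # name assumed for illustration
  namespace: istioinaction
spec:
  hosts:
  - "istioinaction.io"
  gateways:
  - web-api-gateway
  http:
  - route:
    - destination:
        host: web-api.istioinaction.svc.cluster.local
        port:
          number: 8080           # the service port, not the gateway port
```

The Gateway binds the listener on the ingress deployment; the VirtualService attaches the route rules to that gateway by name.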
What we're going to do next is understand why 8080 is here. If you review the web-api service in the istioinaction namespace, you can see the service is running on 8080, which is why. So we're going to apply these two resources — the VirtualService and the Gateway we just reviewed — together. We apply them to Kubernetes, and the Istio control plane essentially listens for these resources from Kubernetes and then propagates them to the Envoy sidecars, translated into a language Envoy can understand. Now if we curl the gateway IP with the host header on the ingress port, which is 80 here, you're going to see a hello message back from the web-api service. You can also drill a little deeper into the route configuration — as you can see, this is exactly the route we just configured on the Istio ingress gateway. If you want to see individual routes, you can do that too. For instance, for this http.80 route we created, you can see it routes to web-api, port 8080, in the istioinaction namespace. It also says there are retries — the number of retries is two, and we retry on 503. If any of these error conditions occur, Istio will retry for you. So this is the detailed route configuration Istio automatically translates for you, based on the Gateway and VirtualService resources we just deployed. Okay, the next thing: this is plain traffic, right? We need to figure out how to secure that traffic. So the first thing we're going to do is create a TLS secret — named for istioinaction.io — in the istio-system namespace. The reason we create it there is to make sure the Gateway resource can use it. Let's review this configuration: this is an updated version of the same web-api gateway resource.
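The two verification steps above amount to something like the following, assuming the gateway address and port were exported into `GATEWAY_IP` and `INGRESS_PORT` earlier (variable names are illustrative):

```shell
# Call web-api through the ingress gateway over plain HTTP
curl -s -H "Host: istioinaction.io" "http://$GATEWAY_IP:$INGRESS_PORT/"

# Inspect the route Istio generated on the ingress gateway (the http.80 route)
istioctl proxy-config routes deploy/istio-ingressgateway -n istio-system \
  --name http.80 -o json
```

The JSON output is where the retry policy (two retries, retry on 503) shows up, even though we never wrote it ourselves — it comes from Istio's defaults.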
In this one, we actually delete port 80 and replace it with 443, because we don't want to allow plain HTTP access — we only want to allow secure access. The hostname is the same. We did update the protocol and the name for the port. And one important thing here: we use the credentialName, which points at the secret we just created a minute ago. Now let's go ahead and apply this updated Gateway resource. As you can see, this is just like deploying an update to your Deployment or Service with Kubernetes — basic Kubernetes commands. Now that this updated Gateway resource is applied, if you curl the Istio ingress gateway on the secure port, which is 443 here, we do expect you to get a hello back from web-api. Now the question I have for you: what if we run the command we ran five minutes ago and visit web-api through the ingress gateway over plain HTTP — do you think it's going to succeed? Yeah, the answer is no, it's not going to succeed, because we specifically configured Istio to only allow traffic on the secure port. So that's it, guys — this lab teaches you how to expose your services on the Istio ingress gateway, so users outside of the cluster can reach them, and how to secure that connection. You can run the check command, which tells you whether your lab was successful — if you did all the challenges, we have automated scripts to verify, and then it sets up the challenge for the next lab. I'm going to wait here for a minute, just to make sure you catch up with everything I did. If you have any questions, let me know in the chat. Okay, I see three people said they have Instruqt running and zero said not running, so most people probably have it running. Thanks for the poll. Let's move on to the next challenge.
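The secure variant of the gateway looks roughly like this — the secret name `istioinaction-cert` and the cert/key file names are placeholders; use whatever the lab materials actually provide:

```yaml
# First create the TLS secret the gateway will reference, e.g.:
#   kubectl create -n istio-system secret tls istioinaction-cert \
#     --key=istioinaction.io.key --cert=istioinaction.io.crt
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: web-api-gateway
  namespace: istioinaction
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443                # port 80 removed; HTTPS only
      name: https
      protocol: HTTPS
    hosts:
    - "istioinaction.io"
    tls:
      mode: SIMPLE               # terminate TLS at the gateway
      credentialName: istioinaction-cert
```

Because the port-80 server block is gone entirely, the old plain-HTTP curl has nothing to connect to on the gateway, which is why it fails.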
As I mentioned, this is an observability challenge that lets us gradually add services to the mesh and observe the benefits of the metrics and tracing provided by Istio. To add something to the mesh, we recommend you use automatic sidecar injection. You could use manual injection, but automatic has much better integration with Helm if you're a Helm shop, and automatic sidecar injection is also very mature — so there's no reason not to use it. The first thing we'll do is label the istioinaction namespace with istio-injection=enabled. That essentially tells Istio: for anything deployed in this istioinaction namespace, I want the sidecar automatically injected. You can query the namespace to see that we do have injection enabled on istioinaction. The next thing we'll do is review the service requirements for onboarding services to Istio. Let's review the web-api service. As you can see, we did name the port — http — on port 8080 with target port 8081. In Istio, we recommend you name the port. This is extremely important because it lets you tell Istio "this traffic is HTTP," so Istio doesn't have to spend CPU or memory guessing what it might be — and it could guess wrong. We do have automatic protocol detection, but it uses more CPU and memory, and it can occasionally get it wrong, so explicitly declaring your protocol is highly recommended. Now, from the deployment descriptor here, you can see we have app and version labels. This is also highly recommended: it lets us associate telemetry data with a particular app and version, which is how we can filter that data and bring it back to you. And as you can see, the container port is 8081, which is why the target port maps to 8081 there.
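Put together, the onboarding requirements being reviewed look roughly like this — the image tag is a placeholder, and the rest follows the ports and labels described above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-api
  namespace: istioinaction
spec:
  selector:
    app: web-api
  ports:
  - name: http          # naming the port declares the protocol explicitly
    port: 8080
    targetPort: 8081    # matches the containerPort below
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api
  namespace: istioinaction
spec:
  selector:
    matchLabels:
      app: web-api
  template:
    metadata:
      labels:
        app: web-api    # app and version labels drive telemetry attribution
        version: v1
    spec:
      serviceAccountName: web-api
      containers:
      - name: web-api
        image: nicholasjackson/fake-service:v1   # placeholder tag
        ports:
        - containerPort: 8081
```

The namespace label that turns on injection is just `kubectl label namespace istioinaction istio-injection=enabled`; nothing in the workload manifests has to change.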
You're welcome to check the other services, but in the interest of time we're going to move on, now that you know what to check and how to check it. The first thing we'll do is a rollout restart of the web-api deployment. The reason is that the moment we restart it, it comes back up as a new pod with the sidecar injected, because we asked for everything in the namespace to get the sidecar automatically. Now if we query this pod, as you can see it shows 2/2 — that means the sidecar is in the new pod; the sidecar was injected into it. So everything is up and running and looking good. You can also check the pod logs, just to make sure they still look good after the sidecar is injected — in this case, they do. Now, the other important thing is: okay, you added one service to the mesh, and the other two services are not in the mesh — is that impacting any of your production traffic? Does web-api continue working? And it looks like it does continue working, and it's still able to call recommendation and purchase-history. So you can keep testing like this as you add services to the mesh. Let's take a look at what happened here. If you get the pod with -o yaml, you can actually find a lot of detailed information. The first thing I want to highlight is that there is an initialization container — the init container here. The purpose of that init container is to set up the iptables rules for your pod, so the Envoy sidecar can capture all the incoming traffic and also all the outgoing traffic. If you're interested in learning what these parameters mean — -u, -m, which I can never remember — this is an easy way to get an understanding of each of them. Then there's the istio-proxy container, which is the other container.
So we reviewed the initialization container — which, by the way, exits right after it finishes the initialization. In the istio-proxy container, you can see it uses the proxyv2 image for Istio 1.10, which is the version I have right now. You can see we specify the CPU and memory, which is extremely useful for capacity planning. And you can also see the Istio CA cert and an Istio token are mounted into the pod — the Istio CA cert and the Istio token mounted right here. A lot of information is mounted into the pod, just to make sure the proxy is able to get the token, talk securely to the control plane, establish the connection, establish the identity of the proxy, and get its key and cert signed. So all this magic happens in the istio-proxy container. The next thing we're going to do, since everything's going well, is a rollout restart of the rest of the deployments in the istioinaction namespace. Now if we run a get pods command, as you can see, all the new pods that came up have the istio-proxy sidecar injected. And if we continue to curl our application on HTTPS, you can see the application is still running — there's not really much impact, other than that now we have the sidecar, so we do have to plan capacity to make sure we have enough resources to run it. What we're going to do next is generate some load. I'm going to put a loop of 100 iterations, sleeping three seconds between each, so we continuously hit our web-api service. Now if you go to terminal two, which is right here, I want you to click on this istioctl dashboard kiali command. That does a port-forward for you, so the Kiali UI can actually be shown. So let's click on the namespace drop-down.
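The load-generation loop and the dashboard command amount to something like this, assuming `GATEWAY_IP` and `SECURE_INGRESS_PORT` were exported earlier (variable names and the self-signed-cert handling are illustrative):

```shell
# Hit web-api through the secure ingress gateway 100 times, pausing 3s between calls
for i in $(seq 1 100); do
  curl -s -o /dev/null \
    --resolve "istioinaction.io:$SECURE_INGRESS_PORT:$GATEWAY_IP" \
    -k "https://istioinaction.io:$SECURE_INGRESS_PORT/"   # -k: lab cert is self-signed
  sleep 3
done

# In another terminal: port-forward and open the Kiali UI
istioctl dashboard kiali
```

The steady trickle of requests is what makes the Kiali graph light up; without traffic, the graph stays empty.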
Navigate to the Kiali UI, select Graph from the menu, and then select istioinaction from the namespace drop-down. Now you can see I have my graph available. You can actually see the call sequence of my services: from the Istio ingress gateway to web-api, to recommendation, to purchase-history. Click on any of these and you get a bit more stats on that connection — request times, error codes. All that information is automatically available for you without you needing to do anything other than have the sidecar injected. By the way, I don't understand why Kiali doesn't enable these by default, but I always enable them: the traffic animation and the security badges. If you turn them on, you can see the traffic actually flowing constantly through the system. The next thing we're going to look at is distributed tracing, so you can see another benefit you get just from adding the sidecar to your services. Go back to terminal two: I'm going to Ctrl-C out of here and run istioctl dashboard jaeger — Jaeger is our distributed tracing system, and by the way, that's another Red Hat-led project. Now in the Jaeger UI you can select services — for instance, I can select istio-ingressgateway and see all the traces related to it. This one, for example, has six trace spans; I can click into each of them, look at the flow, and see where each service is spending its time. These trace-span views are extremely useful when you need to understand where the time is spent, and if you need to debug an error message, it has all that data available for you. The next thing we want to show you is the Istio Grafana dashboard. So we're going to exit out of here again, run istioctl dashboard grafana, and hit enter.
So now if you go to the Grafana UI, you'll start to see some of the Istio data. Sorry, I clicked on a link accidentally. The first thing we'll do is navigate in the Grafana UI: on the left menu, select the dashboards icon — the one with the four squares — and then click on the Manage menu. If you click on Istio, you'll see a bunch of dashboards. Let's look at the Istio service dashboard. You can see all the client and server requests, durations, success rates — all that data is available for you without you needing to do anything, which is very powerful. Feel free to play around with this, because beyond the service dashboard there's also a performance dashboard, where you can see things like CPU and memory by different Istio components — by proxy, by the Istio control plane. All of this data is available for you. Congratulations, that's it for this lab. I'm going to click the check button just to verify I ran the lab successfully. Essentially, in this lab we taught you how to add services to Istio through automatic sidecar injection, and what benefits Istio gives you to let you observe what's going on within each of your services. Any questions? Okay, I'm going to wait two to three minutes, just to let you catch up — but do let me know if you're already finished and would like me to move forward. Okay, we're going to continue to the next lab. The next lab we're going to talk about is how you secure services in your mesh. Typically we recommend you incrementally add services — sorry, that's the wrong slide — how do you enable mutual TLS among your services? We're going to spend a little time understanding how workload keys and certificates are distributed in Istio, and how you inspect the key and certificates for each service.
And really understand how mutual TLS is enforced by the Istio proxy. So we're going to sit with this topic and do a lab on it, reusing the same example we have: web-api to recommendation to purchase-history. You should be able to get to this screen, which is lab four, if you decided to do lab four. The first thing we want you to do is make sure you're in this folder — which we are, so you don't have to do anything. The way Istio configures mutual TLS is through a policy called a peer authentication policy. What we're doing now is querying the entire Kubernetes cluster to see if there's any PeerAuthentication policy — and we don't have any at the moment. The next thing we'll do is enable strict mutual TLS for the entire mesh, and the way to do that is to deploy the policy to the istio-system namespace. Istio does allow you to configure which namespace is the root namespace; by default, istio-system is the root namespace, where you hold the mesh-wide configurations. As you can see, my peer authentication policy applied. Now if you query, you should see the one I just deployed in istio-system. Let's see mutual TLS in action. To do that, we're going to deploy the sleep application in the default namespace. You may be wondering: you already have sleep in istioinaction, so why are you doing this? The reason is that this sleep application will not have the sidecar injected, while the other sleep in the istioinaction namespace will have the sidecar injected. That's the difference. Now we're going to call from the sleep pod in the default namespace to web-api in istioinaction. As you might have guessed, it's not going to work. Why is that?
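The mesh-wide strict policy is a very small resource; applied to the root namespace (istio-system by default), it takes effect for the whole mesh:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the root namespace, so this applies mesh-wide
spec:
  mtls:
    mode: STRICT            # only mutual TLS traffic is accepted
```

The same resource deployed to a regular namespace would scope the policy to just that namespace, which is handy for incremental rollout.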
Now, if you call from the sleep pod in the istioinaction namespace to web-api in istioinaction, everything continues to work. So why is that? The reason the sleep pod in the default namespace doesn't work is that we enabled mutual TLS for the entire mesh. What does that mean? It essentially means that anything accessing a service in the mesh — and web-api is a service in the mesh — needs a mutual TLS connection. And in order for a service to establish a mutual TLS connection with the web-api service, it needs a sidecar proxy, because that proxy — remember, we talked about this — establishes the connection with the control plane and gets the right keys and certs, signed by the Istio CA or whatever external CA you might be using, so that it can establish the communication over mutual TLS. So essentially, the proxy that sits next to the sleep pod in the istioinaction namespace has the right keys and certs to upgrade the connection from the sleep pod to the web-api service. But the sleep pod in the default namespace doesn't have the proxy, so it couldn't upgrade the connection — when it sends plain traffic, web-api is going to reject it, because that's how we configured Istio: reject anything without mutual TLS. The next thing we're going to do is visualize this mutual TLS enforcement in Kiali. To do that, we'll generate some load, similar to before, sleeping three seconds between requests. Now if we go back to Kiali, bring back the Kiali UI and get back to the graph view, let's see if we see the security badges and annotations. You can see it actually has an icon here that says mesh-wide mutual TLS is enabled — so that's how Kiali knows it's enabled. You might be wondering: in the last lab, when I enabled the security badge, it also showed the security icon. You are absolutely right.
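The two test calls can be sketched like this — under STRICT, the first call is rejected (no sidecar, plain traffic), while the second succeeds (the sleep pod's sidecar upgrades the connection to mutual TLS):

```shell
# From the sidecar-less sleep pod in the default namespace: rejected
kubectl exec -n default deploy/sleep -c sleep -- \
  curl -s web-api.istioinaction:8080/

# From the sleep pod in istioinaction (has a sidecar): succeeds
kubectl exec -n istioinaction deploy/sleep -c sleep -- \
  curl -s web-api.istioinaction:8080/
```

The application code is identical in both cases; the only difference is whether a sidecar sits in front of the caller.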
So what's the difference between this lab and the prior lab? In the prior lab, we didn't enable strict mutual TLS, so mutual TLS was best-effort — which means we still try to use mutual TLS if we can, but if mutual TLS fails, we continue to allow the connection to go through. That's what permissive means. The security posture isn't the same, but it does allow your traffic to continue flowing, which is super helpful as you onboard your services to the mesh — because as you onboard, some of your services may have the sidecar and some may not. That's when permissive mode becomes really useful. But the moment you enable strict mutual TLS, only mutual TLS traffic is allowed. So even though you may not see much difference in the Kiali dashboard, the security posture is actually different. Okay, next we're going to spend a little time trying to understand how mutual TLS works in Istio. To do that, get out of one of these terminals — you can Ctrl-C to exit. We're going to run istioctl proxy-config on the web-api deployment in istioinaction and look at the secret configuration. From the output, you can see the default secret and your Istio service mesh root CA public certificate — that's all available for you. Now, if you want to check the issuer of the public certificate, let's see how you analyze that. Essentially, you base64-decode it and then use OpenSSL to grab the issuer — you can see the issuer is cluster.local. Next, check whether the public certificate in the default secret is valid: you can see it has a not-before and a not-after date. I think we're well within that window — the times are in GMT — and it actually expires after 24 hours.
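Inspecting the workload certificate can be sketched like this; the exact jq path into the proxy-config output varies a little between istioctl versions, so treat it as a starting point:

```shell
# Dump the secrets Envoy holds for the web-api workload, then decode the
# leaf certificate and print its issuer and validity window
istioctl proxy-config secret deploy/web-api -n istioinaction -o json | \
  jq -r '[.dynamicActiveSecrets[] | select(.name == "default")][0].secret.tlsCertificate.certificateChain.inlineBytes' | \
  base64 --decode | \
  openssl x509 -noout -issuer -dates
```

The issuer shows the cluster-local Istio CA, and the notBefore/notAfter dates are roughly 24 hours apart.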
Now, the next thing we're going to do is check the identity in the client certificate to see if it's correct. As you can see, it's an X.509 certificate and the identity is in SPIFFE ID format: it uses the trust domain, the namespace — the istioinaction namespace — and the service account of the web-api service. If you're wondering where cluster.local and the web-api service account come from, we're going to dig into that a little. First, get the istio config map and grep for the trust domain: when you install Istio, by default we use cluster.local as the trust domain, and you could potentially customize that. Now, if you review the web-api.yaml we deployed together, you can see it uses the service account web-api. That's essentially how this SPIFFE ID is constructed. The next question I want you to think about is: how does the web-api service obtain the key and certificate it needs? Remember, earlier on you reviewed what the injected proxy container looks like. One thing I want to highlight is that it has an Istio token mounted, and it also has the root cert mounted — the Istio root cert is mounted from the istio-ca-root-cert config map. You can see this config map in the istioinaction namespace; this is the Istio CA root cert right here. So with the sidecar proxy and your application pod — for example, web-api — during the startup of that pod, the Istio agent, which we also call the pilot-agent, creates the private key for the web-api service. Then it sends a certificate signing request — we shorten that to CSR — to the Istio certificate authority, which by default is istiod, to get the certificate signed. And it uses the Istio token to say "I am the agent for the web-api service" — as you can see, it uses this web-api token along with the Istio token here.
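Checking where the pieces of the SPIFFE ID come from might look like this (command sketch against the cluster from the lab):

```shell
# Trust domain (cluster.local by default)
kubectl get configmap istio -n istio-system -o yaml | grep trustDomain

# Service account used by the web-api deployment
kubectl get deploy web-api -n istioinaction \
  -o jsonpath='{.spec.template.spec.serviceAccountName}'

# The resulting identity is assembled as:
#   spiffe://<trustDomain>/ns/<namespace>/sa/<serviceAccount>
#   i.e. spiffe://cluster.local/ns/istioinaction/sa/web-api
```

This is why giving each workload its own service account matters: the service account is the identity that ends up in the certificate.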
So that's how the keys and certificates are signed by the CA. Now, you might be wondering why the certificate expires after 24 hours. This is actually on purpose, for better security. So what happens when the certificate is about to expire after 24 hours? The Istio agent monitors the web-api certificate to check whether it's expiring soon, and if it's expiring really soon, it repeats the certificate signing request flow I just walked you through, making sure the certificate continues to be valid for the next 24 hours. Now, the next thing we want to talk about is how mutual TLS STRICT is enforced. If you look at the Envoy configuration for the web-api deployment, it's actually a lot of configuration. One thing, though: when mutual TLS STRICT is enabled, you would only have mutual TLS traffic, so you don't need a filter chain that allows plaintext traffic in the mesh. So if you search for the transport protocol in your Envoy configuration, when PERMISSIVE is applied you would actually see that raw_buffer (plaintext) is allowed; but in this case, if we do a search, I don't think it's going to show anything, because now we have mutual TLS STRICT enabled. Next time, if you have PERMISSIVE applied, you can run this exercise. Now, another question I want to ask you is: if you have only deployed a few services, why is there so much Envoy configuration for your pod? Remember, we just ran the proxy config command for web-api; there's tons and tons of data. I can't even scroll back, because my output scrolled past, right? It's way more data than you probably really need. To solve that problem, you can use discovery selectors to tell Istio to only watch the namespaces you care about. You can also configure the scope and visibility of your sidecar configuration. There are two configurations for that: one is called exportTo, and the other is the Sidecar resource.
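To make the two ideas above concrete, here is a minimal sketch of the relevant resources. It assumes a cluster with Istio installed and the workshop's `istioinaction` namespace; these are illustrative defaults, not the only way to configure this:

```shell
# Enforce STRICT mutual TLS mesh-wide. With STRICT applied, Envoy no longer
# needs the permissive filter chain, so searching the proxy config for
# "raw_buffer" (the plaintext transport) should come up empty.
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
EOF

# Trim how much Envoy configuration each sidecar in the namespace receives:
# only import config for services in the same namespace plus istio-system.
kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default
  namespace: istioinaction
spec:
  egress:
  - hosts:
    - "./*"
    - "istio-system/*"
EOF
```

After applying the Sidecar resource, re-running the proxy config dump for web-api should show noticeably less configuration, since the sidecar no longer tracks every service in the mesh.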
We actually teach you how to use that when you have services running in production; we cover it in our Istio essentials lab. So that's it, I'm going to do a check. This is it for this lab, and I think I'm almost out of time as well. If you enjoyed this lab, I encourage you to finish it up on your own. And if you don't have time today, I would encourage you to check out solo.io; we have a lot of exciting announcements on our website. We're also speaking at ServiceMeshCon, we have like seven or eight sessions, and we have Istio workshops. So if you're interested, just register for any of the workshops; we go through a lot more detail than this. What I walked through with you is a much shorter version of the certification workshop. So that's it, I'm going to pause here and see if you guys have any questions. And let me know if you liked this session, this lab; any feedback, I would super appreciate it.

Thank you so much, Lin, that was a great workshop. All right, so if you guys have any questions, feel free to put them in the chat, or you can also request to share your audio and I can connect you with her directly. So there is one question in the chat: is there a way to run this workshop on our local machines too?

The way the workshop is structured, it's not really meant for your local machine, but if you really want to run it locally, if you have an environment, it still has a lot of documentation; it teaches you how to do the install, and it has the Bookinfo example, for instance, so you can probably run all of that on your own machine. It's just that we provide this environment for you, and everything is written and optimized for that environment. Okay, that's great. I'm glad at least a few of you enjoyed the workshop. So definitely reach out to me if you have any questions. I'm always on Twitter or LinkedIn, so feel free to reach out.
All right, thank you so much, Lin, for the workshop, and thanks all for attending. I think there's one more slot at 4:30, so there are some talks still left for today. Feel free to head over there and watch the other talks. And yeah, that's all. Thanks for coming, everyone. Thanks for moderating.