Well, thank you for attending this workshop. We're going to be talking about Istio Ambient Mesh. Before we get started, I'd like to introduce the folks who will be running the workshop for all of you. I'm Christian Posta, the Global Field CTO at Solo.io, and I have with me a DevRel engineer at Google Cloud. This is a hands-on workshop, so as we get into the labs, if you have questions or things aren't working, put your hand up and one of us will come and try to sort it out for you. How many of you in this room were at earlier sessions today, maybe Lin's session specifically, about Ambient Mesh? Most people? OK, that's good. We'll do a quick overview of Ambient Mesh. It sounds like some of you saw the "life of a packet through Istio Ambient Mesh" session earlier, which went into a lot more detail. What we want to focus on here is setting the context, revisiting the motivation for building Ambient Mesh, and then actually getting hands-on: deploying it and working through the configuration and the experience of using Istio Ambient Mesh. The first thing I want to point out, and how it relates to Ambient Mesh, is that Istio has been around for quite a while now. When you take something that ends up being really critical in your environment and your platforms, something related to networking, something that's on the data path, it takes a while for that infrastructure to become mature, stable, and able to run in these environments. Istio has been around for a long time; it's gone through its growing pains and the maturing process. And now, certainly in the Istio community and at Solo, working with our customers, we see Istio as probably the most deployed service mesh, certainly at enterprise scale and in extremely complex scenarios. And Istio was just recently, I think September 28th, officially added to the CNCF, so it's part of the CNCF projects.
Earlier in September, we, the Istio community, announced Ambient Mesh, but before that it was a joint effort to prove out the concept through various POCs, building APIs, and understanding the various components and trade-offs we might need to make. It was a joint collaboration between our engineering team at Solo and Google. It came about because we had been researching how to solve some of the problems around service mesh adoption that we felt the sidecar model was inhibiting. We started working on that, shared some of what we were doing publicly, ended up working with Google to prove it out, and then announced it to the open source project in September. The motivations for building Istio Ambient Mesh are primarily around operations, onboarding, and incremental adoption of a mesh. As I said, if you saw the overview earlier today, you'll know that sidecars are not a first-class concept in Kubernetes. Controlling the lifecycle of the sidecar container and how it relates to the workload container involves an inherent race condition. Then there are things like Job resources: you run a Job with a sidecar, the Job completes, and the sidecar keeps running. These use cases make the sidecar less transparent than we'd like, and even more burdensome in upgrade scenarios, when we need to upgrade either because there are new versions and new features we want to take advantage of, or because CVEs have been discovered that necessitate upgrading and patching the service mesh. So the reasons are operational, and there are a lot more of them; Lin Sun and I wrote a quick little 40-page guide introducing Istio Ambient Mesh, and we have physical copies that we'll be signing at the end of this workshop, I think, outside at the Solo booth.
But you can also go online and get an e-book, which goes into more detail on that motivation, certainly the operational side. Now, there are some side benefits too. Like I said, the operational aspects are the motivation and the number one reason, but the side benefits include cost reduction — we did a talk and a blog on some of the cost savings — as well as security posture: we did a blog on istio.io about Istio Ambient's security posture and the areas where it can be improved by not using sidecars. So what is Istio Ambient Mesh? It is a sidecar-less implementation of the Istio data plane that, like I said, focuses on deploying and operating the mesh transparently to the applications, not tying the application to a particular infrastructure component's lifecycle. You can make upgrades and patches independent of the applications, and then there are the benefits of reducing the number of proxies that are running and the resources you have to provision in advance; when you run at scale, you can optimize the amount of compute and resources you need to run the service mesh. The way it works is that instead of running a full-blown Layer 7 sidecar proxy next to each application instance, in Istio Ambient we've split the Layer 7 responsibilities — things like request retries, header-based matching, load balancing, traffic splitting, fault injection, those types of capabilities — from the capabilities that are needed to secure the traffic in the service mesh.
We've created a secure overlay layer that focuses on the security properties of the mesh — mutual TLS, and some limited authorization policies — and that forms the foundation of the zero-trust aspects of the mesh. You can then layer on the Layer 7 capabilities as needed, so you're able to opt into that. And we do that with two different layers of the data plane. The Layer 7 tier is implemented by Envoy proxy, and you'll see that in our workshop today; Layer 4 is technically implemented with Envoy today as well, but that's going to be optimized into something else going forward — it doesn't need to be Envoy. The Istio control plane, istiod, is still there. If you're familiar with the sidecar approach, the control plane is what serves configuration to the secure overlay layer that we'll see, and to the Layer 7 proxies, which we call waypoint proxies, just like it does today: it connects to the sidecars, gives them configuration updates, and takes care of routing and the security properties. istiod does the same thing with these Layer 4 and Layer 7 proxies as well. This is one aspect of what we at Solo have been working on: the Gloo Platform, which lets you abstract away a lot of the details of operating and running a mesh, not just at the platform layer but now down at the workload layer as well, where developers deploying their applications don't see any of these sidecar proxies. So now, going a level lower into the details and setting up the workshop you'll be walking through: that secure overlay layer I mentioned, operating at Layer 3 and Layer 4, is implemented with a data plane component called ztunnel.
As I mentioned, when we released Istio Ambient, ztunnel was based on Envoy, using only Envoy's Layer 4 and mTLS capabilities — mostly for expediency, which is why it landed that way in the initial release. But that's being optimized right now; I think just last week the community was discussing a Rust-based implementation and further optimizing how we configure that component and its actual runtime characteristics. So we'll see in the workshop that we can use the secure overlay layer, which uses these ztunnel agents, to implement the zero-trust capabilities of the mesh without introducing Layer 7 capabilities, and then layer the more advanced capabilities on top of the secure overlay mechanisms. When we do that, we'll be introducing a Layer 7 proxy that we call the waypoint proxy. The waypoint proxy is a policy enforcement point for individual workloads. In the previous talks, we pointed out that the waypoint proxy is deployed per service account. That's very similar to the model we see today with sidecars, where each service account gets its own identity; the same thing happens in the Istio Ambient data plane. When traffic goes from one application to another, it first traverses ztunnel, and that's where the security properties are implemented. If it needs to go to a Layer 7 waypoint proxy, that communication is encrypted with mutual TLS; the waypoint proxy then applies whatever Layer 7 functionality or behavior needs to take place and sends the traffic off to the destination, again over mutual TLS. The overlay protocol we use between the ztunnels themselves, or between a ztunnel and a waypoint proxy, is a mechanism built on HTTP, designed to support the various intricacies of different protocols, and it uses mutual TLS to authenticate the traffic on both sides.
I purposely went a little quick — this is not supposed to be a deep dive on Istio Ambient, because I want you to get hands-on. Like I said, see the previous sessions from earlier today and check out the Istio Ambient Explained book, but now let's get hands-on and actually run the workshop. If you go to the URL you see here, it should take you to the workshop environment, and I'll have Ram walk you through those parts. We'll use a tool called Instruqt. Think of it as a web-browser educational environment: in your browser you get access to a full terminal as well as the instructions, and we'll spin up Kubernetes clusters in that environment, run through the process of installing Istio Ambient, deploy some applications, and play around with it. So if you go to this URL — can everyone hear me okay? — okay, if you go to this URL, you'll be taken to an Instruqt page where you'll add this particular workshop to your study room, and then from the study room you should be able to kick off the workshop. I'll switch my windows and share my workshop environment. Once you add it to your study room and launch the workshop, you'll get a screen that looks like this, and if you hit start, you'll see a terminal on your left. Think of this terminal as connected to a virtual machine with nothing on it yet, except some CLI tools, like kubectl, that we'll leverage today. Is everyone on the screen — any troubles? It might take about 30 seconds to a minute for the loading to complete, but can I get a thumbs up when you're done and seeing the screen? Okay, a couple. All right, I'll pause another 20 seconds or so. Yeah, I saw a few thumbs up. Okay, great. Like I said, the pane on the left is your terminal and the pane on the right has the instructions you're going to follow.
The format of the lab today: I'll do a step, annotate it and explain what that step is doing, and then pause for a couple of seconds for you to do the same step with me. The first step just exports an environment variable, for simplicity's sake, so we can reuse it later in the lab. Then we'll deploy a cluster — a local kind cluster that's going to run inside this virtual machine. This process takes a couple of minutes, but essentially we're just spinning up a kind cluster that we're going to call cluster1, so that'll be the name of the context. If you want to see exactly what this is doing while it's running, you can click on the Files tab, and if you open up the directory structure on the left, you'll see three folders. The contents of these folders are the files we're going to use in the lab today. The first file we're looking at is the deploy script, which is what deploys the kind cluster. The kind cluster is going to be a four-node Kubernetes cluster, and the fact that it's four nodes is a point we're going to come back to later — think about how nodes relate to ztunnels, and you'll see the mapping between the two a little later in the lab. Okay, my script is complete. The next step just verifies that the cluster is up and running; it checks that all the pods in the kube-system namespace are up and running. If this second script returns, that means you're good to continue with the lab. For me, it's still waiting, which just means the pods in the kube-system namespace are still spinning up. All right, I'm done — my Kubernetes cluster is up and running. If I do kubectl get nodes, you'll see my four-node kind cluster.
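For reference, the four-node cluster described above could be defined with a kind config along these lines — a minimal sketch assuming one control-plane node plus three workers; the workshop's actual script may differ:

```yaml
# Hypothetical kind config producing a four-node cluster:
# one control-plane node and three workers.
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
name: cluster1
nodes:
  - role: control-plane
  - role: worker
  - role: worker
  - role: worker
```

You'd create it with something like `kind create cluster --config kind-config.yaml`, which gives you a kubectl context named `kind-cluster1`.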
And if I list the pods in the kube-system namespace, you'll see things like kube-proxy running. Cool — most of you should be at about this step right now. So we have a clean Kubernetes cluster. The next thing we'll do is deploy Istio. Just like with any standard Istio install, you have to download the right binaries. Ambient is still experimental — it's in preview mode, on an experimental branch — so this wget command goes to that location and gets the right binary. We're downloading the istioctl binary from that experimental branch location, extracting it, and then we have the experimental istioctl binary. Then we install Istio using that binary: istioctl install. But notice here that I'm setting the profile to ambient. The way you install Istio is the same way you install Istio today — just like you have the default, minimal, and demo profiles, there's now a profile for ambient. And this profile, if you look in the manifests directory and pull up ambient.yaml, configures things like the deployment of ztunnel, sets up the CNI to route traffic from your pods to ztunnel, et cetera. Let's give that a minute to finish. Once the installation command completes, if we check the pods in the istio-system namespace, you'll see istiod and the Istio ingress gateway, obviously, but now we also see four ztunnels. The reason we see four is that ztunnel is deployed on a per-node basis, and we have four Kubernetes nodes, so there are four ztunnels. So far we've just installed Istio with the ambient profile, but we don't have any applications yet. Let's do that next. We'll deploy the sample application — pay attention to the diagram right above the deploy command. Essentially, we're going to use a sleep client app that calls the web-api service.
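The install step above boils down to something like the following sketch. The download URL is omitted because the workshop pulls istioctl from an experimental build location:

```shell
# Install Istio with the ambient profile, using the experimental
# istioctl binary downloaded in the previous step.
./istioctl install --set profile=ambient -y

# Verify: istiod, the ingress gateway, the CNI pods,
# and one ztunnel pod per node (4 nodes -> 4 ztunnels).
kubectl get pods -n istio-system
```
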
web-api calls recommendation, and recommendation calls purchase-history — it's just a chain of upstreams. If you're interested in what exactly the Deployments, Services, and ServiceAccounts look like, you can click on the Files tab like I did before and pull up the YAML. Once you deploy that, if you run kubectl get pods, you'll see the five applications running in the default namespace. Of course, there's no sidecar for any of these, because I didn't label my namespace for Istio injection or anything like that — these are just standard Kubernetes Deployments. To make sure they're working, we'll exec into the sleep container, and from there — going back up to the diagram — we'll call web-api; web-api calls recommendation; recommendation calls purchase-history. If you look at the output from the curl command that called web-api, the part at the top is from web-api, but web-api calls recommendation and embeds that response in its own response. So if you look one step deeper into the JSON, you'll see the recommendation response, and one more level in, you'll see purchase-history. The 200, 200, 200 means all three of them succeeded. Next, we'll use the Istio Gateway and VirtualService resources to expose the web-api service through the Istio ingress gateway. Again, take a look at those files if you're interested in the YAML, but it's just a standard Gateway object that listens on a particular host, plus a VirtualService that routes from that gateway to the web-api service. We'll set an environment variable to the IP address of the Istio ingress gateway service, and then if I do a curl and specify the right host, I should get the same response I got going through sleep.
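The Gateway and VirtualService pair described above would look roughly like this. The hostname and port are assumptions for illustration, not the workshop's exact values:

```yaml
# Sketch: expose web-api through the Istio ingress gateway.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: web-api-gateway
spec:
  selector:
    istio: ingressgateway     # bind to the default ingress gateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "istioambient.solo.io"   # hypothetical host
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web-api
spec:
  hosts:
    - "istioambient.solo.io"
  gateways:
    - web-api-gateway
  http:
    - route:
        - destination:
            host: web-api.default.svc.cluster.local
            port:
              number: 8080     # assumed service port
```

You'd then curl the gateway IP with a matching Host header, as the lab does.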
Now we're accessing web-api directly through the Istio ingress gateway — there's no sleep involved. We're still setting the stage for everything: we did install Istio, but we're not using it yet; this is all basic Kubernetes stuff. I'm pausing a couple of seconds to make sure everyone's on the same step. Okay. In this next portion of the lab, we'll use tcpdump to sniff the network traffic. We've called web-api from sleep and from the ingress gateway, and web-api called other services. So if I sniff some packets with tcpdump, you'll see, for example, in my output that somebody called the recommendation service on port 8080, and furthermore I can see all the headers and information about that particular request. This means the connection was not encrypted, and someone was able to use tcpdump to read that data. We're still laying the groundwork; the next portion of the lab is where we start using ambient. If you're done with that, click on Check — and can I get some thumbs up that we're at this stage? Awesome, cool. You're all able to keep up just fine, so I'll increase my speed a little. The next thing we'll do is add those sample applications to ambient, and the way to do that is to label the namespace — the default namespace — with istio.io/dataplane-mode=ambient. Similar to sidecar mode, where you label the namespace with istio-injection=enabled or the revision label, we just have a new label for ambient. The beauty of ambient is that you don't have to restart your applications — it's just turning it on and turning it off. Right now we've turned on ambient mode, and all the applications in this namespace are part of the mesh.
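As described above, adding a namespace to the ambient mesh is a single label, sketched here against the default namespace:

```shell
# Opt the default namespace into ambient mode; no pod restarts required.
kubectl label namespace default istio.io/dataplane-mode=ambient

# Removing the label takes the namespace's workloads back out of the mesh.
kubectl label namespace default istio.io/dataplane-mode-
```
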
To prove that, we can look at the logs of the Istio CNI pod, and you can see it configuring the outbound and inbound routing rules for those applications. In the CNI log you'll see sleep, web-api, and whatever other applications are running in that namespace. It's the CNI that watches for new and existing pods and configures them to route traffic to ztunnel — Christian already covered what exactly ztunnel is and how it works. The next thing we'll do is generate a lot of traffic so we can look at some metrics. This next curl command goes from sleep and calls web-api 100 times, and you'll see just the response code information — you can see they're all 200s. Now, after sending 100 requests to the web-api service, if I take a look at the logs of one of the ztunnel pods, you'll see information about the traffic going through the ztunnels. If you look at the IP addresses here — there are source and destination IP addresses — those map to the ztunnel pod IPs. Just like before, we'll run the tcpdump command again and see if we can sniff any traffic. I see some packets, but this time the data is not readable, which shows that whatever this data in the tcpdump output is, it's encrypted. The next step verifies that the IP addresses you see at the top of the tcpdump output actually reference the ztunnel pod IP addresses, if you're interested in looking into that in more detail. Okay, so what have we accomplished so far? Just by installing Istio and labeling the namespace, we've achieved mTLS from ztunnel to ztunnel, meaning that node-to-node communication between the pods is encrypted. I didn't have to restart any workloads, and I didn't have to configure any additional policies.
You get this encryption straight out of the box. Encryption and authentication are one thing, right? We talked about how ztunnel uses the application's certificates to initiate connections on behalf of the application. But now we can do things with that strong identity, and one of the things we can do in Istio with strong identity is layer on authorization policies. The very first authorization policy basically creates an allow-nothing, zero-trust foundation: no one is allowed to talk to anyone else. Now that I have this allow-nothing authorization policy, if I try the same curl command again from the ingress gateway — let me copy just the curl command — you can see that I get an upstream connection error with a reset reason. So now we have to explicitly allow communication from one service to another, and that's what the next set of authorization policies does. The real takeaway from this part of the lab is that the enforcement point for these authorization policies is now ztunnel. You don't need a waypoint proxy to do Layer 4 authorization policies — for mTLS and TCP Layer 4 authorization policies, no waypoint proxy is required. Now that we've explicitly created authorization policies for who's allowed to talk to whom, if I redo the curl command as before, you can see I'm getting good responses back. I also have a deployment called not-sleep that isn't covered by the authorization policies, so if you exec into not-sleep and try to curl the web-api service, or any of the services in the mesh, you'll see that it's not able to make that connection, as you'd expect. Is everyone able to reach this section? Great — for the next section, I'll hand it over to Nim from Google. Yeah, thanks, Ram. How's everyone doing so far? You good? Okay, that's awesome.
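The two kinds of policy described above might look like the following — an allow-nothing policy is just an AuthorizationPolicy with an empty spec, followed by an explicit ALLOW rule. The workload labels and service-account principal here are assumptions about the workshop's files:

```yaml
# Sketch: zero-trust foundation plus one explicit allow, enforced at ztunnel (L4).
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-nothing
  namespace: default
spec: {}                      # empty spec = deny all traffic in the namespace
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-sleep-to-web-api
  namespace: default
spec:
  selector:
    matchLabels:
      app: web-api            # assumed workload label
  action: ALLOW
  rules:
    - from:
        - source:
            principals:
              - cluster.local/ns/default/sa/sleep   # sleep's SPIFFE identity
```

Because these rules only use identities and L4 attributes, ztunnel can enforce them without any waypoint proxy involved.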
I saw someone walk in at the last minute, so I just want to put the tiny URL back up on the slides for anyone who wants to start the workshop. I also want to quickly recap what we just did, and we have a few visuals here. We created a kind cluster with four nodes. We then installed Istio as we normally do, using istioctl install, except the profile we used was called ambient. That does a bunch of things, but the main takeaway is that it deploys a ztunnel on each node, which takes care of all of Istio's L4 functionality for you. We also deployed a bunch of test apps to play around with, and we saw that the way to enable ambient mode — the ztunnel — on a specific namespace is just to label it with this label. Everything we've seen so far has been L4. And what we've learned from our users at Google is that a lot of Istio users just need the L4 stuff; the majority of Istio users can stop at this point, or, if they want to continue adding L7 functionality, they can do so. Either way, they benefit from the ambient model, because adoption is a lot easier when you adopt L4 first. This next section is purely about L7, and we're going to start by looking at an L7 authorization policy. More specifically: right now sleep can talk to web-api — that's what we did in the previous section — but we're going to be a bit more granular about how we allow communication from sleep to web-api. We're going to allow only GET requests, and we do that by deploying an authorization policy, which you'll see down here. And for that authorization policy to actually apply and work, web-api needs a waypoint proxy attached to its service account.
The way to deploy a waypoint proxy right now is to use the Kubernetes Gateway resource — and it's important to note that this is the Kubernetes Gateway resource, not the Istio Gateway resource. You can see we're attaching it to the service account of web-api and saying, hey, use the waypoint proxy from Istio's ambient mesh. So let's go ahead and do that now. The very first command here is exactly that: we apply our Gateway. Next, we check that the waypoint proxy deployment has been deployed — we do a kubectl get pods, look for our waypoint proxy, and there it is. The next thing, of course, is applying the authorization policy that says we're only going to allow traffic to web-api from the ingress gateway or the sleep service if it's in the form of a GET request. We're just applying a more granular rule here. Then let's test it. If I make a GET request from the sleep service to web-api, it should work — and that's your 200 response. But if I try to make a DELETE request, it shouldn't work. Somewhere in here you should see "access denied" — yep, it's actually just hidden right there. Okay, so that's a slightly more granular authorization policy. We'll also note that once the waypoint proxy is deployed for the web-api service account, you also get L7 observability for the web-api service. We'll confirm that by looking at the endpoint inside our waypoint proxy that Prometheus would scrape for that observability, and you should see something like what I'm seeing here. In the next section we do a bit more playing around with L7 policies, and I've got a visualization for that too.
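As a sketch, the waypoint Gateway and the GET-only policy could look like the following. The `istio.io/service-account` annotation and the `istio-mesh` gateway class reflect my understanding of the ambient preview at the time, and the names and principals are assumptions:

```yaml
# Sketch: attach a waypoint proxy to web-api's service account,
# then restrict L7 traffic to GET requests.
apiVersion: gateway.networking.k8s.io/v1alpha2
kind: Gateway
metadata:
  name: web-api
  annotations:
    istio.io/service-account: web-api   # waypoint serves this service account
spec:
  gatewayClassName: istio-mesh
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: web-api-get-only
  namespace: default
spec:
  selector:
    matchLabels:
      app: web-api
  action: ALLOW
  rules:
    - from:
        - source:
            principals:
              - cluster.local/ns/default/sa/sleep
              - cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account
      to:
        - operation:
            methods: ["GET"]   # DELETE, POST, etc. fall through and are denied
```

The method match is what makes this an L7 rule, which is why it needs the waypoint proxy rather than ztunnel to enforce it.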
What we're doing now is adding a delay of five seconds for any request going into web-api that contains the user Amy in its HTTP headers. That's done in Istio using a VirtualService, so here we're applying an Istio VirtualService that says: apply this fixed delay of five seconds, and only if we see the user Amy in the HTTP header that reaches web-api. Let's apply this now. And now if I make an HTTP request with the user Amy in the header, I should see a five-second delay. I'm pressing enter now — one, two, three — and there we go. So that's a bit more of Istio's L7 functionality with the waypoint proxy. This final section does a bit more L7 stuff, and I can show you the visualization as well. We're playing around with traffic splitting. For the purchase-history service, we're going to deploy two versions, version 1 and version 2, and then say: of any traffic going to the purchase-history service, put 90% of it to version 1 and 10% to version 2. And because the policy is applied on the purchase-history service, we now need to deploy a waypoint proxy for the purchase-history service account — that's the very first thing we should do. So let's look at that: we're again applying a Gateway to say, hey, apply an Istio waypoint proxy to the service account called purchase-history. Let's first actually deploy version 2 of purchase-history — and again, like Ram mentioned, if you go into the Files tab, you can see the exact contents of the folder we just kubectl applied. Let me go back here, and I'm going to apply the Gateway now to deploy the waypoint proxy for purchase-history.
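A VirtualService implementing the header-matched delay described above could be sketched like this; the header name, value casing, and host are assumptions about the workshop's files:

```yaml
# Sketch: inject a 5s delay only for requests carrying the "user: Amy" header.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: web-api-delay
  namespace: default
spec:
  hosts:
    - web-api
  http:
    - match:
        - headers:
            user:
              exact: Amy          # only Amy's requests are delayed
      fault:
        delay:
          fixedDelay: 5s
          percentage:
            value: 100            # apply to all matching requests
      route:
        - destination:
            host: web-api
    - route:                      # everyone else: no fault, normal routing
        - destination:
            host: web-api
```

The waypoint proxy attached to web-api's service account is what actually enforces this fault injection.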
Then I'm going to apply our Istio VirtualService to set up the 90/10 traffic split — you can see right there we're saying version 1 should get 90%. At this point — right, I've also got to define the two versions as subsets using an Istio DestinationRule. That just splits up the two different pods we've deployed and contextualizes them for Istio as version 1 and version 2. And now, if I make a whole bunch of requests — I think we're making 100 requests here to web-api, and web-api talks to purchase-history — we should see that about 90% goes to version 1 and 10% to version 2. Let's do that right now. Right, so you can see that about nine or ten of those 100 requests went to purchase-history version 2. The curl output tells you which version you're reaching, so you can grab it and see that it's version 2. Okay, so at this point — yep, go ahead. Oh yeah. Have you applied your destination rule? Thanks, Christian. Does anyone else have any questions? Yep, go ahead. Sorry, what's that? The question was: where are the certificates coming from, back in the Layer 4 stuff? I can take that — Christian, do you want to cover that? Yeah, it's coming from istiod. The ztunnels make a call to istiod; istiod is still your CA. ztunnel is just acting on behalf of the application that's spun up, saying, hey, give me certificates for the web-api pod, and istiod gives it the certificates. Then ztunnel uses that certificate to make a connection to the other ztunnel. istiod is still your CA, and you can still configure istiod to use a different CA if you want to, just like you do today — how istiod gets its certificate, whether it's self-signed or integrated with your PKI, stays the same. Yep. Is there anything else you want to add? Okay, cool. Thanks, Ram.
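The DestinationRule subsets and the 90/10 split from the traffic-splitting section can be sketched as follows; hostnames and label values are assumptions:

```yaml
# Sketch: define v1/v2 subsets, then weight traffic 90/10 between them.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: purchase-history
  namespace: default
spec:
  host: purchase-history
  subsets:
    - name: v1
      labels:
        version: v1     # assumed pod label on the v1 deployment
    - name: v2
      labels:
        version: v2
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: purchase-history
  namespace: default
spec:
  hosts:
    - purchase-history
  http:
    - route:
        - destination:
            host: purchase-history
            subset: v1
          weight: 90
        - destination:
            host: purchase-history
            subset: v2
          weight: 10
```

Without the DestinationRule, the VirtualService's subset references have nothing to resolve to — which is why the "have you applied your destination rule?" question comes up in the room.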
Do we have any more questions in the meantime? Yep — so I guess that's the section on L7 functionality using the waypoint proxy. Once you're done with this entire section, you can click Check there, and then we'll move on to the bonus section. The bonus section really just answers one question a lot of folks have: can I still use sidecars? And the answer is yes — ambient mesh and the sidecar pattern will interoperate, and sidecars will remain a first-class citizen of Istio. This section just confirms that. In this section, we deploy the familiar httpbin service into a separate namespace. Let's first create that namespace. Then we enable sidecar injection in the new namespace we created, and deploy the httpbin service. You can see it's got its own service account, and it's pulling in the httpbin image here. Let me apply this. And now, because we enabled sidecar injection, if I do a kubectl get pods in that httpbin namespace, I should see that there is a sidecar attached to it. And now, if I curl from the sleep service — which is using the ambient mesh ztunnels — to the httpbin service, I'll see it in my response: my response should say, somewhere here, X-Forwarded-Client-Cert. Yeah, there you go. That's how you can confirm that the traffic between the httpbin service, which is using a sidecar for its L4 Istio functionality, is encrypted as well. And that concludes our bonus section, and that actually concludes the entire workshop. During this time, we'll just walk around and field questions, or if folks are stuck anywhere, we can help out. Yeah, like Nim said, we're going to be walking around helping people out, so if you have any questions, feel free to just ask. I did want to share that we do have a survey link.
Let me see if I can find my share button — there it is. Okay, so there are two more URLs for you. There's a certification exam — I think it's 10 or 20 questions or so — and if you score above 80%, Solo will send you a certification badge that you can add to LinkedIn, share within your organization, share with your manager. There's also a survey link. If you liked this workshop and you want to see more of this type of content, please share any feedback you have, positive or negative. And just know that Solo runs these types of workshops pretty often, virtually. If you go to solo.io and look at the upcoming events, you'll see that every week or so we have workshops around beginner, intermediate, and advanced Istio content — things around Envoy, Ambient, GraphQL, and a lot of the stuff we work on. We build a workshop and run these free sessions with an instructor who annotates the content like we did today. So with that said, thanks, everyone. Again, we'll be walking around answering any questions.